Transformers
(pretrained ONNX model interface connected to Hugging Face)
This workload does not support batching.
Many people need to run a diverse set of models, such as text-to-speech, speech-to-text, image classification, and more. A semi-complete list of workloads can be found here, but implementing a new, unsupported workload is relatively simple. For small and medium-sized models, this workload is perfect!
We use Transformers.js to facilitate ONNX inference.
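For orientation, a minimal Transformers.js pipeline call looks like the sketch below. The task, model name, and input are illustrative examples, not part of this workload's configuration, and the package name depends on the installed version (the WebGPU beta ships as @huggingface/transformers; the stable v2 release is @xenova/transformers):

```js
import { pipeline } from '@huggingface/transformers';

// Build a pipeline for a task; the model is downloaded from the
// Hugging Face Hub and cached on first use. Task and model here
// are illustrative examples only.
const classifier = await pipeline(
  'sentiment-analysis',
  'Xenova/distilbert-base-uncased-finetuned-sst-2-english',
);

// Inputs are processed one at a time (this workload does not batch).
const result = await classifier('Transformers.js makes ONNX inference easy!');
console.log(result); // e.g. [{ label: 'POSITIVE', score: 0.99 }]
```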
Feature Support
Most model types are supported; the most notable exceptions are:
Large LLMs (above 1B parameters; use WebLLM instead)
Large diffusion models (most are too large)
Video classification
Check the full list of supported and unsupported model types here. If the model type you want to run is unsupported, consider direct ONNX inference, sketched below.
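For direct ONNX inference, onnxruntime-web is a common choice. The sketch below assumes a hypothetical model file, input name, and input shape, which you would replace with your own model's values:

```js
import * as ort from 'onnxruntime-web';

// Load an ONNX model directly; 'model.onnx' is a placeholder path.
const session = await ort.InferenceSession.create('model.onnx');

// Build an input tensor matching the model's expected shape.
// The input name 'input' and shape [1, 3, 224, 224] are placeholders.
const data = new Float32Array(1 * 3 * 224 * 224);
const feeds = { input: new ort.Tensor('float32', data, [1, 3, 224, 224]) };

// Run the model and inspect its named outputs.
const results = await session.run(feeds);
console.log(results);
```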
Use of experimental software
We use a beta version of transformers.js to enable a WebGPU option. Note that the beta may cause unintended behavior; if you do not need WebGPU, we can switch your project back to the stable version of transformers.js on request.
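In the beta, WebGPU is selected per pipeline via the device option. A minimal sketch, with an illustrative embedding model:

```js
import { pipeline } from '@huggingface/transformers';

// Opt in to the WebGPU backend; the default is WASM on the CPU.
// The model name is an illustrative example.
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', {
  device: 'webgpu',
});

const embeddings = await extractor('Hello, WebGPU!', { pooling: 'mean', normalize: true });
console.log(embeddings.dims); // e.g. [1, 384]
```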
Example
Let's run OpenAI's Whisper model on two speech samples: one muffled and spoken quickly, and a clear presidential speech. Note that `load_options` contains `device: webgpu`, as transformers.js defaults to the CPU (WASM) runtime, which can cause OOM errors and slower inference on large models.
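A sketch of what this looks like with the transformers.js pipeline API; the model and audio file names are illustrative placeholders, and in this workload the device option is what `load_options` passes through:

```js
import { pipeline } from '@huggingface/transformers';

// Whisper via the automatic-speech-recognition pipeline, on WebGPU
// instead of the default WASM backend. Model and file names are
// illustrative placeholders.
const transcriber = await pipeline(
  'automatic-speech-recognition',
  'Xenova/whisper-tiny.en',
  { device: 'webgpu' },
);

// Accepts a URL (or raw audio samples) and returns an object
// containing the transcribed text.
const output = await transcriber('speech.wav');
console.log(output.text);
```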