Transformers
(pretrained ONNX model interface connected to Huggingface)
This workload does not support batching
Many people find it necessary to run a diverse set of models: text-to-speech, speech-to-text, image classification, and more. A partial list of workloads can be found in the documentation, but implementing a new, unsupported workload is relatively simple. For small and medium-sized models, this workload is an excellent fit.
We use transformers.js to facilitate ONNX inference.
Most model types are supported; the notable exceptions are:
Large LLMs (above 1B parameters, use WebLLM instead)
Large diffusion models (most are too large)
Video classification
Check the full list of supported and unsupported model types. If the model type you want to run is unsupported, consider reaching out to us.
We use a pre-release build of transformers.js to enable a WebGPU option. Note that it may cause unintended behavior; if you do not need WebGPU, we can switch your project back to the stable version of transformers.js on request.
Let's run OpenAI's Whisper model on two types of speech: one obfuscated and spoken quickly, and a clear presidential speech. Note that load_options contains device: "webgpu", as transformers.js defaults to the CPU (WASM) runtime, which can cause OOM errors and slower inference on large models.