Transformers

(pretrained ONNX model interface connected to Hugging Face)

Note: Transformers.js is sadly inference-only for now. If you wish to train a model, you may be interested in Microsoft's low-level ONNX training API, covered on the "ONNX Training" workload page.

This workload does not support batching.

Many people find it necessary to run a diverse set of models: text-to-speech, speech-to-text, image classification, and more. A semi-complete list of supported tasks can be found here, but implementing a new, unsupported workload is relatively simple. For small and medium-sized models, this workload is perfect!

We use Transformers.js to facilitate ONNX inference.
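
For context, here is roughly what a standalone Transformers.js call looks like; the task and model fields in a workload payload (see the example below) map onto the same pipeline arguments. This is an illustrative sketch, not workload code:

// standalone Transformers.js inference, for illustration only
import { pipeline } from "@xenova/transformers";

// downloads the ONNX weights from the Hugging Face Hub and builds a pipeline
const transcriber = await pipeline(
    "automatic-speech-recognition",
    "Xenova/whisper-tiny.en"
);

// accepts an audio URL and returns an object like { text: "..." }
const output = await transcriber(
    "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav"
);
console.log(output.text);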

Feature Support

All model types are supported, with these notable exceptions:

  • Large LLMs (above 1B parameters, use WebLLM instead)

  • Diffusion models (most are too large)

  • Video classification

Check the full list of supported and unsupported model types here. If the model type you want to run is unsupported, consider direct ONNX inference.

Use of experimental software

We use a transformers.js beta version to enable a WebGPU option. Note that it may cause unintended behavior; if you do not need it, we can switch your project back to the stable version of transformers.js on request.
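
At the pipeline level, the experimental option looks roughly like this (a sketch assuming the API from the pull request referenced in the example below; option names may change while the feature is in beta):

// sketch: opting into the experimental WebGPU backend
import { pipeline } from "@xenova/transformers";

const transcriber = await pipeline(
    "automatic-speech-recognition",
    "Xenova/whisper-tiny.en",
    { device: "webgpu" } // omit to stay on the stable CPU (WASM) backend
);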

Example

Let's run OpenAI's Whisper model on two types of speech: a clip that is obfuscated and spoken quickly, and a clear presidential speech. Note that load_options contains device: webgpu, as transformers.js defaults to the CPU (WASM) runtime, which can cause OOM errors and slower inference on large models.

// speech to text with whisper

import { markAllTasksDone } from "../modules/tools.js";

let prompts = [
    "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav",
    "https://cdn.openai.com/whisper/draft-20220913a/micro-machines.wav"
];
const responses = [];

// pick a random prompt from those still pending
function shaderResponse() {
    return prompts[Math.floor(Math.random() * prompts.length)];
}

// called when a contributor's browser returns outputs for a prompt
function handleOutputs(prompt, outputs) {
    console.log("Received outputs", outputs);

    prompts = prompts.filter((p) => p !== prompt);
    responses.push({ prompt, outputs });
    if (prompts.length === 0) {
        // upload to your remote server here
        console.log("All prompts are completed", ...responses);
        markAllTasksDone();
    }
    console.log("Prompts left", prompts);
}

export default {
    type: "transformers",
    action: "speech to text",
    officialName: "transformers-testuniversity-test3",
    organization: "Test University",
    hooks: {
        shaderResponse,
        handleOutputs
    },
    payload: {
        // many other models and tasks are also supported: https://huggingface.co/docs/transformers.js/api/pipelines
        model: "Xenova/whisper-tiny.en",
        task: "automatic-speech-recognition",
        // we're using the alpha from https://github.com/xenova/transformers.js/pull/545
        load_options: {
            device: 'webgpu',
            // dtype: 'fp16',
        },
        runtime_options: {}
    }
}
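
The "upload to your remote server here" comment is where you would persist results once every prompt has an answer. A hypothetical sketch (the endpoint URL is a placeholder, not part of ObitCC):

// hypothetical upload helper; replace the placeholder URL with your own endpoint
async function uploadResponses(responses) {
    await fetch("https://research.example.edu/whisper-results", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(responses),
    });
}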