Creating your first Workload


The basics

Anything that runs on the ObitCC network is called a Workload, which is represented as a JavaScript object. This object contains details about your organization and your project, as well as configuration for the type of Workload (e.g., image generation, text generation, simulations) that you are running.
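
At a high level, a Workload object looks something like the following sketch. The field values here are placeholders; a complete, working object is built step by step in the rest of this guide.

export default {
    type: "webllm",            // the Workload type being run
    officialName: "my-first-workload",
    organization: "Your Org LLC",
    hooks: { /* callbacks that hand out work and collect Outputs */ },
    payload: { /* model and configuration for the chosen type */ }
}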

Many workloads, such as running large language models, are supported by the Scaffolding API, our set of tools that simplify many complex AI/ML use-cases.

Your use-case is called a Shader: a file or configuration containing your code. Shaders are fetched from our server and distributed across our network; the results are returned as Outputs, which can be saved temporarily on our server or uploaded directly to your servers.

Creating your first Workload

It's easy to start using ObitCC. Let's use a small large language model called TinyLlama to run some basic arithmetic on the network.

First, we import the required modules from the ObitCC tools API and define our model. (All models have to be made web-compatible using the MLC-LLM spec; most popular models are already converted to this format.)

import { markAllTasksDone } from "../modules/tools.js";

// TinyLlama, already converted to the web-compatible MLC format
const model = "TinyLlama-1.1B-Chat-v1.0-q4f16_1-MLC";

Now, let's define our prompts and a default system prompt.

let prompts = ["What is 2+2?", "What is 3+3?", "What is 4+4?"];
const responses = [];

const defaultPrompt = {
    content: "You are a helpful AI agent helping users.",
    role: "system",
};

Now, let's define the response for when a client asks for a new shader (in this case, a prompt). This simply returns a random prompt from our list, together with the default system prompt. You can pass in a single prompt or a batch of prompts; we recommend a maximum of 10, as they are run sequentially. Here, we pass in just one prompt, hence an array containing a single array (a batch version is sketched after the code).

function shaderResponse() {
    // Pick a random remaining prompt and pair it with the system prompt
    const prompt = prompts[Math.floor(Math.random() * prompts.length)];
    return [[defaultPrompt, { content: prompt, role: "user" }]];
}
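
If you instead want to hand out a batch, the hook can return several conversations at once. Here is a minimal sketch (the batchShaderResponse name is illustrative, not part of the ObitCC API); remember the recommended maximum of 10 prompts, since they run sequentially.

function batchShaderResponse() {
    // One conversation per remaining prompt, each with the system prompt first
    return prompts.map((p) => [defaultPrompt, { content: p, role: "user" }]);
}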

Now let's set up the code that runs as results come back. handleOutputs removes each prompt once it has been completed (you could add a counter so that each prompt is completed more than once; a sketch of that follows the code) and stores the responses. handleFinished runs once all tasks are finished: it cleans up the user's machine with the markAllTasksDone() function, then exits after two minutes. You may customize this code however you want.

function handleFinished() {
    console.log("All prompts are completed", responses);
    markAllTasksDone();
}

function handleOutputs(promptList, outputs) {
    console.log("User responded with response", outputs["choices"][0]["message"]["content"]);

    // Find the user prompt in the conversation that was just completed
    let prompt;
    for (const individualPrompt of promptList) {
        if (individualPrompt["role"] === "user") {
            prompt = individualPrompt["content"];
        }
    }

    // Retire the completed prompt and store its output
    prompts = prompts.filter((p) => p !== prompt);
    responses.push(outputs);
    if (prompts.length === 0) {
        handleFinished();
    }
    console.log("Prompts left", prompts);
}
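
As mentioned above, you could also require each prompt to be completed more than once before retiring it. Below is a minimal sketch of that idea; the REPEATS constant, completions map, and handleOutputsWithCounter name are illustrative, not part of the ObitCC API.

const REPEATS = 3;      // how many completions each prompt needs
const completions = {}; // prompt text -> completions so far

function handleOutputsWithCounter(promptList, outputs) {
    const userMessage = promptList.find((m) => m["role"] === "user");
    const prompt = userMessage["content"];

    completions[prompt] = (completions[prompt] || 0) + 1;
    responses.push(outputs);

    // Only retire the prompt once it has been completed REPEATS times
    if (completions[prompt] >= REPEATS) {
        prompts = prompts.filter((p) => p !== prompt);
    }
    if (prompts.length === 0) {
        handleFinished();
    }
}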

Now let's export the final object in accordance with the WebLLM workload spec (the full spec is available under the Workload Types tab).

export default {
    type: "webllm",                      // Workload type (see Workload Types)
    action: "Test inference",
    officialName: "webllm-testuniversity-test1",
    organization: "Your Org LLC",
    hooks: {
        // The callbacks defined above
        shaderResponse,
        handleOutputs
    },
    payload: {
        model: model,                    // the MLC-format model defined earlier
        config: {
            temperature: 1.0,
            top_p: 1,
        }
    }
}

You're done! Let's run it on the network and see what happens:

29194ms taken for client 99c82140fe1ca3eb to complete job (ping: 3ms) 
User responded with response 3 + 3 = 6 
Prompts left [ 'What is 2+2?', 'What is 4+4?' ] 
Opened connection to client 5a07f80ad3664a8d 
User MINECRAFT-XX_REDACTED_USER_ID_XX completed 1 minute of work 
28302ms taken for client 5a07f80ad3664a8d to complete job (ping: 12ms) 
User responded with response 2 + 2 = 4 
Prompts left [ 'What is 4+4?' ] 
7817ms taken for client 5a07f80ad3664a8d to complete job (ping: 6ms) 
User responded with response 4 + 4 equals 8.

Two clients gave responses to the three prompts.

Congratulations, you have made your first Workload on ObitCC! You can check out workload types on the next page, or expand the Workload Types category to see detailed examples for each one.
