# Creating your first Workload

## The basics

Anything that runs on the ObitCC network is called a **Workload**, which is represented as a JavaScript object. This object contains details about your organization and your project, as well as configuration for the type of Workload (e.g., image generation, text generation, simulations) that you are running.

Many workloads, such as running large language models, are supported by the **Scaffolding API**, our set of tools that simplify many complex AI/ML use-cases.

Your use-case is called a **Shader**, which is a file or configuration containing your code. Shaders are fetched from our server and distributed across our network; the results come back as **Outputs**, which can be saved temporarily on our server or uploaded directly to your servers.
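To make this concrete, a minimal Workload object might look like the sketch below. The field names mirror the full WebLLM example later in this page; treat the exact shape (and the `officialName` used here) as illustrative, not as the authoritative spec.

```javascript
// A minimal, illustrative Workload object. The field names mirror the
// WebLLM example later in this guide; consult the full spec for the
// authoritative shape.
const workload = {
    type: "webllm",                        // the kind of Workload to run
    officialName: "webllm-example-hello",  // illustrative unique identifier
    organization: "Your Org LLC",          // who owns the Workload
    hooks: {},                             // callbacks that serve Shaders and receive Outputs
    payload: { model: "TinyLlama-1.1B-Chat-v1.0-q4f16_1-MLC" },
};

console.log(workload.type); // "webllm"
```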

## Creating your first Workload

It's easy to start using ObitCC. Let's use a small large language model called [TinyLlama](https://github.com/jzhang38/TinyLlama) to run some basic arithmetic on the network.

First, we import the required modules from the ObitCC tools API and define our model. (All models have to be web-compatible per the [MLC-LLM](https://llm.mlc.ai/docs/compilation/convert_weights.html) spec; most popular models are already converted to this format.)

```javascript
import { markAllTasksDone } from "../modules/tools.js";
const model = "TinyLlama-1.1B-Chat-v1.0-q4f16_1-MLC";
```

Now, let's define our prompts and a default prompt.

```javascript
let prompts = ["What is 2+2?", "What is 3+3?", "What is 4+4?"];
const responses = [];

const defaultPrompt = {
    content: "You are a helpful AI agent helping users.",
    role: "system",
};
```

Next, let's define what happens when a client asks for a new Shader (in this case, a prompt). This function returns a randomly chosen prompt together with the default system prompt. You can pass in a single prompt or a batch of prompts; we recommend a maximum of 10, and they will be run sequentially. Here we pass in just one prompt, hence an array containing a single array.

```javascript
function shaderResponse() {
    const prompt = prompts[Math.floor(Math.random() * prompts.length)];
    return [[defaultPrompt, { content: prompt, role: "user" }]];
}
```
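Since a shader response can carry a batch of up to 10 prompts (run sequentially), you could also hand each client several prompts at once. The helper below is a hypothetical variant, not part of the ObitCC API, that drains up to `batchSize` prompts per request:

```javascript
// Hypothetical batched variant: returns up to `batchSize` conversations
// per client request instead of a single random one.
const defaultPrompt = { content: "You are a helpful AI agent helping users.", role: "system" };
let prompts = ["What is 2+2?", "What is 3+3?", "What is 4+4?"];

function batchedShaderResponse(batchSize = 2) {
    // Each entry is one conversation: the system prompt plus a user prompt.
    return prompts
        .slice(0, Math.min(batchSize, 10)) // the docs recommend at most 10
        .map((p) => [defaultPrompt, { content: p, role: "user" }]);
}

const batch = batchedShaderResponse();
console.log(batch.length); // 2
```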

Now let's handle the Outputs. `handleOutputs` logs each response, removes the completed prompt from the list (you may add a counter so each prompt is completed more than once), and checks whether any prompts remain. Once they are all done, `handleFinished` cleans up the user's machine with the `markAllTasksDone()` function, then exits after two minutes. You may customize this code however you want.

```javascript
function handleFinished() {
    console.log("All prompts are completed", responses);
    markAllTasksDone();
}

function handleOutputs(promptList, outputs) {
    console.log("User responded with response", outputs.choices[0].message.content);

    // Find the user prompt that this output answers.
    const prompt = promptList.find((p) => p.role === "user")?.content;

    prompts = prompts.filter((p) => p !== prompt);
    responses.push(outputs);
    if (prompts.length === 0) {
        handleFinished();
    }
    console.log("Prompts left", prompts);
}
```
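If you want each prompt answered more than once before it is retired, as the counter idea above suggests, one sketch is to track completion counts instead of filtering immediately. `TARGET_RUNS`, `completions`, and `recordCompletion` are illustrative names, not part of the ObitCC API:

```javascript
// Illustrative sketch: retire a prompt only after it has been completed
// TARGET_RUNS times. `completions` is a local counter, not an ObitCC API.
const TARGET_RUNS = 2;
let prompts = ["What is 2+2?", "What is 3+3?"];
const completions = new Map();

function recordCompletion(prompt) {
    const count = (completions.get(prompt) || 0) + 1;
    completions.set(prompt, count);
    if (count >= TARGET_RUNS) {
        prompts = prompts.filter((p) => p !== prompt);
    }
    return prompts.length === 0; // true once every prompt hit the target
}

recordCompletion("What is 2+2?");
console.log(prompts.length); // 2 — still below the target
recordCompletion("What is 2+2?");
console.log(prompts.length); // 1 — prompt retired after two runs
```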

Now let's export the final function in accordance with the WebLLM workload (the full spec is available under the Workloads tab).

```javascript
export default {
    type: "webllm",
    action: "Test inference",
    officialName: "webllm-testuniversity-test1",
    organization: "Your Org LLC",
    hooks: {
        shaderResponse,
        handleOutputs
    },
    payload: {
        model: model,
        config: {
            temperature: 1.0,
            top_p: 1,
        }
    }
}
```
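Before deploying, you can dry-run the hooks locally by faking an Outputs object with the `choices[0].message.content` shape that `handleOutputs` reads. Everything below runs offline; nothing touches the ObitCC network:

```javascript
// Local dry-run of the hooks above, using a faked Outputs object.
let prompts = ["What is 2+2?"];
const responses = [];
const defaultPrompt = { content: "You are a helpful AI agent helping users.", role: "system" };

function shaderResponse() {
    const prompt = prompts[Math.floor(Math.random() * prompts.length)];
    return [[defaultPrompt, { content: prompt, role: "user" }]];
}

const [conversation] = shaderResponse();
const fakeOutputs = { choices: [{ message: { content: "2 + 2 = 4" } }] };

// Mirror handleOutputs: find the user prompt, retire it, store the output.
const answered = conversation.find((m) => m.role === "user").content;
prompts = prompts.filter((p) => p !== answered);
responses.push(fakeOutputs);

console.log(prompts.length); // 0 — the only prompt was answered
```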

You're done! Let's run it on the network and see what happens:

```
29194ms taken for client 99c82140fe1ca3eb to complete job (ping: 3ms) 
User responded with response 3 + 3 = 6 
Prompts left [ 'What is 2+2?', 'What is 4+4?' ] 
Opened connection to client 5a07f80ad3664a8d 
User MINECRAFT-XX_REDACTED_USER_ID_XX completed 1 minute of work 
28302ms taken for client 5a07f80ad3664a8d to complete job (ping: 12ms) 
User responded with response 2 + 2 = 4 
Prompts left [ 'What is 4+4?' ] 
7817ms taken for client 5a07f80ad3664a8d to complete job (ping: 6ms) 
User responded with response 4 + 4 equals 8.
```

Two clients gave responses to the three prompts.

Congratulations, you have made your first Workload on ObitCC! You can check out Workload types on the next page, or expand the Workload Types category to see detailed examples for each one.

