feat: Integrate Livepeer LLM provider #2154

Merged (27 commits) on Jan 17, 2025

Commits
dddad57
add livepeer on index.ts as llm provider
UD1sto Jan 11, 2025
89f3ab9
updated livepeer models
UD1sto Jan 11, 2025
89e6c2d
add livepeer as llm provider
UD1sto Jan 11, 2025
d5f9607
add retry logic on livepeer img gen
UD1sto Jan 11, 2025
52dbbe4
add handlelivepeer
UD1sto Jan 11, 2025
cbc2788
update test
UD1sto Jan 11, 2025
b22c1ee
add livepeer model keys on .example.env
UD1sto Jan 11, 2025
ccb8ed4
Merge branch 'develop' of https://github.com/elizaos/eliza into feat-…
UD1sto Jan 12, 2025
cb59851
Merge pull request #2 from Titan-Node/livepeer-doc-updates
UD1sto Jan 12, 2025
3c89c28
Merge branch 'develop' into feat-livepeer-integration-dev
UD1sto Jan 13, 2025
5e425e4
add endpoint on livepeer on models.ts
UD1sto Jan 13, 2025
9267e95
edit livepeer model config at model.ts
UD1sto Jan 13, 2025
86f4ebe
Add Livepeer to image gen plugin environments
Titan-Node Jan 13, 2025
834a5cd
add comments on livepeer model sizes
UD1sto Jan 13, 2025
bb053b3
remove retry logic from livepeer generate text and img
UD1sto Jan 13, 2025
f3bd2c4
Merge branch 'develop' into feat-livepeer-integration-dev
UD1sto Jan 13, 2025
e01d12a
Fixed .env naming convention and fixed mismatch bug within code
Titan-Node Jan 14, 2025
e04eecf
add bearer on livepeer calls
UD1sto Jan 14, 2025
ddc76ac
Merge pull request #4 from Titan-Node/feat-livepeer-integration-dev
UD1sto Jan 14, 2025
d42d0d5
Merge branch 'feat-livepeer-integration-dev' of https://github.com/UD…
UD1sto Jan 14, 2025
a995557
Merge branch 'elizaOS:develop' into feat-livepeer-integration-dev
UD1sto Jan 14, 2025
d32506b
Merge branch 'develop' into feat-livepeer-integration-dev
UD1sto Jan 14, 2025
2fa2576
change in parsing to accomodate for new livepeer update
UD1sto Jan 16, 2025
bac59ad
Merge branch 'feat-livepeer-integration-dev' of https://github.com/UD…
UD1sto Jan 16, 2025
d635aa5
Merge branch 'develop' into feat-livepeer-integration-dev
UD1sto Jan 17, 2025
3e0d1b7
addadd nineteen api key on the message
UD1sto Jan 17, 2025
fb10407
Merge branch 'feat-livepeer-integration-dev' of https://github.com/UD…
UD1sto Jan 17, 2025
8 changes: 6 additions & 2 deletions .env.example
@@ -141,8 +141,12 @@ MEDIUM_AKASH_CHAT_API_MODEL= # Default: Meta-Llama-3-3-70B-Instruct
LARGE_AKASH_CHAT_API_MODEL= # Default: Meta-Llama-3-1-405B-Instruct-FP8

# Livepeer configuration
-LIVEPEER_GATEWAY_URL= # Free inference gateways and docs: https://livepeer-eliza.com/
-LIVEPEER_IMAGE_MODEL= # Default: ByteDance/SDXL-Lightning

+LIVEPEER_GATEWAY_URL=https://dream-gateway.livepeer.cloud # Free inference gateways and docs: https://livepeer-eliza.com/
+IMAGE_LIVEPEER_MODEL= # Default: ByteDance/SDXL-Lightning
+SMALL_LIVEPEER_MODEL= # Default: meta-llama/Meta-Llama-3.1-8B-Instruct
+MEDIUM_LIVEPEER_MODEL= # Default: meta-llama/Meta-Llama-3.1-8B-Instruct
+LARGE_LIVEPEER_MODEL= # Default: meta-llama/Meta-Llama-3.1-8B-Instruct

# Speech Synthesis
ELEVENLABS_XI_API_KEY= # API key from elevenlabs
5 changes: 5 additions & 0 deletions agent/src/index.ts
@@ -512,6 +512,11 @@ export function getTokenForProvider(
                character.settings?.secrets?.DEEPSEEK_API_KEY ||
                settings.DEEPSEEK_API_KEY
            );
        case ModelProviderName.LIVEPEER:
            return (
                character.settings?.secrets?.LIVEPEER_GATEWAY_URL ||
                settings.LIVEPEER_GATEWAY_URL
            );
        default:
            const errorMessage = `Failed to get token - unsupported model provider: ${provider}`;
            elizaLogger.error(errorMessage);
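
Note that for Livepeer the "token" returned here is the gateway URL itself rather than a secret API key; `handleLivepeer` in `generation.ts` later reuses that same value as the client base URL. A minimal usage sketch, with the call shape taken from the diff above and the surrounding names assumed to be in scope:

```typescript
// Sketch only: getTokenForProvider and ModelProviderName come from the code
// touched in this PR; `character` is whatever character object is being loaded.
const token = getTokenForProvider(ModelProviderName.LIVEPEER, character);
// Resolves to character.settings?.secrets?.LIVEPEER_GATEWAY_URL, falling back
// to settings.LIVEPEER_GATEWAY_URL, e.g. "https://dream-gateway.livepeer.cloud".
```
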
58 changes: 40 additions & 18 deletions docs/docs/advanced/fine-tuning.md
@@ -26,6 +26,7 @@ enum ModelProviderName {
    REDPILL,
    OPENROUTER,
    HEURIST,
    LIVEPEER,
}
```

@@ -272,24 +273,45 @@ const llamaLocalSettings = {

```typescript
const heuristSettings = {
    settings: {
        stop: [],
        maxInputTokens: 32768,
        maxOutputTokens: 8192,
        repetition_penalty: 0.0,
        temperature: 0.7,
    },
    imageSettings: {
        steps: 20,
    },
    endpoint: "https://llm-gateway.heurist.xyz",
    model: {
        [ModelClass.SMALL]: "hermes-3-llama3.1-8b",
        [ModelClass.MEDIUM]: "mistralai/mixtral-8x7b-instruct",
        [ModelClass.LARGE]: "nvidia/llama-3.1-nemotron-70b-instruct",
        [ModelClass.EMBEDDING]: "", // Add later
        [ModelClass.IMAGE]: "FLUX.1-dev",
    },
};
```

### Livepeer Provider

```typescript
const livepeerSettings = {
    settings: {
        stop: [],
        maxInputTokens: 128000,
        maxOutputTokens: 8192,
        repetition_penalty: 0.4,
        temperature: 0.7,
    },
    endpoint: "https://dream-gateway.livepeer.cloud",
    model: {
        [ModelClass.SMALL]: "meta-llama/Meta-Llama-3.1-8B-Instruct",
        [ModelClass.MEDIUM]: "meta-llama/Meta-Llama-3.1-8B-Instruct",
        [ModelClass.LARGE]: "meta-llama/Llama-3.3-70B-Instruct",
        [ModelClass.IMAGE]: "ByteDance/SDXL-Lightning",
    },
};
```

2 changes: 1 addition & 1 deletion docs/docs/core/characterfile.md
@@ -92,7 +92,7 @@ The character's display name for identification and in conversations.

#### `modelProvider` (required)

-Specifies the AI model provider. Supported options from [ModelProviderName](/api/enumerations/modelprovidername) include `anthropic`, `llama_local`, `openai`, and others.
+Specifies the AI model provider. Supported options from [ModelProviderName](/api/enumerations/modelprovidername) include `anthropic`, `llama_local`, `openai`, `livepeer`, and others.
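
With this change a character can point directly at Livepeer. A minimal sketch of the relevant fields, written as a TypeScript literal; only `name`, `modelProvider`, and `clients` are taken from the docs above, and the values are illustrative:

```typescript
// "livepeer" corresponds to the ModelProviderName.LIVEPEER entry added in this PR.
const character = {
    name: "Eliza",
    modelProvider: "livepeer",
    clients: [],
};

export default character;
```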

#### `clients` (required)

7 changes: 7 additions & 0 deletions docs/docs/quickstart.md
@@ -92,10 +92,17 @@ Eliza supports multiple AI models:
- **Heurist**: Set `modelProvider: "heurist"` in your character file. Most models are uncensored.
- LLM: Select available LLMs [here](https://docs.heurist.ai/dev-guide/supported-models#large-language-models-llms) and configure `SMALL_HEURIST_MODEL`,`MEDIUM_HEURIST_MODEL`,`LARGE_HEURIST_MODEL`
- Image Generation: Select available Stable Diffusion or Flux models [here](https://docs.heurist.ai/dev-guide/supported-models#image-generation-models) and configure `HEURIST_IMAGE_MODEL` (default is FLUX.1-dev)
<<<<<<< HEAD
- **Llama**: Set `OLLAMA_MODEL` to your chosen model
- **Grok**: Set `GROK_API_KEY` to your Grok API key and set `modelProvider: "grok"` in your character file
- **OpenAI**: Set `OPENAI_API_KEY` to your OpenAI API key and set `modelProvider: "openai"` in your character file
- **Livepeer**: Set `LIVEPEER_IMAGE_MODEL` to your chosen Livepeer image model, available models [here](https://livepeer-eliza.com/)
=======
- **Llama**: Set `XAI_MODEL=meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo`
- **Grok**: Set `XAI_MODEL=grok-beta`
- **OpenAI**: Set `XAI_MODEL=gpt-4o-mini` or `gpt-4o`
- **Livepeer**: Set `SMALL_LIVEPEER_MODEL`,`MEDIUM_LIVEPEER_MODEL`,`LARGE_LIVEPEER_MODEL` and `IMAGE_LIVEPEER_MODEL` to your desired models listed [here](https://livepeer-eliza.com/).
>>>>>>> 95f56e6b4 (Merge pull request #2 from Titan-Node/livepeer-doc-updates)

You set which model to use inside the character JSON file

35 changes: 35 additions & 0 deletions packages/core/__tests__/models.test.ts
@@ -18,6 +18,8 @@ vi.mock("../settings", () => {
            LLAMACLOUD_MODEL_LARGE: "mock-llama-large",
            TOGETHER_MODEL_SMALL: "mock-together-small",
            TOGETHER_MODEL_LARGE: "mock-together-large",
            LIVEPEER_GATEWAY_URL: "http://gateway.test-gateway",
            IMAGE_LIVEPEER_MODEL: "ByteDance/SDXL-Lightning",
        },
        loadEnv: vi.fn(),
    };
@@ -125,6 +127,26 @@ describe("Model Provider Configuration", () => {
            );
        });
    });
    describe("Livepeer Provider", () => {
        test("should have correct endpoint configuration", () => {
            expect(models[ModelProviderName.LIVEPEER].endpoint).toBe("http://gateway.test-gateway");
        });

        test("should have correct model mappings", () => {
            const livepeerModels = models[ModelProviderName.LIVEPEER].model;
            expect(livepeerModels[ModelClass.SMALL]).toBe("meta-llama/Meta-Llama-3.1-8B-Instruct");
            expect(livepeerModels[ModelClass.MEDIUM]).toBe("meta-llama/Meta-Llama-3.1-8B-Instruct");
            expect(livepeerModels[ModelClass.LARGE]).toBe("meta-llama/Meta-Llama-3.1-8B-Instruct");
            expect(livepeerModels[ModelClass.IMAGE]).toBe("ByteDance/SDXL-Lightning");
        });

        test("should have correct settings configuration", () => {
            const settings = models[ModelProviderName.LIVEPEER].settings;
            expect(settings.maxInputTokens).toBe(128000);
            expect(settings.maxOutputTokens).toBe(8192);
            expect(settings.temperature).toBe(0);
        });
    });
});

describe("Model Retrieval Functions", () => {
@@ -224,3 +246,16 @@ describe("Environment Variable Integration", () => {
        );
    });
});

describe("Generation with Livepeer", () => {
    test("should have correct image generation settings", () => {
        const livepeerConfig = models[ModelProviderName.LIVEPEER];
        expect(livepeerConfig.model[ModelClass.IMAGE]).toBe("ByteDance/SDXL-Lightning");
        expect(livepeerConfig.settings.temperature).toBe(0);
    });

    test("should use default image model", () => {
        delete process.env.IMAGE_LIVEPEER_MODEL;
        expect(models[ModelProviderName.LIVEPEER].model[ModelClass.IMAGE]).toBe("ByteDance/SDXL-Lightning");
    });
});
87 changes: 85 additions & 2 deletions packages/core/src/generation.ts
@@ -1188,6 +1188,55 @@ export async function generateText({
            break;
        }

        case ModelProviderName.LIVEPEER: {
            elizaLogger.debug("Initializing Livepeer model.");

            if (!endpoint) {
                throw new Error("Livepeer Gateway URL is not defined");
            }

            const requestBody = {
                model: model,
                messages: [
                    {
                        role: "system",
                        content: runtime.character.system ?? settings.SYSTEM_PROMPT ?? "You are a helpful assistant"
                    },
                    {
                        role: "user",
                        content: context
                    }
                ],
                max_tokens: max_response_length,
                stream: false
            };

            const fetchResponse = await runtime.fetch(endpoint+'/llm', {
                method: "POST",
                headers: {
                    "accept": "text/event-stream",
                    "Content-Type": "application/json",
                    "Authorization": "Bearer eliza-app-llm"
                },
                body: JSON.stringify(requestBody)
            });

            if (!fetchResponse.ok) {
                const errorText = await fetchResponse.text();
                throw new Error(`Livepeer request failed (${fetchResponse.status}): ${errorText}`);
            }

            const json = await fetchResponse.json();

            if (!json?.choices?.[0]?.message?.content) {
                throw new Error("Invalid response format from Livepeer");
            }

            response = json.choices[0].message.content.replace(/<\|start_header_id\|>assistant<\|end_header_id\|>\n\n/, '');
            elizaLogger.debug("Successfully received response from Livepeer model");
            break;
        }

        default: {
            const errorMessage = `Unsupported provider: ${provider}`;
            elizaLogger.error(errorMessage);
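
The parsing above assumes the gateway returns an OpenAI-style chat completion. A sketch of the shape the code relies on, with field names inferred from the access pattern `json.choices[0].message.content` rather than from an official Livepeer schema:

```typescript
// Assumed response shape for POST {LIVEPEER_GATEWAY_URL}/llm with stream: false.
interface LivepeerLlmResponse {
    choices: Array<{
        message: {
            role: string;
            // May begin with the Llama chat header token
            // "<|start_header_id|>assistant<|end_header_id|>\n\n", which the
            // code above strips with a regex replace.
            content: string;
        };
    }>;
}
```
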
@@ -1721,7 +1770,6 @@ export const generateImage = async (
                }
            },
        });
-
        // Convert the returned image URLs to base64 to match existing functionality
        const base64Promises = result.data.images.map(async (image) => {
            const response = await fetch(image.url);
@@ -1822,15 +1870,18 @@
        if (!baseUrl.protocol.startsWith("http")) {
            throw new Error("Invalid Livepeer Gateway URL protocol");
        }

        const response = await fetch(
            `${baseUrl.toString()}text-to-image`,
            {
                method: "POST",
                headers: {
                    "Content-Type": "application/json",
                    Authorization: "Bearer eliza-app-img",
                },
                body: JSON.stringify({
-                   model_id: model,
+                   model_id:
+                       data.modelId || "ByteDance/SDXL-Lightning",
                    prompt: data.prompt,
                    width: data.width || 1024,
                    height: data.height || 1024,
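
A hedged usage sketch for the image path above; the option names mirror the fields read from `data` in this hunk, while the exact `generateImage` signature and the runtime wiring are assumptions:

```typescript
// Assumes `runtime` (an agent runtime whose image provider resolves to Livepeer)
// is in scope and that generateImage(data, runtime) matches the surrounding code.
const result = await generateImage(
    {
        prompt: "a watercolor fox in a misty forest",
        width: 1024,
        height: 1024,
        modelId: "ByteDance/SDXL-Lightning", // optional; this is also the fallback default
    },
    runtime
);
```
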
@@ -2108,6 +2159,8 @@ export async function handleProvider(
            return await handleOllama(options);
        case ModelProviderName.DEEPSEEK:
            return await handleDeepSeek(options);
        case ModelProviderName.LIVEPEER:
            return await handleLivepeer(options);
        default: {
            const errorMessage = `Unsupported provider: ${provider}`;
            elizaLogger.error(errorMessage);
@@ -2395,6 +2448,36 @@ async function handleDeepSeek({
    });
}

async function handleLivepeer({
    model,
    apiKey,
    schema,
    schemaName,
    schemaDescription,
    mode,
    modelOptions,
}: ProviderOptions): Promise<GenerateObjectResult<unknown>> {
    console.log("Livepeer provider api key:", apiKey);
    if (!apiKey) {
        throw new Error("Livepeer provider requires LIVEPEER_GATEWAY_URL to be configured");
    }

    const livepeerClient = createOpenAI({
        apiKey,
        baseURL: apiKey // Use the apiKey as the baseURL since it contains the gateway URL
    });

    return await aiGenerateObject({
        model: livepeerClient.languageModel(model),
        schema,
        schemaName,
        schemaDescription,
        mode,
        ...modelOptions,
    });
}

// Add type definition for Together AI response
interface TogetherAIImageResponse {
    data: Array<{
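
Note the quirk in `handleLivepeer` above: the configured `LIVEPEER_GATEWAY_URL` arrives as `apiKey` and is reused as the `baseURL` of an OpenAI-compatible client. A hedged sketch of the options `handleProvider` would route through for object generation; only the fields `handleLivepeer` destructures are shown, other `ProviderOptions` fields such as `runtime` and `context` are omitted, and the schema is illustrative:

```typescript
import { z } from "zod";

// Illustrative options for routing generateObject through the Livepeer gateway.
const options = {
    provider: ModelProviderName.LIVEPEER,
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct",
    apiKey: process.env.LIVEPEER_GATEWAY_URL, // the gateway URL doubles as the "key"
    schema: z.object({ answer: z.string() }),
    schemaName: "Answer",
    schemaDescription: "A single short answer",
    mode: "json" as const,
    modelOptions: { temperature: 0 },
};
// await handleProvider(options);
```
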
37 changes: 32 additions & 5 deletions packages/core/src/models.ts
@@ -932,11 +932,38 @@ export const models: Models = {
        },
    },
    [ModelProviderName.LIVEPEER]: {
-       // livepeer endpoint is handled from the sdk
+       endpoint: settings.LIVEPEER_GATEWAY_URL,
        model: {
            [ModelClass.SMALL]: {
                name:
                    settings.SMALL_LIVEPEER_MODEL ||
                    "meta-llama/Meta-Llama-3.1-8B-Instruct",
                stop: [],
                maxInputTokens: 8000,
                maxOutputTokens: 8192,
                temperature: 0,
            },
            [ModelClass.MEDIUM]: {
                name:
                    settings.MEDIUM_LIVEPEER_MODEL ||
                    "meta-llama/Meta-Llama-3.1-8B-Instruct",
                stop: [],
                maxInputTokens: 8000,
                maxOutputTokens: 8192,
                temperature: 0,
            },
            [ModelClass.LARGE]: {
                name:
                    settings.LARGE_LIVEPEER_MODEL ||
                    "meta-llama/Meta-Llama-3.1-8B-Instruct",
                stop: [],
                maxInputTokens: 8000,
                maxOutputTokens: 8192,
                temperature: 0,
            },
            [ModelClass.IMAGE]: {
                name:
-                   settings.LIVEPEER_IMAGE_MODEL || "ByteDance/SDXL-Lightning",
+                   settings.IMAGE_LIVEPEER_MODEL || "ByteDance/SDXL-Lightning",
            },
        },
    },
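
A short sketch of how the entries above are consumed at runtime; the lookup pattern mirrors the tests in `models.test.ts`, while the import paths are assumed:

```typescript
import { models } from "./models";
import { ModelClass, ModelProviderName } from "./types";

// Endpoint comes straight from LIVEPEER_GATEWAY_URL.
const endpoint = models[ModelProviderName.LIVEPEER].endpoint;

// Per-class model names fall back to the defaults above when the corresponding
// *_LIVEPEER_MODEL environment variables are unset.
const small = models[ModelProviderName.LIVEPEER].model[ModelClass.SMALL];
console.log(endpoint, small.name, small.maxOutputTokens);
```
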
@@ -948,21 +975,21 @@ export const models: Models = {
                stop: [],
                maxInputTokens: 128000,
                maxOutputTokens: 8192,
-               temperature: 0.6,
+               temperature: 0,
            },
            [ModelClass.MEDIUM]: {
                name: settings.MEDIUM_INFERA_MODEL || "mistral-nemo:latest",
                stop: [],
                maxInputTokens: 128000,
                maxOutputTokens: 8192,
-               temperature: 0.6,
+               temperature: 0,
            },
            [ModelClass.LARGE]: {
                name: settings.LARGE_INFERA_MODEL || "mistral-small:latest",
                stop: [],
                maxInputTokens: 128000,
                maxOutputTokens: 8192,
-               temperature: 0.6,
+               temperature: 0,
            },
        },
    },