
Commit

fix: cookbooks & integrations page
m-ods authored Feb 5, 2025
2 parents 64d14dd + 875113e commit 1d2a4e2
Showing 8 changed files with 70 additions and 142 deletions.
75 changes: 18 additions & 57 deletions fern/pages/04-lemur/ask-questions.mdx
@@ -250,54 +250,14 @@ under or around the kneecap and pain when walking.

## Q&A with specialized endpoint

<Tabs>
<Tab language="python" title="Python">

The [LeMUR Question & Answer function](https://www.assemblyai.com/docs/api-reference/lemur/question-answer) requires no prompt engineering and facilitates more deterministic and structured outputs. You can use it with `transcript.lemur.question()`.

To use it, define a list of `aai.LemurQuestion` objects. For each question, you can define additional `context` and specify either an `answer_format` or a list of `answer_options`. Additionally, you can define an overall `context`.

</Tab>
<Tab language="typescript" title="TypeScript">

The [Question & Answer endpoint](https://www.assemblyai.com/docs/api-reference/lemur/question-answer) requires no prompt engineering and facilitates more deterministic and structured outputs. You can use it with `client.lemur.questionAnswer()`.

To use it, define a list of questions. For each question, you can define additional `context` and specify either an `answer_format` or a list of `answer_options`. Additionally, you can define an overall `context`.

</Tab>
<Tab language="ruby" title="Ruby">

The [Question & Answer endpoint](https://www.assemblyai.com/docs/api-reference/lemur/question-answer) requires no prompt engineering and facilitates more deterministic and structured outputs. You can use it with `client.lemur.question_answer()`.

To use it, define a list of questions. For each question, you can define additional `context` and specify either an `answer_format` or a list of `answer_options`. Additionally, you can define an overall `context`.

</Tab>
<Tab language="golang" title="Go">

The [Question & Answer endpoint](https://www.assemblyai.com/docs/api-reference/lemur/question-answer) requires no prompt engineering and facilitates more deterministic and structured outputs. You can use it with `client.LeMUR.Question()`.

To use it, define a list of `aai.LemurQuestion` objects. For each question, you can define additional `context` and specify either an `answer_format` or a list of `answer_options`. Additionally, you can define an overall `context`.

</Tab>
<Tab language="java" title="Java">

The [Question & Answer endpoint](https://www.assemblyai.com/docs/api-reference/lemur/question-answer) requires no prompt engineering and facilitates more deterministic and structured outputs. You can use it with `client.lemur().questionAnswer()`.

To use it, define a list of `LemurQuestion` objects. For each question, you can define additional `context` and specify either an `answerFormat` or a list of `answerOptions`. Additionally, you can define an overall `context`.

</Tab>
<Tab language="csharp" title="C#">

The [Question & Answer endpoint](https://www.assemblyai.com/docs/api-reference/lemur/question-answer) requires no prompt engineering and facilitates more deterministic and structured outputs. You can use it with `client.Lemur.QuestionAnswerAsync()`.

To use it, define a list of `LemurQuestion` objects. For each question, you can define additional `Context` and specify either an `AnswerFormat` or a list of `AnswerOptions`. Additionally, you can define an overall `Context`.

</Tab>
</Tabs>
The [LeMUR Question & Answer function](https://www.assemblyai.com/docs/api-reference/lemur/question-answer) requires no prompt engineering and facilitates more deterministic and structured outputs. See the code examples below for more information on how to use this endpoint.

<Tabs groupId="language">
<Tab language="python" title="Python" default>

To use it, define a list of `aai.LemurQuestion` objects. For each question, you can define additional `context` and specify either an `answer_format` or a list of `answer_options`. Additionally, you can define an overall `context`.
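
For example, a minimal sketch of this pattern might look like the following (the API key, audio URL, questions, contexts, and answer formats are illustrative placeholders):

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

transcriber = aai.Transcriber()
# Placeholder audio URL - replace with your own file or URL.
transcript = transcriber.transcribe("https://example.org/customer.mp3")

# Each question can carry its own context plus either an answer_format
# or a fixed set of answer_options.
questions = [
    aai.LemurQuestion(
        question="What was the customer's main concern?",
        context="The customer is calling a support line.",
        answer_format="One short sentence",
    ),
    aai.LemurQuestion(
        question="Was the issue resolved by the end of the call?",
        answer_options=["Yes", "No", "Unclear"],
    ),
]

# An overall context can also be passed that applies to all questions.
result = transcript.lemur.question(questions, context="This is a customer support call.")

for qa_response in result.response:
    print(f"Question: {qa_response.question}")
    print(f"Answer: {qa_response.answer}")
```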


```python {8-16,18-22}
import assemblyai as aai

@@ -330,6 +290,9 @@ for qa_response in result.response:
</Tab>
<Tab language="typescript" title="TypeScript">

To use it, define a list of `questions`. For each question, you can define additional `context` and specify either an `answer_format` or a list of `answer_options`. Additionally, you can define an overall `context`.


```ts {12-23,25-30}
import { AssemblyAI } from 'assemblyai'

@@ -374,7 +337,10 @@ run()
</Tab>
<Tab language="java" title="Java">

```java {18-25,27-32}
To use it, define a list of `LemurQuestion` objects. For each question, you can define additional `context` and specify either an `answerFormat` or a list of `answerOptions`. Additionally, you can define an overall `context`.


```java {17-31}
import com.assemblyai.api.AssemblyAI;
import com.assemblyai.api.resources.transcripts.types.*;
import com.assemblyai.api.resources.lemur.requests.*;
@@ -418,6 +384,9 @@ public final class App {
</Tab>
<Tab language="csharp" title="C#">

To use it, define a list of `LemurQuestion` objects. For each question, you can define additional `Context` and specify either an `AnswerFormat` or a list of `AnswerOptions`. Additionally, you can define an overall `Context`.


```csharp {12-32}
using AssemblyAI;
using AssemblyAI.Lemur;
@@ -464,6 +433,9 @@ foreach (var qa in response.Response)
</Tab>
<Tab language="ruby" title="Ruby">

To use it, define a list of `questions`. For each question, you can define additional `context` and specify either an `answer_format` or a list of `answer_options`. Additionally, you can define an overall `context`.


```ruby {9-25}
require 'assemblyai'

@@ -508,18 +480,7 @@ end

This example shows how you can run a custom LeMUR task with an advanced prompt to create custom Q&A responses:

<Button
size="md"
theme="dark"
endIcon="chevron"
variant="filled"
link={{
href: 'https://github.com/AssemblyAI/cookbook/blob/master/lemur/task-endpoint-structured-QA.ipynb'
}}
>
Cookbook: Custom Q&A with LeMUR Task
</Button>

<Card icon="book" title="Cookbook: Custom Q&A with LeMUR Task" href="https://github.com/AssemblyAI/cookbook/blob/master/lemur/task-endpoint-structured-QA.ipynb"/>



95 changes: 40 additions & 55 deletions fern/pages/04-lemur/customize-parameters.mdx
@@ -33,6 +33,12 @@ result = transcript.lemur.task(
final_model=aai.LemurModel.claude3_5_sonnet
)
```
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `aai.LemurModel.claude3_5_sonnet` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `aai.LemurModel.claude3_opus` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `aai.LemurModel.claude3_haiku` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `aai.LemurModel.claude3_sonnet` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |

</Tab>
<Tab language="typescript" title="TypeScript">
@@ -44,6 +50,12 @@ const { response } = await client.lemur.task({
final_model: 'anthropic/claude-3-5-sonnet'
})
```
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `'anthropic/claude-3-5-sonnet'` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `'anthropic/claude-3-opus'` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `'anthropic/claude-3-haiku'` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `'anthropic/claude-3-sonnet'` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |

</Tab>
<Tab language="golang" title="Go">
@@ -56,6 +68,12 @@ params.FinalModel = "anthropic/claude-3-5-sonnet"

result, _ := client.LeMUR.Task(ctx, params)
```
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `"anthropic/claude-3-5-sonnet"` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `"anthropic/claude-3-opus"` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `"anthropic/claude-3-haiku"` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `"anthropic/claude-3-sonnet"` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |

</Tab>
<Tab language="java" title="Java">
@@ -67,6 +85,12 @@ var params = LemurTaskParams.builder()
.finalModel(LemurModel.ANTHROPIC_CLAUDE3_5_SONNET)
.build();
```
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `LemurModel.ANTHROPIC_CLAUDE3_5_SONNET` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `LemurModel.ANTHROPIC_CLAUDE3_OPUS` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `LemurModel.ANTHROPIC_CLAUDE3_HAIKU` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `LemurModel.ANTHROPIC_CLAUDE3_SONNET` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |

</Tab>
<Tab language="csharp" title="C#">
@@ -79,6 +103,12 @@ var lemurTaskParams = new LemurTaskParams
FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};
```
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `LemurModel.AnthropicClaude3_5_Sonnet` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `LemurModel.AnthropicClaude3_Opus` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `LemurModel.AnthropicClaude3_Haiku` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `LemurModel.AnthropicClaude3_Sonnet` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |

</Tab>
<Tab language="ruby" title="Ruby">
@@ -90,59 +120,14 @@ response = client.lemur.task(
final_model: AssemblyAI::Lemur::LemurModel::ANTHROPIC_CLAUDE3_5_SONNET
)
```

</Tab>
</Tabs>

<Tabs>
<Tab language="python" title="Python">
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `aai.LemurModel.claude3_5_sonnet` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `aai.LemurModel.claude3_opus` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `aai.LemurModel.claude3_haiku` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `aai.LemurModel.claude3_sonnet` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |
</Tab>
<Tab language="typescript" title="TypeScript">
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `'anthropic/claude-3-5-sonnet'` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `'anthropic/claude-3-opus'` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `'anthropic/claude-3-haiku'` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `'anthropic/claude-3-sonnet'` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |
</Tab>
<Tab language="golang" title="Go">
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `"anthropic/claude-3-5-sonnet"` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `"anthropic/claude-3-opus"` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `"anthropic/claude-3-haiku"` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `"anthropic/claude-3-sonnet"` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |
</Tab>
<Tab language="java" title="Java">
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `"anthropic/claude-3-5-sonnet"` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `"anthropic/claude-3-opus"` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `"anthropic/claude-3-haiku"` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `"anthropic/claude-3-sonnet"` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |
</Tab>
<Tab language="csharp" title="C#">
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `"anthropic/claude-3-5-sonnet"` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `"anthropic/claude-3-opus"` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `"anthropic/claude-3-haiku"` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `"anthropic/claude-3-sonnet"` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |
</Tab>
<Tab language="ruby" title="Ruby">
| Model | SDK Parameter | Description |
| --- | --- | --- |
| **Claude 3.5 Sonnet** | `"anthropic/claude-3-5-sonnet"` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `"anthropic/claude-3-opus"` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `"anthropic/claude-3-haiku"` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `"anthropic/claude-3-sonnet"` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |
</Tab>
| **Claude 3.5 Sonnet** | `AssemblyAI::Lemur::LemurModel::ANTHROPIC_CLAUDE3_5_SONNET` | Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet. This uses Anthropic's Claude 3.5 Sonnet model version `claude-3-5-sonnet-20240620`. |
| **Claude 3.0 Opus** | `AssemblyAI::Lemur::LemurModel::ANTHROPIC_CLAUDE3_OPUS` | Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks. |
| **Claude 3.0 Haiku** | `AssemblyAI::Lemur::LemurModel::ANTHROPIC_CLAUDE3_HAIKU` | Claude 3 Haiku is the fastest model that can execute lightweight actions. |
| **Claude 3.0 Sonnet** | `AssemblyAI::Lemur::LemurModel::ANTHROPIC_CLAUDE3_SONNET` | Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks. |

</Tab>
</Tabs>

You can find more information on pricing for each model <a href="https://www.assemblyai.com/pricing" target="_blank">here</a>.
@@ -157,7 +142,7 @@ You can find more information on pricing for each model <a href="https://www.assemblyai.com/pricing" target="_blank">here</a>.
<Tabs groupId="language">
<Tab language="python" title="Python" default>

You can change the maximum output size in tokens by specifying the `MaxOutputSize` parameter. Up to 4000 tokens are allowed.
You can change the maximum output size in tokens by specifying the `max_output_size` parameter. Up to 4000 tokens are allowed.
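
As a minimal sketch, assuming the parameter is passed directly to `transcript.lemur.task()` (the API key, audio URL, and prompt are placeholders):

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

transcript = aai.Transcriber().transcribe("https://example.org/meeting.mp3")  # placeholder URL

result = transcript.lemur.task(
    "Summarize the key action items from this meeting.",  # placeholder prompt
    max_output_size=1000  # cap the response length; up to 4000 tokens are allowed
)

print(result.response)
```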


```python {3}
@@ -184,7 +169,7 @@ const { response } = await client.lemur.task({
</Tab>
<Tab language="golang" title="Go">

You can change the maximum output size in tokens by specifying the `max_output_size` parameter. Up to 4000 tokens are allowed.
You can change the maximum output size in tokens by specifying the `MaxOutputSize` parameter. Up to 4000 tokens are allowed.


```go {4}
@@ -213,7 +198,7 @@ var params = LemurTaskParams.builder()
</Tab>
<Tab language="csharp" title="C#">

You can change the maximum output size in tokens by specifying the `max_output_size` parameter. Up to 4000 tokens are allowed.
You can change the maximum output size in tokens by specifying the `MaxOutputSize` parameter. Up to 4000 tokens are allowed.


```csharp {5}
Expand Down Expand Up @@ -499,7 +484,7 @@ You can feed in up to a maximum of 100 files or 100 hours, whichever is lower.
<Tabs groupId="language">
<Tab language="python" title="Python" default>

```python
```python {1-7}
transcript_group = transcriber.transcribe_group(
[
"https://example.org/customer1.mp3",
