Commit 9eda8f2: docs: fix trim_messages code blocks (langchain-ai#23271)
Authored by baskaryan on Jun 21, 2024 (parent commit 8632626).
1 changed file, 94 additions and 120 deletions: libs/core/langchain_core/messages/utils.py
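The examples that follow all count tokens with ``dummy_token_counter``, which is defined earlier in this docstring. For reference, here is a self-contained sketch that reproduces the counts the examples assume (10 tokens per plain-string message, 4 tokens per content block); the ``Msg`` class is a hypothetical stand-in for ``BaseMessage``:

```python
from dataclasses import dataclass
from typing import List, Union


@dataclass
class Msg:
    # Hypothetical stand-in for BaseMessage: content is either a plain
    # string or a list of content blocks.
    content: Union[str, list]


def dummy_token_counter(messages: List[Msg]) -> int:
    # Per the examples: every message carries a 3-token prefix and a
    # 3-token suffix, and each text block costs 4 tokens, so a
    # plain-string message counts as 3 + 4 + 3 = 10 tokens.
    default_content_len = 4
    default_msg_prefix_len = 3
    default_msg_suffix_len = 3
    count = 0
    for msg in messages:
        if isinstance(msg.content, str):
            count += default_msg_prefix_len + default_content_len + default_msg_suffix_len
        elif isinstance(msg.content, list):
            count += (
                default_msg_prefix_len
                + default_content_len * len(msg.content)
                + default_msg_suffix_len
            )
    return count
```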
First 30 tokens, not allowing partial messages:

.. code-block:: python

    trim_messages(messages, max_tokens=30, token_counter=dummy_token_counter, strategy="first")

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="first"),
    ]
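Why the cut falls after the first HumanMessage: a quick budget check under the assumed per-message costs (this supposes the AIMessage with id="second" holds two 4-token content blocks, as the partial examples suggest):

```python
PER_PLAIN_MESSAGE = 3 + 4 + 3    # prefix + text + suffix = 10 tokens
kept = 2 * PER_PLAIN_MESSAGE     # SystemMessage + first HumanMessage = 20
two_block_ai = 3 + 2 * 4 + 3     # assumed two-block AIMessage = 14
assert kept <= 30                # the first two messages fit
assert kept + two_block_ai > 30  # the whole AIMessage would overshoot
```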
First 30 tokens, allowing partial messages:

.. code-block:: python

    trim_messages(
        messages,
        max_tokens=30,
        token_counter=dummy_token_counter,
        strategy="first",
        allow_partial=True,
    )

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="first"),
        AIMessage([{"type": "text", "text": "This is the FIRST 4 token block."}], id="second"),
    ]
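With ``allow_partial=True`` the budget is spent exactly: under the same assumed costs, keeping just one 4-token block of the AIMessage lands on 30 tokens.

```python
kept = 10 + 10                   # SystemMessage + first HumanMessage
partial_ai = 3 + 4 + 3           # prefix + one kept content block + suffix
assert kept + partial_ai == 30   # exactly the max_tokens budget
```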
First 30 tokens, allowing partial messages, have to end on HumanMessage:

.. code-block:: python

    trim_messages(
        messages,
        max_tokens=30,
        token_counter=dummy_token_counter,
        strategy="first",
        allow_partial=True,
        end_on="human",
    )

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="first"),
    ]
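``end_on="human"`` is why the partial AIMessage from the previous example disappears: whatever the budget allows, trailing messages are dropped until the result ends on a HumanMessage. A minimal sketch, using hypothetical string stand-ins for messages:

```python
from typing import Callable, List


def apply_end_on_human(messages: List[str], is_human: Callable[[str], bool]) -> List[str]:
    # Drop messages from the end until the last one is a human message.
    while messages and not is_human(messages[-1]):
        messages = messages[:-1]
    return messages


msgs = ["system:s", "human:first", "ai:partial block"]
trimmed = apply_end_on_human(msgs, is_human=lambda m: m.startswith("human:"))
# trimmed == ["system:s", "human:first"]
```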
Last 30 tokens, including system message, not allowing partial messages:

.. code-block:: python

    trim_messages(messages, max_tokens=30, include_system=True, token_counter=dummy_token_counter, strategy="last")

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="third"),
        AIMessage("This is a 4 token text. The full message is 10 tokens.", id="fourth"),
    ]
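A simplified sketch of what ``strategy="last"`` does (not the library implementation): keep whole messages from the end while the running count, plus an optionally reserved leading system message, stays within budget. Messages are hypothetical strings here, costed at a flat 10 tokens each:

```python
from typing import Callable, List


def trim_last(messages: List[str], max_tokens: int,
              token_counter: Callable[[List[str]], int],
              include_system: bool = False) -> List[str]:
    head: List[str] = []
    # Optionally reserve a leading system message regardless of recency.
    if include_system and messages and messages[0].startswith("system:"):
        head, messages = [messages[0]], messages[1:]
    kept: List[str] = []
    # Walk backwards, keeping whole messages while everything still fits.
    for msg in reversed(messages):
        if token_counter(head + [msg] + kept) > max_tokens:
            break
        kept.insert(0, msg)
    return head + kept


counter = lambda msgs: 10 * len(msgs)  # flat 10 tokens per message
trim_last(["system:s", "human:first", "ai:second", "human:third", "ai:fourth"],
          30, counter, include_system=True)
# → ["system:s", "human:third", "ai:fourth"]
```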
Last 40 tokens, including system message, allowing partial messages:

.. code-block:: python

    trim_messages(
        messages,
        max_tokens=40,
        token_counter=dummy_token_counter,
        strategy="last",
        allow_partial=True,
        include_system=True,
    )

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        AIMessage(
            [{"type": "text", "text": "This is the FIRST 4 token block."}],
            id="second",
        ),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="third"),
        AIMessage("This is a 4 token text. The full message is 10 tokens.", id="fourth"),
    ]
Last 30 tokens, including system message, allowing partial messages, end on HumanMessage:

.. code-block:: python

    trim_messages(
        messages,
        max_tokens=30,
        token_counter=dummy_token_counter,
        strategy="last",
        end_on="human",
        include_system=True,
        allow_partial=True,
    )

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        AIMessage(
            [{"type": "text", "text": "This is the FIRST 4 token block."}],
            id="second",
        ),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="third"),
    ]
Last 40 tokens, including system message, allowing partial messages, start on HumanMessage:

.. code-block:: python

    trim_messages(
        messages,
        max_tokens=40,
        token_counter=dummy_token_counter,
        strategy="last",
        include_system=True,
        allow_partial=True,
        start_on="human",
    )

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="third"),
        AIMessage("This is a 4 token text. The full message is 10 tokens.", id="fourth"),
    ]
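``start_on="human"`` explains the difference from the 40-token example without it: after trimming, messages are dropped from the front of the kept window until a HumanMessage is reached, with an included system message exempt. A minimal sketch, again with hypothetical string stand-ins:

```python
from typing import Callable, List


def apply_start_on_human(messages: List[str],
                         is_system: Callable[[str], bool],
                         is_human: Callable[[str], bool]) -> List[str]:
    head: List[str] = []
    # An included system message stays put at the front.
    if messages and is_system(messages[0]):
        head, messages = [messages[0]], messages[1:]
    # Drop leading messages until the window starts on a human message.
    while messages and not is_human(messages[0]):
        messages = messages[1:]
    return head + messages


msgs = ["system:s", "ai:partial block", "human:third", "ai:fourth"]
trimmed = apply_start_on_human(
    msgs,
    is_system=lambda m: m.startswith("system:"),
    is_human=lambda m: m.startswith("human:"),
)
# trimmed == ["system:s", "human:third", "ai:fourth"]
```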
""" # noqa: E501
from langchain_core.language_models import BaseLanguageModel
