
Commit

Merge branch 'langchain-ai:master' into master
baur-krykpayev authored Jun 16, 2024
2 parents c8ff6a8 + 892bd4c commit 053df43
Showing 7 changed files with 227 additions and 59 deletions.
8 changes: 7 additions & 1 deletion .github/workflows/_integration_test.yml
@@ -12,7 +12,6 @@ env:

jobs:
build:
environment: Scheduled testing
defaults:
run:
working-directory: ${{ inputs.working-directory }}
@@ -53,8 +52,15 @@ jobs:
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
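For context, the Azure OpenAI variables added above mirror what the langchain-openai Azure integration expects. The sketch below (not part of this commit) shows how an integration test might consume them; AzureChatOpenAI and its parameter names come from langchain_openai, while reading the values from these exact environment variables is an assumption.

# Hedged sketch: wiring the workflow secrets above into an Azure chat model.
import os

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_API_BASE"],
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

# A trivial round-trip call, the kind of check an integration test might run.
print(llm.invoke("ping").content)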
2 changes: 1 addition & 1 deletion docs/docs/how_to/streaming.ipynb
@@ -1003,7 +1003,7 @@
"id": "798ea891-997c-454c-bf60-43124f40ee1b",
"metadata": {},
"source": [
"Because both the model and the parser support streaming, we see sreaming events from both components in real time! Kind of cool isn't it? 🦜"
"Because both the model and the parser support streaming, we see streaming events from both components in real time! Kind of cool isn't it? 🦜"
]
},
{
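The sentence fixed above refers to streaming a chat model piped into an output parser. A minimal sketch of that pattern follows; it is not taken from the notebook, and ChatOpenAI, JsonOutputParser, and the prompt are assumptions, while astream_events is the LangChain API the surrounding how-to guide describes.

# Hedged sketch: watching streaming events from both a model and a parser.
import asyncio

from langchain_core.output_parsers import JsonOutputParser
from langchain_openai import ChatOpenAI

chain = ChatOpenAI(model="gpt-4o-mini") | JsonOutputParser()

async def main() -> None:
    async for event in chain.astream_events(
        "Output a JSON object with one key, joke.", version="v2"
    ):
        # Events arrive from the chat model and the parser as soon as each
        # component produces them.
        print(event["event"], event.get("name"))

asyncio.run(main())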
4 changes: 2 additions & 2 deletions docs/docs/integrations/chat/nvidia_ai_endpoints.ipynb
@@ -134,7 +134,7 @@
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"# connect to an embedding NIM running at localhost:8000, specifying a specific model\n",
"llm = ChatNVIDIA(base_url=\"http://localhost:8000/v1\", model=\"meta-llama3-8b-instruct\")"
"llm = ChatNVIDIA(base_url=\"http://localhost:8000/v1\", model=\"meta/llama3-8b-instruct\")"
]
},
{
@@ -658,7 +658,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.10.2"
}
},
"nbformat": 4,
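Once the ChatNVIDIA client above points at a locally hosted NIM, it behaves like any other LangChain chat model. A short usage sketch, assuming a chat NIM is already serving the corrected model name on localhost:8000 (the prompts are illustrative):

from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Assumes a chat NIM serving meta/llama3-8b-instruct on localhost:8000.
llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta/llama3-8b-instruct")

# Standard chat-model interface: invoke for a single response, stream for tokens.
print(llm.invoke("Give me a one-sentence summary of LangChain.").content)

for chunk in llm.stream("Now say it as a haiku."):
    print(chunk.content, end="", flush=True)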
2 changes: 1 addition & 1 deletion docs/docs/integrations/providers/nvidia.mdx
@@ -62,7 +62,7 @@ When ready to deploy, you can self-host models with NVIDIA NIM—which is includ
from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings, NVIDIARerank

# connect to a chat NIM running at localhost:8000, specifying a specific model
- llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta-llama3-8b-instruct")
+ llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta/llama3-8b-instruct")

# connect to an embedding NIM running at localhost:8080
embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1")
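Building on the provider snippet above, a rough sketch of exercising the self-hosted chat and embedding NIMs follows. It is not taken from the provider page; the sample query is made up, and NIMs running on these ports are assumed, as in the docs.

from langchain_nvidia_ai_endpoints import ChatNVIDIA, NVIDIAEmbeddings

llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta/llama3-8b-instruct")
embedder = NVIDIAEmbeddings(base_url="http://localhost:8080/v1")

# Embed a query with the local embedding NIM, then ask the chat NIM a question.
vector = embedder.embed_query("What is a NIM?")
print(len(vector), "dimensions")
print(llm.invoke("In one sentence, what is NVIDIA NIM?").content)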
116 changes: 104 additions & 12 deletions libs/community/langchain_community/chat_models/zhipuai.py
@@ -163,23 +163,115 @@ def _truncate_params(payload: Dict[str, Any]) -> None:


class ChatZhipuAI(BaseChatModel):
"""
`ZhipuAI` large language chat models API.
"""ZhipuAI chat model integration.
To use, you should have the ``PyJWT`` python package installed.
Setup:
Install ``PyJWT`` and set environment variable ``ZHIPUAI_API_KEY``
Example:
.. code-block:: python
.. code-block:: bash
from langchain_community.chat_models import ChatZhipuAI
pip install pyjwt
export ZHIPUAI_API_KEY="your-api-key"
zhipuai_chat = ChatZhipuAI(
temperature=0.5,
api_key="your-api-key",
model="glm-4"
)
Key init args — completion params:
model: Optional[str]
Name of OpenAI model to use.
temperature: float
Sampling temperature.
max_tokens: Optional[int]
Max number of tokens to generate.
"""
Key init args — client params:
api_key: Optional[str]
ZhipuAI API key. If not passed in will be read from env var ZHIPUAI_API_KEY.
api_base: Optional[str]
Base URL for API requests.
See full list of supported init args and their descriptions in the params section.
Instantiate:
.. code-block:: python
from langchain_community.chat_models import ChatZhipuAI
zhipuai_chat = ChatZhipuAI(
temperature=0.5,
api_key="your-api-key",
model="glm-4",
# api_base="...",
# other params...
)
Invoke:
.. code-block:: python
messages = [
("system", "你是一名专业的翻译家,可以将用户的中文翻译为英文。"),
("human", "我喜欢编程。"),
]
zhipuai_chat.invoke(messages)
.. code-block:: python
AIMessage(content='I enjoy programming.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 23, 'total_tokens': 29}, 'model_name': 'glm-4', 'finish_reason': 'stop'}, id='run-c5d9af91-55c6-470e-9545-02b2fa0d7f9d-0')
Stream:
.. code-block:: python
for chunk in zhipuai_chat.stream(messages):
print(chunk)
.. code-block:: python
content='I' id='run-4df71729-618f-4e2b-a4ff-884682723082'
content=' enjoy' id='run-4df71729-618f-4e2b-a4ff-884682723082'
content=' programming' id='run-4df71729-618f-4e2b-a4ff-884682723082'
content='.' id='run-4df71729-618f-4e2b-a4ff-884682723082'
content='' response_metadata={'finish_reason': 'stop'} id='run-4df71729-618f-4e2b-a4ff-884682723082'
.. code-block:: python
stream = llm.stream(messages)
full = next(stream)
for chunk in stream:
full += chunk
full
.. code-block::
AIMessageChunk(content='I enjoy programming.', response_metadata={'finish_reason': 'stop'}, id='run-20b05040-a0b4-4715-8fdc-b39dba9bfb53')
Async:
.. code-block:: python
await zhipuai_chat.ainvoke(messages)
# stream:
# async for chunk in zhipuai_chat.astream(messages):
# print(chunk)
# batch:
# await zhipuai_chat.abatch([messages])
.. code-block:: python
[AIMessage(content='I enjoy programming.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 23, 'total_tokens': 29}, 'model_name': 'glm-4', 'finish_reason': 'stop'}, id='run-ba06af9d-4baa-40b2-9298-be9c62aa0849-0')]
Response metadata
.. code-block:: python
ai_msg = zhipuai_chat.invoke(messages)
ai_msg.response_metadata
.. code-block:: python
{'token_usage': {'completion_tokens': 6,
'prompt_tokens': 23,
'total_tokens': 29},
'model_name': 'glm-4',
'finish_reason': 'stop'}
""" # noqa: E501

@property
def lc_secrets(self) -> Dict[str, str]:
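Beyond the invoke, stream, and async examples in the new docstring, ChatZhipuAI composes with LCEL like any other chat model. A rough sketch, not part of this commit: the prompt and chain below are illustrative, and ZHIPUAI_API_KEY is read from the environment as described in the docstring's Setup section.

# Illustrative sketch: ChatZhipuAI inside a small prompt | model | parser chain.
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a translator that renders Chinese into English."),
        ("human", "{text}"),
    ]
)
chain = prompt | ChatZhipuAI(model="glm-4", temperature=0.5) | StrOutputParser()

print(chain.invoke({"text": "我喜欢编程。"}))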
