feat: Add model context protocol tools with ollama provider #1283
Conversation
Force-pushed from b02f064 to c1294c5.
Ah cool, thanks. We should have most of our distros share some of these common utilities; we will refactor the template code accordingly!
Signed-off-by: Shreyanand <[email protected]>
@ashwinb Thanks! I fixed the pre-commit errors.
# What does this PR do?

The Model Context Protocol (MCP) allows remote tools to be connected to Agents. The current Ollama provider does not support it. This PR adds the code changes necessary to make the integration between the Ollama backend and MCP work.

This PR is an extension of #816 for Ollama.
## Test Plan

1. Run the llama-stack server:

```
llama stack build --template ollama --image-type conda
llama stack run ./templates/ollama/run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env OLLAMA_URL=http://localhost:11434
```

2. Run the sample client agent with an MCP tool:

```
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types.agent_create_params import AgentConfig
from llama_stack_client.types.shared_params.url import URL
from llama_stack_client import LlamaStackClient
from termcolor import cprint

## Start the local MCP server
# git clone https://github.com/modelcontextprotocol/python-sdk
# Follow instructions to get the env ready
# cd examples/servers/simple-tool
# uv run mcp-simple-tool --transport sse --port 8000

# Connect to the llama stack server
base_url = "http://localhost:8321"
model_id = "meta-llama/Llama-3.2-3B-Instruct"
client = LlamaStackClient(base_url=base_url)

# Register MCP tools
client.toolgroups.register(
    toolgroup_id="mcp::filesystem",
    provider_id="model-context-protocol",
    mcp_endpoint=URL(uri="http://localhost:8000/sse"),
)

# Define an agent with the MCP toolgroup
agent_config = AgentConfig(
    model=model_id,
    instructions="You are a helpful assistant",
    toolgroups=["mcp::filesystem"],
    input_shields=[],
    output_shields=[],
    enable_session_persistence=False,
)
agent = Agent(client, agent_config)
user_prompts = [
    "Fetch content from https://www.google.com and print the response"
]

# Run a session with the agent
session_id = agent.create_session("test-session")
for prompt in user_prompts:
    cprint(f"User> {prompt}", "green")
    response = agent.create_turn(
        messages=[
            {
                "role": "user",
                "content": prompt,
            }
        ],
        session_id=session_id,
    )
    for log in EventLogger().log(response):
        log.print()
```
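As a quick sanity check after step 2 (not part of the PR's own test plan), the registration can be verified by listing toolgroups and their tools. This is a minimal sketch assuming the llama-stack-client `toolgroups.list()` and `tools.list()` methods behave as their names suggest; the toolgroup name mirrors the example above:

```
# Hypothetical sanity check: confirm the MCP toolgroup registered in step 2
# is visible to the server. Assumes the same base_url as above.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Each registered toolgroup should appear with its provider; we expect
# ("mcp::filesystem", "model-context-protocol") among the results.
for tg in client.toolgroups.list():
    print(tg.identifier, tg.provider_id)

# The individual tools are resolved from the MCP server's SSE endpoint.
for tool in client.tools.list(toolgroup_id="mcp::filesystem"):
    print(tool.identifier, tool.description)
```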
# Documentation

The file docs/source/distributions/self_hosted_distro/ollama.md is updated to indicate that the MCP tool runtime is now available in the Ollama distribution.