
[BUG] GeminiMultimodalLiveLLMService Tool Setup when OpenAILLMContextFrame is passed is not working #1284

Open
navtejreddy-oai opened this issue Feb 25, 2025 · 0 comments


In the gemini.py file of gemini_multimodal_live, the `_connect` method contains the code below, which is where we set up tools and send this information to Google. This path works fine: when I pass tools while initialising GeminiMultimodalLiveLLMService, there is no bug.

async def _connect(self):
    ...
    if self._tools:
        logger.debug(f"Gemini is configuring to use tools{self._tools}")
        config.setup.tools = self._tools
    await self.send_client_event(config)
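To make the working path concrete, here is a minimal stub sketch (hypothetical names, not the real pipecat implementation): tools passed to the constructor are already on `self._tools` when the connect step builds the setup message, so they reach Google.

```python
# Stub illustration (not real pipecat code) of the working init-time path:
# tools passed at construction are present when the setup event is built.

class FakeGeminiService:
    def __init__(self, tools=None):
        self._tools = tools
        self.sent_events = []  # stands in for the websocket to Google

    def connect(self):
        # Mirrors _connect(): the one-time setup event is built from self._tools.
        self.sent_events.append({"setup": {"tools": self._tools or []}})

service = FakeGeminiService(tools=["get_weather"])  # tools at init
service.connect()
print(service.sent_events[0])  # the setup event carries the tools
```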

The problem comes when I want tools to work via context. The code below is supposed to pick up the updated tools, but it doesn't take effect: we never send the updated tool information to Google, and in multimodal Gemini we don't send tool info on every request.

async def process_frame(self, frame: Frame, direction: FrameDirection):
    ...
    elif isinstance(frame, OpenAILLMContextFrame):
        context: GeminiMultimodalLiveContext = GeminiMultimodalLiveContext.upgrade(
            frame.context
        )
        ...
        if not self._context:
            self._context = context
            if frame.context.tools:
                self._tools = frame.context.tools
            await self._create_initial_response()

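The ordering problem can be sketched with a self-contained stub (hypothetical names, not the real pipecat code): the setup message goes out once during connect, before any context frame arrives, so tools stored later in `process_frame` never reach Google.

```python
# Minimal stub (not real pipecat code) of the bug described above: _connect()
# sends the setup event with whatever self._tools holds at connect time, so
# tools arriving later via an OpenAILLMContextFrame are stored but never sent.

class FakeContext:
    def __init__(self, tools):
        self.tools = tools

class FakeGeminiService:
    def __init__(self, tools=None):
        self._tools = tools
        self._context = None
        self.sent_events = []  # stands in for the websocket to Google

    def connect(self):
        # Mirrors _connect(): the setup event is sent exactly once, here.
        self.sent_events.append({"setup": {"tools": self._tools or []}})

    def process_context_frame(self, context):
        # Mirrors the OpenAILLMContextFrame branch: tools are stored locally,
        # but no new event is sent to Google.
        if not self._context:
            self._context = context
            if context.tools:
                self._tools = context.tools

service = FakeGeminiService()          # no tools at init
service.connect()                      # setup goes out with an empty tool list
service.process_context_frame(FakeContext(tools=["get_weather"]))

print(service._tools)       # the tools were stored locally...
print(service.sent_events)  # ...but the only event sent to Google has none
```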
This isn't a problem with other LLMs because we normally pass context.tools as an argument during chat completions on every request. For example, in openai.py:

async def get_chat_completions(
    self, context: OpenAILLMContext, messages: List[ChatCompletionMessageParam]
) -> AsyncStream[ChatCompletionChunk]:
    params = {
        "model": self.model_name,
        "stream": True,
        "messages": messages,
        "tools": context.tools,
        "tool_choice": context.tool_choice,
        "stream_options": {"include_usage": True},
        "frequency_penalty": self._settings["frequency_penalty"],
        "presence_penalty": self._settings["presence_penalty"],
        "seed": self._settings["seed"],
        "temperature": self._settings["temperature"],
        "top_p": self._settings["top_p"],
        "max_tokens": self._settings["max_tokens"],
        "max_completion_tokens": self._settings["max_completion_tokens"],
    }

I believe the fix could live in the `elif isinstance(frame, OpenAILLMContextFrame):` branch or in `_create_initial_response`: send a new request to Google carrying the updated tool information.
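One possible shape for that fix, as a hedged stub sketch (hypothetical names such as `_send_tool_config`, not the actual pipecat API): when a context frame carries tools that differ from what was sent at connect time, re-send the tool configuration before creating the initial response, instead of only storing it on `self._tools`. Whether the Live API accepts a second setup message mid-session is a separate question; this only shows where the update would be wired in.

```python
# Hedged sketch of a possible fix (stub classes, hypothetical helper names):
# re-send the tool configuration when context-supplied tools differ from what
# was sent during connect.

class FakeContext:
    def __init__(self, tools):
        self.tools = tools

class FakeGeminiService:
    def __init__(self, tools=None):
        self._tools = tools
        self._context = None
        self.sent_events = []  # stands in for send_client_event()

    def _send_tool_config(self):
        # Hypothetical helper: build and send an event carrying the current
        # tool list, analogous to what _connect() does today.
        self.sent_events.append({"setup": {"tools": self._tools or []}})

    def connect(self):
        self._send_tool_config()

    def process_context_frame(self, context):
        if not self._context:
            self._context = context
            if context.tools and context.tools != self._tools:
                self._tools = context.tools
                # The proposed change: push the updated tools to Google
                # before creating the initial response.
                self._send_tool_config()

service = FakeGeminiService()
service.connect()
service.process_context_frame(FakeContext(tools=["get_weather"]))
print(service.sent_events[-1])  # the context-supplied tools now get sent
```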
