Commit
Handle propose timeouts the same as discuss and vote by ensuring a default response is returned in any error case (#24)

Addresses https://neon-ai.sentry.io/issues/6329886374/events/8fe379bc32ed45f3a98548234d96bae5/
NeonDaniel authored Feb 25, 2025
1 parent 2590c6f commit 2b0cc49
Showing 1 changed file with 5 additions and 2 deletions.
7 changes: 5 additions & 2 deletions neon_llm_core/rmq.py
@@ -200,13 +200,16 @@ def _handle_request_async(self, request: dict):
         history = request["history"]
         persona = request.get("persona", {})
         LOG.debug(f"Request persona={persona}|key={routing_key}")
+        # Default response if the model fails to respond
+        response = 'Sorry, but I cannot respond to your message at the '\
+                   'moment; please, try again later'
         try:
             response = self.model.ask(message=query, chat_history=history,
                                       persona=persona)
         except ValueError as err:
             LOG.error(f'ValueError={err}')
-            response = ('Sorry, but I cannot respond to your message at the '
-                        'moment, please try again later')
+        except Exception as e:
+            LOG.exception(e)
         api_response = LLMProposeResponse(message_id=message_id,
                                           response=response,
                                           routing_key=routing_key)
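The pattern the commit applies is to bind a fallback reply before the try block, so that any exception, not just ValueError, leaves `response` defined and the requester always gets an answer. Below is a minimal runnable sketch of that pattern; the `FlakyModel` stub and `handle_request` wrapper are hypothetical illustrations, not the project's API, and the real method goes on to wrap the result in an `LLMProposeResponse` and publish it over RMQ.

import logging

LOG = logging.getLogger(__name__)


class FlakyModel:
    """Hypothetical stand-in for the real LLM client, used only to make
    the sketch runnable; it fails the way a timed-out model might."""
    def ask(self, message, chat_history, persona):
        raise TimeoutError("model did not respond in time")


def handle_request(model, request: dict) -> str:
    """Return the model's answer, or the default reply on any failure."""
    query = request["query"]
    history = request.get("history", [])
    persona = request.get("persona", {})
    # Bind the fallback first so every error path leaves `response` defined
    response = ('Sorry, but I cannot respond to your message at the '
                'moment; please, try again later')
    try:
        response = model.ask(message=query, chat_history=history,
                             persona=persona)
    except ValueError as err:
        # Known validation failure: log the message only
        LOG.error(f'ValueError={err}')
    except Exception as e:
        # Anything else, timeouts included: log the full traceback
        LOG.exception(e)
    return response


print(handle_request(FlakyModel(), {"query": "hello"}))
# -> Sorry, but I cannot respond to your message at the moment; ...

Before this change, only ValueError produced the default reply, so any other exception (presumably the timeout captured in the linked Sentry issue) left the propose request unanswered; discuss and vote already handled errors this way, and the commit brings propose in line with them.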
