
Whenever you cancel a request, future ones no longer work - did not find tool_result block #125

Open
cryptax opened this issue Feb 17, 2025 · 1 comment


cryptax commented Feb 17, 2025

The flow is the following:

  • Ask r2ai something in auto mode.
  • r2ai requests authorization to execute a command. Cancel it with Ctrl-C because you don't want that command to run.
  • Ask r2ai another question in auto mode.
  • The question fails with an exception because the API does not find a `tool_result` block.
  • Resetting the conversation with -R does not help.
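For context, the Anthropic Messages API requires that every `tool_use` block in an assistant message be answered by a matching `tool_result` block at the beginning of the next user message. A minimal sketch (plain Python, no API calls; the message shapes follow the Anthropic docs, the helper function is mine, not r2ai's code) of why cancelling leaves the history invalid:

```python
# Sketch of the conversation state after a cancelled tool call.
# Message shapes follow the Anthropic Messages API; no network calls are made.

def find_unanswered_tool_uses(messages):
    """Return ids of tool_use blocks that are not answered by a matching
    tool_result block in the following message."""
    unanswered = []
    for i, msg in enumerate(messages):
        if msg["role"] != "assistant":
            continue
        for block in msg["content"]:
            if block.get("type") != "tool_use":
                continue
            nxt = messages[i + 1]["content"] if i + 1 < len(messages) else []
            answered = any(
                b.get("type") == "tool_result" and b.get("tool_use_id") == block["id"]
                for b in nxt
            )
            if not answered:
                unanswered.append(block["id"])
    return unanswered

history = [
    {"role": "user", "content": [{"type": "text", "text": "decompile 0x40c090"}]},
    {"role": "assistant", "content": [
        {"type": "tool_use", "id": "toolu_01", "name": "r2cmd",
         "input": {"command": "afl~0x40c"}},
    ]},
    # User hit Ctrl-C here: no tool_result was appended,
    # then the next question was added directly.
    {"role": "user", "content": [{"type": "text", "text": "decompile 0x40c094"}]},
]

print(find_unanswered_tool_uses(history))  # the cancelled call is unanswered
```

Any later completion request carrying this history gets rejected with the 400 shown below, regardless of the new question.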

Example

First request to AI:

[r2ai:0x00401ef0]> ' Please decompile function at 0x40c090. Please pay attention to "enckey"
                    
assistant
I'll help you decompile the function at 0x40c090. Let me first check the function using r2.
anthropic/claude-3-5-sonnet-20241022 | total: $0.0534720000 | run: $0.0534720000 | 1 / 100 | 32s / 32s
r2ai is going to execute the following command on the host
Want to edit? (ENTER to validate) pdf @ 0x40c090
This command will execute on this host: pdf @ 0x40c090. Agree? (y/N) y

> pdf @ 0x40c090


assistant


I notice there's no output for this address. Let me verify if this is a valid function address by listing functions around this area:
anthropic/claude-3-5-sonnet-20241022 | total: $0.1070610000 | run: $0.0535890000 | 2 / 100 | 7s / 40s
r2ai is going to execute the following command on the host
Want to edit? (ENTER to validate) afl~0x40c
Operation cancelled by user.

I cancelled afl~0x40c because I didn't think this command would help.

[r2ai:0x00401ef0]> : s 0x40c094; af read_incoming_socket

[r2ai:0x00401ef0]> ' Please decompile function at 0x40c094. Pay attention to enckey.
 ⠇ 0.1s

 ⠏ 0.2s[02/17/25 15:33:52] ERROR    r2ai - ERROR - Error getting completion: litellm.BadRequestError:               auto.py:402
                             AnthropicException -                                                                       
                             b'{"type":"error","error":{"type":"invalid_request_error","message":"messages.4            
                             : Did not find 1 `tool_result` block(s) at the beginning of this message.                  
                             Messages following `tool_use` blocks must begin with a matching number of                  
                             `tool_result` blocks."}}'                                                                  
Traceback (most recent call last):
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/llms/anthropic/chat/handler.py", line 105, in make_call
    response = await client.post(
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/llms/custom_httpx/http_handler.py", line 165, in post
    raise e
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/llms/custom_httpx/http_handler.py", line 125, in post
    response.raise_for_status()
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/httpx/_models.py", line 829, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://api.anthropic.com/v1/messages'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/main.py", line 481, in acompletion
    response = await init_response
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/llms/anthropic/chat/handler.py", line 226, in acompletion_stream_function
    completion_stream, headers = await make_call(
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/llms/anthropic/chat/handler.py", line 113, in make_call
    raise AnthropicError(
litellm.llms.anthropic.common_utils.AnthropicError: b'{"type":"error","error":{"type":"invalid_request_error","message":"messages.4: Did not find 1 `tool_result` block(s) at the beginning of this message. Messages following `tool_use` blocks must begin with a matching number of `tool_result` blocks."}}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/axelle/git/r2ai/r2ai/interpreter.py", line 324, in chat
    response = self.respond()
  File "/home/axelle/git/r2ai/r2ai/interpreter.py", line 401, in respond
    response = auto.chat(self)
  File "/home/axelle/git/r2ai/r2ai/auto.py", line 488, in chat
    return loop.run_until_complete(chat_auto.achat())
  File "/home/axelle/.pyenv/versions/3.10.14/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/axelle/git/r2ai/r2ai/auto.py", line 416, in achat
    response = await self.get_completion()
  File "/home/axelle/git/r2ai/r2ai/auto.py", line 391, in get_completion
    response = await self.attempt_completion()
  File "/home/axelle/git/r2ai/r2ai/auto.py", line 339, in attempt_completion
    compl = await acompletion(
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/utils.py", line 1175, in wrapper_async
    raise e
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/utils.py", line 1031, in wrapper_async
    result = await original_function(*args, **kwargs)
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/main.py", line 503, in acompletion
    raise exception_type(
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2136, in exception_type
    raise e
  File "/home/axelle/git/r2ai/venv/lib/python3.10/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 469, in exception_type
    raise BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: AnthropicException - b'{"type":"error","error":{"type":"invalid_request_error","message":"messages.4: Did not find 1 `tool_result` block(s) at the beginning of this message. Messages following `tool_use` blocks must begin with a matching number of `tool_result` blocks."}}'

Error.

Resetting the conversation with -R does not help either:

[r2ai:0x00401ef0]> -R
[r2ai:0x00401ef0]> ' Please decompile function at 0x40c094. Pay attention to enckey.
 ⠹ 0.2s
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

[02/17/25 15:36:25] ERROR    r2ai - ERROR - Error getting completion: litellm.BadRequestError:               auto.py:402
                             AnthropicException -                                                                       
                             b'{"type":"error","error":{"type":"invalid_request_error","message":"messages.4            
                             : Did not find 1 `tool_result` block(s) at the beginning of this message.                  
                             Messages following `tool_use` blocks must begin with a matching number of                  
                             `tool_result` blocks."}}'                    
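A possible fix, just a sketch under assumptions (the helper name and the shape of r2ai's internal message list are hypothetical, not the project's actual API): when the user cancels, append a synthetic error `tool_result` for each pending `tool_use` before the next completion request, so the history stays valid for the Anthropic API:

```python
def cancel_pending_tool_calls(messages, note="Operation cancelled by user."):
    """Append error tool_result blocks answering any tool_use blocks in the
    last assistant message, so the next API request is accepted.
    (Hypothetical helper; r2ai's real message store may differ.)"""
    if not messages or messages[-1]["role"] != "assistant":
        return messages
    results = [
        {"type": "tool_result", "tool_use_id": b["id"],
         "content": note, "is_error": True}
        for b in messages[-1]["content"] if b.get("type") == "tool_use"
    ]
    if results:
        # A user message beginning with the matching tool_result blocks
        # satisfies the API's pairing requirement.
        messages.append({"role": "user", "content": results})
    return messages
```

Calling something like this from the Ctrl-C handler (and from -R, which apparently does not clear this part of the state) should prevent the stuck-conversation behaviour.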

dnakov commented Feb 17, 2025

Thanks! I'll fix these tomorrow
