File generation with the run code tool #32

Open
ProjectMoon opened this issue Nov 5, 2024 · 4 comments

Comments

@ProjectMoon

Is your feature request related to a problem? Please describe.
The run code action seems to handle file generation perfectly, but the run code tool does not. There does seem to be some code in the tool to support this, though. Am I doing something wrong?

Describe the solution you'd like
When the run code tool runs, it generates files the same way as the action.

Describe alternatives you've considered
Using the action, which works.

@EtiennePerot
Owner

You're not doing anything wrong; the tool currently doesn't have the file-generation functionality that the action does. The rationale for leaving it out of the tool was that the tool is for the LLM itself to use, so any generated files would be irrelevant to the user. But it's true that there are use cases where that rationale does not hold, so this is a valid feature request.

@ProjectMoon
Author

Ah, I was basically using the tool the same way someone would use the action button: e.g. "Make me a QR code" and then have it output a QR code in the chat.

@EtiennePerot
Owner

EtiennePerot commented Nov 10, 2024

Makes sense. I think it is a different and valid use case, but I'm not sure how best to support it.
With Open WebUI's recent "code execution status" UI support, would it be satisfactory if files were shown in the code execution status dialog (which you have to click for), rather than in the chat?
I ask because showing things directly in the chat also makes them part of the conversation context, which means duplicated tokens in the tool's case (the tool's output is already effectively part of the LLM's system message), so displaying it in the chat as well wastes context window space.
Alternatively, perhaps this can be introduced as a tool parameter, exposed to the LLM either as a boolean ("show the code output to the user? y/n"), or as a separate tool name ("execute_code_internally" vs "execute_code_visible_to_user").
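For illustration only, here is a minimal sketch of the boolean-parameter idea, assuming the usual Open WebUI convention of a `Tools` class with type-hinted methods. The method name, the `show_output_to_user` parameter, and the placeholder execution are all hypothetical, not the actual run code tool's API:

```python
import asyncio


class Tools:
    """Hypothetical sketch of the boolean-parameter idea; not the real tool's API."""

    async def execute_code(self, code: str, show_output_to_user: bool = False) -> str:
        """
        Run the given Python code and return its output.

        :param code: The code to execute.
        :param show_output_to_user: If True, surface generated files to the user
            (e.g. in the code execution status dialog); if False, keep the
            result internal to the LLM's context only.
        """
        # Placeholder execution: a real implementation would run inside the
        # project's sandbox rather than a bare subprocess.
        proc = await asyncio.create_subprocess_exec(
            "python3", "-c", code,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.STDOUT,
        )
        out, _ = await proc.communicate()
        if show_output_to_user:
            # Here the tool would attach any files the code wrote to the chat or
            # status dialog; omitted because that API is project-specific.
            pass
        return out.decode()
```

The separate-tool-name variant ("execute_code_internally" vs "execute_code_visible_to_user") would simply be two such methods with the flag fixed, which may be easier for some models to pick between than a boolean parameter.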

@ProjectMoon
Copy link
Author

That would work, yes. I think it might even be better?

There could also perhaps be some integration with the artifacts feature? Though I'm not sure how much that applies or what it actually supports.
