
Improve tool UI and support nested thoughts #10226

Open · wants to merge 37 commits into base: main

Commits (37)
cc5c2cb
ungroup thoughts from messages
hannahblair Dec 18, 2024
773db7f
rename messagebox to thought
hannahblair Dec 18, 2024
2b8da48
refactor
hannahblair Dec 18, 2024
281ff2a
* add metadata typing
hannahblair Dec 19, 2024
915439e
tweaks
hannahblair Dec 19, 2024
134a7b7
tweak
hannahblair Dec 19, 2024
462f394
add changeset
gradio-pr-bot Dec 19, 2024
894d5bd
Merge branch 'thought-ui' of github.com:gradio-app/gradio into though…
hannahblair Dec 19, 2024
56fcac2
fix expanded rotation
hannahblair Dec 19, 2024
ea9eaf3
border radius
hannahblair Dec 19, 2024
1e731b7
update thought design
hannahblair Jan 6, 2025
bde1c69
move spinner
hannahblair Jan 6, 2025
983059a
Merge branch 'main' into thought-ui
hannahblair Jan 7, 2025
9aedecc
Merge branch 'main' into thought-ui
hannahblair Jan 7, 2025
8d2aa02
prevent circular reference
hannahblair Jan 7, 2025
653f66e
revert border removal
hannahblair Jan 7, 2025
220804c
css tweaks
hannahblair Jan 7, 2025
e40f69a
border tweak
hannahblair Jan 7, 2025
9d4e00a
move chevron to the left
hannahblair Jan 7, 2025
bd47db8
tweak nesting logic
hannahblair Jan 7, 2025
811429d
thought group spacing
hannahblair Jan 7, 2025
fa2caa4
update run.py
hannahblair Jan 7, 2025
1cdaf40
icon changes
abidlabs Jan 7, 2025
1d2129d
format
abidlabs Jan 7, 2025
cc9ef78
add changeset
gradio-pr-bot Jan 7, 2025
cc31a37
Merge branch 'main' into thought-ui
abidlabs Jan 7, 2025
259a8ea
add nested thought demo
abidlabs Jan 7, 2025
4e117eb
changes
abidlabs Jan 7, 2025
ebafced
changes
abidlabs Jan 7, 2025
b928677
changes
abidlabs Jan 7, 2025
bc1ba8c
add demo
abidlabs Jan 7, 2025
87d79b5
refactor styles and clean up logic
hannahblair Jan 7, 2025
8914bc5
Merge branch 'thought-ui' of github.com:gradio-app/gradio into though…
hannahblair Jan 8, 2025
863a794
revert demo change and add deeper nested thought to demo
hannahblair Jan 8, 2025
0469e48
add optional duration to message types
hannahblair Jan 8, 2025
340d980
add nested thoughts story
hannahblair Jan 8, 2025
39e3e0c
format
hannahblair Jan 8, 2025
7 changes: 7 additions & 0 deletions .changeset/wise-moose-exist.md
@@ -0,0 +1,7 @@
---
"@gradio/chatbot": minor
"@gradio/icons": minor
"gradio": minor
---

feat:Improve tool UI and support nested thoughts
1 change: 1 addition & 0 deletions demo/chatbot_nested_thoughts/run.ipynb
79 changes: 79 additions & 0 deletions demo/chatbot_nested_thoughts/run.py
@@ -0,0 +1,79 @@
import gradio as gr
from gradio import ChatMessage
import time

sleep_time = 0.1
long_sleep_time = 1

def generate_response(history):
    history.append(
        ChatMessage(
            role="user", content="What is the weather in San Francisco right now?"
        )
    )
    yield history
    time.sleep(sleep_time)
    history.append(
        ChatMessage(
            role="assistant",
            content="In order to find the current weather in San Francisco, I will need to use my weather tool.",
        )
    )
    yield history
    time.sleep(sleep_time)
    history.append(
        ChatMessage(
            role="assistant",
            content="",
            metadata={"title": "Gathering Weather Websites", "id": 1},
        )
    )
    yield history
    time.sleep(long_sleep_time)
    history[-1].content = "Will check: weather.com and sunny.org"
    yield history
    time.sleep(sleep_time)
    history.append(
        ChatMessage(
            role="assistant",
            content="Received weather from weather.com.",
            metadata={"title": "API Success ✅", "parent_id": 1, "id": 2},
        )
    )
    yield history
    time.sleep(sleep_time)
    history.append(
        ChatMessage(
            role="assistant",
            content="API Error when connecting to sunny.org.",
            metadata={"title": "API Error 💥 ", "parent_id": 1, "id": 3},
        )
    )
    yield history
    time.sleep(sleep_time)

    history.append(
        ChatMessage(
            role="assistant",
            content="I will try yet again",
            metadata={"title": "I will try again", "id": 4, "parent_id": 3},
        )
    )
    yield history

    time.sleep(sleep_time)
    history.append(
        ChatMessage(
            role="assistant",
            content="Failed again",
            metadata={"title": "Failed again", "id": 6, "parent_id": 4},
        )
    )
    yield history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(type="messages", height=500, show_copy_button=True)
    demo.load(generate_response, chatbot, chatbot)

if __name__ == "__main__":
    demo.launch()
1 change: 1 addition & 0 deletions demo/chatbot_thoughts/run.ipynb
71 changes: 71 additions & 0 deletions demo/chatbot_thoughts/run.py
@@ -0,0 +1,71 @@
import gradio as gr
from gradio import ChatMessage
import time

def simulate_thinking_chat(message: str, history: list):
    """Mimic a thinking process followed by a response"""
    # Add an initial empty thinking message to the chat history
    history.append(
        ChatMessage(
            role="assistant",
            content="",  # Initially empty content
            metadata={"title": "Thinking Process 💭"}  # Sets the thinking header
        )
    )
    time.sleep(0.5)
    yield history  # Return the current state of the chat history

    # Define the thoughts that the LLM will "think" through
    thoughts = [
        "First, I need to understand the core aspects of the query...",
        "Now, considering the broader context and implications...",
        "Analyzing potential approaches to formulate a comprehensive answer...",
        "Finally, structuring the response for clarity and completeness..."
    ]

    # Variable to store all thoughts as they accumulate
    accumulated_thoughts = ""

    # Loop through each thought
    for thought in thoughts:
        time.sleep(0.5)  # Add a small delay for realism

        # Add the new thought as a markdown bullet point
        accumulated_thoughts += f"- {thought}\n\n"  # \n\n creates line breaks

        # Update the thinking message with all thoughts so far
        history[-1] = ChatMessage(  # Updates the last message in history
            role="assistant",
            content=accumulated_thoughts.strip(),  # Remove extra whitespace
            metadata={"title": "💭 Thinking Process"}  # Shows the thinking header
        )
        yield history  # Return the updated chat history

    # After thinking is complete, add the final response
    history.append(
        ChatMessage(
            role="assistant",
            content="Based on my thoughts and analysis above, my response is: This dummy example shows how the thoughts of a thinking LLM can be progressively shown before providing its final answer."
        )
    )
    yield history  # Return the final state of the chat history

# Gradio Blocks with gr.Chatbot
with gr.Blocks() as demo:
    gr.Markdown("# Thinking LLM Demo 🤔")
    chatbot = gr.Chatbot(type="messages", render_markdown=True)
    msg = gr.Textbox(placeholder="Type your message...")

    msg.submit(
        lambda m, h: (m, h + [ChatMessage(role="user", content=m)]),
        [msg, chatbot],
        [msg, chatbot]
    ).then(
        simulate_thinking_chat,
        [msg, chatbot],
        chatbot
    )

if __name__ == "__main__":
    demo.launch()
8 changes: 8 additions & 0 deletions gradio/components/chatbot.py
@@ -35,6 +35,8 @@

class MetadataDict(TypedDict):
    title: Union[str, None]
    id: NotRequired[int | str]
    parent_id: NotRequired[int | str]


class Option(TypedDict):
@@ -57,6 +59,7 @@ class MessageDict(TypedDict):
    role: Literal["user", "assistant", "system"]
    metadata: NotRequired[MetadataDict]
    options: NotRequired[list[Option]]
    duration: NotRequired[int]


class FileMessage(GradioModel):
@@ -82,13 +85,16 @@ class ChatbotDataTuples(GradioRootModel):

class Metadata(GradioModel):
    title: Optional[str] = None
    id: Optional[int | str] = None
    parent_id: Optional[int | str] = None


class Message(GradioModel):
    role: str
    metadata: Metadata = Field(default_factory=Metadata)
    content: Union[str, FileMessage, ComponentMessage]
    options: Optional[list[Option]] = None
    duration: Optional[int] = None


class ExampleMessage(TypedDict):
@@ -110,6 +116,7 @@ class ChatMessage:
    content: str | FileData | Component | FileDataDict | tuple | list
    metadata: MetadataDict | Metadata = field(default_factory=Metadata)
    options: Optional[list[Option]] = None
    duration: Optional[int] = None


class ChatbotDataMessages(GradioRootModel):
@@ -538,6 +545,7 @@ def _postprocess_message_messages(
            content=message.content,  # type: ignore
            metadata=message.metadata,  # type: ignore
            options=message.options,
            duration=message.duration,
        )
    elif isinstance(message, Message):
        return message
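For context, the new `id`/`parent_id` metadata keys and the top-level `duration` field introduced in this diff compose as follows. This is a minimal sketch in the `type="messages"` dict format; the titles, contents, and values below are illustrative, not taken from the PR:

```python
# Parent "thought": `id` labels it; `duration` (in seconds) can be
# displayed alongside the thought title once the step finishes.
parent = {
    "role": "assistant",
    "content": "Checking weather.com...",
    "metadata": {"title": "Gathering Weather Websites", "id": 1},
    "duration": 2,
}

# Child thought: `parent_id` refers to the parent's `id`, so the UI
# renders it nested one level under the parent thought.
child = {
    "role": "assistant",
    "content": "Received weather from weather.com.",
    "metadata": {"title": "API Success", "id": 2, "parent_id": 1},
}

messages = [parent, child]
print(messages[1]["metadata"]["parent_id"])  # → 1
```

A list like this can be passed as the `value` of `gr.Chatbot(type="messages")`, analogous to what the demos in this PR do with `ChatMessage` dataclasses.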
67 changes: 64 additions & 3 deletions js/chatbot/Chatbot.stories.svelte
@@ -431,12 +431,73 @@ This document is a showcase of various Markdown capabilities.`,
 		value: [
 			{
 				role: "user",
-				content: "What is the weather like today?"
+				content: "What is 27 * 14?",
+				duration: 0.1
 			},
 			{
 				role: "assistant",
-				metadata: { title: "☀️ Using Weather Tool" },
-				content: "Weather looks sunny today."
+				duration: 10,
+				content: "Let me break this down step by step.",
+				metadata: {
+					id: 1,
+					title: "Solving multiplication",
+					parent_id: 0
+				}
+			},
+			{
+				role: "assistant",
+				content: "First, let's multiply 27 by 10: 27 * 10 = 270",
+				metadata: {
+					id: 2,
+					title: "Step 1",
+					parent_id: 1
+				}
+			},
+			{
+				role: "assistant",
+				content:
+					"We can do this quickly because multiplying by 10 just adds a zero",
+				metadata: {
+					id: 6,
+					title: "Quick Tip",
+					parent_id: 2
+				}
+			},
+			{
+				role: "assistant",
+				content: "Then multiply 27 by 4: 27 * 4 = 108",
+				metadata: {
+					id: 3,
+					title: "Step 2",
+					parent_id: 1
+				}
+			},
+			{
+				role: "assistant",
+				content:
+					"Adding these together: 270 + 108 = 378. Therefore, 27 * 14 = 378"
+			},
+			{
+				role: "assistant",
+				content: "Let me verify this result using a different method.",
+				metadata: {
+					id: 4,
+					title: "Verification",
+					parent_id: 0
+				}
+			},
+			{
+				role: "assistant",
+				content: "Using the standard algorithm: 27 * 14 = (20 + 7) * (10 + 4)",
+				metadata: {
+					id: 5,
+					title: "Expanding",
+					parent_id: 4
+				}
+			},
+			{
+				role: "assistant",
+				content: "The result is confirmed to be 378."
 			}
 		]
 	}}
7 changes: 0 additions & 7 deletions js/chatbot/shared/ChatBot.svelte
@@ -395,13 +395,6 @@
 		}
 	}
-	.message-wrap {
-		display: flex;
-		flex-direction: column;
-		justify-content: space-between;
-		margin-bottom: var(--spacing-xxl);
-	}
-
 	.message-wrap :global(.prose.chatbot.md) {
 		opacity: 0.8;
 		overflow-wrap: break-word;