From 19bddfc115e2798c20cedc5d6ff2e5c7732e3543 Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Wed, 7 Feb 2024 13:10:03 +0530
Subject: [PATCH 01/17] Add react_gemini_prompting.ipynb
---
site/en/docs/react_gemini_prompting.ipynb | 1090 +++++++++++++++++++++
1 file changed, 1090 insertions(+)
create mode 100644 site/en/docs/react_gemini_prompting.ipynb
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
new file mode 100644
index 000000000..9a9fafabe
--- /dev/null
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -0,0 +1,1090 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "T85zXWw3Zs05"
+ },
+ "source": [
+ "##### Copyright 2024 Google LLC."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "X4uPZ83DbUTq"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+ "# you may not use this file except in compliance with the License.\n",
+ "# You may obtain a copy of the License at\n",
+ "#\n",
+ "# https://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing, software\n",
+ "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+ "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+ "# See the License for the specific language governing permissions and\n",
+ "# limitations under the License."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "vX-FA27MbYpQ"
+ },
+ "source": [
+ "# ReAct + Gemini: A prompting method for demonstrating reasoning & acting in LLMs"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Pk4Y-PKWc3MU"
+ },
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "sdkuZY1IdRal"
+ },
+ "source": [
+ "This notebook is a minimal implementation of [Yao, S. et al.](https://arxiv.org/abs/2210.03629) with the Google `gemini-pro` model.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "PSr-BK-5meRo"
+ },
+ "source": [
+ "This notebook demonstrates the use of `gemini-pro` to generate reasoning traces and task-specific actions by leveraging a **Few-shot ReAct Prompt**. In this walkthrough, you will learn how to:\n",
+ "\n",
+ "\n",
+ "1. Set up your development environment and API access to use Gemini.\n",
+ "2. Prompt Gemini with ReAct.\n",
+ "3. Use the newly prompted model for multi-turn conversations (chat).\n",
+ "4. Understand how ReAct overcomes hallucination and error propagation by seeking external ground truth via the Wikipedia API.\n",
+ "5. Have conversations with the deployed **ReAct-Gemini chat 🤖**.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "lSkx3VHr3WYb"
+ },
+ "source": [
+ "### Background"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NemuU6yB4BWY"
+ },
+ "source": [
+ "#### What(s)\n",
+ "\n",
+ "\n",
+ "1. What is ReAct?\n",
+ "\n",
+ "\n",
+ "> ReAct is a prompting method that lets language models show the trace of the thinking steps involved in answering a user's query. This improves human interpretability and trustworthiness. ReAct-prompted models generate a **Thought-Action-Observation** triplet for every iteration.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ULalraVs6_Cb"
+ },
+ "source": [
+ "#### Why(s)\n",
+ "\n",
+ "\n",
+ "\n",
+ "1. Why use ReAct when you can simply instruct the model to \"explain step-by-step\"?\n",
+ "> Large language models (LLMs) generate responses by extrapolating from the prompts they are given, using only their internal knowledge representation. In short, they are oblivious to external or updated information, which can make them hallucinate (generate wrong or unreal answers).\n",
+ "\n",
+ " - ReAct reduces hallucination by requiring the model to seek factual information strictly from the external environment it is given. This gives the model access to up-to-date ground truth.\n",
+ " - A ReAct-prompted model uses its internal knowledge only to analyze **(THOUGHT)** and make informed decisions **(ACTION)** about the external knowledge sources **(OBSERVATION)**.\n",
+ " - Internal knowledge is not used to retrieve factual information.\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7gs5VfqZQj8y"
+ },
+ "source": [
+ "#### How(s)\n",
+ "\n",
+ "1. How does it work?\n",
+ "\n",
+ " ##### How a ReAct-prompted Gemini model works\n",
+ " 1. The model receives a query from the user.\n",
+ " 2. The model analyzes the query together with the previous chat history (if any), which maintains a continuous context link.\n",
+ " 3. The model uses its internal knowledge to decide which of the following actions to take:\n",
+ " - Search[entity]: By analyzing the query, the model decides which `entity` to query the Wikipedia API for.\n",
+ " - Lookup[phrase]: The model analyzes the content of the Wikipedia page returned by the Search[entity] action and decides which specific piece of information, or `phrase`, to retrieve from that page to answer the user's question.\n",
+ " - Finish[answer]: Return the `answer` to the user.\n",
+ " 4. Note that factual information is strictly retrieved from external sources, while the decision making, parameter choosing, and function calling are generated from the model's internal knowledge and context.\n",
+ " 5. Step 3 is repeated until the model finds the answer.\n",
+ " \n"
+ ]
+ },
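+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The loop described above can be sketched in a few lines of plain Python. The cell below is purely illustrative and is not part of the pipeline built later in this notebook: `fake_model` is a scripted stand-in for Gemini, and the `tools` dictionary is a stand-in for the Wikipedia API.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def react_loop(model, tools, max_calls=8):\n",
+ "    \"\"\"Runs Thought-Action-Observation iterations until the model finishes.\"\"\"\n",
+ "    for _ in range(max_calls):\n",
+ "        action, argument = next(model)  # the model decides the next action\n",
+ "        if action == 'finish':\n",
+ "            return argument  # answer found: stop the loop\n",
+ "        # factual information comes only from the external environment\n",
+ "        observation = tools[action](argument)\n",
+ "        print(f'Observation: {observation}')\n",
+ "\n",
+ "# A scripted stand-in for the model: one (action, argument) pair per turn.\n",
+ "fake_model = iter([\n",
+ "    ('search', 'High Plains'),\n",
+ "    ('lookup', 'elevation'),\n",
+ "    ('finish', '1,800 to 7,000 ft'),\n",
+ "])\n",
+ "\n",
+ "# A toy external environment in place of the Wikipedia API.\n",
+ "tools = {\n",
+ "    'search': lambda q: f'Summary of the {q} page.',\n",
+ "    'lookup': lambda p: f'Sentence containing \"{p}\".',\n",
+ "}\n",
+ "\n",
+ "print('Answer:', react_loop(fake_model, tools))"
+ ]
+ },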
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ZHbdlviLTtlx"
+ },
+ "source": [
+ "Now that you understand the background, you can dive into the technical implementation."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "80bo5tjXtb_X"
+ },
+ "source": [
+ "## Prerequisites\n",
+ "\n",
+ "You can run this quickstart in [Google Colab](https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/docs/react_gemini_prompting.ipynb), which runs this notebook directly in the browser and does not require additional environment configuration.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "cVvxnBG-thZG"
+ },
+ "source": [
+ "## Setup\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Xq6NOA99tiHK"
+ },
+ "source": [
+ "### Install the Python SDK\n",
+ "\n",
+ "The Python SDK for the Gemini API is contained in the [`google-generativeai`](https://pypi.org/project/google-generativeai/) package. Install the dependency using pip.\n",
+ "\n",
+ "You will also need to install the `wikipedia` Python package.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "Twc_XZ7h7Bb4",
+ "outputId": "4f86e9e7-7012-417e-af79-03f570953de0"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ " Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
+ " Building wheel for wikipedia (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
+ ]
+ }
+ ],
+ "source": [
+ "!pip install -q google-generativeai\n",
+ "!pip install -q wikipedia"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "vqv5MnQUuBZJ"
+ },
+ "source": [
+ "### Import packages"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "qS5HJk_VuCup"
+ },
+ "source": [
+ "Import the necessary packages."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Jz5HOLy47VX0"
+ },
+ "outputs": [],
+ "source": [
+ "from typing import Any, Dict, Optional\n",
+ "from contextlib import contextmanager\n",
+ "import re\n",
+ "import os\n",
+ "\n",
+ "import wikipedia\n",
+ "from wikipedia.exceptions import DisambiguationError, PageError\n",
+ "\n",
+ "import google.generativeai as genai\n",
+ "from google.colab import userdata"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4xsPDHz_uSYH"
+ },
+ "source": [
+ "### Set up your API key\n",
+ "\n",
+ "Before you can use the Gemini API, you must first obtain an API key. If you don't already have one, create a key with one click in Google AI Studio.\n",
+ "\n",
+ "[Get an API key](https://makersuite.google.com/app/apikey)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3QC1DUOxuWDS"
+ },
+ "source": [
+ "In Colab, add the key to the secrets manager under the \"🔑\" in the left panel. Give it the name `GOOGLE_API_KEY`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "SAvjxTybuWw-"
+ },
+ "source": [
+ "Once you have the API key, pass it to the SDK. You can do this in two ways:\n",
+ "\n",
+ "* Put the key in the `GOOGLE_API_KEY` environment variable (the SDK will automatically pick it up from there).\n",
+ "* Pass the key to `genai.configure(api_key=...)`\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "JAzIedGr9PdN"
+ },
+ "outputs": [],
+ "source": [
+ "# Or use `os.getenv('GOOGLE_API_KEY')` to fetch an environment variable.\n",
+ "GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n",
+ "\n",
+ "genai.configure(api_key=GOOGLE_API_KEY)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Sqkwp87FumIp"
+ },
+ "source": [
+ "## The ReAct prompt"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "lLv9Kuuu5Ffs"
+ },
+ "source": [
+ "The original prompts used in the paper are available at [https://github.com/ysymyth/ReAct/tree/master/prompts](https://github.com/ysymyth/ReAct/tree/master/prompts).\n",
+ "\n",
+ "Here, you will work with the following ReAct prompt, with a few minor adjustments."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "g8klL8df4iXe"
+ },
+ "outputs": [],
+ "source": [
+ "model_instructions = \"\"\"Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, Observation is gauging relevant information from an Action's output and Action can be of three types:\n",
+ "(1) <search>entity</search>, which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search.\n",
+ "(2) <lookup>keyword</lookup>, which returns the next sentence containing the keyword in the current passage.\n",
+ "(3) <finish>answer</finish>, which returns the answer and finishes the task.\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Fw52CHAG0aRr"
+ },
+ "source": [
+ "### Few-shot prompting to enable in-context learning with Gemini\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "-jhaD4ChNv6M"
+ },
+ "source": [
+ "While large language models show a remarkable understanding of the instructions they are prompted with, they may still perform poorly on complex tasks in a zero-shot setting. Hence, you will now provide a few examples along with the prompt to steer the model's output according to your needs. This **in-context learning** improves the model's performance significantly."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "tZ7vezr02qv0"
+ },
+ "outputs": [],
+ "source": [
+ "examples = \"\"\"\n",
+ "Here are some examples.\n",
+ "\n",
+ "Question\n",
+ "What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?\n",
+ "\n",
+ "Thought 1\n",
+ "I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of the area.\n",
+ "\n",
+ "Action 1\n",
+ "<search>Colorado orogeny</search>\n",
+ "\n",
+ "Observation 1\n",
+ "The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.\n",
+ "\n",
+ "Thought 2\n",
+ "It does not mention the eastern sector. So I need to look up eastern sector.\n",
+ "\n",
+ "Action 2\n",
+ "<lookup>eastern sector</lookup>\n",
+ "\n",
+ "Observation 2\n",
+ "The eastern sector extends into the High Plains and is called the Central Plains orogeny.\n",
+ "\n",
+ "Thought 3\n",
+ "The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.\n",
+ "\n",
+ "Action 3\n",
+ "<search>High Plains</search>\n",
+ "\n",
+ "Observation 3\n",
+ "High Plains refers to one of two distinct land regions\n",
+ "\n",
+ "Thought 4\n",
+ "I need to instead search High Plains (United States).\n",
+ "\n",
+ "Action 4\n",
+ "<search>High Plains (United States)</search>\n",
+ "\n",
+ "Observation 4\n",
+ "The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130m).\n",
+ "\n",
+ "Thought 5\n",
+ "High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.\n",
+ "\n",
+ "Action 5\n",
+ "<finish>1,800 to 7,000 ft</finish>\n",
+ "\n",
+ "Question\n",
+ "Musician and satirist Allie Goertz wrote a song about the \"The Simpsons\" character Milhouse, who Matt Groening named after who?\n",
+ "\n",
+ "Thought 1\n",
+ "The question simplifies to \"The Simpsons\" character Milhouse is named after who. I only need to search Milhouse and find who it is named after.\n",
+ "\n",
+ "Action 1\n",
+ "<search>Milhouse</search>\n",
+ "\n",
+ "Observation 1\n",
+ "Milhouse Mussolini Van Houten is a recurring character in the Fox animated television series The Simpsons voiced by Pamela Hayden and created by Matt Groening.\n",
+ "\n",
+ "Thought 2\n",
+ "The paragraph does not tell who Milhouse is named after, maybe I can look up \"named after\".\n",
+ "\n",
+ "Action 2\n",
+ "<lookup>named after</lookup>\n",
+ "\n",
+ "Observation 2\n",
+ "Milhouse was named after U.S. president Richard Nixon, whose middle name was Milhous.\n",
+ "\n",
+ "Thought 3\n",
+ "Milhouse was named after U.S. president Richard Nixon, so the answer is Richard Nixon.\n",
+ "\n",
+ "Action 3\n",
+ "<finish>Richard Nixon</finish>\n",
+ "\n",
+ "Question\n",
+ "Which documentary is about Finnish rock groups, Adam Clayton Powell or The Saimaa Gesture?\n",
+ "\n",
+ "Thought 1\n",
+ "I need to search Adam Clayton Powell and The Saimaa Gesture, and find which documentary is about Finnish rock groups.\n",
+ "\n",
+ "Action 1\n",
+ "<search>Adam Clayton Powell</search>\n",
+ "\n",
+ "Observation 1\n",
+ "Could not find [Adam Clayton Powell]. Similar: [’Adam Clayton Powell III’, ’Seventh Avenue (Manhattan)’, ’Adam Clayton Powell Jr. State Office Building’, ’Isabel Washington Powell’, ’Adam Powell’, ’Adam Clayton Powell (film)’, ’Giancarlo Esposito’].\n",
+ "\n",
+ "Thought 2\n",
+ "To find the documentary, I can search Adam Clayton Powell (film).\n",
+ "\n",
+ "Action 2\n",
+ "<search>Adam Clayton Powell (film)</search>\n",
+ "\n",
+ "Observation 2\n",
+ "Adam Clayton Powell is a 1989 American documentary film directed by Richard Kilberg. The film is about the rise and fall of influential African-American politician Adam Clayton Powell Jr.[3][4] It was later aired as part of the PBS series The American Experience.\n",
+ "\n",
+ "Thought 3\n",
+ "Adam Clayton Powell (film) is a documentary about an African-American politician, not Finnish rock groups. So the documentary about Finnish rock groups must instead be The Saimaa Gesture.\n",
+ "\n",
+ "Action 3\n",
+ "<finish>The Saimaa Gesture</finish>\n",
+ "\n",
+ "Question\n",
+ "What profession does Nicholas Ray and Elia Kazan have in common?\n",
+ "\n",
+ "Thought 1\n",
+ "I need to search Nicholas Ray and Elia Kazan, find their professions, then find the profession they have in common.\n",
+ "\n",
+ "Action 1\n",
+ "<search>Nicholas Ray</search>\n",
+ "\n",
+ "Observation 1\n",
+ "Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16, 1979) was an American film director, screenwriter, and actor best known for the 1955 film Rebel Without a Cause.\n",
+ "\n",
+ "Thought 2\n",
+ "Professions of Nicholas Ray are director, screenwriter, and actor. I need to search Elia Kazan next and find his professions.\n",
+ "\n",
+ "Action 2\n",
+ "<search>Elia Kazan</search>\n",
+ "\n",
+ "Observation 2\n",
+ "Elia Kazan was an American film and theatre director, producer, screenwriter and actor.\n",
+ "\n",
+ "Thought 3\n",
+ "Professions of Elia Kazan are director, producer, screenwriter, and actor. So profession Nicholas Ray and Elia Kazan have in common is director, screenwriter, and actor.\n",
+ "\n",
+ "Action 3\n",
+ "<finish>director, screenwriter, actor</finish>\n",
+ "\n",
+ "Question\n",
+ "Which magazine was started first Arthur’s Magazine or First for Women?\n",
+ "\n",
+ "Thought 1\n",
+ "I need to search Arthur’s Magazine and First for Women, and find which was started first.\n",
+ "\n",
+ "Action 1\n",
+ "<search>Arthur’s Magazine</search>\n",
+ "\n",
+ "Observation 1\n",
+ "Arthur’s Magazine (1844-1846) was an American literary periodical published in Philadelphia in the 19th century.\n",
+ "\n",
+ "Thought 2\n",
+ "Arthur’s Magazine was started in 1844. I need to search First for Women next.\n",
+ "\n",
+ "Action 2\n",
+ "<search>First for Women</search>\n",
+ "\n",
+ "Observation 2\n",
+ "First for Women is a woman’s magazine published by Bauer Media Group in the USA.[1] The magazine was started in 1989.\n",
+ "\n",
+ "Thought 3\n",
+ "First for Women was started in 1989. 1844 (Arthur’s Magazine) < 1989 (First for Women), so Arthur’s Magazine was started first.\n",
+ "\n",
+ "Action 3\n",
+ "<finish>Arthur’s Magazine</finish>\n",
+ "\n",
+ "Question\n",
+ "Were Pavel Urysohn and Leonid Levin known for the same type of work?\n",
+ "\n",
+ "Thought 1\n",
+ "I need to search Pavel Urysohn and Leonid Levin, find their types of work, then find if they are the same.\n",
+ "\n",
+ "Action 1\n",
+ "<search>Pavel Urysohn</search>\n",
+ "\n",
+ "Observation 1\n",
+ "Pavel Samuilovich Urysohn (February 3, 1898 - August 17, 1924) was a Soviet mathematician who is best known for his contributions in dimension theory.\n",
+ "\n",
+ "Thought 2\n",
+ "Pavel Urysohn is a mathematician. I need to search Leonid Levin next and find its type of work.\n",
+ "\n",
+ "Action 2\n",
+ "<search>Leonid Levin</search>\n",
+ "\n",
+ "Observation 2\n",
+ "Leonid Anatolievich Levin is a Soviet-American mathematician and computer scientist.\n",
+ "\n",
+ "Thought 3\n",
+ "Leonid Levin is a mathematician and computer scientist. So Pavel Urysohn and Leonid Levin have the same type of work.\n",
+ "\n",
+ "Action 3\n",
+ "<finish>yes</finish>\n",
+ "\n",
+ "Question\n",
+ "{question}\"\"\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xeCImqiN3WiQ"
+ },
+ "source": [
+ "Write the instructions along with the examples to a file called `model_instructions.txt`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ZyTfAdpk26oB"
+ },
+ "outputs": [],
+ "source": [
+ "ReAct_prompt = model_instructions + examples\n",
+ "with open('model_instructions.txt', 'w') as f:\n",
+ " f.write(ReAct_prompt)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Is8BIVQP3u95"
+ },
+ "source": [
+ "## The Gemini-ReAct pipeline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "PqEwKVDgM1MF"
+ },
+ "source": [
+ "### Setup"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "T4M3lxEoM3k0"
+ },
+ "source": [
+ "The main objective now is to build an end-to-end pipeline to facilitate multi-turn chat with the ReAct-prompted Gemini model.\n",
+ "\n",
+ "\n",
+ "First, initialize and establish a chat session with the model. This sets up a foundation that you can easily build on. In addition to the prompt and the chat session, you will keep track of the model's search history (more on this later)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "vssDZcroN-Ob"
+ },
+ "outputs": [],
+ "source": [
+ "class ReAct:\n",
+ " def __init__(self, model: str, ReAct_prompt: str | os.PathLike):\n",
+ " \"\"\"Prepares Gemini to follow a `Few-shot ReAct prompt` by imitating\n",
+ " `function calling` technique to generate both reasoning traces and\n",
+ " task-specific actions in an interleaved manner.\n",
+ "\n",
+ " Args:\n",
+ " model (str): name of the model\n",
+ " ReAct_prompt (str): ReAct prompt OR path to the ReAct prompt\n",
+ " \"\"\"\n",
+ " self.model = genai.GenerativeModel(model)\n",
+ " self.chat = self.model.start_chat(history=[])\n",
+ " self._search_history = []\n",
+ "\n",
+ " try:\n",
+ " # try to read the file\n",
+ " with open(ReAct_prompt, 'r') as f:\n",
+ " self._prompt = f.read()\n",
+ " except FileNotFoundError:\n",
+ " # assume that the parameter represents prompt itself rather than path to the prompt file.\n",
+ " self._prompt = ReAct_prompt\n",
+ "\n",
+ " @property\n",
+ " def prompt(self):\n",
+ " return self._prompt\n",
+ "\n",
+ " @classmethod\n",
+ " def add_method(cls, func):\n",
+ " setattr(cls, func.__name__, func)\n",
+ "\n",
+ " @staticmethod\n",
+ " def clean(text: str):\n",
+ " \"\"\"Helper function that removes newlines from responses.\"\"\"\n",
+ " text = text.replace(\"\\n\", \" \")\n",
+ " return text"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xKfThpmhMZYZ"
+ },
+ "source": [
+ "### Function(tools) definitions\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dnvZ2jqdRHE1"
+ },
+ "source": [
+ "As instructed by the prompt, the model will generate **Thought-Action-Observation** traces, where every **Action** trace contains one of the following tokens:\n",
+ "\n",
+ "\n",
+ "1. `<search>`: Perform a Wikipedia search via an external API.\n",
+ "2. `<lookup>`: Look up specific information on a page with the Wikipedia API.\n",
+ "3. `<finish>`: Stop the execution of the model and return the answer.\n",
+ "\n",
+ "In short, if the model encounters any of these tokens, it should make use of the `tools` made available to it. This ability of the model to leverage its tools to collect information from the external world is often referred to as **function calling**. Therefore, the next goal is to imitate this function calling technique so that the ReAct-prompted Gemini model can access the external ground truth.\n",
+ "\n",
+ "The Gemini API supports function calling, and you could use that feature to set up your tools. However, in this tutorial you will learn to simulate it using the `stop_sequences` parameter instead.\n",
+ "\n",
+ "\n",
+ "Define the tools:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ysHN4y4FPlJZ"
+ },
+ "source": [
+ "#### Search\n",
+ "Used to perform Wikipedia searches."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "yCRB4g4BNzak"
+ },
+ "outputs": [],
+ "source": [
+ "@ReAct.add_method\n",
+ "def search(self, query: str):\n",
+ " \"\"\"Performs a search for `query` via the Wikipedia API and returns its summary.\n",
+ "\n",
+ " Args:\n",
+ " query (str): Search parameter to query the wikipedia API with.\n",
+ "\n",
+ " Returns:\n",
+ " observation: Summary of wikipedia search for `query` if found else\n",
+ " similar search results\n",
+ " \"\"\"\n",
+ " try:\n",
+ " # try to get the summary for requested `query` from the wikipedia\n",
+ " observation = wikipedia.summary(query, sentences=4)\n",
+ " observation = self.clean(observation)\n",
+ "\n",
+ " # if successful, return the first 2-3 sentences from the summary as model's context\n",
+ " observation = self.model.generate_content(f'Return the first 2 or 3 \\\n",
+ " sentences from the following text: {observation}')\n",
+ " observation = observation.text\n",
+ "\n",
+ " # keep track of the model's search history\n",
+ " self._search_history.append(query)\n",
+ "\n",
+ " # if the page is ambiguous/does not exist, return similar search phrases for model's context\n",
+ " except (DisambiguationError, PageError) as e:\n",
+ " search_results = wikipedia.search(query)\n",
+ " observation = f'Could not find [\"{query}\"]. Similar: {search_results}'\n",
+ "\n",
+ " finally:\n",
+ " return observation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "v3fUbHUsPyoF"
+ },
+ "source": [
+ "#### Lookup\n",
+ "Look for a specific phrase on the Wikipedia page."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "_F4kAF77O0E_"
+ },
+ "outputs": [],
+ "source": [
+ "@ReAct.add_method\n",
+ "def lookup(self, phrase: str, context_length=200):\n",
+ " \"\"\"Searches for the `phrase` in the latest Wikipedia search page\n",
+ " and returns surrounding context, whose size is controlled by the\n",
+ " `context_length` parameter.\n",
+ "\n",
+ " Args:\n",
+ " phrase (str): Lookup phrase to search for within the page. Generally\n",
+ " refers to some specific detail of a topic.\n",
+ "\n",
+ " context_length (int) default=200: Number of characters of context\n",
+ " to include on each side of the phrase.\n",
+ "\n",
+ " Returns:\n",
+ " result (str): Context related to the `phrase` within the page.\n",
+ " \"\"\"\n",
+ " # get the last searched wikipedia page and find `phrase` in it.\n",
+ " page = wikipedia.page(self._search_history[-1], auto_suggest=False).content\n",
+ " page = self.clean(page)\n",
+ " start_index = page.find(phrase)\n",
+ "\n",
+ " # extract sentences considering the context length defined\n",
+ " result = page[max(0, start_index - context_length):start_index+len(phrase)+context_length]\n",
+ " return result"
+ ]
+ },
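+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The slicing arithmetic used by `lookup` can be seen in isolation on a toy string. The cell below is illustrative only; the `window` helper is not used elsewhere in this notebook.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def window(text: str, phrase: str, context_length: int = 10) -> str:\n",
+ "    \"\"\"Returns `phrase` plus up to `context_length` characters on each side.\"\"\"\n",
+ "    start = text.find(phrase)  # note: str.find returns -1 if the phrase is absent\n",
+ "    return text[max(0, start - context_length):start + len(phrase) + context_length]\n",
+ "\n",
+ "print(window('The High Plains rise in elevation from around 1,800 to 7,000 ft.', 'elevation'))"
+ ]
+ },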
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Tc4mq2qlQCnE"
+ },
+ "source": [
+ "#### Finish\n",
+ "Instruct the pipeline to terminate its execution."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "0Wxpx8COPak_"
+ },
+ "outputs": [],
+ "source": [
+ "@ReAct.add_method\n",
+ "def finish(self, _):\n",
+ " \"\"\"Finishes the conversation upon encountering the <finish> token by\n",
+ " setting the flag to `False`\n",
+ " \"\"\"\n",
+ " self.flag = False"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "u9Tl6W98Zhut"
+ },
+ "source": [
+ "### Stop tokens & function calling imitation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "0VnX9zpBcdA0"
+ },
+ "source": [
+ "Now that the function definitions are in place, the next step is to instruct the model to interrupt its execution upon encountering any of the action tokens. You will use the `stop_sequences` parameter of the `genai.types.GenerationConfig` class to tell the model when to stop. Upon encountering an action token, the pipeline extracts which token from the `stop_sequences` argument terminated the model's execution, and then calls the corresponding **tool** (function).\n",
+ "\n",
+ "The function's response is appended to the model's chat history to maintain the context link."
+ ]
+ },
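+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Before wiring everything together, it helps to see the extraction step on its own. The cell below mirrors the regular expressions used in the pipeline; the sample string imitates model output that was cut off by the `</search>` stop sequence. The `parse_action` helper is illustrative only.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import re\n",
+ "\n",
+ "def parse_action(response_text: str):\n",
+ "    \"\"\"Extracts the tool name and its argument from text that was\n",
+ "    terminated by a stop sequence such as </search>.\"\"\"\n",
+ "    cmd = re.findall(r'<(.*)>', response_text)[-1]\n",
+ "    query = response_text.split(f'<{cmd}>')[-1].strip()\n",
+ "    return cmd, query\n",
+ "\n",
+ "sample = 'Thought 1\\nI need to search High Plains.\\n\\nAction 1\\n<search>High Plains'\n",
+ "print(parse_action(sample))  # ('search', 'High Plains')"
+ ]
+ },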
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "vnQom1aQOsK8"
+ },
+ "outputs": [],
+ "source": [
+ "@ReAct.add_method\n",
+ "def __call__(self, max_calls: int=8, **generation_kwargs):\n",
+ " \"\"\"Starts a multi-turn conversation with the chat model using imitated function calling.\n",
+ "\n",
+ " Args:\n",
+ " max_calls (int) default=8: max calls made to the model to\n",
+ " get the final answer.\n",
+ "\n",
+ " generation_kwargs: Same as genai.GenerativeModel.GenerationConfig\n",
+ " candidate_count: (int | None) = None,\n",
+ " stop_sequences: (Iterable[str] | None) = None,\n",
+ " max_output_tokens: (int | None) = None,\n",
+ " temperature: (float | None) = None,\n",
+ " top_p: (float | None) = None,\n",
+ " top_k: (int | None) = None\n",
+ "\n",
+ " Raises:\n",
+ " AssertionError: if max_calls is not between 1 and 8\n",
+ " \"\"\"\n",
+ "\n",
+ " # hyperparameter fine-tuned according to the paper\n",
+ " assert 0 < max_calls <= 8, \"max_calls must be between 1 and 8\"\n",
+ "\n",
+ " user_question = input(\"You: \") # your query goes here\n",
+ "\n",
+ " if len(self.chat.history) == 0:\n",
+ " model_prompt = self.prompt.format(question=user_question)\n",
+ " else:\n",
+ " model_prompt = user_question\n",
+ "\n",
+ " # stop_sequences for the model to imitate function calling\n",
+ " callable_entities = ['</search>', '</lookup>', '</finish>']\n",
+ "\n",
+ " if generation_kwargs:\n",
+ " generation_kwargs.update({'stop_sequences': callable_entities})\n",
+ " else:\n",
+ " generation_kwargs = {'stop_sequences': callable_entities}\n",
+ "\n",
+ " self.flag = True\n",
+ " for idx in range(max_calls):\n",
+ "\n",
+ " self.response = self.chat.send_message(content=[model_prompt],\n",
+ " generation_config=generation_kwargs, stream=True)\n",
+ "\n",
+ " for chunk in self.response:\n",
+ " print(chunk.text, end=' ')\n",
+ "\n",
+ " response_cmd = self.chat.history[-1].parts[-1].text\n",
+ "\n",
+ " try:\n",
+ " # regex to extract the tool name, e.g. <search>\n",
+ " cmd = re.findall(r'<(.*)>', response_cmd)[-1]\n",
+ " print(f'</{cmd}>')\n",
+ " # regex to extract param\n",
+ " query = response_cmd.split(f'<{cmd}>')[-1].strip()\n",
+ " # call to appropriate function\n",
+ " observation = self.__getattribute__(cmd)(query)\n",
+ "\n",
+ " if not self.flag:\n",
+ " break\n",
+ "\n",
+ " stream_message = f\"\\nObservation {idx + 1}\\n{observation}\"\n",
+ " print(stream_message)\n",
+ " # send function's output as user's response\n",
+ " model_prompt = f\"<{cmd}>{query}</{cmd}>'s Output: {stream_message}\"\n",
+ "\n",
+ " except (IndexError, AttributeError) as e:\n",
+ " model_prompt = \"Please try again in a different way. \\\n",
+ " <search>, <lookup>, or <finish> could not be found.\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xtndhebkhW62"
+ },
+ "source": [
+ "### Test ReAct prompted Gemini model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "h_KWkXWwfZ5h"
+ },
+ "outputs": [],
+ "source": [
+ "gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='/content/model_instructions.txt')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 781
+ },
+ "id": "MuZn0LlvjvyV",
+ "outputId": "33a84598-4b67-44e8-dc51-2fbc427b9260"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "You: What is the sum of the age of the main trio from the new Percy Jackson and the Olympians TV Series in real life?\n",
+ "Thought 1\n",
+ "I need to search the new Percy Jackson and the Olympians TV Series, find the main trio and calculate the sum of their ages in real life.\n",
+ "\n",
+ "Action 1\n",
+ "<search>Percy Jackson and the Olympians TV Series </search>\n",
+ "\n",
+ "Observation 1\n",
+ "Percy Jackson and the Olympians is an American fantasy television series created by Rick Riordan and Jonathan E. Steinberg for Disney+. It is based on the book series of the same name by Riordan. Walker Scobell stars as Percy Jackson, alongside Leah Sava Jeffries and Aryan Simhadri.\n",
+ "\n",
+ "\n",
+ "Thought 2\n",
+ "I found the main trio as Walker Scobell, Leah Sava Jeffries and Aryan Simhadri. I need to find their ages and sum it up.\n",
+ "\n",
+ "Action 2\n",
+ "<search>Walker Scobell age </search>\n",
+ "\n",
+ "Observation 2\n",
+ "Walker Scobell is an American actor born on January 5, 2009, in Virginia Beach, Virginia. He has starred in the 2022 action comedy films The Adam Project and Secret Headquarters.\n",
+ "\n",
+ "\n",
+ "Action 3\n",
+ "<search>Leah Sava Jeffries age </search>\n",
+ "\n",
+ "Observation 3\n",
+ "Leah Sava Jeffries is an American child actress born on September 25, 2009, in Detroit, Michigan. Jeffries made her acting debut on the American musical drama, Empire (2015), and later her feature film debut in the action-thriller Beast (2022).\n",
+ "\n",
+ "\n",
+ "Action 4\n",
+ "<search>Aryan Simhadri age </search>\n",
+ "\n",
+ "Observation 4\n",
+ "Aryan Simhadri is an Indian–American actor born on May 6, 2006. He is best known for his role as Grover Underwood in the Disney+ series Percy Jackson and the Olympians.\n",
+ "\n",
+ "\n",
+ "\n",
+ "Thought 3\n",
+ "Walker Scobell is 14 years old, Leah Sava Jeffries is 14 years old, and Aryan Simhadri is 17 years old. Their sum of ages is 14+14 +17=45.\n",
+ "\n",
+ "Action 5\n",
+ "<finish>45 </finish>\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Note: try different combinations of generation_config parameters for varied results\n",
+ "gemini_ReAct_chat()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ZIfeyyI6hoIE"
+ },
+ "source": [
+ "Now, try asking the same question to `gemini-pro` model without the ReAct prompt."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 37
+ },
+ "id": "_NUXNbTuakSC",
+ "outputId": "f91d8ad1-41cc-40bf-f926-81865e2fe4f3"
+ },
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.google.colaboratory.intrinsic+json": {
+ "type": "string"
+ },
+ "text/plain": [
+ "'The Percy Jackson and the Olympians TV Series has not been released yet, so the ages of the main trio are not available.'"
+ ]
+ },
+ "execution_count": 15,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "gemini_ReAct_chat.model.generate_content(\"What is the sum of the age of the main trio from the new Percy Jackson and the Olympians TV Series in real life?\").text"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "-8JSOA5yh1FP"
+ },
+ "source": [
+ "As you can see, information about the asked question is not available in the internal knowldge representation of the langugae model(updated external groundtruth is missing from the model's knowledge) hence it produced a wrong result!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "B-jsJSyBtrP8"
+ },
+ "source": [
+ "## Summary\n",
+ "\n",
+ "As demonstrated by the example above, the ReAct prompted Gemini model is grounded by external information sources and hence is less prone to hallucination. Furthermore, **Thought-Action-Observation** traces generated by the model enhance human interpretability and trustworthiness by allowing users to witness the model's reasoning process for answering the user's query.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "vmdNYTm5Lobz"
+ },
+ "source": [
+ "## Further reading\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "iTiDOoTkLvH6"
+ },
+ "source": [
+ "Head over to the [Streamlit app](https://mayochat.streamlit.app/) to interact with a ReAct prompted Gemini bot."
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
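The action-token dispatch that this notebook builds around `stop_sequences` can be sketched in plain Python. The bracketed `Search[...]`/`Lookup[...]`/`Finish[...]` spellings, the `dispatch_action` helper, and the stand-in tools below are illustrative assumptions, not the notebook's exact tokens or implementation:

```python
import re

def dispatch_action(model_output: str, tools: dict):
    """Find the trailing action token in a generation and call the matching tool.

    NOTE: a hypothetical sketch of the pipeline's dispatch step; the exact
    action-token spelling used by the real prompt may differ.
    """
    match = re.search(r"(Search|Lookup|Finish)\[([^\]]*)\]\s*$", model_output)
    if match is None:
        return None, None  # the model stopped without emitting an action
    action, argument = match.group(1).lower(), match.group(2)
    return action, tools[action](argument)

# Stand-in tools; the notebook backs these with the wikipedia API instead.
tools = {
    "search": lambda query: f"summary for {query!r}",
    "lookup": lambda phrase: f"context around {phrase!r}",
    "finish": lambda answer: answer,  # terminates the loop with the final answer
}

action, observation = dispatch_action(
    "Thought 1: I should search the series.\nAction 1: Search[Percy Jackson and the Olympians]",
    tools,
)
```

In the real pipeline the observation would be appended to the chat history and the model re-invoked, repeating until the finish action is reached.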
From 1aca2bf446e4c762c96ade384435ae9a1bd9cd96 Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Wed, 7 Feb 2024 13:11:32 +0530
Subject: [PATCH 02/17] format with nbfmt
---
site/en/docs/react_gemini_prompting.ipynb | 46 +++++------------------
1 file changed, 10 insertions(+), 36 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index 9a9fafabe..f406aa88a 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -81,8 +81,7 @@
"2. Prompt Gemini with ReAct.\n",
"3. Use the newly prompted model for multi-turn conversations (chat).\n",
"4. How ReAct overcomes issues of hallucination and error propagation by seeking external groundtruth via `wikipedia api`.\n",
- "5. Have Conversations with deployed **ReAct-Gemini Chat 🤖**\n",
- "\n"
+ "5. Have Conversations with deployed **ReAct-Gemini Chat 🤖**\n"
]
},
{
@@ -106,8 +105,7 @@
"1. What is ReAct?\n",
"\n",
"\n",
- "> ReAct is a prompting method which allows language models to showcase the trace of thinking processes/steps involved in answering a user's query. This improves human interpretability and trustworthiness. ReAct prompted models generate **Thought-Action-Observation** triplets for every iteration.\n",
- "\n"
+ "> ReAct is a prompting method which allows language models to showcase the trace of thinking processes/steps involved in answering a user's query. This improves human interpretability and trustworthiness. ReAct prompted models generate **Thought-Action-Observation** triplets for every iteration.\n"
]
},
{
@@ -125,11 +123,7 @@
"\n",
" - ReAct prevents hallucination by mandating language models to seek factual information strictly from the external environment they are provided with. This allows language models to access updated/true groundtruth.\n",
" - ReAct prompted models use their internal knowledge only to analyze **(THOUGHT)** and make informed decisions**(ACTION)** on the external knowledge sources **(OBSERVATION)**.\n",
- " - Internal knowledge is not used to retrieve factual information.\n",
- "\n",
- "\n",
- "\n",
- "\n"
+ " - Internal knowledge is not used to retrieve factual information.\n"
]
},
{
@@ -195,19 +189,14 @@
"\n",
"The Python SDK for the Gemini API, is contained in the [`google-generativeai`](https://pypi.org/project/google-generativeai/) package. Install the dependency using pip:\n",
"\n",
- "You will also need to install the `wikipedia` API.\n",
- "\n"
+ "You will also need to install the `wikipedia` API.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "Twc_XZ7h7Bb4",
- "outputId": "4f86e9e7-7012-417e-af79-03f570953de0"
+ "id": "Twc_XZ7h7Bb4"
},
"outputs": [
{
@@ -351,8 +340,7 @@
"id": "Fw52CHAG0aRr"
},
"source": [
- "### Few-shot prompting to enable in-context learning with Gemini\n",
- "\n"
+ "### Few-shot prompting to enable in-context learning with Gemini\n"
]
},
{
@@ -931,12 +919,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 781
- },
- "id": "MuZn0LlvjvyV",
- "outputId": "33a84598-4b67-44e8-dc51-2fbc427b9260"
+ "id": "MuZn0LlvjvyV"
},
"outputs": [
{
@@ -1006,12 +989,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 37
- },
- "id": "_NUXNbTuakSC",
- "outputId": "f91d8ad1-41cc-40bf-f926-81865e2fe4f3"
+ "id": "_NUXNbTuakSC"
},
"outputs": [
{
@@ -1058,8 +1036,7 @@
"id": "vmdNYTm5Lobz"
},
"source": [
- "## Further reading\n",
- "\n"
+ "## Further reading\n"
]
},
{
@@ -1074,15 +1051,12 @@
],
"metadata": {
"colab": {
- "provenance": [],
+ "name": "react_gemini_prompting.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
- },
- "language_info": {
- "name": "python"
}
},
"nbformat": 4,
From 6c95195c8ba6c03d36b2b84f5999c77fd645deee Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Fri, 9 Feb 2024 08:21:00 +0530
Subject: [PATCH 03/17] Add Wikipedia content licensing information
---
site/en/docs/react_gemini_prompting.ipynb | 665 ++++++++++------------
1 file changed, 290 insertions(+), 375 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index f406aa88a..47b0aca3c 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -1,22 +1,31 @@
{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
"cells": [
{
"cell_type": "markdown",
- "metadata": {
- "id": "T85zXWw3Zs05"
- },
"source": [
"##### Copyright 2024 Google LLC."
- ]
+ ],
+ "metadata": {
+ "id": "T85zXWw3Zs05"
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "cellView": "form",
- "id": "X4uPZ83DbUTq"
- },
- "outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
@@ -29,50 +38,62 @@
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
- ]
+ ],
+ "metadata": {
+ "cellView": "form",
+ "id": "X4uPZ83DbUTq"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
+ "source": [
+ "The Wikipedia content in this tutorial is licensed under CC BY-SA 4.0
This example is for demonstrational and educational purposes only. It is therfore advised to seek legal aid for further distribution/use.
"
+ ],
"metadata": {
- "id": "vX-FA27MbYpQ"
- },
+ "id": "J_wnYaX_hKtu"
+ }
+ },
+ {
+ "cell_type": "markdown",
"source": [
"# ReAct + Gemini: A prompting method for demonstrating reasoning & acting in LLMs"
- ]
+ ],
+ "metadata": {
+ "id": "vX-FA27MbYpQ"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "Pk4Y-PKWc3MU"
- },
"source": [
""
- ]
+ ],
+ "metadata": {
+ "id": "Pk4Y-PKWc3MU"
+ }
},
{
"cell_type": "markdown",
+ "source": [
+ "This notebook is a minimal implementation of [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) with the Google `gemini-pro` model.\n"
+ ],
"metadata": {
"id": "sdkuZY1IdRal"
- },
- "source": [
- "This notebook is a minimal implementation of [ [Yao, S. et al](https://arxiv.org/abs/2210.03629) ] with the Google `gemini-pro` model. `gemini-pro` model.\n"
- ]
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "PSr-BK-5meRo"
- },
"source": [
"This notebook demonstrates the use of `gemini-pro` to generate reasoning traces and task-specific actions by leveraging a **Few-shot ReAct Prompt**. In this walkthrough, you will learn how to:\n",
"\n",
@@ -81,127 +102,98 @@
"2. Prompt Gemini with ReAct.\n",
"3. Use the newly prompted model for multi-turn conversations (chat).\n",
"4. How ReAct overcomes issues of hallucination and error propagation by seeking external groundtruth via `wikipedia api`.\n",
- "5. Have Conversations with deployed **ReAct-Gemini Chat 🤖**\n"
- ]
- },
- {
- "cell_type": "markdown",
+ "5. Have Conversations with deployed **ReAct prmopted Gemini bot 🤖**\n",
+ "\n"
+ ],
"metadata": {
- "id": "lSkx3VHr3WYb"
- },
- "source": [
- "### Background"
- ]
+ "id": "PSr-BK-5meRo"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "NemuU6yB4BWY"
- },
"source": [
- "#### What(s)\n",
+ "### Background\n",
"\n",
+ " \n",
"\n",
- "1. What is ReAct?\n",
"\n",
"\n",
- "> ReAct is a prompting method which allows language models to showcase the trace of thinking processes/steps involved in answering a user's query. This improves human interpretability and trustworthiness. ReAct prompted models generate **Thought-Action-Observation** triplets for every iteration.\n"
- ]
+ "\n"
+ ],
+ "metadata": {
+ "id": "lSkx3VHr3WYb"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "ULalraVs6_Cb"
- },
"source": [
- "#### Why(s)\n",
+ "What(s)\n",
+ "\n",
+ "\n",
+ "1. What is ReAct?\n",
+ "> ReAct is a prompting method which allows language models to showcase the trace of thinking processes/steps involved in answering a user's query. This improves human interpretability and trustworthiness. ReAct prompted models generate **Thought-Action-Observation** triplets for every iteration.\n",
"\n",
+ "Why(s)\n",
"\n",
"\n",
"1. Why to use ReAct when you can simply instruct the model: \"Explain step-by-step\"\n",
"> Large Language models (LLMs) generate responses by extrapolating the prompts they are provided with. This is achieved by using their internal knowledge representation. In short, they are oblivious to the external/updated information, which makes them hallucinate(generate wrong/unreal answers) at times.\n",
"\n",
- " - ReAct prevents hallucination by mandating language models to seek factual information strictly from the external environment they are provided with. This allows language models to access updated/true groundtruth.\n",
- " - ReAct prompted models use their internal knowledge only to analyze **(THOUGHT)** and make informed decisions**(ACTION)** on the external knowledge sources **(OBSERVATION)**.\n",
- " - Internal knowledge is not used to retrieve factual information.\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "7gs5VfqZQj8y"
- },
- "source": [
- "#### How(s)\n",
+ " - ReAct tries to prevent hallucination by mandating language models to seek factual information strictly from the external environment they are provided with.\n",
+ "\n",
+ "\n",
+ "How(s)\n",
"\n",
"1. How does it work?\n",
- ">\n",
"\n",
- " ##### Working of ReAct prompted Gemini model\n",
- " 1. Model receives the query from the user.\n",
- " 2. This query with previous chat history(for continuous context link) if any, will be analyzed by the model.\n",
- " 3. The model will use its internal knowledge to decide which among the following action it needs to take.\n",
+ " - Model receives the query from the user.\n",
+ " - This query with previous chat history(for continuous context link) if any, will be analyzed by the model.\n",
+ " - The model will use its internal knowledge to decide which among the following action it needs to take.\n",
" - Search[entity]: By analyzing the query, the model will decide what `entity` to query to the wikipedia API.\n",
" - Lookup[phrase]: Model will analyze the content of the wikipedia page returned by `Search[entity]` action and then decide what specific information or `phrase` is needed to be retrieved from that page to answer the user's question.\n",
- " - Finish[Answer]: Return the `Answer` to the user.\n",
- " 4. As evident from this point, the factual information is strictly being retrieved from external sources and decision making, parameter choosing and function calling is being generated from the model's internal knowledge and context.\n",
- " 5. Step 3 is repeated until the model finds the answer.\n",
- "\n",
- " \n"
- ]
- },
- {
- "cell_type": "markdown",
+ " - Finish[Answer]: Return the `Answer` to the user."
+ ],
"metadata": {
- "id": "ZHbdlviLTtlx"
- },
- "source": [
- "Now that you have understood the background, you can dive deep into technical implementations."
- ]
+ "id": "PqoT0ojAcV9P"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "80bo5tjXtb_X"
- },
"source": [
- "## Prerequisites\n",
- "\n",
- "You can run this quickstart in [Google Colab](https://colab.research.google.com/github/google/generative-ai-docs/blob/main/site/en/docs/react_gemini_prompting.ipynb), which runs this notebook directly in the browser and does not require additional environment configuration.\n"
- ]
- },
- {
- "cell_type": "markdown",
+ "## Setup\n"
+ ],
"metadata": {
"id": "cVvxnBG-thZG"
- },
- "source": [
- "## Setup\n"
- ]
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "Xq6NOA99tiHK"
- },
"source": [
"### Install the Python SDK\n",
"\n",
"The Python SDK for the Gemini API, is contained in the [`google-generativeai`](https://pypi.org/project/google-generativeai/) package. Install the dependency using pip:\n",
"\n",
- "You will also need to install the `wikipedia` API.\n"
- ]
+ "You will also need to install the `wikipedia` API.\n",
+ "\n"
+ ],
+ "metadata": {
+ "id": "Xq6NOA99tiHK"
+ }
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
- "id": "Twc_XZ7h7Bb4"
+ "id": "Twc_XZ7h7Bb4",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "d35eab0b-197c-4948-e514-8f0e016e278b"
},
"outputs": [
{
- "name": "stdout",
"output_type": "stream",
+ "name": "stdout",
"text": [
" Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Building wheel for wikipedia (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
@@ -215,29 +207,24 @@
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "vqv5MnQUuBZJ"
- },
"source": [
"### Import packages"
- ]
+ ],
+ "metadata": {
+ "id": "vqv5MnQUuBZJ"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "qS5HJk_VuCup"
- },
"source": [
"Import the necessary packages."
- ]
+ ],
+ "metadata": {
+ "id": "qS5HJk_VuCup"
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "Jz5HOLy47VX0"
- },
- "outputs": [],
"source": [
"from typing import Any, Dict, Optional\n",
"from contextlib import contextmanager\n",
@@ -249,116 +236,117 @@
"\n",
"import google.generativeai as genai\n",
"from google.colab import userdata"
- ]
+ ],
+ "metadata": {
+ "id": "Jz5HOLy47VX0"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "4xsPDHz_uSYH"
- },
"source": [
"### Set up your API key\n",
"\n",
"Before you can use the Gemini API, you must first obtain an API key. If you don't already have one, create a key with one click in Google AI Studio.\n",
"\n",
"Get an API key\n"
- ]
+ ],
+ "metadata": {
+ "id": "4xsPDHz_uSYH"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "3QC1DUOxuWDS"
- },
"source": [
"In Colab, add the key to the secrets manager under the \"🔑\" in the left panel. Give it the name `GOOGLE_API_KEY`."
- ]
+ ],
+ "metadata": {
+ "id": "3QC1DUOxuWDS"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "SAvjxTybuWw-"
- },
"source": [
"Once you have the API key, pass it to the SDK. You can do this in two ways:\n",
"\n",
"* Put the key in the `GOOGLE_API_KEY` environment variable (the SDK will automatically pick it up from there).\n",
"* Pass the key to `genai.configure(api_key=...)`\n"
- ]
+ ],
+ "metadata": {
+ "id": "SAvjxTybuWw-"
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "JAzIedGr9PdN"
- },
- "outputs": [],
"source": [
"# Or use `os.getenv('GOOGLE_API_KEY')` to fetch an environment variable.\n",
"GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n",
"\n",
"genai.configure(api_key=GOOGLE_API_KEY)"
- ]
+ ],
+ "metadata": {
+ "id": "JAzIedGr9PdN"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "Sqkwp87FumIp"
- },
"source": [
"## The ReAct prompt"
- ]
+ ],
+ "metadata": {
+ "id": "Sqkwp87FumIp"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "lLv9Kuuu5Ffs"
- },
"source": [
"The original prompts used in the paper is available at [https://github.com/ysymyth/ReAct/tree/master/prompts](https://github.com/ysymyth/ReAct/tree/master/prompts)\n",
"\n",
"Here, you will be working with the following ReAct prompt with a few minor adjustments."
- ]
+ ],
+ "metadata": {
+ "id": "lLv9Kuuu5Ffs"
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "g8klL8df4iXe"
- },
- "outputs": [],
"source": [
"model_instructions = \"\"\"Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, Observation is gauging relevant information from an Action's output and Action can be of three types:\n",
- "(1) entity, which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search.\n",
- "(2) keyword, which returns the next sentence containing keyword in the current passage.\n",
+ "(1) entity, which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search and you can try to search the information from those topics.\n",
+ "(2) keyword, which returns the next sentence containing keyword in the current context.\n",
"(3) answer, which returns the answer and finishes the task.\n",
"\"\"\""
- ]
+ ],
+ "metadata": {
+ "id": "g8klL8df4iXe"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
+ "source": [
+ "### Few-shot prompting to enable in-context learning with Gemini\n",
+ "\n"
+ ],
"metadata": {
"id": "Fw52CHAG0aRr"
- },
- "source": [
- "### Few-shot prompting to enable in-context learning with Gemini\n"
- ]
+ }
},
{
"cell_type": "markdown",
+ "source": [
+ "While large language models show good understanding of the instructions they are prompted with, they still may perform poorly on complex tasks in a zero-shot setting. Hence, you will now provide a few examples along with your prompt to steer the model's output according to your needs. This **in-context learning** improves the model's performance significantly."
+ ],
"metadata": {
"id": "-jhaD4ChNv6M"
- },
- "source": [
- "While large language models show remarkable understanding of the instructions they are prompted with, they still may lack on performing complex tasks in a zero-shot setting. Hence, you will now provide a few examples along with your prompt to steer the model's output according to your needs. This **in-context learning** improves the model's performance significantly."
- ]
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "tZ7vezr02qv0"
- },
- "outputs": [],
"source": [
"examples = \"\"\"\n",
"Here are some examples.\n",
@@ -545,67 +533,64 @@
"\n",
"Question\n",
"{question}\"\"\""
- ]
+ ],
+ "metadata": {
+ "id": "tZ7vezr02qv0"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
+ "source": [
+ "Copy the instructions along with examples in a file called `model_instructions.txt`"
+ ],
"metadata": {
"id": "xeCImqiN3WiQ"
- },
- "source": [
- "Dump the instructions along with examples in a file called `model_instructions.txt`"
- ]
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "ZyTfAdpk26oB"
- },
- "outputs": [],
"source": [
"ReAct_prompt = model_instructions + examples\n",
"with open('model_instructions.txt', 'w') as f:\n",
" f.write(ReAct_prompt)"
- ]
+ ],
+ "metadata": {
+ "id": "ZyTfAdpk26oB"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "Is8BIVQP3u95"
- },
"source": [
"## The Gemini-ReAct pipeline"
- ]
+ ],
+ "metadata": {
+ "id": "Is8BIVQP3u95"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "PqEwKVDgM1MF"
- },
"source": [
"### Setup"
- ]
+ ],
+ "metadata": {
+ "id": "PqEwKVDgM1MF"
+ }
},
{
"cell_type": "markdown",
+ "source": [
+ "You will now build an end-to-end pipeline to facilitate multi-turn chat with the ReAct-prompted Gemini model."
+ ],
"metadata": {
"id": "T4M3lxEoM3k0"
- },
- "source": [
- "The main objective now is to build an end-to-end pipeline to facilitate multi-turn chat with the ReAct-prompted Gemini model.\n",
- "\n",
- "\n",
- "Firstly, your aim is to initialize and establish a chat session with the model. This will set-up a foundation upon which you can build out easily. In addition to the prompt and chat session, you will be keeping a track of model's search history (more on this later)"
- ]
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "vssDZcroN-Ob"
- },
- "outputs": [],
"source": [
"class ReAct:\n",
" def __init__(self, model: str, ReAct_prompt: str | os.PathLike):\n",
@@ -619,7 +604,8 @@
" \"\"\"\n",
" self.model = genai.GenerativeModel(model)\n",
" self.chat = self.model.start_chat(history=[])\n",
- " self._search_history = []\n",
+ " self._search_history: list[str] = []\n",
+ " self._search_urls: list[str] = []\n",
"\n",
" try:\n",
" # try to read the file\n",
@@ -643,22 +629,24 @@
" \"\"\"\n",
" text = text.replace(\"\\n\", \" \")\n",
" return text"
- ]
+ ],
+ "metadata": {
+ "id": "vssDZcroN-Ob"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "xKfThpmhMZYZ"
- },
"source": [
"### Function(tools) definitions\n"
- ]
+ ],
+ "metadata": {
+ "id": "xKfThpmhMZYZ"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "dnvZ2jqdRHE1"
- },
"source": [
"As instructed by the prompt, the model will be generating **Thought-Action-Observation** traces, where every **Action** trace could be one of the following tokens:\n",
"\n",
@@ -667,31 +655,29 @@
"2. : Lookup for specific information on a page with the wikipedia API.\n",
"3. : Stop the execution of the model and return the answer.\n",
"\n",
- "In short, if the model encounters any of these tokens, the model should make use of the `tools` made available to the model. This understanding of the model to leverage acquired toolsets to collect information from the external world is often referred to as `function calling`. Therefore, the next goal is to imitate this `function calling` technique in order to allow ReAct prompted Gemini model to access the external groundtruth.\n",
+ "If the model encounters any of these tokens, the model should make use of the `tools` made available to the model. This understanding of the model to leverage acquired toolsets to collect information from the external world is often referred to as **function calling**. Therefore, the next goal is to imitate this function calling technique in order to allow ReAct prompted Gemini model to access the external groundtruth.\n",
"\n",
- "The Gemini API supports fucntion calling and you could use this feature to set up your tools. However, for this tutorial, you will learn to simulate it using `stop_sequences` parameter.\n",
+ "The Gemini API supports function calling and you could use this feature to set up your tools. However, for this tutorial, you will learn to simulate it using `stop_sequences` parameter.\n",
"\n",
"\n",
"Define the tools:"
- ]
+ ],
+ "metadata": {
+ "id": "dnvZ2jqdRHE1"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "ysHN4y4FPlJZ"
- },
"source": [
"#### Search\n",
- "Used to perform Wikipedia searches."
- ]
+ "Define a method to perform Wikipedia searches"
+ ],
+ "metadata": {
+ "id": "ysHN4y4FPlJZ"
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "yCRB4g4BNzak"
- },
- "outputs": [],
"source": [
"@ReAct.add_method\n",
"def search(self, query: str):\n",
@@ -707,6 +693,7 @@
" try:\n",
" # try to get the summary for requested `query` from the wikipedia\n",
" observation = wikipedia.summary(query, sentences=4)\n",
+ " wiki_url = wikipedia.page(query, auto_suggest=False).url\n",
" observation = self.clean(observation)\n",
"\n",
" # if successful, return the first 2-3 sentences from the summary as model's context\n",
@@ -716,33 +703,35 @@
"\n",
" # keep track of the model's search history\n",
" self._search_history.append(query)\n",
+ " self._search_urls.append(wiki_url)\n",
+ " print(f\"Information Source: {wiki_url}\")\n",
"\n",
" # if the page is ambiguous/does not exist, return similar search phrases for model's context\n",
" except (DisambiguationError, PageError) as e:\n",
" search_results = wikipedia.search(query)\n",
- " observation = f'Could not find [\" {query} \"]. Similar: {search_results}'\n",
+ " observation = f'Could not find [\" {query} \"]. Similar: {search_results}. You should search for {search_results[0]} instead.'\n",
"\n",
" finally:\n",
" return observation"
- ]
+ ],
+ "metadata": {
+ "id": "yCRB4g4BNzak"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "v3fUbHUsPyoF"
- },
"source": [
"#### Lookup\n",
"Look for a specific phrase on the Wikipedia page."
- ]
+ ],
+ "metadata": {
+ "id": "v3fUbHUsPyoF"
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "_F4kAF77O0E_"
- },
- "outputs": [],
"source": [
"@ReAct.add_method\n",
"def lookup(self, phrase: str, context_length=200):\n",
@@ -761,68 +750,71 @@
" result (str): Context related to the `phrase` within the page.\n",
" \"\"\"\n",
" # get the last searched wikipedia page and find `phrase` in it.\n",
- " page = wikipedia.page(self._search_history[-1], auto_suggest=False).content\n",
+ " page = wikipedia.page(self._search_history[-1], auto_suggest=False)\n",
+ " page = page.content\n",
" page = self.clean(page)\n",
" start_index = page.find(phrase)\n",
"\n",
" # extract sentences considering the context length defined\n",
" result = page[max(0, start_index - context_length):start_index+len(phrase)+context_length]\n",
+ " print(f\"Information Source: {self._search_urls[-1]}\")\n",
" return result"
- ]
+ ],
+ "metadata": {
+ "id": "_F4kAF77O0E_"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "Tc4mq2qlQCnE"
- },
"source": [
"#### Finish\n",
"Instruct the pipline to terminate its execution."
- ]
+ ],
+ "metadata": {
+ "id": "Tc4mq2qlQCnE"
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "0Wxpx8COPak_"
- },
- "outputs": [],
"source": [
"@ReAct.add_method\n",
"def finish(self, _):\n",
" \"\"\"finishes the conversation on encoutring token by\n",
" setting the flag to `False`\n",
" \"\"\"\n",
- " self.flag = False"
- ]
+ " self.flag = False\n",
+ " print(f\"Information Sources: {self._search_urls}\")"
+ ],
+ "metadata": {
+ "id": "0Wxpx8COPak_"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "u9Tl6W98Zhut"
- },
"source": [
"### Stop tokens & function calling imitation"
- ]
+ ],
+ "metadata": {
+ "id": "u9Tl6W98Zhut"
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "0VnX9zpBcdA0"
- },
"source": [
- "Now that you are all set with function definitions, the next step is to instruct the model to interrupt its execution upon encountering any of the action tokens. You will make use of the `stop_sequences` parameter from `genai.GenerativeModel.GenerationConfig` class to instruct the model when to stop. Upon encountering an action token, the pipeline will simply extract what specific token from the `stop_sequences` argument terminated the model's execution thereby calling appropriate **tool**(function).\n",
+ "Now that you are all set with function definitions, the next step is to instruct the model to interrupt its execution upon encountering any of the action tokens. You will make use of the `stop_sequences` parameter from [`genai.GenerativeModel.GenerationConfig`](https://ai.google.dev/api/python/google/generativeai/GenerationConfig) class to instruct the model when to stop. Upon encountering an action token, the pipeline will simply extract what specific token from the `stop_sequences` argument terminated the model's execution thereby calling appropriate **tool**(function).\n",
"\n",
"Function's response will be added to model's chat history for continuing the context link."
- ]
+ ],
+ "metadata": {
+ "id": "0VnX9zpBcdA0"
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "vnQom1aQOsK8"
- },
- "outputs": [],
"source": [
"@ReAct.add_method\n",
"def __call__(self, max_calls: int=8, **generation_kwargs):\n",
@@ -893,172 +885,95 @@
" except (IndexError, AttributeError) as e:\n",
" model_prompt = \"Please try again in a different way. \\\n",
" , , coould not be found\""
- ]
+ ],
+ "metadata": {
+ "id": "vnQom1aQOsK8"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "xtndhebkhW62"
- },
"source": [
"### Test ReAct prompted Gemini model"
- ]
+ ],
+ "metadata": {
+ "id": "xtndhebkhW62"
+ }
},
{
"cell_type": "code",
- "execution_count": null,
+ "source": [
+ "gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='/content/model_instructions.txt')"
+ ],
"metadata": {
"id": "h_KWkXWwfZ5h"
},
- "outputs": [],
- "source": [
- "gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='/content/model_instructions.txt')"
- ]
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "MuZn0LlvjvyV"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "You: What is the sum of the age of the main trio from the new Percy Jackson and the Olympians TV Series in real life?\n",
- "Thought 1\n",
- "I need to search the new Percy Jackson and the Olympians TV Series, find the main trio and calculate the sum of their ages in real life.\n",
- "\n",
- "Action 1\n",
- "Percy Jackson and the Olympians TV Series \n",
- "\n",
- "Observation 1\n",
- "Percy Jackson and the Olympians is an American fantasy television series created by Rick Riordan and Jonathan E. Steinberg for Disney+. It is based on the book series of the same name by Riordan. Walker Scobell stars as Percy Jackson, alongside Leah Sava Jeffries and Aryan Simhadri.\n",
- "\n",
- "\n",
- "Thought 2\n",
- "I found the main trio as Walker Scobell, Leah Sava Jeffries and Aryan Simhadri. I need to find their ages and sum it up.\n",
- "\n",
- "Action 2\n",
- "Walker Scobell age \n",
- "\n",
- "Observation 2\n",
- "Walker Scobell is an American actor born on January 5, 2009, in Virginia Beach, Virginia. He has starred in the 2022 action comedy films The Adam Project and Secret Headquarters.\n",
- "\n",
- "\n",
- "Action 3\n",
- "Leah Sava Jeffries age \n",
- "\n",
- "Observation 3\n",
- "Leah Sava Jeffries is an American child actress born on September 25, 2009, in Detroit, Michigan. Jeffries made her acting debut on the American musical drama, Empire (2015), and later her feature film debut in the action-thriller Beast (2022).\n",
- "\n",
- "\n",
- "Action 4\n",
- "Aryan Simhadri age \n",
- "\n",
- "Observation 4\n",
- "Aryan Simhadri is an Indian–American actor born on May 6, 2006. He is best known for his role as Grover Underwood in the Disney+ series Percy Jackson and the Olympians.\n",
- "\n",
- "\n",
- "\n",
- "Thought 3\n",
- "Walker Scobell is 14 years old, Leah Sava Jeffries is 14 years old, and Aryan Simhadri is 17 years old. Their sum of ages is 14+14 +17=45.\n",
- "\n",
- "Action 5\n",
- "45 \n"
- ]
- }
- ],
"source": [
" # Note: try different combinations of generational_config parameters for variational results\n",
"gemini_ReAct_chat()\n",
"#"
- ]
+ ],
+ "metadata": {
+ "id": "MuZn0LlvjvyV"
+ },
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "ZIfeyyI6hoIE"
- },
"source": [
"Now, try asking the same question to `gemini-pro` model without the ReAct prompt."
- ]
+ ],
+ "metadata": {
+ "id": "ZIfeyyI6hoIE"
+ }
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "_NUXNbTuakSC"
- },
- "outputs": [
- {
- "data": {
- "application/vnd.google.colaboratory.intrinsic+json": {
- "type": "string"
- },
- "text/plain": [
- "'The Percy Jackson and the Olympians TV Series has not been released yet, so the ages of the main trio are not available.'"
- ]
- },
- "execution_count": 15,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
"source": [
- "gemini_ReAct_chat.model.generate_content(\"What is the sum of the age of the main trio from the new Percy Jackson and the Olympians TV Series in real life?\").text"
- ]
- },
- {
- "cell_type": "markdown",
+ "gemini_ReAct_chat.model.generate_content(\"[ask same question here]\").text"
+ ],
"metadata": {
- "id": "-8JSOA5yh1FP"
+ "id": "_NUXNbTuakSC"
},
- "source": [
- "As you can see, information about the asked question is not available in the internal knowldge representation of the langugae model(updated external groundtruth is missing from the model's knowledge) hence it produced a wrong result!"
- ]
+ "execution_count": null,
+ "outputs": []
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "B-jsJSyBtrP8"
- },
"source": [
"## Summary\n",
"\n",
- "As demonstrated by the example above, the ReAct prompted Gemini model is grounded by external information sources and hence is less prone to hallucination. Furthermore, **Thought-Action-Observation** traces generated by the model enhance human interpretability and trustworthiness by allowing users to witness the model's reasoning process for answering the user's query.\n"
- ]
+ "The ReAct prompted Gemini model is grounded by external information sources and hence is less prone to hallucination. Furthermore, **Thought-Action-Observation** traces generated by the model enhance human interpretability and trustworthiness by allowing users to witness the model's reasoning process for answering the user's query.\n"
+ ],
+ "metadata": {
+ "id": "B-jsJSyBtrP8"
+ }
},
{
"cell_type": "markdown",
+ "source": [
+ "## Further reading\n",
+ "\n"
+ ],
"metadata": {
"id": "vmdNYTm5Lobz"
- },
- "source": [
- "## Further reading\n"
- ]
+ }
},
{
"cell_type": "markdown",
- "metadata": {
- "id": "iTiDOoTkLvH6"
- },
"source": [
"Head over to the [Streamlit app](https://mayochat.streamlit.app/) to interact with a ReAct prompted Gemini bot."
- ]
- }
- ],
- "metadata": {
- "colab": {
- "name": "react_gemini_prompting.ipynb",
- "toc_visible": true
- },
- "kernelspec": {
- "display_name": "Python 3",
- "name": "python3"
+ ],
+ "metadata": {
+ "id": "iTiDOoTkLvH6"
+ }
}
- },
- "nbformat": 4,
- "nbformat_minor": 0
+ ]
}
From b6cc5486e5c45e0d105b139c27de9e11873d260a Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Fri, 9 Feb 2024 08:28:33 +0530
Subject: [PATCH 04/17] nbfmt format
---
site/en/docs/react_gemini_prompting.ipynb | 497 +++++++++++-----------
1 file changed, 241 insertions(+), 256 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index 47b0aca3c..34abb51e0 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -1,31 +1,22 @@
{
- "nbformat": 4,
- "nbformat_minor": 0,
- "metadata": {
- "colab": {
- "provenance": [],
- "toc_visible": true
- },
- "kernelspec": {
- "name": "python3",
- "display_name": "Python 3"
- },
- "language_info": {
- "name": "python"
- }
- },
"cells": [
{
"cell_type": "markdown",
- "source": [
- "##### Copyright 2024 Google LLC."
- ],
"metadata": {
"id": "T85zXWw3Zs05"
- }
+ },
+ "source": [
+ "##### Copyright 2024 Google LLC."
+ ]
},
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "cellView": "form",
+ "id": "X4uPZ83DbUTq"
+ },
+ "outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
@@ -38,34 +29,31 @@
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
- ],
- "metadata": {
- "cellView": "form",
- "id": "X4uPZ83DbUTq"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "The Wikipedia content in this tutorial is licensed under CC BY-SA 4.0
This example is for demonstrational and educational purposes only. It is therfore advised to seek legal aid for further distribution/use.
"
- ],
"metadata": {
"id": "J_wnYaX_hKtu"
- }
+ },
+ "source": [
+ "The Wikipedia content in this tutorial is licensed under CC BY-SA 4.0
This example is for demonstrational and educational purposes only. It is therfore advised to seek legal aid for further distribution/use.
"
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "# ReAct + Gemini: A prompting method for demonstrating reasoning & acting in LLMs"
- ],
"metadata": {
"id": "vX-FA27MbYpQ"
- }
+ },
+ "source": [
+ "# ReAct + Gemini: A prompting method for demonstrating reasoning & acting in LLMs"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "Pk4Y-PKWc3MU"
+ },
"source": [
""
- ],
- "metadata": {
- "id": "Pk4Y-PKWc3MU"
- }
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "This notebook is a minimal implementation of [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) with the Google `gemini-pro` model.\n"
- ],
"metadata": {
"id": "sdkuZY1IdRal"
- }
+ },
+ "source": [
+ "This notebook is a minimal implementation of [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) with the Google `gemini-pro` model.\n"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "PSr-BK-5meRo"
+ },
"source": [
"This notebook demonstrates the use of `gemini-pro` to generate reasoning traces and task-specific actions by leveraging a **Few-shot ReAct Prompt**. In this walkthrough, you will learn how to:\n",
"\n",
@@ -102,30 +90,25 @@
"2. Prompt Gemini with ReAct.\n",
"3. Use the newly prompted model for multi-turn conversations (chat).\n",
"4. How ReAct overcomes issues of hallucination and error propagation by seeking external groundtruth via `wikipedia api`.\n",
- "5. Have Conversations with deployed **ReAct prmopted Gemini bot 🤖**\n",
- "\n"
- ],
- "metadata": {
- "id": "PSr-BK-5meRo"
- }
+ "5. Have Conversations with deployed **ReAct prmopted Gemini bot 🤖**\n"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "lSkx3VHr3WYb"
+ },
"source": [
"### Background\n",
"\n",
- " \n",
- "\n",
- "\n",
- "\n",
- "\n"
- ],
- "metadata": {
- "id": "lSkx3VHr3WYb"
- }
+ " \n"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "PqoT0ojAcV9P"
+ },
"source": [
"What(s)\n",
"\n",
@@ -152,48 +135,40 @@
" - Search[entity]: By analyzing the query, the model will decide what `entity` to query to the wikipedia API.\n",
" - Lookup[phrase]: Model will analyze the content of the wikipedia page returned by `Search[entity]` action and then decide what specific information or `phrase` is needed to be retrieved from that page to answer the user's question.\n",
" - Finish[Answer]: Return the `Answer` to the user."
- ],
- "metadata": {
- "id": "PqoT0ojAcV9P"
- }
+ ]
},
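The three action types above can be recognized with a small parser. The sketch below is an illustration of the idea, not the notebook's exact code; it extracts the action name and its bracketed argument from a generated trace:

```python
import re

def parse_action(model_output: str):
    """Extract the (action, argument) pair from a ReAct trace,
    e.g. 'Search[Percy Jackson]' -> ('Search', 'Percy Jackson')."""
    match = re.search(r"(Search|Lookup|Finish)\[(.*?)\]", model_output)
    if match is None:
        return None
    return match.group(1), match.group(2)

print(parse_action("Action 1\nSearch[Percy Jackson and the Olympians]"))
# ('Search', 'Percy Jackson and the Olympians')
```

If no action token is present in the text, the parser returns `None`, signalling that the model has not yet committed to an action.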
{
"cell_type": "markdown",
- "source": [
- "## Setup\n"
- ],
"metadata": {
"id": "cVvxnBG-thZG"
- }
+ },
+ "source": [
+ "## Setup\n"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "Xq6NOA99tiHK"
+ },
"source": [
"### Install the Python SDK\n",
"\n",
"The Python SDK for the Gemini API, is contained in the [`google-generativeai`](https://pypi.org/project/google-generativeai/) package. Install the dependency using pip:\n",
"\n",
- "You will also need to install the `wikipedia` API.\n",
- "\n"
- ],
- "metadata": {
- "id": "Xq6NOA99tiHK"
- }
+ "You will also need to install the `wikipedia` API.\n"
+ ]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
- "id": "Twc_XZ7h7Bb4",
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "outputId": "d35eab0b-197c-4948-e514-8f0e016e278b"
+ "id": "Twc_XZ7h7Bb4"
},
"outputs": [
{
- "output_type": "stream",
"name": "stdout",
+ "output_type": "stream",
"text": [
" Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Building wheel for wikipedia (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
@@ -207,24 +182,29 @@
},
{
"cell_type": "markdown",
- "source": [
- "### Import packages"
- ],
"metadata": {
"id": "vqv5MnQUuBZJ"
- }
+ },
+ "source": [
+ "### Import packages"
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "Import the necessary packages."
- ],
"metadata": {
"id": "qS5HJk_VuCup"
- }
+ },
+ "source": [
+ "Import the necessary packages."
+ ]
},
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "Jz5HOLy47VX0"
+ },
+ "outputs": [],
"source": [
"from typing import Any, Dict, Optional\n",
"from contextlib import contextmanager\n",
@@ -236,117 +216,116 @@
"\n",
"import google.generativeai as genai\n",
"from google.colab import userdata"
- ],
- "metadata": {
- "id": "Jz5HOLy47VX0"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "4xsPDHz_uSYH"
+ },
"source": [
"### Set up your API key\n",
"\n",
"Before you can use the Gemini API, you must first obtain an API key. If you don't already have one, create a key with one click in Google AI Studio.\n",
"\n",
"Get an API key\n"
- ],
- "metadata": {
- "id": "4xsPDHz_uSYH"
- }
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "In Colab, add the key to the secrets manager under the \"🔑\" in the left panel. Give it the name `GOOGLE_API_KEY`."
- ],
"metadata": {
"id": "3QC1DUOxuWDS"
- }
+ },
+ "source": [
+ "In Colab, add the key to the secrets manager under the \"🔑\" in the left panel. Give it the name `GOOGLE_API_KEY`."
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "SAvjxTybuWw-"
+ },
"source": [
"Once you have the API key, pass it to the SDK. You can do this in two ways:\n",
"\n",
"* Put the key in the `GOOGLE_API_KEY` environment variable (the SDK will automatically pick it up from there).\n",
"* Pass the key to `genai.configure(api_key=...)`\n"
- ],
- "metadata": {
- "id": "SAvjxTybuWw-"
- }
+ ]
},
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "JAzIedGr9PdN"
+ },
+ "outputs": [],
"source": [
"# Or use `os.getenv('GOOGLE_API_KEY')` to fetch an environment variable.\n",
"GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n",
"\n",
"genai.configure(api_key=GOOGLE_API_KEY)"
- ],
- "metadata": {
- "id": "JAzIedGr9PdN"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "## The ReAct prompt"
- ],
"metadata": {
"id": "Sqkwp87FumIp"
- }
+ },
+ "source": [
+ "## The ReAct prompt"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "lLv9Kuuu5Ffs"
+ },
"source": [
"The original prompts used in the paper is available at [https://github.com/ysymyth/ReAct/tree/master/prompts](https://github.com/ysymyth/ReAct/tree/master/prompts)\n",
"\n",
"Here, you will be working with the following ReAct prompt with a few minor adjustments."
- ],
- "metadata": {
- "id": "lLv9Kuuu5Ffs"
- }
+ ]
},
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "g8klL8df4iXe"
+ },
+ "outputs": [],
"source": [
"model_instructions = \"\"\"Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, Observation is gauging relevant information from an Action's output and Action can be of three types:\n",
"(1) entity, which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search and you can try to search the information from those topics.\n",
"(2) keyword, which returns the next sentence containing keyword in the current context.\n",
"(3) answer, which returns the answer and finishes the task.\n",
"\"\"\""
- ],
- "metadata": {
- "id": "g8klL8df4iXe"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "### Few-shot prompting to enable in-context learning with Gemini\n",
- "\n"
- ],
"metadata": {
"id": "Fw52CHAG0aRr"
- }
+ },
+ "source": [
+ "### Few-shot prompting to enable in-context learning with Gemini\n"
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "While large language models show good understanding of the instructions they are prompted with, they still may perform poorly on complex tasks in a zero-shot setting. Hence, you will now provide a few examples along with your prompt to steer the model's output according to your needs. This **in-context learning** improves the model's performance significantly."
- ],
"metadata": {
"id": "-jhaD4ChNv6M"
- }
+ },
+ "source": [
+ "While large language models show good understanding of the instructions they are prompted with, they still may perform poorly on complex tasks in a zero-shot setting. Hence, you will now provide a few examples along with your prompt to steer the model's output according to your needs. This **in-context learning** improves the model's performance significantly."
+ ]
},
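In-context learning here amounts to plain string assembly: the instructions, a block of worked examples, and a `{question}` placeholder filled at chat time. A minimal sketch (the instruction and example text is abbreviated, not the notebook's full prompt):

```python
model_instructions = (
    "Solve a question answering task with interleaving "
    "Thought, Action, Observation steps.\n"
)

# Abbreviated stand-in for the notebook's worked examples.
examples = """
Here are some examples.

Question
What is the capital of France?
Thought 1
I need to search France and find its capital.

Question
{question}"""

ReAct_prompt = model_instructions + examples
# The {question} placeholder is filled in at chat time:
filled = ReAct_prompt.format(question="Who wrote The Lightning Thief?")
```

Because the placeholder sits after the examples, the model sees the few-shot demonstrations immediately before the real question.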
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "tZ7vezr02qv0"
+ },
+ "outputs": [],
"source": [
"examples = \"\"\"\n",
"Here are some examples.\n",
@@ -533,64 +512,64 @@
"\n",
"Question\n",
"{question}\"\"\""
- ],
- "metadata": {
- "id": "tZ7vezr02qv0"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "Copy the instructions along with examples in a file called `model_instructions.txt`"
- ],
"metadata": {
"id": "xeCImqiN3WiQ"
- }
+ },
+ "source": [
+ "Copy the instructions along with examples in a file called `model_instructions.txt`"
+ ]
},
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "ZyTfAdpk26oB"
+ },
+ "outputs": [],
"source": [
"ReAct_prompt = model_instructions + examples\n",
"with open('model_instructions.txt', 'w') as f:\n",
" f.write(ReAct_prompt)"
- ],
- "metadata": {
- "id": "ZyTfAdpk26oB"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "## The Gemini-ReAct pipeline"
- ],
"metadata": {
"id": "Is8BIVQP3u95"
- }
+ },
+ "source": [
+ "## The Gemini-ReAct pipeline"
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "### Setup"
- ],
"metadata": {
"id": "PqEwKVDgM1MF"
- }
+ },
+ "source": [
+ "### Setup"
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "You will now build an end-to-end pipeline to facilitate multi-turn chat with the ReAct-prompted Gemini model."
- ],
"metadata": {
"id": "T4M3lxEoM3k0"
- }
+ },
+ "source": [
+ "You will now build an end-to-end pipeline to facilitate multi-turn chat with the ReAct-prompted Gemini model."
+ ]
},
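The tool cells below attach methods to this class with an `@ReAct.add_method` decorator, whose body is not shown in this hunk. A minimal implementation consistent with that usage (an assumption, not necessarily the notebook's exact code) attaches a function to the class after its definition:

```python
class ReAct:
    @classmethod
    def add_method(cls, func):
        # Attach `func` as a method after the class definition, so each
        # tool (search, lookup, finish) can live in its own notebook cell.
        setattr(cls, func.__name__, func)
        return func

@ReAct.add_method
def greet(self, name: str):
    return f"hello, {name}"

bot = ReAct()
print(bot.greet("Gemini"))  # hello, Gemini
```

This pattern keeps the class open for extension across cells, which suits a notebook's incremental workflow.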
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "vssDZcroN-Ob"
+ },
+ "outputs": [],
"source": [
"class ReAct:\n",
" def __init__(self, model: str, ReAct_prompt: str | os.PathLike):\n",
@@ -629,24 +608,22 @@
" \"\"\"\n",
" text = text.replace(\"\\n\", \" \")\n",
" return text"
- ],
- "metadata": {
- "id": "vssDZcroN-Ob"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "### Function(tools) definitions\n"
- ],
"metadata": {
"id": "xKfThpmhMZYZ"
- }
+ },
+ "source": [
+ "### Function(tools) definitions\n"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "dnvZ2jqdRHE1"
+ },
"source": [
"As instructed by the prompt, the model will be generating **Thought-Action-Observation** traces, where every **Action** trace could be one of the following tokens:\n",
"\n",
@@ -661,23 +638,25 @@
"\n",
"\n",
"Define the tools:"
- ],
- "metadata": {
- "id": "dnvZ2jqdRHE1"
- }
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "ysHN4y4FPlJZ"
+ },
"source": [
"#### Search\n",
"Define a method to perform Wikipedia searches"
- ],
- "metadata": {
- "id": "ysHN4y4FPlJZ"
- }
+ ]
},
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "yCRB4g4BNzak"
+ },
+ "outputs": [],
"source": [
"@ReAct.add_method\n",
"def search(self, query: str):\n",
@@ -713,25 +692,25 @@
"\n",
" finally:\n",
" return observation"
- ],
- "metadata": {
- "id": "yCRB4g4BNzak"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "v3fUbHUsPyoF"
+ },
"source": [
"#### Lookup\n",
"Look for a specific phrase on the Wikipedia page."
- ],
- "metadata": {
- "id": "v3fUbHUsPyoF"
- }
+ ]
},
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "_F4kAF77O0E_"
+ },
+ "outputs": [],
"source": [
"@ReAct.add_method\n",
"def lookup(self, phrase: str, context_length=200):\n",
@@ -759,25 +738,25 @@
" result = page[max(0, start_index - context_length):start_index+len(phrase)+context_length]\n",
" print(f\"Information Source: {self._search_urls[-1]}\")\n",
" return result"
- ],
- "metadata": {
- "id": "_F4kAF77O0E_"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "Tc4mq2qlQCnE"
+ },
"source": [
"#### Finish\n",
"Instruct the pipline to terminate its execution."
- ],
- "metadata": {
- "id": "Tc4mq2qlQCnE"
- }
+ ]
},
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "0Wxpx8COPak_"
+ },
+ "outputs": [],
"source": [
"@ReAct.add_method\n",
"def finish(self, _):\n",
@@ -786,35 +765,35 @@
" \"\"\"\n",
" self.flag = False\n",
" print(f\"Information Sources: {self._search_urls}\")"
- ],
- "metadata": {
- "id": "0Wxpx8COPak_"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "### Stop tokens & function calling imitation"
- ],
"metadata": {
"id": "u9Tl6W98Zhut"
- }
+ },
+ "source": [
+ "### Stop tokens & function calling imitation"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "0VnX9zpBcdA0"
+ },
"source": [
"Now that you are all set with function definitions, the next step is to instruct the model to interrupt its execution upon encountering any of the action tokens. You will make use of the `stop_sequences` parameter from [`genai.GenerativeModel.GenerationConfig`](https://ai.google.dev/api/python/google/generativeai/GenerationConfig) class to instruct the model when to stop. Upon encountering an action token, the pipeline will simply extract what specific token from the `stop_sequences` argument terminated the model's execution thereby calling appropriate **tool**(function).\n",
"\n",
"Function's response will be added to model's chat history for continuing the context link."
- ],
- "metadata": {
- "id": "0VnX9zpBcdA0"
- }
+ ]
},
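The dispatch described above can be imitated without the SDK: inspect which action token ended the model's turn and call the matching tool. The tool stubs and the regex below are illustrative assumptions; the real pipeline passes the tokens via `stop_sequences` and calls the Wikipedia-backed methods:

```python
import re

# Tool stubs standing in for the notebook's search/lookup/finish methods.
tools = {
    "Search": lambda q: f"wiki page for {q}",
    "Lookup": lambda p: f"sentence containing {p}",
    "Finish": lambda a: a,
}

def route_action(model_text: str):
    """Imitate function calling: find which action token terminated the
    model's turn and invoke the matching tool with its argument."""
    m = re.search(r"(Search|Lookup|Finish)\[(.*?)\]\s*$", model_text.strip())
    if m is None:
        return None
    return tools[m.group(1)](m.group(2))

print(route_action("Thought 2 ...\nAction 2\nSearch[Walker Scobell age]"))
# wiki page for Walker Scobell age
```

Whatever the tool returns plays the role of the Observation that is appended to the chat history.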
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "vnQom1aQOsK8"
+ },
+ "outputs": [],
"source": [
"@ReAct.add_method\n",
"def __call__(self, max_calls: int=8, **generation_kwargs):\n",
@@ -885,95 +864,101 @@
" except (IndexError, AttributeError) as e:\n",
" model_prompt = \"Please try again in a different way. \\\n",
" , , coould not be found\""
- ],
- "metadata": {
- "id": "vnQom1aQOsK8"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "### Test ReAct prompted Gemini model"
- ],
"metadata": {
"id": "xtndhebkhW62"
- }
+ },
+ "source": [
+ "### Test ReAct prompted Gemini model"
+ ]
},
{
"cell_type": "code",
- "source": [
- "gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='/content/model_instructions.txt')"
- ],
+ "execution_count": null,
"metadata": {
"id": "h_KWkXWwfZ5h"
},
- "execution_count": null,
- "outputs": []
+ "outputs": [],
+ "source": [
+ "gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='/content/model_instructions.txt')"
+ ]
},
{
"cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "MuZn0LlvjvyV"
+ },
+ "outputs": [],
"source": [
" # Note: try different combinations of generational_config parameters for variational results\n",
"gemini_ReAct_chat()\n",
"#"
- ],
- "metadata": {
- "id": "MuZn0LlvjvyV"
- },
- "execution_count": null,
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "Now, try asking the same question to `gemini-pro` model without the ReAct prompt."
- ],
"metadata": {
"id": "ZIfeyyI6hoIE"
- }
+ },
+ "source": [
+ "Now, try asking the same question to `gemini-pro` model without the ReAct prompt."
+ ]
},
{
"cell_type": "code",
- "source": [
- "gemini_ReAct_chat.model.generate_content(\"[ask same question here]\").text"
- ],
+ "execution_count": null,
"metadata": {
"id": "_NUXNbTuakSC"
},
- "execution_count": null,
- "outputs": []
+ "outputs": [],
+ "source": [
+ "gemini_ReAct_chat.model.generate_content(\"[ask same question here]\").text"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "id": "B-jsJSyBtrP8"
+ },
"source": [
"## Summary\n",
"\n",
"The ReAct prompted Gemini model is grounded by external information sources and hence is less prone to hallucination. Furthermore, **Thought-Action-Observation** traces generated by the model enhance human interpretability and trustworthiness by allowing users to witness the model's reasoning process for answering the user's query.\n"
- ],
- "metadata": {
- "id": "B-jsJSyBtrP8"
- }
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "## Further reading\n",
- "\n"
- ],
"metadata": {
"id": "vmdNYTm5Lobz"
- }
+ },
+ "source": [
+ "## Further reading\n"
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "Head over to the [Streamlit app](https://mayochat.streamlit.app/) to interact with a ReAct prompted Gemini bot."
- ],
"metadata": {
"id": "iTiDOoTkLvH6"
- }
+ },
+ "source": [
+ "Head over to the [Streamlit app](https://mayochat.streamlit.app/) to interact with a ReAct prompted Gemini bot."
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "name": "react_gemini_prompting.ipynb",
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
}
- ]
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
}
From 3bdbd1b3e0a0c79006c8e0c00bf55479201122b5 Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Fri, 9 Feb 2024 08:38:58 +0530
Subject: [PATCH 05/17] Fix broken url
---
site/en/docs/react_gemini_prompting.ipynb | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index 34abb51e0..e0709fc9e 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -57,13 +57,13 @@
"source": [
""
]
From d8cc2e9d9ce3247a7519548b6de7a02d29649c2f Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Sat, 17 Feb 2024 04:01:35 +0530
Subject: [PATCH 06/17] Update react_gemini_prompting.ipynb
---
site/en/docs/react_gemini_prompting.ipynb | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index e0709fc9e..d03c897b7 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -37,7 +37,16 @@
"id": "J_wnYaX_hKtu"
},
"source": [
- "The Wikipedia content in this tutorial is licensed under CC BY-SA 4.0
This example is for demonstrational and educational purposes only. It is therfore advised to seek legal aid for further distribution/use.
"
+ "The Wikipedia content in this tutorial is licensed under CC BY-SA 4.0
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OCL4OiW77A7O"
+ },
+ "source": [
+ "Important: This tutorial synthesizes text from Wikipedia articles. See the [Wikipedia Copyrights](https://en.wikipedia.org/wiki/Wikipedia:Copyrights) for details on the licensing of Wikipedia content. This example is for demonstrational and educational purposes only. If you use this code or its outputs it is your responsibility to ensure that you comply with the licensing of any data source(s) you use."
]
},
{
@@ -224,7 +233,7 @@
"id": "4xsPDHz_uSYH"
},
"source": [
- "### Set up your API key\n",
+ "### Grab an API key\n",
"\n",
"Before you can use the Gemini API, you must first obtain an API key. If you don't already have one, create a key with one click in Google AI Studio.\n",
"\n",
From bd3dc2fdd4814edf92821c977a8eff6c112b7691 Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Thu, 22 Feb 2024 00:31:47 +0530
Subject: [PATCH 07/17] Update react_gemini_prompting.ipynb
---
site/en/docs/react_gemini_prompting.ipynb | 180 +++++++++++-----------
1 file changed, 93 insertions(+), 87 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index d03c897b7..f17cdbbd9 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -11,7 +11,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 1,
"metadata": {
"cellView": "form",
"id": "X4uPZ83DbUTq"
@@ -32,12 +32,18 @@
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": 2,
"metadata": {
- "id": "J_wnYaX_hKtu"
+ "cellView": "form",
+ "id": "FUqzNst0YN9P"
},
+ "outputs": [],
"source": [
- "The Wikipedia content in this tutorial is licensed under CC BY-SA 4.0
"
+ "# @title The non-source code materials in this project are licensed under Creative Commons - Attribution CC-BY 4.0
\n",
+ "\n",
+ "# The non-source code materials in this project are licensed under Creative Commons - Attribution CC-BY 4.0,\n",
+ "# https://creativecommons.org/licenses/by/4.0/legalcode."
]
},
{
@@ -46,7 +52,8 @@
"id": "OCL4OiW77A7O"
},
"source": [
- "Important: This tutorial synthesizes text from Wikipedia articles. See the [Wikipedia Copyrights](https://en.wikipedia.org/wiki/Wikipedia:Copyrights) for details on the licensing of Wikipedia content. This example is for demonstrational and educational purposes only. If you use this code or its outputs it is your responsibility to ensure that you comply with the licensing of any data source(s) you use."
+ "**Important:**\n",
+ "> This tutorial synthesizes text from Wikipedia articles. See the [Wikipedia Copyrights](https://en.wikipedia.org/wiki/Wikipedia:Copyrights) for details on the licensing of Wikipedia content. This example is for demonstrational and educational purposes only. If you use this code or its outputs it is your responsibility to ensure that you comply with the licensing of any data source(s) you use."
]
},
{
@@ -55,7 +62,7 @@
"id": "vX-FA27MbYpQ"
},
"source": [
- "# ReAct + Gemini: A prompting method for demonstrating reasoning & acting in LLMs"
+ "# ReAct + Gemini: A prompting method for demonstrating reasoning and acting in LLMs"
]
},
{
@@ -66,7 +73,7 @@
"source": [
"\n",
" \n",
- " View on Google AI\n",
+ " View on ai.google.dev\n",
" | \n",
" \n",
" Run in Google Colab\n",
@@ -98,8 +105,8 @@
"1. Set up your development environment and API access to use Gemini.\n",
"2. Prompt Gemini with ReAct.\n",
"3. Use the newly prompted model for multi-turn conversations (chat).\n",
- "4. How ReAct overcomes issues of hallucination and error propagation by seeking external groundtruth via `wikipedia api`.\n",
- "5. Have Conversations with deployed **ReAct prmopted Gemini bot 🤖**\n"
+ "4. How ReAct overcomes issues of hallucination and error propagation by seeking external groundtruth via **Wikipedia API**.\n",
+ "5. Have Conversations with deployed **ReAct prompted Gemini bot 🤖**\n"
]
},
{
@@ -119,30 +126,22 @@
"id": "PqoT0ojAcV9P"
},
"source": [
- "What(s)\n",
+ "According to the [research paper](https://arxiv.org/abs/2210.03629), ReAct is a prompting method which allows language models to showcase the trace of thinking processes/steps involved in answering a user's query. This improves human interpretability and trustworthiness. ReAct prompted models generate **Thought-Action-Observation** triplets for every iteration.\n",
"\n",
"\n",
- "1. What is ReAct?\n",
- "> ReAct is a prompting method which allows language models to showcase the trace of thinking processes/steps involved in answering a user's query. This improves human interpretability and trustworthiness. ReAct prompted models generate **Thought-Action-Observation** triplets for every iteration.\n",
- "\n",
- "Why(s)\n",
- "\n",
- "\n",
- "1. Why to use ReAct when you can simply instruct the model: \"Explain step-by-step\"\n",
+ "- Why to use ReAct when you can simply instruct the model: \"Explain step-by-step\"\n",
"> Large Language models (LLMs) generate responses by extrapolating the prompts they are provided with. This is achieved by using their internal knowledge representation. In short, they are oblivious to the external/updated information, which makes them hallucinate(generate wrong/unreal answers) at times.\n",
"\n",
" - ReAct tries to prevent hallucination by mandating language models to seek factual information strictly from the external environment they are provided with.\n",
"\n",
"\n",
- "How(s)\n",
- "\n",
- "1. How does it work?\n",
+ "- How does it work?\n",
"\n",
" - Model receives the query from the user.\n",
" - This query with previous chat history(for continuous context link) if any, will be analyzed by the model.\n",
" - The model will use its internal knowledge to decide which among the following action it needs to take.\n",
- " - Search[entity]: By analyzing the query, the model will decide what `entity` to query to the wikipedia API.\n",
- " - Lookup[phrase]: Model will analyze the content of the wikipedia page returned by `Search[entity]` action and then decide what specific information or `phrase` is needed to be retrieved from that page to answer the user's question.\n",
+ " - Search[entity]: By analyzing the query, the model will decide what `entity` to query to the Wikipedia API.\n",
+ " - Lookup[phrase]: Model will analyze the content of the Wikipedia page returned by `Search[entity]` action and then decide what specific information or `phrase` is needed to be retrieved from that page to answer the user's question.\n",
" - Finish[Answer]: Return the `Answer` to the user."
]
},
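The iteration described above (the model proposes an Action, the pipeline runs a tool, and the Observation is fed back) can be sketched with a stubbed model. The `fake_model` responses and the tool here are illustrative assumptions, not the notebook's code:

```python
def react_loop(question, model_step, tool, max_calls=8):
    """One simplified ReAct loop: ask the model for the next Action,
    run the matching tool, feed the Observation back, repeat."""
    history = [question]
    for _ in range(max_calls):
        action, arg = model_step("\n".join(history))
        if action == "Finish":
            return arg  # the model is done; return its answer
        history.append(f"Observation: {tool(arg)}")
    return None  # gave up after max_calls iterations

# Stubbed model: issues one Search, then answers once it sees an Observation.
def fake_model(context):
    if "Observation" not in context:
        return ("Search", "Percy Jackson TV series")
    return ("Finish", "45")

answer = react_loop("Sum of the trio's ages?", fake_model, lambda q: f"results for {q}")
print(answer)  # 45
```

The `max_calls` cap mirrors the `max_calls` parameter of the notebook's `__call__` method, preventing an unbounded Thought-Action-Observation loop.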
@@ -165,30 +164,41 @@
"\n",
"The Python SDK for the Gemini API, is contained in the [`google-generativeai`](https://pypi.org/project/google-generativeai/) package. Install the dependency using pip:\n",
"\n",
- "You will also need to install the `wikipedia` API.\n"
+ "You will also need to install the **Wikipedia** API.\n"
]
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 3,
"metadata": {
"id": "Twc_XZ7h7Bb4"
},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- " Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
- " Building wheel for wikipedia (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
- ]
- }
- ],
- "source": [
- "!pip install -q google.generativeai\n",
+ "outputs": [],
+ "source": [
+ "!pip install -q google.generativeai"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "7oZwkgQpfrLl"
+ },
+ "outputs": [],
+ "source": [
"!pip install -q wikipedia"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "DVWIqdtbffau"
+ },
+ "source": [
+ "**Note:**\n",
+ "> This library was designed for ease of use and simplicity, not for advanced use. If you plan on doing serious scraping or automated requests, please use [Pywikipediabot](http://www.mediawiki.org/wiki/Manual:Pywikipediabot) or one of the other more advanced [Python MediaWiki API wrappers](http://en.wikipedia.org/wiki/Wikipedia:Creating_a_bot#Python), which has a larger API, rate limiting, and other features so we can be considerate of the MediaWiki infrastructure."
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {
@@ -209,14 +219,12 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 5,
"metadata": {
"id": "Jz5HOLy47VX0"
},
"outputs": [],
"source": [
- "from typing import Any, Dict, Optional\n",
- "from contextlib import contextmanager\n",
"import re\n",
"import os\n",
"\n",
@@ -233,7 +241,7 @@
"id": "4xsPDHz_uSYH"
},
"source": [
- "### Grab an API key\n",
+ "### Set up your API key\n",
"\n",
"Before you can use the Gemini API, you must first obtain an API key. If you don't already have one, create a key with one click in Google AI Studio.\n",
"\n",
@@ -263,7 +271,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 6,
"metadata": {
"id": "JAzIedGr9PdN"
},
@@ -297,13 +305,13 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 7,
"metadata": {
"id": "g8klL8df4iXe"
},
"outputs": [],
"source": [
- "model_instructions = \"\"\"Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, Observation is gauging relevant information from an Action's output and Action can be of three types:\n",
+ "model_instructions = \"\"\"Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, Observation is understanding relevant information from an Action's output and Action can be of three types:\n",
"(1) entity, which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search and you can try to search the information from those topics.\n",
"(2) keyword, which returns the next sentence containing keyword in the current context.\n",
"(3) answer, which returns the answer and finishes the task.\n",
@@ -330,7 +338,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 8,
"metadata": {
"id": "tZ7vezr02qv0"
},
@@ -534,7 +542,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 9,
"metadata": {
"id": "ZyTfAdpk26oB"
},
@@ -574,7 +582,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 10,
"metadata": {
"id": "vssDZcroN-Ob"
},
@@ -587,11 +595,12 @@
" task-specific actions in an interleaved manner.\n",
"\n",
" Args:\n",
- " model (str): name to the model\n",
- " ReAct_prompt (str): ReAct prompt OR path to the ReAct prompt\n",
+ " model: name of the model.\n",
+ " ReAct_prompt: ReAct prompt OR path to the ReAct prompt file.\n",
" \"\"\"\n",
" self.model = genai.GenerativeModel(model)\n",
" self.chat = self.model.start_chat(history=[])\n",
+ " self.should_continue_prompting = True\n",
" self._search_history: list[str] = []\n",
" self._search_urls: list[str] = []\n",
"\n",
@@ -613,8 +622,7 @@
"\n",
" @staticmethod\n",
" def clean(text: str):\n",
- " \"\"\"helper function for responses\n",
- " \"\"\"\n",
+ " \"\"\"Helper function for responses.\"\"\"\n",
" text = text.replace(\"\\n\", \" \")\n",
" return text"
]
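The cells that follow attach the tools to the class with an `@ReAct.add_method` decorator. The decorator's body is not shown in this diff; one common way to implement it is a classmethod that `setattr`s the function onto the class after its definition (the `greet` example is purely illustrative):

```python
class ReAct:
    @classmethod
    def add_method(cls, func):
        # Attach `func` as a regular method after the class is defined.
        setattr(cls, func.__name__, func)
        return func

@ReAct.add_method
def greet(self, name):
    return f"hello {name}"

print(ReAct().greet("world"))  # hello world
```

This pattern lets a notebook define one method per cell while keeping them all on a single class.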
@@ -625,7 +633,7 @@
"id": "xKfThpmhMZYZ"
},
"source": [
- "### Function(tools) definitions\n"
+ "### Define tools\n"
]
},
{
@@ -638,7 +646,7 @@
"\n",
"\n",
 "1. `<search>`: Perform a Wikipedia search via an external API.\n",
- "2. `<lookup>`: Lookup for specific information on a page with the wikipedia API.\n",
+ "2. `<lookup>`: Look up specific information on a page with the Wikipedia API.\n",
 "3. `<finish>`: Stop the execution of the model and return the answer.\n",
"\n",
 "If the model encounters any of these tokens, it should make use of the corresponding tool made available to it. This ability of the model to leverage its tools to collect information from the external world is often referred to as **function calling**. Therefore, the next goal is to imitate this function-calling technique so that the ReAct-prompted Gemini model can access external ground truth.\n",
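Dispatching on a token name can be done with `getattr`, which is essentially what the driver loop later does with `self.__getattribute__(cmd)(query)`. A minimal, model-free sketch (all names illustrative):

```python
class Tools:
    def search(self, q):
        return f"searched: {q}"
    def lookup(self, q):
        return f"looked up: {q}"
    def finish(self, q):
        return f"answer: {q}"

def call_tool(obj, cmd, query):
    # Map the token name emitted by the model to a bound method.
    return getattr(obj, cmd)(query)

print(call_tool(Tools(), "search", "Gemini"))  # searched: Gemini
```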
@@ -661,7 +669,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 11,
"metadata": {
"id": "yCRB4g4BNzak"
},
@@ -669,17 +677,18 @@
"source": [
"@ReAct.add_method\n",
"def search(self, query: str):\n",
- " \"\"\"perfoms search on `query` via wikipedia api and returns it's summary.\n",
+ " \"\"\"Performs a search on `query` via the Wikipedia API and returns its summary.\n",
"\n",
" Args:\n",
- " query (str): Search parameter to query the wikipedia API with.\n",
+ " query: Search parameter to query the Wikipedia API with.\n",
"\n",
" Returns:\n",
- " observation: Summary of wikipedia search for `query` if found else\n",
- " similar search results\n",
+ " observation: Summary of Wikipedia search for `query` if found else\n",
+ " similar search results.\n",
" \"\"\"\n",
+ " observation = None\n",
" try:\n",
- " # try to get the summary for requested `query` from the wikipedia\n",
+ " # try to get the summary for the requested `query` from Wikipedia\n",
" observation = wikipedia.summary(query, sentences=4)\n",
" wiki_url = wikipedia.page(query, auto_suggest=False).url\n",
" observation = self.clean(observation)\n",
@@ -696,11 +705,12 @@
"\n",
" # if the page is ambiguous/does not exist, return similar search phrases for model's context\n",
" except (DisambiguationError, PageError) as e:\n",
+ " observation = f'Could not find [\" {query} \"].'\n",
+ " # get a list of similar search topics\n",
" search_results = wikipedia.search(query)\n",
- " observation = f'Could not find [\" {query} \"]. Similar: {search_results}. You should search for {search_results[0]} instead.'\n",
+ " observation += f' Similar: {search_results}. You should search for {search_results[0]} instead.'\n",
"\n",
- " finally:\n",
- " return observation"
+ " return observation"
]
},
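The `except (DisambiguationError, PageError)` branch above builds a hint for the model out of `wikipedia.search` results. Its string construction can be exercised without any network access; this stand-alone helper mirrors that branch (the function name is illustrative):

```python
def fallback_message(query, search_results):
    # Mirrors the observation built when a page is ambiguous or missing.
    observation = f'Could not find ["{query}"].'
    observation += (f' Similar: {search_results}.'
                    f' You should search for {search_results[0]} instead.')
    return observation

print(fallback_message("olympics 2024", ["2024 Summer Olympics", "Olympic Games"]))
```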
{
@@ -715,7 +725,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 12,
"metadata": {
"id": "_F4kAF77O0E_"
},
@@ -723,21 +733,21 @@
"source": [
"@ReAct.add_method\n",
"def lookup(self, phrase: str, context_length=200):\n",
- " \"\"\"searches for the `phrase` in the lastest wikipedia search page\n",
+ " \"\"\"Searches for the `phrase` in the latest Wikipedia search page\n",
 " and returns the surrounding context, whose length is controlled by the\n",
- " `context_length` parameter\n",
+ " `context_length` parameter.\n",
"\n",
" Args:\n",
- " phrase (str): Lookup phrase to search for within a page. Generally\n",
+ " phrase: Lookup phrase to search for within a page. Generally\n",
 " refers to some specific aspect of the topic.\n",
"\n",
- " context_length (int) default=100: Number of words to consider\n",
+ " context_length: Number of words to consider\n",
" while looking for the answer.\n",
"\n",
" Returns:\n",
- " result (str): Context related to the `phrase` within the page.\n",
+ " result: Context related to the `phrase` within the page.\n",
" \"\"\"\n",
- " # get the last searched wikipedia page and find `phrase` in it.\n",
+ " # get the last searched Wikipedia page and find `phrase` in it.\n",
" page = wikipedia.page(self._search_history[-1], auto_suggest=False)\n",
" page = page.content\n",
" page = self.clean(page)\n",
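The diff truncates the body of `lookup`; the following stand-in shows the general idea of returning a window of text around the first match of `phrase`, sized by `context_length`. The exact slicing scheme is an assumption for illustration, not the notebook's own logic:

```python
import re

def lookup_context(page: str, phrase: str, context_length: int = 200) -> str:
    # Find the first case-insensitive match of `phrase` and return a
    # window of roughly `context_length` characters around it.
    m = re.search(re.escape(phrase), page, flags=re.IGNORECASE)
    if m is None:
        return ""
    start = max(0, m.start() - context_length // 2)
    return page[start:m.end() + context_length // 2]

page = "The Eiffel Tower is in Paris. It was completed in 1889 for the World's Fair."
print(lookup_context(page, "completed", context_length=20))
```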
@@ -761,7 +771,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 13,
"metadata": {
"id": "0Wxpx8COPak_"
},
@@ -769,10 +779,10 @@
"source": [
"@ReAct.add_method\n",
"def finish(self, _):\n",
- " \"\"\"finishes the conversation on encoutring token by\n",
- " setting the flag to `False`\n",
+ " \"\"\"Finishes the conversation on encountering the <finish> token by\n",
+ " setting the `self.should_continue_prompting` flag to `False`.\n",
" \"\"\"\n",
- " self.flag = False\n",
+ " self.should_continue_prompting = False\n",
" print(f\"Information Sources: {self._search_urls}\")"
]
},
@@ -782,7 +792,7 @@
"id": "u9Tl6W98Zhut"
},
"source": [
- "### Stop tokens & function calling imitation"
+ "### Stop tokens and function calling imitation"
]
},
{
@@ -798,7 +808,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 14,
"metadata": {
"id": "vnQom1aQOsK8"
},
@@ -809,8 +819,7 @@
 " \"\"\"Starts a multi-turn conversation with the chat model using function calling.\n",
"\n",
" Args:\n",
- " max_calls (int) default=8: max calls made to the model to\n",
- " get the final answer.\n",
+ " max_calls: max calls made to the model to get the final answer.\n",
"\n",
" generation_kwargs: Same as genai.GenerativeModel.GenerationConfig\n",
" candidate_count: (int | None) = None,\n",
@@ -837,12 +846,9 @@
 " # stop_sequences for the model to imitate function calling\n",
 " callable_entities = ['</search>', '</lookup>', '</finish>']\n",
"\n",
- " if generation_kwargs:\n",
- " generation_kwargs.update({'stop_sequences': callable_entities})\n",
- " else:\n",
- " generation_kwargs = {'stop_sequences': callable_entities}\n",
+ " generation_kwargs.update({'stop_sequences': callable_entities})\n",
"\n",
- " self.flag = True\n",
+ " self.should_continue_prompting = True\n",
" for idx in range(max_calls):\n",
"\n",
" self.response = self.chat.send_message(content=[model_prompt],\n",
@@ -862,7 +868,7 @@
" # call to appropriate function\n",
" observation = self.__getattribute__(cmd)(query)\n",
"\n",
- " if not self.flag:\n",
+ " if not self.should_continue_prompting:\n",
" break\n",
"\n",
" stream_message = f\"\\nObservation {idx + 1}\\n{observation}\"\n",
@@ -871,8 +877,8 @@
 " model_prompt = f\"<{cmd}>{query}</{cmd}>'s Output: {stream_message}\"\n",
"\n",
" except (IndexError, AttributeError) as e:\n",
- " model_prompt = \"Please try again in a different way. \\\n",
- " , , coould not be found\""
+ " model_prompt = \"Please try to generate thought-action-observation traces \\\n",
+ " as instructed by the prompt.\""
]
},
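Because generation stops at one of the closing tokens, the response text ends just after an opening token and its argument, e.g. `...Action 1\n<search>2006 FIFA world cup`. The extraction logic is elided in this diff; one way to parse such a tail, raising `IndexError` on a malformed trace as the except clause above anticipates, is:

```python
import re

def parse_action(response_text: str):
    # Take the last opening token in the trace; everything after it is the query.
    cmd = re.findall(r'<(search|lookup|finish)>', response_text)[-1]
    query = response_text.split(f'<{cmd}>')[-1].strip()
    return cmd, query

text = "Thought 1\nI should search it.\n\nAction 1\n<search>2006 FIFA world cup"
print(parse_action(text))  # ('search', '2006 FIFA world cup')
```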
{
@@ -886,13 +892,13 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 18,
"metadata": {
"id": "h_KWkXWwfZ5h"
},
"outputs": [],
"source": [
- "gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='/content/model_instructions.txt')"
+ "gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='model_instructions.txt')"
]
},
{
@@ -903,7 +909,7 @@
},
"outputs": [],
"source": [
- " # Note: try different combinations of generational_config parameters for variational results\n",
+ "# Note: try different combinations of generation_config parameters for varied results\n",
"gemini_ReAct_chat()\n",
"#"
]
From 9ff17d7fd62d0e277254b390858d40c709548672 Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Thu, 22 Feb 2024 00:41:43 +0530
Subject: [PATCH 08/17] Fix styling
---
site/en/docs/react_gemini_prompting.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index f17cdbbd9..08db10264 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -196,7 +196,7 @@
},
"source": [
"**Note:**\n",
- "> This library was designed for ease of use and simplicity, not for advanced use. If you plan on doing serious scraping or automated requests, please use [Pywikipediabot](http://www.mediawiki.org/wiki/Manual:Pywikipediabot) or one of the other more advanced [Python MediaWiki API wrappers](http://en.wikipedia.org/wiki/Wikipedia:Creating_a_bot#Python), which has a larger API, rate limiting, and other features so we can be considerate of the MediaWiki infrastructure."
+ "> This library was designed for ease of use and simplicity, not for advanced use. If you plan on doing serious scraping or automated requests, please use [Pywikipediabot](http://www.mediawiki.org/wiki/Manual:Pywikipediabot) or one of the other more advanced [Python MediaWiki API wrappers](http://en.wikipedia.org/wiki/Wikipedia:Creating_a_bot#Python), which has a larger API, rate limiting, and other features so you can be considerate of the MediaWiki infrastructure."
]
},
{
From 7f1e75547fec4e2737442d6f43bf99165561839a Mon Sep 17 00:00:00 2001
From: Mark Daoust
Date: Wed, 21 Feb 2024 16:48:42 -0800
Subject: [PATCH 09/17] Fix double license.
---
site/en/docs/react_gemini_prompting.ipynb | 17 +++--------------
1 file changed, 3 insertions(+), 14 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index 08db10264..1f64987c7 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -40,20 +40,9 @@
},
"outputs": [],
"source": [
- "# @title The non-source code materials in this project are licensed under Creative Commons - Attribution CC-BY 4.0\n",
- "\n",
- "# The non-source code materials in this project are licensed under Creative Commons - Attribution CC-BY 4.0,\n",
- "# https://creativecommons.org/licenses/by/4.0/legalcode."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "OCL4OiW77A7O"
- },
- "source": [
- "**Important:**\n",
- "> This tutorial synthesizes text from Wikipedia articles. See the [Wikipedia Copyrights](https://en.wikipedia.org/wiki/Wikipedia:Copyrights) for details on the licensing of Wikipedia content. This example is for demonstrational and educational purposes only. If you use this code or its outputs it is your responsibility to ensure that you comply with the licensing of any data source(s) you use."
+ "# @title\n",
+ "# The non-source code materials on this page are licensed under Creative Commons - Attribution-ShareAlike CC-BY-SA 4.0,\n",
+ "# https://creativecommons.org/licenses/by-sa/4.0/legalcode."
]
},
{
From 3c3554971eea76aecce86d2f56dfc24c8bb1b5eb Mon Sep 17 00:00:00 2001
From: Mark Daoust
Date: Wed, 21 Feb 2024 16:53:36 -0800
Subject: [PATCH 10/17] Update react_gemini_prompting.ipynb
Remove @title
---
site/en/docs/react_gemini_prompting.ipynb | 1 -
1 file changed, 1 deletion(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index 1f64987c7..f25aaf7ab 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -40,7 +40,6 @@
},
"outputs": [],
"source": [
- "# @title\n",
"# The non-source code materials on this page are licensed under Creative Commons - Attribution-ShareAlike CC-BY-SA 4.0,\n",
"# https://creativecommons.org/licenses/by-sa/4.0/legalcode."
]
From 33227199fc768c54a3dd34cf5e61427782118bd9 Mon Sep 17 00:00:00 2001
From: Mark Daoust
Date: Wed, 21 Feb 2024 17:04:30 -0800
Subject: [PATCH 11/17] Fix Note.
---
site/en/docs/react_gemini_prompting.ipynb | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index f25aaf7ab..bf7f17040 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -183,8 +183,7 @@
"id": "DVWIqdtbffau"
},
"source": [
- "**Note:**\n",
- "> This library was designed for ease of use and simplicity, not for advanced use. If you plan on doing serious scraping or automated requests, please use [Pywikipediabot](http://www.mediawiki.org/wiki/Manual:Pywikipediabot) or one of the other more advanced [Python MediaWiki API wrappers](http://en.wikipedia.org/wiki/Wikipedia:Creating_a_bot#Python), which has a larger API, rate limiting, and other features so you can be considerate of the MediaWiki infrastructure."
+ "> Note: This library was designed for ease of use and simplicity, not for advanced use. If you plan on doing serious scraping or automated requests, please use [Pywikipediabot](http://www.mediawiki.org/wiki/Manual:Pywikipediabot) or one of the other more advanced [Python MediaWiki API wrappers](http://en.wikipedia.org/wiki/Wikipedia:Creating_a_bot#Python), which has a larger API, rate limiting, and other features so you can be considerate of the MediaWiki infrastructure."
]
},
{
From 8c03dc7ad906b69abd02c491fb78bda746d9dd7e Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Mon, 26 Feb 2024 11:45:37 +0530
Subject: [PATCH 12/17] Add info about licensing of prompt and in-context
examples
---
site/en/docs/react_gemini_prompting.ipynb | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index bf7f17040..f135fb957 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -290,6 +290,13 @@
"Here, you will be working with the following ReAct prompt with a few minor adjustments."
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "> Note: The prompt and in-context examples used here are taken from [https://github.com/ysymyth/ReAct](https://github.com/ysymyth/ReAct) which is published under [MIT license](https://opensource.org/licenses/MIT)."
+ ]
+ },
{
"cell_type": "code",
"execution_count": 7,
From 2712331865866911ba05d82f79c5b4d9542b26b4 Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Mon, 26 Feb 2024 11:51:33 +0530
Subject: [PATCH 13/17] format
---
site/en/docs/react_gemini_prompting.ipynb | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index f135fb957..af2575fcb 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -292,7 +292,9 @@
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "id": "07ed55c29a1d"
+ },
"source": [
"> Note: The prompt and in-context examples used here are taken from [https://github.com/ysymyth/ReAct](https://github.com/ysymyth/ReAct) which is published under [MIT license](https://opensource.org/licenses/MIT)."
]
From ce6051ee32bb0f0811b764b68b80a17dc7fcb973 Mon Sep 17 00:00:00 2001
From: Mark Daoust
Date: Mon, 4 Mar 2024 16:58:24 -0800
Subject: [PATCH 14/17] Pr255 (#1)
* Some fixes + add outputs
* add example
---
site/en/docs/react_gemini_prompting.ipynb | 136 +++++++++++++++-------
1 file changed, 93 insertions(+), 43 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index af2575fcb..539cef3ff 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -161,18 +161,36 @@
"metadata": {
"id": "Twc_XZ7h7Bb4"
},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\u001b[33mWARNING: There was an error checking the latest version of pip.\u001b[0m\u001b[33m\n",
+ "\u001b[0m"
+ ]
+ }
+ ],
"source": [
"!pip install -q google.generativeai"
]
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 4,
"metadata": {
"id": "7oZwkgQpfrLl"
},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\u001b[33mWARNING: There was an error checking the latest version of pip.\u001b[0m\u001b[33m\n",
+ "\u001b[0m"
+ ]
+ }
+ ],
"source": [
"!pip install -q wikipedia"
]
@@ -206,7 +224,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 6,
"metadata": {
"id": "Jz5HOLy47VX0"
},
@@ -218,8 +236,7 @@
"import wikipedia\n",
"from wikipedia.exceptions import DisambiguationError, PageError\n",
"\n",
- "import google.generativeai as genai\n",
- "from google.colab import userdata"
+ "import google.generativeai as genai"
]
},
{
@@ -258,15 +275,18 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 7,
"metadata": {
"id": "JAzIedGr9PdN"
},
"outputs": [],
"source": [
- "# Or use `os.getenv('GOOGLE_API_KEY')` to fetch an environment variable.\n",
- "GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n",
- "\n",
+ "try:\n",
+ " from google.colab import userdata\n",
+ " GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n",
+ "except ImportError as e:\n",
+ " import os\n",
+ " GOOGLE_API_KEY = os.environ['GOOGLE_API_KEY']\n",
"genai.configure(api_key=GOOGLE_API_KEY)"
]
},
@@ -301,7 +321,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 29,
"metadata": {
"id": "g8klL8df4iXe"
},
@@ -309,7 +329,7 @@
"source": [
"model_instructions = \"\"\"Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, Observation is understanding relevant information from an Action's output and Action can be of three types:\n",
"(1) entity, which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search and you can try to search the information from those topics.\n",
- "(2) keyword, which returns the next sentence containing keyword in the current context.\n",
+ "(2) keyword, which returns the next sentence containing keyword in the current context. This only does exact matches, so keep your searches short.\n",
"(3) answer, which returns the answer and finishes the task.\n",
"\"\"\""
]
@@ -334,7 +354,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 30,
"metadata": {
"id": "tZ7vezr02qv0"
},
@@ -538,7 +558,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": 31,
"metadata": {
"id": "ZyTfAdpk26oB"
},
@@ -578,7 +598,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 59,
"metadata": {
"id": "vssDZcroN-Ob"
},
@@ -665,7 +685,7 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": 60,
"metadata": {
"id": "yCRB4g4BNzak"
},
@@ -683,9 +703,10 @@
" similar search results.\n",
" \"\"\"\n",
" observation = None\n",
+ " query = query.strip()\n",
" try:\n",
 " # try to get the summary for the requested `query` from Wikipedia\n",
- " observation = wikipedia.summary(query, sentences=4)\n",
+ " observation = wikipedia.summary(query, sentences=4, auto_suggest=False)\n",
" wiki_url = wikipedia.page(query, auto_suggest=False).url\n",
" observation = self.clean(observation)\n",
"\n",
@@ -701,10 +722,10 @@
"\n",
" # if the page is ambiguous/does not exist, return similar search phrases for model's context\n",
" except (DisambiguationError, PageError) as e:\n",
- " observation = f'Could not find [\" {query} \"].'\n",
+ " observation = f'Could not find [\"{query}\"].'\n",
" # get a list of similar search topics\n",
" search_results = wikipedia.search(query)\n",
- " observation += f' Similar: {search_results}. You should search for {search_results[0]} instead.'\n",
+ " observation += f' Similar: {search_results}. You should search for one of those instead.'\n",
"\n",
" return observation"
]
@@ -721,7 +742,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": 61,
"metadata": {
"id": "_F4kAF77O0E_"
},
@@ -767,7 +788,7 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": 62,
"metadata": {
"id": "0Wxpx8COPak_"
},
@@ -804,14 +825,14 @@
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": 63,
"metadata": {
"id": "vnQom1aQOsK8"
},
"outputs": [],
"source": [
"@ReAct.add_method\n",
- "def __call__(self, max_calls: int=8, **generation_kwargs):\n",
+ "def __call__(self, user_question, max_calls: int=8, **generation_kwargs):\n",
 " \"\"\"Starts a multi-turn conversation with the chat model using function calling.\n",
"\n",
" Args:\n",
@@ -832,8 +853,6 @@
" # hyperparameter fine-tuned according to the paper\n",
" assert 0 < max_calls <= 8, \"max_calls must be between 1 and 8\"\n",
"\n",
- " user_question = input(\"You: \") # your query goes here\n",
- "\n",
" if len(self.chat.history) == 0:\n",
" model_prompt = self.prompt.format(question=user_question)\n",
" else:\n",
@@ -888,26 +907,46 @@
},
{
"cell_type": "code",
- "execution_count": 18,
+ "execution_count": 85,
"metadata": {
"id": "h_KWkXWwfZ5h"
},
- "outputs": [],
- "source": [
- "gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='model_instructions.txt')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "MuZn0LlvjvyV"
- },
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Thought 1\n",
+ "I need to search the 2006 FIFA world cup and find which team won.\n",
+ "\n",
+ "Action 1\n",
+ "2006 FIFA world cup \n",
+ "Information Source: https://en.wikipedia.org/wiki/2006_FIFA_World_Cup\n",
+ "\n",
+ "Observation 1\n",
+ "The 2006 FIFA World Cup was the 18th FIFA World Cup, the quadrennial international football world championship tournament. It was held from 9 June to 9 July 2006 in Germany, which had won the right to host the event in July 2000. Teams representing 198 national football associations from all six populated continents participated in the qualification process which began in September 2003.\n",
+ "Thought 2\n",
+ "The paragraph does not specify which team won the cup, but it does mention the tournament was held in Germany. So I need to look up Germany.\n",
+ "\n",
+ "Action 2\n",
+ "Germany \n",
+ "Information Source: https://en.wikipedia.org/wiki/2006_FIFA_World_Cup\n",
+ "\n",
+ "Observation 2\n",
+ "The 2006 FIFA World Cup was the 18th FIFA World Cup, the quadrennial international football world championship tournament. It was held from 9 June to 9 July 2006 in Germany, which had won the right to host the event in July 2000. Teams representing 198 national football associations from all six populated continents participated in the qualification process which began \n",
+ "Thought 3\n",
+ "Germany won the right to host the event in July 2 000. So Germany must be the winner.\n",
+ "\n",
+ "Action 3\n",
+ "Germany \n",
+ "Information Sources: ['https://en.wikipedia.org/wiki/2006_FIFA_World_Cup']\n"
+ ]
+ }
+ ],
"source": [
+ "gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='model_instructions.txt')\n",
 "# Note: try different combinations of generation_config parameters for varied results\n",
- "gemini_ReAct_chat()\n",
- "#"
+ "gemini_ReAct_chat(\"Which team won the 2006 FIFA world cup?\")"
]
},
{
@@ -921,13 +960,24 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 86,
"metadata": {
"id": "_NUXNbTuakSC"
},
- "outputs": [],
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "'Italy'"
+ ]
+ },
+ "execution_count": 86,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
"source": [
- "gemini_ReAct_chat.model.generate_content(\"[ask same question here]\").text"
+ "gemini_ReAct_chat.model.generate_content(\"Who won the 2006 world cup?\").text"
]
},
{
From 718d19daa009e89c381ba086f013c2fbe4b22271 Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Tue, 5 Mar 2024 08:43:55 +0530
Subject: [PATCH 15/17] Add outputs and format
---
site/en/docs/react_gemini_prompting.ipynb | 108 ++++++++++++----------
1 file changed, 60 insertions(+), 48 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index 539cef3ff..75668bf6d 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -157,40 +157,22 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 1,
"metadata": {
"id": "Twc_XZ7h7Bb4"
},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\u001b[33mWARNING: There was an error checking the latest version of pip.\u001b[0m\u001b[33m\n",
- "\u001b[0m"
- ]
- }
- ],
+ "outputs": [],
"source": [
"!pip install -q google.generativeai"
]
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 2,
"metadata": {
"id": "7oZwkgQpfrLl"
},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\u001b[33mWARNING: There was an error checking the latest version of pip.\u001b[0m\u001b[33m\n",
- "\u001b[0m"
- ]
- }
- ],
+ "outputs": [],
"source": [
"!pip install -q wikipedia"
]
@@ -224,7 +206,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": null,
"metadata": {
"id": "Jz5HOLy47VX0"
},
@@ -321,7 +303,7 @@
},
{
"cell_type": "code",
- "execution_count": 29,
+ "execution_count": 8,
"metadata": {
"id": "g8klL8df4iXe"
},
@@ -354,7 +336,7 @@
},
{
"cell_type": "code",
- "execution_count": 30,
+ "execution_count": 9,
"metadata": {
"id": "tZ7vezr02qv0"
},
@@ -558,7 +540,7 @@
},
{
"cell_type": "code",
- "execution_count": 31,
+ "execution_count": 10,
"metadata": {
"id": "ZyTfAdpk26oB"
},
@@ -598,7 +580,7 @@
},
{
"cell_type": "code",
- "execution_count": 59,
+ "execution_count": 11,
"metadata": {
"id": "vssDZcroN-Ob"
},
@@ -685,7 +667,7 @@
},
{
"cell_type": "code",
- "execution_count": 60,
+ "execution_count": 12,
"metadata": {
"id": "yCRB4g4BNzak"
},
@@ -742,7 +724,7 @@
},
{
"cell_type": "code",
- "execution_count": 61,
+ "execution_count": 13,
"metadata": {
"id": "_F4kAF77O0E_"
},
@@ -788,7 +770,7 @@
},
{
"cell_type": "code",
- "execution_count": 62,
+ "execution_count": 14,
"metadata": {
"id": "0Wxpx8COPak_"
},
@@ -825,7 +807,7 @@
},
{
"cell_type": "code",
- "execution_count": 63,
+ "execution_count": 15,
"metadata": {
"id": "vnQom1aQOsK8"
},
@@ -907,7 +889,7 @@
},
{
"cell_type": "code",
- "execution_count": 85,
+ "execution_count": 20,
"metadata": {
"id": "h_KWkXWwfZ5h"
},
@@ -917,36 +899,66 @@
"output_type": "stream",
"text": [
"Thought 1\n",
- "I need to search the 2006 FIFA world cup and find which team won.\n",
+ "I need to search the main trio from the new Percy Jackson and the Olympians TV series, find their ages in real life, then sum them up.\n",
"\n",
"Action 1\n",
- "2006 FIFA world cup \n",
- "Information Source: https://en.wikipedia.org/wiki/2006_FIFA_World_Cup\n",
+ "Percy Jackson and the Olymp ians TV series \n",
"\n",
"Observation 1\n",
- "The 2006 FIFA World Cup was the 18th FIFA World Cup, the quadrennial international football world championship tournament. It was held from 9 June to 9 July 2006 in Germany, which had won the right to host the event in July 2000. Teams representing 198 national football associations from all six populated continents participated in the qualification process which began in September 2003.\n",
+ "Could not find [\"Percy Jackson and the Olympians TV series\"]. Similar: ['Percy Jackson and the Olympians (TV series)', 'Percy Jackson & the Olympians', 'Percy Jackson (film series)', 'Percy Jackson & the Olympians: The Lightning Thief', 'Percy Jackson (disambiguation)', 'Percy Jackson', 'List of characters in mythology novels by Rick Riordan', 'The Lightning Thief', 'The Heroes of Olympus', 'Walker Scobell']. You should search for one of those instead.\n",
"Thought 2\n",
- "The paragraph does not specify which team won the cup, but it does mention the tournament was held in Germany. So I need to look up Germany.\n",
+ "I can search Percy Jackson and the Olympians (TV series ) instead.\n",
"\n",
"Action 2\n",
- "Germany \n",
- "Information Source: https://en.wikipedia.org/wiki/2006_FIFA_World_Cup\n",
+ "Percy Jackson and the Olympians (TV series) \n",
+ "Information Source: https://en.wikipedia.org/wiki/Percy_Jackson_and_the_Olympians_(TV_series)\n",
"\n",
"Observation 2\n",
- "The 2006 FIFA World Cup was the 18th FIFA World Cup, the quadrennial international football world championship tournament. It was held from 9 June to 9 July 2006 in Germany, which had won the right to host the event in July 2000. Teams representing 198 national football associations from all six populated continents participated in the qualification process which began \n",
+ "Percy Jackson and the Olympians is an American fantasy television series created by Rick Riordan and Jonathan E. Steinberg for Disney+, based on the book series of the same name by Riordan. Walker Scobell stars as Percy Jackson, alongside Leah Sava Jeffries and Aryan Simhadri. Development on the series began in May 2020, following a pitch by Riordan to Disney Branded Television.\n",
"Thought 3\n",
- "Germany won the right to host the event in July 2 000. So Germany must be the winner.\n",
+ "The main trio from the new Percy Jackson and the Olympians TV series are Walker Scobell, Leah Sava Jeffries, and Aryan Simhadri. I need to find their ages in real life.\n",
"\n",
"Action 3\n",
- "Germany \n",
- "Information Sources: ['https://en.wikipedia.org/wiki/2006_FIFA_World_Cup']\n"
+ " Walker Scobell \n",
+ "Information Source: https://en.wikipedia.org/wiki/Walker_Scobell\n",
+ "\n",
+ "Observation 3\n",
+ "Walker Scobell, born on January 5, 2009, is an American actor who has starred in the 2022 action comedy films The Adam Project and Secret Headquarters.\n",
+ "\n",
+ "In 2023, Scobell began playing the title character of Percy Jackson in the Disney+ fantasy series Percy Jackson and the Olympians.\n",
+ "Thought 4\n",
+ "Walker Scobell was born on January 5, 2009, so he is 14 years old. I need to find the ages of Leah Sava Jeffries and Aryan Simhadri.\n",
+ "\n",
+ "Action 4\n",
+ "Leah Sava Jeffries \n",
+ "Information Source: https://en.wikipedia.org/wiki/Leah_Jeffries\n",
+ "\n",
+ "Observation 4\n",
+ "Leah Sava Jeffries, born on September 25, 2009, is an American child actress. She made her acting debut in the American musical drama, Empire, in 2015, and later made her feature film debut in the action-thriller Beast, released in 2022.\n",
+ "Thought 5\n",
+ "Leah Sava Jeffries was born on September 25, 2009, so she is 13 years old. I need to find the age of Aryan Simhadri.\n",
+ "\n",
+ "Action 5\n",
+ "Aryan Simhadri \n",
+ "Information Source: https://en.wikipedia.org/wiki/Aryan_Simhadri\n",
+ "\n",
+ "Observation 5\n",
+ "Aryan Simhadri, born on May 6, 2006, is an American actor of Indian descent. He is best known for portraying Grover Underwood in the Disney+ series \"Percy Jackson and the Olympians.\" \n",
+ "\n",
+ "In 2021, Simhadri made his Broadway debut as Walter in the production of \"Trevor: The Musical.\"\n",
+ "Thought 6\n",
+ "Aryan Simhadri was born on May 6, 2006, so he is 17 years old. The sum of ages of the main trio from the new Percy Jackson and the Olympians TV series is 14 + 13 + 17 = 44.\n",
+ "\n",
+ "Action 6\n",
+ "44 \n",
+ "Information Sources: ['https://en.wikipedia.org/wiki/Percy_Jackson_and_the_Olympians_(TV_series)', 'https://en.wikipedia.org/wiki/Walker_Scobell', 'https://en.wikipedia.org/wiki/Leah_Jeffries', 'https://en.wikipedia.org/wiki/Aryan_Simhadri']\n"
]
}
],
"source": [
"gemini_ReAct_chat = ReAct(model='gemini-pro', ReAct_prompt='model_instructions.txt')\n",
"# Note: try different combinations of generation_config parameters for varied results\n",
- "gemini_ReAct_chat(\"Which team won the 2006 FIFA world cup?\")"
+ "gemini_ReAct_chat(\"What is the total of ages of the main trio from the new Percy Jackson and the Olympians TV series in real life?\", temperature=0.2)"
]
},
{
@@ -960,7 +972,7 @@
},
{
"cell_type": "code",
- "execution_count": 86,
+ "execution_count": 22,
"metadata": {
"id": "_NUXNbTuakSC"
},
@@ -968,16 +980,16 @@
{
"data": {
"text/plain": [
- "'Italy'"
+ "'The TV series has not yet been released, so the real-life ages of the main trio are not yet known.'"
]
},
- "execution_count": 86,
+ "execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
- "gemini_ReAct_chat.model.generate_content(\"Who won the 2006 world cup?\").text"
+ "gemini_ReAct_chat.model.generate_content(\"What is the total of ages of the main trio from the new Percy Jackson and the Olympians TV series in real life?\").text"
]
},
{
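The transcript in the hunk above shows the Thought → Action → Observation cycle that the notebook's `ReAct` class drives: the model emits an `Action N` line naming a query, the tool layer looks it up, and the result is fed back as the next observation. A minimal, self-contained sketch of one step of that cycle, with a stubbed lookup table standing in for the Wikipedia API and the Gemini model (all names here are hypothetical):

```python
import re

def stub_search(query):
    # Hypothetical stand-in for the notebook's Wikipedia lookup tool.
    pages = {
        "Walker Scobell": "Walker Scobell, born on January 5, 2009, is an American actor.",
    }
    return pages.get(query, f"Could not find [{query!r}].")

def run_step(model_output):
    """Extract the query from an 'Action N' line and return the observation."""
    match = re.search(r"Action \d+\s*\n(.+)", model_output)
    if not match:
        # Mirrors the notebook's re-prompt when no parseable trace is found.
        return "Please emit a thought-action-observation trace."
    query = match.group(1).strip()
    return stub_search(query)

trace = "Thought 3\nI need his age.\n\nAction 3\nWalker Scobell\n"
print(run_step(trace))
```

In the real notebook the observation is then summarized by the model and appended to the chat history before the next turn.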
From 8a6022c354a063bb4618bcec760b5e8129d1fe15 Mon Sep 17 00:00:00 2001
From: Mayuresh Agashe
Date: Thu, 7 Mar 2024 08:48:05 +0530
Subject: [PATCH 16/17] Update wikipedia note
---
site/en/docs/react_gemini_prompting.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index 75668bf6d..77b1cb975 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -183,7 +183,7 @@
"id": "DVWIqdtbffau"
},
"source": [
- "> Note: This library was designed for ease of use and simplicity, not for advanced use. If you plan on doing serious scraping or automated requests, please use [Pywikipediabot](http://www.mediawiki.org/wiki/Manual:Pywikipediabot) or one of the other more advanced [Python MediaWiki API wrappers](http://en.wikipedia.org/wiki/Wikipedia:Creating_a_bot#Python), which has a larger API, rate limiting, and other features so you can be considerate of the MediaWiki infrastructure."
+ "Note: The [`wikipedia` package](https://pypi.org/project/wikipedia/) notes that it was \"designed for ease of use and simplicity, not for advanced use\", and that production or heavy use should instead \"use [Pywikipediabot](http://www.mediawiki.org/wiki/Manual:Pywikipediabot) or one of the other more advanced [Python MediaWiki API wrappers](http://en.wikipedia.org/wiki/Wikipedia:Creating_a_bot#Python)\"."
]
},
{
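The "Could not find […]. Similar: […]" observation earlier in this series is the suggestion behavior of the `wikipedia` package's search when a title does not match exactly. A rough stdlib-only sketch of that fallback, using `difflib` in place of the real API (the title list and function names are hypothetical):

```python
import difflib

KNOWN_TITLES = [
    "Percy Jackson and the Olympians (TV series)",
    "Percy Jackson & the Olympians",
    "Percy Jackson (film series)",
]

def search(query):
    """Return an exact title, or a 'Similar:' hint like the wikipedia package's."""
    if query in KNOWN_TITLES:
        return query
    # Suggest near-matches so the agent can retry with a valid page title.
    similar = difflib.get_close_matches(query, KNOWN_TITLES, n=3, cutoff=0.6)
    return (f"Could not find [{query!r}]. Similar: {similar}. "
            "You should search for one of those instead.")

print(search("Percy Jackson and the Olympians TV series"))
```

Feeding the hint back to the model is what lets the agent self-correct its query in the next Action step.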
From cecf1d6aeeb2232f9ef926bbd79c9e3d8d2cea0f Mon Sep 17 00:00:00 2001
From: Mark McDonald
Date: Fri, 15 Mar 2024 10:51:32 +0800
Subject: [PATCH 17/17] Nits: Fix case, indent and heading verb
---
site/en/docs/react_gemini_prompting.ipynb | 17 ++++++++---------
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/site/en/docs/react_gemini_prompting.ipynb b/site/en/docs/react_gemini_prompting.ipynb
index 77b1cb975..8ef03fd5a 100644
--- a/site/en/docs/react_gemini_prompting.ipynb
+++ b/site/en/docs/react_gemini_prompting.ipynb
@@ -90,11 +90,11 @@
"This notebook demonstrates the use of `gemini-pro` to generate reasoning traces and task-specific actions by leveraging a **Few-shot ReAct Prompt**. In this walkthrough, you will learn how to:\n",
"\n",
"\n",
- "1. Set up your development environment and API access to use Gemini.\n",
- "2. Prompt Gemini with ReAct.\n",
- "3. Use the newly prompted model for multi-turn conversations (chat).\n",
- "4. How ReAct overcomes issues of hallucination and error propagation by seeking external groundtruth via **Wikipedia API**.\n",
- "5. Have Conversations with deployed **ReAct prompted Gemini bot 🤖**\n"
+ "1. Set up your development environment and API access to use Gemini.\n",
+ "2. Prompt Gemini with ReAct.\n",
+ "3. Use the newly prompted model for multi-turn conversations (chat).\n",
+ "4. Learn how ReAct overcomes hallucination and error propagation by seeking external ground truth via the **Wikipedia API**.\n",
+ "5. Have conversations with a deployed **ReAct-prompted Gemini bot 🤖**\n"
]
},
{
@@ -267,7 +267,6 @@
" from google.colab import userdata\n",
" GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n",
"except ImportError as e:\n",
- " import os\n",
" GOOGLE_API_KEY = os.environ['GOOGLE_API_KEY']\n",
"genai.configure(api_key=GOOGLE_API_KEY)"
]
@@ -694,7 +693,7 @@
"\n",
" # if successful, return the first 2-3 sentences from the summary as model's context\n",
" observation = self.model.generate_content(f'Return the first 2 or 3 \\\n",
- " sentences from the following text: {observation}')\n",
+ " sentences from the following text: {observation}')\n",
" observation = observation.text\n",
"\n",
" # keep track of the model's search history\n",
@@ -718,7 +717,7 @@
"id": "v3fUbHUsPyoF"
},
"source": [
- "#### Lookup\n",
+ "#### Look up\n",
"Look for a specific phrase on the Wikipedia page."
]
},
@@ -875,7 +874,7 @@
"\n",
" except (IndexError, AttributeError) as e:\n",
" model_prompt = \"Please try to generate thought-action-observation traces \\\n",
- " as instructed by the prompt.\""
+ " as instructed by the prompt.\""
]
},
{