Commit 618db97 (1 parent: 340b3b4)

Showing 17 changed files with 578 additions and 162 deletions.
Dockerfile (praisonai bumped from 0.0.30 to 0.0.31):

```diff
 FROM python:3.11-slim
 WORKDIR /app
 COPY . .
-RUN pip install flask praisonai==0.0.30 gunicorn markdown
+RUN pip install flask praisonai==0.0.31 gunicorn markdown
 EXPOSE 8080
 CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]
```
One file was deleted.
# Create Custom Tools

This guide walks through installing PraisonAI and setting up a custom tool.

## Step 1: Install the `praisonai` Package

First, install the `praisonai` package. Open your terminal and run the following command:

```bash
pip install praisonai
```
## Step 2: Create the `InternetSearchTool`

Next, create a file named `tools.py` and add the following code to define the `InternetSearchTool`:

```python
from duckduckgo_search import DDGS
from praisonai_tools import BaseTool


class InternetSearchTool(BaseTool):
    name: str = "Internet Search Tool"
    description: str = "Search Internet for relevant information based on a query or latest news"

    def _run(self, query: str):
        # Query DuckDuckGo and return up to five text results.
        ddgs = DDGS()
        results = ddgs.text(keywords=query, region='wt-wt', safesearch='moderate', max_results=5)
        return results
```
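The only method a tool has to implement is `_run`. To see that interface in isolation, without network access or the real libraries, here is a hypothetical stand-in (the actual `BaseTool` comes from `praisonai_tools`, and real results come from DuckDuckGo, not the canned list below):

```python
# Hypothetical stand-in for praisonai_tools.BaseTool, for offline illustration only.
class BaseTool:
    name: str = ""
    description: str = ""

    def run(self, query: str):
        # The framework calls a public entry point, which delegates to _run.
        return self._run(query)


class FakeInternetSearchTool(BaseTool):
    name: str = "Internet Search Tool"
    description: str = "Returns canned results instead of querying DuckDuckGo."

    def _run(self, query: str):
        # Shape mirrors what DDGS().text(...) returns: a list of result dicts.
        return [{"title": f"Result for {query}", "href": "https://example.com", "body": "..."}]


results = FakeInternetSearchTool().run("lung disease")
print(results[0]["title"])  # → Result for lung disease
```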
## Step 3: Define the Agent Configuration

Create a file named `agents.yaml` and add the following content to configure the agent:

```yaml
framework: crewai
topic: research about the causes of lung disease
roles:
  research_analyst:
    backstory: Experienced in analyzing scientific data related to respiratory health.
    goal: Analyze data on lung diseases
    role: Research Analyst
    tasks:
      data_analysis:
        description: Gather and analyze data on the causes and risk factors of lung diseases.
        expected_output: Report detailing key findings on lung disease causes.
    tools:
    - InternetSearchTool
```
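Each string under `tools:` must match a tool class defined in `tools.py`; the framework resolves the name to the class at run time. The idea can be sketched as a simple name-based registry (hypothetical code, not PraisonAI's actual implementation):

```python
# Hypothetical sketch of name-based tool lookup (not PraisonAI's real code).
class InternetSearchTool:
    """Stand-in for the class defined in tools.py."""

# Build a registry keyed by class name, mirroring how the YAML refers to tools.
registry = {cls.__name__: cls for cls in [InternetSearchTool]}

yaml_tools = ["InternetSearchTool"]  # the strings from the tools: section
resolved = [registry[name]() for name in yaml_tools]
print(type(resolved[0]).__name__)  # → InternetSearchTool
```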
## Step 4: Run the PraisonAI Tool

To run the PraisonAI tool, type the following command in your terminal:

```bash
praisonai
```

If you want to run the `autogen` framework, use:

```bash
praisonai --framework autogen
```

## Prerequisites

Ensure you have the `duckduckgo_search` package installed. If not, install it with:

```bash
pip install duckduckgo_search
```

That's it! You should now have the PraisonAI tool installed and configured.
## Other Information

### TL;DR to Create a Custom Tool

```bash
pip install praisonai duckduckgo-search
export OPENAI_API_KEY="Enter your API key"
praisonai --init research about the latest AI News and prepare a detailed report
```

- Add `- InternetSearchTool` in the `agents.yaml` file under the tools section.
- Create a file called `tools.py` and add the code from [tools.py](./tools.py).

```bash
praisonai
```

### Prerequisite to Create a Custom Tool

An `agents.yaml` file should be present in the current directory.

If it doesn't exist, create it by running `praisonai --init research about the latest AI News and prepare a detailed report`.
#### Step 1 to Create a Custom Tool

Create a file called `tools.py` in the same directory as the `agents.yaml` file.

```python
# example tools.py
from duckduckgo_search import DDGS
from praisonai_tools import BaseTool


class InternetSearchTool(BaseTool):
    name: str = "InternetSearchTool"
    description: str = "Search Internet for relevant information based on a query or latest news"

    def _run(self, query: str):
        ddgs = DDGS()
        results = ddgs.text(keywords=query, region='wt-wt', safesearch='moderate', max_results=5)
        return results
```

#### Step 2 to Create a Custom Tool

Add the tool to the `agents.yaml` file under the tools section, as shown below: `- InternetSearchTool`.

```yaml
framework: crewai
topic: research about the latest AI News and prepare a detailed report
roles:
  research_analyst:
    backstory: Experienced in gathering and analyzing data related to AI news trends.
    goal: Analyze AI News trends
    role: Research Analyst
    tasks:
      gather_data:
        description: Conduct in-depth research on the latest AI News trends from reputable sources.
        expected_output: Comprehensive report on current AI News trends.
    tools:
    - InternetSearchTool
```
# Firecrawl PraisonAI Integration

## Firecrawl running on localhost:3002

```python
import re

from firecrawl import FirecrawlApp
from praisonai_tools import BaseTool


class WebPageScraperTool(BaseTool):
    name: str = "Web Page Scraper Tool"
    description: str = "Scrape and extract information from a given web page URL."

    def _run(self, url: str) -> str:
        app = FirecrawlApp(api_url='http://localhost:3002')
        response = app.scrape_url(url=url)
        content = response["content"]
        # Remove all content above the line "========================================================"
        if "========================================================" in content:
            content = content.split("========================================================", 1)[1]
        # Remove all menu items and similar patterns
        content = re.sub(r'\*\s+\[.*?\]\(.*?\)', '', content)
        content = re.sub(r'\[Skip to the content\]\(.*?\)', '', content)
        content = re.sub(r'\[.*?\]\(.*?\)', '', content)
        content = re.sub(r'\s*Menu\s*', '', content)
        content = re.sub(r'\s*Search\s*', '', content)
        content = re.sub(r'Categories\s*', '', content)
        # Remove all URLs
        content = re.sub(r'http\S+', '', content)
        # Remove empty lines or lines with only whitespace
        content = '\n'.join([line for line in content.split('\n') if line.strip()])
        # Limit to the first 1000 words
        words = content.split()
        if len(words) > 1000:
            content = ' '.join(words[:1000])
        return content
```
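The cleanup pipeline inside `_run` is independent of Firecrawl, so it can be sanity-checked on a plain string. Below is a standalone sketch of the main steps; the sample input and the 5-word cap are assumptions for brevity (the tool itself caps at 1000 words and also strips `Search` and `Categories`):

```python
import re

SEP = "=" * 56  # the separator line the scraper keys on

raw = "\n".join([
    "Header junk above the separator",
    SEP,
    "[Skip to the content](https://example.com/#main)",
    "* [Home](https://example.com/)",
    "Menu",
    "Read more at http://example.com/a about air quality and smoking risks today",
])

content = raw.split(SEP, 1)[1] if SEP in raw else raw
content = re.sub(r'\*\s+\[.*?\]\(.*?\)', '', content)           # bulleted menu links
content = re.sub(r'\[Skip to the content\]\(.*?\)', '', content)
content = re.sub(r'\[.*?\]\(.*?\)', '', content)                # any remaining links
content = re.sub(r'\s*Menu\s*', '', content)
content = re.sub(r'http\S+', '', content)                       # bare URLs
content = '\n'.join(line for line in content.split('\n') if line.strip())
words = content.split()
limit = 5  # the real tool uses 1000; 5 keeps the demo short
if len(words) > limit:
    content = ' '.join(words[:limit])
print(content)  # → Read more at about air
```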
# Langchain Tools

## Integrate Langchain Direct Tools

```bash
pip install youtube_search praisonai langchain_community langchain
```

```python
# tools.py
from langchain_community.tools import YouTubeSearchTool
```

```yaml
# agents.yaml
framework: crewai
topic: research about the causes of lung disease
roles:
  research_analyst:
    backstory: Experienced in analyzing scientific data related to respiratory health.
    goal: Analyze data on lung diseases
    role: Research Analyst
    tasks:
      data_analysis:
        description: Gather and analyze data on the causes and risk factors of lung diseases.
        expected_output: Report detailing key findings on lung disease causes.
    tools:
    - 'YouTubeSearchTool'
```

## Integrate Langchain with Wrappers

```bash
pip install wikipedia langchain_community
```

```python
# tools.py
from langchain_community.utilities import WikipediaAPIWrapper
from praisonai_tools import BaseTool


class WikipediaSearchTool(BaseTool):
    name: str = "WikipediaSearchTool"
    description: str = "Search Wikipedia for relevant information based on a query."

    def _run(self, query: str):
        api_wrapper = WikipediaAPIWrapper(top_k_results=4, doc_content_chars_max=100)
        results = api_wrapper.load(query=query)
        return results
```

```yaml
# agents.yaml
framework: crewai
topic: research about nvidia growth
roles:
  data_collector:
    backstory: An experienced researcher with the ability to efficiently collect and
      organize vast amounts of data.
    goal: Gather information on Nvidia's growth
    role: Data Collector
    tasks:
      data_collection_task:
        description: Collect data on Nvidia's growth from various sources such as
          financial reports, news articles, and company announcements.
        expected_output: A comprehensive document detailing data points on Nvidia's
          growth over the years.
    tools:
    - 'WikipediaSearchTool'
```
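`top_k_results` and `doc_content_chars_max` bound how much Wikipedia text reaches the agent: at most 4 documents, each trimmed to 100 characters. A hypothetical illustration of that trimming in plain Python (not Langchain's actual code):

```python
# Hypothetical illustration of the wrapper's limits (not Langchain's real code).
top_k_results = 4
doc_content_chars_max = 100

pages = ["A" * 250, "B" * 50, "C" * 120, "D" * 300, "E" * 10]  # fake page contents
trimmed = [p[:doc_content_chars_max] for p in pages[:top_k_results]]
print([len(p) for p in trimmed])  # → [100, 50, 100, 100]
```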
# Reddit PraisonAI Integration

```bash
export REDDIT_USER_AGENT=[USER]
export REDDIT_CLIENT_SECRET=xxxxxx
export REDDIT_CLIENT_ID=xxxxxx
```

tools.py:

```python
from langchain_community.tools.reddit_search.tool import RedditSearchRun
```

agents.yaml:

```yaml
framework: crewai
topic: research about the causes of lung disease
roles:
  research_analyst:
    backstory: Experienced in analyzing scientific data related to respiratory health.
    goal: Analyze data on lung diseases
    role: Research Analyst
    tasks:
      data_analysis:
        description: Gather and analyze data on the causes and risk factors of lung diseases.
        expected_output: Report detailing key findings on lung disease causes.
    tools:
    - 'RedditSearchRun'
```
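The Reddit search wrapper reads the three variables exported above from the environment. A small stdlib-only preflight check can catch a missing credential before the agents start (the helper name here is mine, not part of any library):

```python
import os

def missing_reddit_credentials(env=None):
    """Return the names of required Reddit credentials that are unset."""
    env = os.environ if env is None else env
    required = ["REDDIT_USER_AGENT", "REDDIT_CLIENT_SECRET", "REDDIT_CLIENT_ID"]
    return [name for name in required if not env.get(name)]

# Example against a hand-built environment dict:
print(missing_reddit_credentials({"REDDIT_CLIENT_ID": "xxxxxx"}))
# → ['REDDIT_USER_AGENT', 'REDDIT_CLIENT_SECRET']
```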
# Tavily PraisonAI Integration

```python
from praisonai_tools import BaseTool
from langchain.utilities.tavily_search import TavilySearchAPIWrapper


class TavilyTool(BaseTool):
    name: str = "TavilyTool"
    description: str = "Search Tavily for relevant information based on a query."

    def _run(self, query: str):
        api_wrapper = TavilySearchAPIWrapper()
        results = api_wrapper.results(query=query, max_results=5)
        return results
```

The wrapper expects a `TAVILY_API_KEY` environment variable to be set.