Commit

Merge pull request #53 from MervinPraison/develop
v0.0.31
MervinPraison authored Jun 19, 2024
2 parents 5f59e90 + 618db97 commit dc3f248
Showing 17 changed files with 578 additions and 162 deletions.
2 changes: 1 addition & 1 deletion Dockerfile
@@ -1,6 +1,6 @@
FROM python:3.11-slim
WORKDIR /app
COPY . .
-RUN pip install flask praisonai==0.0.30 gunicorn markdown
+RUN pip install flask praisonai==0.0.31 gunicorn markdown
EXPOSE 8080
CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]
6 changes: 3 additions & 3 deletions docs/api/praisonai/deploy.html
@@ -84,7 +84,7 @@ <h1 class="title">Module <code>praisonai.deploy</code></h1>
file.write("FROM python:3.11-slim\n")
file.write("WORKDIR /app\n")
file.write("COPY . .\n")
-file.write("RUN pip install flask praisonai==0.0.30 gunicorn markdown\n")
+file.write("RUN pip install flask praisonai==0.0.31 gunicorn markdown\n")
file.write("EXPOSE 8080\n")
file.write('CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]\n')

@@ -250,7 +250,7 @@ <h2 id="raises">Raises</h2>
file.write("FROM python:3.11-slim\n")
file.write("WORKDIR /app\n")
file.write("COPY . .\n")
-file.write("RUN pip install flask praisonai==0.0.30 gunicorn markdown\n")
+file.write("RUN pip install flask praisonai==0.0.31 gunicorn markdown\n")
file.write("EXPOSE 8080\n")
file.write('CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]\n')

@@ -416,7 +416,7 @@ <h2 id="raises">Raises</h2>
file.write("FROM python:3.11-slim\n")
file.write("WORKDIR /app\n")
file.write("COPY . .\n")
-file.write("RUN pip install flask praisonai==0.0.30 gunicorn markdown\n")
+file.write("RUN pip install flask praisonai==0.0.31 gunicorn markdown\n")
file.write("EXPOSE 8080\n")
file.write('CMD ["gunicorn", "-b", "0.0.0.0:8080", "api:app"]\n')
60 changes: 0 additions & 60 deletions docs/create_custom_tools.md

This file was deleted.

135 changes: 135 additions & 0 deletions docs/custom_tools.md
@@ -0,0 +1,135 @@
# Create Custom Tools

This guide walks through installing PraisonAI and setting up a custom tool.

## Step 1: Install the `praisonai` Package

First, you need to install the `praisonai` package. Open your terminal and run the following command:

```bash
pip install praisonai
```

## Step 2: Create the `InternetSearchTool`

Next, create a file named `tools.py` and add the following code to define the `InternetSearchTool`:

```python
from duckduckgo_search import DDGS
from praisonai_tools import BaseTool

class InternetSearchTool(BaseTool):
    name: str = "Internet Search Tool"
    description: str = "Search Internet for relevant information based on a query or latest news"

    def _run(self, query: str):
        ddgs = DDGS()
        results = ddgs.text(keywords=query, region='wt-wt', safesearch='moderate', max_results=5)
        return results
```
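
The shape of the tool can be sanity-checked without network access or the `praisonai_tools` package by using a plain-Python stand-in. The stub `BaseTool` and `FakeSearchTool` below are illustrative assumptions, not the real API:

```python
# Minimal stand-in for praisonai_tools.BaseTool, for illustration only --
# the real base class adds validation and framework integration.
class BaseTool:
    name: str = ""
    description: str = ""

    def run(self, *args, **kwargs):
        return self._run(*args, **kwargs)

class FakeSearchTool(BaseTool):
    name: str = "Fake Search Tool"
    description: str = "Returns canned results instead of calling the network."

    def _run(self, query: str):
        # Shape mirrors what DDGS().text(...) returns: a list of result dicts.
        return [{"title": f"Result for {query}", "href": "https://example.com", "body": "..."}]

print(FakeSearchTool().run("lung disease"))
```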

## Step 3: Define the Agent Configuration

Create a file named `agents.yaml` and add the following content to configure the agent:

```yaml
framework: crewai
topic: research about the causes of lung disease
roles:
  research_analyst:
    backstory: Experienced in analyzing scientific data related to respiratory health.
    goal: Analyze data on lung diseases
    role: Research Analyst
    tasks:
      data_analysis:
        description: Gather and analyze data on the causes and risk factors of lung diseases.
        expected_output: Report detailing key findings on lung disease causes.
    tools:
      - InternetSearchTool
```

## Step 4: Run the PraisonAI Tool

To run the PraisonAI tool, simply type the following command in your terminal:

```bash
praisonai
```

If you want to run the `autogen` framework, use:

```bash
praisonai --framework autogen
```

## Prerequisites

Ensure you have the `duckduckgo_search` package installed. If not, you can install it using:

```bash
pip install duckduckgo_search
```

That's it! You should now have the PraisonAI tool installed and configured.

## Other information

### TL;DR to Create a Custom Tool

```bash
pip install praisonai duckduckgo-search
export OPENAI_API_KEY="Enter your API key"
praisonai --init research about the latest AI News and prepare a detailed report
```

- Add `- InternetSearchTool` in the `agents.yaml` file under the tools section.
- Create a file called `tools.py` and add this code: [tools.py](./tools.py)

```bash
praisonai
```

### Pre-requisite to Create a Custom Tool
An `agents.yaml` file should be present in the current directory.

If it doesn't exist, create it by running the command `praisonai --init research about the latest AI News and prepare a detailed report`.

#### Step 1 to Create a Custom Tool

Create a file called `tools.py` in the same directory as the `agents.yaml` file.

```python
# example tools.py
from duckduckgo_search import DDGS
from praisonai_tools import BaseTool

class InternetSearchTool(BaseTool):
    name: str = "InternetSearchTool"
    description: str = "Search Internet for relevant information based on a query or latest news"

    def _run(self, query: str):
        ddgs = DDGS()
        results = ddgs.text(keywords=query, region='wt-wt', safesearch='moderate', max_results=5)
        return results
```

#### Step 2 to Create a Custom Tool

Add the tool to the `agents.yaml` file under the tools section as shown below: `- InternetSearchTool`.

```yaml
framework: crewai
topic: research about the latest AI News and prepare a detailed report
roles:
  research_analyst:
    backstory: Experienced in gathering and analyzing data related to AI news trends.
    goal: Analyze AI News trends
    role: Research Analyst
    tasks:
      gather_data:
        description: Conduct in-depth research on the latest AI News trends from reputable sources.
        expected_output: Comprehensive report on current AI News trends.
    tools:
      - InternetSearchTool
```
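
One way to picture how the `tools:` entries in `agents.yaml` connect to the classes in `tools.py` is a name-to-class registry. This is a hypothetical sketch, not PraisonAI's actual loading mechanism:

```python
# Hypothetical sketch: resolving tool names from agents.yaml to classes.
class InternetSearchTool:
    """Stand-in for the tool class defined in tools.py."""
    name = "InternetSearchTool"

# Map class names to classes, as a framework might after importing tools.py.
TOOL_REGISTRY = {cls.__name__: cls for cls in [InternetSearchTool]}

# A dict standing in for the parsed role definition from agents.yaml.
role_config = {"tools": ["InternetSearchTool"]}

# Instantiate each tool the role lists.
resolved = [TOOL_REGISTRY[name]() for name in role_config["tools"]]
print([type(t).__name__ for t in resolved])
```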
42 changes: 42 additions & 0 deletions docs/firecrawl.md
@@ -0,0 +1,42 @@
# Firecrawl PraisonAI Integration

## Firecrawl running on localhost:3002

```python
from firecrawl import FirecrawlApp
from praisonai_tools import BaseTool
import re

class WebPageScraperTool(BaseTool):
    name: str = "Web Page Scraper Tool"
    description: str = "Scrape and extract information from a given web page URL."

    def _run(self, url: str) -> str:
        app = FirecrawlApp(api_url='http://localhost:3002')
        response = app.scrape_url(url=url)
        content = response["content"]
        # Remove all content above the separator line
        if "========================================================" in content:
            content = content.split("========================================================", 1)[1]
        # Remove menu items and similar link patterns
        content = re.sub(r'\*\s+\[.*?\]\(.*?\)', '', content)
        content = re.sub(r'\[Skip to the content\]\(.*?\)', '', content)
        content = re.sub(r'\[.*?\]\(.*?\)', '', content)
        content = re.sub(r'\s*Menu\s*', '', content)
        content = re.sub(r'\s*Search\s*', '', content)
        content = re.sub(r'Categories\s*', '', content)
        # Remove all URLs
        content = re.sub(r'http\S+', '', content)
        # Drop empty lines or lines with only whitespace
        content = '\n'.join(line for line in content.split('\n') if line.strip())
        # Limit to the first 1000 words
        words = content.split()
        if len(words) > 1000:
            content = ' '.join(words[:1000])
        return content
```
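
The Markdown clean-up steps inside `_run` can be exercised on their own with a small stdlib-only sketch (the `clean_markdown` helper and sample text below are made up for illustration; the separator-split step is omitted for brevity):

```python
import re

def clean_markdown(content: str, max_words: int = 1000) -> str:
    """Apply the same link/URL clean-up steps as WebPageScraperTool._run."""
    content = re.sub(r'\*\s+\[.*?\]\(.*?\)', '', content)  # bulleted menu links
    content = re.sub(r'\[.*?\]\(.*?\)', '', content)       # remaining markdown links
    content = re.sub(r'http\S+', '', content)              # bare URLs
    # Drop empty or whitespace-only lines, then cap the word count.
    content = '\n'.join(line for line in content.split('\n') if line.strip())
    return ' '.join(content.split()[:max_words])

sample = "* [Home](/home)\nHeadline text\nRead more at https://example.com today"
print(clean_markdown(sample))  # -> Headline text Read more at today
```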
69 changes: 69 additions & 0 deletions docs/langchain.md
@@ -0,0 +1,69 @@
# Langchain Tools

## Integrate Langchain Direct Tools

```bash
pip install youtube_search praisonai langchain_community langchain
```

```python
# tools.py
from langchain_community.tools import YouTubeSearchTool
```

```yaml
# agents.yaml
framework: crewai
topic: research about the causes of lung disease
roles:
  research_analyst:
    backstory: Experienced in analyzing scientific data related to respiratory health.
    goal: Analyze data on lung diseases
    role: Research Analyst
    tasks:
      data_analysis:
        description: Gather and analyze data on the causes and risk factors of lung diseases.
        expected_output: Report detailing key findings on lung disease causes.
    tools:
      - 'YouTubeSearchTool'
```

## Integrate Langchain with Wrappers

```bash
pip install wikipedia langchain_community
```

```python
# tools.py
from langchain_community.utilities import WikipediaAPIWrapper
from praisonai_tools import BaseTool

class WikipediaSearchTool(BaseTool):
    name: str = "WikipediaSearchTool"
    description: str = "Search Wikipedia for relevant information based on a query."

    def _run(self, query: str):
        api_wrapper = WikipediaAPIWrapper(top_k_results=4, doc_content_chars_max=100)
        results = api_wrapper.load(query=query)
        return results
```

```yaml
# agents.yaml
framework: crewai
topic: research about nvidia growth
roles:
  data_collector:
    backstory: An experienced researcher with the ability to efficiently collect and organize vast amounts of data.
    goal: Gather information on Nvidia's growth by providing the ticker symbol to YahooFinanceNewsTool
    role: Data Collector
    tasks:
      data_collection_task:
        description: Collect data on Nvidia's growth from various sources such as financial reports, news articles, and company announcements.
        expected_output: A comprehensive document detailing data points on Nvidia's growth over the years.
    tools:
      - 'WikipediaSearchTool'
```
32 changes: 32 additions & 0 deletions docs/reddit.md
@@ -0,0 +1,32 @@
# Reddit PraisonAI Integration

```bash
export REDDIT_USER_AGENT=[USER]
export REDDIT_CLIENT_SECRET=xxxxxx
export REDDIT_CLIENT_ID=xxxxxx
```
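
Before running the agent, the exported credentials can be checked from Python with a small stdlib sketch (the helper below is illustrative; the variable names follow the exports above):

```python
import os

REQUIRED = ["REDDIT_CLIENT_ID", "REDDIT_CLIENT_SECRET", "REDDIT_USER_AGENT"]

def missing_reddit_vars(env=os.environ):
    """Return the names of any required Reddit credentials that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# Example: only the client ID is set, so the other two are reported missing.
print(missing_reddit_vars({"REDDIT_CLIENT_ID": "abc"}))
```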

`tools.py`:

```python
from langchain_community.tools.reddit_search.tool import RedditSearchRun
```

`agents.yaml`:

```yaml
framework: crewai
topic: research about the causes of lung disease
roles:
  research_analyst:
    backstory: Experienced in analyzing scientific data related to respiratory health.
    goal: Analyze data on lung diseases
    role: Research Analyst
    tasks:
      data_analysis:
        description: Gather and analyze data on the causes and risk factors of lung diseases.
        expected_output: Report detailing key findings on lung disease causes.
    tools:
      - 'RedditSearchRun'
```
15 changes: 15 additions & 0 deletions docs/tavily.md
@@ -0,0 +1,15 @@
# Tavily PraisonAI Integration

```python
from praisonai_tools import BaseTool
from langchain.utilities.tavily_search import TavilySearchAPIWrapper

class TavilyTool(BaseTool):
    name: str = "TavilyTool"
    description: str = "Search Tavily for relevant information based on a query."

    def _run(self, query: str):
        api_wrapper = TavilySearchAPIWrapper()
        results = api_wrapper.results(query=query, max_results=5)
        return results
```