Have any questions? Need support? Please reach out on Twitter (@phasellm) or via email: w (at) phaseai (dot) com
- Adding support for Anthropic's Messages API. You can use Claude 3.5 Sonnet and other models via `ClaudeMessagesWrapper` and `StreamingClaudeMessagesWrapper`.
- None.
- None.
- Fixes to the Google SERP API integration: it is now more resilient to errors and no longer fails when a query returns no results.
- Support for Google's Gemini and related models: `VertexAIWrapper` and `StreamingVertexAIWrapper` for static and streaming queries, respectively
- See https://phasellm.com/google-gemini-phasellm for more info on how to use the above
- None.
- `ReplicateLlama2Wrapper` to enable you to use Llama 2 via Replicate
- Note that the `ChatBot` class supports the `ReplicateLlama2Wrapper`, so you can plug and play the Llama 2 model just like any other chat model; the same goes for text completions
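The plug-and-play point above rests on a simple pattern: the chatbot only needs its backend to expose a chat-completion call, so any wrapper can be swapped in. The sketch below illustrates that shape with stand-in classes (`FakeWrapper` and `MiniChatBot` are hypothetical, not PhaseLLM's actual implementations).

```python
# Illustrative sketch of the plug-and-play pattern: the bot only depends on
# a wrapper exposing a chat-completion method, so Replicate Llama 2, OpenAI,
# or Claude backends can be swapped freely. Both classes are stand-ins.
class FakeWrapper:
    def complete_chat(self, messages):
        # A real wrapper would call its API here.
        return "echo: " + messages[-1]["content"]

class MiniChatBot:
    def __init__(self, llm):
        self.llm = llm
        self.messages = []

    def chat(self, text):
        self.messages.append({"role": "user", "content": text})
        reply = self.llm.complete_chat(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

bot = MiniChatBot(FakeWrapper())
print(bot.chat("hi"))  # echo: hi
```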
- None.
- `phasellm.logging` comes with `PhaseLogger`, a class that allows you to automatically send your chats to evals.phasellm.com for visual, no-code reviews later
- PhaseLLM requests now contain the header information received from the APIs they are calling; you can review whatever information OpenAI, Anthropic, etc. are sending you
- Claude 2 support is back! Older versions weren't parsing responses properly
- Support for versions 1.x of OpenAI's `openai` Python SDK
- PhaseLLM Evaluations project, a Django-powered front-end for evaluating LLMs and running batch jobs
- Added the `phasellm.html.chatbotToJson()` function to enable easy exporting of chatbot messages to JSON
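The JSON export is conceptually straightforward; the sketch below shows the kind of serialization `chatbotToJson()` performs. The function name `chatbot_to_json` and the output shape are assumptions for illustration, not PhaseLLM's actual implementation.

```python
import json

# Hypothetical sketch of what chatbotToJson() might do: serialize a
# ChatBot's message stack (role/content dicts, possibly with extra fields
# like timestamps) into a JSON string. Field names are illustrative.
def chatbot_to_json(messages):
    return json.dumps({"messages": messages}, indent=2)

msgs = [{"role": "user", "content": "hi", "timestamp": "2023-01-01T00:00:00"}]
exported = chatbot_to_json(msgs)
print(json.loads(exported)["messages"][0]["content"])  # hi
```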
None.
None.
- Fixing backwards compatibility issue with new API configuration options.
- New RSS agent: crawl and read RSS feeds with LLMs
- Demo project in `/demos-and-products/arxiv-assistant`
- Support for OpenAI Azure implementations; use our new `AzureAPIConfiguration` class
- Adding support for `claude-2` due to Anthropic API changes
- Fix for user agent when running website crawls
- New agents:
  - `WebpageAgent` for scraping HTML from web pages (and extracting text)
  - `WebSearchAgent` for using Google, Brave, and other search APIs to run queries for you
- New demo project: `web-search-chatbot`! This shows how you can use the new agents above in chatbot applications
- Installation for `phasellm` (i.e., the default installation) includes `transformers` to avoid errors
- Installation option `phasellm[complete]` installs the packages needed to run LLMs locally. The default setup only provides support for LLM APIs (e.g., OpenAI, Anthropic, Cohere, Hugging Face)
None
- ChatPrompt fills were losing additional data (e.g., time stamps); this is now fixed.
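To make the fixed behavior concrete, the sketch below fills template variables while copying every per-message field, which is the property the bug violated. `fill_chat` is an illustrative stand-in, not ChatPrompt's actual implementation.

```python
# Sketch of the behavior described above: filling template variables in a
# chat prompt while preserving extra per-message fields (e.g., timestamps).
# fill_chat() is illustrative, not PhaseLLM's actual implementation.
def fill_chat(messages, **variables):
    filled = []
    for m in messages:
        new_m = dict(m)  # copy ALL fields, not just role/content
        new_m["content"] = m["content"].format(**variables)
        filled.append(new_m)
    return filled

msgs = [{"role": "user", "content": "Hello, {name}!", "timestamp": "t0"}]
out = fill_chat(msgs, name="Ada")
print(out[0])  # {'role': 'user', 'content': 'Hello, Ada!', 'timestamp': 't0'}
```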
- All LLMs now support a `temperature` setting
- All LLMs now accept `kwargs` that they pass on to their various APIs
- `phasellm.llms.swap_roles()` helper function where `user` and `assistant` get swapped. Extremely useful for testing/evaluations and running simulations
- `phasellm.eval.simulate_n_chat_simulations()` allows you to resend the same chat history multiple times to generate an array of responses (for testing purposes)
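A hedged sketch of the role-swapping idea: flip `user` and `assistant` so a chat can be replayed from the other side, which is what makes it useful for simulations and evals. The implementation below is illustrative and may differ from `phasellm.llms.swap_roles()` in detail.

```python
# Hedged sketch of what swap_roles() does: flip 'user' and 'assistant'
# roles in a message stack, leaving other roles (e.g., 'system') intact.
def swap_roles(messages):
    flipped = {"user": "assistant", "assistant": "user"}
    return [
        {**m, "role": flipped.get(m["role"], m["role"])}
        for m in messages
    ]

msgs = [
    {"role": "system", "content": "Be terse."},
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
]
swapped = swap_roles(msgs)
print([m["role"] for m in swapped])  # ['system', 'assistant', 'user']
```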
- Fixed Server Side Event streaming bug where new lines weren't being escaped properly.
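For context on the newline issue, the Server-Sent Events format requires that a payload containing newlines be split across multiple `data:` lines; emitting a raw newline inside one `data:` line corrupts the event. The helper below is an illustrative sketch of that rule, not PhaseLLM's actual fix.

```python
# Sketch of the SSE rule behind the bug above: a payload with newlines must
# be split into multiple "data:" lines, terminated by a blank line.
# sse_event() is illustrative, not PhaseLLM's implementation.
def sse_event(payload):
    lines = payload.split("\n")
    return "".join(f"data: {line}\n" for line in lines) + "\n"

event = sse_event("line one\nline two")
print(event, end="")
# data: line one
# data: line two
# (blank line terminates the event)
```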
- Updating `requirements.txt` and `setup.py` to include all new relevant packages.
- We now have support for streaming LLMs
  - `StreamingLanguageModelWrapper` allows you to build your own streaming wrappers
  - `StreamingOpenAIGPTWrapper` for GPT-4 and GPT-3.5
  - `StreamingClaudeWrapper` for Claude (Anthropic)
- Significant additions to tests, to ensure stability of future releases (see the `tests` folder)
- The ChatBot class supports both streaming and non-streaming LLMs, so you can plug and play with either
- Starting to add type hints to our code; this will take a bit of time but let us know if you have questions
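The streaming wrappers above follow a common generator pattern: tokens are yielded as they arrive, so callers can render them incrementally or join them into a full response. The sketch below simulates that pattern without any network calls; `streaming_complete` is a stand-in, not a PhaseLLM function.

```python
# Illustrative sketch of the streaming pattern: a streaming wrapper yields
# chunks as they arrive (simulated here), and the caller consumes them
# incrementally. Not PhaseLLM's actual implementation.
def streaming_complete(prompt):
    for token in ["Hello", ", ", "world", "!"]:
        yield token  # a real wrapper would yield chunks from the API

chunks = []
for chunk in streaming_complete("Say hello"):
    chunks.append(chunk)  # e.g., flush each chunk to the UI as it arrives

print("".join(chunks))  # Hello, world!
```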
None
- Hotfix for ChatBot chat() function to remove new fields (e.g., time stamp) when making API calls
- The ChatBot `messages` stack now also tracks the timestamp for when a message was sent, and how long it took to process in the case of external API calls
- HTML module that enables HTML outputs for ChatBot
None
- ChatBot now has a `resend()` function to redo the last chat message in case of errors or if building message arrays outside of a bot
- Newsbot now has sample Claude code as well (the 100K token model is a fantastic model for news bots)
- Demo project: "chaining workshop" -- we'll be exploring unique ways to build prompt chains soon
- Demo project: basic chatbot. Use this as a base for other projects
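The `resend()` idea above can be sketched as: drop the failed assistant reply, if any, then re-run the last user turn. `MiniBot` and `flaky_llm` below are illustrative stand-ins, not PhaseLLM's ChatBot.

```python
# Illustrative sketch of resend(): remove a trailing assistant message and
# retry the last user turn without appending the user message twice.
# MiniBot is a stand-in for PhaseLLM's ChatBot, with a fake LLM backend.
class MiniBot:
    def __init__(self, llm):
        self.llm = llm
        self.messages = []

    def chat(self, text):
        self.messages.append({"role": "user", "content": text})
        return self._complete()

    def resend(self):
        if self.messages and self.messages[-1]["role"] == "assistant":
            self.messages.pop()  # discard the failed reply
        return self._complete()

    def _complete(self):
        reply = self.llm(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

calls = []
def flaky_llm(messages):
    calls.append(1)
    return f"reply #{len(calls)} to: {messages[-1]['content']}"

bot = MiniBot(flaky_llm)
bot.chat("hello")
print(bot.resend())  # reply #2 to: hello
```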
- ClaudeWrapper bug fix: appending "Assistant:" to chats by default.
- Reverted `requirements.txt` to an earlier version (v0.0.5)
Note: a number of changes in this release are not backwards compatible. They contain a '🚨' emoji by the bullet point in case you want to review.
- Lots of new classes!
- LLMs: added HuggingFaceInferenceWrapper so you can now query models via HuggingFace
- Data: added ChatPrompt to build chat sessions with variables
- Evaluation: added EvaluationStream to make it easy to evaluate models
- Exceptions: added ChatStructureException to be called when a chat doesn't follow OpenAI's messaging requirements
- phasellm.eval has isProperlyStructuredChat() to validate
- 🚨 Changed fill_prompt() to fill() so we are consistent across Prompt and ChatPrompt classes
- 🚨 GPT35Evaluator is now GPTEvaluator since you can use GPT-4 as well; the evaluation approach randomizes the order in which options are presented to the LLM to avoid any bias it might have
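To illustrate the kind of rule `isProperlyStructuredChat()` enforces: OpenAI-style chats begin with an optional `system` message, then strictly alternate `user` and `assistant` turns. The checker below is a hedged sketch of that rule; the real function's details may differ.

```python
# Hedged sketch of the structural check behind ChatStructureException /
# isProperlyStructuredChat(): an optional leading 'system' message, then
# strict user/assistant alternation starting with 'user'. Illustrative only.
def is_properly_structured_chat(messages):
    roles = [m["role"] for m in messages]
    if roles and roles[0] == "system":
        roles = roles[1:]
    expected = "user"
    for role in roles:
        if role != expected:
            return False
        expected = "assistant" if expected == "user" else "user"
    return True

good = [{"role": "system", "content": "s"},
        {"role": "user", "content": "u"},
        {"role": "assistant", "content": "a"}]
bad = [{"role": "assistant", "content": "a"}]
print(is_properly_structured_chat(good), is_properly_structured_chat(bad))  # True False
```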
- Fixes to ResearchLLM
  - generateOverview() now limits examples for categorical variables to 10, though this can be set to another number at the top of the file. Previously we'd include all possible values.
- Making a list of categorical values in generateOverview() often errored out. This has been fixed.
- New agents
- EmailSenderAgent for sending emails (tested on GMail)
- NewsSummaryAgent for newsapi.org; summarizes lists of news articles
- Demo projects
- 'News Bot' demo that uses the new agents above to email daily summaries of news topics
  - 'Chain of Thought' demo that generates a markdown file with plans for how to analyze a data set
None
- Added Exceptions submodule to track specific errors/issues with LLM workflows
- LLMCodeException for errors with LLM-generated code
- LLMResponseException to ensure an LLM responds properly (i.e., from a list of potential responses)
- Added Agents submodule to enable autonomous agents and task execution
- CodeExecutionAgent for executing LLM-generated code
- ResearchGPT will retry requests if the generated code causes an error
- ResearchGPT code now includes examples for Claude and GPT-4
- Added python-dotenv to requirements
- Fixed folder structure in phasellm source code (removed subfolders for submodules)
- ResearchGPT (demo product)
- Added max # of tokens to ClaudeWrapper (required by Anthropic API)
- Dolly 2.0 wrapper
- BLOOM wrapper
- ChatBot bug where messages were erroring out
- Model Support
- GPT (3, 3.5, 4)
- Claude
- Cohere
- HuggingFace (GPT-2)
- Dolly 2.0 (via HuggingFace)
- Evaluation
- GPT 3.5 evaluator
- Human evaluation
Nothing, since this is a new release!