Releases: pgalko/BambooAI
v0.3.52
This is a relatively large and potentially breaking update.
- Added support for Anthropic Claude 3.5 including function calls (streaming)
- Major overhaul to vector storage and retrieval
- Added functionality whereby data frames can be explained/described with an ontology. This significantly improves the accuracy of the responses.
- Added a new agent "Dataframe Inspector"
- A bunch of changes to prompts to facilitate the new features
v0.3.50
- Library now supports scraping of dynamic web content via Selenium
- Requires a manual ChromeDriver download, with the path to it set in the SELENIUM_WEBDRIVER_PATH env var (see the sketch below)
- If the env var is set, the library selects Selenium for all scraping tasks
- A couple of bug fixes
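For reference, a minimal sketch of the setup, assuming the standard `os.environ` route (the placeholder path is illustrative):

```python
import os

# Point the library at a manually downloaded ChromeDriver binary.
# The path is a placeholder; use the location of your own download.
os.environ["SELENIUM_WEBDRIVER_PATH"] = "/usr/local/bin/chromedriver"

# With the variable set, all scraping tasks go through Selenium;
# leave it unset to keep static-content scraping only.
```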
v0.3.48
Major refactor of qa_retrieval.py; new Gemini models
- Adds support for the new Pinecone client
- Removes vector DB record duplication
- Adds support for OpenAI embeddings models in addition to hf_sentence_transformers; text-embedding-3-small is now the default embeddings model (see the sketch below)
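For reference, a minimal sketch of what the new defaults imply, using the current OpenAI and Pinecone clients directly (the index name and metadata are hypothetical; this illustrates the external APIs, not BambooAI's internal wiring):

```python
import os
from openai import OpenAI
from pinecone import Pinecone

# Embed a question with the new default model (1536-dimensional vectors).
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="How do I plot a 7-day rolling average of a pandas column?",
)
vector = response.data[0].embedding

# Upsert into Pinecone with the new client (pinecone-client v3+).
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("bambooai-qa")  # hypothetical index name
index.upsert(vectors=[{
    "id": "question-1",
    "values": vector,
    "metadata": {"question": "rolling average"},  # illustrative metadata
}])
```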
v0.3.44
- Google search is now seamlessly incorporated into the flow, with very positive results from Groq's Llama 3 70B when selected as the model for the Search agents (a sketch of the corresponding LLM_CONFIG setup follows this list):
{"agent": "Google Search Query Generator", "details": {"model": "llama3-70b-8192", "provider":"groq","max_tokens": 4000, "temperature": 0}},
{"agent": "Google Search Summarizer", "details": {"model": "llama3-70b-8192", "provider":"groq","max_tokens": 4000, "temperature": 0}}
- Some improvements to Jupyter notebook output formatting.
- Search could benefit further from something like Selenium or Pyppeteer to allow for scraping of dynamic websites. At the moment only static content is supported. Tricky, as we do not want the library to become too bloated.
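For reference, a hedged sketch of wiring the two search agents above to Groq via the LLM_CONFIG environment variable (per-agent config arrived in v0.3.29, below; agents not listed keep their defaults):

```python
import json
import os

# Assumes LLM_CONFIG is a JSON list of per-agent entries like those above.
os.environ["LLM_CONFIG"] = json.dumps([
    {"agent": "Google Search Query Generator",
     "details": {"model": "llama3-70b-8192", "provider": "groq",
                 "max_tokens": 4000, "temperature": 0}},
    {"agent": "Google Search Summarizer",
     "details": {"model": "llama3-70b-8192", "provider": "groq",
                 "max_tokens": 4000, "temperature": 0}},
])
```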
v0.3.42
- The notebook output formatting now uses Markdown instead of HTML. It is much more pleasant for the user. The changes are included in this version (v0.3.42) and pushed to PyPI.
- Video illustrating the new output: https://github.com/pgalko/BambooAI/assets/39939157/6058a3a2-63d9-44b9-b065-0a0cda5d7e17
- Also benchmarked the library against "OpenAI Assistants API + Code Interpreter". BambooAI is much cheaper and faster :-).
Task: Devise a machine learning model to predict the survival of passengers on the Titanic. The output should include the accuracy of the model and visualizations of the confusion matrix, correlation matrix, and other relevant metrics. (A minimal sketch of such a model appears after the results below.)
Dataset: Titanic.csv
Model: GPT-4-Turbo
OpenAI Assistants API (Code Interpreter)
- Result:
- Confusion Matrix:
- True Negative (TN): 90 passengers were correctly predicted as not surviving.
- True Positive (TP): 56 passengers were correctly predicted as surviving.
- False Negative (FN): 18 passengers were incorrectly predicted as not surviving.
- False Positive (FP): 15 passengers were incorrectly predicted as surviving.
- Metrics:

| Metric | Value |
|---|---|
| Execution Time | 77.12 seconds |
| Input Tokens | 7128 |
| Output Tokens | 1215 |
| Total Cost | $0.1077 |
BambooAI (No Planning, Google Search or Vector DB)
- Result:
- Confusion Matrix:
- True Negative (TN): 92 passengers were correctly predicted as not surviving.
- True Positive (TP): 55 passengers were correctly predicted as surviving.
- False Negative (FN): 19 passengers were incorrectly predicted as not surviving.
- False Positive (FP): 13 passengers were incorrectly predicted as surviving.
- Metrics:

| Metric | Value |
|---|---|
| Execution Time | 47.39 seconds |
| Input Tokens | 722 |
| Output Tokens | 931 |
| Total Cost | $0.0353 |
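For context, a minimal sketch of the kind of model the benchmark task describes, assuming the standard Kaggle Titanic.csv columns (illustrative only; this is not the code either tool generated):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Standard Kaggle Titanic columns are assumed.
df = pd.read_csv("Titanic.csv")
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
df["Age"] = df["Age"].fillna(df["Age"].median())
df["Fare"] = df["Fare"].fillna(df["Fare"].median())

X = df[["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]]
y = df["Survived"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

print("Accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))  # rows: actual, columns: predicted
```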
v0.3.38
- Planning agent can now use Google search via function_calls. This is currently only available for OpenAI LLMs (a sketch of the mechanism follows this list).
- New logic for the expert selector
- Plans are now included in vector DB record metadata alongside code. This is particularly beneficial for non-OpenAI models.
- A completely new google_search.py module using the ReAct method
- Some prompt adjustments. The current date is now included in some system prompts.
- A bunch of bug fixes
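The function-call mechanism looks roughly like this with the OpenAI client; the google_search tool schema below is a hypothetical illustration, not BambooAI's exact definition:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical tool schema for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "google_search",
        "description": "Search Google and return the top results.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query."},
            },
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Who won the 2023 Tour de France?"}],
    tools=tools,
)
# If the model decides to search, it returns a tool call with the query
# for the caller to execute and feed back.
print(response.choices[0].message.tool_calls)
```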
v0.3.32
v0.3.30
v0.3.29
- Load LLM config from an env var or a JSON file
- Load prompt templates from a JSON file
- Add the ability to specify an LLM config individually for each agent
- Append the full traceback to error-correction calls
- Refactor the code for functions and classes to match the agent workflow
- Change variable names to be more descriptive
- Change output messages to be more descriptive
Deprecation Notice (October 25, 2023):
Please note that the "llm", "local_code_model", "llm_switch_plan", and "llm_switch_code" parameters have been deprecated as of v0.3.29. The assignment of models and model parameters to agents is now handled via LLM_CONFIG. This can be set either as an environment variable or via an LLM_CONFIG.json file in the working directory (a sketch follows). Please see the README "Usage" section for details.
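As a migration aid, a hedged sketch of writing an LLM_CONFIG.json into the working directory, reusing the entry format from the v0.3.44 notes above (the model and values are illustrative; see the README "Usage" section for the authoritative schema):

```python
import json

# Entry format taken from the v0.3.44 notes above; values are illustrative.
llm_config = [
    {"agent": "Google Search Summarizer",
     "details": {"model": "gpt-4-turbo", "provider": "openai",
                 "max_tokens": 4000, "temperature": 0}},
]
with open("LLM_CONFIG.json", "w") as f:
    json.dump(llm_config, f, indent=2)
```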