Releases · BerriAI/litellm
v1.21.6
What's Changed
- Litellm cost tracking caching fixes (should be 0.0) by @krrishdholakia in #1786
- [Fix] /key/delete + add delete cache keys by @ishaan-jaff in #1788
Full Changelog: v1.21.5...v1.21.6
v1.21.5
What's Changed
⭐️ [Feat] Show correct provider in exceptions - for Mistral API, PerplexityAPI, Anyscale, XInference by @ishaan-jaff in #1765, #1776
(Thanks @dhruv-anand-aintech for the issue/help)
Exceptions for Mistral API, PerplexityAPI, Anyscale, and XInference now show the correct provider name; previously they reported that OPENAI_API_KEY was missing even when using PerplexityAI. Example exception:
```
PerplexityException - Traceback (most recent call last):
  File "/Users/ishaanjaffer/Github/litellm/litellm/llms/perplexity.py", line 349, in completion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/llms/perplexity.py", line 292, in completion
    perplexity_client = perplexity(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/perplexity/_client.py", line 98, in __init__
    raise perplexityError(
perplexity.perplexityError: The api_key client option must be set either by passing api_key to the client or by setting the PERPLEXITY_API_KEY environment variable
```
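A minimal sketch of the new behavior: calling a Perplexity-routed model with no PERPLEXITY_API_KEY set should now raise an error that names the actual provider rather than a generic OpenAI key error. The model name below is illustrative.

```python
import litellm

try:
    # Perplexity-routed model; assumes PERPLEXITY_API_KEY is *not* set
    litellm.completion(
        model="perplexity/pplx-7b-online",
        messages=[{"role": "user", "content": "hi"}],
    )
except litellm.exceptions.AuthenticationError as e:
    # The message is now prefixed with the real provider,
    # e.g. "PerplexityException - ..."
    print(e)
```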
- fix(view_key_table_tsx): show abbreviated key name instead of hashed token by @krrishdholakia in #1782
- fix(main.py): for health checks, don't use cached responses by @krrishdholakia in #1785
Full Changelog: v1.21.4...v1.21.5
v1.21.4
What's Changed
- (feat) langfuse set generation_id to chatcmpl-response ID by @ishaan-jaff in #1780
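A short sketch of what this enables: with the Langfuse callback on, the generation logged in Langfuse should carry the same chatcmpl-... id as the response object, so the two can be cross-referenced. Assumes LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are set in the environment.

```python
import litellm

litellm.success_callback = ["langfuse"]  # assumes LANGFUSE_* env vars are set

resp = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)
# After #1780, the Langfuse generation_id should match this response id
print(resp.id)  # "chatcmpl-..."
```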

Full Changelog: v1.21.1...v1.21.4
v1.21.1
What's Changed
Full Changelog: v1.21.0...v1.21.1
v1.21.0
What's Changed
- Set OpenAPI API version metadata by @ushuz in #1733
- fix(router.py): support http and https proxies by @krrishdholakia in #1497
- (fix) UI Proxy - Fix accidental redirect by @Manouchehri in #1744
- [Feat-UI] Quick spin Up - remove proxyBaseUrl by @ishaan-jaff in #1746
- (ui) allow auth without sso by @ishaan-jaff in #1747
- [Fix] user_custom_auth fixes when user passed bad api_keys by @ishaan-jaff in #1748
- Update model_prices_and_context_window.json - added gpt-3.5-turbo-0125 by @7flash in #1752
- feat(utils.py): Set team id specific params in config.yaml by @krrishdholakia in #1754
- feat(vertex_ai.py): vertex ai model garden support by @krrishdholakia in #1749
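A hedged sketch of the new Model Garden route: models deployed to a Vertex AI endpoint are addressed with the vertex_ai/ prefix plus the endpoint id. The endpoint id, project, and location below are placeholders.

```python
import litellm

resp = litellm.completion(
    model="vertex_ai/1234567890",     # placeholder Model Garden endpoint id
    messages=[{"role": "user", "content": "hi"}],
    vertex_project="my-gcp-project",  # placeholder GCP project
    vertex_location="us-central1",
)
print(resp.choices[0].message.content)
```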
Full Changelog: v1.20.9...v1.21.0
v1.20.9
UI v2.0
Admin UI is now on the Proxy Server
- When you start the proxy you'll be able to find your admin UI link on the swagger docs
- The UI is a static web app (h/t @Manouchehri for this suggestion)
- Doc on getting started: https://docs.litellm.ai/docs/proxy/ui
- cc @bsu3338: this change impacts you. The UI now lives on the proxy server by default (the GIF shows how to get the UI link); let me know if you have any questions
Admin UI uses JWTs
- The UI never shows a Proxy API key in the URL param (we've moved to JWTs in the query params) cc @Manouchehri
Admin UI - Removed the need to set allow_user_auth: True if the user is logged in with SSO
- [Fix] UI - Use jwts by @ishaan-jaff in #1730
- [Feat] Add Admin UI on Proxy Server (Static Web App) by @ishaan-jaff in #1726
- [Fix-UI] If user is already logged in using SSO, set allow_user_auth: True by @ishaan-jaff in #1728
Full Changelog: v1.20.8...v1.20.9
v1.20.8
What's Changed
- [Feat] Add Admin UI on Proxy Server (Static Web App) by @ishaan-jaff in #1726
- [Fix-UI] If user is already logged in using SSO, set allow_user_auth: True by @ishaan-jaff in #1728
- feat: add langfuse cost tracking by @maxdeichmann in #1704
- [Fix] UI - Use jwts by @ishaan-jaff in #1730
- fix(utils.py): support checking if user defined max tokens exceeds model limit by @krrishdholakia in #1729
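A related sketch: litellm exposes the per-model limit that this check compares against, so a caller can look it up before setting max_tokens. The model name is illustrative.

```python
import litellm

# Per-model limit sourced from model_prices_and_context_window.json
limit = litellm.get_max_tokens("gpt-3.5-turbo")
print(limit)
```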
Full Changelog: v1.20.7...v1.20.8
v1.20.7
What's Changed
- [Feat-UI] Make SSO Optional by @ishaan-jaff in #1697
- fix(proxy_server.py): speed up proxy startup time by @krrishdholakia in #1696
- feat(proxy_server.py): enable cache controls per key + no-store cache flag by @krrishdholakia in #1700 (see the sketch after this list)
- fix(proxy_server.py): don't log sk-.. as part of logging object by @krrishdholakia in #1720
- [Feat] Langfuse log embeddings by @ishaan-jaff in #1722
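A sketch of the per-request side of the cache controls from #1700, assuming a LiteLLM proxy running at http://localhost:4000 and a key permitted to pass cache controls; the key, base URL, and exact cache payload shape are assumptions, not confirmed by the release notes.

```python
import openai

client = openai.OpenAI(
    api_key="sk-my-proxy-key",          # placeholder proxy key
    base_url="http://localhost:4000",   # placeholder proxy address
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    # "no-store": serve the request normally but don't write the
    # response to the proxy's cache
    extra_body={"cache": {"no-store": True}},
)
```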
Full Changelog: v1.20.6...v1.20.7
v1.20.6
What's Changed
- [Fix] Graceful rejection of token input for AWS Embeddings API by @ishaan-jaff in #1685
- fix(main.py): register both model name and model name with provider by @krrishdholakia in #1690
- [Feat] LiteLLM Proxy: MSFT SSO Login by @ishaan-jaff in #1691
- Fixes for model cost check and streaming by @krrishdholakia in #1693
- [Feat] Set OpenAI organization for litellm.completion, Proxy Config by @ishaan-jaff in #1689
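A minimal sketch of the #1689 change; the organization id below is a placeholder.

```python
import litellm

resp = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    organization="org-XXXXXXXX",  # placeholder OpenAI organization id
)
```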
Full Changelog: v1.20.5...v1.20.6
v1.20.5
What's Changed
- [Feat-UI] Add form for /key/gen params by @ishaan-jaff in #1678
- build(proxy_cli.py): make running gunicorn an optional cli arg by @krrishdholakia in #1675
- Change quota project to the correct project being used for the call by @eslamkarim in #1657
New Contributors
- @eslamkarim made their first contribution in #1657
Full Changelog: v1.20.3...v1.20.5