Releases: BerriAI/litellm
# v1.61.8-nightly

## What's Changed
- (UI) Allow adding models for a Team (#8598) by @ishaan-jaff in #8601
- (UI) Refactor Add Models for Specific Teams by @ishaan-jaff in #8592
- (UI) Improvements to Add Team Model Flow by @ishaan-jaff in #8603
Full Changelog: v1.61.7...v1.61.8-nightly
## Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.8-nightly
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
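Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A minimal sketch of a request payload for the `/chat/completions` route exercised in the load tests below (the model name `gpt-4o` and the API key `sk-1234` are placeholders, not values tied to this release):

```shell
# JSON body in the OpenAI chat-completions shape expected by the proxy
PAYLOAD='{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
echo "$PAYLOAD"

# With the container from above running, the request would look like this
# (commented out here since it needs a live proxy):
# curl -s http://localhost:4000/chat/completions \
#   -H "Authorization: Bearer sk-1234" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```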
## Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed ❌ | 110.0 | 134.41650367760357 | 6.414710257507693 | 6.414710257507693 | 1920 | 1920 | 98.11249399996314 | 4277.784283000017 |
Aggregated | Failed ❌ | 110.0 | 134.41650367760357 | 6.414710257507693 | 6.414710257507693 | 1920 | 1920 | 98.11249399996314 | 4277.784283000017 |
# v1.61.7.dev1

## What's Changed
- (UI) Allow adding models for a Team (#8598) by @ishaan-jaff in #8601
- (UI) Refactor Add Models for Specific Teams by @ishaan-jaff in #8592
- (UI) Improvements to Add Team Model Flow by @ishaan-jaff in #8603
Full Changelog: v1.61.7...v1.61.7.dev1
# v1.61.7-nightly

## What's Changed
- docs: update README.md API key and model example typos by @colesmcintosh in #8590
- Fix typo in main readme by @scosman in #8574
- (UI) Allow adding models for a Team by @ishaan-jaff in #8598
- feat(ui): alert when adding model without STORE_MODEL_IN_DB by @Aditya8840 in #8591
- Revert "(UI) Allow adding models for a Team" by @ishaan-jaff in #8600
- Litellm stable UI 02 17 2025 p1 by @krrishdholakia in #8599
## New Contributors
- @colesmcintosh made their first contribution in #8590
- @scosman made their first contribution in #8574
- @Aditya8840 made their first contribution in #8591
Full Changelog: v1.61.6-nightly...v1.61.7-nightly
# v1.61.7

## What's Changed
- docs(perplexity.md): removing `return_citations` documentation by @miraclebakelaser in #8527
- (docs - cookbook) litellm proxy x langfuse by @ishaan-jaff in #8541
- UI Fixes and Improvements (02/14/2025) p1 by @krrishdholakia in #8546
- (Feat) - Add `/bedrock/meta.llama3-3-70b-instruct-v1:0` tool calling support + cost tracking + base llm unit test for tool calling by @ishaan-jaff in #8545
- fix(general_settings.tsx): filter out empty dictionaries post fallbac… by @krrishdholakia in #8550
- (perf) Fix memory leak on `/completions` route by @ishaan-jaff in #8551
- Org Flow Improvements by @krrishdholakia in #8549
- feat(openai/o_series_transformation.py): support native streaming for o1 by @krrishdholakia in #8552
- fix(team_endpoints.py): fix team info check to handle team keys by @krrishdholakia in #8529
- build: ui build update by @krrishdholakia in #8553
- Optimize Alpine Dockerfile by removing redundant apk commands by @PeterDaveHello in #5016
- fix(main.py): fix key leak error when unknown provider given by @krrishdholakia in #8556
- (Feat) - return `x-litellm-attempted-fallbacks` in responses from litellm proxy by @ishaan-jaff in #8558
- Add remaining org CRUD endpoints + support deleting orgs on UI by @krrishdholakia in #8561
- Enable update/delete org members on UI by @krrishdholakia in #8560
- (Bug Fix) - Add Regenerate Key on Virtual Keys Tab by @ishaan-jaff in #8567
- (Bug Fix + Better Observability) - BudgetResetJob: for resetting key, team, user budgets by @ishaan-jaff in #8562
- (Patch/bug fix) - UI, filter out litellm ui session tokens on Virtual Keys Page by @ishaan-jaff in #8568
- refactor(teams.tsx): refactor to display all teams, across all orgs by @krrishdholakia in #8565
- docs: update README.md API key and model example typos by @colesmcintosh in #8590
- Fix typo in main readme by @scosman in #8574
- (UI) Allow adding models for a Team by @ishaan-jaff in #8598
- feat(ui): alert when adding model without STORE_MODEL_IN_DB by @Aditya8840 in #8591
- Revert "(UI) Allow adding models for a Team" by @ishaan-jaff in #8600
- Litellm stable UI 02 17 2025 p1 by @krrishdholakia in #8599
## New Contributors
- @PeterDaveHello made their first contribution in #5016
- @colesmcintosh made their first contribution in #8590
- @scosman made their first contribution in #8574
- @Aditya8840 made their first contribution in #8591
Full Changelog: v1.61.3...v1.61.7
## Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.7
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed ❌ | 180.0 | 206.98769618433857 | 6.145029010811349 | 6.145029010811349 | 1839 | 1839 | 146.21495699998377 | 3174.8161250000067 |
Aggregated | Failed ❌ | 180.0 | 206.98769618433857 | 6.145029010811349 | 6.145029010811349 | 1839 | 1839 | 146.21495699998377 | 3174.8161250000067 |
# v1.61.6.dev1

## What's Changed
- docs: update README.md API key and model example typos by @colesmcintosh in #8590
- Fix typo in main readme by @scosman in #8574
- (UI) Allow adding models for a Team by @ishaan-jaff in #8598
- feat(ui): alert when adding model without STORE_MODEL_IN_DB by @Aditya8840 in #8591
- Revert "(UI) Allow adding models for a Team" by @ishaan-jaff in #8600
- Litellm stable UI 02 17 2025 p1 by @krrishdholakia in #8599
- (UI) Allow adding models for a Team (#8598) by @ishaan-jaff in #8601
- (UI) Refactor Add Models for Specific Teams by @ishaan-jaff in #8592
- (UI) Improvements to Add Team Model Flow by @ishaan-jaff in #8603
## New Contributors
- @colesmcintosh made their first contribution in #8590
- @scosman made their first contribution in #8574
- @Aditya8840 made their first contribution in #8591
Full Changelog: v1.61.6-nightly...v1.61.6.dev1
## Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.6.dev1
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed ❌ | 170.0 | 197.04136517618934 | 6.316924319787487 | 6.316924319787487 | 1890 | 1890 | 142.7094059999945 | 2646.323271999961 |
Aggregated | Failed ❌ | 170.0 | 197.04136517618934 | 6.316924319787487 | 6.316924319787487 | 1890 | 1890 | 142.7094059999945 | 2646.323271999961 |
# v1.61.6-nightly

## What's Changed
- refactor(teams.tsx): refactor to display all teams, across all orgs by @krrishdholakia in #8565
Full Changelog: v1.61.5-nightly...v1.61.6-nightly
## Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.6-nightly
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed ❌ | 170.0 | 197.37858561234376 | 6.172709160882249 | 6.172709160882249 | 1847 | 1847 | 139.8097940000298 | 3194.1706680000266 |
Aggregated | Failed ❌ | 170.0 | 197.37858561234376 | 6.172709160882249 | 6.172709160882249 | 1847 | 1847 | 139.8097940000298 | 3194.1706680000266 |
# v1.61.5-nightly

## What's Changed
- Optimize Alpine Dockerfile by removing redundant apk commands by @PeterDaveHello in #5016
- fix(main.py): fix key leak error when unknown provider given by @krrishdholakia in #8556
- (Feat) - return `x-litellm-attempted-fallbacks` in responses from litellm proxy by @ishaan-jaff in #8558
- Add remaining org CRUD endpoints + support deleting orgs on UI by @krrishdholakia in #8561
- Enable update/delete org members on UI by @krrishdholakia in #8560
- (Bug Fix) - Add Regenerate Key on Virtual Keys Tab by @ishaan-jaff in #8567
- (Bug Fix + Better Observability) - BudgetResetJob: for resetting key, team, user budgets by @ishaan-jaff in #8562
- (Patch/bug fix) - UI, filter out litellm ui session tokens on Virtual Keys Page by @ishaan-jaff in #8568
## New Contributors
- @PeterDaveHello made their first contribution in #5016
Full Changelog: v1.61.3.dev1...v1.61.5-nightly
## Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.5-nightly
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed ❌ | 150.0 | 169.92952748954406 | 6.233287189548679 | 6.233287189548679 | 1865 | 1865 | 130.2254270000276 | 1515.568768999998 |
Aggregated | Failed ❌ | 150.0 | 169.92952748954406 | 6.233287189548679 | 6.233287189548679 | 1865 | 1865 | 130.2254270000276 | 1515.568768999998 |
# v1.61.3-stable
Full Changelog: v1.61.3...v1.61.3-stable
## Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.61.3-stable
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed ❌ | 150.0 | 178.8079934268893 | 6.237874881624696 | 6.237874881624696 | 1867 | 1867 | 130.88419100000692 | 2746.1132829999997 |
Aggregated | Failed ❌ | 150.0 | 178.8079934268893 | 6.237874881624696 | 6.237874881624696 | 1867 | 1867 | 130.88419100000692 | 2746.1132829999997 |
# v1.61.4-nightly

## What's Changed
- docs(perplexity.md): removing `return_citations` documentation by @miraclebakelaser in #8527
- (docs - cookbook) litellm proxy x langfuse by @ishaan-jaff in #8541
- UI Fixes and Improvements (02/14/2025) p1 by @krrishdholakia in #8546
- (Feat) - Add `/bedrock/meta.llama3-3-70b-instruct-v1:0` tool calling support + cost tracking + base llm unit test for tool calling by @ishaan-jaff in #8545
- fix(general_settings.tsx): filter out empty dictionaries post fallbac… by @krrishdholakia in #8550
- (perf) Fix memory leak on `/completions` route by @ishaan-jaff in #8551
- Org Flow Improvements by @krrishdholakia in #8549
- feat(openai/o_series_transformation.py): support native streaming for o1 by @krrishdholakia in #8552
- fix(team_endpoints.py): fix team info check to handle team keys by @krrishdholakia in #8529
- build: ui build update by @krrishdholakia in #8553
Full Changelog: v1.61.3...v1.61.4-nightly
## Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.4-nightly
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed ❌ | 190.0 | 216.89425311206062 | 6.2617791082055785 | 6.2617791082055785 | 1874 | 1874 | 143.52555700003222 | 3508.21726800001 |
Aggregated | Failed ❌ | 190.0 | 216.89425311206062 | 6.2617791082055785 | 6.2617791082055785 | 1874 | 1874 | 143.52555700003222 | 3508.21726800001 |
# v1.61.3.dev1

## What's Changed
- docs(perplexity.md): removing `return_citations` documentation by @miraclebakelaser in #8527
- (docs - cookbook) litellm proxy x langfuse by @ishaan-jaff in #8541
- UI Fixes and Improvements (02/14/2025) p1 by @krrishdholakia in #8546
- (Feat) - Add `/bedrock/meta.llama3-3-70b-instruct-v1:0` tool calling support + cost tracking + base llm unit test for tool calling by @ishaan-jaff in #8545
- fix(general_settings.tsx): filter out empty dictionaries post fallbac… by @krrishdholakia in #8550
- (perf) Fix memory leak on `/completions` route by @ishaan-jaff in #8551
- Org Flow Improvements by @krrishdholakia in #8549
- feat(openai/o_series_transformation.py): support native streaming for o1 by @krrishdholakia in #8552
- fix(team_endpoints.py): fix team info check to handle team keys by @krrishdholakia in #8529
- build: ui build update by @krrishdholakia in #8553
Full Changelog: v1.61.3...v1.61.3.dev1
## Docker Run LiteLLM Proxy

```
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.61.3.dev1
```

Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
## Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Failed ❌ | 110.0 | 135.69110078279184 | 6.367509457796576 | 6.367509457796576 | 1906 | 1906 | 93.52984899999228 | 3754.151283999988 |
Aggregated | Failed ❌ | 110.0 | 135.69110078279184 | 6.367509457796576 | 6.367509457796576 | 1906 | 1906 | 93.52984899999228 | 3754.151283999988 |