
[Bug]: Custom logging callback not called when json_logs is set to true #6739

Open

Jacobh2 opened this issue Nov 14, 2024 · 7 comments

Labels: bug (Something isn't working), unable to repro

Jacobh2 (Contributor) commented Nov 14, 2024

What happened?

I tried creating a custom logging callback according to the docs and wrote a very simple handler just to try it out, implementing the following methods:

from litellm.integrations.custom_logger import CustomLogger

class MyCustomHandler(CustomLogger):
    def log_pre_api_call(self, model, messages, kwargs):
        print("*********** Pre-API Call")

    def log_post_api_call(self, kwargs, response_obj, start_time, end_time):
        print("*********** Post-API Call")

    def log_stream_event(self, kwargs, response_obj, start_time, end_time):
        print("*********** On Stream")

    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        print("*********** On Success")

    def log_failure_event(self, kwargs, response_obj, start_time, end_time):
        print("*********** On Failure")

proxy_handler_instance = MyCustomHandler()

And if I set my config to:

litellm_settings:
  allowed_fails: 3
  callbacks: custom_callbacks.proxy_handler_instance
  num_retries: 3
  request_timeout: 600

it all works as expected and I can see in my logs:

08:27:21 - LiteLLM Router:INFO: simple_shuffle.py:72 - get_available_deployment for model: llama3.1-8b, Selected deployment: {'model_name': 'llama3.1-8b', 'litellm_params': {'model': 'bedrock/meta.llama3-1-8b-instruct-v1:0'}, 'model_info': {'id': 'llama3.1-8b-us-west-2', 'db_model': False}} for model: llama3.1-8b
08:27:23 - LiteLLM Router:INFO: router.py:936 - litellm.acompletion(model=bedrock/meta.llama3-1-8b-instruct-v1:0) 200 OK
*********** Pre-API Call
*********** Post-API Call
*********** On Async Success
INFO:     172.20.0.1:58222 - "POST /chat/completions HTTP/1.1" 200 OK

But if I set the LiteLLM config to log using JSON, so that the logs are easier to process and to alert on ERRORs, like this:

litellm_settings:
  allowed_fails: 3
  callbacks: custom_callbacks.proxy_handler_instance
  json_logs: true
  num_retries: 3
  request_timeout: 600

my custom logger isn't called! The output then looks like this:

{"message": "get_available_deployment for model: llama3.1-8b, Selected deployment: {'model_name': 'llama3.1-8b', 'litellm_params': {'model': 'bedrock/meta.llama3-1-8b-instruct-v1:0'}, 'model_info': {'id': 'llama3.1-8b-us-west-2', 'db_model': False}} for model: llama3.1-8b", "level": "INFO", "timestamp": "2024-11-14T08:26:46.522708"}
{"message": "litellm.acompletion(model=bedrock/meta.llama3-1-8b-instruct-v1:0)\u001b[32m 200 OK\u001b[0m", "level": "INFO", "timestamp": "2024-11-14T08:26:49.473654"}
{"message": "get_available_deployment for model: llama3.1-8b, Selected deployment: {'model_name': 'llama3.1-8b', 'litellm_params': {'model': 'bedrock/meta.llama3-1-8b-instruct-v1:0'}, 'model_info': {'id': 'llama3.1-8b-us-west-2', 'db_model': False}} for model: llama3.1-8b", "level": "INFO", "timestamp": "2024-11-14T08:26:52.800844"}
{"message": "litellm.acompletion(model=bedrock/meta.llama3-1-8b-instruct-v1:0)\u001b[32m 200 OK\u001b[0m", "level": "INFO", "timestamp": "2024-11-14T08:26:54.350644"}

Note that the only change is setting json_logs: true in the config, yet it results in no printouts from my custom logger.

Expected behaviour

I would expect my custom logger to still be called, regardless of whether LiteLLM formats its internal logging as JSON or not.

Why I want this: we want to use structured logging to more easily set up alerts on errors. LiteLLM's standard logging is not enough to get all the information we want, but that is OK since it supports custom loggers. In those, we format a nice JSON message that we can later use for alerts. However, not all log messages go via the custom logger, so we are missing some of the ERROR logs; we therefore want LiteLLM to also log in JSON format so that we can catch those as well.

Currently this is not possible, since we have to choose between JSON-formatted LiteLLM logs and a working custom logger. I would love to be able to get both 😄
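To make this concrete, here is a minimal sketch of the kind of structured event we would emit from the custom logger; the helper and its field names are our own illustration, not a LiteLLM schema:

import json
from datetime import datetime, timezone

# Hypothetical helper: shapes a callback event into a single JSON line,
# mirroring the message/level/timestamp layout of LiteLLM's json_logs output.
def format_event(level: str, message: str, **fields) -> str:
    return json.dumps({
        "message": message,
        "level": level,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **fields,
    })

# Example: print(format_event("ERROR", "On Failure", model="llama3.1-8b"))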


Jacobh2 added the bug label Nov 14, 2024
Jacobh2 (Contributor, Author) commented Nov 18, 2024

@krrishdholakia, is this expected behaviour? Do I have to disable JSON logging in order to use the custom logging callback? 😄

Jacobh2 (Contributor, Author) commented Nov 21, 2024

@ishaan-jaff, is this expected behaviour?

krrishdholakia (Contributor) commented:

no

krrishdholakia self-assigned this Nov 21, 2024
krrishdholakia (Contributor) commented:

unable to repro
[Three screenshots attached]

krrishdholakia (Contributor) commented:

Your logging callbacks should be working - I suspect there might be some issue with print / JSON logging, although I'm unable to repro it.

We have other users using our OTel logging, which is a custom logger, with JSON logging enabled. Closing this ticket for now, as I can confirm CustomLogger works with JSON logging. You can try emitting the event to a webhook to confirm this independently of the print statement logic as well, @Jacobh2.

krrishdholakia closed this as not planned (won't fix / can't repro / duplicate / stale) Nov 21, 2024
Jacobh2 (Contributor, Author) commented Nov 21, 2024

I'm sorry, but I am honestly not able to make this work. Let me provide a more complete example; please try it out.

I am running ghcr.io/berriai/litellm-database:main-v1.52.12 in docker compose:

services:

  api2:
    image: ghcr.io/berriai/litellm-database:main-v1.52.12
    command: ["--config", "/app/config.yaml"]
    env_file:
      - .env
    environment:
      - DATABASE_URL=postgresql://username:password@db2:5432/myDatabase
    ports:
      - 4000:4000
    volumes:
      - ./config.yaml:/app/config.yaml
      - ./custom_callbacks.py:/app/custom_callbacks.py

  db2:
    image: postgres:13.12
    environment:
      POSTGRES_USER: username
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myDatabase

The .env file includes all the secrets for my models; add these according to your own config.

The custom logger is:

from litellm.integrations.custom_logger import CustomLogger

class MyCustomHandler(CustomLogger):
    def log_pre_api_call(self, model, messages, kwargs): 
        print("************** Pre-API Call")
    
    def log_post_api_call(self, kwargs, response_obj, start_time, end_time): 
        print("************** Post-API Call")
    
    def log_stream_event(self, kwargs, response_obj, start_time, end_time):
        print("************** On Stream")
        
    def log_success_event(self, kwargs, response_obj, start_time, end_time): 
        print("************** On Success")

    def log_failure_event(self, kwargs, response_obj, start_time, end_time): 
        print("************** On Failure")
    
    #### ASYNC #### - for acompletion/aembeddings
    
    async def async_log_stream_event(self, kwargs, response_obj, start_time, end_time):
        print("************** On Async Streaming")

    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        print("************** On Async Success")

    async def async_log_failure_event(self, kwargs, response_obj, start_time, end_time):
        print("************** On Async Failure")


proxy_handler_instance = MyCustomHandler()

The config, without the models, looks like:

general_settings:
  disable_master_key_return: true
  store_model_in_db: false
litellm_settings:
  allowed_fails: 3
  callbacks: custom_callbacks.proxy_handler_instance
  json_logs: false
  num_retries: 3
  redact_messages_in_exceptions: true
  request_timeout: 600
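
With this stack up, the proxy can be exercised with a request along these lines (a sketch: the model name and port come from the setup above, while the API key is a placeholder for whatever master key is configured):

import requests

# Hypothetical probe request against the proxy from the compose file above;
# the Authorization value is a placeholder, not a real key.
resp = requests.post(
    "http://localhost:4000/chat/completions",
    headers={"Authorization": "Bearer sk-<your-master-key>"},
    json={
        "model": "llama3.1-8b",
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=600,
)
print(resp.status_code, resp.json())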

And when I set json_logs to true, nothing from my custom logger is printed, but when I switch it to false, I correctly get:

api2-1  | ************** Pre-API Call
api2-1  | ************** Post-API Call
api2-1  | ************** On Async Success

as expected @krrishdholakia

krrishdholakia (Contributor) commented Nov 22, 2024

@Jacobh2

Please try either:

  1. using verbose_logger.info (from litellm import verbose_logger), or
  2. emitting the event to a webhook / writing to a local .txt file, to confirm this independently of the print statement logic (a sketch combining both follows below).

This issue is closed as unable to repro, and we have loggers today which are based on CustomLogger and work correctly with json_logs set. Right now this seems to have more to do with how JSON logging interacts with print statements than with a bug in litellm. Please feel free to reopen if even suggestion #2 (emitting the call to an external service / writing to a log.txt file) indicates the callback isn't actually being reached.
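
A minimal sketch of both suggestions in one handler, assuming the same module layout as the repro above (the log file path is an arbitrary choice):

from litellm import verbose_logger
from litellm.integrations.custom_logger import CustomLogger

class MyCustomHandler(CustomLogger):
    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        # Suggestion 1: route the event through litellm's own logger, so it is
        # formatted by the active (JSON) handler instead of bypassing it via print().
        verbose_logger.info("custom handler: On Async Success")

        # Suggestion 2: append to a local file, side-stepping stdout entirely;
        # an entry here proves the callback fired regardless of log formatting.
        with open("/tmp/callback_probe.log", "a") as f:  # arbitrary path
            f.write("On Async Success\n")

proxy_handler_instance = MyCustomHandler()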
