
fix(ollama): metrics handling #1514

Open · wants to merge 7 commits into base: main

Conversation

puffo (Contributor) commented Jan 19, 2024

The previous fix was flawed as the /chat API is different from the /generate API.

While fixing the regression, I noticed inconsistent behaviour in Ollama where the prompt_eval_count disappears after subsequent requests, yet prompt_eval_duration is persisted. I believe this inconsistent behaviour is why the workaround is needed in the first place, and it adds some unnecessary complexity in litellm that might otherwise not be required.

I suspect this is a bug on Ollama's side and I've opened an issue to confirm my assumptions with the community there. ollama/ollama#2068

I'm pushing my changes here in the meantime for review and it can be merged up once we get clarity from the Ollama team.
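
For context, here is a rough sketch of how the two endpoints' final payloads differ and where the usage metrics live (shapes based on the public Ollama API docs; values are made up and this is not litellm code):

```python
# Rough sketch of the shape difference between Ollama's two endpoints
# (illustrative values; based on the public API docs, not litellm code).

# /api/generate final chunk: the completion text lives under "response".
generate_response = {
    "model": "llama2",
    "response": "The sky is blue because...",
    "done": True,
    "prompt_eval_count": 26,              # sometimes missing on later requests
    "prompt_eval_duration": 130_079_000,  # nanoseconds
    "eval_count": 290,
}

# /api/chat final chunk: the completion text lives under "message".
chat_response = {
    "model": "llama2",
    "message": {"role": "assistant", "content": "The sky is blue because..."},
    "done": True,
    "prompt_eval_duration": 342_546_000,  # persisted even when the count is absent
    "eval_count": 282,
}
```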


puffo (Contributor, Author) commented Jan 19, 2024

Side note: I wasn't able to run the local tests for Ollama after uncommenting them, due to missing images and other odd behaviour with async calls. It might be related to these issues, or it might just be confusion on my part.

Perhaps some additional instructions with explicit dependencies at the top of those tests would help make local Ollama testing easier?

Another option is to see if we can get ollama deployed via CLI for testing, but that seems pretty ambitious ;)

total_tokens=None,
)

prompt_tokens = self.response_json["prompt_eval_count"]

Review comment:
prompt_eval_count cannot be found, but there is eval_count in the JSON response.
[screenshot of the JSON response]

@@ -120,6 +120,27 @@ def get_config(cls):
and v is not None
}

# Usage metrics are only populated when the ollama response indicates `"done": true`
# https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-completion
class OllamaUsage:

Contributor:

This won't actually solve the inconsistent return of prompt eval count by Ollama. We already have an existing fix in place for the prompt count; I believe all we're missing is the same thing for the completion tokens.

puffo (Contributor, Author) replied:

👍 Alright I can mirror the prior fix for eval_count.

I was hoping that we'd be able to remove some of the fallback behaviour, but it looks like the bugfix on Ollama's side (to reliably return values for these keys) might not be straightforward, so building for robustness will be a good approach.

puffo (Contributor, Author):

Applied fallbacks for both prompt_eval_count and eval_count below.

They're both encapsulated within the OllamaUsage class to provide a clear internal interface (and easier testing) for that logic.
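
For reference, a minimal sketch of the shape such an encapsulation could take (illustrative only; the constructor signature and fallback details here are assumptions, not the actual diff):

```python
from litellm.utils import openai_token_counter  # import path assumed from the diff


class OllamaUsage:
    """Illustrative sketch: pull usage metrics out of an Ollama response,
    falling back to token-count estimates when the keys are missing."""

    def __init__(self, response_json: dict, messages: list, completion_text: str):
        self.response_json = response_json
        self.messages = messages
        self.completion_text = completion_text

    def prompt_tokens(self) -> int:
        if "prompt_eval_count" in self.response_json:
            return self.response_json["prompt_eval_count"]
        return self._prompt_tokens_fallback()

    def completion_tokens(self) -> int:
        if "eval_count" in self.response_json:
            return self.response_json["eval_count"]
        return self._completion_tokens_fallback()

    def _prompt_tokens_fallback(self) -> int:
        # Mirrors the existing prompt-count fix: estimate from the request messages.
        return openai_token_counter(messages=self.messages)

    def _completion_tokens_fallback(self) -> int:
        # Hypothetical mirror for completion tokens: wrap the generated text as an
        # assistant message and count its tokens the same way.
        return openai_token_counter(
            messages=[{"role": "assistant", "content": self.completion_text}]
        )
```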

krrishdholakia (Contributor) commented:
@puffo let me know when the PR is ready for review again

puffo (Contributor, Author) left a comment:

Ready to review again @krrishdholakia !

I added some more type annotations and extra test cases to try to increase our confidence in these fallbacks, but let me know if there's more to do and I'd be happy to help :)


@@ -121,6 +124,61 @@ def get_config(cls):
}


class ResponseJSON(TypedDict, total=False):

puffo (Contributor, Author):

I found myself frequently having to revisit the documentation to check the response shape, so I added these TypedDict specs to make the Ollama API specification a bit clearer.
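
As a rough illustration, a TypedDict for the completion response could look like the following (field names taken from the Ollama docs linked above; which fields the PR actually declares isn't shown in this thread):

```python
from typing import List, TypedDict


class ResponseJSON(TypedDict, total=False):
    # total=False: every key is optional, since the usage metrics only
    # appear on the final response where "done" is true.
    model: str
    created_at: str
    response: str
    done: bool
    context: List[int]
    total_duration: int
    load_duration: int
    prompt_eval_count: int
    prompt_eval_duration: int
    eval_count: int
    eval_duration: int
```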


def _prompt_tokens_fallback(self):
    print_verbose(f"Warning: `prompt_eval_count` missing from response, estimating by using OpenAI token counting with cl100k_base encoding.")
    return openai_token_counter(messages=self.messages)

puffo (Contributor, Author):

Token counting is a lot trickier for the /chat/completions endpoint. I figured that we should just reuse the openai_token_counter for the fallback and provide a warning message?
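
As a usage sketch of that fallback path (assuming openai_token_counter is importable from litellm.utils, which the diff implies but this thread doesn't show):

```python
from litellm.utils import openai_token_counter

# Estimate prompt tokens when Ollama omits `prompt_eval_count`.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
]
estimated_prompt_tokens = openai_token_counter(messages=messages)
print(f"Estimated prompt tokens (cl100k_base): {estimated_prompt_tokens}")
```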

Shadoweee77 commented:

Any news on this? I also suffer from this issue :)

puffo (Contributor, Author) commented Jan 25, 2024

> Any news on this? I also suffer from this issue :)

It has been fixed in this commit and released in v1.19.2.

Happy to close this if you'd rather avoid the extra overhead @krrishdholakia

Shadoweee77 commented:

It still persists for me even in 1.19.2 - I tested it yesterday. I'd like to help track it down if it is an issue with LiteLLM.
