Propagates llmclient changes to ldp #226
Conversation
Do you mind merging with `main`, and do we need all these cassette changes?
LGTM, nice work here @maykcaldas
⚡️ Codeflash found optimizations for this PR
📄 33,608% (336.08x) speedup for `prep_tools_for_tokenizer` in PR #226 (`update-llmclient`)
In this optimized version of `prep_tools_for_tokenizer`, the loop that manually constructed the tool information is replaced with a list comprehension that calls the hypothetical `model_dump()` method. This assumes such a method exists and returns the desired dictionary structure, making the code both faster and more concise. The change leverages what appears to be an existing serialization method on the `Tool` class's `info` attribute, minimizing manual dictionary creation and potentially benefiting from internal optimizations within `model_dump()`.
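A minimal sketch of the before/after pattern the bot describes, assuming the `Tool` class's `info` attribute is a pydantic model (so `model_dump()` is pydantic v2's built-in serializer). `ToolInfo`, its fields, and the "before" body are illustrative assumptions; only `prep_tools_for_tokenizer` and `model_dump()` come from the comment above.

```python
from pydantic import BaseModel


# Hypothetical stand-ins for the real ldp types; field names are assumptions.
class ToolInfo(BaseModel):
    name: str
    description: str


class Tool(BaseModel):
    info: ToolInfo


def prep_tools_for_tokenizer_before(tools: list[Tool]) -> list[dict]:
    # Original pattern: build each tool's dict manually inside a loop.
    prepped = []
    for tool in tools:
        prepped.append(
            {"name": tool.info.name, "description": tool.info.description}
        )
    return prepped


def prep_tools_for_tokenizer_after(tools: list[Tool]) -> list[dict]:
    # Optimized pattern: delegate serialization to pydantic's model_dump()
    # inside a list comprehension, avoiding manual dict construction.
    return [tool.info.model_dump() for tool in tools]


if __name__ == "__main__":
    tools = [Tool(info=ToolInfo(name="search", description="Searches the web"))]
    assert prep_tools_for_tokenizer_before(tools) == prep_tools_for_tokenizer_after(tools)
```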
This is just a draft. A new `fh-llm-client` release is still needed.
TODO:
- `llmclient` v0.1.0 release
- `fh-llm-client` dependency in `pyproject.toml`