
Propagates llmclient changes to ldp #226

Merged
merged 32 commits into main
Feb 20, 2025
Conversation

maykcaldas
Collaborator

@maykcaldas maykcaldas commented Jan 28, 2025

This is just a draft. A new fh-llm-client release is still needed.

TODO:

  • Merge "Added default llm name to the config" (llm-client#48) to fix ldp issues
  • Update paper-qa to new llmclient and ldp
  • Release llmclient v0.1.0
  • Update fh-llm-client dependency in pyproject.toml
  • Go through cassettes and check if all changes are needed
  • Re-test this PR
  • Merge

@maykcaldas maykcaldas self-assigned this Jan 28, 2025
maykcaldas and others added 2 commits January 28, 2025 16:31

@maykcaldas maykcaldas marked this pull request as ready for review February 10, 2025 23:53
@maykcaldas maykcaldas requested review from a team and jamesbraza February 10, 2025 23:53
Collaborator

@jamesbraza jamesbraza left a comment

Do you mind merging with main, and do we need all these cassette changes?

Collaborator

@jamesbraza jamesbraza left a comment

LGTM, nice work here @maykcaldas

codeflash-ai bot added a commit that referenced this pull request Feb 19, 2025

…8% in PR #226 (`update-llmclient`)

In this optimized version of `prep_tools_for_tokenizer`, the loop that manually constructed each tool's info dictionary is replaced with a list comprehension that calls `model_dump()` on the `Tool` class's `info` attribute. This assumes `info` exposes such a serialization method returning the desired dictionary structure; delegating to it avoids manual dictionary creation, makes the code more concise, and lets the serializer's internal optimizations do the work.
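The pattern the bot describes can be sketched as follows. This is an illustrative stand-in, not the actual ldp code: `Tool` and `ToolInfo` here are simplified dataclasses, and `model_dump()` is a hand-written stand-in mimicking Pydantic's method of the same name.

```python
# Sketch of the optimization: replace a manual dict-building loop with a
# list comprehension that delegates serialization to model_dump().
# Tool/ToolInfo are simplified stand-ins for the real ldp classes.
from dataclasses import dataclass, asdict


@dataclass
class ToolInfo:
    name: str
    description: str

    def model_dump(self) -> dict:
        # Stand-in mimicking Pydantic's model_dump() for illustration.
        return asdict(self)


@dataclass
class Tool:
    info: ToolInfo


def prep_tools_slow(tools: list[Tool]) -> list[dict]:
    # Original pattern: build each dict field by field in a loop.
    out = []
    for tool in tools:
        out.append({"name": tool.info.name, "description": tool.info.description})
    return out


def prep_tools_fast(tools: list[Tool]) -> list[dict]:
    # Optimized pattern: one comprehension delegating to model_dump().
    return [tool.info.model_dump() for tool in tools]
```

Both functions return the same list of dictionaries; the speedup in the real code comes from skipping per-field Python-level dictionary construction.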

codeflash-ai bot commented Feb 19, 2025

⚡️ Codeflash found optimizations for this PR

📄 33,608% (336.08x) speedup for LocalLLMCallOp.prep_tools_for_tokenizer in ldp/nn/graph/llm_call_op.py

⏱️ Runtime: 2.03 milliseconds → 6.02 microseconds (best of 27 runs)

I created a new dependent PR with the suggested changes. Please review:

If you approve, it will be merged into this PR (branch update-llmclient).

@maykcaldas maykcaldas changed the base branch from main to fix-aviary-typing February 19, 2025 23:20
Base automatically changed from fix-aviary-typing to main February 19, 2025 23:46
@maykcaldas maykcaldas merged commit c503dac into main Feb 20, 2025
8 checks passed
@maykcaldas maykcaldas deleted the update-llmclient branch February 20, 2025 19:53
@maykcaldas maykcaldas restored the update-llmclient branch February 21, 2025 00:17
Labels: none
Projects: none
2 participants