feat: Updated Deep Infra models #135

Merged: 1 commit merged into main on Dec 30, 2024

Conversation

@HavenDV (Contributor) commented Dec 30, 2024

Created by GitHub Actions.

Summary by CodeRabbit

  • New Features

    • Added a new AI model QVQ-72B-Preview to the available model list
  • Pricing Updates

    • Reduced pricing for several AI models, including:
      • Llama-3.3-70B-Instruct-Turbo
      • Meta-Llama-3.1-405B-Instruct
      • Meta-Llama-3.1-70B-Instruct-Turbo
      • Qwen2.5-Coder-32B-Instruct
      • Llama-3.1-Nemotron-70B-Instruct
      • Hermes-3-Llama-3.1-405B

@github-actions bot enabled auto-merge December 30, 2024 06:47
coderabbitai bot commented Dec 30, 2024

Walkthrough

The pull request introduces updates to the DeepInfra model integration, focusing on pricing adjustments for several AI models and the addition of a new model, QVQ-72B-Preview. The changes span multiple files within the DeepInfra provider implementation, including model identifiers, model provider configuration, and predefined model classes. The modifications primarily involve updating token costs for existing models and expanding the model catalog with a new entry.

Changes

  • src/DeepInfra/src/DeepInfraModelIds.cs: added a new enum value for QVQ-72B-Preview; reduced prompt/completion costs for multiple models, including Llama-3.3-70B, Meta-Llama-3.1-405B, and the Qwen2.5-Coder models
  • src/DeepInfra/src/DeepInfraModelProvider.cs: updated pricing metadata for existing models; added a metadata entry for QVQ-72B-Preview
  • src/DeepInfra/src/Predefined/AllModels.cs: added a new Qvq72BPreviewModel class inheriting from DeepInfraModel

Sequence Diagram

```mermaid
sequenceDiagram
    participant Provider as DeepInfraProvider
    participant ModelIds as DeepInfraModelIds
    participant Models as Predefined Models

    Provider->>ModelIds: Retrieve Model Identifier
    ModelIds-->>Provider: Return Model ID (e.g., QVQ-72B-Preview)
    Provider->>Models: Initialize Model with Provider
    Models-->>Provider: Create Model Instance
```

Poem

🐰 Hop, hop, pricing takes a leap!
Models dance, their tokens now more cheap
QVQ joins the crew, fresh and bright
DeepInfra's models shine with might
A rabbit's cheer for cost delight! 🤖



@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
src/DeepInfra/src/DeepInfraModelProvider.cs (1)

21-21: Llama31Nemotron70BInstruct cost updates.

Prompt and completion costs are now 1.2E-07 / 3E-07. If this reflects an intentional major discount, consider highlighting it in the documentation or release notes.
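The review comments quote per-token costs in scientific notation (e.g., 1.2E-07 / 3E-07) while the file-level doc comments use $/MTok; the two notations differ by a factor of 10^6. As a quick sanity check (plain arithmetic, not code from this PR), a minimal Python sketch:

```python
def usd_per_token(usd_per_mtok: float) -> float:
    """Convert a $/MTok price to a per-token cost (1 MTok = 1_000_000 tokens)."""
    return usd_per_mtok / 1_000_000

# Cross-check the figures quoted in the review comments:
# $0.12/MTok prompt and $0.30/MTok completion for the Turbo models...
assert abs(usd_per_token(0.12) - 1.2e-07) < 1e-20
assert abs(usd_per_token(0.30) - 3e-07) < 1e-20
# ...and $0.07/MTok prompt / $0.16/MTok completion for Qwen2.5-Coder-32B-Instruct.
assert abs(usd_per_token(0.07) - 7e-08) < 1e-20
assert abs(usd_per_token(0.16) - 1.6e-07) < 1e-20
```

Running this confirms that the per-token metadata values and the $/MTok figures cited in the comments below describe the same prices.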

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b21d040 and 2575f86.

📒 Files selected for processing (3)
  • src/DeepInfra/src/DeepInfraModelIds.cs (7 hunks)
  • src/DeepInfra/src/DeepInfraModelProvider.cs (3 hunks)
  • src/DeepInfra/src/Predefined/AllModels.cs (1 hunks)
🔇 Additional comments (15)
src/DeepInfra/src/DeepInfraModelProvider.cs (7)

13-13: Decreased pricing for Llama3370BInstructTurbo looks consistent.

The updated prompt and completion costs (1.2E-07 / 3E-07) match the recently announced pricing changes.


16-16: Updated pricing for MetaLlama31405BInstruct.

Both prompt and completion token costs are reduced to 8.000000000000001E-07 (i.e., $0.80/MTok), aligning with the new cost structure.
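A side note on the literal 8.000000000000001E-07: the trailing digits look like a double-precision rounding artifact rather than an intentional price. Assuming the value was produced by dividing a $0.80/MTok figure by 1,000,000 (an assumption; the generator is not shown in this PR), a short Python sketch illustrates the effect:

```python
# Hypothetical reproduction of the generator step (an assumption, not from the PR):
# 0.8 is not exactly representable in binary floating point, so the correctly
# rounded quotient 0.8 / 1_000_000 need not land on the nearest double to 8e-07,
# which is why the shortest round-trip representation can show trailing ...001.
per_token = 0.8 / 1_000_000

# The deviation is on the order of one ulp, far below anything billing-relevant...
assert abs(per_token - 8e-07) < 1e-21

# ...and rounding the dollars-per-million-tokens view recovers the intended $0.80/MTok.
assert round(per_token * 1_000_000, 6) == 0.8
```

In other words, the odd literal is harmless for billing math, though the generator could normalize it to 8E-07 if cosmetic consistency matters.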


19-19: Pricing for MetaLlama3170BInstructTurbo updated.

The cost adjustments (1.2E-07 / 3E-07) align well with the new pricing strategy.


20-20: Qwen25Coder32BInstruct cost changes acknowledged.

Reduced costs (7E-08 / 1.6E-07) appear consistent with the file-level updates.


32-32: Hermes3Llama31405B pricing updated successfully.

The new 8.000000000000001E-07 rates are consistent with the rest of the updated cost structure.


34-34: New Qvq72BPreview entry recognized.

Thanks for adding the new model (Qvq72BPreview) at line 34. This maintains the dictionary’s consistent structure and provides a broader model offering.


60-60: Llama323BInstruct prompt and completion costs adjusted.

The new cost values 2E-08 / 2E-08 keep it consistent with the minimal cost tier. Confirm that downstream references have been tested for correctness.

src/DeepInfra/src/Predefined/AllModels.cs (1)

113-117: New Qvq72BPreviewModel class addition looks good.

The class inherits correctly from DeepInfraModel and references the proper enum ID. Good job keeping consistency with existing patterns and documentation comments.

src/DeepInfra/src/DeepInfraModelIds.cs (7)

24-25: Llama3370BInstructTurbo cost reduction confirmed.

Prompt and completion costs updated to $0.12/MTok. This is consistent with your downward pricing strategy for advanced models.


57-58: MetaLlama31405BInstruct cost changes.

Both prompt and completion costs are now $0.8/MTok. Ensure the new rate is appropriately documented in marketing or usage instructions.


90-91: MetaLlama3170BInstructTurbo cost lowered.

Good to see the costs reduced to $0.12/MTok each. This is a straightforward, consistent update.


101-102: Qwen25Coder32BInstruct pricing updated to $0.07/MTok.

Matches the number in DeepInfraModelProvider.cs.


112-113: Llama31Nemotron70BInstruct rates updated.

Now $0.12/MTok. Make sure your upstream billing or usage-tracking system reflects these changes.


233-234: Hermes3Llama31405B prompt and completion cost.

Updated to $0.8/MTok. Please confirm usage references for consistency.


251-261: QVQ-72B-Preview model addition.

The newly introduced model entry includes proper metadata and pricing ($0.25/MTok). Great job documenting its unique multimodal benchmarks.

@github-actions bot merged commit 1cac1e8 into main Dec 30, 2024
3 checks passed