
Added support for gemini-2.0-flash in _model_info.py #158

Merged — 1 commit into PySpur-Dev:main on Feb 13, 2025

Conversation

@chriseckinger (Contributor) commented on Feb 13, 2025

Wanted to use this model but couldn't, so I coded it in.

(This is my first pull request in general, so please be kind ;) Let me know if I can improve or do something differently.)


Important

Added support for GEMINI_2_0_FLASH model in _model_info.py with specific constraints and MIME categories.

  • Behavior:
    • Added GEMINI_2_0_FLASH to LLMModels enum in _model_info.py.
    • Updated get_model_info() to include GEMINI_2_0_FLASH with constraints: max_tokens=8192, max_temperature=2.0.
    • Supports MIME categories: IMAGES, AUDIO, VIDEO, DOCUMENTS, TEXT.
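
The change described above can be sketched roughly as follows. This is a minimal, illustrative reconstruction, not the actual `_model_info.py`: the `LLMModels` enum and `get_model_info()` names come from the PR description, but the `MimeCategory` enum, the dict-based return shape, and the model string values are assumptions made here for a self-contained example.

```python
from enum import Enum

class MimeCategory(Enum):
    # MIME categories named in the PR description
    IMAGES = "images"
    AUDIO = "audio"
    VIDEO = "video"
    DOCUMENTS = "documents"
    TEXT = "text"

class LLMModels(Enum):
    GEMINI_1_5_FLASH = "gemini-1.5-flash"
    GEMINI_2_0_FLASH = "gemini-2.0-flash"  # member added by this PR

def get_model_info(model: LLMModels) -> dict:
    """Return per-model constraints and supported MIME categories."""
    if model == LLMModels.GEMINI_2_0_FLASH:
        return {
            "max_tokens": 8192,
            "max_temperature": 2.0,
            "mime_categories": [
                MimeCategory.IMAGES,
                MimeCategory.AUDIO,
                MimeCategory.VIDEO,
                MimeCategory.DOCUMENTS,
                MimeCategory.TEXT,
            ],
        }
    raise ValueError(f"No model info registered for {model}")

info = get_model_info(LLMModels.GEMINI_2_0_FLASH)
print(info["max_tokens"], info["max_temperature"])  # → 8192 2.0
```

In the real file the info for each model would sit alongside the existing Gemini entries, which is why the review comments below focus on naming and placement consistency with `GEMINI_2_0_FLASH_EXP`.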

This description was created by Ellipsis for 8bc7199. It will automatically update as commits are pushed.

@ellipsis-dev (bot) left a comment
👍 Looks good to me! Reviewed everything up to 8bc7199 in 2 minutes and 38 seconds

More details
  • Looked at 36 lines of code in 1 file
  • Skipped 0 files when reviewing.
  • Skipped posting 4 drafted comments based on config settings.
1. backend/app/nodes/llm/_model_info.py:98
  • Draft comment:
    New enum member GEMINI_2_0_FLASH added. Please ensure naming remains consistent with other Gemini models.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 30% <= threshold 50%
    The comment is asking the author to ensure naming consistency, which is a form of asking for confirmation. This violates the rule against asking the author to confirm or ensure something. However, it does point out a potential issue with naming consistency, which could be useful if rephrased as a suggestion or observation rather than a request for confirmation.
2. backend/app/nodes/llm/_model_info.py:357
  • Draft comment:
    Gemini 2.0 Flash model uses max_temperature=2.0 (unlike Gemini 1.5 Flash with 1.0). Confirm this difference is intended.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50%
    The comment violates our rules by asking for confirmation ("Confirm this difference is intended"). We should not ask authors to confirm their intentions. Additionally, max_temperature=2.0 appears to be a common valid value used by many other models in the codebase, so this difference is likely intentional and not a bug. The comment is raising doubt without strong evidence of an actual issue.
    Maybe there's some Gemini-specific documentation that indicates all Gemini models should use the same temperature range? Maybe this is actually a bug that needs attention?
    Without specific evidence from Gemini documentation that this is incorrect, and given that 2.0 is a valid temperature used by many other models, we should assume the author intentionally set this value. The comment is speculative.
    Delete this comment. It asks for confirmation without strong evidence of an issue, and max_temperature=2.0 appears to be a valid value used throughout the codebase.
3. backend/app/nodes/llm/_model_info.py:100
  • Draft comment:
    Ensure naming consistency: The new GEMINI_2_0_FLASH enum value follows GEMINI_2_0_FLASH_EXP. Verify if its placement and naming are intentional.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50%
    None
4. backend/app/nodes/llm/_model_info.py:357
  • Draft comment:
    Check if the max_temperature=2.0 for Gemini 2.0 Flash (vs 1.0 for Flash Exp) is intentional for consistency.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50%
    None

Workflow ID: wflow_8I7vKArOhfv7VQ5u


You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.

@JeanKaddour JeanKaddour merged commit 00d8498 into PySpur-Dev:main Feb 13, 2025
@JeanKaddour (Contributor) commented:
Thank you very much! Looks great :)

@chriseckinger (Contributor, Author) commented:
Thanks, and you're welcome!

@chriseckinger deleted the patch-1 branch on February 14, 2025