
Add support for Ollama #218

Draft: vishwamartur wants to merge 2 commits into `main`

Conversation


@vishwamartur commented on Jan 5, 2025

Related to #188

Add support for Ollama to enable users to run open source models locally.

  • Provider Service Integration (see the sketch after this list)
    • Add Ollama API integration in `app/modules/intelligence/provider/provider_service.py`
    • Implement a method to get the Ollama LLM
    • Update the `list_available_llms` method to include Ollama
  • Configuration Options
    • Add configuration options for the Ollama endpoint and model selection in `app/core/config_provider.py`
    • Update the `ConfigProvider` class to include Ollama settings
  • Agent Factory and Injector Service
    • Add support for Ollama models in `app/modules/intelligence/agents/agent_factory.py`
    • Implement a method to create the Ollama agent
    • Add support for Ollama models in `app/modules/intelligence/agents/agent_injector_service.py`
    • Implement a method to get the Ollama agent
  • Tool Service
    • Add tools for Ollama model support in `app/modules/intelligence/tools/tool_service.py`
    • Implement methods to interact with Ollama models
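
A minimal sketch of the provider-side initialization described above, assembled from the environment variables and defaults quoted in the review comments below. The standalone `_init_ollama_llm` helper comes from a reviewer suggestion rather than the PR's exact code, and depending on the installed `langchain-ollama` version the class may be exposed as `OllamaLLM` rather than `Ollama`:

```python
import logging
import os

# The PR imports Ollama from langchain_ollama; newer releases of the package
# expose the equivalent class as OllamaLLM.
from langchain_ollama import Ollama


def _init_ollama_llm():
    """Build an Ollama LLM from environment configuration (defaults as quoted in the review)."""
    logging.info("Initializing Ollama LLM")
    ollama_endpoint = os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434")
    ollama_model = os.getenv("OLLAMA_MODEL", "llama2")
    return Ollama(base_url=ollama_endpoint, model=ollama_model)
```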

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for Ollama as a new Language Model (LLM) provider.
    • Introduced configuration options for Ollama endpoint and model.
    • Expanded agent and tool capabilities to include Ollama-based services.
  • Improvements

    • Enhanced configuration management for LLM providers.
    • Extended agent and tool initialization to support new Ollama integration.

coderabbitai bot (Contributor) commented on Jan 5, 2025

Walkthrough

The pull request introduces support for the Ollama language model provider across multiple components of the application. The changes include enhancements to configuration management, agent creation, provider services, and tool initialization. New attributes for Ollama configuration are added, alongside the integration of Ollama as a new agent and tool. This implementation allows the system to leverage Ollama's capabilities alongside existing language model providers, ensuring consistent integration across various modules.

Changes

| File | Change Summary |
| --- | --- |
| `app/core/config_provider.py` | Added `ollama_endpoint` and `ollama_model` configuration attributes with environment-variable retrieval and introduced a `get_ollama_config` method. |
| `app/modules/intelligence/agents/agent_factory.py` | Imported `Ollama` and added `ollama_agent` to the agent creation map in the `_create_agent` method. |
| `app/modules/intelligence/agents/agent_injector_service.py` | Added initialization for `ollama_agent` in the `_initialize_agents` method. |
| `app/modules/intelligence/provider/provider_service.py` | Integrated Ollama as a new LLM provider with updates to the `list_available_llms`, `get_large_llm`, `get_small_llm`, and `get_llm_provider_name` methods. |
| `app/modules/intelligence/tools/tool_service.py` | Added `_get_ollama_endpoint` and `_get_ollama_model` methods and initialized `ollama_tool` in `_initialize_tools`. |

Sequence Diagram

sequenceDiagram
    participant ConfigProvider
    participant ProviderService
    participant AgentFactory
    participant AgentInjectorService
    participant ToolService

    ConfigProvider->>ProviderService: Provide Ollama config
    ProviderService->>AgentFactory: Create Ollama agent
    AgentFactory->>AgentInjectorService: Initialize Ollama agent
    AgentInjectorService->>ToolService: Initialize Ollama tool
    ToolService-->>AgentInjectorService: Ollama tool ready

Poem

🐰 In the realm of code, a new star shines bright,
Ollama arrives with linguistic might!
Configs aligned, agents set free,
A rabbit's dance of AI glee 🎉
Expanding horizons, one model at a time! 🚀


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (4)
app/modules/intelligence/agents/agent_factory.py (1)

74-77: New ollama_agent creation looks correct
The code properly retrieves base_url and model from the provider service before instantiating Ollama. This maintains consistency with existing agent definitions. Ensure that upstream configuration calls (e.g., get_ollama_endpoint() and get_ollama_model()) handle missing or invalid environment variables gracefully.
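
A hedged sketch of what this map entry might look like; the `AgentFactory` skeleton and the `provider_service` attribute are assumptions, while `get_ollama_endpoint()` and `get_ollama_model()` are the accessor names referenced in this comment:

```python
from langchain_ollama import Ollama  # may be exposed as OllamaLLM in newer releases


class AgentFactory:
    def __init__(self, provider_service):
        self.provider_service = provider_service

    def _create_agent(self, agent_id: str):
        # Map of agent ids to constructors; only the new Ollama entry is shown here.
        agent_map = {
            "ollama_agent": lambda: Ollama(
                base_url=self.provider_service.get_ollama_endpoint(),
                model=self.provider_service.get_ollama_model(),
            ),
        }
        return agent_map[agent_id]()
```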

app/modules/intelligence/tools/tool_service.py (1)

69-72: ollama_tool initialization
Instantiating Ollama in _initialize_tools aligns with the approach used elsewhere and keeps the code tidy. Consider adding docstrings or usage instructions for ollama_tool to help future contributors.

app/modules/intelligence/provider/provider_service.py (2)

204-209: Refactor suggestion to avoid duplication.

The Ollama initialization logic here is nearly identical to the logic in get_small_llm. Consider extracting the repeated initialization into a shared helper method to reduce code duplication and improve maintainability.

+ def _init_ollama_llm(self):
+     logging.info("Initializing Ollama LLM")
+     ollama_endpoint = os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434")
+     ollama_model = os.getenv("OLLAMA_MODEL", "llama2")
+     return Ollama(base_url=ollama_endpoint, model=ollama_model)

def get_large_llm(self, agent_type: AgentType):
    ...
    elif preferred_provider == "ollama":
-       logging.info("Initializing Ollama LLM")
-       ollama_endpoint = os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434")
-       ollama_model = os.getenv("OLLAMA_MODEL", "llama2")
-       self.llm = Ollama(base_url=ollama_endpoint, model=ollama_model)
+       self.llm = self._init_ollama_llm()
    ...

338-343: Consistent approach for small LLM initialization.

The code block for initializing Ollama in get_small_llm is consistent with the logic in get_large_llm. The same refactoring advice applies: extracting these lines into a shared method would simplify future updates.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 55eb585 and 6ecad88.

📒 Files selected for processing (5)
  • app/core/config_provider.py (1 hunks)
  • app/modules/intelligence/agents/agent_factory.py (2 hunks)
  • app/modules/intelligence/agents/agent_injector_service.py (2 hunks)
  • app/modules/intelligence/provider/provider_service.py (5 hunks)
  • app/modules/intelligence/tools/tool_service.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
app/modules/intelligence/tools/tool_service.py

  • Line 76: Undefined name `ConfigProvider` (F821)
  • Line 79: Undefined name `ConfigProvider` (F821)

🔇 Additional comments (9)
app/modules/intelligence/agents/agent_factory.py (1)

27-27: Import from langchain_ollama is appropriate
Good job importing Ollama here. This addition is consistent with the new Ollama integration introduced in the PR.

app/modules/intelligence/agents/agent_injector_service.py (2)

31-31: Ollama import is consistent
This import mirrors the pattern seen in agent_factory.py and neatly ties into the newly added _initialize_agents logic.


63-66: ollama_agent instantiation is well structured
The agent is clearly initialized with base_url and model from the provider. Confirm that any potential exceptions (e.g., missing/invalid endpoints) are handled and logged.
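
One possible shape for the exception handling and logging this comment asks to confirm, as a sketch only; the `_initialize_agents` scaffolding and the accessor names are assumptions, not the PR's exact code:

```python
import logging

from langchain_ollama import Ollama  # may be exposed as OllamaLLM in newer releases


class AgentInjectorService:
    def __init__(self, provider_service):
        self.provider_service = provider_service
        self.agents = {}
        self._initialize_agents()

    def _initialize_agents(self):
        try:
            self.agents["ollama_agent"] = Ollama(
                base_url=self.provider_service.get_ollama_endpoint(),
                model=self.provider_service.get_ollama_model(),
            )
        except Exception as exc:  # e.g. missing or malformed endpoint configuration
            logging.error("Failed to initialize ollama_agent: %s", exc)
            self.agents["ollama_agent"] = None
```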

app/core/config_provider.py (2)

16-17: Useful defaults for Ollama
Providing default values for ollama_endpoint and ollama_model helps avoid errors in development environments where these environment variables may be undefined.


25-29: Convenient get_ollama_config method
Encapsulating Ollama config in one method is a good pattern for clarity and adaptation to future changes.
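
A sketch of what the `ConfigProvider` additions could look like, inferred from this review and from the `get_ollama_config()["endpoint"]` / `["model"]` usage quoted later in the thread; the exact attribute layout and the module-level instance are assumptions:

```python
import os


class ConfigProvider:
    def __init__(self):
        # Defaults keep development environments working when the variables are unset.
        self.ollama_endpoint = os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434")
        self.ollama_model = os.getenv("OLLAMA_MODEL", "llama2")

    def get_ollama_config(self) -> dict:
        # Keys match how tool_service consumes the config later in this PR.
        return {"endpoint": self.ollama_endpoint, "model": self.ollama_model}


# A module-level instance is implied by the later "Import config_provider" commit.
config_provider = ConfigProvider()
```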

app/modules/intelligence/tools/tool_service.py (1)

39-39: Ollama import lines up with the new tool
This import matches the rest of the PR’s Ollama integration strategy, enabling the new ollama_tool.

app/modules/intelligence/provider/provider_service.py (3)

48-52: Great addition to the LLM options.

Listing the Ollama provider in the available LLMs ensures visibility and discoverability for users. This is consistent with the approach used for other providers. No issues found.


358-359: Provider identification logic looks correct.

Returning "Ollama" when the LLM instance is of type Ollama ensures correct provider identification for downstream consumers. No issues found.


10-10: Confirm the version and stability of the imported library.

While the langchain_ollama import is crucial for providing Ollama functionality, consider verifying its version and stability before merging. If the library is experimental or unstable, it could introduce unforeseen issues.

✅ Verification successful

The langchain_ollama import appears to be part of a stable langchain ecosystem

The codebase is using a well-established langchain ecosystem with specific version constraints:

  • langchain-core==0.2.43
  • langchain-community==0.2.19
  • langchain==0.2.17

The langchain_ollama import is consistently used across multiple core services in the application:

  • agent_factory.py
  • agent_injector_service.py
  • provider_service.py
  • tool_service.py

This consistent usage pattern and integration with versioned langchain dependencies suggests the library is stable and properly integrated into the codebase.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the version of langchain_ollama in the repo's dependencies.
# This script searches for a requirements or pyproject file referencing langchain_ollama.

fd -IH "requirements" --exec rg 'langchain_ollama'
fd -IH "pyproject" --exec rg 'langchain_ollama'

Length of output: 94


Script:

#!/bin/bash
# Let's try searching for any dependency management files with different patterns
fd -e txt -e toml -e ini -e cfg -IH "requirements|pyproject|setup|poetry" --exec rg -i 'langchain|ollama'

# Also check for any usage patterns of this library in the codebase
rg -i "from\s+langchain_ollama|import\s+langchain_ollama" -A 3

Length of output: 1526

`app/modules/intelligence/tools/tool_service.py`: an earlier review thread on this file was marked outdated and resolved.
@dhirenmathur self-requested a review on January 6, 2025 at 10:59
@dhirenmathur (Contributor) commented

@vishwamartur thanks for your contribution! Could you please go through the CodeRabbit review comments and fix the major ones? Thanks

@vishwamartur (Author) commented

@dhirenmathur Sure I will do that

* **Configuration**: Use `config_provider` to retrieve Ollama configuration instead of querying the database
  - Import `config_provider` from `app.core.config_provider`
  - Update `_get_ollama_endpoint` method to use `config_provider`
  - Update `_get_ollama_model` method to use `config_provider`
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6ecad88 and e38c81a.

📒 Files selected for processing (1)
  • app/modules/intelligence/tools/tool_service.py (2 hunks)
🔇 Additional comments (2)
app/modules/intelligence/tools/tool_service.py (2)

39-40: LGTM! Clean and focused imports.

The new imports are correctly placed and specifically target the required functionality.


70-73: Verify async compatibility of the Ollama tool.

Since the run_tool method expects an arun method for async execution, verify that the Ollama class from langchain_ollama supports async operations.
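
One hedged way to check this locally with plain introspection, assuming the import resolves (the class may be `OllamaLLM` depending on the package release):

```python
import inspect

from langchain_ollama import Ollama  # or OllamaLLM, depending on the release

# Report which async entry points exist on the class and whether they are coroutines.
for name in ("arun", "ainvoke", "agenerate"):
    attr = getattr(Ollama, name, None)
    if attr is None:
        print(f"{name}: not defined")
    else:
        kind = "async def" if inspect.iscoroutinefunction(attr) else "sync or wrapped"
        print(f"{name}: {kind}")
```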

Comment on lines +70 to +73
"ollama_tool": Ollama(
base_url=self._get_ollama_endpoint(),
model=self._get_ollama_model(),
),

🛠️ Refactor suggestion

Add error handling and description for the Ollama tool.

Two improvements needed:

  1. Add error handling for missing configuration
  2. Add a description attribute for the tool listing functionality

Consider this implementation:

     "ollama_tool": Ollama(
         base_url=self._get_ollama_endpoint(),
         model=self._get_ollama_model(),
+        description="Local LLM powered by Ollama",
     ),

Also, add error handling:

     "ollama_tool": (
+        lambda: Ollama(
             base_url=self._get_ollama_endpoint(),
             model=self._get_ollama_model(),
+            description="Local LLM powered by Ollama",
+        )
+        if config_provider.get_ollama_config()
+        else None
     )(),

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +76 to +81
def _get_ollama_endpoint(self) -> str:
    return config_provider.get_ollama_config()["endpoint"]

def _get_ollama_model(self) -> str:
    return config_provider.get_ollama_config()["model"]

🛠️ Refactor suggestion

Add validation and error handling to configuration methods.

The methods should validate the configuration values and handle missing or malformed config gracefully.

Consider this implementation:

     def _get_ollama_endpoint(self) -> str:
-        return config_provider.get_ollama_config()["endpoint"]
+        config = config_provider.get_ollama_config()
+        if not config or "endpoint" not in config:
+            raise ValueError("Ollama endpoint configuration is missing")
+        endpoint = config["endpoint"]
+        if not endpoint.startswith(("http://", "https://")):
+            raise ValueError(f"Invalid Ollama endpoint URL format: {endpoint}")
+        return endpoint

     def _get_ollama_model(self) -> str:
-        return config_provider.get_ollama_config()["model"]
+        config = config_provider.get_ollama_config()
+        if not config or "model" not in config:
+            raise ValueError("Ollama model configuration is missing")
+        return config["model"]

@dhirenmathur (Contributor) commented

@vishwamartur taking a look at this today!

@vishwamartur (Author) commented

@dhirenmathur Sir, can you please take this task?

@dhirenmathur (Contributor) left a comment

@vishwamartur the goal of this task is to make sure that the app, i.e. code inference and the existing agents, can utilise local models through Ollama. In your implementation you have created separate agents for Ollama; that is not part of the scope.

1. Update the provider service to take the Ollama endpoint and model name as input (this you have done).
2. Ensure that the end-to-end flow (parsing plus the agent flow) works perfectly with Ollama (please attach screenshots).
@vishwamartur marked this pull request as draft on January 17, 2025 at 15:49