Add support for Ollama #218
base: main
Conversation
Related to potpie-ai#188

Add support for Ollama to enable users to run open source models locally.

* **Provider Service Integration**
  - Add Ollama API integration in `app/modules/intelligence/provider/provider_service.py`
  - Implement method to get Ollama LLM
  - Update `list_available_llms` method to include Ollama
* **Configuration Options**
  - Add configuration options for Ollama endpoint and model selection in `app/core/config_provider.py`
  - Update `ConfigProvider` class to include Ollama settings
* **Agent Factory and Injector Service**
  - Add support for Ollama models in `app/modules/intelligence/agents/agent_factory.py` and implement method to create Ollama agent
  - Add support for Ollama models in `app/modules/intelligence/agents/agent_injector_service.py` and implement method to get Ollama agent
* **Tool Service**
  - Add tools for Ollama model support in `app/modules/intelligence/tools/tool_service.py`
  - Implement methods to interact with Ollama models
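To make the intent concrete, here is a small, hedged sketch of what running a local model through this integration boils down to: two environment variables and the LangChain Ollama wrapper. The helper name is illustrative, the defaults mirror the values used elsewhere in this PR, and it assumes a local Ollama server with the model already pulled (`ollama pull llama2`):

```python
import os

# The PR imports `Ollama` from langchain_ollama; recent versions of that
# package expose the class as `OllamaLLM`, so adjust the import if needed.
from langchain_ollama import OllamaLLM as Ollama


def build_local_llm() -> Ollama:
    """Illustrative helper: build an Ollama-backed LLM from environment variables."""
    endpoint = os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434")
    model = os.getenv("OLLAMA_MODEL", "llama2")
    return Ollama(base_url=endpoint, model=model)


if __name__ == "__main__":
    llm = build_local_llm()
    # Requires a running Ollama server at the configured endpoint.
    print(llm.invoke("Reply with a single short sentence."))
```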
Walkthrough

The pull request introduces support for the Ollama language model provider across multiple components of the application. The changes include enhancements to configuration management, agent creation, provider services, and tool initialization. New attributes for Ollama configuration are added, alongside the integration of Ollama as a new agent and tool. This implementation allows the system to leverage Ollama's capabilities alongside existing language model providers, ensuring consistent integration across various modules.
Sequence Diagram

```mermaid
sequenceDiagram
    participant ConfigProvider
    participant ProviderService
    participant AgentFactory
    participant AgentInjectorService
    participant ToolService

    ConfigProvider->>ProviderService: Provide Ollama config
    ProviderService->>AgentFactory: Create Ollama agent
    AgentFactory->>AgentInjectorService: Initialize Ollama agent
    AgentInjectorService->>ToolService: Initialize Ollama tool
    ToolService-->>AgentInjectorService: Ollama tool ready
```
Actionable comments posted: 1
🧹 Nitpick comments (4)
app/modules/intelligence/agents/agent_factory.py (1)

74-77: New `ollama_agent` creation looks correct

The code properly retrieves `base_url` and `model` from the provider service before instantiating `Ollama`. This maintains consistency with existing agent definitions. Ensure that upstream configuration calls (e.g., `get_ollama_endpoint()` and `get_ollama_model()`) handle missing or invalid environment variables gracefully.

app/modules/intelligence/tools/tool_service.py (1)

69-72: `ollama_tool` initialization

Instantiating `Ollama` in `_initialize_tools` aligns with the approach used elsewhere and keeps the code tidy. Consider adding docstrings or usage instructions for `ollama_tool` to help future contributors.

app/modules/intelligence/provider/provider_service.py (2)

204-209: Refactor suggestion to avoid duplication.

The Ollama initialization logic here is nearly identical to the logic in `get_small_llm`. Consider extracting the repeated initialization into a shared helper method to reduce code duplication and improve maintainability.

```diff
+    def _init_ollama_llm(self):
+        logging.info("Initializing Ollama LLM")
+        ollama_endpoint = os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434")
+        ollama_model = os.getenv("OLLAMA_MODEL", "llama2")
+        return Ollama(base_url=ollama_endpoint, model=ollama_model)

     def get_large_llm(self, agent_type: AgentType):
         ...
         elif preferred_provider == "ollama":
-            logging.info("Initializing Ollama LLM")
-            ollama_endpoint = os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434")
-            ollama_model = os.getenv("OLLAMA_MODEL", "llama2")
-            self.llm = Ollama(base_url=ollama_endpoint, model=ollama_model)
+            self.llm = self._init_ollama_llm()
         ...
```

338-343: Consistent approach for small LLM initialization.

The code block for initializing Ollama in `get_small_llm` is consistent with the logic in `get_large_llm`. The same refactoring advice applies: extracting these lines into a shared method would simplify future updates.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- `app/core/config_provider.py` (1 hunk)
- `app/modules/intelligence/agents/agent_factory.py` (2 hunks)
- `app/modules/intelligence/agents/agent_injector_service.py` (2 hunks)
- `app/modules/intelligence/provider/provider_service.py` (5 hunks)
- `app/modules/intelligence/tools/tool_service.py` (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
`app/modules/intelligence/tools/tool_service.py`

- 76-76: Undefined name `ConfigProvider` (F821)
- 79-79: Undefined name `ConfigProvider` (F821)
🔇 Additional comments (9)
app/modules/intelligence/agents/agent_factory.py (1)

27-27: Import from `langchain_ollama` is appropriate

Good job importing `Ollama` here. This addition is consistent with the new Ollama integration introduced in the PR.

app/modules/intelligence/agents/agent_injector_service.py (2)

31-31: `Ollama` import is consistent

This import mirrors the pattern seen in `agent_factory.py` and neatly ties into the newly added `_initialize_agents` logic.

63-66: `ollama_agent` instantiation is well structured

The agent is clearly initialized with `base_url` and `model` from the provider. Confirm that any potential exceptions (e.g., missing/invalid endpoints) are handled and logged.

app/core/config_provider.py (2)

16-17: Useful defaults for Ollama

Providing default values for `ollama_endpoint` and `ollama_model` helps avoid errors in development environments where these environment variables may be undefined.

25-29: Convenient `get_ollama_config` method

Encapsulating the Ollama config in one method is a good pattern for clarity and adaptation to future changes.
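For illustration, a minimal sketch of what the `ConfigProvider` additions described in these two comments might look like. The attribute names follow the review (`ollama_endpoint`, `ollama_model`) and the dictionary keys follow the `get_ollama_config()` accesses shown later in the diff, but the exact class layout is an assumption, not the repo's code:

```python
import os


class ConfigProvider:
    """Sketch of the Ollama-related configuration surface (illustrative only)."""

    def __init__(self) -> None:
        # Defaults keep local development working when the env vars are unset.
        self.ollama_endpoint = os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434")
        self.ollama_model = os.getenv("OLLAMA_MODEL", "llama2")

    def get_ollama_config(self) -> dict:
        # Keys match the accesses in tool_service.py: ["endpoint"] and ["model"].
        return {"endpoint": self.ollama_endpoint, "model": self.ollama_model}


config_provider = ConfigProvider()
print(config_provider.get_ollama_config())
```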
app/modules/intelligence/tools/tool_service.py (1)

39-39: `Ollama` import lines up with the new tool

This import matches the rest of the PR's Ollama integration strategy, enabling the new `ollama_tool`.

app/modules/intelligence/provider/provider_service.py (3)

48-52: Great addition to the LLM options.

Listing the Ollama provider in the available LLMs ensures visibility and discoverability for users. This is consistent with the approach used for other providers. No issues found.

358-359: Provider identification logic looks correct.

Returning `"Ollama"` when the LLM instance is of type `Ollama` ensures correct provider identification for downstream consumers. No issues found.
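The check being described is essentially an `isinstance` dispatch. A minimal standalone sketch (the function name and the other branches are assumptions, not the repo's code):

```python
from langchain_ollama import OllamaLLM as Ollama  # stands in for the PR's `Ollama` import


def identify_provider(llm: object) -> str:
    """Illustrative provider-name lookup based on the LLM instance type."""
    if isinstance(llm, Ollama):
        return "Ollama"
    # ...other providers (OpenAI, Anthropic, etc.) would be handled the same way
    return "Unknown"


llm = Ollama(base_url="http://localhost:11434", model="llama2")
print(identify_provider(llm))  # -> "Ollama"
```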
10-10: Confirm the version and stability of the imported library.

While the `langchain_ollama` import is crucial for providing Ollama functionality, consider verifying its version and stability before merging. If the library is experimental or unstable, it could introduce unforeseen issues.

✅ Verification successful

The `langchain_ollama` import appears to be part of a stable langchain ecosystem.

The codebase is using a well-established langchain ecosystem with specific version constraints:

- langchain-core==0.2.43
- langchain-community==0.2.19
- langchain==0.2.17

The `langchain_ollama` import is consistently used across multiple core services in the application:

- agent_factory.py
- agent_injector_service.py
- provider_service.py
- tool_service.py

This consistent usage pattern and integration with versioned langchain dependencies suggests that the library is stable and properly integrated into the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Verify the version of langchain_ollama in the repo's dependencies.
# This script searches for a requirements or pyproject file referencing langchain_ollama.
fd -IH "requirements" --exec rg 'langchain_ollama'
fd -IH "pyproject" --exec rg 'langchain_ollama'
```

Length of output: 94

Script:

```bash
#!/bin/bash
# Let's try searching for any dependency management files with different patterns
fd -e txt -e toml -e ini -e cfg -IH "requirements|pyproject|setup|poetry" --exec rg -i 'langchain|ollama'

# Also check for any usage patterns of this library in the codebase
rg -i "from\s+langchain_ollama|import\s+langchain_ollama" -A 3
```

Length of output: 1526
@vishwamartur thanks for your contribution! Could you please go through the CodeRabbit review comments and fix the major ones? Thanks
@dhirenmathur Sure, I will do that.
* **Configuration**: Use `config_provider` to retrieve Ollama configuration instead of querying the database
  - Import `config_provider` from `app.core.config_provider`
  - Update `_get_ollama_endpoint` method to use `config_provider`
  - Update `_get_ollama_model` method to use `config_provider`
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `app/modules/intelligence/tools/tool_service.py` (2 hunks)
🔇 Additional comments (2)
app/modules/intelligence/tools/tool_service.py (2)
39-40: LGTM! Clean and focused imports.

The new imports are correctly placed and specifically target the required functionality.

70-73: Verify async compatibility of the Ollama tool.

Since the `run_tool` method expects an `arun` method for async execution, verify that the `Ollama` class from `langchain_ollama` supports async operations.
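A quick, hedged way to check this locally. The attribute names follow LangChain's standard Runnable interface; whether `arun` actually exists on the class is exactly what is being verified here, so the script only inspects the object rather than asserting behaviour:

```python
import asyncio

from langchain_ollama import OllamaLLM as Ollama  # the PR imports this class as `Ollama`


async def main() -> None:
    llm = Ollama(base_url="http://localhost:11434", model="llama2")
    # LangChain runnables expose `ainvoke` for async calls; `arun` is the older
    # Tool/Chain-style entry point and may not be present on an LLM instance.
    print("has ainvoke:", hasattr(llm, "ainvoke"))
    print("has arun:", hasattr(llm, "arun"))
    # Uncomment to exercise a real async call against a running Ollama server:
    # print(await llm.ainvoke("ping"))


asyncio.run(main())
```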
"ollama_tool": Ollama( | ||
base_url=self._get_ollama_endpoint(), | ||
model=self._get_ollama_model(), | ||
), |
🛠️ Refactor suggestion
Add error handling and description for the Ollama tool.
Two improvements needed:
- Add error handling for missing configuration
- Add a description attribute for the tool listing functionality
Consider this implementation:
"ollama_tool": Ollama(
base_url=self._get_ollama_endpoint(),
model=self._get_ollama_model(),
+ description="Local LLM powered by Ollama",
),
Also, add error handling:
"ollama_tool": (
+ lambda: Ollama(
base_url=self._get_ollama_endpoint(),
model=self._get_ollama_model(),
+ description="Local LLM powered by Ollama",
+ )
+ if config_provider.get_ollama_config()
+ else None
)(),
Committable suggestion skipped: line range outside the PR's diff.
```python
    def _get_ollama_endpoint(self) -> str:
        return config_provider.get_ollama_config()["endpoint"]

    def _get_ollama_model(self) -> str:
        return config_provider.get_ollama_config()["model"]
```
🛠️ Refactor suggestion
Add validation and error handling to configuration methods.
The methods should validate the configuration values and handle missing or malformed config gracefully.
Consider this implementation:
```diff
 def _get_ollama_endpoint(self) -> str:
-    return config_provider.get_ollama_config()["endpoint"]
+    config = config_provider.get_ollama_config()
+    if not config or "endpoint" not in config:
+        raise ValueError("Ollama endpoint configuration is missing")
+    endpoint = config["endpoint"]
+    if not endpoint.startswith(("http://", "https://")):
+        raise ValueError(f"Invalid Ollama endpoint URL format: {endpoint}")
+    return endpoint

 def _get_ollama_model(self) -> str:
-    return config_provider.get_ollama_config()["model"]
+    config = config_provider.get_ollama_config()
+    if not config or "model" not in config:
+        raise ValueError("Ollama model configuration is missing")
+    return config["model"]
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
    def _get_ollama_endpoint(self) -> str:
        config = config_provider.get_ollama_config()
        if not config or "endpoint" not in config:
            raise ValueError("Ollama endpoint configuration is missing")
        endpoint = config["endpoint"]
        if not endpoint.startswith(("http://", "https://")):
            raise ValueError(f"Invalid Ollama endpoint URL format: {endpoint}")
        return endpoint

    def _get_ollama_model(self) -> str:
        config = config_provider.get_ollama_config()
        if not config or "model" not in config:
            raise ValueError("Ollama model configuration is missing")
        return config["model"]
```
@vishwamartur taking a look at this today!
@dhirenmathur Sir, can you please take this task?
@vishwamartur the goal of the task is to make sure that the app, i.e. code inference and the existing agents, can utilise local models through Ollama. In your implementation you have created separate agents for Ollama; that is not part of the scope.

1. Update the provider service to take Ollama and the model name as input (this you have done); a rough sketch of the intended provider-agnostic flow follows below.
2. Ensure that end to end, parsing + agent flow works perfectly with Ollama (please attach screenshots).
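To restate the scope in code terms: existing agents should stay provider-agnostic and simply receive whatever LLM the provider service resolves, which may be an Ollama instance. A rough sketch under that assumption; `resolve_llm` and `build_agent` are placeholders for the repo's `ProviderService.get_large_llm` and agent factory, not their actual signatures:

```python
import os

from langchain_ollama import OllamaLLM as Ollama  # stands in for the PR's `Ollama` import


def resolve_llm(preferred_provider: str):
    """Placeholder for the provider service: map a provider name to an LLM instance."""
    if preferred_provider == "ollama":
        return Ollama(
            base_url=os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434"),
            model=os.getenv("OLLAMA_MODEL", "llama2"),
        )
    raise NotImplementedError(f"provider {preferred_provider!r} not covered in this sketch")


def build_agent(llm):
    """Placeholder for the existing agent factory: it should not care which provider produced `llm`."""
    return {"llm": llm}  # the real factory also wires prompts, tools, memory, etc.


agent = build_agent(resolve_llm("ollama"))
print(type(agent["llm"]).__name__)  # same agent code path, now backed by a local model
```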
Related to #188
Add support for Ollama to enable users to run open source models locally.
**Provider Service Integration**
- Add Ollama API integration in `app/modules/intelligence/provider/provider_service.py`
- Update the `list_available_llms` method to include Ollama

**Configuration Options**
- Add configuration options for the Ollama endpoint and model selection in `app/core/config_provider.py`
- Update the `ConfigProvider` class to include Ollama settings

**Agent Factory and Injector Service**
- Add support for Ollama models in `app/modules/intelligence/agents/agent_factory.py`
- Add support for Ollama models in `app/modules/intelligence/agents/agent_injector_service.py`

**Tool Service**
- Add tools for Ollama model support in `app/modules/intelligence/tools/tool_service.py`