feat: add ollama doc & update Ollama entrance URL #133

Merged 2 commits on Mar 21, 2025

4 changes: 4 additions & 0 deletions docs/.vitepress/en.ts
@@ -121,6 +121,10 @@ const side = {
text: "ComfyUI",
link: "/manual/use-cases/comfyui",
},
{
text: "Ollama",
link: "/manual/use-cases/ollama",
},
{
text: "Open WebUI",
link: "/manual/use-cases/openwebui",
4 changes: 4 additions & 0 deletions docs/.vitepress/zh.ts
@@ -121,6 +121,10 @@ const side = {
text: "ComfyUI",
link: "/zh/manual/use-cases/comfyui",
},
{
text: "Ollama 下载开源模型",
link: "/zh/manual/use-cases/ollama",
},
{
text: "Open WebUI",
link: "/zh/manual/use-cases/openwebui",
12 changes: 4 additions & 8 deletions docs/manual/use-cases/dify.md
@@ -7,8 +7,8 @@ Dify is an AI application development platform. It's one of the key open-source

## Before you begin
To use local AI models on Dify, ensure you have:
- Ollama installed and running in your Olares environment
- Open WebUI installed with your preferred language models downloaded
- [Ollama installed](ollama.md) and running in your Olares environment.
- [Open WebUI installed](openwebui.md) with your preferred language models downloaded.
:::tip
For optimal performance, consider using lightweight yet powerful models like `gemma2` or `qwen`, which offer a good balance between speed and capability.
:::
@@ -30,18 +30,14 @@ Install Dify from Market based on your role:

## Add Ollama as model provider

1. Set up the Ollama access entrance:

a. Navigate to **Settings** > **Application** > **Ollama** > **Entrances**, and set the authentication level for Ollama to **Internal**. This configuration allows other applications to access Ollama services within the local network without authentication.
1. Navigate to **Settings** > **Application** > **Ollama** > **Entrances**, and set the authentication level for Ollama to **Internal**. This configuration allows other applications to access Ollama services within the local network without authentication.

![Ollama entrance](/images/manual/use-cases/dify-ollama-entrance.png#bordered)

b. From the **Entrances** page, enter the **Set up endpoint** page to retrieve the default route ID for Ollama(`39975b9a`). Now you get the local access address for Ollama: `https://39975b9a.local.{your username}.olares.com`, for example, `https://39975b9a.local.kevin112.olares.com`.

2. In Dify, navigate to **Settings** > **Model Provider**.
3. Select Ollama as the model provider, with the following configurations:
- **Model Name**: Enter the model name. For example: `gemma2`.
- **Base URL**: Enter Ollama's local address you get in step 1, for example, `https://39975b9a.local.kevin112.olares.com`.
- **Base URL**: Enter Ollama's local address: `https://39975b9a1.local.{username}.olares.com`. Replace `{username}` with the Olares Admin's local name. For example, `https://39975b9a1.local.marvin123.olares.com`.

![Add gemma2](/images/manual/use-cases/dify-add-gemma2.png#bordered){width=70%}
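
If Dify cannot connect to the model, you can first check that the Ollama endpoint is reachable from your local network. Below is a minimal check against Ollama's REST API, assuming the entrance is set to **Internal** and reusing the example route ID and username above:

```bash
# List the models Ollama is serving; a JSON response confirms the endpoint is reachable
curl https://39975b9a1.local.marvin123.olares.com/api/tags
```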

1 change: 1 addition & 0 deletions docs/manual/use-cases/index.md
@@ -11,5 +11,6 @@ While other guides tell you "what" and "how," these use cases reveal the full po
{ title: 'Perplexica', link: './perplexica.html', tags: ['ai']},
{ title: 'Dify', link: './dify.html', tags: ['ai']},
{ title: 'Hubble', link: 'https://blog.olares.xyz/running-farcaster-hubble-on-your-home-cloud/', tags: ['social network']},
{ title: 'Ollama', link: './ollama.html', tags: ['ai']},
]"
/>
82 changes: 82 additions & 0 deletions docs/manual/use-cases/ollama.md
@@ -0,0 +1,82 @@
---
outline: [2, 3]
description: Learn how to download and manage AI models locally using Ollama CLI within the Olares environment.
---

# Download and run AI models locally via Ollama
Ollama is a lightweight platform that allows you to run open-source AI models like `deepseek-r1` and `gemma3` directly on your machine. Within Olares, you can integrate Ollama with graphical interfaces like Open WebUI to add more features and simplify interactions.

This guide will show you how to set up and use Ollama CLI on Olares.

## Before you begin
Before you start, ensure that:
- You have Olares admin privileges.

## Install Ollama

Install Ollama directly from the Market.

Once the installation is complete, you can access the Ollama terminal from the Launchpad.

![Ollama](/images/manual/use-cases/ollama.png#bordered)

## Ollama CLI
Ollama CLI allows you to manage and interact with AI models directly. Below are the key commands and their usage:

### Download model
:::tip Check Ollama library
If you are unsure which model to download, check the [Ollama Library](https://ollama.com/library) to explore available models.
:::
To download a model, use the following command:
```bash
ollama pull [model]
```
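
For example, to pull the `gemma2` model referenced in the Dify guide (an example choice; any model name from the Ollama Library works the same way):

```bash
# Pull the gemma2 model (example; replace with any model from https://ollama.com/library)
ollama pull gemma2
```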

### Run model
:::tip
If the specified model has not been downloaded yet, the `ollama run` command will automatically download it before running.
:::

To run a model, use the following command:
```bash
ollama run [model]
```

After running the command, you can enter queries directly into the CLI, and the model will generate responses.

When you're finished interacting with the model, type:
```bash
/bye
```
This will exit the session and return you to the standard terminal interface.
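
You can also pass a prompt directly on the command line to get a single answer without starting an interactive session. A minimal sketch, assuming `gemma2` is installed (if not, it is downloaded first):

```bash
# Ask one question and print the response, then return to the shell
ollama run gemma2 "Summarize what Ollama does in one sentence."
```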

### Stop model
To stop a model that is currently running, use the following command:
```bash
ollama stop [model]
```

### List models
To view all models installed on your system, use:
```bash
ollama list
```

### Remove a model
If you need to delete a model, you can use the following command:
```bash
ollama rm [model]
```

### Show information for a model
To display detailed information about a model, use:
```bash
ollama show [model]
```

### List running models
To see all currently running models, use:
```bash
ollama ps
```

## Learn more
- [Learn how to run Ollama models with Open WebUI](openwebui.md)
Binary file added docs/public/images/manual/use-cases/ollama.png
12 changes: 4 additions & 8 deletions docs/zh/manual/use-cases/dify.md
@@ -7,8 +7,8 @@ Dify 是一个 AI 应用开发平台。它是 Olares 集成的关键开源项目

## 开始之前
要使用本地 AI 模型,请确保你的环境中已配置以下内容:
- Olares 环境中已安装并运行 Ollama。
- 已安装 Open WebUI,并下载了你偏好的语言模型。
- Olares 环境中已安装并运行 [Ollama](ollama.md)
- 已安装 [Open WebUI](openwebui.md),并下载了你偏好的语言模型。
:::tip 提示
建议使用 `gemma2` 或 `qwen` 等轻量但功能强大的模型,可在速度和性能间取得良好平衡。
:::
@@ -30,17 +30,13 @@ Dify 是一个 AI 应用开发平台。它是 Olares 集成的关键开源项目

## 添加 Ollama 作为模型提供商

1. 配置 Ollama 访问入口。

a. 进入**设置** > **应用** > **Ollama** > **入口**,设置 Ollama 的认证级别为“内部”。该设置允许其他应用在本地网络环境下可无需认证即可访问 Ollama 服务。
1. 进入**设置** > **应用** > **Ollama** > **入口**,设置 Ollama 的认证级别为“内部”。该设置允许其他应用在本地网络环境下无需认证即可访问 Ollama 服务。

![Ollama entrance](/images/zh/manual/use-cases/dify-ollama-entrance.png#bordered)

b. 点击**设置端点**页面,查看 Ollama 的默认路由 ID(`39975b9a`)。这样,我们便得到了 Ollama 的本地访问地址:`https://39975b9a.local.{你的本地名称}.olares.cn`, 如 `https://39975b9a.local.marvin123.olares.cn`。

2. 在 Dify 的模型提供商配置页面,选择 Ollama 作为模型提供商,并进行以下配置:
- **模型名称**:填写模型名称,例如:`gemma2`。
- **基础 URL**:填入上一步获取的 Ollama 本地地址,如 `https://39975b9a.local.marvin123.olares.cn`。
- **基础 URL**:填入 Ollama 本地地址:`https://39975b9a1.local.{username}.olares.cn`。将 `{username}` 替换为 Olares 管理员的用户名。例如:`https://39975b9a1.local.marvin123.olares.cn`。

![配置 Ollama](/images/zh/manual/use-cases/dify-add-gemma2.png#bordered){width=70%}
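
如果 Dify 无法连接到模型,可以先确认 Ollama 入口在本地网络内可以访问。以下是使用 Ollama REST API 的简单检查示例(假设入口已设置为“内部”,并沿用上文示例中的路由 ID 和用户名):

```bash
# 列出 Ollama 提供的模型;返回 JSON 即表示入口可访问
curl https://39975b9a1.local.marvin123.olares.cn/api/tags
```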

1 change: 1 addition & 0 deletions docs/zh/manual/use-cases/index.md
@@ -10,5 +10,6 @@ description: 了解 Olares 的多种实际应用场景,学习如何将其功
{ title: 'Open WebUI', link: './openwebui.html', tags: ['AI'] },
{ title: 'Perplexica', link: './perplexica.html', tags: ['AI']},
{ title: 'Dify', link: './dify.html', tags: ['AI']},
{ title: 'Ollama', link: './ollama.html', tags: ['AI']},
]"
/>
82 changes: 82 additions & 0 deletions docs/zh/manual/use-cases/ollama.md
@@ -0,0 +1,82 @@
---
outline: [2, 3]
description: 了解如何在 Olares 环境中使用 Ollama CLI 在本地下载和管理 AI 模型。
---

# 通过 Ollama 在本地运行 AI 模型
Ollama 是一个轻量级平台,可以让你在本地机器上直接运行 `deepseek-r1` 和 `gemma3` 等开源 AI 模型。如果你更偏向于使用图形化界面,也可以在 Open WebUI 中管理 Ollama 模型,以添加更多功能并简化交互。

本文档将介绍如何在 Olares 上设置和使用 Ollama CLI。

## 开始之前
请确保满足以下条件:
- 当前登录账号为 Olares 管理员。

## 安装 Ollama

直接从应用市场安装 Ollama。

安装完成后,可以从启动台访问 Ollama 终端。

![Ollama](/images/manual/use-cases/ollama.png#bordered)

## Ollama CLI
Ollama CLI 让你可以直接管理和使用 AI 模型。以下是主要命令及其用法:

### 下载模型
:::tip 查看 Ollama 模型库
如果不确定下载哪个模型,可以在 [Ollama Library](https://ollama.com/library) 浏览可用模型。
:::
使用以下命令下载模型:
```bash
ollama pull [model]
```
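
例如,下载 Dify 文档中提到的 `gemma2` 模型(仅为示例,可替换为 Ollama Library 中的任意模型名):

```bash
# 下载 gemma2 模型(示例;可替换为 https://ollama.com/library 中的任意模型)
ollama pull gemma2
```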

### 运行模型
:::tip
如果尚未下载指定的模型,`ollama run` 命令会在运行前自动下载该模型。
:::

使用以下命令运行模型:
```bash
ollama run [model]
```

运行命令后,直接在 CLI 中输入问题,模型会生成回答。

完成交互后,输入:
```bash
/bye
```
这将退出会话并返回标准终端界面。
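
也可以在命令行中直接附带提示词,获取一次性回答而不进入交互会话。以下为一个简单示例(假设已安装 `gemma2`,否则会先自动下载):

```bash
# 提出单个问题并输出回答,然后返回终端
ollama run gemma2 "用一句话总结 Ollama 的作用。"
```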

### 停止模型
要停止当前运行的模型,使用以下命令:
```bash
ollama stop [model]
```

### 列出模型
要查看系统中已安装的所有模型,使用:
```bash
ollama list
```

### 删除模型
如果需要删除模型,可以使用以下命令:
```bash
ollama rm [model]
```

### 显示模型信息
要显示模型的详细信息,使用:
```bash
ollama show [model]
```

### 列出运行中的模型
要查看所有当前运行的模型,使用:
```bash
ollama ps
```

## 了解更多
- [了解如何通过 Open WebUI 运行 Ollama 模型](openwebui.md)