This repository has been archived by the owner on Oct 14, 2024. It is now read-only.

Commit

Merge pull request #259 from janhq/pena-patch-1
docs: Update the images to reflect the new UI
irfanpena authored Jun 6, 2024
2 parents 982706e + eb4a4dc commit f9c7f74
Showing 102 changed files with 394 additions and 435 deletions.
4 changes: 0 additions & 4 deletions src/components/FooterMenu/index.tsx
@@ -52,10 +52,6 @@ const menus = [
menu: 'Documentation',
path: '/docs',
},
{
menu: 'API Reference',
path: '/api-reference',
},
],
},
{
9 changes: 0 additions & 9 deletions src/pages/_meta.json
@@ -21,15 +21,6 @@
"title": "Integrations",
"display": "hidden"
},
"api-reference": {
"type": "page",
"title": "API Reference",
"href": "/api-reference",
"theme": {
"layout": "raw",
"footer": false
}
},
"changelog": {
"type": "page",
"title": "Changelog",
22 changes: 0 additions & 22 deletions src/pages/api-reference.mdx

This file was deleted.

Binary file added src/pages/docs/_assets/Anthropic-1.gif
Binary file added src/pages/docs/_assets/Anthropic-2.gif
Binary file added src/pages/docs/_assets/Cohere-1.gif
Binary file added src/pages/docs/_assets/Cohere-2.gif
Binary file added src/pages/docs/_assets/Groq-1.gif
Binary file added src/pages/docs/_assets/Groq-2.gif
Binary file added src/pages/docs/_assets/LM-Studio-v1.gif
Binary file added src/pages/docs/_assets/LM-Studio-v2.gif
Binary file added src/pages/docs/_assets/LM-Studio-v3.gif
Binary file added src/pages/docs/_assets/Martian-1.gif
Binary file added src/pages/docs/_assets/Martian-2.gif
Binary file added src/pages/docs/_assets/Mistral-1.gif
Binary file added src/pages/docs/_assets/Mistral-2.gif
Binary file added src/pages/docs/_assets/Ollama-1.gif
Binary file added src/pages/docs/_assets/Ollama-2.gif
Binary file added src/pages/docs/_assets/Ollama-3.gif
Binary file added src/pages/docs/_assets/OpenAi-1.gif
Binary file added src/pages/docs/_assets/OpenAi-2.gif
Binary file added src/pages/docs/_assets/OpenRouter-1.gif
Binary file added src/pages/docs/_assets/OpenRouter-2.gif
Binary file added src/pages/docs/_assets/advance-set.png
Binary file added src/pages/docs/_assets/advance-settings2.png
Binary file added src/pages/docs/_assets/appearance.png
Binary file added src/pages/docs/_assets/asst.gif
Binary file added src/pages/docs/_assets/browser1.png
Binary file added src/pages/docs/_assets/browser2.png
Binary file added src/pages/docs/_assets/chat.gif
Binary file modified src/pages/docs/_assets/clean.png
Binary file added src/pages/docs/_assets/clear-logs.png
Binary file removed src/pages/docs/_assets/data-folder.gif
Binary file added src/pages/docs/_assets/data-folder.png
Binary file added src/pages/docs/_assets/default.gif
Binary file modified src/pages/docs/_assets/delete-threads.png
Binary file modified src/pages/docs/_assets/delete.png
Binary file added src/pages/docs/_assets/download-button.png
Binary file added src/pages/docs/_assets/download-button2.png
Binary file added src/pages/docs/_assets/download-button3.png
Binary file added src/pages/docs/_assets/download-icon.png
Binary file added src/pages/docs/_assets/download-model2.gif
Binary file added src/pages/docs/_assets/exp-mode.png
Binary file added src/pages/docs/_assets/extensions-page2.png
Binary file added src/pages/docs/_assets/gpu-accel.png
Binary file added src/pages/docs/_assets/gpu2.gif
Binary file modified src/pages/docs/_assets/history.png
Binary file added src/pages/docs/_assets/http.png
Binary file added src/pages/docs/_assets/hub.png
Binary file added src/pages/docs/_assets/import.png
Binary file added src/pages/docs/_assets/import2.png
Binary file added src/pages/docs/_assets/inf.gif
Binary file added src/pages/docs/_assets/install-ext.png
Binary file added src/pages/docs/_assets/install-tensor.png
Binary file modified src/pages/docs/_assets/local-api1.png
Binary file modified src/pages/docs/_assets/local-api2.png
Binary file modified src/pages/docs/_assets/local-api3.png
Binary file modified src/pages/docs/_assets/local-api4.png
Binary file added src/pages/docs/_assets/local-api5.png
Binary file added src/pages/docs/_assets/model-parameters.png
Binary file added src/pages/docs/_assets/model-tab.png
Binary file modified src/pages/docs/_assets/mymodels.png
Binary file added src/pages/docs/_assets/reset-jan.png
Binary file added src/pages/docs/_assets/retrieval1.png
Binary file added src/pages/docs/_assets/retrieval2.png
Binary file added src/pages/docs/_assets/scheme.png
Binary file added src/pages/docs/_assets/search-bar.png
Binary file added src/pages/docs/_assets/server-openai2.gif
Binary file modified src/pages/docs/_assets/settings.png
Binary file added src/pages/docs/_assets/shortcut.png
Binary file added src/pages/docs/_assets/ssl.png
Binary file added src/pages/docs/_assets/system-mili2.png
Binary file added src/pages/docs/_assets/system-monitor2.png
Binary file added src/pages/docs/_assets/system-slider2.png
Binary file added src/pages/docs/_assets/tensor.png
Binary file added src/pages/docs/_assets/theme.png
Binary file added src/pages/docs/_assets/title.png
Binary file added src/pages/docs/_assets/tools.png
Binary file added src/pages/docs/_assets/turn-off.png
5 changes: 4 additions & 1 deletion src/pages/docs/_meta.json
@@ -19,7 +19,10 @@
"type": "separator"
},
"models": "Models",
"assistants": "Assistants",
"assistants": {
"display": "hidden",
"title": "Assistants"
},
"tools": "Tools",
"threads": "Threads",
"settings": "Settings",
15 changes: 11 additions & 4 deletions src/pages/docs/built-in/tensorrt-llm.mdx
@@ -50,10 +50,17 @@ This guide walks you through installing Jan's official [TensorRT-LLM Extension](

### Step 1: Install TensorRT-Extension

1. Go to **Settings** > **Extensions**.
2. Select the TensorRT-LLM Extension and click the **Install** button.
1. Click the **Gear Icon (⚙️)** on the bottom left of your screen.
<br/>
![Install Extension](../_assets/install-tensor.gif)
![Settings](../_assets/settings.png)
<br/>
2. Select **TensorRT-LLM** under the **Model Provider** section.
<br/>
![Click Tensor](../_assets/tensor.png)
<br/>
3. Click **Install** to install the required dependencies to use TensorRT-LLM.
<br/>
![Install Extension](../_assets/install-tensor.png)
<br/>
4. Check that the files have been downloaded correctly.

@@ -91,7 +98,7 @@ We offer a handful of precompiled models for Ampere and Ada cards that you can i
Please see [here](/docs/models/model-parameters) for more detailed model parameters.
</Callout>
<br/>
![Configure Model](../_assets/set-tensor.gif)
![Specific Conversation](../_assets/model-parameters.png)

</Steps>

152 changes: 101 additions & 51 deletions src/pages/docs/extensions.mdx

Large diffs are not rendered by default.

50 changes: 25 additions & 25 deletions src/pages/docs/installing-extension.mdx
@@ -29,36 +29,36 @@ Here are the steps to install a custom extension:
Jan only accepts the `.tgz` file format for installing a custom extension.
</Callout>

1. Navigate to **Settings** > **Extensions**.
2. Click Select under **Manual Installation**.
1. Click the **Gear Icon (⚙️)** on the bottom left of your screen.
<br/>
![Settings](./_assets/settings.png)
<br/>
2. Click the **Extensions** button.
<br/>
![Extensions](./_assets/extensions-page2.png)
<br/>
3. Select **Install Extension** in the top right corner.
<br/>
![Install Extension](./_assets/install-ext.png)
<br/>
4. Select a `.tgz` extension file.
5. Restart the Jan application.
6. The `~/jan/extensions/extensions.json` file will then be updated automatically; a sketch of a possible entry is shown below.
<br/>
![Install Extension](./_assets/install-ext.gif)
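For reference, an entry in `~/jan/extensions/extensions.json` might look roughly like the sketch below. Only the `_active` flag is mentioned in these docs; the extension name is just an example, and any other fields you find in your own file should be left as Jan wrote them.

```json
{
  "@janhq/tensorrt-llm-extension": {
    "_active": true
  }
}
```

Setting `_active` to `false` and restarting the app is the file-level way to disable an extension, which the slider described below replaces.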

## Disable an Extension
## Turn Off an Extension

To disable the extension, follow the steps below:
To turn off the extension, follow the steps below:

1. Navigate to the **Settings** > **Advanced Settings**.
2. On the **Jan Data Folder** click the **folder icon (📂)** to access the data folder.
3. Navigate to the `~/jan/extensions` folder.
4. Open the `extensions.json` and change the `_active` value of the TensorRT-LLM to `false`
5. Restart the app to see that the TensorRT-LLM settings page has been removed.
1. Click the **Gear Icon (⚙️)** on the bottom left of your screen.
<br/>
![Disable Extension](./_assets/disable-tensor.gif)


## Uninstall an Extension

To uninstall the extension, follow the steps below:

1. Quit the app.
2. Navigate to the **Settings** > **Advanced Settings**.
3. On the **Jan Data Folder** click the **folder icon (📂)** to access the data folder.
4. Navigate to the `~/jan/extensions/@janhq` folder.
5. Delete the **tensorrt-llm-extension** folder.
4. Reopen the app.
![Settings](./_assets/settings.png)
<br/>
2. Click the **Extensions** button.
<br/>
![Extensions](./_assets/extensions-page2.png)
<br/>
3. Click the slider button to turn off the extension.
<br/>
![Extensions](./_assets/turn-off.png)
<br/>
![Delete Extension](./_assets/delete-tensor.gif)
4. Restart the app to see that the extension has been disabled.
6 changes: 3 additions & 3 deletions src/pages/docs/local-api.mdx
@@ -105,12 +105,12 @@ To start the local server, follow the steps below:
- Replace `$PORT` with your server port number.
</Callout>
<br/>
![Local API Server](./_assets/set-url.gif)
![Local API Server](./_assets/local-api4.png)

### Step 3: Start Chatting with the Model
1. Go to the **Threads** tab.
2. Create a new chat.
3. Select **Remote** in the Model dropdown menu and choose the **Local Model** name.
3. Select the **Model** tab, then select the **local test** model under the **OpenAI** section.
4. Chat with the model.
![Local API Server](./_assets/set-model.gif)
![Local API Server](./_assets/local-api5.png)
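Once the server is running, you can also call it directly from code. The snippet below is a hedged sketch, not part of the official docs: it assumes the local API server exposes an OpenAI-compatible `/v1/chat/completions` route on `localhost`, and the port and model ID are placeholders you must replace with your own values.

```ts
// Hypothetical example: adjust the port and model ID to match your setup.
const PORT = 1337; // replace with the port you configured for the local server
const MODEL_ID = "your-model-id"; // replace with the model you started

const res = await fetch(`http://localhost:${PORT}/v1/chat/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: MODEL_ID,
    messages: [{ role: "user", content: "Hello from the local API server!" }],
  }),
});

console.log(await res.json());
```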
</Steps>
13 changes: 7 additions & 6 deletions src/pages/docs/local-models/lmstudio.mdx
@@ -41,10 +41,10 @@ To integrate LM Studio with Jan, follow the steps below:
2. Select your desired model.
3. Start the server after configuring the port and options.
4. Navigate back to Jan.
5. Navigate to the **Settings** > **Extensions**.
6. In the **OpenAI Inference Engine** section, add the full web address of the LM Studio server.
5. Navigate to the **Settings** > **Model Provider**.
6. In the **OpenAI** section, add the full web address of the LM Studio server.
<br/>
![Server Setup](../_assets/server-phi.gif)
![Server Setup](../_assets/LM-Studio-v1.gif)
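Before adding the address in Jan, you can optionally confirm that the LM Studio server is reachable. The sketch below is an assumption-based illustration: it presumes LM Studio serves its OpenAI-compatible API on the default port `1234` and offers a `/v1/models` listing route; adjust the port if you changed it.

```ts
// Hypothetical connectivity check for the LM Studio server.
const res = await fetch("http://localhost:1234/v1/models");

if (!res.ok) {
  throw new Error(`LM Studio server responded with status ${res.status}`);
}

console.log(await res.json()); // should list the models the server exposes
```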

<Callout type="info">
- Replace `(port)` with your chosen port number. The default is 1234.
@@ -58,15 +58,16 @@ To integrate LM Studio with Jan, follow the steps below:
2. We will use the `phi-2` model in this example. Insert the `https://huggingface.co/TheBloke/phi-2-GGUF` link into the search bar.
3. Select and download the model you want to use.
<br/>
![Download Model](../_assets/download-phi.gif)
![Download Model](../_assets/LM-Studio-v2.gif)


### Step 3: Start the Model

1. Proceed to the **Threads** tab.
2. Select the `phi-2` model and configure the model parameters.
2. Click the **Model** tab.
3. Select the `phi-2` model and configure the model parameters.
4. Start chatting with the model.
<br/>
![Start Model](../_assets/phi.gif)
![Start Model](../_assets/LM-Studio-v3.gif)

</Steps>
15 changes: 8 additions & 7 deletions src/pages/docs/local-models/ollama.mdx
@@ -38,10 +38,10 @@ To integrate Ollama with Jan, follow the steps below:
### Step 1: Server Setup

According to the [Ollama documentation on OpenAI compatibility](https://github.com/ollama/ollama/blob/main/docs/openai.md), you can connect to the Ollama server using the web address `http://localhost:11434/v1/chat/completions`. To do this, follow the steps below:
1. Navigate to the **Settings** > **Extensions**.
2. In the **OpenAI Inference Engine** section, add the full web address of the Ollama server.
1. Navigate to the **Settings** > **Model Providers**.
2. In the **OpenAI** section, add the full web address of the Ollama server.
<br/>
![Server Setup](../_assets/server-llama2.gif)
![Server Setup](../_assets/Ollama-1.gif)
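If you want to sanity-check the Ollama endpoint outside of Jan first, a minimal request might look like the sketch below. It assumes Ollama is running locally on its default port `11434`, and the `llama2` tag is only an example; use a model you have already pulled with `ollama pull`.

```ts
// Hypothetical request to Ollama's OpenAI-compatible endpoint (see the Ollama docs linked above).
const res = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama2", // example tag; replace with a model you have pulled
    messages: [{ role: "user", content: "Say hello in one sentence." }],
  }),
});

console.log(await res.json());
```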


<Callout type="info">
@@ -53,14 +53,15 @@ According to the [Ollama documentation on OpenAI compatibility](https://github.c
1. Navigate to the **Hub**.
2. Download the Ollama model, for example, `Llama 2 Chat 7B Q4`.
<br/>
![Download Model](../_assets/download-llama2.gif)
![Download Model](../_assets/Ollama-2.gif)

### Step 3: Start the Model

1. Navigate to the **Threads** tab.
2. Select the `Llama 2 Chat 7B Q4` model and configure the model parameters.
3. Start chatting with the model.
2. Click the **Model** tab.
3. Select the `Llama 2 Chat 7B Q4` model and configure the model parameters.
4. Start chatting with the model.
<br/>
![Start Model](../_assets/llama2.gif)
![Start Model](../_assets/Ollama-3.gif)

</Steps>
48 changes: 40 additions & 8 deletions src/pages/docs/models/manage-models.mdx
@@ -37,10 +37,13 @@ Jan Hub provides three convenient methods to access machine learning models. Her
The Recommended List is a great starting point if you're looking for popular and pre-configured models that work well and quickly on most computers.

1. Open the Jan app and navigate to the Hub.
<br/>
![Jan Hub](../_assets/hub.png)
<br/>
2. Select models, clicking the `v` dropdown for more information. Models with the `Recommended` label will likely run faster on your computer.
3. Click **Download** to download the model.
<br/>
![Download Model](../_assets/hub.gif)
![Download Model](../_assets/download-button.png)

#### 2. Download with HuggingFace Model's ID or URL
If you need a specific model from [Hugging Face](https://huggingface.co/models), Jan Hub lets you download it directly using the model’s ID or URL.
@@ -51,11 +54,17 @@
2. Select the model you want to use.
3. Copy the Model's ID or URL, for example: `MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF` or `https://huggingface.co/MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF`.
4. Return to the Jan app and click on the Hub tab.
<br/>
![Jan Hub](../_assets/hub.png)
<br/>
5. Paste the **URL** or the **model ID** you have copied into the search bar.
<br/>
![Search Bar](../_assets/search-bar.png)
<br/>
6. The app will show all available versions of the model.
7. Click **Download** to download the model.
<br/>
![Import Model](../_assets/import-hf.gif)
![Download Model](../_assets/download-button2.png)
<br/>
#### 3. Download with Deep Link
You can also use Jan's deep link feature to download a specific model from [Hugging Face](https://huggingface.co/models). The deep link format is: `jan://models/huggingface/<model's ID>`.
Expand All @@ -70,24 +79,41 @@ You will need to download such models manually.
2. Select the model you want to use.
3. Copy the Model's ID or URL, for example: `TheBloke/Magicoder-S-DS-6.7B-GGUF`.
4. Enter the deep link URL with your chosen model's ID in your browser. For example: `jan://models/huggingface/TheBloke/Magicoder-S-DS-6.7B-GGUF`
<br/>
![Paste the URL](../_assets/browser1.png)
<br/>
5. A prompt will appear, click **Open** to open the Jan app.
<br/>
![Click Open](../_assets/browser2.png)
<br/>
6. The app will show all available versions of the model.
7. Click **Download** to download the model.
<br/>
![Import Model](../_assets/deeplink.gif)
![Download Model](../_assets/download-button3.png)
<br/>
### Import or Symlink Local Models

You can also point to existing model binary files on your local filesystem.
This is the easiest and most space-efficient way if you have already used other local AI applications.

1. Navigate to the Hub.
<br/>
![Jan Hub](../_assets/hub.png)
<br/>
2. Click on `Import Model` at the top.
3. Select to import using `.GGUF` file or a folder.
3. Select the model or the folder containing multiple models.
4. Optionally, check the box to symlink the model files instead of copying them over the Jan Data Folder. This saves disk space.
<br/>
![Import Folder](../_assets/import-folder.gif)
![Import Model](../_assets/import.png)
<br/>
3. Click the download icon button.
<br/>
![Download Icon](../_assets/download-icon.png)
<br/>
4. Select to import using `.GGUF` file or a folder.
<br/>
![Import Model](../_assets/import2.png)
<br/>
5. Select the model or the folder containing multiple models.
6. Optionally, check the box to symlink the model files instead of copying them over the Jan Data Folder. This saves disk space.

<Callout type="warning">
Windows users should drag and drop the model file, as **Click to Upload** might not show the model files in Folder Preview.
Expand All @@ -97,10 +123,16 @@ Windows users should drag and drop the model file, as **Click to Upload** might
You can also add a specific model that is not available within the **Hub** section by following the steps below:
1. Open the Jan app.
2. Click the **gear icon (⚙️)** on the bottom left of your screen.
<br/>
![Settings](../_assets/settings.png)
<br/>
3. Under the **Settings screen**, click **Advanced Settings**.
<br/>
![Settings](../_assets/advance-set.png)
<br/>
4. Open the **Jan Data folder**.
<br/>
![Jan Data Folder](../_assets/data-folder.gif)
![Jan Data Folder](../_assets/data-folder.png)
<br/>
5. Head to the `~/jan/models/` folder.
6. Make a new model folder and put a file named `model.json` in it; a minimal sketch is shown below.
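A minimal `model.json` might look something like the sketch below. Treat it as an illustration rather than a canonical schema: the field names and values shown here (`sources`, `settings`, `parameters`, `engine`, and so on) are assumptions based on typical Jan model configurations and can differ between versions, so copying the `model.json` from an existing model folder is the safer starting point.

```json
{
  "sources": [
    {
      "filename": "my-model.Q4_K_M.gguf",
      "url": "https://huggingface.co/your-org/your-model/resolve/main/my-model.Q4_K_M.gguf"
    }
  ],
  "id": "my-model",
  "name": "My Model",
  "version": "1.0",
  "settings": {
    "ctx_len": 4096,
    "prompt_template": "<|user|>\n{prompt}\n<|assistant|>"
  },
  "parameters": {
    "temperature": 0.7,
    "max_tokens": 2048
  },
  "engine": "nitro"
}
```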
28 changes: 9 additions & 19 deletions src/pages/docs/models/model-parameters.mdx
@@ -54,25 +54,15 @@ By default, Jan sets the **Context Length** to the maximum supported by your mod
</Callout>

## Customize the Model Settings
Adjust model settings for a specific conversation or across all conversations:
Adjust model settings for a specific conversation:

### A Specific Conversation
To customize model settings for a specific conversation only:

1. Create a new thread.
2. Expand the right panel.
3. Change settings under the `model` dropdown.
1. Navigate to a **thread**.
2. Click the **Model** tab.
<br/>
![Specific Conversation](../_assets/specific-model.gif)


### All Conversations
To customize default model settings for all conversations:

1. Open any thread.
2. Select the three dots next to the `model` dropdown.
3. Select `Edit global defaults for [model]`.
4. Edit the default settings directly in the `model.json`.
5. Save the file and refresh the app.
![Specific Conversation](../_assets/model-tab.png)
3. You can customize the following parameters:
- Inference parameters
- Model parameters
- Engine parameters
<br/>
![Customize model settings for all conversations](../_assets/modelparam.gif)
![Specific Conversation](../_assets/model-parameters.png)

0 comments on commit f9c7f74
