# Synochatgpt

Inspired by synochat, ChatGPT, and Ollama.

The goal is to run an LLM 100% locally and integrate it as a chatbot with Synology Chat.

## Usage

Install Ollama on your Mac, pull the `llama3:8b` model, and start the server:

```sh
ollama pull llama3:8b
ollama serve
```
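Once the server is running, the app can talk to it over Ollama's local HTTP API (by default on port 11434). A minimal sketch, assuming the standard `/api/generate` endpoint; the function and constant names here are illustrative, not necessarily those used in `synochatgpt.py`:

```python
import json
import urllib.request

# Default endpoint of a locally running `ollama serve`.
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"


def build_request(prompt: str, model: str = "llama3:8b") -> dict:
    # Minimal request body for /api/generate; stream=False asks for
    # a single JSON reply instead of a streamed response.
    return {"model": model, "prompt": prompt, "stream": False}


def ask_ollama(prompt: str) -> str:
    # Send the prompt to the local Ollama server and return the reply text.
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```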

The app also needs your Synology Chat bot's token and incoming URL (host). Set them as environment variables before starting:

```sh
export SYNOLOGY_TOKEN='...'
export SYNOLOGY_INCOMING_URL='...'
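At startup the app can validate that both variables are set and fail fast with a clear message otherwise. A sketch of that check (the helper name is hypothetical):

```python
import os


def load_synology_config() -> tuple[str, str]:
    # Read the bot credentials from the environment; raise with a
    # descriptive message if either variable is missing or empty.
    token = os.environ.get("SYNOLOGY_TOKEN")
    url = os.environ.get("SYNOLOGY_INCOMING_URL")
    missing = [
        name
        for name, value in [("SYNOLOGY_TOKEN", token), ("SYNOLOGY_INCOMING_URL", url)]
        if not value
    ]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return token, url
```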

If a proxy is configured in your environment, exclude localhost from it so the app can reach the local Ollama server (`NO_PROXY` takes hostnames, not URLs):

```sh
export NO_PROXY=localhost,127.0.0.1
```

## Run

```sh
pip install -r requirements.txt
python synochatgpt.py
```
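Replies reach the channel through the bot's incoming webhook: Synology Chat expects a form-encoded POST whose `payload` field holds a JSON object with the message under `"text"`. A minimal sketch of that round trip, independent of the synochat library (function names are illustrative):

```python
import json
import urllib.parse
import urllib.request


def build_payload(text: str) -> bytes:
    # Synology Chat incoming webhooks expect a form field named "payload"
    # containing a JSON object; "text" carries the message body.
    return urllib.parse.urlencode({"payload": json.dumps({"text": text})}).encode()


def send_to_chat(incoming_url: str, text: str) -> None:
    # POST the message to the channel via the bot's incoming URL
    # (the value of SYNOLOGY_INCOMING_URL).
    req = urllib.request.Request(incoming_url, data=build_payload(text))
    with urllib.request.urlopen(req) as resp:
        resp.read()
```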

## TODO

- Fine tune
- Docker
- RAG