This project is based on AIUI by lspahija, modified to run completely locally without any cloud dependencies.
Point-and-click user interfaces will soon be a thing of the past. The main user interface of the near future will be entirely voice-based.
AIUI is a platform that enables seamless two-way verbal communication with AI models. It works in both desktop and mobile browsers and now runs entirely on your local network: no data leaves your network, providing a completely private AI assistant experience.
This fork has been modified to ensure 100% local operation with no external API calls:
- All AI inference happens through your local Ollama server (a quick connectivity check is shown after this list)
- Speech recognition is done locally with Vosk
- Text-to-speech is handled locally with EdgeTTS
- No OpenAI or other cloud services are used
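As a quick sanity check that local inference works, you can call Ollama's `/api/generate` endpoint directly from any machine on your network (substitute your own host and model; a successful reply is a JSON object with a `response` field):

```bash
curl http://your-ollama-server:11434/api/generate \
  -d '{"model": "deepseek-r1:8b", "prompt": "Say hello.", "stream": false}'
```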
To interact with AIUI, simply start speaking after navigating to the app in your browser. AIUI will listen to your voice input, process it using your local Ollama instance, and provide a synthesized speech response. You can have a natural, continuous conversation with the AI by speaking and listening to its responses.
- Docker and Docker Compose
- An Ollama instance running on your local network
- One or more Ollama models pulled to your server
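If you still need to pull a model, the Ollama CLI on the server can fetch one (shown here with the default model referenced throughout this README):

```bash
ollama pull deepseek-r1:8b
```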
- Clone the repo

```bash
git clone https://github.com/shane-reaume/AIUI_Ollama.git
```

- Change directory to AIUI_Ollama

```bash
cd AIUI_Ollama
```
- Edit the `docker-compose.yml` file to point to your Ollama server:

```yaml
environment:
  - OLLAMA_HOST=http://your-ollama-server:11434  # Change this to your Ollama host
  - AI_COMPLETION_MODEL=deepseek-r1:8b           # Change this to your preferred model
```
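The value of `AI_COMPLETION_MODEL` must match a model already pulled on the server. You can confirm what's available by running the Ollama CLI on that machine:

```bash
ollama list
```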
- Build the Docker image

```bash
docker build -t aiui .
```

- Start the container with Docker Compose

```bash
docker-compose up -d
```
Alternatively, you can use the following Docker run command that's known to work:

```bash
docker run -d \
  -e AI_PROVIDER=ollama \
  -e OLLAMA_HOST=http://your-ollama-server:11434 \
  -e AI_COMPLETION_MODEL=deepseek-r1:8b \
  -e STT_PROVIDER=vosk \
  -e TTS_PROVIDER=EDGETTS \
  -e EDGETTS_VOICE=en-US-EricNeural \
  -p 8000:80 aiui
```
Or use the provided helper script:

```bash
chmod +x run_docker.sh
./run_docker.sh
```
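Once the container is up, you can confirm it is running and follow its startup logs with standard Docker commands (the container ID comes from the first command):

```bash
docker ps --filter ancestor=aiui   # find the running container
docker logs -f <container-id>      # follow its logs
```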
- Navigate to `localhost:8000` in a modern browser
- `AI_PROVIDER`: Set to "ollama" to use Ollama locally
- `OLLAMA_HOST`: The URL of your Ollama instance
- `AI_COMPLETION_MODEL`: The name of the model to use (e.g., "deepseek-r1:8b")
- `STT_PROVIDER`: Set to "vosk" for local speech recognition
- `VOSK_MODEL_PATH`: Path to the Vosk model (default is set in the Docker image)
- `TTS_PROVIDER`: Set to "EDGETTS" for local text-to-speech
- `EDGETTS_VOICE`: The voice to use (e.g., "en-US-EricNeural"; see below for listing the available voices)
- `LANGUAGE`: ISO-639-1 language code (default: "en")
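To browse the voices EdgeTTS supports, the `edge-tts` command-line tool can list them. This is a quick check on your host, assuming the `edge-tts` Python package is installed there (it is not required inside the container):

```bash
pip install edge-tts
edge-tts --list-voices | grep en-US
```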
If you're having issues connecting to your Ollama server, you can use the included utility script:

```bash
./check_ollama.py --host http://your-ollama-server:11434
```
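If the script isn't handy, a plain `curl` against Ollama's `/api/tags` endpoint serves as a manual check; it returns the models the server has pulled:

```bash
curl http://your-ollama-server:11434/api/tags
```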
For more detailed setup and configuration options, see LOCAL_SETUP.md.
Please star both this repository and the original AIUI repository! It helps contributors gauge the popularity of the repo and determine how much time to allot to development.