
Local Low-Code Chatbot with Ollama and Flowise


Create a fully functional, local chatbot using Flowise and Ollama. This project provides a low-code, privacy-friendly way to build intelligent conversational bots. 🚀 It was built as part of a blog post, which you can find here

Features

  • Low-Code Workflow: Build chatbots visually without heavy coding.
  • Local Hosting: Everything runs on your machine, keeping your data private and secure.
  • Customisable: Fully adjustable to your needs.
  • Powered by Open-Source Models: Uses Ollama's LLMs for AI capabilities.

Workflow Overview

[Workflow diagram]

Core Components

  1. ChatOllama: Provides AI responses using Ollama's models (see the API sketch after this list).
  2. Buffer Memory: Retains chat history for continuity.
  3. Conversation Chain: Integrates ChatOllama and Buffer Memory to enable interactive conversations.
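Under the hood, the ChatOllama node talks to Ollama's local REST API. As a rough sketch of the equivalent raw request (assuming Ollama is running on its default port 11434 and a llama3.2 model has been pulled; substitute your own model name):

    # Ask the local Ollama server for a single, non-streaming chat completion
    curl http://localhost:11434/api/chat \
      -d '{
        "model": "llama3.2",
        "messages": [
          {"role": "user", "content": "Hello! What can you do?"}
        ],
        "stream": false
      }'

The Conversation Chain and Buffer Memory add the running chat history to each request, so the model sees prior turns instead of a single isolated message.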

Getting Started

Prerequisites

  • Docker (for running Ollama locally)
  • Node.js and npm (for Flowise)
  • A compatible LLM model (e.g., SARA-llama3.2)
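A quick way to confirm the prerequisites are in place (exact version output will vary by system):

    # Verify Docker and Node.js/npm are installed and on your PATH
    docker --version
    node --version
    npm --version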

Installation

  1. Clone the repository:
    git clone https://github.com/dwain-barnes/local-low-code-chatbot-ollama-flowise.git
    cd local-low-code-chatbot-ollama-flowise
  2. Start Ollama (see the model-pull note after these steps):
    docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
  3. Install Flowise:
    npm install -g flowise
    flowise start
  4. Import the provided Flowise JSON (basic-local-chatbot-flowise.json) into Flowise.
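Before the chatflow can respond, the Ollama container needs at least one model available. A minimal sketch, assuming the container is named ollama as above (the blog pairs this workflow with SARA-llama3.2; substitute whichever model you use):

    # Pull a model into the running Ollama container
    docker exec -it ollama ollama pull llama3.2

    # Confirm the model is available
    docker exec -it ollama ollama list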

Running the Chatbot

  1. Access the Flowise editor at http://localhost:3000.
  2. Load and activate the workflow.
  3. Start chatting with the bot in the interface.
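Beyond the built-in chat interface, Flowise exposes each chatflow over a prediction endpoint, so you can also call the bot from scripts or other applications. A minimal sketch, where <your-chatflow-id> is a placeholder for the real ID shown in the Flowise editor:

    # Send a question to the chatflow's REST endpoint
    curl http://localhost:3000/api/v1/prediction/<your-chatflow-id> \
      -H "Content-Type: application/json" \
      -d '{"question": "Hello! What can you do?"}'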