
NiladriChatterje/conversational-online-docs


Getting Started

First, run the development server:

npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev

Open http://localhost:3000 in your browser to see the result.

This project talks to a local model served by Ollama. Pull an AI model of your choice (Mistral in my case):

ollama pull mistral

Many models are available, including Llama 2, Llama 3, and more. After pulling the model, make sure it is running; otherwise the server won't be able to accept queries through its endpoints.
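As a rough sketch of how the server can forward a query to the locally running model, here is a small helper that posts a prompt to Ollama's HTTP API on its default port (11434). The model name "mistral" matches the pull command above; swap it for whichever model you pulled.

```typescript
// Minimal sketch of sending a prompt to a locally running Ollama model.
// Assumes Ollama is serving on its default port (11434) and that the
// "mistral" model has already been pulled.

const OLLAMA_URL = "http://localhost:11434/api/generate";

interface OllamaRequest {
  model: string;
  prompt: string;
  stream: boolean; // false = return one complete JSON response
}

// Build the JSON body for a single, non-streaming generation request.
function buildOllamaRequest(model: string, prompt: string): OllamaRequest {
  return { model, prompt, stream: false };
}

// Send the prompt and return the model's reply text.
async function askModel(model: string, prompt: string): Promise<string> {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOllamaRequest(model, prompt)),
  });
  if (!res.ok) {
    throw new Error(`Ollama returned ${res.status}; is the model running?`);
  }
  const data = (await res.json()) as { response: string };
  return data.response;
}
```

For example, `await askModel("mistral", "Hello")` returns the model's generated text, and the error branch surfaces the "model not running" situation mentioned above.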

Simple Workflow:

Workflow diagram

Issue:

Creating embeddings, storing them in memory, and treating them as a vector store requires decent hardware, and such hardware is currently unavailable to me. :) Instead, a static context is provided: a short document with some information about me. You can test with the provided context and play with it.
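The static-context fallback described above can be sketched as a pure function that prepends the fixed document to every user question before the prompt is sent to the model. The function name and context text here are illustrative placeholders, not the project's actual code.

```typescript
// Sketch of the static-context workaround: instead of retrieving chunks
// from a vector store, one fixed context document is prepended to every
// user question. The context string below is a placeholder.

const STATIC_CONTEXT =
  "Placeholder context: the document with information the model should answer from.";

// Compose the final prompt from the fixed context and the user's question.
function buildPrompt(context: string, question: string): string {
  return [
    "Answer the question using only the context below.",
    "",
    `Context: ${context}`,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```

A call like `buildPrompt(STATIC_CONTEXT, "Who is the author?")` produces the full prompt string; swapping this function for a real embedding lookup is the only change needed once suitable hardware is available.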

Demo:

Demo-Project

About

An LLM chatroom where one can interact with a website by providing its link.
