First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
What I have done is pull an AI model (Mistral in my case) through Ollama:

```bash
ollama pull mistral
```
You can pull any model of your choice; many models are available, including Llama 2, Llama 3, and more.
After pulling the model, make sure the Ollama server is running (for example, via `ollama serve`); otherwise it won't be able to accept queries through its endpoints.
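As a quick sanity check, you can query the local Ollama REST API directly. This is a minimal sketch assuming Ollama's default port (11434) and the `mistral` model pulled above; the file name `sanity-check.ts` is just for illustration:

```ts
// sanity-check.ts — minimal probe of the local Ollama REST API.
// Assumes Ollama is serving on its default port (11434) and that
// the `mistral` model has already been pulled.
async function checkOllama(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral",
      prompt: "Reply with one word: pong",
      stream: false, // return a single JSON object instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`);
  const data = await res.json();
  console.log(data.response); // the model's reply
}

checkOllama();
```

If this prints a reply, the server is up and the endpoints are ready to accept queries.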
Creating embeddings, storing them in memory, and treating them as a vector store requires decent hardware, which is currently unavailable to me. :) Instead, a static context is provided: a plain document with some information about me. You can test with the provided context and play around with it.
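For illustration, here is a minimal sketch of how a static context document can be prepended to the user's question before it is sent to Ollama, in place of retrieval from a vector store. The file name `context.txt` and the helper `askWithContext` are hypothetical, not the actual names used in this repo:

```ts
// ask-with-context.ts — hypothetical sketch: inject a static context
// document into the prompt instead of retrieving it from a vector store.
import { readFileSync } from "node:fs";

// `context.txt` is an assumed file name for the static document about me.
const context = readFileSync("context.txt", "utf-8");

async function askWithContext(question: string): Promise<string> {
  const prompt = `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mistral", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response;
}

askWithContext("What does this person work on?").then(console.log);
```

Swapping in a real vector store later would only change how `context` is produced; the prompt assembly and the Ollama call stay the same.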