TinyLlama LLM Project (GPT-Like)

This project is built with Ollama and TinyLlama to create a lightweight Large Language Model (LLM) application. The backend, powered by Express.js, processes requests and communicates with the TinyLlama model, while the frontend, developed in React.js, provides a user-friendly interface for interaction.


Table of Contents

  • Features
  • Project Structure
  • Getting Started
  • Usage
  • Backend API Example
  • Contributing
  • License

Features

  • Interactive Chat Interface: Users can chat with TinyLlama via a web-based chat interface.
  • Efficient API Handling: Express.js backend routes messages to TinyLlama efficiently.
  • Lightweight LLM: Uses the TinyLlama-1.1B model for fast response times.
  • Scalable Design: The project structure allows easy expansion with additional features.

Project Structure

This project is split into a backend (Express.js) directory and a frontend (React.js) directory:

tinyllama-llm-project/
├── backend/                # Backend files (Express.js)
│   ├── app.js              # Main server file
│   └── package.json
├── frontend/               # Frontend files (React.js)
│   ├── src/
│   │   ├── App.js          # Main App component
│   └── package.json
└── README.md

Getting Started

Follow the steps below to install and run the project on your local machine.

Prerequisites

  • Node.js and npm
  • Python 3.x
  • Ollama CLI (for running TinyLlama locally)

Installation

  1. Clone the Repository

    Clone the project repository to your local machine:

    git clone https://github.com/zidniryi/LLM-GPT-Like-Tiny
    cd LLM-GPT-Like-Tiny
  2. Backend Setup

    Navigate to the backend directory to set up the backend server:

    cd backend

    Install the necessary dependencies:

    npm install
  3. Frontend Setup

    Open a new terminal window and navigate to the frontend directory to set up the frontend client:

    cd frontend

    Install the necessary dependencies:

    npm install
  4. Set Up Ollama

    Ensure Ollama and TinyLlama are installed and configured on your system. Follow Ollama's official documentation for setup instructions.
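
    For example, after installing the Ollama CLI, you can typically download and smoke-test the model with:

    ollama pull tinyllama
    ollama run tinyllama "Hello"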


Usage

Starting the Servers

  1. Start the Backend Server

    In the backend directory, run the following command:

    npm start

    The backend server should start on http://localhost:3001.

  2. Start the Frontend Server

    In a new terminal window, navigate to the frontend directory and run:

    npm start

    The frontend should start on http://localhost:3000.

Interacting with TinyLlama

Once both servers are running, open your browser and navigate to http://localhost:3000 to interact with the TinyLlama model through the chat interface.
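
Behind the chat UI, each message is sent to the backend's /generate endpoint (documented in the next section). Below is a minimal sketch of such a call from the frontend, assuming the default ports used in this guide; the helper name is illustrative, not the repository's actual code:

async function askTinyLlama(prompt) {
  // POST the prompt to the Express backend, which relays it to TinyLlama.
  const res = await fetch("http://localhost:3001/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "tinyllama", prompt, raw: true, stream: false }),
  });
  const data = await res.json();
  return data.response; // the generated text
}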


Backend API Example

The backend exposes an API endpoint to interact with the TinyLlama model programmatically.

  • Endpoint: POST http://localhost:3001/generate

Request Payload:

{
  "model": "tinyllama",
  "prompt": "What colow of the sky",
  "raw": true,
  "stream": false
}
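
You can try the endpoint from the command line, for example with curl (assuming the backend is running on port 3001 as described above):

curl -X POST http://localhost:3001/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "tinyllama", "prompt": "What color of the sky", "raw": true, "stream": false}'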

Response:

{
  "model": "tinyllama",
  "created_at": "2024-10-26T05:56:34.946112Z",
  "response": " was it? \n“Was it blue?” asks the boy.\n\nThe sky was a blue-white glow,  \nA sight so beautiful to behold.\n\nAnd the stars sparkled like diamonds,  \nShining brightly in the night's dark, clear sky.\n\nThe moon shone high and bright above,  \nA light that illuminated all below.\n\nBut even as it grew into the night,  \nThe moon cast a dim but ever-lasting glow.\n\nFor when the stars began to fade away,  \nAnd the last of their light was gone,  \nA silence descended upon the night,  \nAs though the world were waiting for peace.",
  "done": true,
  "done_reason": "stop",
  "total_duration": 8206857666,
  "load_duration": 4364864791,
  "prompt_eval_count": 7,
  "prompt_eval_duration": 82870000,
  "eval_count": 156,
  "eval_duration": 3753873000
}

Request Parameters

  • model: (string) The model to query (here, "tinyllama").
  • prompt: (string) The prompt text to send to TinyLlama.
  • raw: (boolean) When true, the prompt is passed to the model without Ollama's prompt template.
  • stream: (boolean) When false, the full response is returned as a single JSON object.

Response

  • response: (string) The model’s response.
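
For reference, the sketch below shows one way such a route could be implemented: an Express handler that forwards the request body to a locally running Ollama instance (Ollama's HTTP API listens on http://localhost:11434 by default, and the built-in fetch requires Node 18+). This is an illustrative assumption, not necessarily the repository's actual app.js:

const express = require("express");
const cors = require("cors"); // assumed dependency so the React app on port 3000 can call the API

const app = express();
app.use(cors());
app.use(express.json());

app.post("/generate", async (req, res) => {
  try {
    // Forward the payload as-is to Ollama's local generate endpoint.
    const ollamaRes = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(req.body),
    });
    // With "stream": false, Ollama returns a single JSON object.
    const data = await ollamaRes.json();
    res.json(data);
  } catch (err) {
    res.status(500).json({ error: err.message }); // e.g. Ollama is not running
  }
});

app.listen(3001, () => {
  console.log("Backend listening on http://localhost:3001");
});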

Contributing

Contributions are welcome! Please fork the repository and create a pull request with your changes.


License

This project is licensed under the MIT License.
