Anandha-Vihari/AI-Celebrity-Chatbot

🌟 AI Celebrity Chatbot

📋 Project Overview

An innovative AI-powered chatbot that simulates realistic conversations with celebrities using state-of-the-art open-source large language models.

🎯 Project Objective

Develop an advanced conversational AI platform that enables users to interact with virtual representations of celebrities, leveraging sophisticated natural language processing and machine learning technologies.

📙 Features

🤖 Advanced AI Capabilities

  • Hyper-realistic celebrity personality simulation
  • Context-aware conversational intelligence
  • Deep learning-powered response generation

🌐 Celebrity Interaction

  • Diverse celebrity persona library
  • Customizable interaction parameters
  • Multi-domain celebrity representations (Entertainment, Sports, Politics, Science)

🧠 Technical Innovations

  • Open-source LLM integration
  • Real-time personality trait modeling
  • Adaptive communication algorithms
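As a sketch of how personality trait modeling might feed into prompt construction, the hypothetical `CelebrityPersona` class below renders a persona as a system prompt for the underlying LLM. The class name, fields, and example persona are illustrative assumptions, not code from this repository:

```python
from dataclasses import dataclass, field

@dataclass
class CelebrityPersona:
    """Hypothetical persona container; field names are illustrative."""
    name: str
    domain: str                      # e.g. Entertainment, Sports, Politics, Science
    traits: list = field(default_factory=list)
    speaking_style: str = "conversational"

    def system_prompt(self) -> str:
        """Render the persona as a system prompt for the underlying LLM."""
        trait_list = ", ".join(self.traits) or "friendly"
        return (
            f"You are {self.name}, a well-known figure in {self.domain}. "
            f"Key traits: {trait_list}. "
            f"Answer in a {self.speaking_style} tone and stay in character."
        )

# Example: build a prompt for a science persona.
persona = CelebrityPersona(
    name="Ada Lovelace",
    domain="Science",
    traits=["curious", "precise", "witty"],
)
print(persona.system_prompt())
```

Keeping traits as structured data rather than hard-coded prompt strings is what makes the persona library customizable per interaction.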

🫳 Prerequisites

Hardware Requirements

  • Processor: 8+ CPU cores
  • RAM: 16+ GB
  • Storage: 60+ GB
  • Recommended: NVIDIA CUDA-compatible GPU

Software Requirements

  • Python 3.11+
  • Node.js 20.16.0+
  • Flask
  • SQLAlchemy
  • Ollama
  • Llama3/Gemma/Mistral LLMs

👣 Installation Guide

1. Clone Repository

git clone https://github.com/yourusername/AI-Celebrity-Chatbot.git
cd AI-Celebrity-Chatbot

2. Setup Virtual Environment

python -m venv venv
source venv/bin/activate  # Unix/macOS
# venv\Scripts\activate  # Windows

3. Install Dependencies

pip install -r requirements.txt

Ollama Setup

Installing Ollama

# Linux / macOS (on Windows, run inside WSL2)
curl -fsSL https://ollama.com/install.sh | sh

# Verify Installation
ollama --version

Pull required models

ollama pull mistral    # Balanced performance (default)
ollama pull llama3     # Creative responses
ollama pull gemma      # Efficient processing  
ollama pull starcoder  # Technical expertise

Verify models are installed
ollama list
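Once a model is pulled, the application can query it through Ollama's local HTTP API (`POST /api/generate`, served on port 11434 by default). The sketch below builds a request payload and defines an uncalled helper that would send it; `persona_payload` and `ask_ollama` are illustrative names, not code from this repository:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def persona_payload(model: str, system_prompt: str, question: str) -> dict:
    """Build a request body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": f"{system_prompt}\n\nUser: {question}\nAssistant:",
        "stream": False,  # ask for one JSON object instead of a token stream
    }

def ask_ollama(payload: dict) -> str:
    """Send the payload to a locally running Ollama server (requires `ollama serve`)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Build (but do not send) a request for the default mistral model.
payload = persona_payload("mistral", "You are a famous physicist.", "What inspires you?")
print(json.dumps(payload, indent=2))
```

With `"stream": False`, Ollama returns the whole completion in a single JSON response, which keeps the server-side code simple at the cost of latency on long answers.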

🗂 Project Structure

AI-Celebrity-Chatbot/
├── abilities/
│   ├── llm.py
│   ├── migrations.py
│   └── __init__.py
├── instance/
│   └── database.db
├── migrations/
│   └── database.sql
├── static/
│   ├── css/
│   │   └── styles.css
│   └── js/
│       ├── chat.js
│       ├── header.js
│       └── home.js
├── templates/
│   ├── chat.html
│   ├── home.html
│   └── partials/
│       ├── _desktop_header.html
│       ├── _header.html
│       └── _mobile_header.html
├── app_init.py
├── main.py
├── models.py
├── routes.py
└── requirements.txt
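The schema lives in `migrations/database.sql` and the SQLite database in `instance/database.db`. As a hypothetical sketch of what a minimal persona table might look like (the repository's actual schema may differ), here is the idea using Python's stdlib `sqlite3`:

```python
import sqlite3

# In-memory database for illustration; the real app persists instance/database.db.
conn = sqlite3.connect(":memory:")

# Hypothetical schema -- the actual one lives in migrations/database.sql.
conn.execute(
    """
    CREATE TABLE celebrities (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        domain TEXT NOT NULL,
        persona_prompt TEXT NOT NULL
    )
    """
)
conn.execute(
    "INSERT INTO celebrities (name, domain, persona_prompt) VALUES (?, ?, ?)",
    ("Marie Curie", "Science", "You are Marie Curie; answer with scientific rigor."),
)
row = conn.execute("SELECT name, domain FROM celebrities").fetchone()
print(row)
```

Storing the persona prompt alongside the celebrity record lets routes look up everything needed for a chat session in one query.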

🚀 Running the Application

# Start development server
python main.py

🚀 Deploy on Spheron

This guide walks you through deploying the server and Ollama on Spheron using the Spheron Protocol CLI 💪

Prerequisites

Make sure you have the following before deploying on Spheron:

  • curl

1. Install Spheron Protocol CLI (Linux, MacOS)

curl -sL1 https://sphnctl.sh | bash

After installation, verify it by checking the Spheron CLI version:

sphnctl version # or `sphnctl -h` for help

2. Creating a Wallet

sphnctl wallet create --name <your-wallet-name>

Replace <your-wallet-name> with your desired wallet name. Here is an example of how the result will look:

Created account xxx:
 path: root/.spheron/<your-wallet-name>.json
 address: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 secret: xxxxxxxxxx
 mnemonic: xxxxxx xxxxx xxxx xxxxx xxxxx xxxx xxxxx xxxxx

Important: Securely save the mnemonic phrase and secret key provided.

3. Get Test Tokens from the Faucet

You will need tokens to deploy on Spheron. Visit the Spheron Faucet to obtain test tokens for deployment.

After receiving the tokens, you can check your wallet balance with:

sphnctl wallet balance --token USDT

Here is an example of how the result will look:

Current ETH balance: 0.00011 (used for gas fee)
Total USDT balance: 35 (used to buy the lease)
 
Deposited USDT balance
 unlocked: 100.0000
 locked: 0.0000

4. Deposit Tokens to Your Escrow Balance

Deposit USDT to your escrow wallet for deployment:

sphnctl payment deposit --amount 20 --token USDT
sphnctl wallet balance --token USDT

5. Create your Deployment

Deploy the deploy.yml configuration file on Spheron:

sphnctl deployment create deploy.yml
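The command expects a `deploy.yml` in the working directory. The sketch below is illustrative only: it assumes Spheron's Akash-style SDL and reuses the hardware profile from the Prerequisites section, but the field names, image name, and pricing are all assumptions, so consult the Spheron documentation for the exact schema:

```yaml
# Illustrative sketch only -- not verified against Spheron's current SDL schema.
version: "1.0"
services:
  chatbot:
    image: yourusername/ai-celebrity-chatbot:latest   # hypothetical image name
    expose:
      - port: 5000        # Flask dev server port
        as: 80
        to:
          - global: true
profiles:
  compute:
    chatbot:
      resources:
        cpu:
          units: 8        # matches the 8+ core requirement above
        memory:
          size: 16Gi
        storage:
          - size: 60Gi
  placement:
    westcoast:
      pricing:
        chatbot:
          token: USDT
          amount: 1       # placeholder price per block
deployment:
  chatbot:
    westcoast:
      profile: chatbot
      count: 1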

Here is an example of how the deployment output will look:

Validating SDL configuration.
SDL validated.
Sending configuration for provider matching.
Create deployment tx: [Tx Hash]
Waiting for providers to bid on the deployment order...
Bid found.
Order matched successfully.
Deployment created using wallet xxxxxxxxxxxxxxxxxxxxxxx
 lid: xxxx
 provider: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 agreed price: 0.30
Sending the manifest for deployment…
Deployment manifest sent, waiting for acknowledgment.
Deployment is finished.

The `lid` (lease ID) is used to access the deployment.

6. Access Your Deployment

To get details about your deployment, including the URL, ports, and status, run:

sphnctl deployment get --lid <lease-id>

Replace <lease-id> with the Lease ID you obtained after deployment.

The output includes a URL, which is your deployment link.

✍ Acknowledgments

This project would not exist without the open-source projects and tools it builds on!

🧾 License

This project is licensed under the MIT License.

Revolutionizing Digital Interactions, One Celebrity at a Time 🌟
