This project is an advanced chat application that leverages OpenAI's powerful language models to provide intelligent responses. Built with a modern tech stack, it combines a FastAPI backend with a responsive frontend, featuring load balancing, service discovery, and session management.
- Create a scalable, intelligent chatbot interface using OpenAI's GPT models
- Provide fast and reliable responses through efficient API integration
- Implement vector-based search capabilities for improved response accuracy
- Enable real-time chat functionality with WebSocket support
- Ensure high availability through load balancing and service discovery
- FastAPI: High-performance Python web framework
- Python 3.11+: Core programming language
- OpenAI API: For natural language processing
- FAISS: Vector similarity search library
- Redis: Session management and caching
- Uvicorn: ASGI server implementation
- Load Balancer: Custom implementation for request distribution
- Service Discovery: Automatic service registration and health checks
- Session Management: Redis-based session storage
- Task Queue: Asynchronous task processing
- HTML/Jinja2: Template-based structure
- JavaScript: Client-side functionality and WebSocket handling
- CSS: Responsive design and styling
- Docker: Containerization
- Docker Compose: Multi-container orchestration
- NGINX: Reverse proxy and load balancing
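NGINX fronts the FastAPI instances as reverse proxy and load balancer. A minimal `nginx.conf` sketch of that shape follows; the instance names, ports, and directives here are illustrative assumptions, not taken from the project's actual `nginx.conf`:

```nginx
# Hypothetical upstream pool; the repository's nginx.conf is authoritative.
upstream fastapi_backend {
    least_conn;          # route each request to the least-busy instance
    server app1:8000;
    server app2:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://fastapi_backend;
        proxy_set_header Host $host;
        # Headers required for WebSocket upgrade through the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```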
- Load balancing across multiple instances
- Automatic service discovery and registration
- Health monitoring and failover
- Scalable infrastructure
- Real-time message processing
- Context-aware responses
- Natural language understanding
- Message history management
- FAISS-powered vector similarity search
- Efficient query processing
- Memory management for chat context
- Knowledge base integration
- RESTful API endpoints
- WebSocket support
- Rate limiting and caching
- Comprehensive error handling
- Input validation and sanitization
- Performance monitoring and logging
- Rate limiting protection
- Secure session management
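The rate-limiting feature above is commonly implemented as a token bucket: each client gets a bucket that refills at a fixed rate and allows short bursts. The sketch below is an illustrative stand-in, not the project's `utils/rate_limiter.py`:

```python
import time


class TokenBucket:
    """Illustrative token-bucket rate limiter (hypothetical, not the project's code)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(rate=5, capacity=2)
print([bucket.allow() for _ in range(3)])  # burst of 2 allowed, third call denied
```

In a real deployment one bucket would be kept per client key (e.g. in Redis), so that limits survive across the multiple FastAPI instances.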
```mermaid
graph TD
    A[Client] -->|HTTP/WebSocket| B[Load Balancer]
    B -->|Request Distribution| C[FastAPI Instances]
    C -->|Service Discovery| D[Service Registry]
    C -->|Session Management| E[Redis Store]
    C -->|Chat Processing| F[OpenAI Service]
    C -->|Vector Search| G[FAISS Index]
    C -->|Cache| H[Redis Cache]
    C -->|Tasks| I[Task Queue]
```
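The load-balancer and service-registry boxes in the diagram can be sketched as a round-robin selector that only picks healthy instances. This is a simplified illustration, not the project's `load_balancer/balancer.py` or `service_discovery/discovery.py`:

```python
import itertools


class ServiceRegistry:
    """Toy registry: instances register themselves and report health (illustrative only)."""

    def __init__(self):
        self.instances: dict[str, bool] = {}  # address -> healthy?

    def register(self, address: str) -> None:
        self.instances[address] = True

    def mark_unhealthy(self, address: str) -> None:
        self.instances[address] = False

    def healthy(self) -> list[str]:
        return [addr for addr, ok in self.instances.items() if ok]


class RoundRobinBalancer:
    """Rotate across whatever instances the registry currently reports as healthy."""

    def __init__(self, registry: ServiceRegistry):
        self.registry = registry
        self._counter = itertools.count()

    def pick(self) -> str:
        targets = self.registry.healthy()
        if not targets:
            raise RuntimeError("no healthy instances available")
        return targets[next(self._counter) % len(targets)]


registry = ServiceRegistry()
registry.register("app1:8000")
registry.register("app2:8000")
balancer = RoundRobinBalancer(registry)
print(balancer.pick(), balancer.pick())  # alternates: app1:8000 app2:8000
```

Because unhealthy instances simply drop out of `healthy()`, failover falls out of the selection logic rather than needing a separate code path.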
```
OpenAI Chatbot Project/
├── app/
│   ├── __init__.py
│   ├── main.py
│   ├── error_handlers.py
│   ├── memory.py
│   ├── models/
│   │   ├── __init__.py
│   │   └── message.py
│   ├── routers/
│   │   ├── __init__.py
│   │   └── chat.py
│   ├── handlers/
│   │   └── message_handler.py
│   ├── services/
│   │   ├── __init__.py
│   │   └── chat_service.py
│   └── exceptions.py
├── common/
│   └── websocket_manager.py
├── services/
│   ├── __init__.py
│   └── openai_service.py
├── config/
│   ├── __init__.py
│   ├── logging_config.py
│   ├── faiss_config.py
│   └── settings.py
├── load_balancer/
│   ├── __init__.py
│   └── balancer.py
├── middleware/
│   ├── __init__.py
│   └── error_logging.py
├── scripts/
│   ├── __init__.py
│   └── vectorization.py
├── service_discovery/
│   ├── __init__.py
│   └── discovery.py
├── session/
│   ├── __init__.py
│   └── redis_store.py
├── templates/
│   └── index.html
├── static/
│   ├── css/
│   └── js/
├── utils/
│   ├── __init__.py
│   ├── helpers.py
│   ├── cache_manager.py
│   ├── decorators.py
│   ├── health_check.py
│   ├── performance_logger.py
│   ├── rate_limiter.py
│   ├── task_queue.py
│   └── backup_manager.py
├── tests/
│   ├── __init__.py
│   ├── test_exceptions.py
│   ├── test_helpers.py
│   ├── test_main.py
│   ├── test_openai_services.py
│   └── test_vector_store.py
├── .coverage
├── .coveragerc
├── .env
├── Dockerfile
├── docker-compose.yml
├── nginx.conf
├── pytest.ini
├── requirements.txt
├── setup.py
└── README.md
```
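The vector-search pieces above (`config/faiss_config.py`, `scripts/vectorization.py`, `tests/test_vector_store.py`) revolve around nearest-neighbour lookup over embedding vectors. Below is a minimal NumPy brute-force sketch of that idea; FAISS replaces this linear scan with an optimized index at scale, and the toy vectors here are made up for illustration:

```python
import numpy as np


def nearest(query: np.ndarray, index: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k vectors in `index` closest (L2) to `query`.

    Brute-force stand-in for what a FAISS flat-L2 index does internally.
    """
    dists = np.linalg.norm(index - query, axis=1)  # L2 distance to every stored vector
    return np.argsort(dists)[:k]


# Toy 3-dimensional "embeddings" standing in for real document vectors.
docs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.9, 0.1, 0.0]])
print(nearest(np.array([1.0, 0.0, 0.0]), docs))  # → [0 2]
```

In the chatbot, the query vector would be the embedding of the user's message and the index would hold embedded knowledge-base passages, so the top-k hits become context for the model.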
- Python 3.11+
- Docker
- Docker Compose
- Redis
- OpenAI API Key
Clone the repository:

```bash
git clone https://github.com/zehraacer/Enterprise-Grade_OpenAI_Chatbot_Platform.git
cd Enterprise-Grade_OpenAI_Chatbot_Platform
```
Create and activate a virtual environment:

```bash
python3 -m venv venv
source venv/bin/activate
```
Install dependencies:

```bash
pip install -r requirements.txt
```
Set up environment variables: create a `.env` file in the root directory and add your configuration:

```
OPENAI_API_KEY=your_openai_api_key
REDIS_URL=redis://localhost:6379/0
```
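These variables are typically read once at startup. A minimal sketch of how `config/settings.py` might consume them (the real file may use `pydantic` or another mechanism; this shape is an assumption):

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Hypothetical settings object; the project's actual config/settings.py may differ."""
    openai_api_key: str
    redis_url: str


def load_settings() -> Settings:
    # Fail fast if the API key is missing; fall back to a local Redis by default.
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return Settings(
        openai_api_key=key,
        redis_url=os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
    )
```

Failing fast on a missing key surfaces misconfiguration at boot rather than on the first chat request.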
Run the application:

```bash
python -m uvicorn app.main:app --reload
```
Build and run the Docker containers:

```bash
docker-compose up --build
```
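The compose file wires the app, Redis, and NGINX together. A hypothetical minimal shape is sketched below; service names, images, and ports are illustrative, and the repository's actual `docker-compose.yml` is authoritative:

```yaml
# Illustrative sketch only; see the repository's docker-compose.yml for real values.
services:
  app:
    build: .
    env_file: .env
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
    depends_on:
      - app
```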
To run the tests, use the following command:

```bash
pytest
```
Contributions are welcome! Please fork the repository and create a pull request. For major changes, please open an issue first to discuss what you would like to change.
This project is licensed under the MIT License. See the LICENSE file for details.
For any inquiries or support, please contact zehraacer.