A powerfully elegant CLI for unified access to multiple AI providers
Features • Installation • Usage • Configuration • Contributing
The LLM CLI Toolkit (OFCA) is a command-line tool that provides seamless access to multiple AI providers through a single, unified interface. Built with modern JavaScript and featuring a powerline-style terminal UI, OFCA makes interacting with different AI models as simple as a single function call.
- OpenAI - GPT-4, GPT-3.5-Turbo
- Anthropic - Claude 3 Opus, Sonnet, Haiku
- OpenRouter - Multiple model aggregation
- Groq - Ultra-fast inference
- Grok - X's AI powerhouse
- Mistral - Advanced open models
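Several of the providers above expose OpenAI-compatible endpoints, which is one way a "single function call" abstraction can be built. The sketch below is a minimal illustration of that idea, not OFCA's actual implementation; the `chat()` helper and the provider table are assumptions.

```js
// sketch.js - hypothetical illustration of unified provider access (not OFCA's actual API)
import OpenAI from 'openai';

// Providers with OpenAI-compatible endpoints differ only in base URL and key.
const PROVIDERS = {
  openai:     { baseURL: 'https://api.openai.com/v1',      keyEnv: 'OPENAI_API_KEY' },
  groq:       { baseURL: 'https://api.groq.com/openai/v1', keyEnv: 'GROQ_API_KEY' },
  openrouter: { baseURL: 'https://openrouter.ai/api/v1',   keyEnv: 'OPENROUTER_API_KEY' },
};

async function chat(provider, model, prompt) {
  const { baseURL, keyEnv } = PROVIDERS[provider];
  const client = new OpenAI({ baseURL, apiKey: process.env[keyEnv] });
  const res = await client.chat.completions.create({
    model,
    messages: [{ role: 'user', content: prompt }],
  });
  return res.choices[0].message.content;
}

// Usage: console.log(await chat('groq', 'llama3-70b-8192', 'Hello!'));
```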
- Powerline Style - Modern terminal aesthetics
- Provider Themes - Unique gradients per provider
- Smart Icons - Nerd Font integration
- Responsive Design - Adapts to terminal size
- Async Architecture - Non-blocking operations
- Error Resilience - Comprehensive error handling (see the retry sketch after this list)
- Demo Mode - Try without API keys
- Smart Caching - Optimized responses
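"Error resilience" in practice usually means retrying transient provider failures. The helper below is a hypothetical sketch of that pattern (exponential backoff on 429/5xx responses), not code from this repository.

```js
// retry.js - hypothetical retry-with-backoff wrapper for transient provider errors
async function withRetry(fn, { attempts = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const retriable = err.status === 429 || (err.status >= 500 && err.status < 600);
      if (!retriable || attempt === attempts) throw err;
      // Exponential backoff: 500 ms, 1000 ms, 2000 ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}

// Usage: const reply = await withRetry(() => chat('openai', 'gpt-4', 'Hello'));
```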
```bash
# Clone the repository
git clone https://github.com/traves-theberge/OFCA.git
cd OFCA

# Install dependencies
npm install

# Set up environment variables
cp .env.example .env
```
```bash
# Start in normal mode
npm start

# Start in demo mode
npm run demo
```
Create a `.env` file with your API keys:
```env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
OPENROUTER_API_KEY=sk-or-...
GROQ_API_KEY=gsk-...
GROK_API_KEY=grok-...
MISTRAL_API_KEY=...
```
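Since OFCA is a Node.js app, one common way to load these keys at startup is dotenv; whether the project actually uses dotenv is an assumption, and the validation loop below is hypothetical.

```js
// env.js - hypothetical startup check for the keys defined in .env
import 'dotenv/config'; // copies .env entries into process.env

const KEY_VARS = ['OPENAI_API_KEY', 'ANTHROPIC_API_KEY', 'GROQ_API_KEY']; // only the providers you use

for (const name of KEY_VARS) {
  if (!process.env[name]) {
    console.warn(`${name} is not set - that provider will be unavailable (or use "npm run demo").`);
  }
}
```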
During chat sessions, you can use these special commands:
| Command | Description |
|---------|-------------|
| `/switch` | Change provider or model |
| `/model` | List available models |
| `/help` | Show command list |
| `/clear` | Clear chat history |
| `/exit` | End session |
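A minimal sketch of how such slash commands might be dispatched in a REPL loop is shown below; the handler bodies are placeholders, not the project's actual implementation.

```js
// commands.js - hypothetical slash-command dispatcher (handler bodies are placeholders)
const history = [];

const commands = {
  '/switch': () => console.log('TODO: prompt for a new provider/model'),
  '/model':  () => console.log('TODO: list models for the current provider'),
  '/help':   () => console.log(Object.keys(commands).join('  ')),
  '/clear':  () => { history.length = 0; console.log('History cleared.'); },
  '/exit':   () => process.exit(0),
};

function handleInput(line) {
  const [cmd] = line.trim().split(/\s+/);
  if (cmd in commands) return commands[cmd]();
  history.push({ role: 'user', content: line }); // anything else is a normal chat turn
}

handleInput('/help');
```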
OFCA features unique themes for each provider:
| Provider | Theme Colors | Icon |
|----------|--------------|------|
| OpenAI | Green gradient | 🤖 |
| Anthropic | Purple gradient | 🌟 |
| OpenRouter | Orange gradient | 🌐 |
| Groq | Blue gradient | ⚡ |
| Grok | Red gradient | 🤘 |
| Mistral | Indigo gradient | 🌪️ |
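The gradients themselves are an implementation detail, but per-provider theming can be as simple as a lookup table feeding a styling library such as chalk; the hex values below are illustrative, not the project's actual palette.

```js
// theme.js - hypothetical per-provider prompt styling with chalk (colors are illustrative)
import chalk from 'chalk';

const THEMES = {
  openai:    { color: '#10a37f', icon: '🤖' },
  anthropic: { color: '#8a2be2', icon: '🌟' },
  groq:      { color: '#1e90ff', icon: '⚡' },
  mistral:   { color: '#4b0082', icon: '🌪️' },
};

function renderPrompt(provider, model) {
  const { color, icon } = THEMES[provider];
  return chalk.hex(color).bold(` ${icon} ${provider} `) + chalk.dim(`(${model}) ❯ `);
}

console.log(renderPrompt('openai', 'gpt-4'));
```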
```json
{
  "temperature": 0.7,
  "maxTokens": 2000,
  "topP": 0.9,
  "frequencyPenalty": 0.0,
  "presencePenalty": 0.0
}
```
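These are the standard sampling parameters most chat APIs accept; with the official openai Node client, for example, they map onto a request roughly as shown below (snake_case on the wire). How OFCA applies them internally is not specified here, so treat this as an illustration.

```js
// request.js - how the settings above map onto an OpenAI-style chat completion request
import OpenAI from 'openai';

const settings = { temperature: 0.7, maxTokens: 2000, topP: 0.9, frequencyPenalty: 0.0, presencePenalty: 0.0 };

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const res = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
  temperature: settings.temperature,
  max_tokens: settings.maxTokens,
  top_p: settings.topP,
  frequency_penalty: settings.frequencyPenalty,
  presence_penalty: settings.presencePenalty,
});
console.log(res.choices[0].message.content);
```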
```env
RATE_LIMIT_REQUESTS=60
RATE_LIMIT_WINDOW=60000
```
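These two variables describe a windowed limit (here, 60 requests per 60,000 ms). A client-side limiter driven by them could look like the hypothetical sketch below.

```js
// ratelimit.js - hypothetical sliding-window limiter driven by the env vars above
const MAX_REQUESTS = Number(process.env.RATE_LIMIT_REQUESTS ?? 60);
const WINDOW_MS = Number(process.env.RATE_LIMIT_WINDOW ?? 60000);

const timestamps = [];

function checkRateLimit() {
  const now = Date.now();
  // Drop timestamps that have fallen out of the window
  while (timestamps.length && now - timestamps[0] > WINDOW_MS) timestamps.shift();
  if (timestamps.length >= MAX_REQUESTS) {
    throw new Error(`Rate limit exceeded: ${MAX_REQUESTS} requests per ${WINDOW_MS} ms`);
  }
  timestamps.push(now);
}
```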
```bash
# Verify API keys
npm run check-env

# Test specific provider
npm run test-provider openai
```
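What `test-provider` does internally isn't documented here; a typical implementation would make a minimal live request to confirm the key and endpoint respond, along the lines of this hypothetical sketch (openai shown).

```js
// test-provider.js - hypothetical live check for a single provider (not the repo's actual script)
import 'dotenv/config';
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
try {
  // One-token request just to confirm the key and endpoint respond
  await client.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'ping' }],
    max_tokens: 1,
  });
  console.log('openai: OK');
} catch (err) {
  console.error(`openai: FAILED (${err.status ?? err.message})`);
}
```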
- Check internet connection
- Verify API endpoint status
- Ensure proper proxy configuration
- Confirm model access level
- Check provider status page
- Try fallback models (see the sketch below)
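Falling back is usually just an ordered walk over candidate models; the helper below is a hypothetical sketch, and the model names are illustrative.

```js
// fallback.js - hypothetical fallback chain for when the preferred model is unavailable
const FALLBACKS = ['gpt-4', 'gpt-3.5-turbo']; // ordered by preference; names are illustrative

async function withFallback(callModel, models = FALLBACKS) {
  let lastError;
  for (const model of models) {
    try {
      return await callModel(model);
    } catch (err) {
      lastError = err;
      console.warn(`${model} failed (${err.status ?? err.message}), trying the next model...`);
    }
  }
  throw lastError; // every candidate failed
}

// Usage: await withFallback((model) => chat('openai', model, prompt));
```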
- Response caching (see the sketch after this list)
- Model preloading
- Connection pooling
- Automatic garbage collection
- Stream processing
- Buffer optimization
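As one concrete example, response caching can be as simple as memoizing identical provider/model/prompt triples for a short TTL; the sketch below is hypothetical, not the project's actual cache.

```js
// cache.js - hypothetical TTL cache keyed on provider/model/prompt
const cache = new Map(); // key -> { value, expiresAt }
const TTL_MS = 5 * 60 * 1000;

async function cachedChat(provider, model, prompt, fetchFn) {
  const key = `${provider}:${model}:${prompt}`;
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // serve from cache

  const value = await fetchFn(); // otherwise call the provider
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}

// Usage: await cachedChat('openai', 'gpt-4', prompt, () => chat('openai', 'gpt-4', prompt));
```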
- **API Key Management**
  - Use environment variables
  - Rotate keys regularly
  - Never commit keys to repo
- **Data Protection**
  - Local message encryption
  - Secure storage
  - Privacy controls
- **Access Control**
  - Rate limiting
  - IP restrictions
  - Usage monitoring
Detailed documentation is available in the Wiki.
- **Fork and Clone**
  ```bash
  git clone https://github.com/your-username/OFCA.git
  ```
- **Install Dependencies**
  ```bash
  npm install
  ```
- **Create Branch**
  ```bash
  git checkout -b feature/your-feature
  ```
- ESLint configuration
- Prettier formatting
- JSDoc comments
- Unit test coverage
- Update documentation
- Add unit tests
- Follow commit conventions
- Request review
**Traves Theberge** - Project Lead
Special thanks to Echo Hive for their invaluable support and contributions to this project.
This project is licensed under the MIT License - see the LICENSE file for details.