A high-performance, memory-efficient API for Stable Diffusion XL operations. Built for production use with zero file storage and automatic resource cleanup.
- 🚀 Memory-efficient operation with zero file storage
- 🎨 Support for text-to-image, image-to-image, and inpainting
- 🔧 Multiple scheduler options
- 📦 LoRA model integration
- 🔄 Automatic device selection (CUDA/MPS/CPU)
- 🧹 Aggressive memory cleanup
- 🛡️ Production-ready error handling
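The CUDA/MPS/CPU fallback order described above can be sketched as a small helper. This is an illustrative sketch, not the project's actual implementation; the function name `pick_device` and the injected availability flags are assumptions (a real version would query `torch.cuda.is_available()` and `torch.backends.mps.is_available()` directly):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the preferred torch device string: CUDA first, then MPS, then CPU.

    The availability flags are passed in here for illustration; in practice they
    would come from torch.cuda.is_available() / torch.backends.mps.is_available().
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"
```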
# As of this writing, Apple Silicon users should install the nightly build of PyTorch for FP16 support on MPS:
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu
The server will start on http://localhost:8000 by default.
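A text-to-image request against the running server might be built like the sketch below. The endpoint path (`/txt2img`) and payload field names are assumptions for illustration only; consult the actual route definitions for the real schema:

```python
import json
import urllib.request

# Hypothetical payload; field names mirror common Stable Diffusion parameters
# and are assumptions, not this API's confirmed schema.
payload = {
    "prompt": "a sunlit alpine lake, photorealistic",
    "negative_prompt": "blurry, low quality",
    "num_inference_steps": 30,
    "guidance_scale": 7.5,
}

# Build (but do not send) a POST request to the default server address.
req = urllib.request.Request(
    "http://localhost:8000/txt2img",  # hypothetical endpoint path
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Send with urllib.request.urlopen(req) once the server is running.
```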
The API is designed for long-running deployments with:
- Zero file storage
- Automatic cache clearing
- Aggressive memory cleanup
- Resource monitoring
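The cache clearing and memory cleanup steps listed above typically combine Python garbage collection with the accelerator's cache-release call. The sketch below shows one common pattern; the function name `cleanup` is an assumption and this is not the project's actual implementation:

```python
import gc


def cleanup(device: str) -> None:
    """Free unreferenced Python objects and, where available, cached accelerator memory."""
    # Drop unreachable Python objects (e.g. intermediate tensors) first.
    gc.collect()
    try:
        import torch  # guarded so the sketch also runs where torch is absent

        if device == "cuda":
            # Release cached CUDA allocations back to the driver.
            torch.cuda.empty_cache()
        elif device == "mps":
            # Release cached MPS allocations (torch >= 2.0).
            torch.mps.empty_cache()
    except ImportError:
        pass
```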
This project is licensed under the MIT License - see the LICENSE file for details.