Transform your technical interviews into dynamic, intelligent conversations powered by AI
Live Demo • Quick Start • Documentation
- Dynamic Question Flow: Questions adapt based on candidate responses
- Real-time Evaluation: Instant feedback on technical accuracy and depth
- Intelligent Follow-ups: Automatically triggers relevant follow-up questions
- Comprehensive Assessment: Evaluates technical skills, understanding depth, and practical experience
# Clone the repository
git clone https://github.com/yourusername/repo.git
# Install dependencies
pip install -r requirements.txt
# Set up environment
cp .env.example .env
# Add your Groq API key to .env
# Start the server
uvicorn src.main:app --reload
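The `.env` step above needs a Groq API key. A minimal example entry is shown below; the exact variable name is an assumption, so check `.env.example` for the key your setup expects:

```
# Example .env entry (variable name assumed — confirm against .env.example)
GROQ_API_KEY=your_groq_api_key_here
```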
curl -X POST "http://localhost:8000/api/v1/recruitment/analyze-job" \
-H "Content-Type: application/json" \
-d '{
"title": "Senior Backend Engineer",
"requirements": ["Python", "FastAPI", "Concurrency"],
"responsibilities": ["Design scalable systems"]
}'
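If you prefer scripting the API from Python, here is a minimal sketch of the same call using the `requests` library (not part of this project's requirements, so install it separately):

```python
import requests

# Analyze a job posting — the payload mirrors the curl example above.
job = {
    "title": "Senior Backend Engineer",
    "requirements": ["Python", "FastAPI", "Concurrency"],
    "responsibilities": ["Design scalable systems"],
}

resp = requests.post(
    "http://localhost:8000/api/v1/recruitment/analyze-job",
    json=job,
    timeout=30,
)
resp.raise_for_status()
analysis = resp.json()  # inspect this and save whatever the later calls expect as criteria.json
print(analysis)
```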
curl -X POST "http://localhost:8000/api/v1/recruitment/generate-questions" \
-F "cv_file=@candidate_cv.pdf" \
-F "criteria=@criteria.json"
curl -X POST "http://localhost:8000/api/v1/recruitment/evaluate-response" \
-F "question_id=Python Development" \
-F "response=candidate answer" \
-F "criteria=@criteria.json" \
-F "conversation_flows=@flows.json"
Request:
POST /api/v1/recruitment/evaluate-response
Form-data:
- question_id: Python Development
- response: "In my previous role, I optimized a Python application that was processing large amounts of data. I implemented multiprocessing using Python's concurrent.futures module to parallelize the data processing tasks. For database operations, I implemented connection pooling using SQLAlchemy and added appropriate indexes. We also used Redis for caching frequently accessed data. This resulted in a 70% reduction in processing time."
- criteria: {your_criteria_json}
- conversation_flows: {your_flows_json}
- current_question_level: base
Response:
{
  "technical_accuracy": {
    "score": 90,
    "comments": "Strong understanding of Python development concepts, including multiprocessing, connection pooling, and caching..."
  },
  "next_question": {
    "type": "follow_up",
    "question": "How do you handle concurrency in your applications?",
    "deeper_questions": [
      "What concurrency models do you use?",
      "How do you handle synchronization and locking?",
      "What libraries do you use for concurrent programming?"
    ]
  }
}
Request:
POST /api/v1/recruitment/evaluate-response
Form-data:
- question_id: Python Development
- response: "In my Python applications, I handle concurrency using multiple approaches. For I/O-bound tasks, I use asyncio with async/await patterns. For CPU-bound tasks, I use the multiprocessing module. I also implement thread pooling using concurrent.futures.ThreadPoolExecutor. For synchronization, I use threading.Lock() to prevent race conditions."
- criteria: {same_criteria_json}
- conversation_flows: {same_flows_json}
- current_question_level: follow_up
Response:
{
  "technical_accuracy": {
    "score": 85,
    "comments": "Demonstrates strong understanding of concurrency concepts..."
  },
  "next_question": {
    "type": "deeper",
    "question": "What concurrency models do you use?",
    "deeper_questions": [
      "How do you handle synchronization and locking?",
      "What libraries do you use for concurrent programming?"
    ]
  }
}
Request:
POST /api/v1/recruitment/evaluate-response
Form-data:
- question_id: Python Development
- response: "I use three main concurrency models: 1) Thread-based concurrency using threading and concurrent.futures for I/O-bound tasks, 2) Process-based parallelism using multiprocessing for CPU-intensive tasks to bypass GIL limitations, 3) Asynchronous programming with asyncio for event-driven applications. Each has its trade-offs: threads are good for I/O but limited by GIL, processes have more overhead but better for CPU work, and asyncio is efficient but requires async-compatible libraries."
- criteria: {same_criteria_json}
- conversation_flows: {same_flows_json}
- current_question_level: deeper
Response:
{
  "technical_accuracy": {
    "score": 95,
    "comments": "Excellent understanding of different concurrency models and their trade-offs..."
  },
  "next_question": {
    "type": "continue",
    "question": "Continue with current question",
    "deeper_questions": []
  }
}
- The system starts with base questions about Python development
- Mentions of concurrency in a response trigger follow-up questions
- High scores (>85) trigger deeper technical questions
- The process continues until the topic is exhausted or exit conditions are met
- The system moves to the next topic (e.g., FastAPI, Concurrency) when the current topic is complete; a minimal client-side loop is sketched below
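Putting the flow above together, a client-side loop might look like the sketch below. The endpoint and form fields are taken from the examples in this README; treating `type == "continue"` as the exit condition is an assumption based on the sample responses:

```python
import requests

BASE = "http://localhost:8000/api/v1/recruitment"

def run_topic(question_id: str, first_question: str, get_answer) -> list:
    """Drive one topic through base -> follow_up -> deeper using next_question hints."""
    level, question, history = "base", first_question, []
    while True:
        answer = get_answer(question)  # e.g. collected from your interview UI
        with open("criteria.json", "rb") as crit, open("flows.json", "rb") as flows:
            result = requests.post(
                f"{BASE}/evaluate-response",
                data={
                    "question_id": question_id,
                    "response": answer,
                    "current_question_level": level,
                },
                files={"criteria": crit, "conversation_flows": flows},
                timeout=60,
            ).json()
        history.append(result)
        nxt = result.get("next_question", {})
        if nxt.get("type") == "continue" or not nxt.get("question"):
            break  # "continue" appears to mean no further drill-down on this topic
        level, question = nxt["type"], nxt["question"]
    return history
```

In a full session you would repeat this for each topic returned by generate-questions, reusing the same criteria.json and flows.json throughout (see the tips below).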
- Keep the same `criteria` and `conversation_flows` throughout the session
- Watch the `current_question_level` progression: base → follow_up → deeper
- Pay attention to trigger words in responses
- Check the `next_question` field for subsequent questions
- Monitor scores to gauge candidate performance
- Base Question: "Can you explain your experience with Python development?"
- Follow-up (based on response): "How do you handle concurrency in your applications?"
- Deeper Questions (based on expertise): "What concurrency models do you use?", "How do you handle synchronization?"
{
  "technical_accuracy": {
    "score": 90,
    "comments": "Strong understanding of concurrency models..."
  },
  "understanding_depth": {
    "score": 85,
    "comments": "Good grasp of practical applications..."
  },
  "next_question": {
    "type": "deeper",
    "question": "What concurrency models do you use?"
  }
}
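If you consume these responses from typed Python code, Pydantic (already in the tech stack) can validate them. The models below are a hypothetical client-side mirror of the fields shown in the samples, not the schema defined in this repository:

```python
from typing import List, Optional
from pydantic import BaseModel

class ScoredSection(BaseModel):
    score: int
    comments: str

class NextQuestion(BaseModel):
    type: str                          # "follow_up", "deeper", or "continue"
    question: str
    deeper_questions: List[str] = []   # absent in some samples, so default to empty

class EvaluationResponse(BaseModel):
    technical_accuracy: ScoredSection
    understanding_depth: Optional[ScoredSection] = None  # only present in some responses
    next_question: NextQuestion

# Pydantic v2: EvaluationResponse.model_validate(response_json)
# Pydantic v1: EvaluationResponse.parse_obj(response_json)
```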
- FastAPI: High-performance web framework
- CrewAI: AI agent orchestration
- Groq: Ultra-fast LLM inference
- PyPDF2: CV analysis (see the parsing sketch below)
- Pydantic: Data validation
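The PyPDF2 entry above handles CV parsing. A rough sketch of what that extraction step could look like (illustrative only; the repo's own parser may differ):

```python
from PyPDF2 import PdfReader  # PyPDF2 >= 2.0 exposes PdfReader

def extract_cv_text(path: str) -> str:
    """Pull plain text out of a CV PDF so it can feed the question-generation step."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)
```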
Full documentation available at docs/README.md
Contributions welcome! Check out our Contributing Guide
MIT License - see LICENSE