This project develops a vision-based localization system for unmanned aerial vehicles (UAVs) operating in GPS-denied environments. The system enables a UAV to determine its position by matching frames from its onboard camera against satellite imagery using computer vision techniques.
The system has evolved through multiple iterations, each enhancing detection accuracy, computational efficiency, and robustness across various environmental conditions. The current implementation (v6.2) features a hybrid detection approach that adaptively combines multiple feature detection algorithms to achieve optimal performance.
The system implements multiple feature detection algorithms, each optimized for specific scenarios:
- SIFT (Scale-Invariant Feature Transform) for robust scale and rotation handling
- ORB (Oriented FAST and Rotated BRIEF) for efficient real-time processing
- AKAZE (Accelerated-KAZE) for handling nonlinear scale space
- BRISK (Binary Robust Invariant Scalable Keypoints) for fast binary descriptors
- Hybrid detector (v6.0+) combining multiple algorithms for optimal performance
Our pipeline incorporates several sophisticated techniques:
- Adaptive preprocessing for varying lighting conditions
- Region of Interest (ROI) optimization for efficient processing
- Multi-stage matching with outlier rejection
- Comprehensive error analysis and visualization
- Scale and rotation invariant position estimation
The system includes a robust testing framework that evaluates performance across:
- Multiple environmental conditions
- Various image transformations (rotation, scale, brightness)
- Different noise levels and distortions
- Real-world deployment scenarios
```
uav-vision-localization/
├── src/                 # Source code
│   ├── core/            # Core detection algorithms
│   ├── utils/           # Utility functions
│   └── evaluation/      # Testing framework
├── tests/               # Test suites
│   ├── unit/            # Unit tests
│   └── integration/     # Integration tests
├── datasets/            # Test datasets
│   ├── satellite/       # Satellite imagery
│   └── drone/           # Drone camera feeds
├── results/             # Evaluation results
├── docs/                # Documentation
└── scripts/             # Utility scripts
```
- Python 3.8 or higher
- OpenCV 4.5+
- NumPy
- Matplotlib
- Pandas
```bash
# Clone the repository
git clone https://github.com/sidharthmohannair/VisionUAV-Navigation.git
cd VisionUAV-Navigation

# Install dependencies
pip install -r requirements.txt
```
```python
from src.core.evaluator import PracticalDroneEvaluator

# Initialize evaluator
evaluator = PracticalDroneEvaluator(
    satellite_path="datasets/satellite/sample.jpg",
    drone_images=["datasets/drone/sample.jpg"],
    drone_position=(lat, lon)  # Optional ground truth
)

# Run evaluation
results = evaluator.run_evaluation()

# Generate comprehensive report
evaluator.generate_report()
```
Our latest version achieves:
- Position accuracy: <1% of flight height
- Processing time: <100ms per frame
- Match quality: >80% inlier ratio
- Success rate: >95% across test scenarios
We welcome contributions! See our Contribution Guidelines for details on:
- Code style and standards
- Testing requirements
- Pull request process
- Development workflow
This project is licensed under the MIT License; see the LICENSE file for details.
This project builds upon research in computer vision and UAV navigation, particularly:
- Feature detection algorithms (SIFT, ORB, AKAZE, BRISK)
- OpenCV library and community
- Related research in vision-based navigation