YOLOv8-Dataset-Transformer is an integrated solution for transforming image classification datasets into object detection datasets, followed by training with the state-of-the-art YOLOv8 model. This toolkit simplifies the process of dataset augmentation, preparation, and model training, offering a streamlined path for custom object detection projects.
- Dataset Conversion: Converts standard image classification datasets into YOLOv8 compatible object detection datasets.
- Image Augmentation: Applies a variety of augmentations to enrich the dataset, improving model robustness.
- Model Training and Validation: Facilitates the training and validation of YOLOv8 models with custom datasets.
- Model Exporting: Supports exporting trained models to different formats like ONNX for easy deployment.
- Python 3.8 or later
- PyTorch 1.8 or later
- YOLOv8 dependencies (refer to YOLOv8 documentation)
Clone the repository to your local machine:
git clone https://github.com/[YourUsername]/YOLOv8-Dataset-Transformer.git
cd YOLOv8-Dataset-Transformer
Install the required packages:
pip install -r requirements.txt
- Prepare Your Dataset: Place your image classification dataset in the designated folders.
- Run the Dataset Preparation Script:
python dataset_preparation.py --markers train20X20 --irrelevant irrelevant --output output --total_images 1000 --train_ratio 0.8
The script generates new composite images; see the example image above.
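Each generated image needs a matching YOLO-format label file: one line per object with a class index and a bounding box normalized to the image size. A minimal sketch of that conversion (the function name and the sample values are illustrative, not taken from the repository):

```python
def yolo_label(class_id, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) to a
    YOLO label line: 'class x_center y_center width height', all
    coordinates normalized to [0, 1] by the image dimensions."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2 / img_w   # normalized box center x
    yc = (y_min + y_max) / 2 / img_h   # normalized box center y
    w = (x_max - x_min) / img_w        # normalized box width
    h = (y_max - y_min) / img_h        # normalized box height
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A 100x50 marker pasted at (200, 150) on a 640x480 background:
print(yolo_label(0, (200, 150, 300, 200), 640, 480))
# → 0 0.390625 0.364583 0.156250 0.104167
```

Each label file shares its image's filename with a `.txt` extension, which is the layout YOLOv8 expects.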
- Train Your Model:
python train.py --data_config path/to/data.yaml --epochs 100 --model_name yolov8n.pt
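The `--data_config` file follows the standard Ultralytics dataset YAML layout. A hypothetical example, assuming the `output` folder produced by the preparation script and a single `marker` class (adjust paths and class names to your dataset):

```yaml
# Hypothetical data.yaml — paths and class names are placeholders.
path: output          # dataset root created by dataset_preparation.py
train: images/train   # training images, relative to path
val: images/val       # validation images, relative to path
names:
  0: marker
```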
- Evaluate and Export Your Model: Validate, predict, and export your model using the options in the train.py script.
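For reference, the same validate/predict/export steps can be run directly through the Ultralytics Python API. A sketch, assuming a trained checkpoint at the default Ultralytics output path:

```python
from ultralytics import YOLO

# Load the trained checkpoint (path is an assumption — point it at your weights).
model = YOLO("runs/detect/train/weights/best.pt")

metrics = model.val(data="path/to/data.yaml")      # validation metrics (mAP, etc.)
results = model.predict("path/to/test_image.jpg")  # inference on a sample image
model.export(format="onnx")                        # export for deployment
```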
Use these scripts to train and demonstrate the model and to record the parameters of your experiments.
Contributions to YOLOv8-Dataset-Transformer are welcome! Please read our Contributing Guidelines for more information. The train20X20 dataset comes from apoorva-dave, and the irrelevant images were collected from Google Images.
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
- Thanks to the Ultralytics team for the YOLOv8 model.
- Special thanks to all contributors and maintainers of this project.