This project showcases an object tracking system that employs YOLOv8 for detection and integrates a Kalman filter for tracking, thereby implementing the DeepSORT algorithm. It is designed to identify and follow objects within a video stream.
- Project Structure
- Project Demo
- Installation
- Training the Model
- Custom Training Results
- Running the System
- Dependencies
The project is organized into two main folders:
- system: Contains the main codebase for the object tracking system.
  - detector_model_file: Directory to store the trained YOLOv8 model weights.
  - models: Contains the implementation of various models used in the system (e.g., YOLOv8, Kalman filter); a minimal sketch of such a filter follows this list.
  - utils: Contains utility functions and classes.
  - detect.py: Script to run the object detection and tracking on a video file.
  - test.mp4: Sample video file for testing the system.
  - tracker.py: Main script for the object tracking system.
- training: Contains the Jupyter notebook for training the YOLOv8 model.
  - train.ipynb: Jupyter notebook with the code for training the YOLOv8 model.
  - runs/detect/train9/weights: Directory where the trained model weights are saved.
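The Kalman filter inside the models folder keeps a motion estimate for each tracked object between detections. As a rough illustration only, here is a minimal constant-velocity sketch of such a filter for a bounding-box centre point; the state layout and noise values are illustrative assumptions, not the exact implementation in this repository:

```python
import numpy as np

class SimpleKalmanBox:
    """Illustrative constant-velocity Kalman filter for a box centre (cx, cy).

    State vector: [cx, cy, vx, vy]. A sketch only, not the filter in system/models.
    """

    def __init__(self, cx, cy, dt=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0], dtype=float)   # state estimate
        self.P = np.eye(4) * 10.0                             # state covariance
        self.F = np.array([[1, 0, dt, 0],                     # constant-velocity motion model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                      # only the centre position is observed
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                             # process noise (illustrative value)
        self.R = np.eye(2) * 1.0                              # measurement noise (illustrative value)

    def predict(self):
        # Propagate the state one time step forward.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, cx, cy):
        # Correct the prediction with a matched detection.
        z = np.array([cx, cy], dtype=float)
        y = z - self.H @ self.x                               # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

In a DeepSORT-style tracker, predict() typically runs once per frame for every track, and update() runs whenever a detection is associated with that track.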
See demo.mp4 for a sample recording of the object tracking system in action.
- Clone the repository:
git clone https://github.com/yourusername/your-repo.git
cd your-repo
- Install the required dependencies:
Optionally, create and activate a virtual environment first by running the following commands in the terminal:
pip install virtualenv
python -m virtualenv venv
./venv/Scripts/activate
(The activate path above is for Windows; on Linux/macOS use source venv/bin/activate.)
Then install the requirements:
pip install -r requirements.txt
The training directory contains a Jupyter notebook that can be run cell by cell to train the model. The trained weights are stored in runs/detect/train/weights/best.pt (a new train folder is created for each training run, so use the most recent one).
Copy the weights to system/detector_model_file/ and use these trained weights for detection.
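For reference, the core of such a notebook usually boils down to a few lines of the Ultralytics API; the base checkpoint, dataset path, and hyperparameters below are placeholders rather than the exact values used in train.ipynb:

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint (the "n" model size here is an assumption).
model = YOLO("yolov8n.pt")

# Train on the custom dataset; the data.yaml path, epochs, and image size are placeholders.
model.train(data="path/to/data.yaml", epochs=50, imgsz=640)

# The best weights end up under runs/detect/train*/weights/best.pt.
```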
The model was trained on the road identifier dataset from Roboflow Universe, available at this link. The dataset consists of images of roads containing objects such as cars, bicycles, buses, and pedestrians, with each object annotated with a bounding box for training.
- Navigate to the system directory:
cd ../system
- Run the detection and tracking script:
python detect.py
- The script will process the test.mp4 video file and display the results with bounding boxes and tracking IDs.
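For orientation, a minimal version of such a detect-and-draw loop might look like the sketch below; the weight filename is an assumption, and the tracker association step is only indicated by a comment rather than reproducing the code in tracker.py:

```python
import cv2
from ultralytics import YOLO

# Paths follow the repository layout; best.pt is assumed to be the copied training weights.
model = YOLO("detector_model_file/best.pt")

cap = cv2.VideoCapture("test.mp4")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Run YOLOv8 detection on the current frame.
    results = model(frame)[0]
    detections = results.boxes.xyxy.cpu().numpy()   # bounding boxes as (x1, y1, x2, y2)

    # A DeepSORT-style tracker would go here: Kalman prediction, matching detections
    # to existing tracks, and assigning persistent IDs (see tracker.py).
    for x1, y1, x2, y2 in detections.astype(int):
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```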
The project requires the following dependencies, which are listed in requirements.txt:
- ultralytics
- opencv-python
- numpy
- torch
- torchvision
Ensure all dependencies are installed by running:
pip install -r requirements.txt
This project is licensed under the MIT License. See the LICENSE file for details.
Thanks to the YOLOv8 and Ultralytics teams for their excellent work on the YOLO object detection framework.
Thanks to the OpenCV community for their powerful computer vision library.
Feel free to contribute to this project by submitting issues or pull requests.