Machine Learning Training Pipeline for Wildfire Detection
The whole repository is organized as a data pipeline that can be run to train the models and export them to the appropriate formats.
The data pipeline is defined in a dvc.yaml file.
This section lists and describes all the DVC stages defined in the dvc.yaml file (a sketch of a stage entry follows the list):
- build_model_input: Generate model input for YOLO custom dataset training using the provided raw dataset.
- train_yolo_baseline_small: Train a YOLO baseline model on a subset of the full dataset.
- train_yolo_baseline: Train a YOLO baseline model on the full dataset.
- train_yolo_best: Train the best YOLO model on the full dataset.
- build_manifest_yolo_best: Build the manifest.yaml file to attach with the model.
- export_yolo_best: Export the best YOLO model to different formats (ONNX, NCNN).
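For reference, each entry in dvc.yaml follows the standard DVC stage schema: a cmd to execute, the deps it reads, and the outs it produces. Below is a minimal sketch of what one stage entry could look like; the command and paths are illustrative assumptions, not the repository's actual definitions.
stages:
  train_yolo_baseline_small:
    cmd: uv run python ./scripts/model/yolo/train.py --data ./data/03_model_input/wildfire/small/datasets/data.yaml # hypothetical command and path
    deps:
      - ./data/03_model_input/
    outs:
      - ./data/04_models/yolo/baseline_small/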
Install uv with pipx:
pipx install uv
Create a virtualenv and install the dependencies with uv:
uv sync
Activate the uv virtualenv:
source .venv/bin/activate
Make sure git-lfs is installed on your system. Run the following command to check:
git lfs install
If not installed, one can install it with the following:
On Debian/Ubuntu:
sudo apt install git-lfs
git-lfs install
On macOS:
brew install git-lfs
git-lfs install
On Windows, download and run the latest git-lfs installer.
Data dependencies are retrieved with DVC. To make full use of this repository, you will need access to our DVC remote storage, which is currently reserved for Pyronear members. On request, you will be provided with AWS credentials to access the remote storage.
Download the data files needed for training the models:
dvc get . ./data/03_model_input/
Pull all the data files tracked by DVC using this command:
dvc pull
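To see which DVC-tracked files are out of sync with the remote before or after pulling, compare the local cache against the remote:
dvc status --cloud # compares the local cache with the default DVC remote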
Create the following file ~/.aws/config:
[profile pyronear]
region = eu-west-3
Add your credentials to the file ~/.aws/credentials, replacing XXX with your access key id and secret access key:
[pyronear]
aws_access_key_id = XXX
aws_secret_access_key = XXX
Make sure you use the AWS pyronear profile:
export AWS_PROFILE=pyronear
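To confirm that the profile resolves to valid credentials, one can query AWS STS (this assumes the AWS CLI is installed, which is not otherwise required by this repository):
aws sts get-caller-identity --profile pyronear # prints the account id and ARN when the credentials are valid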
The project mostly follows the cookie-cutter-datascience guidelines.
All the data lives in the data folder and follows some data engineering conventions.
The library code is available under the pyronear_mlops folder.
The notebooks live in the notebooks folder. They are automatically synced to the Git LFS storage. Please follow this convention to name your notebooks:
<step>-<ghuser>-<description>.ipynb - e.g., 0.3-mateo-visualize-distributions.ipynb
The scripts live in the scripts folder; they are commonly CLI interfaces to the library code.
DVC is used to track and define data pipelines and make them reproducible. See dvc.yaml.
To get an overview of the pipeline DAG:
dvc dag
To run the full pipeline:
dvc repro
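dvc repro also accepts a stage name as a target, which is useful when iterating on a single step. For example, with one of the stages listed above:
dvc repro train_yolo_baseline_small # reruns only this stage and any outdated upstream stages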
An MLflow server runs alongside the ML experiments to track hyperparameters and performance and to streamline model selection.
To start the mlflow UI server, run the following command:
make mlflow_start
To stop the mlflow UI server, run the following command:
make mlflow_stop
To browse the different runs, open your browser and navigate to the URL: http://localhost:5000
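The make targets presumably wrap the standard MLflow CLI; a rough equivalent, assuming the defaults implied by the URL above, would be:
mlflow server --host 127.0.0.1 --port 5000 # hypothetical equivalent of make mlflow_start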
Run the test suite with the following command:
make run_test_suite
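The make target is presumably a thin wrapper around the test runner; assuming pytest, a rough equivalent would be:
uv run pytest # hypothetical equivalent of make run_test_suite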
Follow these steps:
- Work on a separate git branch:
git checkout -b "<user>/<experiment-name>"
- Modify and iterate on the code, then run dvc repro. It will rerun the parts of the pipeline that have been updated.
- Commit your changes and open a Pull Request to get your changes approved and merged.
We use random hyperparameter search to find the best set of hyperparameters for our models.
The initial stage optimizes for exploration of all hyperparameter ranges. A wide.yaml hyperparameter config file is available for performing this type of search.
It is good practice to run this search on a small subset of the full dataset to quickly iterate over many different combinations of hyperparameters.
Run the wide and fast hyperparameter search with:
make run_yolo_wide_hyperparameter_search
The second stage of the hyperparameter search runs narrower, more local searches around the good parameter combinations identified in stage 1. A narrow.yaml hyperparameter config file is available for this type of search.
It is good practice to run this search on the full dataset to get the actual model performance for the randomly drawn sets of hyperparameters.
Run the narrow and deep hyperparameter search with:
make run_yolo_narrow_hyperparameter_search
Adapt and run this command to launch a specific hyperparameter space search:
uv run python ./scripts/model/yolo/hyperparameter_search.py \
--data ./data/03_model_input/wildfire/full/datasets/data.yaml \
--output-dir ./data/04_models/yolo/ \
--experiment-name "random_hyperparameter_search" \
--filepath-space-yaml ./scripts/model/yolo/spaces/default.yaml \
--n 5 \
--loglevel "info"
One can adapt the hyperparameter space to search by adding a new space.yaml file based on the default.yaml, as in the example below (an illustrative sampling sketch follows the example):
model_type:
type: array
array_type: str
values:
- yolo11n.pt
- yolo11s.pt
- yolo12n.pt
- yolo12s.pt
epochs:
type: space
space_type: int
space_config:
type: linear
start: 50
stop: 70
num: 10
patience:
type: space
space_type: int
space_config:
type: linear
start: 10
stop: 50
num: 10
batch:
type: array
array_type: int
values:
- 16
- 32
- 64
...
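To make the config semantics concrete, here is a minimal Python sketch of how one random combination could be drawn from such a space. This is an assumed interpretation of the format (an array entry is picked uniformly from its values; a linear space draws from num evenly spaced points between start and stop), not the actual implementation in hyperparameter_search.py.
import random
import numpy as np

def sample_param(spec: dict):
    """Draw one value from a single parameter spec (assumed semantics)."""
    if spec["type"] == "array":
        # pick uniformly among the enumerated values
        return random.choice(spec["values"])
    if spec["type"] == "space" and spec["space_config"]["type"] == "linear":
        cfg = spec["space_config"]
        # num evenly spaced points between start and stop
        grid = np.linspace(cfg["start"], cfg["stop"], cfg["num"])
        value = random.choice(list(grid))
        return int(round(value)) if spec["space_type"] == "int" else float(value)
    raise ValueError(f"unsupported spec: {spec}")

space = {
    "model_type": {"type": "array", "array_type": "str",
                   "values": ["yolo11n.pt", "yolo11s.pt"]},
    "epochs": {"type": "space", "space_type": "int",
               "space_config": {"type": "linear", "start": 50, "stop": 70, "num": 10}},
}
combination = {name: sample_param(spec) for name, spec in space.items()}
print(combination) # e.g. {'model_type': 'yolo11s.pt', 'epochs': 63}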
Run the YOLO benchmark with:
make run_yolo_benchmark
The script to release a new version of the model is located in ./scripts/model/yolo/release.py.
Make sure to set your GITHUB_ACCESS_TOKEN as an env variable in your shell before running the following script:
export GITHUB_ACCESS_TOKEN=XXX
uv run python ./scripts/model/yolo/release.py \
--version v4.0.0 \
--release-name "dazzling dragonfly" \
--github-owner earthtoolsmaker \
--github-repo pyro-train
This will create a new release in the GitHub repository with the model artifacts, such as its weights.
Note: the current naming convention for releases is an adjective paired with an animal name starting with the same letter (e.g. artistic alpaca, wise wolf, ...).