A pytorch-lightning implementation of the TCN-based joint beat and downbeat tracking model described in the paper *Temporal convolutional networks for musical audio beat tracking*, with some extra ideas from *Deconstruct, Analyse, Reconstruct: How to improve Tempo, Beat, and Downbeat Estimation*.
```bash
git clone https://github.com/mhrice/BeatTrack.git
cd BeatTrack
python3 -m venv env
source env/bin/activate
pip install cython numpy
pip install -e .
```
A two-stage pip install is needed because madmom's build requires Cython and NumPy to be installed first. In newer versions of Python, madmom also has an issue with its `processors.py` file. To fix this, run the following command, replacing `{python-version}` with your Python version:
```bash
cp processors.py env/lib/{python-version}/site-packages/madmom/processors.py
```
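If you are unsure what to substitute for `{python-version}`, the active virtualenv can report its own site-packages path with a stdlib one-liner (this helper is not part of the repo):
```python
# Resolve the site-packages directory of the active environment, so the
# cp destination above can be filled in without guessing {python-version}.
import pathlib
import sysconfig

site_packages = pathlib.Path(sysconfig.get_paths()["purelib"])
print(site_packages / "madmom" / "processors.py")
```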
```bash
mkdir data && cd data
wget http://mtg.upf.edu/ismir2004/contest/tempoContest/data1.tar.gz
git clone https://github.com/CPJKU/BallroomAnnotations.git
tar -xvzf data1.tar.gz
rm data1.tar.gz
```
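After extraction, it is worth confirming that each audio file has a matching annotation. The sketch below assumes the archive unpacks to `data/BallroomData` and that the annotations use the `.beats` extension; adjust the paths if your layout differs:
```python
# Quick sanity check (run from the repo root) that every Ballroom audio file
# has a matching annotation. The directory names and the .beats extension are
# assumptions about how the archives above unpack.
from pathlib import Path

audio_dir = Path("data/BallroomData")         # assumed extraction directory
annot_dir = Path("data/BallroomAnnotations")  # cloned above

wavs = sorted(audio_dir.rglob("*.wav"))
missing = [w.stem for w in wavs if not (annot_dir / f"{w.stem}.beats").exists()]
print(f"{len(wavs)} audio files, {len(missing)} missing annotations")
```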
```bash
python scripts/train.py
```
Important: Change wandb logger settings to your own account.
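The logger is configured in the training script; a typical pytorch-lightning setup looks like the sketch below (the project and entity names here are placeholders, and the actual configuration lives in `scripts/train.py`/`cfg.py`):
```python
# Illustrative sketch only: pointing a pytorch-lightning WandbLogger at your
# own account. The real logger setup lives in scripts/train.py / cfg.py.
from pytorch_lightning.loggers import WandbLogger

logger = WandbLogger(
    project="BeatTrack",           # assumed project name
    entity="your-wandb-username",  # replace with your W&B account or team
)
```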
```bash
python scripts/inference.py audio_file_path {checkpoint_path}
```
or
```python
from scripts.inference import beatTracker
beatTracker(audio_file_path)
```
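For example (the two-value return here is an assumption; check `scripts/inference.py` for the exact signature):
```python
# Hypothetical usage: beatTracker is assumed to return beat and downbeat
# times in seconds -- verify against scripts/inference.py before relying on it.
from scripts.inference import beatTracker

beats, downbeats = beatTracker("path/to/audio.wav")
print(beats[:5], downbeats[:5])
```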
See `cfg.py` for all parameters.
`checkpoints/best.ckpt` is a checkpoint from training the model for 165 epochs on joint beat/downbeat prediction on the Ballroom dataset.
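To run inference with it, pass it as the checkpoint argument, e.g. `python scripts/inference.py path/to/audio.wav checkpoints/best.ckpt`.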