A Llama version of Moshi: a framework for training LlamaMoshi from scratch, including dataset handling and fine-tuning code.
- Install
git clone https://github.com/Airoura/LlamaMoshi.git
cd LlamaMoshi
- Requirements
pip3 install -r requirements.txt
cd third_party/llama-cookbook
pip3 install -e .
- Data
Both the pretrain and the SFT data have been made public.
The steps below take LibriSpeech-100h as an example.
- Concatenate audio segments into 5-minute chunks.
Untar train-clean-100.tar.gz under the dataset directory, then run:
python src/tools/data/concat_librispeech.py
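For orientation, a minimal sketch of what this step does, assuming the standard LibriSpeech layout (one directory per chapter of 16 kHz FLAC utterances); paths and file naming are illustrative, and this is not the repo script itself:

```python
from pathlib import Path

import numpy as np
import soundfile as sf

SAMPLE_RATE = 16_000      # LibriSpeech is 16 kHz
TARGET_SECONDS = 5 * 60   # target chunk length: 5 minutes

def concat_chapter(chapter_dir: Path, out_dir: Path) -> None:
    """Greedily append one chapter's utterances into ~5-minute chunks."""
    out_dir.mkdir(parents=True, exist_ok=True)
    buffer, length, chunk_idx = [], 0, 0
    for flac in sorted(chapter_dir.glob("*.flac")):
        audio, sr = sf.read(flac)
        assert sr == SAMPLE_RATE
        buffer.append(audio)
        length += len(audio)
        if length >= TARGET_SECONDS * SAMPLE_RATE:
            sf.write(out_dir / f"{chapter_dir.name}_{chunk_idx:04d}.flac",
                     np.concatenate(buffer), SAMPLE_RATE)
            buffer, length, chunk_idx = [], 0, chunk_idx + 1
    if buffer:  # flush the final, shorter chunk
        sf.write(out_dir / f"{chapter_dir.name}_{chunk_idx:04d}.flac",
                 np.concatenate(buffer), SAMPLE_RATE)
```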
- Tokenize the pretrain data.
Our approach follows the paper: we use Mimi to encode the audio, use FasterWhisper to align the text with the audio, and repurpose reserved tokens of the Llama tokenizer as PAD and EPAD (a rough sketch follows the command below).
bash scripts/tokenize_librispeech_100h.sh
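A rough sketch of this recipe, under several assumptions: Mimi is loaded through Hugging Face `transformers` (`kyutai/mimi`), the 5-minute chunks have already been resampled to Mimi's 24 kHz sampling rate, and the PAD/EPAD ids shown are placeholders for whichever reserved Llama-3 tokens the repo actually repurposes. The real pipeline is what `scripts/tokenize_librispeech_100h.sh` runs.

```python
import soundfile as sf
import torch
from faster_whisper import WhisperModel
from transformers import AutoFeatureExtractor, MimiModel

# Assumption: two of Llama-3's reserved special-token ids stand in for PAD / EPAD.
PAD_ID, EPAD_ID = 128002, 128003
FRAME_RATE = 12.5  # Mimi emits 12.5 codec frames per second

mimi = MimiModel.from_pretrained("kyutai/mimi").eval()
fe = AutoFeatureExtractor.from_pretrained("kyutai/mimi")
whisper = WhisperModel("large-v3")

def tokenize_chunk(wav_path: str, text_tokenizer):
    # 1) Mimi codec tokens: (num_codebooks, num_frames) for this chunk.
    audio, sr = sf.read(wav_path)
    assert sr == fe.sampling_rate, "resample the chunk to Mimi's 24 kHz first"
    feats = fe(raw_audio=audio, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        audio_codes = mimi.encode(feats["input_values"]).audio_codes[0]
    n_frames = audio_codes.shape[-1]

    # 2) Word-level timestamps from FasterWhisper for the text stream.
    segments, _ = whisper.transcribe(wav_path, word_timestamps=True)

    # 3) Frame-aligned text stream: each word's tokens start at the frame
    #    where the word begins, followed by EPAD; every other frame is PAD.
    text_stream = [PAD_ID] * n_frames
    for seg in segments:
        for word in seg.words or []:
            frame = min(int(word.start * FRAME_RATE), n_frames - 1)
            for tok in text_tokenizer.encode(word.word, add_special_tokens=False):
                if frame >= n_frames:
                    break
                text_stream[frame] = tok
                frame += 1
            if frame < n_frames:
                text_stream[frame] = EPAD_ID
    return audio_codes, torch.tensor(text_stream)
```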
- Summarize the tokenized data.
See summary.ipynb.
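For illustration, a tiny summary pass might look like the following, assuming each tokenized chunk was saved with `torch.save` as an `(audio_codes, text_stream)` pair under a hypothetical `dataset/librispeech_tokenized/` directory (the notebook is the authoritative version):

```python
from pathlib import Path

import torch

total_chunks, total_frames = 0, 0
for pt in Path("dataset/librispeech_tokenized").glob("*.pt"):  # hypothetical output dir
    audio_codes, text_stream = torch.load(pt)
    total_chunks += 1
    total_frames += audio_codes.shape[-1]

hours = total_frames / 12.5 / 3600  # Mimi runs at 12.5 frames per second
print(f"{total_chunks} chunks, {total_frames} frames (~{hours:.1f} h of audio)")
```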
- Run the pretrain script.
We used Llama-3.1-8B as the base model and borrowed a lot of code from llama-cookbook to implement FSDP; a rough sketch of the setup follows the command below.
bash scripts/librispeech_pretrain.sh
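The core of the FSDP wrapping, in the style of llama-cookbook, looks roughly like this; the model name, precision, and optimizer settings are illustrative, and the real configuration lives behind the script above (launched with `torchrun`):

```python
import functools
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers import AutoModelForCausalLM
from transformers.models.llama.modeling_llama import LlamaDecoderLayer

# Launch with torchrun so RANK / LOCAL_RANK / WORLD_SIZE are set.
dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B", torch_dtype=torch.bfloat16
)

# Shard at decoder-layer granularity, as llama-cookbook does.
wrap_policy = functools.partial(
    transformer_auto_wrap_policy, transformer_layer_cls={LlamaDecoderLayer}
)
model = FSDP(
    model,
    auto_wrap_policy=wrap_policy,
    mixed_precision=MixedPrecision(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        buffer_dtype=torch.bfloat16,
    ),
    device_id=torch.cuda.current_device(),
)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
# ... standard training loop over the tokenized LibriSpeech chunks ...
```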
We were not able to fully reproduce the results in Moshi's paper because we did not have enough audio data or GPUs. We are making the training code public here in the hope that it helps you continue to explore alignment between speech models and LLMs. The evaluation code will be made public in the near future.