
LlamaMoshi

A Llama-based version of Moshi: a framework for training LlamaMoshi from scratch, including dataset handling and fine-tuning code.

Usage

1. Install

```shell
git clone https://github.com/Airoura/LlamaMoshi.git
```

2. Requirements

```shell
pip3 install -r requirements.txt
cd third_party/llama-cookbook
pip3 install -e .
```
3. Data

   Both the pretraining and the SFT data have been made public.

Pipeline

Take LibriSpeech-100h as an example.

1. Concatenate audio segments into 5-minute chunks.

   Untar `train-clean-100.tar.gz` under the dataset directory, then run:

```shell
python src/tools/data/concat_librispeech.py
```
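The concatenation step is essentially a greedy packing of consecutive utterances into chunks capped at five minutes. The sketch below illustrates that grouping logic on segment durations only; `pack_segments` is a hypothetical helper, not the actual implementation in `concat_librispeech.py`.

```python
def pack_segments(durations, target=300.0):
    """Greedily group consecutive segments into chunks of at most `target` seconds.

    durations: per-segment lengths in seconds, in corpus order.
    Returns a list of chunks, each a list of segment indices.
    """
    chunks, current, current_len = [], [], 0.0
    for i, d in enumerate(durations):
        # Start a new chunk if adding this segment would exceed the cap.
        if current and current_len + d > target:
            chunks.append(current)
            current, current_len = [], 0.0
        current.append(i)
        current_len += d
    if current:
        chunks.append(current)
    return chunks


# Four 100-second utterances fit three-per-chunk under a 300 s cap.
print(pack_segments([100.0, 100.0, 100.0, 100.0]))  # [[0, 1, 2], [3]]
```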
2. Tokenize the pretraining data.

   Consistent with the Moshi paper, we use Mimi to encode the audio and FasterWhisper to align the text with the audio, and we repurpose the Llama tokenizer's reserved tokens as PAD and EPAD.

```shell
bash scripts/tokenize_librispeech_100h.sh
```
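To illustrate the alignment idea (a simplified sketch, not the repository's actual tokenizer): Mimi produces audio frames at 12.5 Hz, and FasterWhisper's word timestamps indicate at which frame each text token should appear. Frames carrying no text are filled with PAD, and the last PAD before a word is replaced with EPAD to mark the end of the padded run. `align_text_to_frames` and the literal token strings are illustrative placeholders.

```python
PAD, EPAD = "<PAD>", "<EPAD>"


def align_text_to_frames(words, n_frames, frame_rate=12.5):
    """Place each (token, start_seconds) pair at its audio frame.

    Gaps are padded with PAD; the frame just before each word becomes EPAD,
    signalling the end of a padding run (simplified from the Moshi scheme).
    """
    stream = [PAD] * n_frames
    for token, start in words:
        i = min(n_frames - 1, int(start * frame_rate))
        stream[i] = token
        if i > 0 and stream[i - 1] == PAD:
            stream[i - 1] = EPAD
    return stream


# "world" starts at 0.4 s -> frame 5 at 12.5 Hz; frame 4 closes the pad run.
print(align_text_to_frames([("hello", 0.0), ("world", 0.4)], n_frames=8))
```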
3. Summarize the tokenized data.

   See `summary.ipynb`.

4. Run the pretraining script.

   We use Llama-3.1-8B as the base model, and we borrow heavily from llama-cookbook's FSDP implementation.

```shell
bash scripts/librispeech_pretrain.sh
```
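The heart of the llama-cookbook FSDP setup is an auto-wrap policy that shards the model one transformer decoder layer at a time. A minimal sketch of that policy, using a stand-in module in place of the real `LlamaDecoderLayer`:

```python
import functools

import torch.nn as nn
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy


class DecoderLayer(nn.Module):
    """Stand-in for transformers' LlamaDecoderLayer."""

    def __init__(self) -> None:
        super().__init__()
        self.proj = nn.Linear(8, 8)


# Shard at decoder-layer granularity, the pattern llama-cookbook uses.
auto_wrap_policy = functools.partial(
    transformer_auto_wrap_policy,
    transformer_layer_cls={DecoderLayer},
)

# In the real training script this policy is passed to FSDP, e.g.:
#   model = FSDP(model, auto_wrap_policy=auto_wrap_policy, ...)
```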

Evaluate

We were not able to fully reproduce the results in Moshi's paper because we did not have enough audio data and GPUs. We are releasing the training code here in the hope that it helps you continue exploring alignment between speech models and LLMs. The evaluation code will be made public in the near future.
