
train_dreambooth_lora_flux with prodigy and train_text_encoder causes IndexError: list index out of range #9464

Closed
squewel opened this issue Sep 18, 2024 · 6 comments · Fixed by #9473
Labels
bug Something isn't working stale Issues that haven't received updates

Comments


squewel commented Sep 18, 2024

Describe the bug

Running train_dreambooth_lora_flux.py with --train_text_encoder --optimizer="prodigy" raises IndexError: list index out of range at

params_to_optimize[2]["lr"] = args.learning_rate

(see the full traceback in the Logs section below).
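The indexing is the problem: with --train_text_encoder the script builds only two parameter groups (transformer LoRA and the CLIP text encoder), but the prodigy branch writes to a third. A minimal sketch of the failing logic (the group contents are placeholders, not the script's actual values):

```python
# Placeholder parameter groups; lr values match the repro command below.
transformer_group = {"params": ["<transformer lora params>"], "lr": 1.0}
clip_group = {"params": ["<clip lora params>"], "lr": 5e-6}

# With --train_text_encoder only these two groups exist (no T5 group).
params_to_optimize = [transformer_group, clip_group]

# Line 1375 assumes a third group and indexes past the end of the list:
try:
    params_to_optimize[2]["lr"] = 1.0
except IndexError as err:
    print(err)  # list index out of range
```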

Reproduction

Run the example script with the parameters from the docs:

https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md

To perform DreamBooth LoRA with text-encoder training, run:

export MODEL_NAME="black-forest-labs/FLUX.1-dev"
export OUTPUT_DIR="trained-flux-dev-dreambooth-lora"

accelerate launch train_dreambooth_lora_flux.py \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --train_text_encoder \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --guidance_scale=1 \
  --gradient_accumulation_steps=4 \
  --optimizer="prodigy" \
  --learning_rate=1. \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --validation_prompt="A photo of sks dog in a bucket" \
  --seed="0" \
  --push_to_hub

Logs

09/18/2024 20:06:33 - WARNING - __main__ - Learning rates were provided both for the transformer and the text encoder- e.g. text_encoder_lr: 5e-06 and learning_rate: 1.0. When using prodigy only learning_rate is used as the initial learning rate.
Traceback (most recent call last):
  File "/content/diffusers/examples/dreambooth/train_dreambooth_lora_flux.py", line 1891, in <module>
    main(args)
  File "/content/diffusers/examples/dreambooth/train_dreambooth_lora_flux.py", line 1375, in main
    params_to_optimize[2]["lr"] = args.learning_rate
IndexError: list index out of range

System Info

colab

Who can help?

@sayakpaul

@squewel squewel added the bug Something isn't working label Sep 18, 2024
@sayakpaul
Member

Cc: @linoytsaban

@biswaroop1547
Contributor

I've added a fix: since we don't tune the T5 text encoder in Flux, that line is unnecessary (likely a typo). Let me know if that's not the case, thanks!
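For context, a minimal sketch of what the fix amounts to (the actual change is in #9473; the names here are illustrative): with only two parameter groups, the CLIP text-encoder group sits at index 1, so the prodigy path should overwrite that entry rather than a nonexistent index 2:

```python
# Illustrative groups; only transformer (0) and CLIP text encoder (1) exist.
params_to_optimize = [
    {"params": ["<transformer lora params>"], "lr": 1.0},  # index 0
    {"params": ["<clip lora params>"], "lr": 5e-6},        # index 1
]

# Prodigy uses learning_rate as the initial lr for every group, so the
# text-encoder group's lr is overwritten with args.learning_rate (1.0 here):
params_to_optimize[1]["lr"] = 1.0
print(params_to_optimize[1]["lr"])  # 1.0
```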

@linoytsaban
Collaborator

Thanks @biswaroop1547! Indeed, for now we only support fine-tuning of the CLIP encoder when --train_text_encoder is enabled.

@illyafan

Changing the optimizer to "adamw" makes the error go away.


This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@github-actions github-actions bot added the stale Issues that haven't received updates label Oct 24, 2024
@linoytsaban
Collaborator

Thanks @biswaroop1547! This should now be fixed.
