yiwei-chenn changed the title from "[Question] LLaVA 1.5 7B model fine-tune" to "[Question] LLaVA 1.5 7B model fine-tune -- pydantic" on Nov 13, 2024.
Question
When I use my own pre-trained MLP adapter to fine-tune the LLaVA 1.5 7B model, I run finetune_lora.sh as usual, but training fails. The failure is caused by this validation error:
```
pydantic_core._pydantic_core.ValidationError: 1 validation error for DeepSpeedZeroConfig
stage3_prefetch_bucket_size
  Input should be a valid integer, got a number with a fractional part [type=int_from_float, input_value=15099494.4, input_type=float]
```
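This looks like the known incompatibility between newer pydantic-validated DeepSpeed configs and how `stage3_prefetch_bucket_size` is resolved from `"auto"`: it is commonly computed as `0.9 * hidden_size * hidden_size`, and for the 7B model's hidden size of 4096 that gives `0.9 * 4096 * 4096 = 15099494.4`, exactly the float in the error. (For the 13B model's hidden size of 5120, `0.9 * 5120 * 5120 = 23592960.0` has no fractional part, which may be why the 13B run passed.) A minimal workaround sketch, assuming you can patch the DeepSpeed config dict in Python before handing it to the trainer; `patch_zero3_config` is a hypothetical helper name, not part of any library:

```python
def patch_zero3_config(ds_config: dict, hidden_size: int) -> dict:
    """Replace a fractional stage3_prefetch_bucket_size with an integer,
    since pydantic-validated DeepSpeedZeroConfig rejects floats with a
    fractional part."""
    zero = ds_config.setdefault("zero_optimization", {})
    value = zero.get("stage3_prefetch_bucket_size", "auto")
    if value == "auto":
        # Assumption: "auto" resolves to 0.9 * hidden_size^2, which can be a
        # float (e.g. 15099494.4 for hidden_size=4096 on the 7B model).
        value = 0.9 * hidden_size * hidden_size
    zero["stage3_prefetch_bucket_size"] = int(value)  # truncate to an int
    return ds_config

# Example: the 7B model's hidden size reproduces the value from the error.
config = {"zero_optimization": {"stage3_prefetch_bucket_size": "auto"}}
patched = patch_zero3_config(config, hidden_size=4096)
print(patched["zero_optimization"]["stage3_prefetch_bucket_size"])  # 15099494
```

Equivalently, you can edit the ZeRO-3 JSON config directly and replace `"auto"` with a hard-coded integer for `stage3_prefetch_bucket_size`.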
I used the same procedure to LoRA fine-tune the LLaVA 1.5 13B version, and it did not hit this problem.
Does anyone know how to solve this?