Thanks for your amazing work! But I ran into a problem here...
When I try to train my own OmniControl model via
python -m train.train_mdm --save_dir save/my_omnicontrol --dataset humanml --num_steps 400000 --batch_size 64 --resume_checkpoint ./save/model000475000.pt --lr 1e-5
it always stops at
RuntimeError: Input a person is standing up facing forward, they rest their hands down to their sides in a relaxed motion. they then clamp their hands for a second, rest their arms again. they begin to walk forward, start off with their left leg.they are walking in a diagonally left way. after they take about 6 steps, then they sit down in mid air (like a relaxed squat) facing diagonally right. they place their left hand on their right side of their chest. is too long for context length 76
which traces back to
File "/data/miniconda3/envs/omnicontrol/lib/python3.7/site-packages/clip/clip.py", line 242, in tokenize
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
I think this means some of the texts in the HumanML3D dataset are too long for CLIP's context-length limit, which is defined as
context_length = default_context_length - 1
in model/cmdm.py L149. I've also tried changing the value of default_context_length, but any value above 77 doesn't work. How can I solve this problem?
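For what it's worth, CLIP's text encoder has a fixed 77-token window, and two slots are reserved for the start- and end-of-text markers, so raising default_context_length past 77 cannot work. A common workaround is to truncate long captions before (or during) tokenization. The helper below is a hypothetical sketch of that idea, not part of the CLIP package:

```python
def truncate_tokens(tokens, context_length=77):
    """Cut a token sequence so that, together with the <SOT> and <EOT>
    markers CLIP adds, it fits inside the fixed context window."""
    max_words = context_length - 2  # reserve slots for <SOT> and <EOT>
    return tokens[:max_words]

# Example: a 100-token caption is cut down to 75 word tokens (77 - 2).
long_caption = list(range(100))
short_caption = truncate_tokens(long_caption)
print(len(short_caption))
```

Recent versions of the openai/CLIP package also expose a `truncate=True` argument on `clip.tokenize`, which silently cuts over-long captions instead of raising this RuntimeError; if your installed version supports it, that may be the simplest fix.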
Sincere thanks for your attention!