Context length too long error raised by CLIP while training my own OmniControl #24

Open
Lucky1192405754 opened this issue Nov 13, 2024 · 0 comments

Thanks for your amazing work! But I ran into a problem here...

When I try to train my own OmniControl model via

python -m train.train_mdm --save_dir save/my_omnicontrol --dataset humanml --num_steps 400000 --batch_size 64 --resume_checkpoint ./save/model000475000.pt --lr 1e-5

it always stops at

RuntimeError: Input a person is standing up facing forward, they rest their hands down to their sides in a relaxed motion. they then clamp their hands for a second, rest their arms again. they begin to walk forward, start off with their left leg.they are walking in a diagonally left way. after they take about 6 steps, then they sit down in mid air (like a relaxed squat) facing diagonally right. they place their left hand on their right side of their chest. is too long for context length 76

which traces back to

File "/data/miniconda3/envs/omnicontrol/lib/python3.7/site-packages/clip/clip.py", line 242, in tokenize
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")

I think this means that some of the texts from the HumanML3D dataset are too long for CLIP's context-length limit, which is set as

context_length = default_context_length - 1 in model/cmdm.py L149.

I've also tried changing the value of default_context_length, but it doesn't work for any value above 77. How can I solve this problem?
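
One workaround I'm considering (I'm not sure whether it is what the authors intend): since the pretrained CLIP text encoder has a fixed 77-position embedding, raising default_context_length above 77 cannot work; instead, over-long captions could be truncated at tokenize time with truncate=True. A minimal sketch of the idea (encode_text_truncated is just my own name for illustration, not a function from this repo):

```python
import clip
import torch

def encode_text_truncated(clip_model, raw_text, device="cuda"):
    """Hypothetical helper: tokenize with truncate=True so captions longer
    than CLIP's 77-token limit are clipped instead of raising a RuntimeError."""
    # truncate=True keeps the first 77 tokens and writes the end-of-text
    # token into the last position, so clip_model.encode_text still works.
    tokens = clip.tokenize(raw_text, context_length=77, truncate=True).to(device)
    with torch.no_grad():
        return clip_model.encode_text(tokens).float()

# Usage sketch:
# model, _ = clip.load("ViT-B/32", device="cuda")
# emb = encode_text_truncated(model, ["a person walks forward ..."])
```

Would replacing the tokenize call around model/cmdm.py L149 with something along these lines be acceptable, or would truncating the captions hurt the text conditioning?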

Sincere thanks for your attention!
