Hey Yash,

I really liked your paper. The general idea is interesting, and I am trying out some personal experiments on Objaverse.

I noticed that the data filtering code produces 298k UIDs instead of 250k. Could you please tell me why that is the case?
charchit7 changed the title from "Total No. of meta_filtered_uids is 296k instead of 150k mentioned in the paper." to "Total No. of meta_filtered_uids is 296k instead of 150k+50k+50k mentioned in the paper." on Apr 1, 2024
we missed clarifying in the paper text that we take a union of our filtered object subsets, which leads to a final dataset size of ~296K (as you noticed). i will add this clarification to our readme.
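for reference, the union step looks roughly like the sketch below; the file names and the three subsets are placeholder assumptions for illustration, not the actual paths in this repo:

```python
import json

# load a list of Objaverse UIDs from disk as a set
# (file names here are hypothetical placeholders)
def load_uids(path):
    with open(path) as f:
        return set(json.load(f))

subsets = [
    load_uids("aesthetic_filtered_uids.json"),
    load_uids("caption_filtered_uids.json"),
    load_uids("render_filtered_uids.json"),
]

# the final dataset is the union of all filtered subsets, so a UID that
# passes several filters is counted only once; the individual subset
# sizes therefore do not simply add up to the union size
meta_filtered_uids = set().union(*subsets)
print(f"{len(meta_filtered_uids)} UIDs after taking the union")
```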
in my experience, using a high-quality subset of Objaverse and a strong base model (SDXL) matters much more than the total number of samples. for example, Instant3D was trained on only 10K assets and still generates good results.