Hi, your work is great! May I ask you a question?
I found that using different DDP settings has a significant impact on the final result. How can I overcome this? In other words, how should I set the batch size and the number of GPUs appropriately?
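For context, here is roughly how I currently reason about the relationship between per-GPU batch size, number of GPUs, and learning rate. This is just a sketch of a common heuristic (the linear scaling rule), and names like `reference_batch_size` are placeholders, not values from this repo:

```python
import os

# Assumes a DDP launch (e.g. via torchrun), so WORLD_SIZE is set; falls back to 1.
world_size = int(os.environ.get("WORLD_SIZE", 1))   # number of GPUs / processes

global_batch_size = 512                             # target effective batch size
assert global_batch_size % world_size == 0
batch_per_gpu = global_batch_size // world_size     # what each rank's DataLoader sees

# Linear scaling rule (Goyal et al., 2017): scale the learning rate in
# proportion to the global batch size relative to a reference setting.
reference_batch_size = 512                          # batch size the base LR was tuned for
base_lr = 1e-4                                      # placeholder base learning rate
lr = base_lr * global_batch_size / reference_batch_size
```

Keeping the global batch size fixed while varying the number of GPUs should, in principle, make runs comparable. Is that the right way to think about it here?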
Thanks for your reply! I have another question.
In training_loop.py, I found that you use the class "InfiniteSampler" as the training sampler. Is there any difference between it and torch.utils.data.distributed.DistributedSampler?
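For reference, my rough understanding of the difference: DistributedSampler splits the dataset into per-rank shards with a fixed epoch length (padding or dropping samples so every rank sees the same count) and requires calling set_epoch() each epoch to reshuffle, whereas an infinite sampler yields an endless shuffled stream of indices for each rank, so the training loop never deals with epoch boundaries. A minimal sketch of that idea (my own simplification, not the repo's exact code):

```python
import numpy as np
import torch

class SimpleInfiniteSampler(torch.utils.data.Sampler):
    """Endless stream of shuffled dataset indices for one DDP rank.

    Unlike torch.utils.data.distributed.DistributedSampler, there is no
    epoch boundary, no padding/dropping to equalize shard sizes, and no
    set_epoch() call. Simplified sketch, not the repo's implementation.
    """

    def __init__(self, dataset_len, rank=0, num_replicas=1, seed=0):
        self.dataset_len = dataset_len
        self.rank = rank
        self.num_replicas = num_replicas
        self.seed = seed

    def __iter__(self):
        rnd = np.random.RandomState(self.seed)
        idx = 0
        while True:  # never raises StopIteration
            order = rnd.permutation(self.dataset_len)
            for i in order:
                # Round-robin sharding: each global position belongs to one rank.
                if idx % self.num_replicas == self.rank:
                    yield int(i)
                idx += 1
```

With a sampler like this, the training loop can pull batches from a single never-ending DataLoader iterator instead of looping over epochs. Is that the main motivation, or is there something else I'm missing?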