Hello,
I'm having trouble reproducing the experiments from your paper using code from this repository.
Let's focus on Table 1 from the paper, column +GBN, and experiments C1, Resnet44, and C3 (although I have problems with more results, including the baselines). Here are the commands I use to run these three experiments:
python main_gbn.py --dataset cifar10 --model cifar10_shallow --save c10_shallow_11 --epochs 200 --b 4096 --lr_bb_fix --mini-batch-size 128
python main_gbn.py --dataset cifar10 --model resnet --save resnet_11 --epochs 200 --b 4096 --lr_bb_fix --mini-batch-size 128
python main_gbn.py --dataset cifar100 --model cifar100_shallow --save c100_shallow_11 --epochs 200 --b 4096 --lr_bb_fix --mini-batch-size 128
So I set batch size to 4096 and ghost batch size to 128 as instructed by the paper; the rest of the hyperparameters (number of epochs, learning rate schedule, momentum, gradient clipping constant, weight decay) remain as set in the code.
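For reference, here is my understanding of what Ghost Batch Normalization should be doing with these flags: each large batch of 4096 is split into ghost batches of 128, and each ghost batch is normalized with its own statistics. Below is a minimal sketch of that idea, my own simplified reimplementation for illustration (the class name GhostBatchNorm2d is mine), not the repository's code:

```python
import torch
import torch.nn as nn

class GhostBatchNorm2d(nn.Module):
    """Minimal sketch of Ghost Batch Norm: normalize each ghost batch
    (e.g. 128 samples) independently within a large batch (e.g. 4096).
    My own simplified reimplementation, not the repository's code."""
    def __init__(self, num_features, ghost_batch_size=128):
        super().__init__()
        self.ghost_batch_size = ghost_batch_size
        self.bn = nn.BatchNorm2d(num_features)

    def forward(self, x):
        if self.training:
            # Split the large batch into ghost batches and normalize each
            # chunk with its own batch statistics.
            chunks = x.split(self.ghost_batch_size, dim=0)
            return torch.cat([self.bn(chunk) for chunk in chunks], dim=0)
        # At eval time, fall back to the accumulated running statistics.
        return self.bn(x)
```

If the code in the repository deviates from this (e.g. in how running statistics are accumulated across ghost batches), that could also explain part of the gap.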
Here are the results that I get, compared to results from the paper:
C1 LB + LR + GBN, last epoch: 75.07 +/- 0.10; best epoch: 75.41 +/- 0.11; in paper: 86.4
Resnet44 LB + LR + GBN, last epoch: 85.21 +/- 0.81; best epoch: 85.63 +/- 0.76; in paper: 90.50
C3 LB + LR + GBN, last epoch: 27.33 +/- 0.11; best epoch: 27.63 +/- 0.11; in paper: 57.5
While in the last case training for more epochs would improve the results, in the second case the accuracy pretty much flattens out. I suspect the learning rate schedule is the main culprit: by manipulating it I was able to improve the results considerably.
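For completeness, here is my assumption about what --lr_bb_fix does (please correct me if I'm wrong): I read it as the square-root learning-rate scaling rule from the paper, i.e. the base LR is multiplied by sqrt(batch_size / baseline_batch_size). A sketch of what I'd expect, with hypothetical base LR and decay milestones:

```python
import math

def scaled_lr(base_lr, batch_size, base_batch_size=128):
    """Square-root LR scaling for large batches, as I understand the paper:
    lr is multiplied by sqrt(batch_size / base_batch_size).
    This is my guess at what --lr_bb_fix does, not the repository's code."""
    return base_lr * math.sqrt(batch_size / float(base_batch_size))

# Hypothetical values for illustration only:
base_lr = 0.1
lr = scaled_lr(base_lr, 4096)   # 0.1 * sqrt(32) ~= 0.566

def lr_at_epoch(epoch, milestones=(81, 122, 164), gamma=0.1):
    """Step decay: multiply the LR by gamma at each milestone (my guess at
    the schedule; the actual milestones in the code may differ)."""
    decays = sum(1 for m in milestones if epoch >= m)
    return lr * (gamma ** decays)
```

If the schedule in the released code differs from what was used for the paper (different milestones, or scaling applied differently), that would be consistent with the gaps I'm seeing.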
Could you please help me reproduce the experiments and publish the hyperparameters from your experiments? It would also be helpful if you published which versions of the Python packages you used in the original experiments. I use PyTorch 0.3.1, as recommended by the Smoothout paper repo (https://github.com/wenwei202/smoothout), which I believe is forked from your repository.
Thanks!