Finetune on new dataset #20

Open

rxdhr opened this issue Mar 19, 2018 · 4 comments

Comments

@rxdhr

rxdhr commented Mar 19, 2018

Hi, I am trying to fine-tune DPN-107 on a new dataset. I use the latest MXNet and add scale=0.0167 to the image iterator. However, the training accuracy is very low: ResNeXt-101 can reach 80+ while DPN only reaches 40+. I have verified that using the latest MXNet with scale=0.0167 gives ~95.0 top-1 on the ImageNet validation set, so it is very strange that fine-tuning DPN on a new dataset does not work well. I also tried freezing all layers except the last fully connected classification layer, and the performance is still very low. Do you have any comments on how to fine-tune DPN on a new dataset? Thanks.
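For reference, the iterator setup I mean is roughly the following; this is only a minimal sketch assuming mx.io.ImageRecordIter, and the path, mean values, and shapes are placeholders (the scale=0.0167 part is the point):

```python
import mxnet as mx

# Minimal sketch of the training iterator; path and mean values are placeholders.
train_iter = mx.io.ImageRecordIter(
    path_imgrec='train.rec',      # hypothetical record file
    data_shape=(3, 224, 224),
    batch_size=128,
    shuffle=True,
    mean_r=124, mean_g=117, mean_b=104,
    scale=0.0167)                 # rescale pixel values after mean subtraction
```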

@cypw
Owner

cypw commented Mar 21, 2018

@rxdhr
I would suggest checking whether the pre-trained model is loaded correctly and whether your fine-tuning code/strategy* is correct. This can be done by fine-tuning the pre-trained model on the ImageNet training set for 2~3 epochs and then validating on the validation set. If the validation accuracy drops significantly, you may need to check the code or change the fine-tuning strategy.

*DPN-107 is a pretty large model, which can mean fewer training samples per batch. Because of the batch normalization layers, an insufficient number of samples per batch introduces too much variance into the batch statistics and can ruin the pre-trained model. So you may also want to try setting "use_global_stats=True" for all BN layers.
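A rough sketch of what I mean, assuming you build the symbol in Python rather than loading the saved json (the wrapper name below is just for illustration):

```python
import mxnet as mx

def bn_frozen(data, name):
    # BatchNorm that always uses the precomputed moving mean/var,
    # so small per-GPU batches cannot disturb the statistics.
    return mx.sym.BatchNorm(data=data, name=name,
                            fix_gamma=False,
                            use_global_stats=True)

# Call bn_frozen(...) wherever the symbol-building code would call mx.sym.BatchNorm.
```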

@rxdhr
Author

rxdhr commented Mar 22, 2018

Thank you very much for your advice. I tried fine-tuning on the ImageNet validation set with 224*224 inputs, and the training accuracy reaches nearly 1.0. With the same code, data processing, and hyperparameters, fine-tuning on another dataset only reaches ~50% training accuracy, while switching to ResNeXt-101 again reaches nearly 1.0. So I think the code and the data are both correct, but it is not clear why DPN-107 cannot even fit the training data. I use this code for fine-tuning, https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/fine-tune.py, and change the sym_name to flatten.
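The head-replacement step in that script is roughly the following (just a sketch; the helper name is simplified, and the filter for the old classifier weights may need to match the actual parameter names in the DPN checkpoint):

```python
import mxnet as mx

def replace_head(symbol, arg_params, num_classes, layer_name='flatten'):
    # Cut the pre-trained graph at `layer_name` and attach a fresh classifier.
    internals = symbol.get_internals()
    net = internals[layer_name + '_output']
    net = mx.sym.FullyConnected(data=net, num_hidden=num_classes, name='fc1')
    net = mx.sym.SoftmaxOutput(data=net, name='softmax')
    # Drop the old classifier weights so the new 'fc1' starts from scratch
    # (adjust the filter to whatever the old classifier layer is called).
    new_args = {k: v for k, v in arg_params.items() if not k.startswith('fc')}
    return net, new_args
```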

@rxdhr
Author

rxdhr commented Mar 22, 2018

As you pointed out, the problem is use_global_stats in the BN layers. I previously loaded the saved "-symbol.json" file to construct the network; I switched to constructing the network in code with use_global_stats set to True, and it is working normally now. However, I am still curious about the effect of use_global_stats, since I train with batch size 128 over 4 GPUs. That batch size should be large enough for stable mean and variance estimates, and the problem does not appear in ResNeXt/ResNet-like networks. Do you have any comments on use_global_stats? Thanks.

@cypw
Owner

cypw commented Mar 22, 2018

Hmmm, that's an interesting observation. 32 images per GPU should be fine. Maybe fine-tuning DPN-107 requires a smaller initial learning rate than ResNet/ResNeXt, since it has an additional Dense Path?
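For example, something like this when calling fit; this is only a sketch with the Module API, and the function name and hyper-parameter values are guesses rather than values tuned for DPN-107:

```python
import mxnet as mx

def fit_with_small_lr(net, new_args, aux_params, train_iter, num_gpus=4):
    # All values here are starting guesses, not tuned for DPN-107.
    mod = mx.mod.Module(symbol=net, context=[mx.gpu(i) for i in range(num_gpus)])
    mod.fit(train_iter,
            arg_params=new_args,
            aux_params=aux_params,
            allow_missing=True,  # the new classifier weights are not in the checkpoint
            optimizer='sgd',
            optimizer_params={'learning_rate': 0.001,  # ~10x lower than the usual 0.01
                              'momentum': 0.9,
                              'wd': 1e-4},
            num_epoch=8)
    return mod
```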

In my opinion, setting "use_global_stats=True" simply disables the batch mean/var and uses the precomputed moving mean/var, which downgrades a BN layer to a depth-wise 1x1 convolution. Considering that you are using a pre-trained model and fine-tuning on a small dataset, fixed BN layers won't be a problem. As for possible overfitting, you may want to add a dropout layer just before the final classifier, since the BN layers are disabled. (Or, you could try freezing only the BN layers in the bottom layers and setting "use_global_stats=False" for the top layers.)
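A sketch of the dropout idea, applied to the new head (the helper name and the 0.5 rate are just for illustration):

```python
import mxnet as mx

def head_with_dropout(features, num_classes, p=0.5):
    # Dropout right before the new classifier, since the frozen BN layers
    # no longer provide any regularization during fine-tuning.
    x = mx.sym.Dropout(data=features, p=p, name='drop_before_fc')
    x = mx.sym.FullyConnected(data=x, num_hidden=num_classes, name='fc1')
    return mx.sym.SoftmaxOutput(data=x, name='softmax')
```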
