About the NWPU-Crowd dataset #19
Comments
I think this issue is similar to one seen with the UCF-QNRF dataset. For large-scale datasets, it is important to ensure that the training patches contain a sufficient number of people; if many training patches are empty, the training supervision is weak. Regarding performance on the validation set, I remember the MAE is between 40 and 50.
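To make the point about empty patches concrete, here is a minimal sketch of a crop-sampling strategy that rejects patches with too few annotated heads. This is a hypothetical helper, not code from the repository; the function name, `min_count`, and `max_tries` parameters are my own assumptions.

```python
import numpy as np

def sample_crowded_patch(image, points, patch_size=256, min_count=1, max_tries=10):
    """Randomly crop a square patch, resampling until it contains at least
    `min_count` annotated head points (falls back to the last crop after
    `max_tries` attempts).

    image:  H x W x C array
    points: (N, 2) array of (x, y) head coordinates
    """
    h, w = image.shape[:2]
    for _ in range(max_tries):
        # Pick a random top-left corner that keeps the crop inside the image.
        x0 = np.random.randint(0, max(w - patch_size, 0) + 1)
        y0 = np.random.randint(0, max(h - patch_size, 0) + 1)
        # Mask of annotations falling inside the candidate crop.
        inside = ((points[:, 0] >= x0) & (points[:, 0] < x0 + patch_size) &
                  (points[:, 1] >= y0) & (points[:, 1] < y0 + patch_size))
        if inside.sum() >= min_count:
            break
    patch = image[y0:y0 + patch_size, x0:x0 + patch_size]
    # Shift annotations into patch coordinates.
    patch_points = points[inside] - np.array([x0, y0], dtype=points.dtype)
    return patch, patch_points
```

Rejecting (or resampling) near-empty crops keeps the density-map loss informative on sparse images, which is the supervision issue described above.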
Yes. If the model did not see crowded scenes during training, it may not produce good results in such scenarios at test time.
Dear author, I encountered some problems while reproducing results on the NWPU dataset. I selected the best model using the validation-set metrics. After 1500 epochs of training, the validation MAE reached about 73; I then ran the best model on the test set and submitted the results to the official website, but only got an MAE of 112. Do you know what the possible reasons are? Also, may I ask what your approximate metric on the validation set is? Thank you in advance for your answers.