Image Generation Performance Evaluation #58
I am getting the following FID score: 26.207129500696738. I computed it on this dataset: https://huggingface.co/datasets/stasstaf/MS-COCO-validation, using the standard GitHub code at https://github.com/mseitzer/pytorch-fid. Could you tell me how to reproduce your numbers?
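For reference, here is a minimal sketch of how such a score is computed with pytorch-fid; the two directory paths are placeholders for the generated images and the MS-COCO validation images, not the exact paths used above:

```python
# Sketch: compute FID between two image folders with pytorch-fid
# (https://github.com/mseitzer/pytorch-fid).
# CLI equivalent: python -m pytorch_fid generated_images/ coco_val_images/
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

fid = calculate_fid_given_paths(
    paths=["generated_images/", "coco_val_images/"],  # placeholder folders
    batch_size=50,
    device=device,
    dims=2048,  # default InceptionV3 pool3 feature dimension
)
print(f"FID: {fid}")
```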
Hi, we used this split: https://github.com/boomb0om/text2image-benchmark. We followed PixArt in fine-tuning our model on a COCO-like dataset (e.g., OpenImages) for evaluating MS-COCO FID. The other evaluations were run with the checkpoints released on GitHub.
Is this fine-tuning dataset/script available? I want to provide a solid comparison with your work, which is why I am asking.
Alternatively, would it be possible to share the checkpoint after fine-tuning? I really need it, as I have to run inference on this model. Thanks.
Hi, I can share it with you via email. (sierkinhane@gmail.com)
Hi, following up on the evaluations: I'm working on reproducing the results from Table 3 (GenEval). Here are the results I obtained:
For image generation, I used the following parameters:
Could you confirm whether these settings align with those used in the original evaluation? If there are any additional details or adjustments I should consider, I'd appreciate your guidance. Thanks!
Hi, we got a GenEval score of around 0.53 when using fewer inference steps (<=25); more inference steps yield better performance.
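As an illustration of the kind of sweep this implies, here is a hypothetical harness; `generate_images` is a stand-in for whatever text-to-image entry point is being benchmarked, not Show-o's actual API:

```python
# Hypothetical sweep over inference steps: per the maintainers, <=25 steps
# gave a GenEval score around 0.53, and more steps score higher.
from pathlib import Path

def generate_images(prompts, num_inference_steps, out_dir):
    # Placeholder: call your model's text-to-image function here and write
    # one image per prompt into out_dir.
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    print(f"[stub] {len(prompts)} prompts, {num_inference_steps} steps -> {out_dir}")

prompts = ["a photo of a red cube next to a blue sphere"]  # GenEval-style prompt
for steps in (16, 25, 50):
    generate_images(prompts, num_inference_steps=steps, out_dir=f"outputs/steps_{steps}")
# Each outputs/steps_* folder can then be scored with the GenEval scripts.
```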
For the sake of reproducibility of the results in the paper, I'd like to achieve the score reported there (0.68). Would it be possible to provide the hyperparameters that were used to get that score? It is much higher than what I get in my test runs with more inference steps (0.57 at most).
Hi, you should use this checkpoint: https://huggingface.co/showlab/show-o-512x512.
Thank you! I tried using this checkpoint, but something seems to be off. Below are the GenEval results I obtained:
Would you be able to check if this checkpoint reproduces the reported score of 0.68? Are there any specific settings I might be missing? Any advice would be greatly appreciated.
The score is very strange. Can you check whether the images were generated correctly? Also, you must use this config: https://github.com/showlab/Show-o/blob/main/configs/showo_demo_512x512.yaml
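A quick way to sanity-check that the intended config is actually being picked up, assuming the Show-o YAML files can be read with OmegaConf (the exact keys depend on the file's contents):

```python
# Sketch: load the 512x512 demo config and print it before running
# inference, to confirm the resolution/checkpoint fields match 512x512.
# Assumes `pip install omegaconf` and a local clone of the Show-o repo.
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/showo_demo_512x512.yaml")
print(OmegaConf.to_yaml(cfg))
```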
Thank you so much for your guidance! It turns out the issue was with the config. I was able to reproduce the reported score using the provided configuration file. I appreciate your help!
Hi @Sierkinhane,
Thank you for providing this amazing GitHub repository. Could you let me know which checkpoint and configuration you used to compute the results in Tables 2 and 3 (the FID and GenEval evaluations)? I would also like to know which split of MS-COCO 30k you used; a link would be appreciated.
I am trying to replicate your numbers, and I would appreciate having access to your evaluation script.
Best,
Mohammad