
## Evaluation Instruction for TinyGPT-V

### Data preparation

#### Images download

| Image source | Download path |
| :---: | :---: |
| gqa | annotations &nbsp;&nbsp; images |
| hateful meme | images and annotations |
| iconqa | images and annotation |
| vizwiz | images and annotation |

### Evaluation dataset structure

```
${MINIGPTv2_EVALUATION_DATASET}
├── gqa
│   ├── test_balanced_questions.json
│   ├── testdev_balanced_questions.json
│   └── gqa_images
├── hateful_meme
│   ├── hm_images
│   └── dev.jsonl
├── iconvqa
│   ├── iconvqa_images
│   └── choose_text_val.json
├── vizwiz
│   ├── vizwiz_images
│   └── val.json
├── vsr
│   └── vsr_images
...
```
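If you want to set the skeleton up in one go, a minimal shell sketch along these lines should work; the root variable name `MINIGPTv2_EVALUATION_DATASET` comes from the tree above, and the downloads themselves still have to be moved into place by hand:

```bash
# Illustrative only: create the expected directory skeleton, then move the
# downloaded images and annotation files into the matching folders manually.
export MINIGPTv2_EVALUATION_DATASET=/path/to/evaluation_datasets

mkdir -p "${MINIGPTv2_EVALUATION_DATASET}"/{gqa/gqa_images,hateful_meme/hm_images,iconvqa/iconvqa_images,vizwiz/vizwiz_images,vsr/vsr_images}
```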

### Config file setup

In `eval_configs/minigptv2_benchmark_evaluation.yaml`:

- Set `llama_model` to the path of the Phi model.
- Set `ckpt` to the path of our pretrained model checkpoint.
- Set `eval_file_path` to the path of the annotation file for each evaluation dataset.
- Set `img_path` to the image directory for each evaluation dataset.
- Set `save_path` to the path where evaluation results should be saved for each dataset.
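For orientation, here is a hypothetical sketch of how those fields might be laid out, modeled on the MiniGPT-v2 benchmark config this evaluation follows; the exact key names and nesting in your copy of `eval_configs/minigptv2_benchmark_evaluation.yaml` may differ, so treat it as illustrative only:

```yaml
model:
  # llama_model points at the Phi model (see the setup notes above).
  llama_model: "/path/to/phi-model"
  # Pretrained TinyGPT-V checkpoint.
  ckpt: "/path/to/tinygptv_checkpoint.pth"

evaluation_datasets:
  # One entry per dataset (gqa, iconvqa, vsr, hm, ...); vizwiz shown as an example.
  vizwiz:
    eval_file_path: /path/to/vizwiz/val.json   # annotation file
    img_path: /path/to/vizwiz/vizwiz_images    # image directory

run:
  save_path: /path/to/save/results             # where results are written
```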

### Start evaluating visual question answering

```bash
port=port_number
cfg_path=/path/to/eval_configs/benchmark_evaluation.yaml
```

Available dataset names: `vizwiz`, `iconvqa`, `gqa`, `vsr`, `hm`.

```bash
torchrun --master-port ${port} --nproc_per_node 1 eval_vqa.py \
  --cfg-path ${cfg_path} --dataset vizwiz,iconvqa,gqa,vsr,hm
```
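The command above passes `--dataset` the full comma-separated list; assuming the flag also accepts a single name, you can evaluate one benchmark at a time, e.g.:

```bash
torchrun --master-port ${port} --nproc_per_node 1 eval_vqa.py \
  --cfg-path ${cfg_path} --dataset vizwiz
```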