The official implementation of DA-Ada: Learning Domain-Aware Adapter for Domain Adaptive Object Detection (OpenReview).
This codebase is based on RegionCLIP.
- Put your dataset under './datasets/your_dataset', following the Pascal VOC format. For example (a shell sketch for linking existing copies into this layout follows the list):
  - datasets
    - cityscapes_voc
      - VOC2007
        - Annotations
        - ImageSets
        - JPEGImages
    - foggy_cityscapes_voc
      - VOC2007
        - Annotations
        - ImageSets
        - JPEGImages
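If your converted datasets already live elsewhere, one convenient option (not a required step) is to symlink them into place. This is a minimal sketch; the source paths are placeholders for your own copies.

```bash
# Optional convenience sketch: link existing Pascal-VOC-format copies of
# Cityscapes / Foggy Cityscapes into ./datasets. Source paths are placeholders.
mkdir -p datasets
ln -s /path/to/cityscapes_voc       datasets/cityscapes_voc
ln -s /path/to/foggy_cityscapes_voc datasets/foggy_cityscapes_voc

# Sanity-check the expected layout (Annotations / ImageSets / JPEGImages).
ls datasets/cityscapes_voc/VOC2007
ls datasets/foggy_cityscapes_voc/VOC2007
```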
- Put your pre-trained VLM model wherever you like, for example './ckpt', and edit MODEL.WEIGHTS in train_da_ada_c2f.sh accordingly (see the sketch after the next item).
- Following RegionCLIP, generate the class embeddings, put them wherever you like, and edit MODEL.CLIP.TEXT_EMB_PATH accordingly, as sketched below.
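Both keys are Detectron2-style config overrides. The snippet below is a hypothetical illustration of how such overrides are passed, not the actual contents of train_da_ada_c2f.sh; the config file name, checkpoint name, and embedding path are placeholders for your own files.

```bash
# Hypothetical sketch only -- the real commands live in the provided .sh scripts.
# MODEL.WEIGHTS and MODEL.CLIP.TEXT_EMB_PATH are Detectron2-style config
# overrides appended after the named arguments. All paths are placeholders.
python3 ./tools/train_net.py \
    --num-gpus 4 \
    --config-file ./configs/your_da_ada_config.yaml \
    MODEL.WEIGHTS ./ckpt/your_pretrained_vlm.pth \
    MODEL.CLIP.TEXT_EMB_PATH ./ckpt/your_class_embeddings.pth
```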
- Training: train_da_ada_c2f.sh
- Testing: test_da_ada_c2f.sh
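For example, from the repository root:

```bash
# Launch training, then evaluation, with the provided scripts.
bash train_da_ada_c2f.sh
bash test_da_ada_c2f.sh
```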