# ArtGPT-4: Artistic Vision-Language Understanding with Adapter-enhanced MiniGPT-4
[Zhengqing Yuan](https://orcid.org/0000-0002-4870-8492)*, [Huiwen Xue]()*, [Xinyi Wang]()*, [Yongming Liu](https://www.semanticscholar.org/author/Yongming-Liu/2130184867)*, [Zhuanzhe Zhao](https://www.semanticscholar.org/author/Zhuanzhe-Zhao/2727550)*, and [Kun Wang](https://www.ahpu.edu.cn/jsjyxxgc/2023/0220/c5472a187109/page.htm)*. *Equal Contribution

**Anhui Polytechnic University, Soochow University**

<a href='https://artgpt-4.github.io'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='ArtGPT_4.pdf'><img src='https://img.shields.io/badge/Paper-PDF-red'></a>

## Online Demo

Click the image to chat with ArtGPT-4 about your images
[](https://artgpt-4.github.io)

## Examples

More examples can be found on the [project page](https://artgpt-4.github.io).

## Introduction
- ArtGPT-4 is a novel model that builds on the MiniGPT-4 architecture by incorporating tailored linear layers and activation functions into Vicuna, specifically designed to optimize the model's performance in vision-language tasks (see the illustrative sketch after this list).
- These modifications to Vicuna enable the model to better capture intricate details and understand the meaning of artistic images, yielding improved image understanding compared to the original MiniGPT-4.
- To address the scarcity of suitable image-text data and improve usability, we propose a novel way to create high-quality image-text pairs with the model itself and ChatGPT working together. On this basis, we then create a small (3,500 pairs in total) yet high-quality dataset.
- ArtGPT-4 was trained on about 200 GB of image-text pairs using a single Tesla A100 in just 2 hours, demonstrating impressive training efficiency.
- Beyond improved image understanding, ArtGPT-4 can generate visual code, including aesthetically pleasing HTML/CSS web pages, with a more artistic flair.
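
The exact layer design is described in the paper; as a rough illustration only, the sketch below shows one common way tailored linear layers and an activation can be spliced into a frozen language model: a small residual adapter (linear down-projection, activation, linear up-projection) wrapped around a frozen pretrained block. All class names, dimensions, and the GELU choice here are our assumptions for illustration, not the actual ArtGPT-4 code.

```python
# Illustrative sketch: a residual adapter added to a frozen transformer block.
# Names, dimensions, and the activation are hypothetical, not ArtGPT-4's code.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down
        self.act = nn.GELU()                            # tailored activation
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen block's output intact.
        return x + self.up(self.act(self.down(x)))


class AdaptedBlock(nn.Module):
    """Wraps a frozen pretrained block; only the adapter is trainable."""

    def __init__(self, block: nn.Module, hidden_size: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False  # keep the pretrained weights frozen
        self.adapter = Adapter(hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))
```

In this pattern only the small adapter weights are updated, which is consistent with the short, inexpensive training run described above.
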
## Getting Started
### Installation

**1. Prepare the code and the environment**

Clone our repository, create a Python environment, and activate it via the following commands:

```bash
git clone https://github.com/DLYuanGod/ArtGPT-4.git
cd ArtGPT-4
conda env create -f environment.yml
conda activate artgpt4
```


**2. Prepare the pretrained Vicuna weights**

The current version of ArtGPT-4 is built on the v0 version of Vicuna-13B.
Please refer to our instructions [here](PrepareVicuna.md) to prepare the Vicuna weights.
The final weights should be in a single folder with a structure similar to the following:

```
vicuna_weights
├── config.json
├── generation_config.json
├── pytorch_model.bin.index.json
├── pytorch_model-00001-of-00003.bin
...
```

Then, set the path to the Vicuna weights in the model config file
[here](minigpt4/configs/models/minigpt4.yaml#L16) at Line 16.
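
For reference, the edited line would look roughly like the snippet below. The key name follows MiniGPT-4's config convention and the path is a placeholder, so verify both against the actual file in this repo:

```yaml
# minigpt4/configs/models/minigpt4.yaml (around Line 16)
# Key name assumed from MiniGPT-4's config; check the file in this repo.
llama_model: "/path/to/vicuna_weights/"
```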

**3. Prepare the pretrained MiniGPT-4 checkpoint**

[Download](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link)

Then, set the path to the pretrained checkpoint in the evaluation config file
[eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L11) at Line 11.
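
Again for reference, the edited line would look something like the snippet below; the key name is assumed from MiniGPT-4's eval config and the path is a placeholder:

```yaml
# eval_configs/minigpt4_eval.yaml (around Line 11)
# Key name assumed from MiniGPT-4's eval config; check the file in this repo.
ckpt: '/path/to/pretrained/checkpoint.pth'
```
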
### Launching Demo Locally

Try out our demo [demo.py](demo.py) on your local machine by running

```bash
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
```


### Training
The training of ArtGPT-4 consists of two alignment stages, and the training process for each stage is consistent with that of [MiniGPT-4](https://minigpt-4.github.io/), as sketched below.
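
Since the pipeline mirrors MiniGPT-4, each stage can presumably be launched the same way. The command below is a sketch: the config filename follows MiniGPT-4's convention and `NUM_GPU` is a placeholder, so check `train_configs/` in this repo for the actual names.

```bash
# Sketch of first-stage pretraining, assuming MiniGPT-4-style launch scripts.
# NUM_GPU and the config path are placeholders; verify against this repo.
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```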

**Datasets**
We use [Laion-aesthetic](https://github.com/LAION-AI/laion-datasets/blob/main/laion-aesthetic.md) from the LAION-5B dataset, which amounts to approximately 200 GB for the first 302 tar files.

## Acknowledgement

+ [MiniGPT-4](https://minigpt-4.github.io/): our work builds on and improves this model.

If you're using ArtGPT-4 in your research or applications, please cite using this BibTeX:
```bibtex
@article{yuan2023artgpt4,
  title={ArtGPT-4: Artistic Vision-Language Understanding with Adapter-enhanced MiniGPT-4},
  author={Yuan, Zhengqing and Xue, Huiwen and Wang, Xinyi and Liu, Yongming and Zhao, Zhuanzhe and Wang, Kun},
  year={2023}
}
```


## License
This repository is under the [BSD 3-Clause License](LICENSE.md).
Much of the code is based on [Lavis](https://github.com/salesforce/LAVIS), which is also under the BSD 3-Clause License [here](LICENSE_Lavis.md).