### Training
### Get Pretrained Weights from Recognition
You might need to download the original [Swin-T Weights](https://github.com/SwinTransformer/storage/releases/download/v1.0.4/swin_tiny_patch244_window877_kinetics400_1k.pth) to initialize the model.
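Once downloaded, the checkpoint's parameter names may not line up directly with the model you are initializing. Below is a minimal sketch of the usual key cleanup, assuming the common layout of released Swin checkpoints (a `state_dict`/`model` wrapper key and a `backbone.` prefix) — these names are assumptions about the checkpoint file, not details taken from this repository:

```python
def normalize_swin_keys(state_dict: dict) -> dict:
    """Unwrap a checkpoint dict and strip a possible 'backbone.' key prefix.

    The wrapper names ('state_dict', 'model') and the 'backbone.' prefix are
    assumptions about how the released checkpoint is laid out; inspect the
    file if loading still fails.
    """
    # Unwrap one level of nesting if the weights sit under a wrapper key.
    for wrapper in ("state_dict", "model"):
        inner = state_dict.get(wrapper)
        if isinstance(inner, dict):
            state_dict = inner
            break
    # Strip the prefix so keys match a bare backbone model.
    prefix = "backbone."
    return {(k[len(prefix):] if k.startswith(prefix) else k): v
            for k, v in state_dict.items()}
```

You would then pass the result to `load_state_dict(..., strict=False)`, so that heads absent from the recognition checkpoint are simply left at their initialization.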
### Train with large dataset (LSVQ)
To train FAST-VQA-B, please run
```shell
python new_train.py -o options/fast/fast-b.yml
```
To train FAST-VQA-M, please run
```shell
python new_train.py -o options/fast/fast-m.yml
```
Supported TRAINSET is LSVQ, and VALSETS can be LSVQ (LSVQ-test + LSVQ-1080p), KoNViD, LIVE_VQC.
### Finetune on small datasets with provided weights (*from 1.0 version*)
You should download our [v1.0-weights](https://github.com/TimothyHTimothy/FAST-VQA/releases/tag/v1.0.0-open-release-weights) for this function. We are working to refactor this part soon.
This training will split the dataset into 10 random train/test splits (with random seed 42) and report the best result on the random split of the test dataset.
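The split procedure can be sketched as follows. The 80/20 train/test ratio is an assumption (a common choice for these benchmarks), and `split_indices` is a hypothetical helper, not a function from this repository:

```python
import random

def split_indices(n_videos: int, n_splits: int = 10, seed: int = 42):
    """Generate n_splits random train/test index splits with a fixed seed."""
    rng = random.Random(seed)  # seed 42, as stated in the text above
    splits = []
    for _ in range(n_splits):
        idx = list(range(n_videos))
        rng.shuffle(idx)
        cut = int(0.8 * n_videos)  # assumed 80/20 train/test ratio
        splits.append((idx[:cut], idx[cut:]))
    return splits
```

Training then runs once per split, and the best test-split result is what gets reported.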
```shell
python inference.py -d $DATASET$
```
Note that this part only supports FAST-VQA-B and FAST-VQA-M, not FAST-VQA-B-3D.
Supported `$DATASET$` are KoNViD-1k, LIVE_VQC, CVD2014, LIVE-Qualcomm, YouTube-UGC.
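If you want to sweep all supported datasets, a small driver loop is enough. This is a sketch: `inference_commands` is a hypothetical helper, and the loop only prints each command as a dry run — swap `print` for `subprocess.run(cmd)` to actually execute them:

```python
import shlex

DATASETS = ["KoNViD-1k", "LIVE_VQC", "CVD2014", "LIVE-Qualcomm", "YouTube-UGC"]

def inference_commands(datasets=DATASETS):
    """Build the per-dataset inference command lines."""
    return [["python", "inference.py", "-d", ds] for ds in datasets]

for cmd in inference_commands():
    print(shlex.join(cmd))  # dry run; use subprocess.run(cmd) to execute
```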
## Citation
The following paper is to be cited in the bibliography if relevant papers are proposed.
And cite this code library if it is used:
```
@misc{end2endvideoqualitytool,
    title = {Open Source Deep End-to-End Video Quality Assessment Toolbox},