
Commit 7d00e3a

Guocheng Qian committed:
update code to the final ICCP version and update dataset link
1 parent a8685e8 commit 7d00e3a


51 files changed: +2670 −985 lines

Evaluate_PSNR_SSIM.m (+7 −5)
````diff
@@ -4,10 +4,10 @@ function Evaluate_PSNR_SSIM()
 
 %%
 % div2k
-% pretrain_dataset = 'pixelshift';
-% dataset = 'pixelshift';
-pretrain_dataset = 'div2k';
-dataset = 'urban100';
+pretrain_dataset = 'pixelshift';
+dataset = 'pixelshift';
+% pretrain_dataset = 'div2k';
+% dataset = 'urban100';
 % dataset = 'cbsd68';
 % dataset = 'set14';
 % dataset = 'div2k';
@@ -20,7 +20,9 @@ function Evaluate_PSNR_SSIM()
 
 
 % % tasks = {'resnet-dn+dm+sr-SR2', 'resnet-dn+sr-dm-SR2'};
-tasks = {'e2e-dn+dm+sr-SR2', 'e2e-dn+sr-dm-SR2'};
+% tasks = {'e2e-dn+dm+sr-SR2', 'e2e-dn+sr-dm-SR2'};
+% tasks = {'e2e-dn+sr-dm-SR2'};
+tasks = {'e2e-dn-sr-dm-SR2'};
 
 gt_path = fullfile('/home/qiang/codefiles/low_level/ISP/ispnet/data/benchmark/', dataset, 'gt');
 pred_dir = fullfile(fullfile('/home/qiang/codefiles/low_level/ISP/ispnet/pretrain/', pretrain_dataset, 'pipeline'), ['result_', dataset]);
````

README.md (+57 −53)
````diff
@@ -1,51 +1,48 @@
 # TENet <a href="https://arxiv.org/abs/1905.02538" target="_blank">[PDF]</a> <a href="http://gcqian.com/project/pixelshift200">[pixelshift200]</a>
 
 ### Rethink the Pipeline of Demosaicking, Denoising, and Super-resolution
-By [Guocheng Qian](https://guochengqian.github.io/), [Yuanhao Wang](https://github.com/yuanhaowang1213), Chao Dong, [Jimmy S. Ren](http://www.jimmyren.com/), Wolfgang Heidrich, Bernard Ghanem, [Jinjin Gu](http://www.jasongt.com/)
+By [Guocheng Qian*](https://guochengqian.github.io/), [Yuanhao Wang*](https://github.com/yuanhaowang1213), [Jinjin Gu](http://www.jasongt.com/), Chao Dong, Wolfgang Heidrich, Bernard Ghanem, [Jimmy S. Ren](http://www.jimmyren.com/)
 
 The original name of this project is: "Trinity of Pixel Enhancement: a Joint Solution for Demosaicking, Denoising and Super-Resolution"
 
 
 
-## pipeline DN -> SR -> DM
-
-![pipeline](misc/pipeline_result.png)
-
 ## TENet
 
 We insert the proposed pipeline DN -> SR -> DM into an end-to-end network constructed by RRDB for the joint DN, DM and SR. We leverage the detachable branch to provide the middle-stage supervision.
 
+
 <p align="center">
     <img height="300" src="misc/Network.png">
 </p>
 
 
-## PixelShift200 dataset
-
-![pixelshift](misc/PixelShift.png)
-
 
 
-## Resources
+## PixelShift200 dataset
 
-* pretrained models
-* Pixelshift200:
-* Real-shot raw images:
+We employ advanced pixel shift technology to perform full color sampling of the image.
+Pixel shift technology takes four samples of the same image, physically moving the camera sensor one pixel horizontally or vertically between samples, so that all color information is captured at each pixel.
+This ensures that the sampled images follow the distribution of natural images sampled by the camera and that the full color information is obtained.
+In this way, the collected images are artifact-free, which leads to better training results for demosaicking-related tasks.
 
-Will be available soon.
+<p align="center">
+    <img height="200" src="misc/PixelShift.png">
+</p>
 
 
 
+Download the dataset from the [pixelshift200 website](http://gcqian.com/project/pixelshift200).
 
 
-### Enviroment installnation
+### Environment installation
 
 Clone this github repo and install the environment by:
 
-```shell
+```bash
 git clone https://github.com/guochengqian/TENet
 cd TENet
-source env_install.sh
+source install.sh
 conda activate tenet
 ```
 
````
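The four-sentence pixel-shift description added above maps directly to a few lines of code. Below is a minimal sketch (not code from this repo) of why four one-pixel-shifted captures give a full color sample at every pixel: each shift offsets the Bayer pattern by one pixel, so every pixel is seen once through each of the R, Gr, Gb, and B filters, matching the 4-channel (R, Gr, Gb, B) layout of the PixelShift200 files described later in the README. The function name, the RGGB phase layout, and the shift convention are illustrative assumptions.

```python
import numpy as np

# RGGB phase -> channel index in an (R, Gr, Gb, B) output; the layout is an
# assumption for illustration, chosen to match the dataset's 4-channel format.
PHASE_TO_CHANNEL = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3}

def merge_pixel_shift(frames):
    """Merge four pixel-shifted RGGB captures into one (H, W, 4) image.

    `frames` maps a sensor shift (dy, dx) in {0, 1}^2 to an aligned HxW raw
    capture. Shifting the sensor by one pixel offsets the effective CFA phase,
    so across the four shifts every pixel is sampled exactly once by each of
    R, Gr, Gb, and B, and no demosaicking interpolation is needed.
    """
    h, w = frames[(0, 0)].shape
    rows, cols = np.mgrid[0:h, 0:w]
    out = np.empty((h, w, 4), dtype=np.float32)
    for (dy, dx), frame in frames.items():
        for (pr, pc), ch in PHASE_TO_CHANNEL.items():
            # Pixels of this shifted capture that sit under CFA phase (pr, pc).
            mask = ((rows + dy) % 2 == pr) & ((cols + dx) % 2 == pc)
            out[..., ch][mask] = frame[mask]
    return out
```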
````diff
@@ -57,108 +54,115 @@ conda activate tenet
 
 1. Download ([DIV2K](https://drive.google.com/file/d/1vXPPr2hVaMewz2JA1lFfI5uHB4ENwRXQ/view?usp=sharing)) dataset
 
-2. create data directory in TENet folder: `mkdir data && cd data`
+2. `mkdir data && cd data`
 
 3. Link DIV2K data into ./data/DIV2K, e.g. `ln -s /data/lowlevel/DIV2K ./`
 
 4. Crop DIV2K
+
 ```bash
 cd ../datasets
+python crop_imgs.py # crop train images
 python crop_imgs.py --src_dir ../data/DIV2K/DIV2K_val5_HR --save_dir ../data/DIV2K/DIV2K_val5_HR_sub # crop val5 images
-python generate_datalist_div2k.py # generate div2k training and val dataset
 ```
-The generated .txt file `train_div2k.txt` and `val_div2k.txt` are used for training on DIV2K.
-
+
 2. PixelShift200 data preparation
 
-   1. Download [Pixelshift200]. They are .mat format, having 4 channels (R, Gr, Gb, B). Unzip the .zip file and put all folders inside into one folder called pixelshift200. For example, put here `/data/pixelshift200`.
+   1. Download [Pixelshift200](http://guochengqian.com/pixelshift200). The files are in .mat format with 4 channels (R, Gr, Gb, B). Unzip the .zip file and put all folders inside into one folder called pixelshift200, e.g. `/data/lowlevel/pixelshift200`.
+
+   2. `cd TENet && mkdir data && cd data`
 
-   3. Link PixelShift200 data into ./data/pixelshift200, e.g. `cd TENet/data && ln -s /data/pixelshift200 pixelshift200`
+   3. Link PixelShift200 data into ./data/pixelshift200, e.g. `ln -s /data/lowlevel/pixelshift200 pixelshift200`
 
 4. Crop images into 512*512, and generate the text file that contains the location of each image:
+
 ```bash
 cd ../datasets
 python crop_pixelshift200.py
 python generate_datalist_pixelshift.py
 ```
-The generated .txt file `train_pixelshift.txt` (9444 Lines) and `val_pixelshift.txt` (20 Lines) are used for training. check them.
 
 
 ## Training
-#### Train joint models:
+#### Train joint models
 
 * DN+DM+SR (end to end without pipeline)
 
-```shell
-python train.py --in_type noisy_lr_raw --mid_type None --out_type linrgb --model tenet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block_type rrdb --n_blocks 6 --imgs_per_gpu 8
+```bash
+python train.py --in_type noisy_lr_raw --mid_type None --out_type linrgb --model tenet --n_gpus 4 --block rrdb --n_blocks 12
 ```
 
-* DN+SR->DM (our TENet)
+* DN+SR->DM (our **TENet**)
 
-```SHELL
-python train.py --in_type noisy_lr_raw --mid_type raw --out_type linrgb --model tenet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block_type rrdb --n_blocks 6 --imgs_per_gpu 8
+```bash
+python train.py --in_type noisy_lr_raw --mid_type raw --out_type linrgb --model tenet --n_gpus 4 --block rrdb --n_blocks 12
 ```
 
 Note:
 
-1. `--mid_type raw` is to activate the auxiliary mid stage supervision. Here we add `raw` as the supervision, therefore, the pipeline will be DN+SR->DM.
+1. `--mid_type raw` activates the auxiliary mid-stage supervision. Here we add `raw` as the supervision; therefore, the pipeline will be DN+SR->DM
 
-2. for training on a different dataset, like DIV2K, change `--train_list datasets/train_div2k.txt --val_list datasets/val_div2k.txt`
+2. for training on a different dataset, like DIV2K, change `--dataset div2k`
+
+3. for training with a Gaussian noise model, add `--noise_model g`
 
-3. for using a different building block, such as NLSA `--block_type nlsa`, or EAM `--block_type eam` , or RRG `--block_type rrg`, or DRLM `--block_type drlm` or RRDB `--block_type rrdb`
+4. for using a different building block, such as NLSA `--block nlsa`, EAM `--block eam`, RRG `--block rrg`, DRLM `--block drlm`, or RRDB `--block rrdb`
 
-4. Monitor your jobs using Tensorboard (log saved in ./log folder by default, `tensorboard --logdir=./ ` ) or using wandb (online website) by set `--use_wandb`.
+5. [`wandb`](https://wandb.ai/) is used by default. Set `--no_wandb` to disable it
 
 
 
-#### Train sequential models:
+#### Train sequential models (ablation study)
 
-```shell
+```bash
 # RawDN
-python train.py --in_type noisy_raw --out_type raw --model resnet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block_type rrdb --n_blocks 6
+python train.py --in_type noisy_raw --out_type raw --model resnet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block rrdb --n_blocks 12
 
 # RawSR
-python train.py --in_type lr_raw --out_type raw --model resnet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block_type rrdb --n_blocks 6
+python train.py --in_type lr_raw --out_type raw --model resnet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block rrdb --n_blocks 12
 
 # DM
-python train.py --in_type raw --out_type linrgb --model resnet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block_type rrdb --n_blocks 6
+python train.py --in_type raw --out_type linrgb --model resnet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block rrdb --n_blocks 12
 
 # RGBDN
-python train.py --in_type noisy_linrgb --out_type linrgb --model resnet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block_type rrdb --n_blocks 6
+python train.py --in_type noisy_linrgb --out_type linrgb --model resnet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block rrdb --n_blocks 12
 
 # RGBSR
-python train.py --in_type lr_linrgb --out_type linrgb --model resnet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block_type rrdb --n_blocks 6
+python train.py --in_type lr_linrgb --out_type linrgb --model resnet --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block rrdb --n_blocks 12
 ```
 
 
-
-
 ### Train SOTA models
 * JDSR
-```SHELL
-python train.py --in_type noisy_lr_raw --mid_type None --out_type linrgb --model jdsr --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block_type res --n_blocks 12 --channels 256
+```bash
+python train.py --in_type noisy_lr_raw --mid_type None --out_type linrgb --model jdsr --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --use_wandb --block res --n_blocks 12 --channels 256
 ```
 
 * JDnDmSR
 
-```SHELL
+```bash
 python train.py --in_type noisy_lr_raw --mid_type lr_raw --out_type linrgb --model jdndmsr --scale 2 --train_list datasets/train_pixelshift.txt --val_list datasets/val_pixelshift.txt --n_gpus 4 --n_blocks 2 --block rcab
 ```
 
 
-
 ## Testing
 
-```shell
+```bash
 bash script_all_pipelines.sh
+# this script supports evaluation on all benchmarking datasets as well as the real-shot images for all possible pipelines
 ```
 
-Note:
-1. this script supports evaluation on all benchmarking datasets as well as the real-shot images for all possible pipelines
-2. for the real shot images testing, you have to:
-   * save the real-shot image as a readable raw image (like in .RAW, .ARW, .DNG format). For example, we use Lightroom mobile version to shot images on iPhone and save the photo in .DNG format.
-   * Read the general metadata using RawPy and read the noise profiling metadata using [Jeffrey's Image Metadata Viewer](http://exif.regex.info/exif.cgi) or [metapicz](http://metapicz.com/).
+Note: for real-shot image testing, you have to:
 
+1. save the real-shot image as a readable raw image (e.g. in .RAW, .ARW, or .DNG format). For example, we use the Lightroom mobile app to shoot images on an iPhone and save the photo in .DNG format.
+2. read the general metadata using RawPy and the noise-profiling metadata using [Jeffrey's Image Metadata Viewer](http://exif.regex.info/exif.cgi) or [metapicz](http://metapicz.com/).
+
+
+
+## Result
+<p align="center">
+    <img width="800" src="misc/pipeline_result.png">
+</p>
 
 
 ### Citation
````
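Two of the README changes above carry the commit's key technical ideas, so a couple of hedged sketches may help. First, the `--mid_type raw` note: the detachable branch it refers to amounts to a second head supervised with the ground-truth raw in the middle of the pipeline, trained jointly with the final linear-RGB output. A schematic PyTorch sketch of that idea follows; the module names, the L1 losses, and the loss weight are assumptions for illustration, not the repo's actual classes or hyperparameters.

```python
import torch.nn as nn

class JointDnSrDm(nn.Module):
    """Schematic DN+SR -> DM model with a detachable mid-stage (raw) head.

    `dn_sr` maps noisy LR raw to clean HR features, `raw_head` is the
    detachable branch supervised by the ground-truth HR raw (the mid stage),
    and `dm` demosaicks the same features to linear RGB. All names here are
    illustrative, not the repo's actual modules.
    """

    def __init__(self, dn_sr, raw_head, dm):
        super().__init__()
        self.dn_sr = dn_sr
        self.raw_head = raw_head  # needed only for training; detachable at test time
        self.dm = dm

    def forward(self, noisy_lr_raw):
        feats = self.dn_sr(noisy_lr_raw)
        return self.raw_head(feats), self.dm(feats)

def joint_loss(pred_raw, pred_linrgb, gt_raw, gt_linrgb, w_mid=1.0):
    # Final-stage loss plus the auxiliary mid-stage (raw) loss enabled by
    # `--mid_type raw`; with `--mid_type None` the second term (and the raw
    # head) is simply dropped, giving the end-to-end DN+DM+SR baseline.
    l1 = nn.L1Loss()
    return l1(pred_linrgb, gt_linrgb) + w_mid * l1(pred_raw, gt_raw)
```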
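Second, the real-shot testing note: a minimal sketch of the "read the general metadata using RawPy" step might look like the following. Only standard rawpy attributes are used; the file name is a placeholder, and the normalization is a common convention rather than necessarily the repo's exact preprocessing.

```python
import rawpy

# Open a real-shot .DNG (placeholder path) and read the general metadata.
with rawpy.imread("real_shot.DNG") as raw:
    bayer = raw.raw_image_visible.astype("float32")  # mosaiced sensor data
    black = raw.black_level_per_channel              # per-channel black level
    white = raw.white_level                          # saturation (white) level
    wb = raw.camera_whitebalance                     # as-shot white balance
    pattern = raw.raw_pattern                        # CFA layout, e.g. RGGB order

# Common normalization to [0, 1] before feeding a network (an assumed
# convention, not necessarily the repo's exact preprocessing).
bayer = (bayer - black[0]) / (white - black[0])

# rawpy does not expose the DNG NoiseProfile tag, which is why the README
# points to Jeffrey's Image Metadata Viewer or metapicz for the noise
# profiling metadata.
```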
