# PointPillars

Welcome to PointPillars.

This repo demonstrates how to reproduce the results from
PointPillars: Fast Encoders for Object Detection from Point Clouds on the
[KITTI dataset](http://www.cvlibs.net/datasets/kitti/) by making the minimum required changes from the preexisting
open source codebase [SECOND](https://github.com/traveller59/second.pytorch). This is not the official
codebase of nuTonomy (an Aptiv company), but it can be used to match the published PointPillars results.

## Getting Started

This is a fork of [SECOND for KITTI object detection](https://github.com/traveller59/second.pytorch) and the relevant
subset of the original README is reproduced here.

### Code Support

Only Python 3.6+ and PyTorch 0.4.1+ are supported. The code has only been tested on Ubuntu 16.04/18.04.

### Install

#### 1. Clone code

```bash
git clone https://github.com/nutonomy/second.pytorch.git
```

#### 2. Install Python package dependencies

It is recommended to use the Anaconda package manager.

First, use Anaconda to configure as many packages as possible.

```bash
conda create -n pointpillars python=3.7 anaconda
source activate pointpillars
conda install shapely pybind11 protobuf scikit-image numba pillow
conda install pytorch-nightly -c pytorch
conda install google-sparsehash -c bioconda
```

Then use pip for the packages missing from Anaconda.

```bash
pip install --upgrade pip
pip install fire tensorboardX
```
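
A quick sanity check along the following lines can help confirm the environment is usable. This is only a suggested check, not part of the original setup; it assumes the `pointpillars` environment created above is active.

```bash
# optional: verify PyTorch sees a GPU and the key packages import cleanly
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import numba, shapely, skimage, fire, tensorboardX; print('imports ok')"
```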

Finally, install SparseConvNet. This is not required for PointPillars, but the general SECOND code base expects this
to be correctly configured.

```bash
git clone git@github.com:facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash build.sh
# NOTE: if bash build.sh fails, try bash develop.sh instead
```
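
If the build succeeded, the package should import; this is a suggested check rather than part of the original instructions.

```bash
# optional: confirm SparseConvNet is importable from the active environment
python -c "import sparseconvnet; print(sparseconvnet.__file__)"
```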

Additionally, you may need to install Boost geometry:

```bash
sudo apt-get install libboost-all-dev
```

#### 3. Set up CUDA for numba

You need to add the following environment variables for numba to ~/.bashrc:

```bash
export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
```
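
After re-sourcing ~/.bashrc you can ask numba to enumerate the CUDA devices it can see. This is a suggested sanity check, not part of the original instructions.

```bash
# optional: numba should report at least one supported CUDA device
source ~/.bashrc
python -c "from numba import cuda; cuda.detect()"
```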

#### 4. PYTHONPATH

Add second.pytorch/ to your PYTHONPATH.
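
For example, assuming the repository was cloned into your home directory (adjust the path to wherever you cloned it), a line like this in ~/.bashrc works:

```bash
# assumes the repo lives at $HOME/second.pytorch
export PYTHONPATH="$PYTHONPATH:$HOME/second.pytorch"
```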

### Prepare dataset

#### 1. Dataset preparation

Download KITTI dataset and create some directories first:

```plain
└── KITTI_DATASET_ROOT
       ├── training
       |   ├── calib
       |   ├── image_2
       |   ├── label_2
       |   ├── velodyne
       |   └── velodyne_reduced <-- empty directory
       └── testing
           ├── calib
           ├── image_2
           ├── velodyne
           └── velodyne_reduced <-- empty directory
```

Note: PointPillars' protos use `KITTI_DATASET_ROOT=/data/sets/kitti_second/`.

#### 2. Create kitti infos:

```bash
python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
```

#### 3. Create reduced point cloud:

```bash
python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT
```

#### 4. Create groundtruth-database infos:

```bash
python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT
```
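
As a usage example, the three preparation commands can be run back to back against the dataset root suggested in the note above; the concrete path is only an assumption, substitute your own.

```bash
# assumes the directory layout above was created under /data/sets/kitti_second/
KITTI_DATASET_ROOT=/data/sets/kitti_second/
python create_data.py create_kitti_info_file --data_path=$KITTI_DATASET_ROOT
python create_data.py create_reduced_point_cloud --data_path=$KITTI_DATASET_ROOT
python create_data.py create_groundtruth_database --data_path=$KITTI_DATASET_ROOT
```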

#### 5. Modify config file

The config file needs to be edited to point to the above datasets:

```bash
train_input_reader: {
  ...
  kitti_info_path: "/path/to/kitti_infos_train.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
...
eval_input_reader: {
  ...
  kitti_info_path: "/path/to/kitti_infos_val.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
```

### Train

```bash
cd ~/second.pytorch/second
python ./pytorch/train.py train --config_path=./configs/pointpillars/car/xyres_16.config --model_dir=/path/to/model_dir
```

* If you want to train a new model, make sure "/path/to/model_dir" doesn't exist.
* If "/path/to/model_dir" does exist, training will be resumed from the last checkpoint.
* Training only supports a single GPU.
* Training uses a batchsize=2 which should fit in memory on most standard GPUs.
* On a single 1080Ti, training xyres_16 requires approximately 20 hours for 160 epochs.
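
Because tensorboardX is installed above, training can optionally be monitored with TensorBoard. This sketch assumes the event files are written under model_dir (check your run directory) and that the separate tensorboard package is available; neither assumption comes from the original instructions.

```bash
# optional: watch loss curves while training runs
pip install tensorboard
tensorboard --logdir=/path/to/model_dir
```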

### Evaluate

```bash
cd ~/second.pytorch/second
python ./pytorch/train.py evaluate --config_path=./configs/pointpillars/car/xyres_16.config --model_dir=/path/to/model_dir
```

* Detection results will be saved in model_dir/eval_results/step_xxx.
* By default, results are stored as a result.pkl file. To save them in the official KITTI label format instead, use --pickle_result=False.
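
To inspect the pickled results, a minimal sketch like the one below is enough; the internal structure of result.pkl is not documented here, so only the load itself is assumed (step_xxx is the placeholder used above).

```bash
# optional: load the pickled detections and report the container type
python -c "import pickle; dets = pickle.load(open('/path/to/model_dir/eval_results/step_xxx/result.pkl', 'rb')); print(type(dets))"
```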