This repository was archived by the owner on Oct 13, 2021. It is now read-only.

Commit 0ea37e4 (initial commit)
1 parent db9a897
16 files changed: +1871, -201 lines

README.md (+64, -159)

This commit replaces the original SECOND README with the PointPillars README below; sections specific to the SECOND codebase were removed and are reproduced after the new content.
# PointPillars

Welcome to PointPillars.

This repo demonstrates how to reproduce the results from PointPillars: Fast Encoders for Object Detection from Point Clouds on the [KITTI dataset](http://www.cvlibs.net/datasets/kitti/) by making the minimum required changes to the preexisting open source codebase [SECOND](https://github.com/traveller59/second.pytorch). This is not the official nuTonomy (an Aptiv company) codebase, but it can be used to match the published PointPillars results.

## Getting Started

This is a fork of [SECOND for KITTI object detection](https://github.com/traveller59/second.pytorch); the relevant subset of the original README is reproduced here.

### Code Support

ONLY supports Python 3.6+ and PyTorch 0.4.1+. The code has only been tested on Ubuntu 16.04/18.04.

### Install

#### 1. Clone code

```bash
git clone https://github.com/nutonomy/second.pytorch.git
```

#### 2. Install Python package dependencies

It is recommended to use the Anaconda package manager.

First, use Anaconda to configure as many packages as possible.

```bash
conda create -n pointpillars python=3.7 anaconda
source activate pointpillars
conda install shapely pybind11 protobuf scikit-image numba pillow
conda install pytorch-nightly -c pytorch
conda install google-sparsehash -c bioconda
```

Then use pip for the packages missing from Anaconda.

```bash
pip install --upgrade pip
pip install fire tensorboardX
```
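
As an optional sanity check (assuming the pointpillars environment created above is active), confirm that the key packages import cleanly:

```bash
python -c "import torch, numba, shapely, skimage, fire, tensorboardX; print('torch', torch.__version__)"
```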

Finally, install SparseConvNet. This is not required for PointPillars, but the general SECOND code base expects it to be correctly configured.

```bash
git clone git@github.com:facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash build.sh
# NOTE: if bash build.sh fails, try bash develop.sh instead
```
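
To verify the SparseConvNet build, an optional check (its Python package is imported as sparseconvnet):

```bash
python -c "import sparseconvnet as scn; print(scn.__file__)"
```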

Additionally, you may need to install Boost geometry:

```bash
sudo apt-get install libboost-all-dev
```

#### 3. Set up CUDA for numba

You need to add the following environment variables for numba to ~/.bashrc:

```bash
export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
```
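
After editing ~/.bashrc, reload it so the variables take effect in the current shell. The paths above assume a default CUDA install location; adjust them if CUDA lives elsewhere:

```bash
source ~/.bashrc
echo "$NUMBAPRO_NVVM"   # should print the libnvvm.so path configured above
```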

#### 4. PYTHONPATH

Add second.pytorch/ to your PYTHONPATH.
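
For example, a minimal sketch assuming the repository was cloned into your home directory (adjust the path to wherever you cloned it):

```bash
# Append to ~/.bashrc so the second package is importable.
export PYTHONPATH="$PYTHONPATH:$HOME/second.pytorch"
```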

### Prepare dataset

#### 1. Dataset preparation

Download the KITTI dataset and create some directories first:

```plaintext
[... directory tree truncated in this diff hunk ...]
└── velodyne_reduced   <-- empty directory
```
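
The velodyne_reduced directories start out empty. For example, assuming the standard KITTI training/testing layout and the dataset root from the note below, they can be created with:

```bash
mkdir -p /data/sets/kitti_second/training/velodyne_reduced
mkdir -p /data/sets/kitti_second/testing/velodyne_reduced
```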

Note: PointPillars' protos use `KITTI_DATASET_ROOT=/data/sets/kitti_second/`.

#### 2. Create kitti infos:

```bash
python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
```

#### 3. Create reduced point cloud:

```bash
python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT
```

#### 4. Create groundtruth-database infos:

```bash
python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT
```
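
Putting the three preprocessing steps together, an illustrative script that assumes the dataset root from the note above and that it is run from the second.pytorch/second directory:

```bash
KITTI_DATASET_ROOT=/data/sets/kitti_second
python create_data.py create_kitti_info_file --data_path=$KITTI_DATASET_ROOT
python create_data.py create_reduced_point_cloud --data_path=$KITTI_DATASET_ROOT
python create_data.py create_groundtruth_database --data_path=$KITTI_DATASET_ROOT
```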

#### 5. Modify config file

The config file needs to be edited to point to the above datasets:

```bash
train_input_reader: {
  ...
}
...
eval_input_reader: {
  ...
}
```
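
To locate the path fields that need editing, a simple illustrative search (assuming the config used in the Train section below):

```bash
grep -n "path" ./configs/pointpillars/car/xyres_16.config
```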

### Train

```bash
cd ~/second.pytorch/second
python ./pytorch/train.py train --config_path=./configs/pointpillars/car/xyres_16.config --model_dir=/path/to/model_dir
```

* If you want to train a new model, make sure "/path/to/model_dir" doesn't exist.
* If "/path/to/model_dir" does exist, training will be resumed from the last checkpoint.
* Training only supports a single GPU.
* Training uses a batch size of 2, which should fit in memory on most standard GPUs.
* On a single 1080Ti, training xyres_16 requires approximately 20 hours for 160 epochs.
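
To watch training progress you can point TensorBoard at the model directory (optional; this assumes the tensorboardX summaries are written inside model_dir):

```bash
tensorboard --logdir=/path/to/model_dir
```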
### Evaluate

```bash
cd ~/second.pytorch/second
python ./pytorch/train.py evaluate --config_path=./configs/pointpillars/car/xyres_16.config --model_dir=/path/to/model_dir
```

* The detection result will be saved in model_dir/eval_results/step_xxx.
* By default, results are stored as a result.pkl file. To save in the official KITTI label format, use --pickle_result=False.
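
For a quick look at the pickled detections (illustrative only; the exact structure of the pickle is not documented here):

```bash
python -c "import pickle; r = pickle.load(open('/path/to/model_dir/eval_results/step_xxx/result.pkl', 'rb')); print(type(r))"
```
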
The remaining sections of the original "SECOND for KITTI object detection" README were removed by this commit. They describe the upstream SECOND codebase rather than PointPillars and are reproduced below for reference.

SECOND detector, based on the author's unofficial implementation of VoxelNet with some improvements. Ubuntu 18.04 had speed problems in the author's environment and could sometimes not build or use SparseConvNet.

### Performance in KITTI validation set (50/50 split)

The people-class results had problems and still needed tuning.

```
Car AP@0.70, 0.70, 0.70:
bbox AP:90.80, 88.97, 87.52
bev  AP:89.96, 86.69, 86.11
3d   AP:87.43, 76.48, 74.66
aos  AP:90.68, 88.39, 86.57
Car AP@0.70, 0.50, 0.50:
bbox AP:90.80, 88.97, 87.52
bev  AP:90.85, 90.02, 89.36
3d   AP:90.85, 89.86, 89.05
aos  AP:90.68, 88.39, 86.57
```

### Pretrained model

Before using a pretrained model, you need to modify some files in SparseConvNet, because the pretrained models do not work with SparseConvNet master:

* convolution.py

```Python
# self.weight = Parameter(torch.Tensor(
#     self.filter_volume, nIn, nOut).normal_(
#     0,
#     std))
self.weight = Parameter(torch.Tensor(
    self.filter_volume * nIn, nOut).normal_(
    0,
    std))
# ...
# output.features = ConvolutionFunction.apply(
#     input.features,
#     self.weight,
output.features = ConvolutionFunction.apply(
    input.features,
    self.weight.view(self.filter_volume, self.nIn, self.nOut),
```

* submanifoldConvolution.py

```Python
# self.weight = Parameter(torch.Tensor(
#     self.filter_volume, nIn, nOut).normal_(
#     0,
#     std))
self.weight = Parameter(torch.Tensor(
    self.filter_volume * nIn, nOut).normal_(
    0,
    std))
# ...
# output.features = SubmanifoldConvolutionFunction.apply(
#     input.features,
#     self.weight,
output.features = SubmanifoldConvolutionFunction.apply(
    input.features,
    self.weight.view(self.filter_volume, self.nIn, self.nOut),
```

You can download pretrained models from [Google Drive](https://drive.google.com/open?id=1eblyuILwbxkJXfIP5QlALW5N_x5xJZhL). The car model corresponds to car.config, the car_tiny model to car.tiny.config, and the people model to people.config.
## Docker

You can use a prebuilt docker image for testing:

```
docker pull scrin/second-pytorch
```

Then run:

```
nvidia-docker run -it --rm -v /media/yy/960evo/datasets/:/root/data -v $HOME/pretrained_models:/root/model --ipc=host second-pytorch:latest
python ./pytorch/train.py evaluate --config_path=./configs/car.config --model_dir=/root/model/car
...
```

There is currently a known problem: training and evaluating inside docker is very slow.
## Try Kitti Viewer Web

### Major steps

1. Run `python ./kittiviewer/backend.py main --port=xxxx` on your server or locally.
2. Run `cd ./kittiviewer/frontend && python -m http.server` to launch a local web server.
3. Open your browser and enter the frontend URL (e.g. http://127.0.0.1:8000 by default).
4. Input the backend URL (e.g. http://127.0.0.1:16666).
5. Input the root path, info path and det path (optional).
6. Click load, then loadDet (optional), input an image index at the center bottom of the screen and press Enter.

### Inference steps

First, the load button must be clicked and the data loaded successfully.

1. Input checkpointPath and configPath.
2. Click buildNet.
3. Click inference.

![GuidePic](https://raw.githubusercontent.com/traveller59/second.pytorch/master/images/viewerweb.png)

## Try Kitti Viewer (Deprecated)

You should use the kitti viewer, based on pyqt and pyqtgraph, to check data before training.

Run `python ./kittiviewer/viewer.py` and see the following picture for how to use the viewer:

![GuidePic](https://raw.githubusercontent.com/traveller59/second.pytorch/master/images/simpleguide.png)

## Concepts

* Kitti lidar box

A kitti lidar box consists of 7 elements: [x, y, z, w, l, h, rz]; see the figure.

![Kitti Box Image](https://raw.githubusercontent.com/traveller59/second.pytorch/master/images/kittibox.png)

All training and inference code uses the kitti box format, so other formats need to be converted to the KITTI format before training.

* Kitti camera box

A kitti camera box consists of 7 elements: [x, y, z, l, h, w, ry].

second/builder/dataset_builder.py (+1)

One line is added inside def build(input_reader_config, ...):

```Python
        gt_loc_noise_std=list(cfg.groundtruth_localization_noise_std),
        global_rotation_noise=list(cfg.global_rotation_uniform_noise),
        global_scaling_noise=list(cfg.global_scaling_uniform_noise),
        global_loc_noise_std=(0.2, 0.2, 0.2),  # <-- line added by this commit
        global_random_rot_range=list(
            cfg.global_random_rotation_range_per_object),
        db_sampler=db_sampler,
```

second/configs/pointpillars/README.md (+3, new file)

# PointPillars Configs

The configuration files in these directories can be used to reproduce the results published in PointPillars.
