This repository was archived by the owner on Oct 13, 2021. It is now read-only.

Commit f097f08: release code

1 parent 2481962, commit f097f08

File tree: 129 files changed (+44017, -2 lines)


.gitignore

+108
@@ -0,0 +1,108 @@

```
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so
*.o
*.out

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

.vscode
```

README.md

+228, -2
@@ -1,2 +1,228 @@

Removed:

```
# second.pytorch
SECOND Kitti Object Detection
```

Added:

# SECOND for KITTI object detection
SECOND detector, based on my unofficial implementation of VoxelNet, with some improvements.

Only supports Python 3.6+ and PyTorch 0.4.1+; PyTorch 0.4.0 is not supported. Tested on Ubuntu 16.04/18.04.

### Performance on the KITTI validation set (50/50 split)

```
Car AP@0.70, 0.70, 0.70:
bbox AP:90.80, 88.97, 87.52
bev AP:89.96, 86.69, 86.11
3d AP:87.43, 76.48, 74.66
aos AP:90.68, 88.39, 86.57
Car AP@0.70, 0.50, 0.50:
bbox AP:90.80, 88.97, 87.52
bev AP:90.85, 90.02, 89.36
3d AP:90.85, 89.86, 89.05
aos AP:90.68, 88.39, 86.57
Cyclist AP@0.50, 0.50, 0.50:
bbox AP:95.99, 88.46, 87.92
bev AP:88.59, 86.03, 85.07
3d AP:88.36, 85.66, 84.51
aos AP:95.71, 88.10, 87.53
Cyclist AP@0.50, 0.25, 0.25:
bbox AP:95.99, 88.46, 87.92
bev AP:94.99, 87.04, 86.47
3d AP:94.99, 86.91, 86.41
aos AP:95.71, 88.10, 87.53
Pedestrian AP@0.50, 0.50, 0.50:
bbox AP:76.07, 67.04, 65.92
bev AP:74.21, 65.67, 64.24
3d AP:72.48, 63.89, 57.80
aos AP:70.14, 61.55, 60.53
Pedestrian AP@0.50, 0.25, 0.25:
bbox AP:76.07, 67.04, 65.92
bev AP:85.00, 75.40, 68.27
3d AP:85.00, 69.65, 68.26
aos AP:70.14, 61.55, 60.53
```

## Install

### 1. Clone the code

```bash
git clone https://github.com/traveller59/second.pytorch.git
cd ./second.pytorch/second
```

### 2. Install dependent Python packages

It is recommended to use the Anaconda package manager.

```bash
pip install shapely fire pybind11 pyqtgraph tensorboardX
```

If you don't have Anaconda:

```bash
pip install numba
```

Follow the instructions at https://github.com/facebookresearch/SparseConvNet to install SparseConvNet.

Install Boost geometry:

```bash
sudo apt-get install libboost-all-dev
```

### 3. Set up CUDA for numba

You need to add the following environment variables for numba.cuda; you can add them to ~/.bashrc:

```bash
export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
```
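To confirm that numba can actually find the CUDA libraries after setting these variables, a quick sanity check (a sketch; `numba.cuda.detect()` prints the CUDA devices numba can see) is:

```bash
python -c "from numba import cuda; cuda.detect()"
```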
### 4. Add second.pytorch/ to PYTHONPATH
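One way to do this permanently, assuming the repository was cloned to `$HOME/second.pytorch` (adjust the path to your clone), is to append an export to `~/.bashrc`:

```bash
echo 'export PYTHONPATH=$PYTHONPATH:$HOME/second.pytorch' >> ~/.bashrc
source ~/.bashrc
```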
## Prepare dataset

* Dataset preparation

Download the KITTI dataset and create these directories first:

```plain
└── KITTI_DATASET_ROOT
    ├── training        <-- 7481 train data
    |   ├── image_2     <-- for visualization
    |   ├── calib
    |   ├── label_2
    |   ├── velodyne
    |   └── velodyne_reduced <-- empty directory
    └── testing         <-- 7518 test data
        ├── image_2     <-- for visualization
        ├── calib
        ├── velodyne
        └── velodyne_reduced <-- empty directory
```

* Create kitti infos:

```bash
python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
```

* Create reduced point cloud:

```bash
python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT
```

* Create groundtruth-database infos:

```bash
python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT
```

* Modify config file

There are some paths that need to be configured in the config file:

```plain
train_input_reader: {
  ...
  database_sampler {
    database_info_path: "/path/to/kitti_dbinfos_train.pkl"
    ...
  }
  kitti_info_path: "/path/to/kitti_infos_train.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
...
eval_input_reader: {
  ...
  kitti_info_path: "/path/to/kitti_infos_val.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
```

## Try Kitti Viewer (Unstable)

You should use the KITTI viewer (based on PyQt and pyqtgraph) to check the data before training.

Before using the KITTI viewer, you need to modify some files in SparseConvNet because the pretrained model doesn't support SparseConvNet master:

* convolution.py

```Python
# self.weight = Parameter(torch.Tensor(
#     self.filter_volume, nIn, nOut).normal_(
#         0,
#         std))
self.weight = Parameter(torch.Tensor(
    self.filter_volume * nIn, nOut).normal_(
        0,
        std))
# ...
# output.features = ConvolutionFunction.apply(
#     input.features,
#     self.weight,
output.features = ConvolutionFunction.apply(
    input.features,
    self.weight.view(self.filter_volume, self.nIn, self.nOut),
```

* submanifoldConvolution.py

```Python
# self.weight = Parameter(torch.Tensor(
#     self.filter_volume, nIn, nOut).normal_(
#         0,
#         std))
self.weight = Parameter(torch.Tensor(
    self.filter_volume * nIn, nOut).normal_(
        0,
        std))
# ...
# output.features = SubmanifoldConvolutionFunction.apply(
#     input.features,
#     self.weight,
output.features = SubmanifoldConvolutionFunction.apply(
    input.features,
    self.weight.view(self.filter_volume, self.nIn, self.nOut),
```

Then run `python ./kittiviewer/viewer.py` and check the following picture to see how to use the KITTI viewer:
![GuidePic](https://github.com/traveller59/second.pytorch/tree/master/images/simpleguide.png)

## Usage

* train

```bash
python ./pytorch/train.py train --config_path=./configs/car.config --model_dir=/path/to/model_dir
```

Make sure "/path/to/model_dir" doesn't exist if you want to train a new model: a new directory is created if model_dir doesn't exist; otherwise the checkpoints in it are read and training continues from them.

* evaluate

```bash
python ./pytorch/train.py evaluate --config_path=./configs/car.config --model_dir=/path/to/model_dir
```

* pretrained model

You can download pretrained models from [Google Drive](https://drive.google.com/open?id=1eblyuILwbxkJXfIP5QlALW5N_x5xJZhL). The car model corresponds to car.config and the people model corresponds to people.config.
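For example, to evaluate the downloaded pretrained car model, point `--model_dir` at the unpacked checkpoint directory (a sketch; the path below is a placeholder):

```bash
# hypothetical path to the unpacked pretrained car model
python ./pytorch/train.py evaluate --config_path=./configs/car.config --model_dir=/path/to/pretrained_car_model
```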
## Concepts

* Kitti lidar box

A KITTI lidar box consists of 7 elements: [x, y, z, w, l, h, rz]; see the figure below.

![Kitti Box Image](https://github.com/traveller59/second.pytorch/tree/master/images/kittibox.png)

All training and inference code uses the KITTI box format, so other formats need to be converted to the KITTI format before training.

* Kitti camera box

A KITTI camera box consists of 7 elements: [x, y, z, l, h, w, ry].
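A minimal sketch of the two layouts (assuming NumPy; the values are made up and only illustrate the element ordering described above):

```Python
import numpy as np

# KITTI lidar box: [x, y, z, w, l, h, rz]
# center in lidar coordinates, width/length/height, rotation around the z axis
lidar_box = np.array([10.0, 2.0, -0.8, 1.6, 3.9, 1.5, 0.10])

# KITTI camera box: [x, y, z, l, h, w, ry]
# center in camera coordinates, length/height/width, rotation around the y axis
camera_box = np.array([2.0, 1.5, 10.0, 3.9, 1.5, 1.6, -0.10])
```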

images/kittibox.png

9.64 KB

images/simpleguide.png

499 KB

second/__init__.py

Whitespace-only changes.

second/__main__.py

+25
@@ -0,0 +1,25 @@

```Python
import os
import sys
from importlib import import_module

import fire
from google.protobuf import json_format, text_format

from codeai.tools.file_ops import scan

VOXELNET_CONFIG_PROTOS = "./protos"


def update_config(path, field, new_value):
    pass


def clean_config(path):
    pass


if __name__ == "__main__":
    method_name = sys.argv[1]
    module_name = ".".join(method_name.split(".")[:-1])
    obj = import_module(module_name, "second")
    fire.Fire(getattr(obj, (method_name.split(".")[-1])), command=sys.argv[2:])
```
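Read literally, this entry point imports the module named before the last dot and hands the named attribute to `fire.Fire` together with the remaining CLI arguments. A hypothetical invocation (module and function names are examples only; whether a given module resolves depends on what is on `sys.path`):

```bash
# hypothetical: dispatches to create_data.create_kitti_info_file through fire
python -m second create_data.create_kitti_info_file --data_path=KITTI_DATASET_ROOT
```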

second/builder/__init__.py

Whitespace-only changes.
