# DeFlow: Decoder of Scene Flow Network in Autonomous Driving

This work will be presented at ICRA'24.

Task: Scene Flow Estimation in Autonomous Driving.

Pre-trained weights for the models are available at this [OneDrive link](https://hkustconnect-my.sharepoint.com/:f:/g/personal/qzhangcb_connect_ust_hk/Et85xv7IGMRKgqrVeJEVkMoB_vxlcXk6OZUyiPjd4AArIg?e=lqRGhx).

Check usage in [2. Evaluation](#2-evaluation) or [3. Visualization](#3-visualization).

**Scripts** quick view:

- `dataprocess/extract_*.py`: pre-process data before training to speed up overall training time.
  [Datasets included now: Argoverse 2; more on the way: Waymo, nuScenes, and custom data.]
- `1_train.py`: train the model and save checkpoints. Please remember to check the config.
- `2_eval.py`: evaluate the model on the validation/test set, and also upload results to the online leaderboard.
- `3_vis.py`: visualize the results as a video.

Another environment setup choice is [Docker](https://en.wikipedia.org/wiki/Docker_(software)), which gives you an isolated environment; you can pull the image as shown below. If you have a different architecture, please build it yourself with `cd DeFlow && docker build -t zhangkin/deflow .` by going through the [build-docker-image](assets/README.md/#build-docker-image) section.

```bash
# option 1: pull from docker hub
docker pull zhangkin/deflow

# run container
docker run -it --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data --name deflow zhangkin/deflow /bin/zsh
```

## 1. Train

Download tips are in [dataprocess/README.md](dataprocess/README.md#argoverse-20).

### Prepare Data

Normally it takes 10-45 minutes in total to finish running the following commands (my computer: 15 minutes; our cluster: 40 minutes).

There are two ways to set up the environment: conda on your desktop, or an isolated Docker container.

## Docker Environment

### Build Docker Image

If you want to build a Docker image that compiles everything inside, a few things need to be set up in your desktop environment first:

- [NVIDIA driver](https://www.nvidia.com/download/index.aspx): most people probably already have it. Try `nvidia-smi` to check.

Then follow [this Stack Overflow answer](https://stackoverflow.com/questions/59691207/docker-build-with-nvidia-runtime):

1. Edit/create `/etc/docker/daemon.json` with the content:

   ```json
   {
       "runtimes": {
           "nvidia": {
               "path": "/usr/bin/nvidia-container-runtime",
               "runtimeArgs": []
           }
       },
       "default-runtime": "nvidia"
   }
   ```

2. Restart the Docker daemon:

   ```bash
   sudo systemctl restart docker
   ```

3. Then you can build the Docker image:

   ```bash
   cd DeFlow && docker build -t zhangkin/deflow .
   ```

## Installation

We will use conda to manage the environment, with mamba for faster package installation.

Test the installation:

```bash
python -c "import lightning.pytorch as pl"
python -c "from mmcv.ops import Voxelization, DynamicScatter;print('success test on mmcv package')"
```

## Dataset Download

We note down the dataset download and its details here.

### Download

Since we focus on large point cloud datasets in autonomous driving, we chose Argoverse 2 as our dataset; you can also easily process other driving datasets in this framework. References: [3d_scene_flow user guide](https://argoverse.github.io/user-guide/tasks/3d_scene_flow.html), [Online Leaderboard](https://eval.ai/web/challenges/challenge-page/2010/evaluation).

Then, to quickly pre-process the data, run the following command to generate the pre-processed data for training and evaluation. This takes around 2 hours for the whole dataset (train & val), depending on how powerful your CPU is.
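
The extraction scripts produce one `.h5` file per scene, keyed by timestamp (the convention described in the contribution tips below). Here is a minimal sketch for inspecting such a file, assuming `h5py`; the path and field layout are illustrative only:

```python
# Minimal sketch to inspect a pre-processed scene file.
# Assumptions: h5py is installed; the .h5 layout follows the
# scene/timestamp convention; the path below is hypothetical.
import h5py

scene_file = "/home/kin/data/av2/preprocess/sensor/train/scene_0001.h5"  # hypothetical path

with h5py.File(scene_file, "r") as f:
    timestamps = sorted(f.keys())      # each key is one frame's timestamp
    print(f"{len(timestamps)} frames, first: {timestamps[0]}")
    frame = f[timestamps[0]]
    for name, dset in frame.items():   # e.g. point cloud, ground mask, flow
        print(name, dset.shape, dset.dtype)
```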

If you want to contribute a new model, here are tips you can follow:

1. Dataloader: we believe all data can be processed into `.h5` files, one per scene; inside a scene, the key of each frame is its timestamp.
2. Model: all model files can be found [here: scripts/network/models](../scripts/network/models). You can look at deflow and fastflow3d to see how to implement a new model; a minimal sketch follows this list.
3. Loss: all loss functions can be found [here: scripts/network/loss_func.py](../scripts/network/loss_func.py). There are already three loss functions in the file; you can add a new one following the same pattern.
4. Training: once you have implemented the model, add it to the config files [here: conf/model](../conf/model) and train it with `python 1_train.py model=your_model_name`. One more note: if your model's output `res_dict` is different, you may need to add a matching pattern in `def training_step` and `def validation_step`.
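
As a rough illustration of tip 2, here is a minimal model sketch. It assumes a plain `torch.nn.Module`; the real base class, input format, and required `res_dict` keys should be copied from the existing models in `scripts/network/models`:

```python
# Minimal sketch of a new model following the pattern described above.
# Assumptions: plain torch.nn.Module; the exact inputs and res_dict keys
# must be matched against the existing models in scripts/network/models.
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        # toy per-point MLP standing in for a real voxelize/encode/decode stack
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 3),  # a 3D flow vector per point
        )

    def forward(self, pc0: torch.Tensor, pc1: torch.Tensor) -> dict:
        # pc0, pc1: [B, N, 3] consecutive point clouds (illustrative signature)
        feats = torch.cat([pc0, pc1], dim=-1)  # naive per-point pairing of frames
        flow = self.mlp(feats)                 # predict per-point 3D flow
        return {"flow": flow}                  # read by training_step/validation_step
```

With a matching config added under `conf/model` (e.g. a hypothetical `my_model.yaml`), training would then be launched as `python 1_train.py model=my_model`.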