Fix scene flow ground truth generation in Argoverse 2 (#5)
* !fix(gt): expand the bounding box based on object speed to account for non-ego motion distortion in the data; see the pull request description for more details (a sketch of the idea follows this list).
* docs(README): fix typos in the README and in code comments.
* tested successfully with the Docker setup as well.
* fix(env): add a C++ compiler and pathtools to the environment to prevent potential errors when running the code.
* docs(bib): add HiMo to the citations as a reference for the fixed flow ground truth.
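The first bullet is the core of the change: when an object moves during a lidar sweep, its points are distorted by non-ego motion and can fall outside the annotated bounding box, so the box is padded in proportion to the object's speed before its points are collected for the flow ground truth. Below is a minimal sketch of that idea; `expand_box_extent` and `SWEEP_TIME_S` are illustrative names, not the repository's actual API.

```python
import numpy as np

# Assumed nominal duration of one lidar sweep; the real value should come
# from the sensor timestamps in the dataset.
SWEEP_TIME_S = 0.1

def expand_box_extent(extent_lwh: np.ndarray, speed_mps: float) -> np.ndarray:
    """Pad a box's (length, width, height) so points of a moving object,
    distorted by non-ego motion within one sweep, still fall inside it."""
    pad = speed_mps * SWEEP_TIME_S        # distance traveled within one sweep
    expanded = extent_lwh.astype(float).copy()
    expanded[:2] += 2.0 * pad             # pad both ends of length and width
    return expanded

# A car at 20 m/s moves ~2 m per sweep, so length and width grow by 4 m each:
print(expand_box_extent(np.array([4.5, 1.9, 1.6]), speed_mps=20.0))
# -> [8.5 5.9 1.6]
```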
💞 If you find *OpenSceneFlow* useful to your research, please cite [**our works** 📖](#cite-us) and give a star 🌟 as encouragement. (੭ˊ꒳ˋ)੭✧
🎁 <b>One repository, All methods!</b>

Additionally, *OpenSceneFlow* integrates the following excellent works: [ICLR'24 ZeroFlow](https://arxiv.org/abs/2305.10424), [ICCV'23 FastNSF](https://arxiv.org/abs/2304.09121), [RA-L'21 FastFlow](https://arxiv.org/abs/2103.01306), [NeurIPS'21 NSFP](https://arxiv.org/abs/2111.01253). (More on the way...)
<details> <summary> Summary of them:</summary>
</details>
💡: Want to learn how to add your own network in this structure? Check the [Contribute section](assets/README.md#contribute) to know more about the code, and feel free to open a pull request adding your bibtex [here](#cite-us).
---
## 0. Installation
There are two ways to install the codebase: directly on your [local machine](#environment-setup) or in a [Docker container](#docker-recommended-for-isolation).

### Environment Setup

Compile the CUDA packages (the nvcc compiler is already installed inside the conda env); the compile time is around 1-5 minutes:
```bash
mamba activate opensf
# CUDA is already installed in the Python environment; other versions (11.3, 11.4, 11.7, 11.8) were also tested and work
cd assets/cuda/mmcv && python ./setup.py install && cd ../../..
cd assets/cuda/chamfer3D && python ./setup.py install && cd ../../..
```
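If the compilation fails, it can help to first confirm that the environment's PyTorch build sees your GPU and which CUDA version it was compiled against. A quick check using only standard `torch` APIs (no project-specific modules):

```python
import torch

print(torch.__version__, torch.version.cuda)  # torch build and its CUDA version
print(torch.cuda.is_available())              # True if a GPU and a matching driver are visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))      # e.g. "NVIDIA GeForce RTX 3090"
```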
### Docker (Recommended for Isolation)

You can always choose [Docker](https://en.wikipedia.org/wiki/Docker_(software)), which gives you an isolated environment and frees you from installation. Pull the pre-built Docker image, or build it yourself:
```bash
# option 1: pull from docker hub
docker pull zhangkin/opensf
docker run -it --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data zhangkin/opensf

# it is better to recompile the CUDA extensions for your own GPU device:
cd /home/kin/workspace/OpenSceneFlow/assets/cuda/mmcv && /opt/conda/envs/opensf/bin/python ./setup.py install
cd /home/kin/workspace/OpenSceneFlow/assets/cuda/chamfer3D && /opt/conda/envs/opensf/bin/python ./setup.py install

mamba activate opensf
```
If you prefer to build the Docker image yourself (e.g., for a different architecture), run `cd OpenSceneFlow && docker build -t zhangkin/opensf .` and check the [build-docker-image](assets/README.md#build-docker-image) section for more details.
## 1. Data Preparation
Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download instructions. Currently, we support **Argoverse 2**, **Waymo**, and **custom datasets** (more datasets will be added in the future).
After downloading, convert the raw data to `.h5` format for easy training, evaluation, and visualization. Follow the steps in [dataprocess/README.md#process](dataprocess/README.md#process).

For a quick start, use our **mini processed dataset**, which includes one scene in `train` and `val`. It is pre-converted to `.h5` format with label data ([Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip)).
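To verify the download and get a feel for the layout, you can enumerate everything inside one of the `.h5` files. A minimal sketch assuming `h5py` is installed; the file path is illustrative, and the exact group/dataset names depend on the processing script:

```python
import h5py

# Point this at any scene file extracted from the demo archive (path is illustrative).
with h5py.File("demo_data/train/scene.h5", "r") as f:
    def show(name, obj):
        # Datasets carry a shape attribute; groups do not.
        print(name, getattr(obj, "shape", "(group)"))
    f.visititems(show)
```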