Commit d7186ba
Merge SSF into codebase (#4)
* update ssf model
* add env dependencies, fix bug
* fix bug in ssf metric
* added details in readme
* docs: move cite at the end of README.
* style: change print style.
* think about whether have better way to print #number of pts.
* add SSF into model init file.
* env: strict spconv version to avoid error on sparse lib.
* update readme with link and add OpenPCSeg also.
* update rerun into the main env: #3
* hotfix(compile): split ssf_module with mmengine and torch scatter.
* so that users can run other models even without engine or scatter.
* move two packages into README for the extra package at least for now.... in case people struggling with setup env again.
* docs update ssf model link.
* docs(readme): update readme.

---------

Co-authored-by: Kin <kinzhangglimmer@gmail.com>
1 parent e089267 commit d7186ba

28 files changed: +2668 −105 lines

README.md (+37 −11)

@@ -1,9 +1,13 @@
 <p align="center">
+<a href="https://github.com/KTH-RPL/OpenSceneFlow">
 <picture>
 <img alt="opensceneflow" src="assets/docs/logo.png" width="600">
 </picture><br>
+</a>
 </p>

+💞 If you find [*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) useful to your research, please cite [**our works** 📖](#cite-us) and [give a star 🌟](https://github.com/KTH-RPL/OpenSceneFlow) as encouragement. (੭ˊ꒳​ˋ)੭✧
+
 OpenSceneFlow is a codebase for point cloud scene flow estimation.
 It is also an official implementation of the following papers (sorted by the time of publication):

@@ -27,10 +31,7 @@ European Conference on Computer Vision (**ECCV**) 2024
 International Conference on Robotics and Automation (**ICRA**) 2024
 [ Backbone ] [ Supervised ] - [ [arXiv](https://arxiv.org/abs/2401.16122) ] [ [Project](https://github.com/KTH-RPL/DeFlow) ] &rarr; [here](#deflow)

-
-💞 If you find *OpenSceneFlow* useful to your research, please cite [**our works** 📖](#cite-us) and give a star 🌟 as encouragement. (੭ˊ꒳​ˋ)੭✧
-
-🎁 <b>One repository, All methods!</b>
+🎁 <b>One repository, All methods!</b>
 Additionally, *OpenSceneFlow* integrates the following excellent works: [ICLR'24 ZeroFlow](https://arxiv.org/abs/2305.10424), [ICCV'23 FastNSF](https://arxiv.org/abs/2304.09121), [RA-L'21 FastFlow](https://arxiv.org/abs/2103.01306), [NeurIPS'21 NSFP](https://arxiv.org/abs/2111.01253). (More on the way...)

 <details> <summary> Summary of them:</summary>
@@ -43,7 +44,7 @@ Additionally, *OpenSceneFlow* integrates following excellent works: [ICLR'24 Zer

 </details>

-💡: Want to learn how to add your own network in this structure? Check [Contribute section](assets/README.md#contribute) and know more about the code. Fee free to pull request and your bibtex [here](#cite-us) by pull request.
+💡: Want to learn how to add your own network in this structure? Check the [Contribute section](assets/README.md#contribute) to learn more about the code. Feel free to open a pull request and add your bibtex [here](#cite-us).

 ---

@@ -102,7 +103,7 @@ Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download ins

 After downloading, convert the raw data to `.h5` format for easy training, evaluation, and visualization. Follow the steps in [dataprocess/README.md#process](dataprocess/README.md#process).

-For a quick start, use our **mini processed dataset**, which includes one scene in `train` and `val`. It is pre-converted to `.h5` format with label data ([Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip)).
+For a quick start, use our **mini processed dataset**, which includes one scene in `train` and `val`. It is pre-converted to `.h5` format with label data ([HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip)/[Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)).


 ```bash
@@ -114,7 +115,9 @@ Once extracted, you can directly use this dataset to run the [training script](#

 ## 2. Quick Start

-Don't forget to active Python environment before running the code.
+Don't forget to activate the Python environment before running the code.
+If you want to use [wandb](wandb.ai), replace every `entity="kth-rpl",` with your own entity; otherwise TensorBoard will be used locally.
+To free yourself from training, you can download pretrained weights from [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow); the detailed `wget` command is provided in each model section.

 ```bash
 mamba activate opensf
@@ -133,7 +136,28 @@ Pretrained weight can be downloaded through:
 wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/flow4d_best.ckpt
 ```

-<!-- ### SSF -->
+### SSF
+
+Extra packages needed for the SSF model:
+```bash
+pip install mmengine-lite torch-scatter
+```
+
+Train SSF with the leaderboard submit config. [Runtime: around 6 hours on 8x A100 GPUs.]
+
+```bash
+python train.py model=ssf lr=8e-3 epochs=25 batch_size=64 loss_fn=deflowLoss "voxel_size=[0.2, 0.2, 6]" "point_cloud_range=[-51.2, -51.2, -3, 51.2, 51.2, 3]"
+```
+
+Pretrained weight can be downloaded through:
+```bash
+# the leaderboard weight
+wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/ssf_best.ckpt
+
+# the long-range weight
+wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/ssf_long.ckpt
+```
+

 ### SeFlow

@@ -223,7 +247,8 @@ https://github.com/user-attachments/assets/07e8d430-a867-42b7-900a-11755949de21

 ## Cite Us

-*OpenSceneFlow* is designed by [Qingwen Zhang](https://kin-zhang.github.io/) from DeFlow and SeFlow project. If you find it useful, please cite our works:
+[*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) was originally designed by [Qingwen Zhang](https://kin-zhang.github.io/) during the DeFlow and SeFlow projects.
+If you find it useful, please cite our works:

 ```bibtex
 @inproceedings{zhang2024seflow,
@@ -251,7 +276,7 @@ https://github.com/user-attachments/assets/07e8d430-a867-42b7-900a-11755949de21
 }
 ```

-And our excellent collaborators works as followings:
+Our excellent collaborators' works have also contributed to this codebase:

 ```bibtex
 @article{kim2025flow4d,
@@ -272,6 +297,7 @@ And our excellent collaborators works as followings:
 }
 ```

+Thank you for your support! ❤️
 Feel free to contribute your method and add your bibtex here by pull request!

-❤️: [BucketedSceneFlowEval](https://github.com/kylevedder/BucketedSceneFlowEval); [Pointcept](https://github.com/Pointcept/Pointcept); [ZeroFlow](https://github.com/kylevedder/zeroflow) ...
+❤️: [BucketedSceneFlowEval](https://github.com/kylevedder/BucketedSceneFlowEval); [Pointcept](https://github.com/Pointcept/Pointcept); [OpenPCSeg](https://github.com/BAI-Yeqi/OpenPCSeg); [ZeroFlow](https://github.com/kylevedder/zeroflow) ...
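Aside: the Quick Start note above asks you to replace every `entity="kth-rpl",` with your own wandb entity. A minimal, hypothetical helper for that swap is sketched below; the `*.py` glob and the `your-entity` placeholder are assumptions, so adjust them to wherever the entity is actually set in your checkout.

```python
# Hypothetical one-off helper: swap the hard-coded wandb entity for your own.
from pathlib import Path

OLD, NEW = 'entity="kth-rpl",', 'entity="your-entity",'
for path in Path(".").rglob("*.py"):  # adjust the glob if the entity lives in config files instead
    text = path.read_text(encoding="utf-8")
    if OLD in text:
        path.write_text(text.replace(OLD, NEW), encoding="utf-8")
        print(f"updated {path}")
```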

assets/README.md (+5 −4)

@@ -69,6 +69,7 @@ Create base env: [~5 mins]

 ```bash
 git clone https://github.com/KTH-RPL/OpenSceneFlow.git
+cd OpenSceneFlow
 mamba env create -f assets/environment.yml
 ```

@@ -92,16 +93,16 @@ python -c "from assets.cuda.chamfer3D import nnChamferDis;print('successfully im

 ### Other issues

-1. looks like open3d and fire package conflict, not sure
+<!-- 1. looks like open3d and fire package conflict, not sure
 - need install package like `pip install --ignore-installed`, ref: [pip cannot install distutils installed project](https://stackoverflow.com/questions/53807511/pip-cannot-uninstall-package-it-is-a-distutils-installed-project), my error: `ERROR: Cannot uninstall 'blinker'.`
-- need specific werkzeug version for open3d 0.16.0, otherwise error: `ImportError: cannot import name 'url_quote' from 'werkzeug.urls'`. But need update to solve the problem: `pip install --upgrade Flask` [ref](https://stackoverflow.com/questions/77213053/why-did-flask-start-failing-with-importerror-cannot-import-name-url-quote-fr)
+- need specific werkzeug version for open3d 0.16.0, otherwise error: `ImportError: cannot import name 'url_quote' from 'werkzeug.urls'`. But need update to solve the problem: `pip install --upgrade Flask` [ref](https://stackoverflow.com/questions/77213053/why-did-flask-start-failing-with-importerror-cannot-import-name-url-quote-fr) -->


-2. `ImportError: libtorch_cuda.so: undefined symbol: cudaGraphInstantiateWithFlags, version libcudart.so.11.0`
+1. `ImportError: libtorch_cuda.so: undefined symbol: cudaGraphInstantiateWithFlags, version libcudart.so.11.0`
 The CUDA versions of `pytorch::pytorch-cuda` and `nvidia::cudatoolkit` need to be the same. [Reference link](https://github.com/pytorch/pytorch/issues/90673#issuecomment-1563799299)


-3. In cluster have error: `pandas ImportError: /lib64/libstdc++.so.6: version 'GLIBCXX_3.4.29' not found`
+2. On a cluster you may hit: `pandas ImportError: /lib64/libstdc++.so.6: version 'GLIBCXX_3.4.29' not found`
 Solved by `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/proj/berzelius-2023-154/users/x_qinzh/mambaforge/lib`

assets/cuda/mmdet/__init__.py (+5)

@@ -0,0 +1,5 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+from .conv import *
+from .norm import *
+from .plugin import *
+from .resnet import BasicBlock, Bottleneck

assets/cuda/mmdet/conv.py (+51)

@@ -0,0 +1,51 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import inspect
+from typing import Dict, Optional
+
+from mmengine.registry import MODELS
+from torch import nn
+
+MODELS.register_module('Conv1d', module=nn.Conv1d)
+MODELS.register_module('Conv2d', module=nn.Conv2d)
+MODELS.register_module('Conv3d', module=nn.Conv3d)
+MODELS.register_module('Conv', module=nn.Conv2d)
+
+
+def build_conv_layer(cfg: Optional[Dict], *args, **kwargs) -> nn.Module:
+    """Build convolution layer.
+
+    Args:
+        cfg (None or dict): The conv layer config, which should contain:
+            - type (str): Layer type.
+            - layer args: Args needed to instantiate an conv layer.
+        args (argument list): Arguments passed to the `__init__`
+            method of the corresponding conv layer.
+        kwargs (keyword arguments): Keyword arguments passed to the `__init__`
+            method of the corresponding conv layer.
+
+    Returns:
+        nn.Module: Created conv layer.
+    """
+    if cfg is None:
+        cfg_ = dict(type='Conv2d')
+    else:
+        if not isinstance(cfg, dict):
+            raise TypeError('cfg must be a dict')
+        if 'type' not in cfg:
+            raise KeyError('the cfg dict must contain the key "type"')
+        cfg_ = cfg.copy()
+
+    layer_type = cfg_.pop('type')
+    if inspect.isclass(layer_type):
+        return layer_type(*args, **kwargs, **cfg_)  # type: ignore
+    # Switch registry to the target scope. If `conv_layer` cannot be found
+    # in the registry, fallback to search `conv_layer` in the
+    # mmengine.MODELS.
+    with MODELS.switch_scope_and_registry(None) as registry:
+        conv_layer = registry.get(layer_type)
+        if conv_layer is None:
+            raise KeyError(f'Cannot find {conv_layer} in registry under scope '
+                           f'name {registry.scope}')
+        layer = conv_layer(*args, **kwargs, **cfg_)
+
+    return layer
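
For reference, a minimal usage sketch of the `build_conv_layer` helper added above, assuming `mmengine` and `torch` are installed (see the SSF extra packages in the README diff) and the repository root is on `PYTHONPATH`; the tensor shapes are only illustrative.

```python
# Usage sketch only; assumes the repo root is importable so that
# assets/cuda/mmdet resolves as a package.
import torch
from assets.cuda.mmdet import build_conv_layer

# cfg=None falls back to a plain nn.Conv2d.
conv = build_conv_layer(None, in_channels=3, out_channels=16, kernel_size=3, padding=1)

# Or select the layer type through the mmengine MODELS registry.
conv1d = build_conv_layer(dict(type='Conv1d'), 4, 8, kernel_size=1)

x = torch.randn(2, 3, 32, 32)
print(conv(x).shape)  # -> torch.Size([2, 16, 32, 32])
```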
