This repository is dedicated to comparing different place recognition methods on the HeLiPR dataset. We provide code for the following methods:
Note (2024/09/26): Currently, only the validation code for each method has been tested. The training code will be tested and updated.
Method | Status |
---|---|
PointNetVLAD | Complete (24.09.25) |
LoGG3D-Net | Complete (24.09.26) |
MinkLoc3Dv2 | Complete (24.09.25) |
CROSSLOC3D | Complete (24.09.24) |
CASSPR | Complete (24.09.25) |
SOLID | Complete (24.09.24) |
HeLiOS | To Do |
- PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition [Original Code] (CVPR 2018)
- LoGG3D-Net: Locally Guided Global Descriptor Learning for 3D Place Recognition [Original Code] (ICRA 2022)
- MinkLoc3Dv2: Improving Point Cloud Based Place Recognition with Ranking-based Loss and Large Batch Training [Original Code] (ICPR 2022)
- CrossLoc3D: Aerial-Ground Cross-Source 3D Place Recognition [Original Code] (ICCV 2023)
- CASSPR: Cross Attention Single Scan Place Recognition [Original Code] (ICCV 2023)
- SOLID: Spatially Organized and Lightweight Global Descriptor for FOV-constrained LiDAR Place Recognition [Original Code] (RA-L 2024)
- HeLiOS: Heterogeneous LiDAR Place Recognition via Overlap-based Learning and Local Spherical Transformer [Original Code] (ICRA 2025 submission)
This table compares the methods on the HeLiPR dataset using 3D point cloud data with 8,192 points per scan. To ensure fairness, we use identical parameter settings across all methods. Average Recall@1 and Average Recall@5 are used as evaluation metrics, with each value representing the average of 4 to 6 results on the HeLiPR dataset.
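For reference, Recall@N counts a query as correct if any of its N nearest database descriptors is a true positive. The following is a minimal sketch of that computation; the descriptor arrays and ground-truth positive lists are placeholders, not the repository's actual evaluation code:

```python
import numpy as np

def recall_at_n(db_desc, query_desc, positives, n=1):
    """Fraction of queries whose top-n retrieved database entries
    contain at least one ground-truth positive.

    db_desc:    (D, F) array of database descriptors
    query_desc: (Q, F) array of query descriptors
    positives:  list of length Q; positives[i] is a set of database
                indices that are true matches for query i
    """
    hits, valid = 0, 0
    for i, q in enumerate(query_desc):
        if len(positives[i]) == 0:
            continue  # queries without any positive are skipped
        valid += 1
        dists = np.linalg.norm(db_desc - q, axis=1)  # distance in descriptor space
        top_n = np.argsort(dists)[:n]                # indices of the n closest database entries
        if any(idx in positives[i] for idx in top_n):
            hits += 1
    return hits / max(valid, 1)

# Average Recall@1 / Recall@5 is then the mean of these values over the
# evaluated sequence pairs.
```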
- Ouster - Narrow (Narrow FOV LiDAR data from the Ouster sensor)
  - Database: `Seq01-Ouster`
  - Query: `Seq01-Aeva`, `Seq01-Livox`, `Seq02-Aeva`, `Seq02-Livox`, `Seq03-Aeva`, `Seq03-Livox`
- Aeva - Wide (Wide FOV LiDAR data from the Aeva sensor)
  - Database: `Seq01-Aeva`
  - Query: `Seq01-Ouster`, `Seq01-Velodyne`, `Seq02-Ouster`, `Seq02-Velodyne`, `Seq03-Ouster`, `Seq03-Velodyne`
For the Bridge sequences, we grouped Bridge01-04 and Bridge02-03 to ensure sufficient overlap between the database and query scans. In the Bridge01-04 cases:

- Ouster - Narrow
  - Database: `Seq01-Ouster`
  - Query: `Seq01-Ouster`, `Seq01-Velodyne`, `Seq02-Ouster`, `Seq02-Velodyne`
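If you script over these setups, the database/query pairings listed above can be collected into a small lookup table. The sketch below is our own convenience structure and not part of the repository; folder names follow the sequence-sensor convention used throughout this README:

```python
# Database/query pairings as listed above (not part of the repository code).
EVAL_SETUPS = {
    "Ouster-Narrow": {
        "database": "Seq01-Ouster",
        "queries": ["Seq01-Aeva", "Seq01-Livox", "Seq02-Aeva",
                    "Seq02-Livox", "Seq03-Aeva", "Seq03-Livox"],
    },
    "Aeva-Wide": {
        "database": "Seq01-Aeva",
        "queries": ["Seq01-Ouster", "Seq01-Velodyne", "Seq02-Ouster",
                    "Seq02-Velodyne", "Seq03-Ouster", "Seq03-Velodyne"],
    },
    "Bridge01-04-Ouster-Narrow": {
        "database": "Seq01-Ouster",
        "queries": ["Seq01-Ouster", "Seq01-Velodyne",
                    "Seq02-Ouster", "Seq02-Velodyne"],
    },
}
```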
Please download the validation dataset from here. This link contains:
- Sampled point cloud data from the HeLiPR dataset (Roundabout, Town, and Bridge)
- Checkpoint files for each method
- Overlap matrix files
Each scan contains 8,192 points and is sampled at 5m intervals. If you wish to test custom settings, please use the HeLiPR-Pointcloud-Toolbox.
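If you want to inspect a sampled scan directly, it can be read with NumPy. This is only a sketch: it assumes each `.bin` file stores its points as consecutive float32 fields starting with XYZ, which depends on the export settings of the HeLiPR-Pointcloud-Toolbox, so adjust the dtype and field count to your data.

```python
import numpy as np

# Assumption: flat float32 buffer with NUM_FIELDS values per point (XYZ first).
# Verify against the HeLiPR-Pointcloud-Toolbox export settings before relying on this.
NUM_FIELDS = 3
scan = np.fromfile("path/to/time0.bin", dtype=np.float32).reshape(-1, NUM_FIELDS)
print(scan.shape)   # expected (8192, NUM_FIELDS) for the sampled validation scans
xyz = scan[:, :3]   # XYZ coordinates used by the place recognition methods
```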
Place the sequences, overlap matrices, and checkpoint files in the `data_validation/`, `data_overlap/`, and `data_ckpt/` folders, respectively. The directory structure should look like:
```
HeLiPR-Place-Recognition
├── model_X
├── data_validation
│   ├── SequenceA-Sensor1
│   │   ├── LiDAR
│   │   │   ├── time0.bin
│   │   │   ├── time1.bin
│   │   │   └── ...
│   │   └── trajectory.csv
│   ├── SequenceB-Sensor2
│   │   ├── LiDAR
│   │   │   ├── time0.bin
│   │   │   ├── time1.bin
│   │   │   └── ...
│   │   └── trajectory.csv
│   └── ...
├── data_overlap
│   ├── overlap_matrix_validation_SequenceA.txt
│   ├── overlap_matrix_validation_SequenceB.txt
│   └── ...
├── data_ckpt
│   ├── X_ckpt.pth
│   ├── Y_ckpt.pth
│   └── ...
```
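Before running anything, a quick sanity check of this layout can save time. The following helper is our own (not part of the repository); it confirms that every sequence folder has a `LiDAR` directory and a `trajectory.csv`, and lists the overlap matrices and checkpoints:

```python
from pathlib import Path

root = Path(".")  # run from the HeLiPR-Place-Recognition repository root

# Check each sequence folder for scans and a trajectory file.
for seq in sorted((root / "data_validation").iterdir()):
    if not seq.is_dir():
        continue
    n_scans = len(list((seq / "LiDAR").glob("*.bin")))
    has_traj = (seq / "trajectory.csv").is_file()
    print(f"{seq.name}: {n_scans} scans, trajectory.csv {'found' if has_traj else 'MISSING'}")

# Overlap matrices and checkpoints are plain file listings.
for f in sorted((root / "data_overlap").glob("overlap_matrix_validation_*.txt")):
    print("overlap matrix:", f.name)
for f in sorted((root / "data_ckpt").glob("*.pth")):
    print("checkpoint:", f.name)
```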
We provide a `Dockerfile` for each method. You can build the Docker image and run the code inside a Docker container. We tested this code on NVIDIA RTX 3090 and 3080 GPUs.
```
docker build -t helipr_evaluation .

docker run --gpus all -dit --env="DISPLAY" --net=host --ipc=host \
  --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  -v /:/mydata --volume /dev/:/dev/ \
  helipr_evaluation:latest /bin/bash
```
Note: You can adjust the Docker run command according to your environment.
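Once inside the container, you can confirm that the GPU is actually visible before launching a method. This assumes the image provides a CUDA-enabled PyTorch, which the per-method Dockerfiles are expected to install:

```python
import torch

# Should print True and the device name (e.g. an RTX 3090/3080) when
# --gpus all and the NVIDIA container runtime are set up correctly.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```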
```
git clone https://github.com/minwoo0611/HeLiPR-Place-Recognition
cd HeLiPR-Place-Recognition
```
Before proceeding, ensure that you've downloaded the validation dataset and placed it in the appropriate folders as described above.
Run the following script:
```
python generate_test_sets.py
```
In the `generate_test_sets.py` script, modify the following variables to match your data setup (example values are shown in the sketch after this list):

- `base_path`: Base directory where your data is located
- `overlap_matrix`: Name of the overlap matrix file
- `location`: Name of the sequence
- `db_folder`: Database sequence-sensor
- `query_folder`: Query sequence-sensor
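For illustration, the assignments inside `generate_test_sets.py` might look like the following; all values here are hypothetical placeholders that follow the folder-naming convention shown above, so substitute your actual paths and sequence-sensor pairs:

```python
# Hypothetical example values -- adapt to your own layout and evaluation pair.
base_path = "/mydata/HeLiPR-Place-Recognition/data_validation/"   # where the sequences live
overlap_matrix = "overlap_matrix_validation_SequenceA.txt"        # file in data_overlap/
location = "SequenceA"                                            # sequence name
db_folder = "SequenceA-Sensor1"                                   # database sequence-sensor
query_folder = "SequenceA-Sensor2"                                # query sequence-sensor
```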
After running the script, you should find two files in `base_path`:

- `helipr_validation_db.pickle`
- `helipr_validation_query.pickle`
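To verify that both pickles were written correctly, you can load them and print a brief summary; the internal structure of the stored objects is defined by `generate_test_sets.py`, so this sketch only inspects the top level:

```python
import pickle

for name in ("helipr_validation_db.pickle", "helipr_validation_query.pickle"):
    with open(name, "rb") as f:   # run from base_path, where the files are written
        data = pickle.load(f)
    size = len(data) if hasattr(data, "__len__") else "n/a"
    print(f"{name}: {type(data).__name__}, {size} top-level entries")
```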
Navigate to the directory of the method you wish to test and follow the instructions in its `README.md`.
```
cd model_X
# Follow the instructions in model_X/README.md
```
If you find this repository useful, please cite the following papers:
```
@article{jung2024heteropr,
  author  = {Minwoo Jung and Sangwoo Jung and Hyeonjae Gil and Ayoung Kim},
  title   = {HeLiOS: Heterogeneous LiDAR Place Recognition via Overlap-based Learning and Local Spherical Transformer},
  journal = {ICRA 2025 submission},
  year    = {2024}
}

@article{jung2024hetero,
  author  = {Minwoo Jung and Wooseong Yang and Dongjae Lee and Hyeonjae Gil and Giseop Kim and Ayoung Kim},
  title   = {HeLiPR: Heterogeneous LiDAR dataset for inter-LiDAR place recognition under spatiotemporal variations},
  journal = {The International Journal of Robotics Research},
  year    = {2024}
}
```
For any questions, please contact us at moonshot@snu.ac.kr or create an issue in this repository.