This guide contains information about required dependencies and how to run the TrackDLO ROS package.
Installation and execution of TrackDLO were verified with the dependencies below on an Ubuntu 20.04 system with ROS Noetic.
- ROS Noetic
- Eigen3 (Our version: 3.3.7)
- Point Cloud Library (Our version: 1.10.0)
- OpenCV
- NumPy
- SciPy
- scikit-image
- Pillow
- ROS Numpy
We also provide Docker files for compatibility with other system configurations. Refer to `DOCKER.md` for more information on using Docker, and see the Docker `requirements.txt` file for a list of the tested versions of TrackDLO package dependencies.
We used an Intel RealSense D435 camera in all of the experiments performed in our paper. We used the librealsense and realsense-ros packages for testing with the RealSense D435 camera and for obtaining the camera configuration file.
The repository is organized into the following directories:

```
├── trackdlo/   # contains TrackDLO class and corresponding ROS node
├── launch/     # contains ROS launch files for the camera and RViz
├── rviz/       # contains RViz configurations for visualization
├── config/     # contains camera configurations
└── utils/      # contains scripts used for testing and evaluation
```
First, create a ROS workspace. Next, enter its `src` directory:

```bash
cd YOUR_TRACKING_ROS_WORKSPACE/src
```

Clone the TrackDLO repository into this workspace and build the package:

```bash
git clone https://github.com/RMDLO/trackdlo.git
catkin build trackdlo
source ../devel/setup.bash
```
All configurable parameters for the TrackDLO algorithm are in `launch/trackdlo.launch`. Rebuilding the package is not required for parameter modifications to take effect. However, `catkin build` is required after modifying any C++ files. Remember that `source <YOUR_TRACKING_ROS_WS>/devel/setup.bash` is required in every terminal running TrackDLO ROS nodes.
To run TrackDLO, the three ROS topic names below should be modified in `launch/trackdlo.launch` to match the ROS topic names published by the user's RGB-D camera:

- `camera_info_topic`: a CameraInfo topic that contains the camera's projection matrix
- `rgb_topic`: an RGB image stream topic
- `depth_topic`: a depth image stream topic
Note: TrackDLO assumes the RGB image and its corresponding depth image are ALIGNED and SYNCHRONIZED. This means `depth_image(i, j)` should contain the depth value of pixel `rgb_image(i, j)`, and the two images should be published with the same ROS timestamp.
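The alignment assumption is what allows recovering a 3D point directly from an RGB pixel. A minimal plain-Python sketch of the standard pinhole back-projection, where `fx`, `fy`, `cx`, `cy` are intrinsics from the CameraInfo message (the values below are hypothetical):

```python
# Because depth and RGB are aligned, depth[v][u] is the metric depth of
# rgb[v][u], so a pixel can be back-projected to a 3D camera-frame point.

def back_project(u, v, depth, fx, fy, cx, cy):
    z = depth[v][u]            # depth (meters) at the SAME pixel as the RGB image
    x = (u - cx) * z / fx      # invert u = fx * x / z + cx
    y = (v - cy) * z / fy      # invert v = fy * y / z + cy
    return (x, y, z)

# A toy 4x4 aligned depth image where every pixel is 0.5 m away
depth = [[0.5] * 4 for _ in range(4)]
point = back_project(2, 1, depth, fx=600.0, fy=600.0, cx=2.0, cy=2.0)
```

If the two images were not aligned, `depth[v][u]` would belong to a different scene point than `rgb[v][u]` and this lookup would be invalid.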
TrackDLO uses color thresholding in Hue, Saturation, and Value (HSV) color space to obtain the DLO segmentation mask. A tutorial on how to obtain the HSV limits is provided in `COLOR_THRESHOLD.md`.
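For intuition, an illustrative re-implementation (not the node's actual code) of an HSV in-range check, using OpenCV's conventions of H in [0, 179] and S, V in [0, 255] and only the Python standard library:

```python
import colorsys

# Check whether an RGB pixel falls inside OpenCV-style HSV limits, the same
# convention used by the hsv_threshold_lower_limit / _upper_limit parameters.

def in_hsv_range(rgb, lower, upper):
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)      # each in [0, 1]
    hsv = (h * 180.0, s * 255.0, v * 255.0)     # rescale to OpenCV ranges
    return all(lo <= c <= hi for c, lo, hi in zip(hsv, lower, upper))

# A saturated blue pixel against example thresholds
print(in_hsv_range((0, 0, 255), lower=(90, 90, 30), upper=(130, 255, 255)))  # True
```

Pixels passing this test form the segmentation mask; everything else is treated as background.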
Other useful parameters:

- `num_of_nodes`: the number of nodes initialized for the DLO
- `result_frame_id`: the tf frame the tracking results (point cloud and marker array) will be published to
- `visualize_initialization_process`: if set to `true`, OpenCV windows will appear to visualize the results of each step in initialization. This is helpful for debugging in the event of initialization failures.
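To illustrate what `num_of_nodes` controls: the DLO is represented by a fixed number of nodes spread evenly along its length. A minimal sketch of even spacing along a straight segment (a hypothetical helper, not the actual initialization code, which follows the detected DLO shape):

```python
# Place num_of_nodes points at equal intervals between two 3D endpoints.

def evenly_spaced_nodes(start, end, num_of_nodes):
    (x0, y0, z0), (x1, y1, z1) = start, end
    nodes = []
    for i in range(num_of_nodes):
        t = i / (num_of_nodes - 1)          # fraction of the way along the DLO
        nodes.append((x0 + t * (x1 - x0),
                      y0 + t * (y1 - y0),
                      z0 + t * (z1 - z0)))
    return nodes

nodes = evenly_spaced_nodes((0.0, 0.0, 0.5), (0.3, 0.0, 0.5), num_of_nodes=4)
```

A larger `num_of_nodes` gives a finer discretization of the object at higher computational cost.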
Once all parameters in `launch/trackdlo.launch` are set to proper values, run TrackDLO with the following steps:

- Launch the RGB-D camera node.
- Launch RViz to visualize all published topics:
  ```bash
  roslaunch trackdlo visualize_output.launch
  ```
- Launch the TrackDLO node to publish tracking results:
  ```bash
  roslaunch trackdlo trackdlo.launch
  ```
The TrackDLO node outputs the following:

- `/trackdlo/results_marker`: the tracking result, with nodes visualized as spheres and edges as cylinders, in MarkerArray format
- `/trackdlo/results_pc`: the tracking results in PointCloud2 format
- `/trackdlo/results_img`: the tracking results projected onto the received input RGB image, in RGB Image format
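An image overlay like `results_img` can be formed by projecting each tracked 3D node onto the RGB image with the 3x4 projection matrix `P` carried by the CameraInfo message. A sketch with a hypothetical `P`:

```python
# Project a 3D camera-frame point into pixel coordinates with a 3x4
# projection matrix (row-major nested lists, as in CameraInfo's P field).

def project(P, point):
    x, y, z = point
    u = P[0][0] * x + P[0][1] * y + P[0][2] * z + P[0][3]
    v = P[1][0] * x + P[1][1] * y + P[1][2] * z + P[1][3]
    w = P[2][0] * x + P[2][1] * y + P[2][2] * z + P[2][3]
    return (u / w, v / w)                   # pixel coordinates (column, row)

# Hypothetical pinhole projection matrix: fx = fy = 600, cx = 320, cy = 240
P = [[600.0,   0.0, 320.0, 0.0],
     [  0.0, 600.0, 240.0, 0.0],
     [  0.0,   0.0,   1.0, 0.0]]
pixel = project(P, (0.0, 0.0, 0.5))   # a node 0.5 m straight ahead of the camera
```

A point on the optical axis lands at the principal point `(cx, cy)`, here `(320.0, 240.0)`.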
This package was tested using an Intel RealSense D435 camera. The exact camera configurations used are provided in `/config/preset_decimation_4.0_depth_step_100.json` and can be loaded into the camera using the launch files from `realsense-ros`. Run the following commands to start the RealSense camera and the tracking node:

- Launch an RViz window visualizing the color image, mask, and tracking result (in both the image and the 3D point cloud) with
  ```bash
  roslaunch trackdlo realsense_node.launch
  ```
- Launch the TrackDLO tracking node and publish messages containing the estimated node positions defining the object shape with
  ```bash
  roslaunch trackdlo trackdlo.launch
  ```
- Download one of the provided experiment `.bag` files here. Note: the file sizes are large; each requires 5-10 GB of storage!
- Open a new terminal and run
  ```bash
  roslaunch trackdlo visualize_output.launch bag:=True
  ```
- In another terminal, run the command below to start the tracking algorithm with the parameters used with the `stationary.bag`, `perpendicular_motion.bag`, and `parallel_motion.bag` files for quantitative evaluation in the TrackDLO paper:
  ```bash
  roslaunch trackdlo trackdlo_eval.launch
  ```
  If testing any of the other provided `.bag` files, run the command below instead:
  ```bash
  roslaunch trackdlo trackdlo.launch
  ```
- Open a third terminal and run the command below to replay the `.bag` file and publish its topics:
  ```bash
  rosbag play --clock /path/to/filename.bag
  ```
Occlusion can also be injected using our provided `simulate_occlusion_eval.py` script. Run the command below and draw bounding rectangles for the occlusion mask in the graphical user interface that appears:

```bash
rosrun trackdlo simulate_occlusion_eval.py
```
The ROS bag files used in our paper and the supplementary video can be found here. The `experiment` folder is organized into the following directories:

```
├── rope/              # contains the three experiments performed with a rope in the supplementary video
├── rubber_tubing/     # contains the three experiments performed with rubber tubing in the supplementary video
├── failure_cases/     # contains the three failure cases shown in the supplementary video
└── quantitative_eval/ # contains the bag files used for quantitative evaluation in our paper
```
- The rope and the rubber tubing require different HSV thresholding values. Both objects use the `hsv_threshold_upper_limit` default value `130 255 255`; however, the rope uses the `hsv_threshold_lower_limit` default value `90 90 30`, while the rubber tubing uses the `hsv_threshold_lower_limit` default value `100 200 60`.
- For `.bag` files in `rope/`, `rubber_tubing/`, and `failure_cases/`, the `camera_info_topic` is published under `/camera/aligned_depth_to_color/camera_info`. For `.bag` files in `quantitative_eval/`, the `camera_info_topic` is published under `/camera/color/camera_info`.