
Demo videos

  • ORB-SLAM3: a feature-based (indirect) method for simultaneous localization and mapping; thanks to loop closure and global pose optimisation (Bundle Adjustment), it keeps the two loops aligned. YouTube Link
    slammy_zed_two_loops.gif
  • Stereo Direct Sparse Odometry (Stereo-DSO): a photometric, direct method. Although it has no loop closure or global optimisation (strictly speaking, it is an odometry method), it still produces a reasonable result with a much denser map than ORB-SLAM. YouTube Link
    slammy_zed_two_loops.gif
  • Depth map and point cloud: the depth map is computed from the stereo images captured by the Zed2 stereo camera. Using the camera intrinsics, the pixels are re-projected into 3D space to obtain a 3D point cloud. The visualisation of the point cloud shows clutter and temporal inconsistency; fusing such a noisy depth-map sequence is a challenging task. YouTube Link
    slammy_zed_two_loops.gif
  • Volumetric dense mapping: TSDF-based volumetric dense mapping is the de facto standard for fusing depth maps with known camera poses. Here we use the camera poses estimated by ORB-SLAM3, as they have the better global consistency thanks to the global optimisation. YouTube Link
    slammy_zed_two_loops.gif

Trajectory estimation results compared to ground truth

The trajectories estimated by the different methods are evaluated against the ground truth; the evaluation is carried out in a MATLAB script
results/example_compare_trajectories_vision.jpg

How to set up Jetson NX board

How to install Zed2 camera driver and its toolbox

How to record a rosbag file

  • Terminal 1: launch the Zed2 driver with roslaunch zed_wrapper zed2.launch
  • Terminal 2: start the recording with sudo ./data/record.sh (sudo is needed to correct the system date); a sketch of what such a script might contain follows the topic list below
  • The rosbag and the two image sequences of the stereo cameras from the "two_loops" run can be downloaded from here:
    https://webdav.ifp.uni-stuttgart.de/s/SycT9Ya96ikBHsS
  • A full list of rosbag topics is shown below:

ros_topic_list
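
For reference, a minimal recording script could look like the sketch below. This is only an assumption about what data/record.sh contains; the actual topic selection, clock-sync method, and output name may differ.

```bash
#!/bin/bash
# Hypothetical sketch of data/record.sh -- check the actual script in the repo.
# The system date can be wrong after boot, so correct it first (hence the sudo
# requirement); taking the date from an HTTP header is one simple workaround.
date -s "$(curl -sI https://google.com | grep -i '^date:' | cut -d' ' -f2- | tr -d '\r')"

# Record the Zed2 topics needed for SLAM and dense mapping
# (topic selection is an assumption; compare with the topic list above).
rosbag record -O two_loops.bag \
    /zed2/zed_node/left/image_rect_color \
    /zed2/zed_node/right/image_rect_color \
    /zed2/zed_node/depth/depth_registered \
    /zed2/zed_node/rgb/camera_info \
    /zed2/zed_node/imu/data
```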

How to visualize a point cloud from the depth map

  • The depth map, published on the topic /zed2/zed_node/depth/depth_registered with message type sensor_msgs/Image, is ready to be visualized as a 3D point cloud using the built-in Rviz display plug-in DepthCloud
  • To let the plug-in find the right messages, the fields Depth Map Topic and Color Image Topic must be set; an example rviz config can be found here
  • Additionally, the depth map is accompanied by a confidence map, which can also be visualized using the image display plug-in
  • For more info see DepthCloud
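
A minimal command-line session for this could look as follows; the rviz config filename is an assumption based on the repository layout:

```bash
# Terminal 1: start the Zed2 driver so the depth and color topics are published
roslaunch zed_wrapper zed2.launch

# Terminal 2: open Rviz with a config that has a DepthCloud display set up,
# with Depth Map Topic and Color Image Topic already pointing at the Zed topics
# (the path below is hypothetical; use the example config from this repo)
rosrun rviz rviz -d configs/rviz/zed2_depthcloud.rviz
```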

How to run ORB-SLAM3

  • Configure ORB-SLAM3

  • Adapt the camera parameters for the Zed2 camera

    • Look up the parameters with rostopic echo /zed2/zed_node/rgb/camera_info
    • Note that the topic name of the image message also needs to be adapted in the source code.
    • An example can be found in configs/orb-slam3
  • Run ORB-SLAM3 with Zed images

    • Online with images streamed from ros topics

      • One terminal for the Zed driver: roslaunch zed_wrapper zed2.launch
      • In a second terminal, switch to the ORB-SLAM3 directory: cd Software/projects/ORB_SLAM3
      • rosrun ORB_SLAM3 Mono Vocabulary/ORBvoc.txt Examples/Monocular/Zed2.yaml (monocular setting)
      • rosrun ORB_SLAM3 Stereo Vocabulary/ORBvoc.txt Examples/Stereo/Zed2.yaml 0 (stereo setting)
    • Offline with a recorded rosbag (a complete offline session is sketched after this list)

      • Play a pre-recorded rosbag containing the Zed images: rosbag play 2021-xx-xxxx.bag /zed/left/image_raw:=/camera/image_raw
      • Switch to the directory: cd Software/projects/ORB_SLAM3
      • rosrun ORB_SLAM3 Mono Vocabulary/ORBvoc.txt Examples/Monocular/Zed2.yaml (monocular setting)
      • rosrun ORB_SLAM3 Stereo Vocabulary/ORBvoc.txt Examples/Stereo/Zed2.yaml 0 (stereo setting)
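
Putting the offline steps together, a typical session might look like the sketch below. The intrinsics-to-YAML mapping is the usual one for ORB-SLAM-style configs and the bag name is a placeholder; take the actual values from your own camera_info output.

```bash
# 1. Look up the Zed2 intrinsics once; in the camera_info message,
#    K = [fx 0 cx; 0 fy cy; 0 0 1] and D holds the distortion coefficients.
#    These map to Camera.fx/fy/cx/cy and Camera.k1/k2/p1/p2 in Zed2.yaml.
rostopic echo -n 1 /zed2/zed_node/rgb/camera_info

# 2. Terminal 1: replay a recorded bag, remapping the left image onto the
#    topic the ORB-SLAM3 Mono node subscribes to.
rosbag play 2021-xx-xxxx.bag /zed/left/image_raw:=/camera/image_raw

# 3. Terminal 2: run ORB-SLAM3 (monocular shown; use the Stereo executable
#    and Examples/Stereo/Zed2.yaml for the stereo setting).
cd ~/Software/projects/ORB_SLAM3
rosrun ORB_SLAM3 Mono Vocabulary/ORBvoc.txt Examples/Monocular/Zed2.yaml
```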

How to run DSO

  • Clone and build DSO project

    • Link: https://github.com/JakobEngel/dso.git
    • Clone to ~/Software/projects/dso and follow the project instructions to build it
  • Build DSO ROS wrapper

    • Link: https://github.com/BlueWhaleRobot/dso_ros
    • Clone to the catkin workspace ~/Software/src/dso_ros
    • Set DSO_PATH before the build: export DSO_PATH=/home/jetson/Software/projects/dso
    • Note: this ROS wrapper contains some bug fixes, which is why we choose it over the original
  • Run DSO offline

    • Play a pre-recorded rosbag containing the Zed images: rosbag play 2021-xx-xxxx.bag
    • rosrun dso_ros dso_live image:=/zed2/zed_node/rgb/image_rect_gray calib=/home/jetson/Software/src/dso_ros/zed2.txt mode=1 (the expected layout of the calib file is sketched below)
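
DSO reads its geometric calibration from a plain-text file. A minimal pinhole example is sketched below; the intrinsics placeholders and the 1280x720 resolution are assumptions, so substitute the values from the camera_info topic before running:

```bash
# Write a hypothetical zed2.txt for DSO's pinhole camera model.
# Line 1: model and intrinsics, line 2: input resolution,
# line 3: rectification mode, line 4: output resolution.
# Replace fx fy cx cy with the numeric values from camera_info.
cat > /home/jetson/Software/src/dso_ros/zed2.txt <<'EOF'
Pinhole fx fy cx cy 0
1280 720
crop
1280 720
EOF
```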

How to run Stereo-DSO

  • Clone and build Stereo-DSO project
    • Link: https://github.com/HorizonAD/stereo_dso
    • Clone to ~/Software/projects/stereo_dso and follow the project instructions to build it
  • Build Stereo-DSO ROS wrapper
    • Link: https://github.com/LinHuican/stereo_dso_ros
    • Clone to catkin workspace ~/Software/src/stereo_dso_ros
    • Adjust STEREO_DSO_PATH in CMakeLists.txt: set(STEREO_DSO_PATH /home/jetson/Software/projects/stereo_dso)
    • Note: we found that the parameter setting setting_kfGlobalWeight=1.0f suits our data better, so please update it in stereo_dso_ros.cpp (the build steps are summarized below)
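
In summary, the build could look like the following sketch; it assumes ~/Software is the catkin workspace root, as the paths above suggest, and that catkin_make is used:

```bash
# Clone the core project and its ROS wrapper
git clone https://github.com/HorizonAD/stereo_dso ~/Software/projects/stereo_dso
git clone https://github.com/LinHuican/stereo_dso_ros ~/Software/src/stereo_dso_ros

# Build stereo_dso first, following its own README, then edit the wrapper:
#   CMakeLists.txt:     set(STEREO_DSO_PATH /home/jetson/Software/projects/stereo_dso)
#   stereo_dso_ros.cpp: setting_kfGlobalWeight = 1.0f;

# Build the catkin workspace
cd ~/Software && catkin_make
```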

How to run volumetric dense mapping with Voxblox

  • As mentioned before, the depth maps produced by the Zed2 camera are error-contaminated, so eliminating the noise is a challenging task. The de facto standard method is to fuse the depth maps into a TSDF-based volumetric representation.
  • Clone https://github.com/ethz-asl/voxblox to the catkin workspace ~/Software/src/voxblox
  • Install the dependencies as instructed
  • Here we use the camera poses estimated by stereo ORB-SLAM: results/slammy_two_loops_traj_stereo-orb-slam.txt
  • A customized voxblox config for our data can be found here
  • Note that voxblox normally looks up the camera transforms from the rosbag; to feed it poses from an external file, some modifications are needed, as can be seen in configs/voxblox/transformer.h and configs/voxblox/transformer.cc
  • After performing the above-mentioned steps, voxblox should be ready to run with roslaunch voxblox_ros zed2.launch (see the session sketch below)
  • An example mesh output can be found in results/2021-07-17-voxblox-mesh.ply
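
A typical mapping session could then look like the sketch below. The bag name is a placeholder, and the generate_mesh service name depends on how the node is named in the launch file, so both are assumptions:

```bash
# Terminal 1: replay the recorded data
rosbag play 2021-xx-xxxx.bag

# Terminal 2: run the TSDF integration with the customized config
roslaunch voxblox_ros zed2.launch

# After the bag has finished, export the fused mesh via voxblox's
# generate_mesh service (the node name below may differ)
rosservice call /voxblox_node/generate_mesh
```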

How to run SuperPoint & SuperGlue

  • Hands-on notebooks demonstrate how SuperPoint and SuperGlue are run using the pre-trained models.
  • Notebook for SuperPoint: SuperPoint/superpoint_handson.ipynb
    superpoint.gif
  • Notebook for SuperGlue: SuperGlue/superglue_handson.ipynb
    superglue.gif
  • A video showing SuperPoint run on the slammy dataset can be found at https://www.youtube.com/watch?v=HWGL0IcTkmQ
  • Please refer to the script SuperPoint/extract.py for how to extract SuperPoint features from a sequence of images and dump the results as .mat files (a typical session is sketched below)
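
Such a session could look like the following sketch; extract.py's actual command-line arguments are not documented here, so the bare invocation is an assumption and should be checked against the script itself:

```bash
# Open the hands-on notebook interactively
jupyter notebook SuperPoint/superpoint_handson.ipynb

# Batch-extract SuperPoint features for an image sequence and dump .mat files
# (check SuperPoint/extract.py for the expected arguments before running)
python3 SuperPoint/extract.py
```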