Merge branch 'main' into DocumentationOffline
TieJean authored Dec 9, 2023
2 parents c0f5ee3 + 54ec6fd commit 3411dc8
Showing 49 changed files with 18,494 additions and 159 deletions.
2 changes: 2 additions & 0 deletions CMakeLists.txt
@@ -46,6 +46,7 @@ find_package(Boost COMPONENTS filesystem REQUIRED)
 find_package(pcl_conversions)
 find_package(pcl_ros)
 find_package(SuiteSparse)
+find_package(TBB REQUIRED)
 
 #find_package(darknet_ros_msgs)
 #find_package(catkin REQUIRED COMPONENTS darknet_ros_msgs)
@@ -70,6 +71,7 @@ set(LIBS
 gflags
 cv_bridge
 amrl_shared_lib
+TBB::tbb
 ${CERES_LIBRARIES}
 ${OpenCV_LIBS}
 ${PCL_LIBRARIES}
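The CMakeLists.txt change above uses CMake's imported-target pattern for TBB: `find_package(TBB REQUIRED)` locates the package, and linking against the `TBB::tbb` target pulls in include paths and usage requirements transitively. A minimal standalone sketch of the same pattern (the project name, target name, and `main.cpp` source are hypothetical placeholders, not from this repository):

```cmake
# Sketch of the find-and-link pattern used in the diff above.
cmake_minimum_required(VERSION 3.10)
project(tbb_link_demo CXX)

# Locates TBB's package configuration (e.g. TBBConfig.cmake);
# REQUIRED makes configuration fail loudly if TBB is missing.
find_package(TBB REQUIRED)

add_executable(demo main.cpp)  # main.cpp is a placeholder source file

# Linking the imported target propagates TBB's include directories
# and compile definitions, so no manual include_directories() is needed.
target_link_libraries(demo PRIVATE TBB::tbb)
```

In the repository itself, `TBB::tbb` is instead appended to the `LIBS` list alongside Ceres, OpenCV, and PCL, so every target linked against `${LIBS}` picks up TBB the same way.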
14 changes: 6 additions & 8 deletions README.md
@@ -1,7 +1,7 @@
 # ObVi-SLAM
 ObVi-SLAM is a joint object-visual SLAM approach aimed at long-term multi-session robot deployments.
 
-[[Paper](https://arxiv.org/abs/2309.15268)] [[Video](https://youtu.be/quJOgnEdaZ0)]
+[[Paper with added appendix](https://arxiv.org/abs/2309.15268)] [[Video](https://youtu.be/quJOgnEdaZ0)]
 
 Offline execution instructions coming soon.
 ROS implementation coming late 2023/early 2024.
@@ -12,11 +12,6 @@ Please email amanda.adkins4242@gmail.com with any questions!
 ## Evaluation
 For information on how to set up and run the comparison algorithms, see our [evaluation repo](https://github.com/ut-amrl/ObVi-SLAM-Evaluation).
 
-
-## Extended Results
-See the [version of our paper](https://drive.google.com/file/d/1Cf6QfheKa09mJO8oqgUqdTUC3y12JXRN/view?usp=share_link) with an appendix containing extended results and the full ablation study details.
-
-
 ## Installation Instructions
 <!-- TODO
 - dockerfile version (recommended)
@@ -124,10 +119,13 @@ TODO
 - Explain how to modify configuration file -- which parameters will someone need to modify for different environment, (lower priority): explain each of the parameters in the config file
 ## Evaluation
-Our YOLO model: TODO
+For our experiments, we used [YOLOv5](https://github.com/ut-amrl/yolov5/tree/ROS) (based on [this repo](https://github.com/ultralytics/yolov5)) with [this model](https://drive.google.com/file/d/15xv-Se991Pzes7R3KfyPBkuSQ7TeCb1T/view?usp=sharing).
+We used detections with labels 'lamppost', 'treetrunk', 'bench', and 'trashcan' with [this configuration file](https://github.com/ut-amrl/ObVi-SLAM/blob/main/config/base7a_1_fallback_a_2.json).
+Please contact us if you would like to obtain the videos on which we performed the evaluation.
 ## TODOs
 - Add installation instructions
 - Add offline execution instructions
 - Add YOLO model -->