diff --git a/readme.md b/readme.md
index 4b364c75..25e7b1ad 100644
--- a/readme.md
+++ b/readme.md
@@ -4,13 +4,12 @@
 This is the official DOPE ROS package for detection and 6-DoF pose estimation of **known objects** from an RGB camera. The network has been trained on the following YCB objects: cracker box, sugar box, tomato soup can, mustard bottle, potted meat can, and gelatin box. For more details, see our [CoRL 2018 paper](https://arxiv.org/abs/1809.10790) and [video](https://youtu.be/yVGViBqWtBI).
 
 
-*Note:* The instructions below refer to inference only. Training code is also provided but not supported. Thank you to [@Blaine141](https://github.com/blaine141) You can check out how to train DOPE on a single [GPU and using NVISII](https://github.com/NVlabs/Deep_Object_Pose/issues/155#issuecomment-791148200).
 
 ![DOPE Objects](dope_objects.png)
 
 ## Updates
 
-2024/03/07 - New training code, code reorganization and new synthetic data generation code, using Blenderproc
+2024/03/07 - New training code, new synthetic data generation code using Blenderproc, and a repo reorganization.
 
 2022/07/13 - Added a script with a simple example for computing the ADD and ADD-S metric on data. Please refer to [script/metrics/](https://github.com/NVlabs/Deep_Object_Pose/tree/master/scripts/metrics).
 
@@ -24,15 +23,11 @@ This is the official DOPE ROS package for detection and 6-DoF pose estimation of
 2020/03/09 - Added HOPE [weights to google drive](https://drive.google.com/open?id=1DfoA3m_Bm0fW8tOWXGVxi4ETlLEAgmcg), [the 3d models](https://drive.google.com/drive/folders/1jiJS9KgcYAkfb8KJPp5MRlB0P11BStft), and the objects dimensions to config. [Tremblay et al., IROS 2020](https://arxiv.org/abs/2008.11822). The HOPE dataset can be found [here](https://github.com/swtyree/hope-dataset/) and is also part of the [BOP challenge](https://bop.felk.cvut.cz/datasets/#HOPE)
 
 
-
-
+
+
 
 
-
-
-
-
-## Installing
+## Tested Configurations
 
 We have tested on Ubuntu 20.04 with ROS Noetic with an NVIDIA Titan X and RTX 2080ti with Python 3.8. The code may work on other systems.
 
@@ -42,9 +37,11 @@ We have tested on Ubuntu 20.04 with ROS Noetic with an NVIDIA Titan X and RTX 20
 For hardware-accelerated ROS2 inference support, please visit [Isaac ROS DOPE](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation/tree/main/isaac_ros_dope) which has been tested with ROS2 Foxy on Jetson AGX Xavier/JetPack 4.6 and on x86/Ubuntu 20.04 with RTX3060i.
 
 ---
+
+
 
 ## Synthetic Data Generation
-Code and instructions for generating synthetic training data are found in the `data_generation` directory. There are two options for generation: you can use [NVISII](https://github.com/owl-project/NVISII) or [Blenderproc](https://github.com/DLR-RM/BlenderProc)
+Code and instructions for generating synthetic training data are found in the `data_generation` directory. There are two options for the render engine: you can use [NVISII](https://github.com/owl-project/NVISII) or [Blenderproc](https://github.com/DLR-RM/BlenderProc).
 
 ## Training
 Code and instructions for training DOPE are found in the `train` directory.
@@ -61,6 +58,8 @@ Code and instructions for evaluating the quality of your results are found in th
 
 DOPE returns the poses of the objects in the camera coordinate frame. DOPE uses the aligned YCB models, which can be obtained using [NVDU](https://github.com/NVIDIA/Dataset_Utilities) (see the `nvdu_ycb` command).
 
+---
+
 ## HOPE 3D Models
 
 ![HOPE 3D models rendered in UE4](https://i.imgur.com/V6wX64p.png)
diff --git a/train/README.md b/train/README.md
index e627b401..d06087be 100644
--- a/train/README.md
+++ b/train/README.md
@@ -4,7 +4,11 @@
 This repo contains a simplified version of the **training** script for DOPE. The original repo for DOPE [can be found here](https://github.com/NVlabs/Deep_Object_Pose). In addition, this repo contains scripts for inference, evaluation, and data visualization.
 
-More instructions can be found in the subdirectories `/evaluate` and `/inference`.
+More instructions can be found in the directories `evaluate` and `inference`.
+
+You can check out [how to train DOPE on a single GPU using NVISII](https://github.com/NVlabs/Deep_Object_Pose/issues/155#issuecomment-791148200).
+
+
 
 ## Installing Dependencies
 
 ***Note***
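
A quick illustration of the camera-frame convention noted in the readme.md hunk above ("DOPE returns the poses of the objects in the camera coordinate frame"): a pose given as a translation plus a quaternion can be assembled into a 4x4 homogeneous transform that maps points from the object (model) frame into the camera frame. This is only a generic sketch; the variable names, units, and pose values below are made up and do not reflect DOPE's actual output messages or API.

```python
# Illustrative only: interpret an object pose expressed in the camera frame as a
# 4x4 transform. All numbers are placeholders, not real DOPE output.
import numpy as np
from scipy.spatial.transform import Rotation

t_cam_obj = np.array([0.05, -0.02, 0.60])   # hypothetical translation (x, y, z)
q_cam_obj = np.array([0.0, 0.0, 0.0, 1.0])  # hypothetical quaternion (x, y, z, w)

# T_cam_obj maps points from the object/model frame to the camera frame.
T_cam_obj = np.eye(4)
T_cam_obj[:3, :3] = Rotation.from_quat(q_cam_obj).as_matrix()
T_cam_obj[:3, 3] = t_cam_obj

# Example: express a model-frame point (e.g., a cuboid corner) in the camera frame.
p_obj = np.array([0.03, 0.04, 0.01, 1.0])   # homogeneous coordinates
p_cam = T_cam_obj @ p_obj
print(p_cam[:3])
```

This is also why the readme points to NVDU's `nvdu_ycb` command: the reported poses correspond to the aligned YCB model frames, so points taken from those meshes can be transformed as above.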
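
The 2022/07/13 entry and the `evaluate` directory refer to the ADD and ADD-S metrics. For orientation only: ADD is the mean distance between corresponding model points transformed by the ground-truth pose and by the estimated pose. The sketch below restates that definition in a few lines of NumPy; it is not the supported implementation in scripts/metrics, and the point set and poses are placeholders.

```python
# Illustrative ADD computation; see scripts/metrics in the repo for the supported version.
import numpy as np

def add_metric(model_points, R_gt, t_gt, R_est, t_est):
    """Mean distance between model points under the ground-truth and estimated poses."""
    pts_gt = model_points @ R_gt.T + t_gt    # (N, 3) points under the ground-truth pose
    pts_est = model_points @ R_est.T + t_est # (N, 3) points under the estimated pose
    return np.linalg.norm(pts_gt - pts_est, axis=1).mean()

# Toy check: same rotation, 1 cm translation error -> ADD = 0.01 (in the model's units).
pts = np.random.rand(500, 3)
R = np.eye(3)
print(add_metric(pts, R, np.zeros(3), R, np.array([0.01, 0.0, 0.0])))
```

ADD-S, used for symmetric objects, replaces the point-to-point distance with the distance from each transformed ground-truth point to the closest point of the estimated-pose point set.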