Commit 7f9e1fe: Update README

mintar committed Jul 1, 2019
1 parent 6235ac2 commit 7f9e1fe
Showing 2 changed files with 78 additions and 58 deletions.
2 changes: 1 addition & 1 deletion docker/readme.md
Only the first invocation of this script with a given name will create a container. Subsequent executions will attach to the running container allowing you -- in effect -- to have multiple terminal sessions into a single container.
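The create-or-attach behavior can be sketched in pseudocode (the actual script's flags, image name, and checks may differ; see the script itself):

```
if a container with the given name is already running:
    docker exec -it <name> bash       # attach another terminal session
else:
    docker run -it --name <name> ...  # first invocation: create the container
```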

5. **Build DOPE**
Return to step 7 of the [installation instructions](../readme.md) (downloading the weights).

*Note:* Since the Docker container binds directly to the host's network, it will see `roscore` even when `roscore` is running on the host, outside the container.

134 changes: 77 additions & 57 deletions readme.md
[![License CC BY-NC-SA 4.0](https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-blue.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)
![Python 2.7](https://img.shields.io/badge/python-2.7-green.svg)
# Deep Object Pose Estimation - ROS Inference

This is the official DOPE ROS package for detection and 6-DoF pose estimation of **known objects** from an RGB camera. The network has been trained on the following YCB objects: cracker box, sugar box, tomato soup can, mustard bottle, potted meat can, and gelatin box. For more details, see our [CoRL 2018 paper](https://arxiv.org/abs/1809.10790) and [video](https://youtu.be/yVGViBqWtBI).


## Installing

We have tested on Ubuntu 16.04 with ROS Kinetic with an NVIDIA Titan X with python 2.7. The code may work on other systems.

The following steps describe the native installation. Alternatively, use the provided [Docker image](docker/readme.md) and skip to Step #7.

1. **Install ROS**

Follow these [instructions](http://wiki.ros.org/kinetic/Installation/Ubuntu).
You can select any of the default configurations in step 1.4; even the
ROS-Base (Bare Bones) package (`ros-kinetic-ros-base`) is enough.

If you do not have the full ROS install, you may need to install some packages, *e.g.*,
```
apt-get install ros-kinetic-cv-bridge
```


2. **Create a catkin workspace** (if you do not already have one) by following these [instructions](http://wiki.ros.org/catkin/Tutorials/create_a_workspace):
```
$ mkdir -p ~/catkin_ws/src # Replace `catkin_ws` with the name of your workspace
$ cd ~/catkin_ws/
$ catkin_make
```
3. **Download the DOPE code**
```
$ cd ~/catkin_ws/src
$ git clone https://github.com/NVlabs/Deep_Object_Pose.git dope
```
4. **Install python dependencies**
```
$ cd ~/catkin_ws/src/dope
$ pip install -r requirements.txt
```
5. **Install ROS dependencies**
```
$ cd ~/catkin_ws
$ rosdep install --from-paths src -i --rosdistro kinetic
$ sudo apt-get install ros-kinetic-rosbash ros-kinetic-ros-comm
```
6. **Build**
```
$ cd ~/catkin_ws
$ catkin_make
```
7. **Download [the weights](https://drive.google.com/open?id=1DfoA3m_Bm0fW8tOWXGVxi4ETlLEAgmcg)** and save them to the `weights` folder, *i.e.*, `~/catkin_ws/src/dope/weights/`.
## Running
1. **Start ROS master**
```
$ cd ~/catkin_ws
$ source devel/setup.bash
$ roscore
```
2. **Start camera node** (or start your own camera node)
```
$ roslaunch dope camera.launch # Publishes RGB images to `/dope/webcam_rgb_raw`
```
The camera must publish a correct `camera_info` topic so that DOPE can compute correct poses. Practically all ROS camera drivers have a `camera_info_url` parameter where you can set the calibration info (most drivers also ship a reasonable default).
For details on calibration and rectification of your camera see the [camera tutorial](doc/camera_tutorial.md).
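To see why the intrinsics matter, here is a small pinhole-projection sketch (plain Python with hypothetical intrinsics; DOPE's actual code path differs): wrong `fx`/`fy`/`cx`/`cy` values in `camera_info` shift every projected point, and hence every estimated pose.

```python
# Pinhole camera model: project a 3D point (meters, camera frame) to pixels.
# The fx, fy, cx, cy values come from the camera_info message; if they are
# wrong, the back-projected object pose is off by the same distortion.
def project(point_3d, fx, fy, cx, cy):
    x, y, z = point_3d
    return (fx * x / z + cx, fy * y / z + cy)

# Hypothetical intrinsics for a 640x480 camera
u, v = project((0.1, 0.05, 1.0), fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(u, v)  # 380.0 270.0

# With miscalibrated focal lengths the same point lands elsewhere:
print(project((0.1, 0.05, 1.0), fx=500.0, fy=500.0, cx=320.0, cy=240.0))  # (370.0, 265.0)
```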
3. **Edit config info** (if desired) in `~/catkin_ws/src/dope/config/config_pose.yaml`
* `topic_camera`: RGB topic to listen to
* `topic_camera_info`: camera info topic to listen to
* `topic_publishing`: topic namespace for publishing
* `input_is_rectified`: Whether the input images are rectified. It is strongly suggested to use a rectified input topic.
* `downscale_height`: If the input image is larger than this, scale it down to this pixel height. Very large input images eat up all the GPU memory and slow down inference. Also, DOPE works best when the object size (in pixels) has appeared in the training data (which is downscaled to 400 px). For these reasons, downscaling large input images to something reasonable (e.g., 400-500 px) improves memory consumption, inference speed *and* recognition results.
* `weights`: dictionary of object names and their weight file paths; **comment out any line to disable detection/estimation of that object**
* `dimensions`: dictionary of dimensions for the objects (key values must match the `weights` names)
* `class_ids`: dictionary of class ids to be used in the messages published on the `/dope/detected_objects` topic (key values must match the `weights` names)
* `draw_colors`: dictionary of object colors (key values must match the `weights` names)
* `model_transforms`: dictionary of transforms that are applied to the pose before publishing (key values must match the `weights` names)
* `meshes`: dictionary of mesh filenames for visualization (key values must match the `weights` names)
* `mesh_scales`: dictionary of scaling factors for the visualization meshes (key values must match the `weights` names)
* `thresh_angle`: undocumented
* `thresh_map`: undocumented
* `sigma`: undocumented
* `thresh_points`: Thresholding the confidence for object detection; increase this value if you see too many false positives, reduce it if objects are not detected.
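Putting a few of these keys together, a minimal excerpt of `config_pose.yaml` might look like the following. The values are illustrative only; check the file shipped with the package for the real defaults and the full key set:

```yaml
topic_camera: "/dope/webcam_rgb_raw"     # RGB topic to listen to
topic_camera_info: "/dope/camera_info"   # camera info topic to listen to
topic_publishing: "dope"                 # namespace for published topics
input_is_rectified: True
downscale_height: 500

weights: {
    "cracker": "package://dope/weights/cracker_60.pth",
    # "soup": "package://dope/weights/soup_60.pth",   # commented out: soup disabled
}

# Keys below must match the object names used in `weights`:
dimensions: {
    "cracker": [16.4, 21.3, 7.2],
}
class_ids: {
    "cracker": 1,
}
draw_colors: {
    "cracker": [13, 255, 128],
}

thresh_points: 0.1   # raise to suppress false positives, lower if objects are missed
```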
4. **Start DOPE node**
```
$ roslaunch dope dope.launch
```
## Debugging
* The following ROS topics are published (assuming `topic_publishing == 'dope'`):
```
/dope/webcam_rgb_raw # RGB images from camera
/dope/dimension_[obj_name] # dimensions of object
/dope/pose_[obj_name] # timestamped pose of object
/dope/rgb_points # RGB images with detected cuboids overlaid
/dope/detected_objects # vision_msgs/Detection3DArray of all detected objects
/dope/markers # RViz visualization markers for all objects
```
*Note:* `[obj_name]` is in {cracker, gelatin, meat, mustard, soup, sugar}
* To debug in RViz, run `rviz`, then add one or more of the following displays:
* `Add > Image` to view the raw RGB image or the image with cuboids overlaid
* `Add > Pose` to view the object coordinate frame in 3D.
* `Add > MarkerArray` to view the cuboids, meshes etc. in 3D.
* `Add > Camera` to view the RGB Image with the poses and markers from above.
If you do not have a coordinate frame set up, you can run this static transformation: `rosrun tf2_ros static_transform_publisher 0 0 0 0.7071 0 0 -0.7071 world <camera_frame_id>`, where `<camera_frame_id>` is the `frame_id` of your input camera messages. Make sure that in RViz's `Global Options`, the `Fixed Frame` is set to `world`. Alternatively, you can skip the `static_transform_publisher` step and directly set the `Fixed Frame` to your `<camera_frame_id>`.
* If `rosrun` does not find the package (`[rospack] Error: package 'dope' not found`), be sure that you called `source devel/setup.bash` as mentioned above. To find the package, run `rospack find dope`.
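As a sanity check on the static transform above: the quaternion `0.7071 0 0 -0.7071` (ROS `x y z w` order) is a -90° rotation about the x axis. A few lines of plain Python (no ROS required) confirm that it maps the camera's optical axis (z, pointing forward) onto the world y axis, so the camera view lies flat in the `world` frame:

```python
# Rotate a vector by a unit quaternion q = (x, y, z, w), ROS ordering.
# Uses the standard expansion v' = v + w*t + q_vec x t, with t = 2*(q_vec x v).
def quat_rotate(q, v):
    x, y, z, w = q
    vx, vy, vz = v
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    return (
        vx + w * tx + (y * tz - z * ty),
        vy + w * ty + (z * tx - x * tz),
        vz + w * tz + (x * ty - y * tx),
    )

q = (0.7071, 0.0, 0.0, -0.7071)  # quaternion from the static_transform_publisher call
print(quat_rotate(q, (0.0, 0.0, 1.0)))  # ~ (0.0, 1.0, 0.0): optical axis -> world y
```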
## YCB 3D Models
Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
## Acknowledgment
Thanks to Jeffrey Smith (jeffreys@nvidia.com) for creating the Docker image.
## Contact