Isaac ROS 0.20.0 (DP2) #20

Merged: 1 commit, Oct 19, 2022
14 changes: 14 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,14 @@
# Isaac ROS Contribution Rules

Any contribution that you make to this repository will
be under the Apache 2 License, as dictated by that
[license](http://www.apache.org/licenses/LICENSE-2.0.html):

> **5. Submission of Contributions.** Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

Contributors must sign-off each commit by adding a `Signed-off-by: ...`
line to commit messages to certify that they have the right to submit
the code they are contributing to the project according to the
[Developer Certificate of Origin (DCO)](https://developercertificate.org/).
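In practice, the sign-off trailer can be added automatically with `git commit -s`. A minimal, self-contained demonstration in a throwaway repository (the committer identity is hypothetical):

```bash
# Demonstrate the DCO sign-off in a disposable repository.
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.name "Jane Developer"      # hypothetical identity for the demo
git config user.email "jane@example.com"
echo "demo" > demo.txt
git add demo.txt
# -s (--signoff) appends the Signed-off-by trailer to the commit message
git commit -q -s -m "Add demo file"
git log -1 --format=%B
# Prints:
# Add demo file
#
# Signed-off-by: Jane Developer <jane@example.com>
```

The trailer is generated from `user.name` and `user.email`, so those should be set to your real identity before contributing.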

[//]: # (202201002)
266 changes: 201 additions & 65 deletions LICENSE


171 changes: 114 additions & 57 deletions README.md


13 changes: 9 additions & 4 deletions docs/model-preparation.md
@@ -1,6 +1,7 @@
# Preparing Deep Learning Models for Isaac ROS

## Obtaining a Pre-trained Model from NGC

The NVIDIA GPU Cloud hosts a [catalog](https://catalog.ngc.nvidia.com/models) of Deep Learning pre-trained models that are available for your development.

1. Use the **Search Bar** to find a pre-trained model that you are interested in working with.
@@ -15,6 +16,7 @@ The NVIDIA GPU Cloud hosts a [catalog](https://catalog.ngc.nvidia.com/models) of
5. **Paste** the copied command into a terminal to download the model in the current working directory.

## Using `tao-converter` to decrypt the Encrypted TLT Model (`.etlt`) Format

As discussed above, models distributed with the `.etlt` file extension are encrypted and must be decrypted before use via NVIDIA's [`tao-converter`](https://developer.nvidia.com/tao-toolkit-get-started).

`tao-converter` is already included in the Docker images available as part of the standard [Isaac ROS Development Environment](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/dev-env-setup.md).
@@ -26,22 +28,25 @@ The per-platform installation paths are described below:
| x86_64 | `/opt/nvidia/tao/tao-converter-x86-tensorrt8.0/tao-converter` | **`/opt/nvidia/tao/tao-converter`** |
| Jetson(aarch64) | `/opt/nvidia/tao/jp5` | **`/opt/nvidia/tao/tao-converter`** |


### Converting `.etlt` to a TensorRT Engine Plan

Here are some examples for generating the TensorRT engine file using `tao-converter`. In this example, we will use the [`PeopleSemSegnet Shuffleseg` model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplesemsegnet/files?version=deployable_shuffleseg_unet_v1.0):

#### Generate an engine file for the `fp16` data type

```bash
mkdir -p /workspaces/isaac_ros-dev/models && \
/opt/nvidia/tao/tao-converter -k tlt_encode -d 3,544,960 -p input_2:0,1x3x544x960,1x3x544x960,1x3x544x960 -t fp16 -e /workspaces/isaac_ros-dev/models/peoplesemsegnet_shuffleseg.engine -o argmax_1 peoplesemsegnet_shuffleseg_etlt.etlt
```

> **Note:** The specific values used in the command above are retrieved from the **PeopleSemSegnet** page under the **Overview** tab. The model input node name and output node name can be found in `peoplesemsegnet_shuffleseg_cache.txt` from `File Browser`. The output file is specified using the `-e` option. The tool needs write permission to the output directory.
>
> A detailed explanation of the input parameters is available [here](https://docs.nvidia.com/tao/tao-toolkit/text/tensorrt.html#running-the-tao-converter).

#### Generate an engine file for the data type `int8`

Create the models directory:

```bash
mkdir -p /workspaces/isaac_ros-dev/models
```
32 changes: 18 additions & 14 deletions docs/tensorrt-and-triton-info.md
@@ -1,41 +1,45 @@
# Isaac ROS Triton and TensorRT Nodes for DNN Inference

NVIDIA's Isaac ROS suite of packages provides two separate nodes for performing DNN inference: Triton and TensorRT.

Our benchmarks show comparable performance and inference speed with both nodes, so a decision should be based on other characteristics of the overall model being deployed.

## NVIDIA Triton

The NVIDIA Triton Inference Server is an [open-source inference serving software](https://developer.nvidia.com/nvidia-triton-inference-server) that provides a uniform interface for deploying AI models. Crucially, Triton supports a wide array of compute devices like NVIDIA GPUs and both x86 and ARM CPUs, and also operates with all major frameworks such as TensorFlow, TensorRT, and PyTorch.

Because Triton can take advantage of additional compute devices beyond just the GPU, Triton can be a better choice in situations where there is GPU resource contention from other model inference or processing tasks. However, in order to provide for this flexibility, Triton requires the creation of a model repository and additional configuration files before deployment.

## NVIDIA TensorRT

NVIDIA TensorRT is a specific CUDA-based, on-GPU inference framework that performs a number of optimizations to deliver extremely performant model execution. TensorRT only supports ONNX and TensorRT Engine Plans, providing less flexibility than Triton but also requiring less initial configuration.

## Using either Triton or TensorRT Nodes

Both nodes use the Isaac ROS [Tensor List message](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/isaac_ros_tensor_list_interfaces/msg/TensorList.msg) for input data and output inference result.

Users can either prepare a custom model or download pre-trained models from NGC as described [here](./model-preparation.md#obtaining-a-pre-trained-model-from-ngc). Models should be converted to the TensorRT Engine File format using the `tao-converter` tool as described [here](./model-preparation.md#using-tao-converter-to-decrypt-the-encrypted-tlt-model-etlt-format).

> **Note:** While the TensorRT node can automatically convert ONNX plans to the TensorRT Engine Plan format if configured to use a `.onnx` file, this conversion step will considerably extend the node's per-launch initial setup time.
>
> As a result, we recommend converting any ONNX models to TensorRT Engine Plans first, and configuring the TensorRT node to use the Engine Plan instead.
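One way to do that conversion ahead of time is TensorRT's bundled `trtexec` tool. The sketch below is guarded so it only attempts the conversion when `trtexec` is on the `PATH`; the model paths are hypothetical and should be adjusted to your workspace:

```bash
# Sketch: pre-convert an ONNX model to a TensorRT engine plan before launch,
# so the TensorRT node can load the engine directly instead of converting at startup.
# The model paths below are hypothetical.
ONNX_MODEL="/workspaces/isaac_ros-dev/models/model.onnx"
ENGINE_PLAN="/workspaces/isaac_ros-dev/models/model.plan"
if command -v trtexec >/dev/null 2>&1; then
  # Build an fp16 engine plan that the TensorRT node can load directly
  trtexec --onnx="${ONNX_MODEL}" --saveEngine="${ENGINE_PLAN}" --fp16
else
  echo "trtexec not found; run inside a container with TensorRT installed"
fi
```

Note that engine plans are specific to the GPU and TensorRT version they were built with, so the conversion should be run on the target device.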


## Pre- and Post-Processing Nodes

In order to be a useful component of a ROS graph, both Isaac ROS Triton and TensorRT inference nodes will require application-specific `pre-processor` (`encoder`) and `post-processor` (`decoder`) nodes to handle type conversion and other necessary steps.

A `pre-processor` node should take in a ROS2 message, perform the pre-processing steps dictated by the model, and then convert the data into an Isaac ROS Tensor List message. For example, a `pre-processor` node could resize an image, normalize it, and then convert it into a Tensor List.

A `post-processor` node should be used to convert the Isaac ROS Tensor List output of the model inference into a useful ROS2 message. For example, a `post-processor` node may perform argmax to identify the class label from a classification problem.

<div align="center">

![Using TensorRT or Triton](../resources/pipeline.png "Using TensorRT or Triton")

</div>

## Further Reading

For more documentation on Triton, see [here](https://developer.nvidia.com/nvidia-triton-inference-server).

For more documentation on TensorRT, see [here](https://developer.nvidia.com/tensorrt).
7 changes: 6 additions & 1 deletion docs/troubleshooting.md
@@ -1,9 +1,12 @@
# DNN Inference Troubleshooting

## Seeing operation failed followed by the process dying

One cause of this issue is when the GPU being used does not have enough memory to run the model. For example, DOPE may require up to 6GB of VRAM to operate, depending on the application.

### Symptom

```log
[component_container_mt-1] 2022-06-27 08:35:37.518 ERROR extensions/tensor_ops/Reshape.cpp@71: reshape tensor failed.
[component_container_mt-1] 2022-06-27 08:35:37.518 ERROR extensions/tensor_ops/TensorOperator.cpp@151: operation failed.
[component_container_mt-1] 2022-06-27 08:35:37.518 ERROR gxf/std/entity_executor.cpp@200: Entity with 102 not found!
@@ -14,5 +17,7 @@ One cause of this issue is when the GPU being used does not have enough memory t
[component_container_mt-1] what(): [NitrosPublisher] Vault ("vault/vault", eid=102) was stopped. The graph may have been terminated due to an error.
[ERROR] [component_container_mt-1]: process has died [pid 13378, exit code -6, cmd '/opt/ros/humble/install/lib/rclcpp_components/component_container_mt --ros-args -r __node:=dope_container -r __ns:=/'].
```

### Solution

Try using the Isaac ROS TensorRT node or the Isaac ROS Triton node with the TensorRT backend instead. Otherwise, a discrete GPU with more VRAM may be required.
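To confirm whether VRAM is the limiting factor, free GPU memory can be queried with `nvidia-smi`. This is a minimal sketch that falls back to a placeholder on machines without an NVIDIA driver:

```bash
# Sketch: check free GPU memory before launching a large model such as DOPE.
# Guarded so the snippet is safe to run on machines without an NVIDIA driver.
if command -v nvidia-smi >/dev/null 2>&1; then
  gpu_mem_free="$(nvidia-smi --query-gpu=memory.free --format=csv,noheader)"
else
  gpu_mem_free="unavailable"
fi
echo "free GPU memory: ${gpu_mem_free}"
```

If the reported free memory is well below what the model requires, the lighter-weight inference backend or a larger GPU is the likely fix.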
25 changes: 15 additions & 10 deletions isaac_ros_dnn_encoders/CMakeLists.txt
@@ -1,10 +1,19 @@
# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
# Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0

cmake_minimum_required(VERSION 3.8)
project(isaac_ros_dnn_encoders LANGUAGES C CXX)
@@ -58,10 +67,6 @@ install(TARGETS dnn_image_encoder_node

if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)

# Ignore copyright notices since we use custom NVIDIA Isaac ROS Software License
set(ament_cmake_copyright_FOUND TRUE)

ament_lint_auto_find_test_dependencies()

find_package(launch_testing_ament_cmake REQUIRED)
21 changes: 15 additions & 6 deletions isaac_ros_dnn_encoders/config/dnn_image_encoder_node.yaml
@@ -1,11 +1,20 @@
%YAML 1.2
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0
---
name: global
components:
21 changes: 15 additions & 6 deletions isaac_ros_dnn_encoders/config/namespace_injector_rule.yaml
@@ -1,11 +1,20 @@
%YAML 1.2
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0
---
name: DNN Image Encoder Namespace Injector Rule
operation: namespace_injector
@@ -1,12 +1,19 @@
/**
* Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
*
* NVIDIA CORPORATION and its licensors retain all intellectual property
* and proprietary rights in and to this software, related documentation
* and any modifications thereto. Any use, reproduction, disclosure or
* distribution of this software and related documentation without an express
* license agreement from NVIDIA CORPORATION is strictly prohibited.
*/
// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
// Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// SPDX-License-Identifier: Apache-2.0

#ifndef ISAAC_ROS_DNN_ENCODERS__DNN_IMAGE_ENCODER_NODE_HPP_
#define ISAAC_ROS_DNN_ENCODERS__DNN_IMAGE_ENCODER_NODE_HPP_
24 changes: 16 additions & 8 deletions isaac_ros_dnn_encoders/package.xml
Original file line number Diff line number Diff line change
@@ -1,22 +1,30 @@
<?xml version="1.0"?>

<!--
Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

NVIDIA CORPORATION and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA CORPORATION is strictly prohibited.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

SPDX-License-Identifier: Apache-2.0
-->

<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>isaac_ros_dnn_encoders</name>
<version>0.11.0</version>
<version>0.20.0</version>
<description>Encoders for preprocessing before running deep learning inference</description>
<maintainer email="hemals@nvidia.com">Hemal Shah</maintainer>
<license>NVIDIA Isaac ROS Software License</license>
<license>Apache-2.0</license>
<url type="website">https://developer.nvidia.com/isaac-ros-gems/</url>
<author>Ethan Yu</author>
<author>Kajanan Chinniah</author>
25 changes: 16 additions & 9 deletions isaac_ros_dnn_encoders/src/dnn_image_encoder_node.cpp
@@ -1,12 +1,19 @@
/**
* Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
*
* NVIDIA CORPORATION and its licensors retain all intellectual property
* and proprietary rights in and to this software, related documentation
* and any modifications thereto. Any use, reproduction, disclosure or
* distribution of this software and related documentation without an express
* license agreement from NVIDIA CORPORATION is strictly prohibited.
*/
// SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
// Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// SPDX-License-Identifier: Apache-2.0

#include "isaac_ros_dnn_encoders/dnn_image_encoder_node.hpp"
