Merge pull request #36 from NVIDIA-ISAAC-ROS/release-dp3.1
Isaac ROS 0.31.0 (DP3.1)
jaiveersinghNV authored May 26, 2023
2 parents 1af625e + 68b5bf8 commit 9a11305
Showing 13 changed files with 65 additions and 66 deletions.
17 changes: 9 additions & 8 deletions README.md
@@ -40,13 +40,13 @@ This package is powered by [NVIDIA Isaac Transport for ROS (NITROS)](https://dev

The following table summarizes the per-platform performance statistics of sample graphs that use this package, with links included to the full benchmark output. These benchmark configurations are taken from the [Isaac ROS Benchmark](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark#list-of-isaac-ros-benchmarks) collection, based on the [`ros2_benchmark`](https://github.com/NVIDIA-ISAAC-ROS/ros2_benchmark) framework.

| Sample Graph | Input Size | AGX Orin | Orin NX | Orin Nano 8GB | x86_64 w/ RTX 3060 Ti |
| ----------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [TensorRT Node<br>DOPE](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_tensor_rt_dope_node.py) | VGA | [48.1 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-agx_orin.json)<br>22 ms | [17.2 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-orin_nx.json)<br>56 ms | [13.0 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-orin_nano_8gb.json)<br>79 ms | [94.9 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-x86_64_rtx_3060Ti.json)<br>10 ms |
| [Triton Node<br>DOPE](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_triton_dope_node.py) | VGA | [48.0 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-agx_orin.json)<br>22 ms | [20.1 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-orin_nx.json)<br>540 ms | [14.5 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-orin_nano_8gb.json)<br>790 ms | [94.2 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-x86_64_rtx_3060Ti.json)<br>11 ms |
| [TensorRT Node<br>PeopleSemSegNet](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_tensor_rt_ps_node.py) | 544p | [467 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-agx_orin.json)<br>2.3 ms | [270 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-orin_nx.json)<br>4.0 ms | [184 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-orin_nano_8gb.json)<br>9.0 ms | [1500 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-x86_64_rtx_3060Ti.json)<br>1.1 ms |
| [Triton Node<br>PeopleSemSegNet](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_triton_ps_node.py) | 544p | [293 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_ps_node-agx_orin.json)<br>3.7 ms | [191 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_ps_node-orin_nx.json)<br>5.5 ms | -- | [512 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_ps_node-x86_64_rtx_3060Ti.json)<br>2.1 ms |
| [DNN Image Encoder Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_dnn_image_encoder_node.py) | VGA | [2230 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_dnn_image_encoder_node-agx_orin.json)<br>0.60 ms | [1560 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_dnn_image_encoder_node-orin_nx.json)<br>0.89 ms | -- | [5780 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_dnn_image_encoder_node-x86_64_rtx_3060Ti.json)<br>0.45 ms |
| Sample Graph | Input Size | AGX Orin | Orin NX | Orin Nano 8GB | x86_64 w/ RTX 4060 Ti |
| ----------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [TensorRT Node<br>DOPE](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_tensor_rt_dope_node.py) | VGA | [48.1 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-agx_orin.json)<br>21 ms | [19.0 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-orin_nx.json)<br>54 ms | [13.0 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-orin_nano.json)<br>79 ms | [102 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_dope_node-nuc_4060ti.json)<br>10 ms |
| [Triton Node<br>DOPE](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_triton_dope_node.py) | VGA | [48.0 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-agx_orin.json)<br>22 ms | [20.5 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-orin_nx.json)<br>540 ms | [14.5 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-orin_nano.json)<br>790 ms | [99.4 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_dope_node-nuc_4060ti.json)<br>10 ms |
| [TensorRT Node<br>PeopleSemSegNet](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_tensor_rt_ps_node.py) | 544p | [468 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-agx_orin.json)<br>2.6 ms | [272 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-orin_nx.json)<br>4.1 ms | [185 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-orin_nano.json)<br>5.9 ms | [1990 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_tensor_rt_ps_node-nuc_4060ti.json)<br>0.88 ms |
| [Triton Node<br>PeopleSemSegNet](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_triton_ps_node.py) | 544p | [296 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_ps_node-agx_orin.json)<br>3.5 ms | [190 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_ps_node-orin_nx.json)<br>5.5 ms | -- | [709 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_triton_ps_node-nuc_4060ti.json)<br>2.0 ms |
| [DNN Image Encoder Node](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/scripts//isaac_ros_dnn_image_encoder_node.py) | VGA | [2120 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_dnn_image_encoder_node-agx_orin.json)<br>1.1 ms | [1550 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_dnn_image_encoder_node-orin_nx.json)<br>1.2 ms | -- | [5340 fps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_benchmark/blob/main/results/isaac_ros_dnn_image_encoder_node-nuc_4060ti.json)<br>0.48 ms |
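
Each linked results file is plain JSON produced by the `ros2_benchmark` framework, so the figures in the table can be inspected offline. The sketch below is only illustrative: it assumes you have downloaded one of the linked result files locally, and the filename is a placeholder, not part of this commit; the metric key names depend on the `ros2_benchmark` schema, so the snippet just lists them rather than assuming any.

```python
import json
from pathlib import Path

# Hypothetical local copy of one of the result files linked in the table above,
# e.g. isaac_ros_tensor_rt_dope_node-agx_orin.json from the isaac_ros_benchmark
# repository. Adjust the path to wherever you saved the file.
results_path = Path("isaac_ros_tensor_rt_dope_node-agx_orin.json")

data = json.loads(results_path.read_text())

# The schema is defined by ros2_benchmark, not by this commit, so start by
# listing the top-level sections before drilling into specific metrics.
print(sorted(data.keys()))
```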

## Table of Contents

@@ -87,7 +87,7 @@ The following table summarizes the per-platform performance statistics of sample

## Latest Update

Update 2023-04-05: Source available GXF extensions
Update 2023-05-25: Performance improvements.

## Supported Platforms

@@ -473,6 +473,7 @@ For solutions to problems with using DNN models, please check [here](docs/troubl

| Date | Changes |
| ---------- | ---------------------------------------------------------------------------------------------------------------------------- |
| 2023-05-25 | Performance improvements |
| 2023-04-05 | Source available GXF extensions |
| 2022-10-19 | Updated OSS licensing |
| 2022-08-31 | Update to be compatible with JetPack 5.0.2 |
37 changes: 18 additions & 19 deletions isaac_ros_dnn_encoders/config/dnn_image_encoder_node.yaml
@@ -1,6 +1,6 @@
%YAML 1.2
# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -19,36 +19,36 @@
name: global
components:
- name: adapter_video_buffer
type: nvidia::cvcore::tensor_ops::ImageAdapter
type: nvidia::isaac::tensor_ops::ImageAdapter
parameters:
message_type: "VideoBuffer"
- name: adapter_bgr_u8
type: nvidia::cvcore::tensor_ops::ImageAdapter
type: nvidia::isaac::tensor_ops::ImageAdapter
parameters:
message_type: "Tensor"
image_type: "BGR_U8"
- name: adapter_rgb_u8
type: nvidia::cvcore::tensor_ops::ImageAdapter
type: nvidia::isaac::tensor_ops::ImageAdapter
parameters:
message_type: "Tensor"
image_type: "RGB_U8"
- name: adapter_bgr_f32
type: nvidia::cvcore::tensor_ops::ImageAdapter
type: nvidia::isaac::tensor_ops::ImageAdapter
parameters:
message_type: "Tensor"
image_type: "BGR_F32"
- name: adapter_rgb_f32
type: nvidia::cvcore::tensor_ops::ImageAdapter
type: nvidia::isaac::tensor_ops::ImageAdapter
parameters:
message_type: "Tensor"
image_type: "RGB_F32"
- name: adapter_planar_bgr_f32
type: nvidia::cvcore::tensor_ops::ImageAdapter
type: nvidia::isaac::tensor_ops::ImageAdapter
parameters:
message_type: "Tensor"
image_type: "PLANAR_BGR_F32"
- name: adapter_planar_rgb_f32
type: nvidia::cvcore::tensor_ops::ImageAdapter
type: nvidia::isaac::tensor_ops::ImageAdapter
parameters:
message_type: "Tensor"
image_type: "PLANAR_RGB_F32"
@@ -110,7 +110,7 @@ components:
block_size: 1566720
num_blocks: 40
- name: resize_operator
type: nvidia::cvcore::tensor_ops::Resize
type: nvidia::isaac::tensor_ops::Resize
parameters:
output_width: 0
output_height: 0
@@ -150,7 +150,7 @@ components:
block_size: 1566720
num_blocks: 40
- name: color_converter_operator
type: nvidia::cvcore::tensor_ops::ConvertColorFormat
type: nvidia::isaac::tensor_ops::ConvertColorFormat
parameters:
output_type: "RGB_U8"
receiver: data_receiver
@@ -186,7 +186,7 @@ components:
block_size: 6266880
num_blocks: 40
- name: normalizer_operator
type: nvidia::cvcore::tensor_ops::Normalize
type: nvidia::isaac::tensor_ops::Normalize
parameters:
scales: [ 0.0156862745, 0.00490196078, 0.00784313725 ]
offsets: [ -127.5, -153.0, -63.75 ]
@@ -223,7 +223,7 @@ components:
block_size: 6266880
num_blocks: 40
- name: interleaved_to_planar_operator
type: nvidia::cvcore::tensor_ops::InterleavedToPlanar
type: nvidia::isaac::tensor_ops::InterleavedToPlanar
parameters:
receiver: data_receiver
transmitter: data_transmitter
@@ -259,7 +259,7 @@ components:
block_size: 6266880
num_blocks: 40
- name: reshape_operator
type: nvidia::cvcore::tensor_ops::Reshape
type: nvidia::isaac::tensor_ops::Reshape
parameters:
receiver: data_receiver
transmitter: data_transmitter
@@ -304,7 +304,7 @@ components:
camera_model_rx: data_receiver_timestamp
tx: data_transmitter
---
name: vault
name: sink
components:
- name: signal
type: nvidia::gxf::DoubleBufferReceiver
@@ -315,12 +315,10 @@
parameters:
receiver: signal
min_size: 1
- name: vault
type: nvidia::gxf::Vault
- name: sink
type: nvidia::isaac_ros::MessageRelay
parameters:
source: signal
max_waiting_count: 1
drop_waiting: false
---
components:
- name: edge0
@@ -376,12 +374,13 @@ components:
type: nvidia::gxf::Connection
parameters:
source: compositor/data_transmitter
target: vault/signal
target: sink/signal
---
components:
- type: nvidia::gxf::GreedyScheduler
parameters:
clock: clock
stop_on_deadlock: false
check_recession_period_us: 100
- name: clock
type: nvidia::gxf::RealtimeClock
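
The YAML above configures the underlying GXF graph; from ROS 2 the encoder is normally composed via the `isaac_ros_dnn_encoders` package. The launch fragment below is a minimal sketch, not part of this commit: the plugin name `nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode` and the `network_image_width`/`network_image_height` parameters are assumptions based on the Isaac ROS DNN Inference documentation, so verify them against the 0.31.0 release before use.

```python
import launch
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    # Assumed plugin and parameter names; check the isaac_ros_dnn_encoders
    # documentation for release 0.31.0 before relying on them.
    encoder_node = ComposableNode(
        package='isaac_ros_dnn_encoders',
        plugin='nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode',
        name='dnn_image_encoder',
        parameters=[{
            'network_image_width': 640,   # match the model's expected input width
            'network_image_height': 480,  # match the model's expected input height
        }],
        remappings=[('encoded_tensor', 'tensor_pub')],
    )

    container = ComposableNodeContainer(
        name='dnn_encoder_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[encoder_node],
        output='screen',
    )

    return launch.LaunchDescription([container])
```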
12 changes: 6 additions & 6 deletions isaac_ros_dnn_encoders/config/namespace_injector_rule.yaml
@@ -1,6 +1,6 @@
%YAML 1.2
# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
# Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# Copyright (c) 2022-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -20,13 +20,13 @@ name: DNN Image Encoder Namespace Injector Rule
operation: namespace_injector
body:
components:
- type: nvidia::cvcore::tensor_ops::ConvertColorFormat
- type: nvidia::isaac::tensor_ops::ConvertColorFormat
path_parameter_keys: [input_adapter, output_adapter]
- type: nvidia::cvcore::tensor_ops::Resize
- type: nvidia::isaac::tensor_ops::Resize
path_parameter_keys: [input_adapter, output_adapter]
- type: nvidia::cvcore::tensor_ops::Normalize
- type: nvidia::isaac::tensor_ops::Normalize
path_parameter_keys: [input_adapter, output_adapter]
- type: nvidia::cvcore::tensor_ops::InterleavedToPlanar
- type: nvidia::isaac::tensor_ops::InterleavedToPlanar
path_parameter_keys: [input_adapter, output_adapter]
- type: nvidia::cvcore::tensor_ops::Reshape
- type: nvidia::isaac::tensor_ops::Reshape
path_parameter_keys: [input_adapter, output_adapter]
2 changes: 1 addition & 1 deletion isaac_ros_dnn_encoders/package.xml
@@ -21,7 +21,7 @@ SPDX-License-Identifier: Apache-2.0
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>isaac_ros_dnn_encoders</name>
<version>0.30.0</version>
<version>0.31.0</version>
<description>Encoders for preprocessing before running deep learning inference</description>
<maintainer email="hemals@nvidia.com">Hemal Shah</maintainer>
<license>Apache-2.0</license>
(Diffs for the remaining changed files are not shown.)
