10_00_03_00
Pre-release
New in this Release
Description | Notes |
---|---|
Support for the CenterNet model architecture: added/optimized new operators, including the object detection layer for CenterNet | |
Support for wrap mode of the Pad operator, implemented using Slice and Concat operators | |
Support for partially batched networks (networks in which one portion runs with multiple batches and the remainder runs with a single batch) | |
Optimization of TIDLRT-Create to improve boot time | |
Robustness improvements and improved logging in the compiler (parser, graph partitioning, and optimizer modules) | |
Performance optimization of non-linear activation functions (Tanh, Sigmoid, Softmax, GELU, ELU, SiLU) using the iLUT feature | Specific to J722S/AM67A/TDA4AEN platform |
Support for low latency mode for a single neural network by splitting it across multiple C7x-MMA cores | Specific to J722S/AM67A/TDA4AEN platform |
Support for high throughput mode by scheduling multiple instances of a network across multiple C7x-MMA cores (multi-core batch processing) | Specific to J722S/AM67A/TDA4AEN platform |
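The wrap-mode Pad support above is built from Slice and Concat operators. A minimal NumPy sketch of that decomposition (an illustrative emulation, not TIDL code; `wrap_pad_1d` is a hypothetical helper name, and pad widths are assumed not to exceed the axis length):

```python
import numpy as np

def wrap_pad_1d(x, left, right, axis=-1):
    """Emulate wrap-mode padding along one axis using slice and
    concatenate, mirroring a Slice + Concat decomposition."""
    n = x.shape[axis]
    tail = np.take(x, range(n - left, n), axis=axis)  # last `left` elements, wrapped to the front
    head = np.take(x, range(0, right), axis=axis)     # first `right` elements, wrapped to the back
    return np.concatenate([tail, x, head], axis=axis)

x = np.array([1, 2, 3, 4])
print(wrap_pad_1d(x, 1, 2))  # [4 1 2 3 4 1 2]
```

The result matches `np.pad(x, (1, 2), mode="wrap")`, which is the reference semantics for wrap-mode padding.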
Fixed in this Release
ID | Description | Affected Platforms |
---|---|---|
TIDL-4672 | 16-bit Softmax produces poorer results than expected | All except AM62 |
TIDL-4670 | Output tensor size of the 3D OD detection layer is incorrect | All except AM62 |
TIDL-4665 | Models may give incorrect output when they contain Reshape layers with batches | All except AM62 |
TIDL-4661 | Models silently fail during compilation when /dev/shm is full | All except AM62 |
TIDL-4660 | Models with Cast as an intermediate operator result in a functional issue due to unintentional offload to TIDL-RT | All except AM62 |
TIDL-4638 | Element-wise layers with height greater than 65535 do not function correctly in host/PC emulation mode | All except AM62 |
TIDL-4480 | Networks containing a large number of Reshape layers may result in long compilation times | All except AM62 |
TIDL-3918 | Pooling Layer with K=2x2 and S=2x2 results in a C7x Exception | All except AM62 |
Known Issues
ID | Description | Affected Platforms | Occurrence | Workaround in this release |
---|---|---|---|---|
TIDL-4024 | QDQ models with self-attention blocks error out during model compilation with "RUNTIME_EXCEPTION : Non-zero status code returned while running TIDL_0 node. Name:'TIDLExecutionProvider_TIDL_0_0' Status Message: CHECK failed: (index) < (current_size_)" | All except AM62 | Rare | None |
TIDL-3905 | TFLite prequantized models with "add_dataconvert_ops": 3 fail with error "Unable to split bias" | All except AM62 | Rare | None |
TIDL-3895 | 2x2s2 Max Pooling with ceil_mode=0 and odd input dimensions results in incorrect outputs | All except AM62 | Rare | None |
TIDL-3886 | MaxPool 2x2 with stride 1x1 is reported as supported but is incorrectly denied offload to C7x | All except AM62 | Rare | None |
TIDL-3845 | Running model compilation and inference back-to-back in the same Python script results in a segmentation fault | All except AM62 | Rare | None |
TIDL-3780 | Prototxt-based scale input may result in slight degradation of quantized output | All except AM62 | Rare | None |
TIDL-3704 | Intermediate subgraphs whose outputs are not 4D result in incorrect outputs | All except AM62 | Rare | None |
TIDL-3622 | Quantization prototxt does not correctly fill information for tflite const layers | All except AM62 | Rare | None |
TIDL-2990 | PReLU layer does not correctly parse the slope parameter and produces incorrect outputs | All except AM62 | Rare | None |