
Bump Microsoft.ML.OnnxRuntime from 1.6.0 to 1.10.0 in /ch9_release/src/Tailwind.Traders.Web #85

Conversation

@dependabot dependabot bot commented on behalf of GitHub on Dec 25, 2021

Bumps Microsoft.ML.OnnxRuntime from 1.6.0 to 1.10.0.

Release notes

Sourced from Microsoft.ML.OnnxRuntime's releases.

ONNX Runtime v1.10.0

Announcements

  • As noted in the deprecation notice in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling Execution Providers other than the default CPUExecutionProvider, e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider']); see the sketch after this list.
  • Python 3.6 support removed for Mac builds. Since Python 3.6 reaches end of life in December 2021, it will no longer be supported from the next release (ORT 1.11) onwards
  • Removed dependency on optional-lite
  • Removed experimental Featurizers code
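
A minimal sketch of the providers requirement mentioned above, using the standard onnxruntime Python API. The model path 'model.onnx', the input name "input", and the input shape are hypothetical placeholders for illustration, not part of this PR or the release notes.

```python
import numpy as np
import onnxruntime as ort

# Since ORT 1.9/1.10, execution providers other than the default
# CPUExecutionProvider must be requested explicitly via `providers`.
session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Dummy input; the name "input" and the shape are assumptions for illustration.
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = session.run(None, {"input": dummy})

print(session.get_providers())  # providers actually registered for this session
```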

General

  • Support for plugging in custom thread creation and join functions to enable the use of external threads
  • Optional type support from opset15

Performance

  • Introduced an indirect convolution method for QLinearConv with a symmetrically quantized filter (i.e., the filter type is int8 and the filter's zero point is 0). The method uses an indirect buffer instead of memcpy'ing the original data and does not need to compute the sum of each output pixel for quantized Conv.
    • X64: new kernels, including AVX2, AVX-VNNI, AVX-512 and AVX-512 VNNI, for general and depthwise quantized Conv.
    • ARM64: new kernels for depthwise quantized Conv.
  • Tensor shape optimization to avoid allocating heap memory in most cases - #9542
  • Added a transpose optimizer that pushes and cancels Transpose ops, significantly improving performance for models that require layout transformation

API

  • Python
    • Following through on the deprecation notice in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling Execution Providers other than the default CPUExecutionProvider, e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
  • C/C++
    • New API to query the CUDA stream in order to launch a custom kernel, for scenarios where custom ops compiled into shared libraries need implicit synchronization with ORT CUDA kernels - #9141
    • Updated Invalid -> OrtInvalidAllocator
    • Updated every item in OrtCudnnConvAlgoSearch to a safer global name
  • WinML
    • New APIs to create OrtValues from Windows platform-specific ID3D12Resources by exposing DirectML Execution Provider-specific APIs. These APIs allow DML to extend the C API and provide EP-specific extensions.
      • OrtSessionOptionsAppendExecutionProviderEx_DML
      • DmlCreateGPUAllocationFromD3DResource
      • DmlFreeGPUAllocation
      • DmlGetD3D12ResourceFromAllocation
    • Bug fix: LearningModel::LoadFromFilePath in UWP apps

Packages

  • Added Mac M1 Universal2 build support for a single binary that runs natively on both Apple silicon and Intel-based Macs. These are included in the official packages and can also be built using "-arch arm64 -arch x86_64"
  • Windows C API Symbols are now uploaded to Microsoft symbol server
  • NuGet package now supports C# on ARM64 Linux
  • Python GPU package now includes both TensorRT and CUDA EPs. Note: EPs need to be explicitly registered to ensure the correct provider is used, e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider']). Please also ensure you have the appropriate TensorRT and CUDA dependencies installed; a short availability check is sketched below.
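
As a hedged illustration of the registration note above, this sketch checks which execution providers the installed package actually exposes before requesting them; get_available_providers() is the standard onnxruntime Python call, and the 'wanted' list is just an example.

```python
import onnxruntime as ort

# Providers available from the installed onnxruntime package/build.
available = ort.get_available_providers()
print(available)

# Only request execution providers that are actually present in this install.
wanted = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in wanted if p in available]
print(providers)
```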

Execution Providers

  • TensorRT EP
    • Python GPU release packages now include support for TensorRT 8.0. Enable TensorrtExecutionProvider by explicitly setting the providers parameter when creating an InferenceSession, e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider']); see the sketch after this list.
    • Published quantized BERT model example
  • OpenVINO EP
    • Add support for OpenVINO 2021.4.x
    • Auto Plugin support
    • IO Buffer/Copy Avoidance Optimizations for GPU plugin
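
To illustrate the TensorRT EP note above, a minimal sketch of creating a session with TensorRT first and CUDA as the fallback. 'model.onnx' is a hypothetical path, and this assumes the matching TensorRT and CUDA dependencies are installed.

```python
import onnxruntime as ort

# Provider order expresses priority: TensorRT is tried first, then CUDA,
# with CPU as the final fallback.
session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
print(session.get_providers())
```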

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

@dependabot dependabot bot added the .NET (Pull requests that update .net code) and dependencies (Pull requests that update a dependency file) labels on Dec 25, 2021
Bumps [Microsoft.ML.OnnxRuntime](https://github.com/Microsoft/onnxruntime) from 1.6.0 to 1.10.0.
- [Release notes](https://github.com/Microsoft/onnxruntime/releases)
- [Changelog](https://github.com/microsoft/onnxruntime/blob/master/docs/ReleaseManagement.md)
- [Commits](microsoft/onnxruntime@v1.6.0...v1.10.0)

---
updated-dependencies:
- dependency-name: Microsoft.ML.OnnxRuntime
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot force-pushed the dependabot/nuget/ch9_release/src/Tailwind.Traders.Web/Microsoft.ML.OnnxRuntime-1.10.0 branch from 33916e1 to 0403a7f on December 28, 2021 at 10:33
@github-actions github-actions bot merged commit 50bfd28 into main Dec 28, 2021
@dependabot dependabot bot deleted the dependabot/nuget/ch9_release/src/Tailwind.Traders.Web/Microsoft.ML.OnnxRuntime-1.10.0 branch December 28, 2021 10:33