
[WebNN EP] Fix issues with MLTensor caching #22701

Merged: 2 commits merged into microsoft:main on Nov 6, 2024

Conversation

@egalli (Contributor) commented on Nov 2, 2024:


This PR fixes a bug in the search for a compatible `MLTensor` in the cache: the compatibility check never compared the number of dimensions in the shape, so a cached buffer of shape `[1]` could match a request for `[1, 1, 256, 256]`.

This PR also adds better handling when attempting to force an `MLTensor` to a different shape; a sketch of both fixes follows.
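For illustration, here is a minimal TypeScript sketch of both fixes. The helper names (`TensorCacheEntry`, `isCompatible`, `ensureTensor`) are hypothetical and much simplified relative to the actual tensor manager in onnxruntime-web; only `MLContext.createTensor()` and `MLTensor.destroy()` are WebNN API calls.

```typescript
// Minimal sketch, not the actual onnxruntime-web code: the cache-entry
// layout and helper names are illustrative assumptions.

interface TensorCacheEntry {
  tensor: MLTensor;             // WebNN tensor handle
  dataType: MLOperandDataType;  // e.g. 'float32'
  shape: readonly number[];
}

// A cached tensor is reusable only if the data type, the rank, and every
// dimension match the request.
function isCompatible(
  entry: TensorCacheEntry,
  dataType: MLOperandDataType,
  shape: readonly number[],
): boolean {
  return (
    entry.dataType === dataType &&
    // The fix: compare the number of dimensions first. Without this check,
    // an element-wise comparison over the shorter shape would let a cached
    // `[1]` tensor "match" a request for `[1, 1, 256, 256]`.
    entry.shape.length === shape.length &&
    entry.shape.every((dim, i) => dim === shape[i])
  );
}

// Hypothetical handling for the second fix: when a cached tensor cannot be
// forced to the requested shape, release it and create a fresh tensor
// instead of reusing a mismatched buffer.
async function ensureTensor(
  context: MLContext,
  cached: TensorCacheEntry | undefined,
  dataType: MLOperandDataType,
  shape: number[],
): Promise<MLTensor> {
  if (cached && isCompatible(cached, dataType, shape)) {
    return cached.tensor;
  }
  cached?.tensor.destroy();
  return context.createTensor({ dataType, shape, readable: true, writable: true });
}
```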
@Honry (Contributor) left a comment:

👍

@fdwr (Contributor) commented on Nov 5, 2024:

/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline

@fdwr (Contributor) commented on Nov 5, 2024:

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline

@fdwr (Contributor) commented on Nov 5, 2024:

/azp run Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline

@fdwr (Contributor) commented on Nov 5, 2024:

/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models

Azure Pipelines: Successfully started running 1 pipeline(s).

Azure Pipelines: Could not run because the pipeline triggers exclude this branch/path. (1 similar comment)

Azure Pipelines: Successfully started running 1 pipeline(s).

@fdwr (Contributor) left a comment:

👍

@fdwr (Contributor) commented on Nov 5, 2024:

/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline

@fdwr (Contributor) commented on Nov 5, 2024:

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline

@fdwr (Contributor) commented on Nov 5, 2024:

/azp run Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline

@fdwr (Contributor) commented on Nov 5, 2024:

/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models

Azure Pipelines: Successfully started running 1 pipeline(s).

Azure Pipelines: Could not run because the pipeline triggers exclude this branch/path. (1 similar comment)

Azure Pipelines: Successfully started running 1 pipeline(s).

@guschmue merged commit 1cb5cee into microsoft:main on Nov 6, 2024. 50 checks passed.
@guschmue added the ep:WebNN (WebNN execution provider) label on Nov 6, 2024.
ishwar-raut1 pushed a commit to ishwar-raut1/onnxruntime that referenced this pull request on Nov 19, 2024.
guschmue pushed a commit that referenced this pull request on Dec 2, 2024.
ankitm3k pushed three commits to intel/onnxruntime that referenced this pull request on Dec 11, 2024.
(Each referencing commit carries the same message as the PR description above.)