
Adopt MatX v0.4.0 #921

Closed

Conversation

dagardner-nv
Contributor

Description

  • Adopt updated utilities to pick up MatX 0.4.0
  • Currently we are tracking a commit hash to adopt fixes needed for Morpheus, which were rolled into the 0.4.0 release.

fixes #909

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@dagardner-nv dagardner-nv requested a review from a team as a code owner April 28, 2023 19:09
@dagardner-nv dagardner-nv added non-breaking Non-breaking change improvement Improvement to existing functionality 2 - In Progress and removed 2 - In Progress labels Apr 28, 2023
@cwharris
Contributor

cwharris commented May 1, 2023

The build error is caused by a compute capability issue: https://stackoverflow.com/questions/74201452/cuda-11-8-fails-to-compile-atomiccas-for-16-bit-unsigned-integers-is-cudas-doc

Looking for a workaround.
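For context, the linked Stack Overflow thread concerns the lack of a native 16-bit `atomicCAS` on older compute capabilities. A common workaround pattern (sketched here host-side with `std::atomic`; this is illustrative, not MatX's actual fix) is to perform the CAS on the aligned 32-bit word containing the 16-bit value, modifying only the relevant half:

```cpp
#include <atomic>
#include <cstdint>

// Emulate a 16-bit compare-and-swap using a 32-bit CAS on the containing
// word. Device code on pre-sm_70 GPUs would use atomicCAS on an
// unsigned int in the same way.
uint16_t cas16_via_32(std::atomic<uint32_t> &word, bool high_half,
                      uint16_t expected, uint16_t desired) {
    uint32_t old_word = word.load();
    const int shift = high_half ? 16 : 0;
    for (;;) {
        uint16_t cur = static_cast<uint16_t>(old_word >> shift);
        if (cur != expected) return cur;  // CAS fails: observed value differs
        uint32_t new_word = (old_word & ~(0xFFFFu << shift)) |
                            (static_cast<uint32_t>(desired) << shift);
        // On failure old_word is refreshed with the current value; retry.
        if (word.compare_exchange_weak(old_word, new_word)) return cur;
    }
}
```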

@dagardner-nv dagardner-nv added Merge After Dependencies PR is completed and reviewed but depends on another PR; do not merge out of order and removed 3 - Ready for Review labels May 1, 2023
@cwharris
Contributor

cwharris commented May 1, 2023

@cliffburdick has fixed the compilation error, but the PR does not add support for half-precision reductions on the Pascal architecture.

NVIDIA/MatX#412

@cliffburdick

> @cliffburdick has fixed the compilation error, but the PR does not add support for half-precision reductions on the Pascal architecture.
>
> NVIDIA/MatX#412

Please let us know if there's a use case for this on Pascal. Specifically, this would be argmax/argmin with fp16/bf16 only, I believe.

@dagardner-nv
Contributor Author

> @cliffburdick has fixed the compilation error, but the PR does not add support for half-precision reductions on the Pascal architecture.
> NVIDIA/MatX#412
>
> Please let us know if there's a use case for this on Pascal. Specifically, this would be argmax/argmin with fp16/bf16 only, I believe.

Thanks for the quick fix!
We use matx::rmax; I'm not sure whether that uses argmax, but in any case we only use matx::rmax for 32-bit and 64-bit floats.

@dagardner-nv
Contributor Author

Closing this PR, as the goal was to move from tracking a commit hash to an official release tag.

@dagardner-nv dagardner-nv deleted the david-matx-4 branch May 1, 2023 23:49
@cliffburdick

> > @cliffburdick has fixed the compilation error, but the PR does not add support for half-precision reductions on the Pascal architecture.
> > NVIDIA/MatX#412
>
> > Please let us know if there's a use case for this on Pascal. Specifically, this would be argmax/argmin with fp16/bf16 only, I believe.
>
> Thanks for the quick fix! We use matx::rmax; I'm not sure whether that uses argmax, but in any case we only use matx::rmax for 32-bit and 64-bit floats.

Those are just max/min and will use CUB, so I don't think you're using anything that would be affected.
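To illustrate the distinction being drawn here (a standard C++ sketch; neither function is MatX code): a value-only max reduction such as matx::rmax returns the maximum value, while the affected Pascal code path concerned argmax/argmin, which return the *index* of the extreme element:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Value-only max reduction: returns the maximum element itself.
float value_max(const std::vector<float> &v) {
    return *std::max_element(v.begin(), v.end());
}

// Argmax: returns the position of the maximum element, which requires
// tracking (value, index) pairs atomically in a parallel reduction.
std::size_t arg_max(const std::vector<float> &v) {
    return static_cast<std::size_t>(
        std::max_element(v.begin(), v.end()) - v.begin());
}
```

Since the reported usage is value-only max over fp32/fp64, the fp16/bf16 argmax limitation on Pascal would not apply.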
