[BUG]: nb::c_contig constraint is not enforced for non-contiguous PyTorch arrays #278
Comments
Just double-checking: wasn't the error in the cholespy repository about a crash? (Rather than an error?)
The bug mentioned a crash, but the reproducer produced this error, without crashing.
Does that mean that the code in the original issue didn't have the annotation? Otherwise it sounds like there are two separate issues.
The bindings in …
Fixed in 23ea320.
nanobind ndarrays provide the ``nb::c_contig`` and ``nb::f_contig`` annotations to specify that input arrays must be represented by contiguous memory blocks in C- or Fortran-style ordering. When this is not the case, nanobind will by default attempt an implicit conversion. This conversion previously failed in some cases: when no underlying scalar type was specified, and when converting from PyTorch. Those issues are addressed by this commit. Fixes issue #278.
@bathal1: I released nanobind v1.5.2 -- you might want to re-release cholespy with those fixes.
Problem description
When binding a function that processes tensors, one can specify contiguity flags such as nb::c_contig. Nanobind is expected to ensure that input tensors are contiguous in memory before calling the function. However, passing a non-contiguous PyTorch tensor to such a function raises an error instead of making the tensor contiguous under the hood.
Modifying the nanobind example project with the following function definition makes it possible to reproduce this issue.
Function binding
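A minimal sketch of such a binding, assuming a hypothetical process() function that takes a 2D float32 CPU array and is exposed through the example project's extension module (all names are illustrative, not the original code):

```cpp
// Hypothetical sketch: a bound function requiring a C-contiguous 2D float array.
#include <nanobind/nanobind.h>
#include <nanobind/ndarray.h>

namespace nb = nanobind;

void process(nb::ndarray<float, nb::ndim<2>, nb::c_contig, nb::device::cpu> tensor) {
    // With nb::c_contig, nanobind is expected to hand this function a contiguous
    // buffer, implicitly copying the input if it is not contiguous already.
    float first = tensor(0, 0);
    (void) first;
}

NB_MODULE(nanobind_example_ext, m) {
    m.def("process", &process);
}
```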
Reproducer
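A matching sketch of the Python side, assuming the module and function names from the binding above:

```python
# Hypothetical sketch: call the bound function with a non-contiguous PyTorch tensor.
import torch
import nanobind_example as m  # assumes the binding is re-exported by the package

x = torch.zeros(3, 5, dtype=torch.float32)
y = x.T                       # transposed view -> not C-contiguous
assert not y.is_contiguous()

# Expected: nanobind makes an implicit contiguous copy and the call succeeds.
# Observed: the call fails with an error instead.
m.process(y)
```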
This raises the following error: