FastMarching crashes on a volume created from a NumPy array #98

Closed
Tracked by #133
ylep opened this issue May 12, 2023 · 3 comments
Labels: bug (Something isn't working)

Comments

ylep (Member) commented May 12, 2023

Describe the bug
FastMarching crashes when given a Volume_S16 created from a NumPy array:

  • if the NumPy array points to the data of another Volume allocated by AIMS (case 1 below);
  • or if the NumPy array is allocated by NumPy itself (case 2 below).

I could not spot any difference in volume size, strides, etc. between the original volume and the converted one in case 1. The Volume_S16 data structure seems to be okay, since the volume can be written to a file without issue.

FastMarching probably expects some property of the Volume that is not fulfilled in the converted volume. My first guess would have been contiguity along the X axis, but the strides are the same (moreover, this crash happens on BrainVISA 5.1.1, before the recent changes by @denisri to allow arbitrary strides).
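
For reference (not part of the original report), a quick way to compare the memory layout of the original and the converted array, including contiguity flags on top of the strides, is sketched below; vol is the same dummy Volume_S16 as in the reproduction script that follows.

from soma import aims

vol = aims.Volume(141, 180, 197, dtype='S16')
arr = vol.np                  # array view on the AIMS-allocated volume
arr2 = aims.Volume(arr).np    # case 1: volume rebuilt from the NumPy array

# Compare the layout of both arrays; in this report they are identical,
# which is why contiguity alone does not explain the crash.
for name, a in (('original', arr), ('roundtripped', arr2)):
    print(name, a.shape, a.strides,
          'C-contiguous:', a.flags['C_CONTIGUOUS'],
          'F-contiguous:', a.flags['F_CONTIGUOUS'])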

To Reproduce

#!/usr/bin/env python3

import sys

import numpy as np
from soma import aims, aimsalgo


# Dummy test volume
vol = aims.Volume(141, 180, 197, dtype='S16')
vol.fill(0)
#vol[0, 0, 0] = 1  # optional, does not change crash

print("repr(vol) =", repr(vol))
print("vol.getSize() =", vol.getSize())
print("vol.getBorders() =", vol.getBorders())
print("vol.getStrides() =", vol.getStrides())
print("vol.getVoxelSize() =", vol.getVoxelSize())
print("vol.refVolume().isNull() =", vol.refVolume().isNull())
print("vol.header() =", vol.header())
print("vol.__dict__.keys()= ", vol.__dict__.keys())
print("vol._cpp_rcptr =", vol._cpp_rcptr)
arr = vol.np
print("id(vol.np) =", id(vol.np))
print("vol.__dict__.keys()= ", vol.__dict__.keys())
if hasattr(vol, "_arrayext"):
    print("id(vol._arrayext) =", id(vol._arrayext))
if hasattr(vol, "_arrayref"):
    print("id(vol._arrayref) =", id(vol._arrayref))
print("vol.np.shape =", arr.shape)
print("vol.np.dtype =", arr.dtype)
print("vol.np.strides =", arr.strides)
print("vol.np.ctypes.data =", arr.ctypes.data)
#print("vol.np.__dict__.keys()", arr.__dict__.keys())  # no dict

# FastMarching on original volume: ok
fm = aims.FastMarching()
fm.setVerbose(True)
fm.doit(vol, {1}, {0})

print()

# Case 1 (AIMS-allocated volume converted to ndarray and back): CRASH
vol2 = aims.Volume(arr)

# Case 2 (NumPy-allocated volume): CRASH
#vol2 = aims.Volume(np.copy(arr))  # same crash

# Case 3 (AIMS-allocated volume): works fine
#vol2 = aims.Volume(arr.shape[0], arr.shape[1], arr.shape[2], dtype=arr.dtype)
#vol2.np[...] = arr

#vol2.copyHeaderFrom(vol.header())  # optional, does not change crash
print("repr(vol2) =", repr(vol2))
print("vol2.getSize() =", vol.getSize())
print("vol2.getBorders() =", vol2.getBorders())
print("vol2.getStrides() =", vol2.getStrides())
print("vol2.getVoxelSize() =", vol2.getVoxelSize())
print("vol2.refVolume().isNull() =", vol2.refVolume().isNull())
print("vol2.header() =", vol2.header())
print("vol2.__dict__.keys() =", vol2.__dict__.keys())
print("vol2._cpp_rcptr =", vol2._cpp_rcptr)
if hasattr(vol2, "_arrayext"):
    print("id(vol2._arrayext) =", id(vol2._arrayext))
if hasattr(vol2, "_arrayref"):
    print("id(vol2._arrayref) =", id(vol2._arrayref))
print("id(vol2.np) =", id(vol2.np))
print("vol2.__dict__.keys()= ", vol2.__dict__.keys())
print("vol2.np.shape =", vol2.np.shape)
print("vol2.np.dtype =", vol2.np.dtype)
print("vol2.np.strides =", vol2.np.strides)
print("vol2.np.ctypes.data =", vol2.np.ctypes.data)
#print("vol2.np.__dict__.keys()", vol2.np.__dict__.keys())  # no dict

# roundtripped volume seems well-formed: it writes without error
aims.write(vol2, "roundtripped_vol.nii.gz")

# FastMarching on roundtripped volume: CRASH
fm = aims.FastMarching()
fm.setVerbose(True)
print('before crash', file=sys.stderr)
fm.doit(vol2, {1}, {0})
print('after crash', file=sys.stderr)

Here is the output of that script in case 1:

repr(vol) = <soma.aims.Volume_S16 object at 0x7f6039debd90>
vol.getSize() = [141, 180, 197, 1]
vol.getBorders() = [0, 0, 0, 0, 0, 0, 0, 0]
vol.getStrides() = [1, 141, 25380, 4999860]
vol.getVoxelSize() = [1, 1, 1, 1]
vol.refVolume().isNull() = True
vol.header() = { 'volume_dimension' : [ 141, 180, 197, 1 ], 'sizeX' : 141, 'sizeY' : 180, 'sizeZ' : 197, 'sizeT' : 1 }
vol.__dict__.keys()=  dict_keys(['_cpp_rcptr'])
vol._cpp_rcptr = 1
id(vol.np) = 140050023388368
vol.__dict__.keys()=  dict_keys(['_cpp_rcptr', '_arrayref'])
id(vol._arrayref) = 140050032060480
vol.np.shape = (141, 180, 197, 1)
vol.np.dtype = int16
vol.np.strides = (2, 282, 50760, 9999720)
vol.np.ctypes.data = 140049990856720
FastMarching...
initializing speed map
work voxels: 0, interface voxels: 0

repr(vol2) = <soma.aims.Volume_S16 object at 0x7f5fefecde10>
vol2.getSize() = [141, 180, 197, 1]
vol2.getBorders() = [0, 0, 0, 0, 0, 0, 0, 0]
vol2.getStrides() = [1, 141, 25380, 4999860]
vol2.getVoxelSize() = [1, 1, 1, 1]
vol2.refVolume().isNull() = True
vol2.header() = { 'volume_dimension' : [ 141, 180, 197, 1 ], 'sizeX' : 141, 'sizeY' : 180, 'sizeZ' : 197, 'sizeT' : 1 }
vol2.__dict__.keys() = dict_keys(['_arrayext', '_cpp_rcptr'])
vol2._cpp_rcptr = 1
id(vol2._arrayext) = 140050023388368
id(vol2.np) = 140050023388368
vol2.__dict__.keys()=  dict_keys(['_arrayext', '_cpp_rcptr'])
vol2.np.shape = (141, 180, 197, 1)
vol2.np.dtype = int16
vol2.np.strides = (2, 282, 50760, 9999720)
vol2.np.ctypes.data = 140049990856720
before crash
FastMarching...
Segmentation fault (core dumped)

Environment:

  • Engine: Singularity
  • Version of BrainVISA: 5.1.1
ylep added the bug label May 12, 2023
ylep (Member, Author) commented May 12, 2023

Interestingly, the behaviour is exactly the same on the master branch! (crash in cases 1 and 2, no crash in case 3)

denisri added a commit that referenced this issue May 12, 2023
…doesn't own its data

in this case we force a more "regular" allocator, otherwise the volume
is not allocated and the program crashes (#98).
denisri added a commit that referenced this issue May 12, 2023
…doesn't own its data

in this case we force a more "regular" allocator, otherwise the volume
is not allocated and the program crashes (#98).
denisri (Contributor) commented May 12, 2023

It was an allocator problem: in SparseVolume::alloc() the source volume's allocator is reused to allocate a second volume, but in the case of a NumPy array the data block is not owned by the volume, so the allocator type is "unallocated". In that case we have to force a new allocator. It's fixed in 5.1 and master; the example code no longer crashes (but I have not checked the correctness of the result).
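
For reference (not part of the original thread), a workaround on an affected release such as BrainVISA 5.1.1 is the approach of Case 3 in the reproduction script: let AIMS allocate the volume, so it owns its data block and gets a regular allocator, then copy the NumPy data into it. A minimal sketch, with a hypothetical helper name:

from soma import aims

def aims_owned_volume(arr):
    # Hypothetical helper: allocate through AIMS so the volume owns its data
    # block (and thus gets a regular allocator), then copy the NumPy data in.
    # arr is expected to be 4D (x, y, z, t), like the arrays returned by vol.np.
    vol = aims.Volume(arr.shape[0], arr.shape[1], arr.shape[2], dtype=arr.dtype)
    vol.np[...] = arr
    return vol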

ylep (Member, Author) commented May 17, 2023

Thanks a lot @denisri! I did not check the correctness of the result either, but it seems very unlikely that your fix affects the correctness, so I think we can close this issue!

ylep closed this as completed May 17, 2023