Merge pull request #601 from catalystneuro/update_Suite2P_multiplane
Update `Suite2pSegmentationInterface` to support multiple channels and planes
CodyCBakerPhD authored Dec 1, 2023
2 parents 74689fd + f8ecf3c commit 9da6dcb
Showing 5 changed files with 217 additions and 13 deletions.
11 changes: 6 additions & 5 deletions CHANGELOG.md
@@ -1,7 +1,9 @@
# Upcoming

### Features
* Changed the `Suite2pSegmentationInterface` to support multiple plane segmentation outputs. The interface now has `plane_name` and `channel_name` arguments to determine which plane output and channel trace to add to the NWBFile. [PR #601](https://github.com/catalystneuro/neuroconv/pull/601)

### Improvements
* `nwbinspector` has been removed as a minimal dependency. It becomes an extra (optional) dependency with `neuroconv[dandi]`. [PR #672](https://github.com/catalystneuro/neuroconv/pull/672)


@@ -16,17 +18,16 @@
* Added tool function `get_default_dataset_configurations` for identifying and collecting all fields of an in-memory `NWBFile` that could become datasets on disk; and return instances of the Pydantic dataset models filled with default values for chunking/buffering/compression. [PR #569](https://github.com/catalystneuro/neuroconv/pull/569)
* Added tool function `get_default_backend_configuration` for conveniently packaging the results of `get_default_dataset_configurations` into an easy-to-modify mapping from locations of objects within the file to their corresponding dataset configuration options, as well as linking to a specific backend DataIO. [PR #570](https://github.com/catalystneuro/neuroconv/pull/570)
* Added `set_probe()` method to `BaseRecordingExtractorInterface`. [PR #639](https://github.com/catalystneuro/neuroconv/pull/639)
* Changed default chunking of `ImagingExtractorDataChunkIterator` to select `chunk_shape` less than the chunk_mb threshold while keeping the original image size. The default `chunk_mb` changed to 10MB. [PR #667](https://github.com/catalystneuro/neuroconv/pull/667)

### Fixes
* Fixed GenericDataChunkIterator (in hdmf.py) in the case where the number of dimensions is 1 and the size in bytes is greater than the threshold of 1 GB. [PR #638](https://github.com/catalystneuro/neuroconv/pull/638)
* Changed `np.floor` and `np.prod` usage to `math.floor` and `math.prod` in various files. [PR #638](https://github.com/catalystneuro/neuroconv/pull/638)
* Updated minimal required version of DANDI CLI; updated `run_conversion_from_yaml` API function and tests to be compatible with naming changes. [PR #664](https://github.com/catalystneuro/neuroconv/pull/664)

### Improvements
* Change metadata extraction library from `fparse` to `parse`. [PR #654](https://github.com/catalystneuro/neuroconv/pull/654)
* The `dandi` CLI/API is now an optional dependency; it is still required to use the `tool` function for automated upload as well as the YAML-based NeuroConv CLI. [PR #655](https://github.com/catalystneuro/neuroconv/pull/655)



36 changes: 36 additions & 0 deletions docs/conversion_examples_gallery/segmentation/suite2p.rst
@@ -10,6 +16,16 @@ Install NeuroConv with the additional dependencies necessary for reading suite2p
Convert suite2p segmentation data to NWB using
:py:class:`~neuroconv.datainterfaces.ophys.suite2p.suite2pdatainterface.Suite2pSegmentationInterface`.

Suite2p segmentation output is saved for each plane in a separate folder (e.g. "plane0", "plane1").
To specify which plane to convert, use the `plane_name` argument (to see what planes are available, use the
`Suite2pSegmentationInterface.get_available_planes(folder_path)` method).
For multichannel recordings, use the `channel_name` argument to specify the channel name
(to see what channels are available, use the `Suite2pSegmentationInterface.get_available_channels(folder_path)` method).
When not specified, the first plane and channel are used.
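
For example, a minimal sketch of checking what is available before constructing the interface
(assuming `folder_path` points to a Suite2p output folder, as in the examples below):

.. code-block:: python

    >>> from neuroconv.datainterfaces import Suite2pSegmentationInterface
    >>>
    >>> available_planes = Suite2pSegmentationInterface.get_available_planes(folder_path=folder_path)
    >>> available_channels = Suite2pSegmentationInterface.get_available_channels(folder_path=folder_path)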

The optional `plane_segmentation_name` argument specifies the name of the :py:class:`~pynwb.ophys.PlaneSegmentation` to be created.
When multiple planes and/or channels are present, the name should be unique for each plane and channel combination (e.g. "PlaneSegmentationChan1Plane0"); see the sketch following the multi-plane example below.

.. code-block:: python

    >>> from datetime import datetime

@@ -28,3 +38,29 @@ Convert suite2p segmentation data to NWB using

    >>> # Choose a path for saving the nwb file and run the conversion
    >>> nwbfile_path = f"{path_to_save_nwbfile}"
    >>> interface.run_conversion(nwbfile_path=nwbfile_path, metadata=metadata)

**Multi-plane example**

This example shows how to convert multiple planes from the same dataset.

.. code-block:: python

    >>> from datetime import datetime
    >>> from dateutil import tz
    >>> from pathlib import Path
    >>> from neuroconv import ConverterPipe
    >>> from neuroconv.datainterfaces import Suite2pSegmentationInterface
    >>>
    >>> folder_path = OPHYS_DATA_PATH / "segmentation_datasets" / "suite2p"
    >>> interface_first_plane = Suite2pSegmentationInterface(folder_path=folder_path, plane_name="plane0", verbose=False)
    >>> interface_second_plane = Suite2pSegmentationInterface(folder_path=folder_path, plane_name="plane1", verbose=False)
    >>>
    >>> converter = ConverterPipe(data_interfaces=[interface_first_plane, interface_second_plane], verbose=False)
    >>> metadata = converter.get_metadata()
    >>> # For data provenance we add the time zone information to the conversion
    >>> session_start_time = datetime(2020, 1, 1, 12, 30, 0, tzinfo=tz.gettz("US/Pacific"))
    >>> metadata["NWBFile"].update(session_start_time=session_start_time)
    >>>
    >>> # Choose a path for saving the nwb file and run the conversion
    >>> nwbfile_path = f"{output_folder}/file2.nwb"
    >>> converter.run_conversion(nwbfile_path=nwbfile_path, metadata=metadata)
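
When multiple channels are also present, unique `plane_segmentation_name` values keep the outputs
distinct. A sketch, assuming the dataset contains channels named "chan1" and "chan2"
(verify with `Suite2pSegmentationInterface.get_available_channels(folder_path)`):

.. code-block:: python

    >>> interface_chan1 = Suite2pSegmentationInterface(
    ...     folder_path=folder_path,
    ...     channel_name="chan1",
    ...     plane_name="plane0",
    ...     plane_segmentation_name="PlaneSegmentationChan1Plane0",
    ... )
    >>> interface_chan2 = Suite2pSegmentationInterface(
    ...     folder_path=folder_path,
    ...     channel_name="chan2",
    ...     plane_name="plane0",
    ...     plane_segmentation_name="PlaneSegmentationChan2Plane0",
    ... )
    >>> converter = ConverterPipe(data_interfaces=[interface_chan1, interface_chan2], verbose=False)
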
2 changes: 1 addition & 1 deletion src/neuroconv/datainterfaces/ophys/requirements.txt
@@ -1 +1 @@
roiextractors>=0.5.5
127 changes: 121 additions & 6 deletions src/neuroconv/datainterfaces/ophys/suite2p/suite2pdatainterface.py
@@ -1,19 +1,134 @@
from copy import deepcopy
from typing import Optional

from pynwb import NWBFile

from ..basesegmentationextractorinterface import BaseSegmentationExtractorInterface
from ....utils import DeepDict, FolderPathType


def _update_metadata_links_for_plane_segmentation_name(metadata: dict, plane_segmentation_name: str) -> DeepDict:
    """Private utility function to update the metadata with a new plane segmentation name."""
    metadata_copy = deepcopy(metadata)

    plane_segmentation_metadata = metadata_copy["Ophys"]["ImageSegmentation"]["plane_segmentations"][0]
    default_plane_segmentation_name = plane_segmentation_metadata["name"]
    default_plane_suffix = default_plane_segmentation_name.replace("PlaneSegmentation", "")
    new_plane_name_suffix = plane_segmentation_name.replace("PlaneSegmentation", "")
    imaging_plane_name = "ImagingPlane" + new_plane_name_suffix
    plane_segmentation_metadata.update(
        name=plane_segmentation_name,
        imaging_plane=imaging_plane_name,
    )
    metadata_copy["Ophys"]["ImagingPlane"][0].update(name=imaging_plane_name)

    fluorescence_metadata_per_plane = metadata_copy["Ophys"]["Fluorescence"].pop(default_plane_segmentation_name)
    # override the default name of the plane segmentation
    metadata_copy["Ophys"]["Fluorescence"][plane_segmentation_name] = fluorescence_metadata_per_plane
    trace_names = [property_name for property_name in fluorescence_metadata_per_plane.keys() if property_name != "name"]
    for trace_name in trace_names:
        default_raw_traces_name = fluorescence_metadata_per_plane[trace_name]["name"].replace(default_plane_suffix, "")
        fluorescence_metadata_per_plane[trace_name].update(name=default_raw_traces_name + new_plane_name_suffix)

    segmentation_images_metadata = metadata_copy["Ophys"]["SegmentationImages"].pop(default_plane_segmentation_name)
    metadata_copy["Ophys"]["SegmentationImages"][plane_segmentation_name] = segmentation_images_metadata
    metadata_copy["Ophys"]["SegmentationImages"][plane_segmentation_name].update(
        correlation=dict(name=f"CorrelationImage{new_plane_name_suffix}"),
        mean=dict(name=f"MeanImage{new_plane_name_suffix}"),
    )

    return metadata_copy
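
# Illustrative example (not part of the source): assuming default metadata whose plane
# segmentation is named "PlaneSegmentation", calling
#     _update_metadata_links_for_plane_segmentation_name(
#         metadata=metadata, plane_segmentation_name="PlaneSegmentationChan1Plane0"
#     )
# renames the linked imaging plane to "ImagingPlaneChan1Plane0", rekeys the "Fluorescence"
# and "SegmentationImages" entries under the new plane segmentation name, and suffixes the
# trace and summary image names (e.g. "MeanImage" becomes "MeanImageChan1Plane0").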


class Suite2pSegmentationInterface(BaseSegmentationExtractorInterface):
    """Data interface for Suite2pSegmentationExtractor."""

    @classmethod
    def get_available_planes(cls, folder_path: FolderPathType) -> dict:
        from roiextractors import Suite2pSegmentationExtractor

        return Suite2pSegmentationExtractor.get_available_planes(folder_path=folder_path)

    @classmethod
    def get_available_channels(cls, folder_path: FolderPathType) -> dict:
        from roiextractors import Suite2pSegmentationExtractor

        return Suite2pSegmentationExtractor.get_available_channels(folder_path=folder_path)

    def __init__(
        self,
        folder_path: FolderPathType,
        channel_name: Optional[str] = None,
        plane_name: Optional[str] = None,
        plane_segmentation_name: Optional[str] = None,
        verbose: bool = True,
        combined: Optional[bool] = False,  # TODO: to be removed
        plane_no: Optional[int] = None,  # TODO: to be removed
    ):
        """
        Parameters
        ----------
        folder_path : FolderPathType
        verbose : bool, default: True
        channel_name : str, optional
            The name of the channel to load. To determine what channels are available, use
            Suite2pSegmentationInterface.get_available_channels(folder_path).
        plane_name : str, optional
            The name of the plane to load. To determine what planes are available, use
            Suite2pSegmentationInterface.get_available_planes(folder_path).
        plane_segmentation_name : str, optional
            The name of the plane segmentation to be added.
        """
        super().__init__(folder_path=folder_path, channel_name=channel_name, plane_name=plane_name)
        available_planes = self.get_available_planes(folder_path=self.source_data["folder_path"])
        available_channels = self.get_available_channels(folder_path=self.source_data["folder_path"])

        if plane_segmentation_name is None:
            plane_segmentation_name = (
                "PlaneSegmentation"
                if len(available_planes) == 1 and len(available_channels) == 1
                else f"PlaneSegmentation{self.segmentation_extractor.channel_name.capitalize()}{self.segmentation_extractor.plane_name.capitalize()}"
            )

        self.plane_segmentation_name = plane_segmentation_name
        self.verbose = verbose
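        # Illustrative note (not in the source): with channel_name="chan2" and
        # plane_name="plane0", the default resolves to "PlaneSegmentationChan2Plane0";
        # when only one plane and one channel are available it stays "PlaneSegmentation".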

    def get_metadata(self) -> DeepDict:
        metadata = super().get_metadata()

        # No need to update the metadata links for the default plane segmentation name
        default_plane_segmentation_name = metadata["Ophys"]["ImageSegmentation"]["plane_segmentations"][0]["name"]
        if self.plane_segmentation_name == default_plane_segmentation_name:
            return metadata

        metadata = _update_metadata_links_for_plane_segmentation_name(
            metadata=metadata,
            plane_segmentation_name=self.plane_segmentation_name,
        )

        return metadata

    def add_to_nwbfile(
        self,
        nwbfile: NWBFile,
        metadata: Optional[dict] = None,
        stub_test: bool = False,
        stub_frames: int = 100,
        include_roi_centroids: bool = True,
        include_roi_acceptance: bool = True,
        mask_type: Optional[str] = "image",  # Literal["image", "pixel", "voxel"]
        plane_segmentation_name: Optional[str] = None,
        iterator_options: Optional[dict] = None,
        compression_options: Optional[dict] = None,
    ):
        super().add_to_nwbfile(
            nwbfile=nwbfile,
            metadata=metadata,
            stub_test=stub_test,
            stub_frames=stub_frames,
            include_roi_centroids=include_roi_centroids,
            include_roi_acceptance=include_roi_acceptance,
            mask_type=mask_type,
            plane_segmentation_name=self.plane_segmentation_name,
            iterator_options=iterator_options,
            compression_options=compression_options,
        )
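
# Hypothetical usage sketch (not part of the source); the folder path and channel/plane
# names below are assumptions for illustration:
#
#     from pynwb.testing.mock.file import mock_NWBFile
#
#     interface = Suite2pSegmentationInterface(
#         folder_path="path/to/suite2p", channel_name="chan1", plane_name="plane0"
#     )
#     nwbfile = mock_NWBFile()
#     interface.add_to_nwbfile(nwbfile=nwbfile, metadata=interface.get_metadata())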
54 changes: 53 additions & 1 deletion tests/test_on_data/test_segmentation_interfaces.py
@@ -1,3 +1,4 @@
from datetime import datetime
from unittest import TestCase

from neuroconv.datainterfaces import (
@@ -67,5 +68,56 @@ def test_extract_segmentation_interface_non_default_output_struct_name(self):

class TestSuite2pSegmentationInterface(SegmentationExtractorInterfaceTestMixin, TestCase):
    data_interface_cls = Suite2pSegmentationInterface
    interface_kwargs = [
        dict(
            folder_path=str(OPHYS_DATA_PATH / "segmentation_datasets" / "suite2p"),
            channel_name="chan1",
            plane_name="plane0",
        ),
        dict(
            folder_path=str(OPHYS_DATA_PATH / "segmentation_datasets" / "suite2p"),
            channel_name="chan2",
            plane_name="plane0",
        ),
    ]
    save_directory = OUTPUT_PATH

    @classmethod
    def setUpClass(cls) -> None:
        plane_suffices = ["Chan1Plane0", "Chan2Plane0"]
        cls.imaging_plane_names = ["ImagingPlane" + plane_suffix for plane_suffix in plane_suffices]
        cls.plane_segmentation_names = ["PlaneSegmentation" + plane_suffix for plane_suffix in plane_suffices]
        cls.mean_image_names = ["MeanImage" + plane_suffix for plane_suffix in plane_suffices]
        cls.correlation_image_names = ["CorrelationImage" + plane_suffix for plane_suffix in plane_suffices]
        cls.raw_traces_names = ["RoiResponseSeries" + plane_suffix for plane_suffix in plane_suffices]
        cls.neuropil_traces_names = ["Neuropil" + plane_suffix for plane_suffix in plane_suffices]
        cls.deconvolved_trace_name = "Deconvolved" + plane_suffices[0]

    def check_extracted_metadata(self, metadata: dict):
        """Check extracted metadata is adjusted correctly for each plane and channel combination."""
        self.assertEqual(metadata["Ophys"]["ImagingPlane"][0]["name"], self.imaging_plane_names[self.case])
        plane_segmentation_metadata = metadata["Ophys"]["ImageSegmentation"]["plane_segmentations"][0]
        plane_segmentation_name = self.plane_segmentation_names[self.case]
        self.assertEqual(plane_segmentation_metadata["name"], plane_segmentation_name)
        summary_images_metadata = metadata["Ophys"]["SegmentationImages"][plane_segmentation_name]
        self.assertEqual(summary_images_metadata["correlation"]["name"], self.correlation_image_names[self.case])
        self.assertEqual(summary_images_metadata["mean"]["name"], self.mean_image_names[self.case])

        raw_traces_metadata = metadata["Ophys"]["Fluorescence"][plane_segmentation_name]["raw"]
        self.assertEqual(raw_traces_metadata["name"], self.raw_traces_names[self.case])
        neuropil_traces_metadata = metadata["Ophys"]["Fluorescence"][plane_segmentation_name]["neuropil"]
        self.assertEqual(neuropil_traces_metadata["name"], self.neuropil_traces_names[self.case])
        if self.case == 0:
            deconvolved_trace_metadata = metadata["Ophys"]["Fluorescence"][plane_segmentation_name]["deconvolved"]
            self.assertEqual(deconvolved_trace_metadata["name"], self.deconvolved_trace_name)


class TestSuite2pSegmentationInterfaceWithStubTest(SegmentationExtractorInterfaceTestMixin, TestCase):
    data_interface_cls = Suite2pSegmentationInterface
    interface_kwargs = dict(
        folder_path=str(OPHYS_DATA_PATH / "segmentation_datasets" / "suite2p"),
        channel_name="chan1",
        plane_name="plane0",
    )
    save_directory = OUTPUT_PATH
    conversion_options = dict(stub_test=True)
