
Commit

Merge branch 'main' into fix_hdf5_get_frames
pauladkisson authored Jan 30, 2024
2 parents f71afcb + f49355c commit cfcc121
Showing 78 changed files with 6,344 additions and 1,231 deletions.
4 changes: 3 additions & 1 deletion .github/ISSUE_TEMPLATE/bug_report.yml
@@ -60,9 +60,11 @@ body:
attributes:
label: Python Version
options:
- 3.7
- 3.8
- 3.9
- 3.10
- 3.11
- 3.12
validations:
required: true
- type: textarea
6 changes: 3 additions & 3 deletions .github/ISSUE_TEMPLATE/config.yml
@@ -3,6 +3,6 @@ contact_links:
- name: SpikeInterface
url: https://github.com/SpikeInterface/spikeinterface
about: The sister project for reading from extracellular electrophysiology data formats.
- name: NWB Conversion Tools
url: https://github.com/catalystneuro/nwb-conversion-tools
about: For writing any ROIExtractor object to the Neurodata Without Borders (NWB) format.
- name: NeuroConv
url: https://github.com/catalystneuro/neuroconv
about: Convenient package used for writing any ROI Extractor object to the Neurodata Without Borders (NWB) format.
24 changes: 3 additions & 21 deletions .github/ISSUE_TEMPLATE/feature_request.yml
@@ -8,32 +8,14 @@ body:
value: |
## Thank you for your suggestion!
We welcome any ideas about how to make **nwb-conversion-tools** better for the community.
We welcome any ideas about how to make **roiextractors** better for the community.
Please keep in mind that new features may not get implemented immediately.
- type: textarea
id: summary
attributes:
label: What would you like to see added to nwb-conversion-tools?
description: |
What are you trying to achieve with **nwb-conversion-tools**?
Is this a more convenient way to do something that is already possible, or is a workaround currently unfeasible?
validations:
required: true
- type: textarea
id: problem
attributes:
label: Is your feature request related to a problem?
description: A clear and concise description of what the problem is.
- type: textarea
id: solution
attributes:
label: What solution would you like?
description: |
A clear and concise description of what you want to happen.
Describe alternative solutions you have considered.
label: What would you like to see added to ROI Extractors?
description: Is this a more convenient way to do something that is already possible, or is a workaround currently unfeasible?
validations:
required: true
- type: dropdown
32 changes: 8 additions & 24 deletions .github/workflows/add-to-dashboard.yml
@@ -1,35 +1,19 @@
name: Add Issue or PR to Dashboard
name: Add Issue or Pull Request to Dashboard

on:
issues:
types: opened

types:
- opened
pull_request:
types:
- opened

jobs:
issue_opened:
name: Add Issue to Dashboard
runs-on: ubuntu-latest
if: github.event_name == 'issues'
steps:
- name: Add Issue to Dashboard
uses: leonsteinhaeuser/project-beta-automations@v1.2.1
with:
gh_token: ${{ secrets.MY_GITHUB_TOKEN }}
organization: catalystneuro
project_id: 3
resource_node_id: ${{ github.event.issue.node_id }}
pr_opened:
name: Add PR to Dashboard
add-to-project:
name: Add issue or pull request to project
runs-on: ubuntu-latest
if: github.event_name == 'pull_request' && github.event.action == 'opened'
steps:
- name: Add PR to Dashboard
uses: leonsteinhaeuser/project-beta-automations@v1.2.1
- uses: actions/add-to-project@v0.5.0
with:
gh_token: ${{ secrets.MY_GITHUB_TOKEN }}
organization: catalystneuro
project_id: 3
resource_node_id: ${{ github.event.pull_request.node_id }}
project-url: https://github.com/orgs/catalystneuro/projects/3
github-token: ${{ secrets.PROJECT_TOKEN }}
12 changes: 12 additions & 0 deletions .github/workflows/check-docstrings.yaml
@@ -0,0 +1,12 @@
name: Check Docstrings
on:
workflow_dispatch:
pull_request:

jobs:
check-docstrings:
uses: catalystneuro/.github/.github/workflows/check_docstrings.yaml@main
with:
python-version: '3.10'
repository: 'catalystneuro/roiextractors'
package-name: 'roiextractors'
53 changes: 26 additions & 27 deletions .github/workflows/testing.yml → .github/workflows/run-tests.yml
@@ -1,18 +1,28 @@
name: Full Tests
on:
schedule:
- cron: "0 0 * * *" # daily
pull_request:
workflow_dispatch:
workflow_run:
workflows: [update-testing-data]
types: [completed]

jobs:
run:
on-failure:
name: Notify on failure
runs-on: ${{ matrix.os }}
if: ${{ github.event.workflow_run.conclusion == 'failure' }}
steps:
- run: |
echo 'The triggering workflow failed.'
on-success:
name: Full tests on ${{ matrix.os }} with Python ${{ matrix.python-version }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: [3.7, 3.8, 3.9]
os: [ubuntu-latest, windows-latest] # macos-latest; problems with git-annex and all tests require data here (no minimal/internals)
python-version: [3.8, 3.9, "3.10", 3.11]
os: [ubuntu-latest, windows-latest, macos-latest]
steps:
- uses: s-weigand/setup-conda@v1
- uses: actions/checkout@v2
@@ -28,13 +38,16 @@ jobs:
pip install pytest-xdist
git config --global user.email "CI@example.com"
git config --global user.name "CI Almighty"
pip install wheel # needed for scanimage
- name: Test minimal installation
run: pip install .
- name: Test full installation
run: pip install .[full]
- name: Install testing requirements (-e needed for codecov report)
run: pip install -e .[test]
pip install wheel==0.41.2 # needed for scanimage
- name: Install roiextractors with minimal requirements
run: pip install .[test]

- name: Run minimal tests
run: pytest tests/test_internals -n auto --dist loadscope

- name: Test full installation (-e needed for codecov report)
run: pip install -e .[full]

- name: Get ophys_testing_data current head hash
id: ophys
@@ -44,21 +57,7 @@ jobs:
id: cache-ophys-datasets
with:
path: ./ophys_testing_data
key: ophys-datasets-051822-${{ matrix.os }}-${{ steps.ophys.outputs.HASH_OPHYS_DATASET }}
- if: ${{ steps.cache-ophys-datasets.outputs.cache-hit == false && matrix.os == 'ubuntu-latest' }}
name: Get datalad - Linux
run: conda install -c conda-forge datalad==0.16.3
- if: ${{ steps.cache-ophys-datasets.outputs.cache-hit == false && matrix.os == 'windows-latest' }}
name: Get git-annex - Windows
uses: crazy-max/ghaction-chocolatey@v1.6.0
with:
args: install git-annex --ignore-checksums
- if: ${{ steps.cache-ophys-datasets.outputs.cache-hit == false && matrix.os == 'windows-latest' }}
name: Get datalad - Windows and Mac
run: pip install datalad==0.16.3
- name: "Force GIN: ophys download"
if: steps.cache-ophys-datasets.outputs.cache-hit == false
run: datalad install -rg https://gin.g-node.org/CatalystNeuro/ophys_testing_data
key: ophys-datasets-042023-${{ matrix.os }}-${{ steps.ophys.outputs.HASH_OPHYS_DATASET }}

- name: Run full pytest with coverage
run: pytest -n auto --dist loadscope --cov=./ --cov-report xml:./codecov.xml
48 changes: 48 additions & 0 deletions .github/workflows/update-testing-data.yml
@@ -0,0 +1,48 @@
name: update-testing-data
on:
schedule:
- cron: "0 0 * * *" # daily
workflow_dispatch:

jobs:
run:
name: Update testing data on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
steps:
- uses: s-weigand/setup-conda@v1
- uses: actions/checkout@v2
- run: git fetch --prune --unshallow --tags
- name: Setup Python 3.11
uses: actions/setup-python@v2
with:
python-version: 3.11

- name: Global Setup
run: |
pip install -U pip
git config --global user.email "CI@example.com"
git config --global user.name "CI Almighty"
pip install wheel==0.41.2 # needed for scanimage
- name: Get ophys_testing_data current head hash
id: ophys
run: echo "::set-output name=HASH_OPHYS_DATASET::$(git ls-remote https://gin.g-node.org/CatalystNeuro/ophys_testing_data.git HEAD | cut -f1)"
- name: Cache ophys dataset - ${{ steps.ophys.outputs.HASH_OPHYS_DATASET }}
uses: actions/cache@v2
id: cache-ophys-datasets
with:
path: ./ophys_testing_data
key: ophys-datasets-042023-${{ matrix.os }}-${{ steps.ophys.outputs.HASH_OPHYS_DATASET }}
- if: steps.cache-ophys-datasets.outputs.cache-hit == false
name: Install and configure AWS CLI
run: |
pip install awscli==1.29.56
aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY_ID }}
aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_ACCESS_KEY }}
- if: steps.cache-ophys-datasets.outputs.cache-hit == false
name: Download data from S3
run: aws s3 cp --recursive s3://${{ secrets.S3_GIN_BUCKET }}//ophys_testing_data ./ophys_testing_data
11 changes: 9 additions & 2 deletions .pre-commit-config.yaml
@@ -1,12 +1,19 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.3.0
rev: v4.5.0
hooks:
- id: check-yaml
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: https://github.com/psf/black
rev: 22.6.0
rev: 24.1.1
hooks:
- id: black
exclude: ^docs/
- repo: https://github.com/pycqa/pydocstyle
rev: 6.3.0
hooks:
- id: pydocstyle
args:
- --convention=numpy
- --add-ignore=D1
124 changes: 124 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,124 @@
# Upcoming

### Improvements
* Improved XML parsing for the Bruker format. [PR #267](https://github.com/catalystneuro/roiextractors/pull/267)


# v0.5.5

### Features

* Updated `Suite2pSegmentationExtractor` to support multi channel and multi plane data. [PR #242](https://github.com/catalystneuro/roiextractors/pull/242)
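
A minimal usage sketch for the multi channel and multi plane support described in the entry above. The `channel_name`/`plane_name` keyword arguments and the `get_available_channels`/`get_available_planes` helpers are assumptions inferred from this entry, not a verified API.

```python
from roiextractors import Suite2pSegmentationExtractor

folder_path = "path/to/suite2p"  # hypothetical Suite2p output folder with several planes/channels

# Assumed helpers for discovering what the folder contains; names may differ.
channel_names = Suite2pSegmentationExtractor.get_available_channels(folder_path=folder_path)
plane_names = Suite2pSegmentationExtractor.get_available_planes(folder_path=folder_path)

# One extractor instance per (channel, plane) combination.
extractor = Suite2pSegmentationExtractor(
    folder_path=folder_path,
    channel_name=channel_names[0],
    plane_name=plane_names[0],
)
print(extractor.get_num_rois(), extractor.get_num_frames())
```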

### Fixes

* Fixed `MicroManagerTiffImagingExtractor` private extractor's dtype to not override the parent's dtype. [PR #257](https://github.com/catalystneuro/roiextractors/pull/257)
* Fixed override of `channel_name` in `Suite2pSegmentationExtractor`. [PR #263](https://github.com/catalystneuro/roiextractors/pull/263)


# v0.5.4

### Features

* Added volumetric and multi-channel support for Bruker format. [PR #230](https://github.com/catalystneuro/roiextractors/pull/230)



# v0.5.3

### Features

* Added support for Miniscope AVI files with the `MiniscopeImagingExtractor`. [PR #225](https://github.com/catalystneuro/roiextractors/pull/225)



# v0.5.2

### Features

* Added support for MicroManager TIFF files with the `MicroManagerTiffImagingExtractor`. [PR #222](https://github.com/catalystneuro/roiextractors/pull/222)

* Added support for Bruker TIFF files with the `BrukerTiffImagingExtractor`. [PR #220](https://github.com/catalystneuro/roiextractors/pull/220)
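
A hedged sketch of how these new imaging extractors are used through the shared `ImagingExtractor` interface; the `folder_path` argument and the example path are assumptions based on this entry.

```python
from roiextractors import MicroManagerTiffImagingExtractor

# Assumed to accept the acquisition folder; adjust to the actual signature.
extractor = MicroManagerTiffImagingExtractor(folder_path="path/to/micromanager_acquisition")

# Common ImagingExtractor API: basic metadata and a short clip of frames.
print(extractor.get_num_frames(), extractor.get_image_size(), extractor.get_sampling_frequency())
clip = extractor.get_video(start_frame=0, end_frame=10)  # array of shape (frames, rows, columns)
```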



# v0.5.1

### Features

* Added a `has_time_vector` function for ImagingExtractors and SegmentationExtractors, similar to the SpikeInterface API for detecting if timestamps have been set. [PR #216](https://github.com/catalystneuro/roiextractors/pull/216)
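
A short sketch of the timestamp check described above, mirroring the SpikeInterface-style API. The dummy extractor from `roiextractors.testing` is used purely as an assumed stand-in for a real extractor.

```python
import numpy as np
from roiextractors.testing import generate_dummy_imaging_extractor  # assumed testing helper

imaging = generate_dummy_imaging_extractor(num_frames=100, sampling_frequency=30.0)

print(imaging.has_time_vector())  # False: no explicit timestamps attached yet
imaging.set_times(np.arange(imaging.get_num_frames()) / 30.0 + 5.0)  # e.g. a 5 s offset
print(imaging.has_time_vector())  # True once a time vector has been set
```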

### Fixes

* Fixed two issues with the `SubFrameSegmentation` class: (i) attempting to set the private attribute `_image_masks` even when this was not present in the parent, and (ii) not calling the parent function for `get_pixel_masks` and instead using the base method even in cases where this had been overridden by the parent. [PR #215](https://github.com/catalystneuro/roiextractors/pull/215)



# v0.5.0

### Back-compatibility break
* The orientation of traces in all `SegmentationExtractor`s has been standardized to have time (frames) as the first axis, and ROIs as the final axis. [PR #200](https://github.com/catalystneuro/roiextractors/pull/200)
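
An illustration of the standardized orientation; the shapes follow the description above (time first, ROIs last), with a dummy extractor used as an assumed stand-in.

```python
from roiextractors.testing import generate_dummy_segmentation_extractor  # assumed testing helper

segmentation = generate_dummy_segmentation_extractor(num_rois=12, num_frames=500)

traces = segmentation.get_traces(name="raw")
# Standardized orientation: frames along the first axis, ROIs along the last axis.
assert traces.shape == (500, 12)
```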

### Features
* Add support for newer versions of EXTRACT output files. [PR #170](https://github.com/catalystneuro/roiextractors/pull/170)
The `ExtractSegmentationExtractor` class is now abstract and redirects to the newer or older
extractor depending on the version of the file. [PR #170](https://github.com/catalystneuro/roiextractors/pull/170)
* The `ExtractSegmentationExtractor.write_segmentation` method has now been deprecated. [PR #170](https://github.com/catalystneuro/roiextractors/pull/170)

### Improvements
* Add `frame_to_time` to `SegmentationExtractor`; `get_roi_ids` is now a class method. [PR #187](https://github.com/catalystneuro/roiextractors/pull/187)
* Add `set_times` to `SegmentationExtractor`. [PR #188](https://github.com/catalystneuro/roiextractors/pull/188)
* Updated the test for segmentation images to check all images for the given segmentation extractors. [PR #190](https://github.com/catalystneuro/roiextractors/pull/190)
* Refactored the `NwbSegmentationExtractor` to be more flexible with segmentation images and keep up
with the change in [catalystneuro/neuroconv#41](https://github.com/catalystneuro/neuroconv/pull/41)
of trace names. [PR #191](https://github.com/catalystneuro/roiextractors/pull/191)
* Implemented a more efficient case of the base `ImagingExtractor.get_frames` through `get_video` when the indices are contiguous. [PR #195](https://github.com/catalystneuro/neuroconv/pull/195)
* Removed `max_frame` check on `MultiImagingExtractor.get_video()` to adhere to upper-bound slicing semantics. [PR #195](https://github.com/catalystneuro/neuroconv/pull/195)
* Improved the `MultiImagingExtractor.get_video()` to no longer rely on `get_frames`. [PR #195](https://github.com/catalystneuro/neuroconv/pull/195)
* Added `dtype` consistency check across `MultiImaging` components as well as a direct override method. [PR #195](https://github.com/catalystneuro/neuroconv/pull/195)
* Added the `FrameSliceSegmentationExtractor` class and corresponding `Segmentation.frame_slice(...)` method (see the sketch after this list). [PR #201](https://github.com/catalystneuro/neuroconv/pull/201)
* Changed the `output_struct_name` argument to optional in `ExtractSegmentationExtractor`
to allow more flexible usage for the user and a better error message when it cannot be found in the file.
For consistency, the `output_struct_name` argument has also been added to the legacy extractor.
The orientation of segmentation images is transposed for consistency in image orientation (height x width). [PR #210](https://github.com/catalystneuro/roiextractors/pull/210)
* Relaxed rounding of `ImagingExtractor.frame_to_time(...)` and `SegmentationExtractor.frame_to_time(...)` to be more consistent with SpikeInterface. [PR #212](https://github.com/catalystneuro/roiextractors/pull/212)
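
Regarding the `frame_slice(...)` bullet above, a minimal sketch of lazily restricting a segmentation extractor in time; the keyword names follow the `start_frame`/`end_frame` convention used elsewhere in this changelog and may differ slightly.

```python
from roiextractors.testing import generate_dummy_segmentation_extractor  # assumed testing helper

segmentation = generate_dummy_segmentation_extractor(num_rois=10, num_frames=1000)

# Lazily restrict the extractor to frames [100, 200); returns a frame-sliced view rather than a copy.
sub_segmentation = segmentation.frame_slice(start_frame=100, end_frame=200)
assert sub_segmentation.get_num_frames() == 100
```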

### Fixes
* Fixed the reference to the proper `mov_field` in `Hdf5ImagingExtractor`. [PR #195](https://github.com/catalystneuro/neuroconv/pull/195)
* Updated the name of the ROICentroids column for the `NwbSegmentationExtractor` to be up-to-date with NeuroConv v0.2.0 `write_segmentation`. [PR #208](https://github.com/catalystneuro/roiextractors/pull/208)
* Updated the trace orientation for the `NwbSegmentationExtractor`. [PR #208](https://github.com/catalystneuro/roiextractors/pull/208)



# v0.4.18

### Improvements
* `get_video` is now an abstract method in `ImagingExtractor` [PR #180](https://github.com/catalystneuro/roiextractors/pull/180)

### Features
* Add dummy segmentation extractor [PR #176](https://github.com/catalystneuro/roiextractors/pull/176)

### Testing
* Added unit tests for the `get_frames` method of `ImagingExtractor`s to assert that it is consistent with NumPy
indexing behavior (see the sketch after this list). [PR #154](https://github.com/catalystneuro/roiextractors/pull/154)
* Tests for SpikeInterface-like behavior of the `get_video` function. [PR #181](https://github.com/catalystneuro/roiextractors/pull/181)
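
A hedged sketch of the kind of consistency check those tests perform: `get_frames` with a list of indices should agree with plain NumPy fancy indexing of the full video.

```python
import numpy as np
from roiextractors.testing import generate_dummy_imaging_extractor  # assumed testing helper

imaging = generate_dummy_imaging_extractor(num_frames=50, num_rows=8, num_columns=8)

full_video = imaging.get_video()  # shape: (num_frames, rows, columns)
frame_idxs = [0, 3, 7, 49]
frames = imaging.get_frames(frame_idxs=frame_idxs)

# The extractor's selection should match NumPy indexing semantics.
np.testing.assert_array_equal(frames, full_video[frame_idxs])
```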



# v0.4.17

### Deprecations
- The Suite2p argument has become `folder_path` instead of `file_path`; the `file_path` deprecation is scheduled for August or later.

### Documentation
- Improved docstrings across many extractors.

### Features
- Adds MultiImagingExtractor for combining multiple imaging extractors (see the sketch after this list).
- Adds ScanImageTiffExtractor for reading .tiff files output from ScanImage.
- Adds NumpyImagingExtractor for extracting raw video data as memmaps.
- Added frame slicing capabilities for imaging extractors.
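
A minimal sketch of combining several extractors into one longer recording with `MultiImagingExtractor`; the dummy extractors stand in for real per-file extractors, and the `imaging_extractors` keyword is an assumption based on this entry.

```python
from roiextractors import MultiImagingExtractor
from roiextractors.testing import generate_dummy_imaging_extractor  # assumed testing helper

# Three consecutive segments of the same session, e.g. one extractor per file.
segments = [generate_dummy_imaging_extractor(num_frames=100, sampling_frequency=30.0) for _ in range(3)]

combined = MultiImagingExtractor(imaging_extractors=segments)
assert combined.get_num_frames() == 300  # segments are concatenated along the time axis
```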

### Testing
- Added checks that all returned sampling frequencies are floats.
- Round-trip testing works for all extractors that have a working write method.
2 changes: 1 addition & 1 deletion README.md
@@ -1,5 +1,5 @@
[![PyPI version](https://badge.fury.io/py/roiextractors.svg)](https://badge.fury.io/py/roiextractors)
![Full Tests](https://github.com/catalystneuro/roiextractors/actions/workflows/testing.yml/badge.svg)
![Full Tests](https://github.com/catalystneuro/roiextractors/actions/workflows/run-tests.yml/badge.svg)
![Auto-release](https://github.com/catalystneuro/roiextractors/actions/workflows/auto-publish.yml/badge.svg)
[![codecov](https://codecov.io/github/catalystneuro/roiextractors/coverage.svg?branch=master)](https://codecov.io/github/catalystneuro/roiextractors?branch=master)
[![documentation](https://readthedocs.org/projects/roiextractors/badge/?version=latest)](https://roiextractors.readthedocs.io/en/latest/)