I suggest we don't download the files from Synapse, at least in the first version, as we will spend a lot of time staging data.
The HTAN Tower workspace is set up to access S3 URIs from the HTAN S3 buckets directly. (I think it works with the Google buckets as well.)
When preparing the samplesheet we can get the S3 URIs from the dataFileBucket and dataFileKey columns in Synapse fileviews.
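As a rough sketch of that step (not pipeline code; the fileview ID syn12345678, the selected columns other than dataFileBucket and dataFileKey, and the output layout are placeholders), the samplesheet could be assembled with synapseclient and pandas:

```python
import pandas as pd
import synapseclient

syn = synapseclient.Synapse()
syn.login()  # cached credentials or SYNAPSE_AUTH_TOKEN

# Query the fileview (placeholder ID) for the bucket and key of each file.
view = syn.tableQuery(
    "SELECT id, name, dataFileBucket, dataFileKey FROM syn12345678"
)
df = view.asDataFrame()

def to_uri(row):
    # Prefer a direct s3:// URI so no Synapse download is needed; fall back
    # to the Synapse ID when the bucket/key columns are empty.
    if pd.notna(row["dataFileBucket"]) and pd.notna(row["dataFileKey"]):
        return f"s3://{row['dataFileBucket']}/{row['dataFileKey']}"
    return row["id"]

df["uri"] = df.apply(to_uri, axis=1)
df[["id", "name", "uri"]].to_csv("samplesheet.csv", index=False)
```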
We can always add Synapse downloads back in due course. I have a pattern to do this here, where you can mix Synapse IDs and S3 URIs:
https://github.com/ncihtan/nf-imagecleaner/blob/main/subworkflows/samplesheet_split.nf
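For what it's worth, here is a minimal sketch of that mixed-input idea. The linked subworkflow is Nextflow; this just illustrates the same split in Python, and the resolve helper is hypothetical:

```python
import re
import synapseclient

SYN_ID = re.compile(r"^syn\d+$")

def resolve(entry: str, syn: synapseclient.Synapse) -> str:
    """Pass s3:// URIs through for direct staging; download Synapse IDs."""
    if entry.startswith("s3://"):
        return entry                  # staged directly by Nextflow/Tower
    if SYN_ID.match(entry):
        return syn.get(entry).path    # falls back to a Synapse download
    raise ValueError(f"Unrecognised samplesheet entry: {entry}")
```

That way a single samplesheet column can carry either form, and the workflow only pays the download cost for entries that are not already on S3.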