No way of manipulating data tables in memory #273
Comments
Hi @roll, when you are back next week, can you please look at this?
Hi @as2875, could you please elaborate a little bit? You mean you don't want to write CSV files to disk?
Hi @roll. I mean that I don't want to write CSV files to disk, just the final zipped data package. Say I have some tables stored in Python data structures in memory. Rather than write the tables to CSV files and then call `package.save`, I would like to build the package from the in-memory tables directly.

The example is from converting multiple HDF5 files to Frictionless data packages. At the moment, I have to create a package, read in the HDF5 datasets, store them as 2-D lists, write the contents of the lists to CSV files, and call `package.save`.
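To make the round-trip concrete, here is a minimal sketch of the intermediate step described above, using only the standard library. The table contents and file names are illustrative, and the datapackage-py calls are only indicated in a comment rather than executed.

```python
import csv
import tempfile
from pathlib import Path

# In-memory tables, e.g. read from HDF5 datasets and stored as 2-D lists.
tables = {
    "temperatures": [["city", "celsius"], ["Oslo", "3"], ["Cairo", "28"]],
}

# The intermediate step the issue wants to avoid: each table must be
# written out as a CSV file on disk before the package can reference it.
out_dir = Path(tempfile.mkdtemp())
for name, rows in tables.items():
    with open(out_dir / f"{name}.csv", "w", newline="") as handle:
        csv.writer(handle).writerows(rows)

# Only at this point would datapackage-py take over, roughly:
#   package.infer(...); package.save("datapackage.zip")

content = (out_dir / "temperatures.csv").read_text()
print(content)
```

The CSV files exist solely to be zipped into the package a moment later, which is the redundancy the issue asks to remove.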
Thanks. I think it's not possible at the moment. I've marked it as a feature request.
Thanks @roll. This would make data conversion pipelines and parallel processing a lot simpler.
@as2875
Thanks for the suggestion, @roll.
MERGED into frictionlessdata/frictionless-py#439. More info about Frictionless Framework.
@lwinfree, @sje30

When converting to Frictionless from another data format (e.g. HDF5), scripts have to write the tables out as CSV files and then call `package.save`. It would be useful to skip the stage of writing CSV files to disk. If this functionality exists and I am missing something, please let me know; it would be very useful.
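For illustration, the shape of workflow the issue asks for can be sketched with the standard library alone: serialize each table into an in-memory buffer and write it straight into the zipped package, with no files on disk. This is not the datapackage-py API; the hand-rolled `datapackage.json` descriptor below merely stands in for what the library would normally infer.

```python
import csv
import io
import json
import zipfile

# In-memory tables stored as 2-D lists, as in the HDF5 conversion example.
tables = {
    "temperatures": [["city", "celsius"], ["Oslo", "3"], ["Cairo", "28"]],
}

# Serialize each table into an in-memory text buffer and add it straight
# to the zipped package; no CSV file ever touches the disk.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as archive:
    for name, rows in tables.items():
        text = io.StringIO()
        csv.writer(text).writerows(rows)
        archive.writestr(f"data/{name}.csv", text.getvalue())
    # A minimal hand-written descriptor, standing in for inference.
    descriptor = {
        "name": "example",
        "resources": [{"name": n, "path": f"data/{n}.csv"} for n in tables],
    }
    archive.writestr("datapackage.json", json.dumps(descriptor))

# Reopen the in-memory archive to show what the package contains.
with zipfile.ZipFile(io.BytesIO(buffer.getvalue())) as archive:
    names = archive.namelist()
print(names)
```

Supporting this pattern natively is what the feature request (and its continuation in frictionless-py) is about.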
Please preserve this line to notify @roll (lead of this repository)