Imagine you are trying to compute the power spectrum of a really long timeseries, so long that you don't want to put it all in memory. Currently xgcm / dask fail when you try to take an FFT over a chunked dimension:
```
ValueError: Dask array only supports taking an FFT along an axis that
has a single chunk. An FFT operation was tried on axis 0
which has chunks (100, 100, 100, 100, 100, 100, 100, 100, 100, 100). To change the array's chunks use dask.Array.rechunk.
```
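For context, here is a minimal sketch of a setup that triggers this error; the array length and chunk size are assumptions inferred from the chunks listed in the error message:

```python
import numpy as np
import xarray as xr
import dask.array as dsa

# a 1000-point timeseries stored as ten 100-point chunks,
# matching the chunks in the error message above
da = xr.DataArray(np.random.rand(1000), dims=['time']).chunk({'time': 100})

# dask refuses to FFT along a multi-chunk axis
dsa.fft.fft(da.data, axis=0)  # raises the ValueError above
```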
This makes sense if you care about the full Fourier transform. But often with long timeseries, we want to do some sort of tapering in which we average over multiple segments in order to get a less biased estimate of the power spectrum (like in Bartlett's method). In this case, the chunks provide a natural way to split up the full interval.
With the example above, we can do the following:
```python
# get raw data
data = da.data
# reshape so there is one chunk for each item on axis 0
data_rs = data.reshape((10, 100))
# transform along the last axis
dsa.fft.fft(data_rs, axis=1)
```
This works and returns a chunked dask array of shape (10, 100), one FFT per 100-point segment.
You could then apply Bartlett's method by averaging over axis 0.
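A minimal sketch of that averaging step, reusing `data_rs` from above (normalization and frequency coordinates are omitted for brevity):

```python
# periodogram of each 100-point segment
segments_fft = dsa.fft.fft(data_rs, axis=1)

# Bartlett's method: average the squared magnitudes over the segments
power = (abs(segments_fft) ** 2).mean(axis=0)
power.compute()  # 100-point power spectrum estimate
```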
In xrft, we could try to accommodate this by adding a new keyword, which would then return something like the segmented result sketched below.
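Purely as an illustration, the call might look like this; the keyword name `chunks_to_segments` and the output layout are my assumptions, not a settled API:

```python
import xrft

# hypothetical keyword (name is illustrative): treat each dask chunk
# as an independent segment, yielding a new segment dimension of
# length 10 alongside a frequency dimension of length 100
ps = xrft.dft(da, dim=['time'], chunks_to_segments=True)
```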
I think this would be highly useful!