memory leaks in hcubature.c? #7
Test case:

```julia
using Cubature

function f(x::Array{Float64, 2}, v::Array{Float64, 1})
    for i in 1:length(v)
        v[i] = exp(-x[1,i]^2 - x[2,i]^2)
    end
end

while true
    hcubature_v(f, (-1, +1), (-1, +1))
end
```

This grows rapidly in memory consumption when used with the original libcubature.dylib that ships with Cubature.jl. Recompiling libcubature.dylib with the change described in the original report (replacing the zero-resize with an explicit free) leads to constant memory usage.
Ah, interesting, there appears to be a difference between Linux …
Should be fixed now.
Thanks! I would like to ask a (mostly) unrelated question: I'm using cubature to pixelize a continuous distribution, which is why I'm integrating many times over many pixels. I don't suppose there is an easy way to create `_vn` versions of cubature that integrate over many integration domains, reusing already-allocated resources?
I don't understand. Why would there be reuse over multiple integration domains?
I'm sorry, I didn't explain myself very well there. Let me try with code. What I have is something along the lines of

```julia
# integrate the m*n pixels of my image one by one
for j in 1:m, i in 1:n
    img[i,j], err[i,j] = hcubature_v(f, (i - 0.5, j - 0.5), (i + 0.5, j + 0.5))
end
```

I see that this results in a large number of internal memory allocations, and thought there might be a way around those. Something more like

```julia
lo = reshape([ (i - 0.5, j - 0.5) for j in 1:m, i in 1:n ], m*n)
hi = reshape([ (i + 0.5, j + 0.5) for j in 1:m, i in 1:n ], m*n)
# integrate all m*n pixels of my image at once
img, err = hcubature_vn(f, lo, hi)
```
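A batched driver along those lines could be sketched as follows. This is a hypothetical illustration in C (the language of the library core), not Cubature API: `integrate_one`, `integrate_batch`, and the midpoint placeholder are all made-up names, and a real implementation would call the adaptive routine while reusing one internal workspace across domains.

```c
#include <stddef.h>

/* Hypothetical sketch of a batched driver: integrate f over many boxes,
 * writing results into caller-provided arrays so that per-call allocation
 * can be amortized across all domains. */
typedef double (*integrand2)(double x, double y, void *data);

static double integrate_one(integrand2 f, void *data,
                            const double lo[2], const double hi[2]) {
    /* placeholder: midpoint rule times area, NOT the real adaptive algorithm */
    double mx = 0.5 * (lo[0] + hi[0]), my = 0.5 * (lo[1] + hi[1]);
    return f(mx, my, data) * (hi[0] - lo[0]) * (hi[1] - lo[1]);
}

/* example integrand: f(x,y) = 1, so each integral equals the box area */
static double one(double x, double y, void *data) {
    (void)x; (void)y; (void)data;
    return 1.0;
}

static void integrate_batch(integrand2 f, void *data, size_t n,
                            double (*lo)[2], double (*hi)[2], double *out) {
    for (size_t k = 0; k < n; ++k)
        out[k] = integrate_one(f, data, lo[k], hi[k]);
}
```

The point of the sketch is only the calling convention: flat `lo`/`hi`/`out` arrays owned by the caller, one workspace inside.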
For this sort of problem, I would typically just use a fixed-order cubature rule (a product of Gauss quadrature rules along x and y, for example) … do you really need adaptive integration over each pixel? Especially if … The thing is, the innermost loops here are all in C (and/or calls to …).
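The fixed-order product rule suggested above can be sketched in C. This is a standalone illustration, not part of the cubature library; `pixel_integral` is a made-up helper, and the constants are the standard 3-point Gauss-Legendre nodes and weights on [-1,1].

```c
#include <math.h>

/* 3-point Gauss-Legendre nodes and weights on [-1,1]:
 * nodes ±sqrt(3/5) and 0, weights 5/9, 8/9, 5/9 (exact to degree 5). */
static const double gx[3] = { -0.7745966692414834, 0.0, 0.7745966692414834 };
static const double gw[3] = { 5.0/9.0, 8.0/9.0, 5.0/9.0 };

/* Tensor-product rule over one pixel [xlo,xhi] x [ylo,yhi]. */
static double pixel_integral(double (*f)(double, double),
                             double xlo, double xhi,
                             double ylo, double yhi) {
    double sx = 0.5 * (xhi - xlo), cx = 0.5 * (xhi + xlo);
    double sy = 0.5 * (yhi - ylo), cy = 0.5 * (yhi + ylo);
    double sum = 0.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            /* map nodes from [-1,1] onto the pixel and accumulate */
            sum += gw[i] * gw[j] * f(sx * gx[i] + cx, sy * gx[j] + cy);
    return sum * sx * sy;   /* Jacobian of the affine map */
}

/* example integrand: f(x,y) = x*y, which the rule integrates exactly */
static double xy(double x, double y) { return x * y; }
```

A fixed rule like this costs exactly 9 evaluations per pixel with no allocation at all, which is why it was suggested as the first thing to try before an adaptive scheme.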
It’s definitely not a big fraction of the total time spent, but I am trying to save whatever I can. Unfortunately, I have a highly unpredictable integrand with irregular and pointy peaks inside the pixels, and I am trying to work around this. My best bet so far is to use an adaptive strategy and limit the number of evaluations, as all fixed rules fail quite miserably. Together with moderate error tolerances, this seems to not waste much time on the well-behaved pixels and gets the problematic pixels more or less right. Anyway, that’s far off-topic. Thanks again for fixing this issue, it all works smoothly now.
Hey, …
@ChrisRackauckas, the … One possible improvement would be to allow an upper bound on the parallelism, to prevent it from trying to evaluate too many regions at once.
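The suggested cap on parallelism could be sketched like this. Purely hypothetical and not the library's actual code: `eval_in_chunks` and `cap` are invented names, standing in for the idea of evaluating the queued points in bounded batches instead of all at once.

```c
#include <stddef.h>

/* Hypothetical: evaluate `npoints` queued evaluation points in chunks of
 * at most `cap`, bounding peak memory, instead of one giant batch.
 * Returns the number of chunked calls made (eval_chunk may be NULL to
 * just count them). */
static size_t eval_in_chunks(size_t npoints, size_t cap,
                             void (*eval_chunk)(size_t offset, size_t count)) {
    size_t ncalls = 0;
    for (size_t off = 0; off < npoints; off += cap) {
        size_t count = npoints - off < cap ? npoints - off : cap;
        if (eval_chunk)
            eval_chunk(off, count);   /* evaluate points [off, off+count) */
        ++ncalls;
    }
    return ncalls;
}
```

The trade-off is the usual one: a smaller `cap` bounds the size of the evaluation buffers at the cost of more callback round-trips.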
Alright, thanks. I also think the cubature methods were not a good fit for my problem, so it was more problem-related than solver-related. Sorry about that.
I was running `hcubature_v` many, many times in a row without, in theory, allocating any new space, and it ended up filling 32 GB of RAM. I am not sure whether this is a cubature thing or a Julia thing (I'm still testing), but it seems that replacing the zero-resize in hcubature.c with an explicit free fixes at least a large part of the memory issue.

I'm on Mac OS X Yosemite, and I checked that the same behaviour occurs with your provided library and with a self-built one.
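The zero-resize pattern mentioned above can be illustrated like this. This is a hedged sketch, not the actual hcubature.c code: the `heap` struct and both helper names are invented, and only the contrast between the two cleanup styles is the point.

```c
#include <stdlib.h>

typedef struct {
    double *items;     /* region/point storage */
    size_t n, nalloc;
} heap;

/* Zero-resize cleanup: realloc(p, 0) has implementation-defined behavior
 * and may return a small live allocation rather than releasing the block,
 * so repeated create/destroy cycles can keep memory pinned. */
static void heap_clear_resize(heap *h) {
    h->items = realloc(h->items, 0);
    h->n = h->nalloc = 0;
}

/* Explicit-free cleanup: unambiguously releases the block back to the
 * allocator, which is the change reported to fix the growth here. */
static void heap_clear_free(heap *h) {
    free(h->items);
    h->items = NULL;
    h->n = h->nalloc = 0;
}
```

With the explicit `free`, each integration call leaves no live allocation behind, which matches the constant memory usage observed after the fix.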