
memory leaks in hcubature.c ? #7

Closed
ntessore opened this issue Nov 7, 2014 · 11 comments


ntessore commented Nov 7, 2014

I was running hcubature_v many, many times in a row, which in theory should not allocate any new space, yet it ended up filling 32 GB of RAM. I am not sure whether this is a cubature thing or a Julia thing (I'm still testing), but it seems that replacing the zero-resize in

static void heap_free(heap *h)
{
     h->n = 0;
     heap_resize(h, 0);
     h->fdim = 0;
     free(h->ee);
}

with an explicit free

static void heap_free(heap *h)
{
     h->n = 0;
     h->nalloc = 0;
     free(h->items);
     h->items = NULL;
     h->fdim = 0;
     free(h->ee);
}

fixes at least a large part of the memory issue.

I'm on Mac OS X Yosemite and checked that the same behaviour occurs with your provided library and with a self-built one.


ntessore commented Nov 7, 2014

Test case:

using Cubature

function f(x::Array{Float64, 2}, v::Array{Float64, 1})
    for i in 1:length(v)
        v[i] = exp(-x[1,i]^2-x[2,i]^2)
    end
end

while true
    hcubature_v(f, (-1, +1), (-1, +1))
end

will grow rapidly in memory consumption when used with the original libcubature.dylib that's shipped with Cubature.jl.

Recompiling libcubature.dylib with the above change leads to constant memory usage.


stevengj commented Nov 7, 2014

Ah, interesting, there appears to be a difference between Linux realloc (which says that a zero-size realloc is equivalent to free) and BSD realloc (which says that a zero-size realloc allocates a "minimum sized object" while the "original object is freed").

stevengj added the bug label Nov 7, 2014

stevengj commented Nov 7, 2014

Should be fixed now.


ntessore commented Nov 7, 2014

Thanks! I would like to ask a (mostly) unrelated question: I'm using cubature to pixelize a continuous distribution, which is why I'm integrating many times over many pixels. I don't suppose there is an easy way to create _vn versions of cubature that integrate over many integration domains, reusing already allocated resources?


stevengj commented Nov 7, 2014

I don't understand. Why would there be reuse over multiple integration domains?


ntessore commented Nov 7, 2014

I'm sorry, I didn't explain myself very well there. Let me try with code.

What I have is something along the lines of

# integrate m*n pixels of my image one by one
for j in 1:m, i in 1:n
    img[i,j], err[i,j] = hcubature_v(f, (i - 0.5, j - 0.5), (i + 0.5, j + 0.5))
end

I see that this results in a large number of internal memory allocations, and thought there might be a way around those. Something more like

lo = reshape([ (i - 0.5, j - 0.5) for j in 1:m, i in 1:n ], m*n)
hi = reshape([ (i + 0.5, j + 0.5) for j in 1:m, i in 1:n ], m*n)

# integrate m*n pixels of my image all at once
img, err = hcubature_vn(f, lo, hi)


stevengj commented Nov 7, 2014

For this sort of problem, I would typically just use a fixed-order cubature rule (a product of Gauss quadrature rules along x and y, for example) ... do you really need adaptive integration over each pixel? Especially if f is smooth and not too crazy within a pixel.

The thing is, the innermost loops here are all in C (and/or calls to f), so I'm not convinced the memory allocations to set up the cubature over each pixel are the limiting factor.
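For concreteness, here is a rough sketch of the kind of thing I mean, with hard-coded 5-point Gauss-Legendre nodes and weights (this is only an illustration, not anything shipped with the package):

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
const glx = [-0.9061798459386640, -0.5384693101056831, 0.0,
              0.5384693101056831,  0.9061798459386640]
const glw = [ 0.2369268850561891,  0.4786286704993665, 0.5688888888888889,
              0.4786286704993665,  0.2369268850561891]

# fixed tensor-product rule for g(x, y) over the pixel [xlo, xhi] x [ylo, yhi]
function pixel_integral(g, xlo, xhi, ylo, yhi)
    s = 0.0
    for i in 1:5, j in 1:5
        # map the reference nodes from [-1, 1] to the pixel
        x = 0.5 * ((xhi + xlo) + (xhi - xlo) * glx[i])
        y = 0.5 * ((ylo + yhi) + (yhi - ylo) * glx[j])
        s += glw[i] * glw[j] * g(x, y)
    end
    # Jacobian of the affine change of variables
    return 0.25 * (xhi - xlo) * (yhi - ylo) * s
end

# e.g., for the test integrand above:
# img[i,j] = pixel_integral((x, y) -> exp(-x^2 - y^2), i - 0.5, i + 0.5, j - 0.5, j + 0.5)

That is 25 evaluations per pixel, with no per-pixel setup or allocation.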


ntessore commented Nov 7, 2014

It’s definitely not a big fraction of the total time spent, but I am trying to save whatever I can.

Unfortunately, I have a highly unpredictable integrand with irregular and pointy peaks inside the pixels, and I am trying to work around this. My best bet so far is to use an adaptive strategy and limit the number of evaluations, as all fixed rules fail quite miserably. Together with moderate error tolerances, this seems to not waste much time on the well-behaved pixels and get the problematic pixels more or less right.
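Concretely, the per-pixel loop from before, but with a moderate tolerance and an evaluation cap (the values are only illustrative, and this assumes the reltol and maxevals keywords of Cubature.jl):

# adaptive integration per pixel, with a moderate tolerance and a hard
# cap on evaluations so a badly behaved pixel cannot run away
for j in 1:m, i in 1:n
    img[i,j], err[i,j] = hcubature_v(f, (i - 0.5, j - 0.5), (i + 0.5, j + 0.5),
                                     reltol=1e-3, maxevals=10000)
end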

Anyway, that’s far OT. Thanks again for fixing this issue, it all works smoothly now.

ChrisRackauckas commented

Hey,
I am getting massive memory usage as well with hcubature_v. I am running this on Windows. I don't know if it's a memory leak, but the memory usage ramps up to over 60 GB on a large problem, while the non-vectorized version is fine. How does Cubature choose the size of the vectors?


stevengj commented Feb 4, 2016

@ChrisRackauckas, the hcubature_v routine maximizes parallelism, so it allocates memory for as many subregions as can be evaluated simultaneously. At high resolutions or in high dimensions, this can be quite large.

One possible improvement would be to allow an upper bound on the parallelism, to prevent it from trying to evaluate too many regions at once.

ChrisRackauckas commented

Alright, thanks. I also think the cubature methods were not well suited to my problem, so it was more problem-related than solver-related. Sorry about that.

stevengj added a commit to stevengj/cubature that referenced this issue Jul 20, 2017