Memory consumption growth with repeated meshing, especially with Gmsh #298

Closed
fipymigrate opened this issue Sep 19, 2014 · 6 comments

@fipymigrate

I'm currently experiencing growing memory usage with FiPy-2.2_dev5097 when repeatedly generating meshes (with Gmsh or as gridded meshes) and solving problems on them. For 3D gridded meshes, the growth is gradual:

```python
from fipy import Grid3D, CellVariable, DiffusionTerm
import resource


for i in range(200):
    print "Iter:", i
    print "MaxRSS:", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    mesh = Grid3D(nx=50, ny=50, nz=50, dx=0.5, dy=0.5, dz=0.5)
    phi = CellVariable(name="solution variable", mesh=mesh, value=0.)
    D = 1.
    phi.equation = -DiffusionTerm(coeff=D)

    phi.constrain(1., where=mesh.getFacesLeft())
    phi.constrain(-1., where=mesh.getFacesRight())

    phi.equation.solve(var=phi)
```

Memory use grows from the following on the first iteration:
```
Iter: 0
MaxRSS: 44640
```

to the following on the last iteration:

```
Iter: 199
MaxRSS: 1511112
```

(this is about 1.5 GB)

The growth is faster and more extreme with GmshGrid3D, which reaches the same allocation level in about 5 iterations with the same parameters:

```python
from fipy import GmshGrid3D, CellVariable, DiffusionTerm
import resource


for i in range(200):
    print "Iter:", i
    print "MaxRSS:", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    mesh = GmshGrid3D(nx=50, ny=50, nz=50, dx=0.5, dy=0.5, dz=0.5)
    phi = CellVariable(name="solution variable", mesh=mesh, value=0.)
    D = 1.
    phi.equation = -DiffusionTerm(coeff=D)

    phi.constrain(1., where=mesh.getFacesLeft())
    phi.constrain(-1., where=mesh.getFacesRight())

    phi.equation.solve(var=phi)
```

```
Iter: 0
MaxRSS: 44656

Iter: 5
MaxRSS: 1587540

Iter: 10
MaxRSS: 2726772

Iter: 15
MaxRSS: 3847696

Iter: 20
MaxRSS: 4662136

Iter: 25
MaxRSS: 5347572
```

(killed after that point; eventually Gmsh fails to run at all, returning a null version)

I see similar behavior on Ubuntu 11.04 (GNU/Linux 2.6.38-13-server x86_64) and Mac OS X 10.7 (Darwin Kernel Version 11.2.0, x86_64); the numbers above are from Linux. I haven't gone further to narrow down which section might be leaking objects that aren't GC'able, but if there are recommendations, I'd be happy to try to track this down further.

It's possible that there is more than one bug or leak, but the fastest rate of growth is certainly associated with Gmsh-generated meshes.
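One general way to narrow down growth like this is to compare allocation snapshots across iterations with the standard-library `tracemalloc` module (Python 3.4+, so it postdates this report). In this sketch, `make_mesh` is a hypothetical stand-in for the suspect object construction, not a FiPy API:

```python
import tracemalloc

def make_mesh():
    # placeholder for e.g. GmshGrid3D(...); here just a large throwaway list
    return [0.0] * 100000

tracemalloc.start()
snapshot1 = tracemalloc.take_snapshot()

meshes = []
for i in range(10):
    meshes.append(make_mesh())   # simulate objects that are never freed

snapshot2 = tracemalloc.take_snapshot()
for stat in snapshot2.compare_to(snapshot1, 'lineno')[:3]:
    print(stat)   # top allocation sites, largest growth first
```

The top entries of the comparison point at the source lines whose allocations grew the most between snapshots, which helps separate a genuine leak from expected working-set growth.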

Imported from trac ticket #417, created by jbsnyder on 01-03-2012 at 00:20, last modified: 01-10-2012 at 16:26

@fipymigrate fipymigrate added this to the 3.0 milestone Sep 19, 2014
@fipymigrate
Author

To add to this, the Gmsh-related growth occurs even without solving an equation on the mesh; merely accessing the cell centers to force the mesh to be built is enough:

```python
from fipy import GmshGrid3D, Grid3D, CellVariable, DiffusionTerm
import resource


for i in range(200):
    print "Iter:", i
    print "MaxRSS:", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    mesh = GmshGrid3D(nx=50, ny=50, nz=50, dx=0.5, dy=0.5, dz=0.5)
    mesh.cellCenters
```

Still yielding rather rapid memory growth:

```
Iter: 0
MaxRSS: 44652
Iter: 5
MaxRSS: 1497424
Iter: 10
MaxRSS: 2635072
```

This is not seen with Grid3D at all:

```
Iter: 0
MaxRSS: 44652
Iter: 5
MaxRSS: 60428
Iter: 10
MaxRSS: 60448
Iter: 199
MaxRSS: 60516
```

Trac comment by jbsnyder on 01-03-2012 at 00:30

@guyer
Member

guyer commented Sep 19, 2014

I can reproduce the problem here, too. Thanks for the clear diagnostics. We will investigate.

Trac comment by guyer on 01-03-2012 at 14:09

@fipymigrate
Author

Just to add another note: the issue also occurs with GmshGrid2D (and it's certainly not specific to the grid classes, since I originally observed this with arbitrary geometries used with Gmsh3D and was just looking for simplified test cases). The mesh.cellCenters call shown in the previous example is also not necessary:

```python
from fipy import GmshGrid2D
import resource

for i in range(200):
    print "Iter:", i
    print "MaxRSS:", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    mesh = GmshGrid2D(nx=50, ny=50, dx=0.5, dy=0.5)
```

Example output:

```
Iter: 0
MaxRSS: 44648
Iter: 10
MaxRSS: 74164
Iter: 20
MaxRSS: 100552
```

Trac comment by jbsnyder on 01-03-2012 at 17:01

@fipymigrate
Author

I'll add a bit more data. I've been trying some of the memory profiling tools available for Python and have a few notes from pympler (http://code.google.com/p/pympler/), running the following code and alternately commenting out the first or second import line:

```python
from fipy import GmshGrid2D as Grid2D
#from fipy import Grid2D
from pympler.classtracker import ClassTracker

tracker = ClassTracker()

tracker.create_snapshot()

for i in range(4):
    mesh = Grid2D(nx=50, ny=50, dx=0.5, dy=0.5)
    tracker.track_object(mesh)
    tracker.create_snapshot()
    del mesh

tracker.stats.print_summary()
```

I get the following object counts for GmshGrid2D and UniformGrid2D:

```
mech-90-251:~> python test_pympler_gmsh.py
---- SUMMARY ------------------------------------------------------------------
                                         active      0     B      average   pct
  GmshGrid2D                                  0      0     B      0     B    0%
                                         active      0     B      average   pct
  GmshGrid2D                                  1    122.87 KB    122.87 KB    0%
                                         active      0     B      average   pct
  GmshGrid2D                                  2    235.21 KB    117.61 KB    0%
                                         active      0     B      average   pct
  GmshGrid2D                                  3    347.55 KB    115.85 KB    0%
                                         active      0     B      average   pct
  GmshGrid2D                                  4    459.90 KB    114.97 KB    0%
-------------------------------------------------------------------------------
mech-90-251:~> python test_pympler_ugrid.py
---- SUMMARY ------------------------------------------------------------------
                                         active      0     B      average   pct
  UniformGrid2D                               0      0     B      0     B    0%
                                         active      0     B      average   pct
  UniformGrid2D                               1      8.47 KB      8.47 KB    0%
                                         active      0     B      average   pct
  UniformGrid2D                               1      8.47 KB      8.47 KB    0%
                                         active      0     B      average   pct
  UniformGrid2D                               1      8.47 KB      8.47 KB    0%
                                         active      0     B      average   pct
  UniformGrid2D                               1      8.47 KB      8.47 KB    0%
-------------------------------------------------------------------------------
```

Looking a little further, it appears the GmshGrid2D objects end up as uncollectable garbage in gc.garbage. I believe this is due to the __del__ destructors, which appear to have been added to clean up leftover mesh files. I may experiment a bit further to see if there's a simple fix and patch for this.

Trac comment by jbsnyder on 01-03-2012 at 19:08
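The uncollectable-garbage diagnosis above can be reproduced in miniature. Under Python 2 (current when this was filed), any object with a `__del__` method that is caught in a reference cycle is never freed and lands in `gc.garbage`. The sketch below uses `gc.DEBUG_SAVEALL` so the effect is also visible on modern Python 3, where PEP 442 otherwise collects such cycles; `Node` is a toy class, not FiPy code:

```python
import gc

gc.set_debug(gc.DEBUG_SAVEALL)   # keep everything the collector finds in gc.garbage

class Node(object):
    def __del__(self):
        pass   # toy stand-in for a finalizer like the mshFile cleanup

n = Node()
n.self_ref = n    # reference cycle: the object is reachable only through itself
del n

gc.collect()
print(any(isinstance(o, Node) for o in gc.garbage))
gc.set_debug(0)
```

On Python 2 the `DEBUG_SAVEALL` flag is unnecessary: the `__del__` method alone is what makes the cycle uncollectable there, which matches the behavior observed with GmshGrid2D.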

@fipymigrate
Author

The following change seems to stabilize the memory growth. I'm not sure whether it opens any holes for mesh files to be left behind, since it appears these destructors weren't being called before anyway:

```diff
Index: fipy/meshes/gmshImport.py
===================================================================
--- fipy/meshes/gmshImport.py   (revision 5097)
+++ fipy/meshes/gmshImport.py   (working copy)
@@ -1637,11 +1637,6 @@
         return nx.arange(len(self.cellGlobalIDs) 
                          + len(self.gCellGlobalIDs))
 
-    def __del__(self):
-        # never gets called (circular references?)
-        if hasattr(self, "mshFile"):
-            del self.mshFile
-    
     def _test(self):
         """
         First, we'll test Gmsh2D on a small circle with triangular
@@ -1927,11 +1922,6 @@
         return nx.arange(len(self.cellGlobalIDs) 
                          + len(self.gCellGlobalIDs))
 
-    def __del__(self):
-        # never gets called (circular references?)
-        if hasattr(self, "mshFile"):
-            del self.mshFile
-
     def _test(self):
         """
         >>> prism = Gmsh3D('''
```

Trac comment by jbsnyder on 01-04-2012 at 14:34
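For context, a pattern that keeps this kind of temp-file cleanup without a `__del__` method is a weak-reference finalizer. This is only a sketch: `MeshWrapper` and `mshFilename` are illustrative names, and `weakref.finalize` is Python 3.4+, so it wasn't available to FiPy at the time. The point is that a finalizer registered this way fires even when the owning object sits in a reference cycle:

```python
import gc
import os
import tempfile
import weakref

class MeshWrapper(object):
    """Illustrative owner of a temporary .msh file (not FiPy's actual class)."""
    def __init__(self):
        fd, self.mshFilename = tempfile.mkstemp(suffix=".msh")
        os.close(fd)
        # The finalizer holds only a weak reference to self, so it neither
        # creates a cycle nor keeps the object alive; it runs when the object
        # is collected (or at interpreter exit, whichever comes first).
        self._finalizer = weakref.finalize(self, os.remove, self.mshFilename)

m = MeshWrapper()
path = m.mshFilename
del m
gc.collect()   # ensure collection on non-refcounting implementations
print(os.path.exists(path))   # the temp file is gone once m is collected
```

Because the callback receives only the filename string, it cannot resurrect the object or leak a reference back to it, which is exactly the trap the removed `__del__` methods fell into.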

@wd15
Contributor

wd15 commented Sep 19, 2014

Applied the suggested patch; it stops the memory leak and doesn't leave any tmp files behind when running the tests or the sample script. UniformGrid2D appears to leak memory when the del(mesh) statement is commented out; however, if the memory tracker is turned off, it no longer leaks. Closing this bug with r5106.

Trac comment by wd15 on 01-10-2012 at 16:26
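A plausible reading of the UniformGrid2D observation above is that the measurement tool itself pins the objects: any tracker holding strong references to what it measures keeps those objects alive until it releases them. A generic sketch of the effect follows, with a plain list standing in for the tracker; this is not a claim about pympler's actual internals:

```python
import gc

class Tracked(object):
    pass

registry = []          # stand-in for a measurement tool holding strong refs

obj = Tracked()
registry.append(obj)   # "tracking" the object pins it in memory
del obj
gc.collect()
still_alive = any(isinstance(o, Tracked) for o in registry)
print(still_alive)     # True: the local del alone cannot free the object

del registry[:]        # once the tracker lets go...
gc.collect()           # ...the object can finally be collected
```

This would explain why the apparent UniformGrid2D leak vanishes when the tracker is turned off.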
