creating pngs in background mode leaks memory #1397
@dlonie any idea why the memory could grow when bg=True? The renWin is different, obviously. Could it be VTK related?
I'm a bit confused by this. In any case, it doesn't look like VTK is holding onto anything in offscreen mode.
Looks like the leaks are on the VCS side of things.
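(For reference, a minimal standalone check of that claim might look like the sketch below; this is not the exact test run in the thread, and it assumes the classic `import vtk` Python bindings. Run it and watch the process size in top.)

```python
# Hedged sketch: repeatedly render offscreen with plain VTK and watch the
# process size in top; if VTK itself leaked here, memory would grow per frame.
import vtk

renWin = vtk.vtkRenderWindow()
renWin.SetOffScreenRendering(1)          # same switch VCS flips when bg=True
renWin.SetSize(800, 600)

ren = vtk.vtkRenderer()
renWin.AddRenderer(ren)

cone = vtk.vtkConeSource()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(cone.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
ren.AddActor(actor)

for i in range(1000):
    renWin.Render()
```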
@dlonie I'm not quite sure what's going on. All @durack1 and I noticed is that if you plot with vcs bg=1 then the process mem size keeps growing. And looking at the code it seems that the only difference is this:

```python
if self.bg:
    self.renWin.SetOffScreenRendering(True)
    self.renWin.SetSize(self.canvas.bgX, self.canvas.bgY)
```

But obviously there must be something else.
@doutriaux1 @dlonie for perspective, the loop generating these images has ~1700 steps and as it passed the 1000 mark yesterday it was using 50GB(!) of memory.. So it's not a small leak at all.. Each png being generated is around the 100KB mark, so around 170MB total as output..
@dlonie also it seems to be coming from the png function, not the rendering function:

```python
self.renWin.SetWindowName("VCS Canvas %i" % self.canvas._canvas_id)
self.renWin.SetAlphaBitPlanes(1)
## turning on Stencil for Labels on iso plots
self.renWin.SetStencilCapable(1)
## turning off antialiasing by default
## mostly so that pngs are same across platforms
self.renWin.SetMultiSamples(0)
```
Those additional settings on the render window shouldn't affect the offscreen rendering mechanism. What is the memory usage for the following scenarios?
Also, have you printed out the leaking objects and compared the results with a single run? The objects with increased counts may point you to the issue.
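(One way to "print out the leaking objects", as suggested above, is to take a census of live Python objects by type before and after a plot/png cycle and diff the counts. A minimal sketch using only the standard library; `object_census` is a hypothetical helper, not a vcs API.)

```python
# Hedged sketch: diff live-object counts by type around one plot/png cycle.
import gc
from collections import Counter

def object_census():
    gc.collect()
    return Counter(type(o).__name__ for o in gc.get_objects())

before = object_census()
# ... one cycle here, e.g. x.plot(...); x.png("frame.png"); x.clear() ...
after = object_census()

# types whose live count grew across the cycle
for name, count in (after - before).most_common(20):
    print("%-30s +%d" % (name, count))
```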
The 50GB would make sense if the plot map were leaking -- that holds on to all of the plotted datasets as well as the intermediate filter outputs, which would add up very, very quickly.
Oh, and by plot map I mean the map returned by the backend's plot call.
Ok I'll run your runTest and print the leaked objects in bg vs not in bg, that should point us in the right direction. Thanks.
@dlonie yes but why isn't it cleaned up ONLY when plotting in bg mode? That part is really odd.
@dlonie would a well placed gc.collect() help?
When I was having troubles with it I was doing foreground plots, so I don't think fg/bg really matters for that particular leak. Garbage collection should happen automatically by the interpreter as needed. I doubt it's an issue of delayed collection -- given the plethora of other leaks occurring in vcs, it's more likely a reference counting problem.
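(A quick way to rule delayed collection in or out, sketched below: force a collection and see whether anything lands in `gc.garbage`. If it stays empty and memory still grows, the leak is more likely held references than an uncollected cycle.)

```python
# Hedged sketch: force a collection and check for uncollectable cycles.
import gc

gc.set_debug(gc.DEBUG_UNCOLLECTABLE)   # keep uncollectable objects in gc.garbage
collected = gc.collect()
print("collected %d objects, %d uncollectable" % (collected, len(gc.garbage)))
```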
actually, bg does matter; we watched the process go for a while with @durack1 and the memory only grows when using bg=1.
Must be a different leak then. I was seeing that dictionary leak in foreground plots as well.
@dlonie @doutriaux1 just FYI.. I ran this for one year, so 48 steps (I'm looping through a bunch of variables), and the memory usage grew with each step. Let me know if you need more info from me regarding my X11/client setup or other details.. Full disclosure here: I'm using a VNC client.. As an aside, this plotting is also noticeably slow..
@durack1 Thanks for the numbers, that is troubling indeed. I'm hesitant to say VTK is at fault here, since the standalone offscreen test above doesn't show that kind of growth.
So my suspicion is that something in VCS is holding onto application state in between plot calls.
@dlonie can you try to add the vtkPNGWriter to your bit of code above, just to make sure? Thanks. That's where the leak is probably coming from.
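(Adding the PNG writer to the standalone offscreen loop would look roughly like the sketch below -- not the exact code from the thread -- using vtkWindowToImageFilter to grab the window contents and vtkPNGWriter to write each frame.)

```python
# Hedged sketch: offscreen render loop plus a PNG write per frame.
import vtk

renWin = vtk.vtkRenderWindow()
renWin.SetOffScreenRendering(1)
renWin.SetSize(800, 600)
ren = vtk.vtkRenderer()
renWin.AddRenderer(ren)

for i in range(1000):
    renWin.Render()

    w2i = vtk.vtkWindowToImageFilter()
    w2i.SetInput(renWin)                 # grab the offscreen framebuffer
    w2i.Update()

    writer = vtk.vtkPNGWriter()
    writer.SetInputConnection(w2i.GetOutputPort())
    writer.SetFileName("frame_%04d.png" % i)
    writer.Write()
```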
@dlonie does this output mean anything to you?
I'll run this code again with some timing turned on; it seems like things really grind to a halt and slow down remarkably when reusing the same canvas.
@dlonie let's try adding vtkPNGWriter as @doutriaux1 suggested. That will help. Maybe we can give it to @sankhesh
@durack1 Nope, haven't seen that before. I remember we were having problems with some template objects growing out of control when reusing a canvas for animations, causing a similar slowdown/memory leak. @doutriaux1 @aashish24 Rebuilding to rerun w/ png writer now.
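(One quick way to spot that kind of growth is to count named vcs elements before and after a few plots on the same canvas; a sketch below, assuming `vcs.listelements(kind)` returns the element names as it does in UV-CDAT 2.x.)

```python
# Hedged sketch: check whether named vcs elements accumulate across plots.
import vcs

def element_counts():
    return {kind: len(vcs.listelements(kind))
            for kind in ("template", "isofill", "textcombined")}

print(element_counts())
# ... plot and clear a few frames on the same canvas here ...
print(element_counts())   # growing counts => elements accumulating per plot
```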
@dlonie @doutriaux1 not sure this info below is totally related to the png leak, but here's the loop I'm running:

```python
counter = 1
for var in ['sic','sst']:
    ...
    x = vcs.init()
    bg = False ; # For 1 yr uses ~260MB
    for data in ['obs','bcs']:
        ...
        for count,y in enumerate(range(1870,2013)):
            ...
            for m in range(12):
                startTime = time.time()
                ...
                x.plot(title,bg=bg)
                x.plot(s1,t1,iso,bg=bg); #,ratio="autot"); #,vtk_backend_grid=g)
                x.plot(diff,t2,iso2,bg=bg); #,ratio="autot") ; #,vtk_backend_grid=g)
                x.plot(s2,t3,iso,bg=bg); #,ratio="autot") ; #,vtk_backend_grid=g)
                ...
                x.png(fileName)
                x.clear()
                endTime = time.time()
                ...
                print counterStr,printStr,varName.ljust(6),BC,timeStr,memStr
                counter = counter+1
        ...
        gc.collect() ; # Attempt to force a memory flush
    ...
    x.ffmpeg(os.path.join(outPath,outMP4File),outFiles,rate=5,bitrate=2048); #,options=u'-r 2') ; # Rate is frames per second - 1/2s per month
    x.clear()
```
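(For what it's worth, a per-step memory column like memStr above can be produced with something like the sketch below; the script's actual implementation isn't shown in the thread. On Linux, ru_maxrss is the peak resident set size in kilobytes, which is enough to watch monotonic growth.)

```python
# Hedged sketch: report (peak) resident memory of the current process.
import resource

def rss_mb():
    # ru_maxrss is in kilobytes on Linux (bytes on OS X)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

memStr = "%8.1f MB" % rss_mb()
```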
You can try profiling the code and looking at where it's spending all that extra time -- that's how we tracked down the template issue. I'll see if I can dig up some resources.
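(Profiling a single iteration with cProfile, for example, will show where the extra time goes; a sketch below, where one_step is a hypothetical stand-in for one plot/png/clear cycle of the script above.)

```python
# Hedged sketch: profile one plot/png cycle to see where the time is spent.
import cProfile
import pstats

def one_step():
    # stand-in for one iteration, e.g.:
    # x.plot(s1, t1, iso, bg=bg); x.png(fileName); x.clear()
    pass

cProfile.run("one_step()", "plot_step.prof")
pstats.Stats("plot_step.prof").sort_stats("cumulative").print_stats(25)
```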
@durack1 That is fine. ssh -X/Y will give you the OpenGL from your local machine. If you have a Linux local machine I would be interested in the test results/glxinfo on that machine, even if you don't run uvcdat there.
@danlipsa almost all our local machines are now OS X laptops.. What info do you need from these machines locally? Is there another way I can get the same info - querying the system hardware?
@durack1 @doutriaux1 @aashish24 I merged in a fix for the VTK bug.
@danlipsa the script that I am using is exposing a very big memory leak; it creates a number of VCS objects before saving them using the PNG writer.. If I can get your merged PR in a build I can rerun my script and report the memory usage.. It seems to expose the issue very quickly throughout the ~7k iterations..
@danlipsa, @doutriaux1 will pull across your branch for testing locally.. I will run my script against it and see how it compares to the numbers above..
@durack1 Don't do that yet. I don't think you'll see any difference. I want to look more into this.
@danlipsa no problem.. Let me know when you want me to kick the tires; it'll take a little time to get the changes into a local build so I can run my script..
@aashish24 @durack1 @doutriaux1 I have a new merge request for the VTK bug. The first time the bug wasn't really fixed.
@danlipsa thanks for the heads up.. This sounds like it; the two-order-of-magnitude drop in memory usage should certainly flow into better behavior in my example script.
That is great! @durack1 I will build a conda package for you to test it with.
@doutriaux1 Note, the change is not merged in VTK yet - it is in the review process.
Can I build against your commit?
@doutriaux1 Yes, you can. It passes all VTK and uvcdat tests.
@doutriaux1 let me know when I can take a test drive.. I'm curious if this will solve the problem that made things unusable..
@danlipsa it saves 40MB per frame.
@doutriaux1 nice! I'll rerun my script next week and see what gives.. And also outline the same diagnostics included above. An update will also be useful for dealing with #1424 @chaosphere2112
@durack1 you snoozed too long, it's in master 😉
@doutriaux1 excellent! Is it in conda nightly?
yep
@danlipsa it seems that the fix has done the trick. The code iterates through monthly data from 1870-01 to 2012-12 over 4 variables (so steps 1 through 6864). As you can see below, while a number of arrays are being filled the memory climbs (this is likely an internal python/UV-CDAT behavior), whereas once these variables are being reused there is no memory growth - so steps 1717 to 3432, 3433 to 5148 and 5149 to 6795 have very stable memory usage.
@chaosphere2112 the increase in plot times is still a problem, as are the python objects that continue to accumulate, but that can be solved in #1424. @danlipsa @doutriaux1 I think you can close this issue.
yay! great job @danlipsa, thanks @doutriaux1 @durack1
@durack1 @doutriaux1 Thanks for testing! Great to hear the fix works for your test case. I am closing this ...
@danlipsa @aashish24 we should take a look at this one!
A LOT!!! Run and look at memory in top.