RNNDownBeatProcessor using lots of memory #404
Comments
Yes, unfortunately this is the expected behaviour. The library is by no means optimised for memory consumption. Additionally, many objects keep references to their underlying objects, which sometimes causes extensive memory usage. But as you said, this is not a memory leak, since the memory is freed after computation. There is an open issue (#250) describing one possibility to reduce the memory footprint. Since I have never run into memory issues myself, there was no urgent need to work on this. Another (probably easier to accomplish) solution would be to somehow "bundle" the STFT, spectrogram and filtering operations and do all computations framewise, without allocating memory for all intermediate steps. Although not explicitly mentioned/discussed in #248, this was one of the ideas behind that issue. If you want to work on this, please let me know, since I have a couple of ideas of how this could be designed.
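A minimal sketch of what such framewise "bundling" might look like, in plain NumPy rather than madmom's actual classes; the function name, the filterbank matrix and all parameters are illustrative assumptions, not madmom's API:

```python
import numpy as np

def framewise_filtered_spec(signal, filterbank, frame_size=2048, hop_size=441):
    """Compute a filtered log spectrogram one frame at a time.

    Only the final (num_frames, num_bands) result is allocated; no
    full-size STFT or magnitude spectrogram ever exists in memory.
    `filterbank` is assumed to be a (num_bins, num_bands) matrix.
    """
    window = np.hanning(frame_size)
    num_frames = max(0, (len(signal) - frame_size) // hop_size + 1)
    out = np.empty((num_frames, filterbank.shape[1]), dtype=np.float32)
    for i in range(num_frames):
        frame = signal[i * hop_size:i * hop_size + frame_size]
        mag = np.abs(np.fft.rfft(frame * window))   # one frame of the STFT
        out[i] = np.log1p(mag @ filterbank)         # filter + log, one frame
    return out
```

With this scheme the per-frame temporaries are only a few kilobytes, so peak memory is dominated by the final feature matrix instead of the intermediate STFT and spectrogram arrays.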
Yeah, it looks like the maximum amount of memory allocated by any single stage is ~400MB, allocated by the ShortTimeFourierTransformProcessor. I am looking to deploy madmom in resource-constrained environments (with 512MB of RAM), so I would indeed be interested in contributing to reduce the memory footprint. Let me know which approach you would suggest.
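Per-stage peaks like the ~400MB figure above can be measured with the standard library's tracemalloc; this is a generic sketch, and peak_memory is a hypothetical helper, not part of madmom:

```python
import tracemalloc

def peak_memory(func, *args, **kwargs):
    """Run func and return (result, peak traced allocation in MB)."""
    tracemalloc.start()
    result = func(*args, **kwargs)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, peak / 1e6
```

Recent NumPy versions (1.13+, on Python 3.6+) register their array allocations with tracemalloc, so the large intermediate spectrogram buffers show up in the reported peak.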
To me the memory behaviour looks ok. Of course memory accumulates while the 3 different STFTs are computed, but then decreases again once they are combined. I made a first attempt to address #248 in order to be able to cast the intermediate steps to simple numpy arrays. It reduces the memory footprint roughly by a factor of 2. Will create a PR in the next couple of days. Of course memory can be reduced further by block-wise processing (#250), but I'd implement that after fixing #248. May I ask in which kind of application you want to deploy madmom? Did you have a look at the online variant of the processor?
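The mechanism behind #248, sketched with a toy stand-in for madmom's array subclasses (the Spectrogram class below is illustrative, not madmom's real implementation):

```python
import numpy as np

class Spectrogram(np.ndarray):           # toy stand-in for madmom's subclass
    def __new__(cls, stft):
        obj = np.abs(stft).view(cls)
        obj.stft = stft                  # back-reference keeps the STFT alive
        return obj

stft = np.fft.rfft(np.random.randn(100, 1000))
spec = Spectrogram(stft)
del stft                                 # STFT still alive via spec.stft
spec = np.array(spec)                    # copy to a plain ndarray: the
                                         # Spectrogram, and through it the
                                         # STFT, can now be garbage-collected
```

Note that np.asarray(spec) would not suffice here: it returns a base-class view whose .base still references the subclass instance (and thus the STFT), so a copy, or explicitly dropping the attribute, is needed to actually free the earlier stages of the chain.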
Please see PR #405, which at least mitigates the memory problem. For a 10-minute song, memory consumption goes down from >2.2GB to ~650MB. You can start working on #250 on top of that branch if you have some spare time. If not, I will start working on it next year, which is a rather unspecific expression of time ;)
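What block-wise processing (#250) could look like in generic terms; process_blockwise, the frame-rate constants and the context handling are all assumptions for illustration, not madmom's API:

```python
import numpy as np

SR, FPS = 44100, 100        # assumed sample rate and activation frame rate
HOP = SR // FPS             # samples per activation frame (441)

def process_blockwise(signal, process, block_secs=30, context_secs=1):
    """Run `process` on overlapping blocks, keeping only the central frames,
    so peak memory scales with the block length, not the song length."""
    block, ctx = block_secs * SR, context_secs * SR
    outs = []
    for start in range(0, len(signal), block):
        lo = max(0, start - ctx)                   # left context for the network
        chunk = signal[lo:start + block + ctx]     # right context included
        act = process(chunk)                       # activations at FPS
        first = (start - lo) // HOP                # drop left-context frames
        last = first + min(block, len(signal) - start) // HOP
        outs.append(act[first:last])
    return np.concatenate(outs)
```

The overlap must cover at least the temporal context the network effectively uses, otherwise activations near the block borders will differ from a single-pass run.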
Closing this issue, since #405 (although still not merged) provides a solution. |
Both the RNNDownBeatProcessor, as well as the full downbeat tracking pipeline built on it, use more than 1GB of memory for a ~40MB wav file.
It seems the majority of the memory is getting allocated here: https://github.com/CPJKU/madmom/blob/master/madmom/features/downbeats.py#L88. Specifically, it looks like the ShortTimeFourierTransformProcessor is using up quite a bit of memory. I don't have reason to believe there is a memory leak going on, but I wanted to check whether this memory profile is expected for the beat processing, or whether there are parameters we could tune to reduce the memory footprint.
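One way such a whole-run figure could be reproduced, using the third-party memory_profiler package to sample process RSS (the wav path is a placeholder):

```python
from memory_profiler import memory_usage
from madmom.features.downbeats import RNNDownBeatProcessor

proc = RNNDownBeatProcessor()
# Sample the process RSS while the processor runs on the file.
peak = max(memory_usage((proc, ('some_40mb_file.wav',))))
print('peak RSS: %.0f MiB' % peak)
```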