RNNDownBeatProcessor using lots of memory #404

Closed · hpx7 opened this issue Dec 29, 2018 · 5 comments
hpx7 commented Dec 29, 2018

Both

```python
RNNBeatProcessor()('file.wav')
```

and

```python
RNNDownBeatProcessor()('file.wav')
```

use more than 1GB of memory for a ~40MB wav file.
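
For reference, a quick way to see the peak usage (a minimal sketch; assumes a Unix-like system, and note that ru_maxrss is reported in kB on Linux but in bytes on macOS):

```python
import resource

from madmom.features.beats import RNNBeatProcessor

RNNBeatProcessor()('file.wav')

# peak resident set size of the whole process (kB on Linux)
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print('peak RSS: ~%.0f MB' % (peak_kb / 1024.0))
```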

It seems the majority of the memory is allocated here: https://github.com/CPJKU/madmom/blob/master/madmom/features/downbeats.py#L88. Specifically, the ShortTimeFourierTransformProcessor appears to use quite a bit of memory.

I have no reason to believe there is a memory leak going on, but I wanted to check whether this memory profile is expected for beat processing, or whether there are parameters we could tune to reduce the memory footprint.

superbock (Collaborator) commented

Yes, unfortunately this is the expected behaviour. The library is by no means optimised for memory consumption. Additionally, many objects keep references to underlying objects, which sometimes causes extensive memory usage. But as you said, this is not a memory leak, since the memory is freed after the computation.

There is an open issue (#250) describing one possibility to reduce the memory footprint. Since I have never had any memory issues so far, there was no urgent need to work on this.

Another (probably easier to accomplish) solution would be to somehow "bundle" the STFT, spectrogram, and filtering operations and do all computations frame-wise, without allocating memory for all the intermediate steps. Although not explicitly mentioned/discussed in #248, this was one of the ideas behind that issue.
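
Roughly, the idea is something like the following sketch (a made-up helper, not actual madmom API; it assumes a sequence of frames, a window array, and a filterbank matrix whose rows match the FFT bins):

```python
import numpy as np

def framewise_features(frames, window, filterbank):
    """Frame-wise STFT -> spectrogram -> filtering (sketch only).

    Only the final feature matrix is allocated up front; the STFT
    and spectrogram of each frame are short-lived temporaries.
    """
    features = np.empty((len(frames), filterbank.shape[1]))
    for i, frame in enumerate(frames):
        stft = np.fft.rfft(frame * window)                     # STFT of one frame
        spec = np.abs(stft)                                    # magnitude spectrum
        features[i] = np.log10(np.dot(spec, filterbank) + 1)   # filter + log
    return features
```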

If you want to work on this, please let me know since I have a couple of ideas of how this could be designed.

hpx7 (Author) commented Dec 29, 2018

Yeah, it looks like the maximum amount of memory allocated by any single stage is ~400MB, allocated by the ShortTimeFourierTransformProcessor with a 4096 frame_size. However, the allocations accumulate across all the stages, ending up at over 1GB total. It's not clear whether this is because references are hanging around too long or because the garbage collector just hasn't run yet.
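
For context, a minimal way to watch the footprint over time (a sketch using the memory_profiler package; the gc.collect() is only there to rule out garbage that simply hasn't been collected yet):

```python
import gc

from madmom.features.downbeats import RNNDownBeatProcessor
from memory_profiler import memory_usage

def run():
    RNNDownBeatProcessor()('file.wav')
    gc.collect()  # rule out garbage that has not been collected yet

# sample the process memory (in MB) every 100 ms while run() executes
samples = memory_usage((run, (), {}), interval=0.1)
print('peak: %.0f MB' % max(samples))
```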

I am looking to deploy madmom in resource-constrained environments (with 512MB of RAM), so I would indeed be interested in contributing to reduce the memory footprint. Let me know which approach you would suggest.

superbock (Collaborator) commented

To me the memory behaviour looks OK. Of course memory accumulates across the 3 different STFTs, but it decreases again after they are combined with np.hstack. The garbage collector seems to work fine (at least in my tests).
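
To illustrate with a toy example (not madmom code): np.hstack copies its inputs, so the peak holds all three parts plus the combined array, and memory drops again once the parts are released:

```python
import numpy as np

# toy stand-ins for the activations of the three STFT pipelines
parts = [np.zeros((100000, bands)) for bands in (24, 48, 96)]

# np.hstack copies, so at this point all parts *and* the result are alive
combined = np.hstack(parts)

# dropping the last references frees the parts; only `combined` remains
del parts
```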

I made a first attempt at #248 in order to be able to cast the intermediate steps to plain numpy arrays. It reduces the memory footprint roughly by a factor of 2. I will create a PR in the next couple of days.

Of course memory can be reduced further by block-wise processing (#250), but I'd implement this after fixing #248.

May I ask in which kind of application you want to deploy madmom? Did you have a look at the online variant of the RNNBeatProcessor? By setting online=True, frames are processed one by one, which keeps memory at a certain (fixed) amount. Of course, the detection performance is not as accurate then.
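
For example, something like this (minimal sketch, file name is a placeholder):

```python
from madmom.features.beats import RNNBeatProcessor

# online mode uses causal processing, one frame at a time, so the
# memory footprint stays fixed (at the cost of some accuracy)
proc = RNNBeatProcessor(online=True)
activations = proc('file.wav')
```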

superbock (Collaborator) commented Dec 31, 2018

Please see PR #405, which at least mitigates the memory problem. For a 10-minute song, memory consumption goes down from >2.2GB to ~650MB.
Edit: memory went down further, to ~250MB, which is basically one order of magnitude less :)

You can start working on #250 on top of that branch if you have some spare time. If not, I will start working on it next year — which is a rather unspecific expression of time ;)

superbock (Collaborator) commented

Closing this issue, since #405 (although still not merged) provides a solution.
