OOM killed when opening large files #2899
Thanks for reporting this @Breee! Unfortunately, this is one of those cases where it works as intended 😬, or at least it is how it has always worked, and there isn't a non-breaking way to fix it.
If you do this with enough VUs and/or big enough files, this will always OOM. You can read the docs for ways around it: for files that are basically very big lists of items, the recommendation is to use SharedArray (a sketch follows at the end of this comment). Your case unfortunately falls closer to #1931, but as I explain in #2311, just having some representation of a file that isn't a memory hog is not enough, as people generally want to do stuff with those files instead of just having them. In this particular case, this is just one more thing that isn't well implemented, or at least not with big files in mind. Basically, three things with good intentions combine to turn what should on the surface be a 1GB allocation into 8+GB.
Unfortunately, the third of those (how the read buffer grows) means that instead of allocating 1GB for the buffer, filling it up, and then making one copy that we keep, it allocates a series of buffers, each double the size of the previous one, which in total comes to about 3GB of allocations 🤦, as the sketch below shows.
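As a back-of-the-envelope illustration of how doubling gets there (the 512-byte starting size and pure doubling growth are assumptions for the sake of the arithmetic, not k6's exact reader internals):

```javascript
// Hypothetical numbers: sum the allocations made while a buffer grows by
// doubling from 512 bytes until it can hold a 1 GiB file.
let size = 512;
let total = 0;
while (size < (1 << 30)) {
  size *= 2;     // each growth step allocates a new buffer twice as big
  total += size; // ...and the old one becomes garbage
}
// Prints "2.00 GiB": ~2 GiB allocated during growth; keeping a final
// 1 GiB copy brings the total to roughly 3 GiB for a 1 GiB file.
console.log((total / (1 << 30)).toFixed(2) + ' GiB');
```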
We do the caching on the first read. When we then go to actually read it (from memory already), you can even clearly see where the code decides not to make the buffer big enough 🤦. I would like to say that this is an easy fix, but I see no point in "fixing" this particular part, as the problem above will make the whole thing irrelevant once again: even if you managed to load the file, it still would not work once a bunch of VUs each hold a copy of it, even if that takes just 1GB per VU. I would argue that even if it took 1GB for all of them together, it would still fall apart.

I don't have a better workaround at this point than "write an extension for your particular use case". I would like to tell you that we will be working on this soon, but I am doubtful we will have a finished solution soon enough. One of the big problems in practice is that the moment you want to actually use this "big file", everything you use it with will need to support whatever abstraction we end up with. You can read #2311 for those same arguments.

I am leaving this open so it is easier to find and refer to, but it will likely be closed together with one of the linked issues.
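For reference, the SharedArray pattern mentioned above looks roughly like this (a minimal sketch of the documented k6 API; the file name and JSON structure are hypothetical):

```javascript
import { SharedArray } from 'k6/data';

// The callback runs once; the parsed data is shared read-only across all
// VUs instead of every VU holding its own copy.
const users = new SharedArray('users', function () {
  return JSON.parse(open('./users.json'));
});

export default function () {
  const user = users[Math.floor(Math.random() * users.length)];
  // ... use `user` in requests ...
}
```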
Thanks a lot for the explanation!
Brief summary
k6 allocates a lot of RAM (7+ GB) when I try to open and read a big file.
k6 version
k6 v0.42.0 ((devel), go1.19.4, linux/amd64)
OS
Ubuntu 20.04.5 LTS
Docker version and image (if applicable)
No response
Steps to reproduce the problem
fallocate -l 1G bigfile.img
k6 run oom.js
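The contents of oom.js are not shown in this report. A minimal script matching the description, assuming it simply opens the file in the init context, might look like:

```javascript
// Hypothetical reproduction: open() in the init context reads the whole
// file into memory, which is what triggers the OOM kill described above.
const data = open('./bigfile.img', 'b'); // 'b' returns an ArrayBuffer instead of a string

export default function () {
  // an empty iteration is enough; the allocations happen at init time
}
```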
Expected behaviour
No OOM kill
Actual behaviour
k6 gets OOM killed quickly