workers being constantly booted #8
Comments
Just passing through, but right away I'm guessing you have too many workers. Our app isn't that big, but we still only run 2 Puma workers per 1x dyno - running 3 or more would probably only be OK with a fairly small app. Try setting your workers to 2.
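(For reference, a minimal config/puma.rb sketch of the kind of two-worker setup being suggested here; WEB_CONCURRENCY and RAILS_MAX_THREADS follow Heroku's conventional env var names, and the fallback values are only examples.)

```ruby
# config/puma.rb: minimal sketch of a 2-worker setup on a 1x dyno.
# WEB_CONCURRENCY / RAILS_MAX_THREADS are conventional Heroku env var names;
# the fallback values here are only examples.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count

preload_app!

port        ENV.fetch("PORT", 3000)
environment ENV.fetch("RACK_ENV", "development")
```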
@Kagetsuki Thanks, I've tried reducing the workers to 2. So far I haven't hit the memory limit so there haven't been any workers killed. Memory usage seems to be slowly but steadily climbing though, so we'll see how it goes after a few hours.
OMFG are you serious!? What is the rate of the climb? About 1MB every 3~7 minutes? Does it climb slowly regardless of traffic?
@Kagetsuki Yes, exactly. Have you seen the same thing? The strangest thing is that it'll climb even if there's zero traffic hitting the site. I've tried to track down any memory leaks, but I haven't been able to figure it out. (Hence my attempt to use puma worker killer.) 😢
@markquezada My problem is EXACTLY.THE.SAME. I have literally tried anything and everything to track down this memory leak over the last two weeks. I'm posting this thread to the Heroku support ticket I have open. Just to confirm, do you have any of the same environment set up as I do? (I know you have puma, so I'm not including it in the list):
@Kagetsuki I'm using a very similar stack with the exception that I'm using postgres instead of mongo.
@markquezada I'm betting that if you bump Ruby up to 2.1 the problem will remain exactly the same. The fact that you are using Postgres is actually extremely nice to hear! I was suspecting some issue with mongo, maybe some caching or buffer issue. Since you're having the same issue without mongo, I think I can probably rule it out. As for New Relic, I think we ruled that out as the culprit as well. Still, given that the only thing the app seems to be doing is writing some logs, and the rate of increase is roughly the size of a few strings, I still have my suspicions here. May I ask: if you run the app locally (with puma), do you see any memory increase over time? We did not see any obvious increase ourselves.
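(For anyone wanting to check this locally: a rough sketch of watching a puma process's RSS over time using the get_process_mem gem, which puma_worker_killer itself relies on. The PID argument and the 30-second interval are placeholders.)

```ruby
# watch_rss.rb: rough local check that prints a process's RSS every 30 seconds.
# Usage: ruby watch_rss.rb <pid-of-local-puma-master-or-worker>
require "get_process_mem"

pid = Integer(ARGV.fetch(0))
loop do
  puts "#{Time.now} pid=#{pid} rss=#{GetProcessMem.new(pid).mb.round(1)} MB"
  sleep 30
end
```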
Memory increase

Soo... I see an increase in puma memory usage WITHOUT puma_worker_killer, btw. Using codetriage.com, I see memory steadily increase, and I see this in production as well. You can measure locally using https://github.com/schneems/derailed_benchmarks#perfram_over_time. Ruby 2.1.4 is way, way better than previous versions of Ruby 2.1, and Ruby 2.2.0 is even better than 2.1 in terms of memory growth.

Puma on Heroku

On a 1x dyno, I can't afford to have more than one worker for http://codetriage.com, or I go over my RAM limits, plain and simple. The way that puma_worker_killer measures RAM is different than the way that Heroku measures RAM, see:

By default puma_worker_killer will attempt to kill workers long before it's needed on Heroku.

Seeing multiple Puma worker killers

Where did you put your initialization code? I recommend an initializer. If you try to get fancy by putting it somewhere in your puma config, then I could see it behaving something like how you stated.
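(A minimal sketch of the initializer placement recommended above, following the puma_worker_killer README; the numbers are illustrative rather than recommendations for any particular dyno size.)

```ruby
# config/initializers/puma_worker_killer.rb
# Illustrative values only; tune ram/frequency/percent_usage to your dyno.
PumaWorkerKiller.config do |config|
  config.ram           = 512   # MB available to the dyno
  config.frequency     = 10    # seconds between memory checks
  config.percent_usage = 0.98  # cull the largest worker once total usage
                               # crosses 98% of the configured ram
end
PumaWorkerKiller.start
```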
@schneems I don't think either of us meant to imply Puma Worker Killer was causing this issue - rather, we both started using it because of this issue.
I derive two points from this but I'm not sure if you meant either of them or neither of them:
Puma Worker Killer has done wonders in that I'm not seeing R14s, and it does appear to be killing workers at appropriate times even though I have two workers. This is on a 1x dyno. Are you suggesting that this is not a good idea? Initialization code is in initializers, just as you recommend. I hadn't even thought of putting it in the puma config. Honestly, if you are implying that gradual memory increases are unavoidable with Puma, I'm going to strongly consider switching back to Unicorn - performance penalties and all.
Gradual memory increase should happen with any web server. I've only done testing with puma. To see the growth you need to hit puma with a bunch of requests. It doesn't just grow by 50MB if you start the server and do nothing (at least I hope not).

Running 2 workers and PWK

I'm not saying it's a bad idea. I guess I'm saying that I've not really tried it. I don't recommend using so many workers that PWK is constantly thrashing, but if it doesn't kill a process until it's been alive for a few hours, it should be fine. My comment on where to put the code was directed to @markquezada, who opened the original ticket.
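(If the goal is simply to recycle long-lived workers on a schedule rather than react to memory pressure, puma_worker_killer, at least in later versions of the gem, also offers a rolling-restart mode; a minimal sketch:)

```ruby
# config/initializers/puma_worker_killer.rb
# Alternative: restart workers on a fixed schedule instead of (or alongside)
# the memory threshold. The 12-hour figure is only an example.
PumaWorkerKiller.enable_rolling_restart(12 * 3600)
```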
@schneems I put the PWK config in an initializer as recommended by the readme. Since lowering my workers to 2 (as per the advice from @Kagetsuki) I haven't seen the worker thrashing any further. Memory still does slowly increase, but I haven't hit the threshold yet. I was originally running with ruby 2.1.2, then 2.1.3, and the memory increases happened much faster, so I downgraded to 2.0. Now that 2.1.4 has been released, I'll give that a try.
It certainly does. Mind you, it increases over many hours, not just a few minutes. With two workers, PWK is currently killing one worker every ~7 to 9 hours for me.
Maybe it's an issue with Puma, but we're seeing it start out consuming more than 300MB per worker (with 2 threads), so like @schneems mentioned before, we'll probably have to stick with only one worker on Heroku.
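(For rough context, assuming the standard 512 MB memory limit of a Heroku 1X dyno, the arithmetic alone rules out two workers at that starting size.)

```ruby
# Back-of-the-envelope check, assuming a 512 MB limit on a 1X dyno.
dyno_limit_mb = 512
per_worker_mb = 300   # observed starting RSS per worker, from the comment above
2 * per_worker_mb     # => 600, already over the limit before any growth
1 * per_worker_mb     # => 300, leaves roughly 200 MB of headroom
```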
FWIW, I've seen this exact same behavior with Puma + Heroku. If you have any new updates regarding findings on this, that would be great.
I just reported #14, but on second review, this may be the same issue. Is it possible here that PWK is not killing the correct workers? I note in the initial example, it's always "Worker 0" that is being terminated and booted. If Worker 0 is actually a fresh worker, it makes sense that memory usage is not going down after TERM.
This is because of the way that PWK works; it might be a design flaw, but Unicorn Worker Killer has it as well. PWK kills the process with the largest amount of memory, and since each fork has less memory, the first fork will be the largest, so this would be expected to be Worker 0. Unicorn Worker Killer does this too, but less intentionally: since you set a per-process threshold instead of a global threshold, your largest process will always be the first spawned worker and therefore it will always be the first killed off.
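(A simplified sketch of the selection strategy described above, not PWK's actual source: measure each worker's RSS and TERM the largest, which in practice tends to be Worker 0, the first fork. `worker_pids` is a stand-in for however the cluster's worker PIDs are enumerated.)

```ruby
# Simplified illustration of "kill the worker using the most memory".
# Not puma_worker_killer's real implementation; worker_pids is hypothetical.
require "get_process_mem"

def kill_largest_worker(worker_pids)
  victim = worker_pids.max_by { |pid| GetProcessMem.new(pid).mb }
  Process.kill("TERM", victim)  # in practice usually Worker 0, the first fork
end
```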
Is it resolved?
Last comment was from over 4 years ago. Closed as stale. If you're still seeing this then please open a new issue, and I'll need a way to reproduce the behavior locally: http://codetriage.com/example_app
Hi,
I'm actually not sure if this is expected behavior or a bug. After installing and deploying to Heroku, I see this in the logs:
It looks like workers are being identified as out of memory, and they're sent a TERM, but RAM usage never recedes.
This is on a relatively unused install with two 1X Dynos running puma.
I've tried running this on both ruby 2.1.2 (and 2.1.3) and now 2.0.0 with the same result. Thoughts?