Optimum Sidekiq Configuration on Heroku with Puma #12
Comments
@winston Thanks for sharing. It's helpful!
@winston Thanks for the update 👍
Hey @winston! I thought I'd share our current configuration at Hired. We have pretty much the same stack! Heroku running Sidekiq (Pro) and Puma for the web server. I don't do any of the fancy dynamic sizing calculations that you do; we control everything with environment variables. Here's our simplified Procfile:
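A minimal sketch of what such a Procfile might look like (process names, file paths, and the clockwork command are assumptions, not Hired's actual entries):

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -C config/sidekiq.yml
clock: bundle exec clockwork lib/clock.rb
```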
Puma config:
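A sketch of a config/puma.rb driven entirely by the PUMA_WORKERS and PUMA_MAX_THREADS variables mentioned later in this comment (defaults are illustrative, not Hired's):

```ruby
# config/puma.rb (sketch): everything is sized by ENV
workers Integer(ENV.fetch("PUMA_WORKERS", 2))

max_threads = Integer(ENV.fetch("PUMA_MAX_THREADS", 3))
threads max_threads, max_threads

preload_app!
port        ENV.fetch("PORT", 3000)
environment ENV.fetch("RACK_ENV", "development")

on_worker_boot do
  # Re-establish Active Record connections in each forked Puma worker.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end
```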
Sidekiq config:
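And a sketch of a matching Sidekiq config; Sidekiq runs config/sidekiq.yml through ERB, so SIDEKIQ_CONCURRENCY can be read straight from the environment (queue names here are illustrative):

```yaml
# config/sidekiq.yml (sketch)
:concurrency: <%= ENV.fetch("SIDEKIQ_CONCURRENCY", 5) %>
:queues:
  - default
```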
We have found that Heroku's performance dynos are phenomenally more performant than the standard ones, and they come with tons of RAM, so we can fit many copies of the app in memory. This allows me to use Puma's cluster mode, and currently we run 12 Puma processes (PUMA_WORKERS) per dyno, each Puma using up to 3 threads (PUMA_MAX_THREADS). I could probably increase the workers significantly and still have enough memory. For Hired production we run 2 Performance-L dynos, plus an additional Performance-L for admins only (our internal admin app). For Sidekiq, I use cheaper Standard-2X dynos and have a Hubot script that auto-scales them based on queue depth. Currently I have SIDEKIQ_CONCURRENCY=5, and we always run at least 2 or 3 worker dynos, going up to 12 at peak times. We also run a single Clock dyno for scheduled jobs. Mostly these jobs just kick off other jobs or log stuff, so it's on a Standard-1X plan. I hope this helps!
Hey @heythisisnate, this is awesome! Thanks for sharing! Good to know what Hired is using. I haven't had a chance to use Sidekiq Pro myself. RE: Sidekiq, most of what I do is through ENVs as well, except that I noticed that you didn't set a size for the Redis pool. Thoughts? Thanks for your reply! 🙇
👍
Hi @winston, dividing by 2 in:
Btw, I am using redis-objects, which needs at least one connection per app instance (web). Is it better to share the connection pool between Sidekiq and redis-objects, or should I use another connection pool with a size equal to …?
@longkt90 Thanks for the feedback! That's true. I should probably modify the calculation so that the minimum is at least 2.
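A sketch of that change, assuming the concurrency comes out of a method like the one below (calculated_concurrency is a stand-in for the existing arithmetic, not the original code):

```ruby
# Clamp the result so that dividing across worker dynos
# never yields a concurrency below 2.
def concurrency
  [calculated_concurrency, 2].max
end
```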
@longkt90 I haven't used redis-objects myself, but I would think this might be better?
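A sketch of the idea being suggested: give redis-objects its own ConnectionPool, sized to the web process's thread count, instead of sharing Sidekiq's pool (file name, pool size, and timeout are assumptions):

```ruby
# config/initializers/redis_objects.rb (sketch)
require "connection_pool"
require "redis-objects"

# One pool per web process, sized to the number of Puma threads,
# so each thread can check out its own connection for redis-objects.
Redis::Objects.redis = ConnectionPool.new(
  size:    Integer(ENV.fetch("WEB_MAX_THREADS", 5)),
  timeout: 5
) { Redis.new(url: ENV["REDIS_URL"]) }
```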
Yeah. That's what we are using. I don't think it's a good idea to share the pool.
Btw, I need to set our db_pool to be the Redis server concurrency plus …, right?
Yes, that's right. I just set it as the max number of connections that my DB allows.
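For reference, a sketch of driving the Active Record pool from ENV so it stays at or above the per-process Sidekiq concurrency (DB_POOL is an assumed variable name; other database.yml keys are omitted):

```yaml
# config/database.yml (sketch; adapter, database, etc. omitted)
production:
  pool: <%= ENV.fetch("DB_POOL", ENV.fetch("SIDEKIQ_CONCURRENCY", 5)).to_i %>
```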
👍 Thank you
It seems to me that this code:

And this code seems to have an issue:

It should just be:

However, after changing it, you probably didn't get any errors, because you had simply reserved a bigger pool size for the client than you really needed.
@nitzanav The code was transcribed following the explanations detailed in http://bryanrite.com/heroku-puma-redis-sidekiq-and-connection-limits/. Sidekiq config is often a mystery (to me), and it's sometimes difficult to know what's exactly "right". The config above has at least worked in all the apps I've deployed so far. But do let me know your mileage with the updated config; I'm sure it will be a good data point too. Thanks!
@winston The blog does show this formula, but as written it is meant to let you infer the maximum number of connections to expect and configure on the Redis server side, rather than on the Ruby application side. The Ruby application side is a bit different and, AFAIK, should be configured as I described. What you did will work, but it is not optimal :)
@winston At the time of auto-scaling, how will we update NUMBER_OF_WEB_DYNOS?
@winston I realize this is a couple of years old now, but I'm only now learning about and using Puma and concurrency, so this may still be relevant to anyone else. To help clarify what @nitzanav is saying: I think you're misinterpreting what the blog post says with regard to setting the Redis client size. In fact, in the blog Bryan Rite has the following code for setting the size:
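The blog's snippet is not quoted verbatim here; in spirit it sizes each pool per process, roughly like this (a sketch, with illustrative sizes and ENV names):

```ruby
# A sketch in the spirit of the blog post: pool sizes are per process,
# so no dyno counts appear anywhere.
Sidekiq.configure_client do |config|
  # Web (Puma) processes only enqueue jobs; a small pool per process is enough.
  config.redis = { url: ENV["REDIS_URL"], size: 2 }
end

Sidekiq.configure_server do |config|
  # Each Sidekiq process needs roughly one connection per worker thread,
  # plus a few for Sidekiq's own housekeeping.
  config.redis = {
    url:  ENV["REDIS_URL"],
    size: Integer(ENV.fetch("SIDEKIQ_CONCURRENCY", 25)) + 5
  }
end
```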
These Sidekiq settings configure the size per worker process, so you don't need to factor in the number of dynos or processes. Hope this helps!
What's the optimum config for Sidekiq on Heroku with Puma?
There are quite a number of answers on the Internet, but nothing definitive, and most of them come with vague numbers and suggestions or are outdated.
Basically, these are the questions that are often asked:

- What should go into the config/initializers/sidekiq.rb file?
- What should size be?
- What should concurrency be?

The best (and updated) answers I can find include @bryanrite's post: http://bryanrite.com/heroku-puma-redis-sidekiq-and-connection-limits/
With @bryanrite's post as a reference, this is our Sidekiq config:
config/initializers/sidekiq.rb
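A sketch of the shape this initializer takes, wiring the calculations described below into Sidekiq's server and client configuration (method names on SidekiqCalculations and the client pool size are illustrative, not the verbatim original):

```ruby
# config/initializers/sidekiq.rb (sketch)
require Rails.root.join("lib", "sidekiq_calculations")

sidekiq_calculations = SidekiqCalculations.new

Sidekiq.configure_server do |config|
  config.redis = { url: ENV["REDIS_URL"] }
  # Concurrency is derived from the Redis connection budget
  # (see lib/sidekiq_calculations.rb below). On the Sidekiq versions
  # current at the time, it can be assigned through the options hash.
  config.options[:concurrency] = sidekiq_calculations.server_concurrency
end

Sidekiq.configure_client do |config|
  # The client pool lives inside each Puma (web) process,
  # so it is sized from the web thread count.
  config.redis = {
    url:  ENV["REDIS_URL"],
    size: Integer(ENV.fetch("WEB_MAX_THREADS", 5))
  }
end
```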
lib/sidekiq_calculations.rb
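A sketch of the calculation described in the paragraphs below: the inputs are the ENVs listed next, Sidekiq's 5 reserved connections are subtracted, and a paranoid_divisor keeps usage under roughly 80% of the Redis limit. Method names and the divisor value are illustrative, not the exact original:

```ruby
# lib/sidekiq_calculations.rb (sketch)
class SidekiqCalculations
  RESERVED_SIDEKIQ_CONNECTIONS = 5    # Sidekiq reserves 5 connections for itself
  PARANOID_DIVISOR             = 1.25 # keeps usage below ~80% of MAX_REDIS_CONNECTION

  # Connections consumed by all web dynos: each Puma thread in each
  # Puma worker on each web dyno may hold one Redis connection.
  def web_redis_connections
    env_int("NUMBER_OF_WEB_DYNOS") * env_int("WEB_CONCURRENCY") * env_int("WEB_MAX_THREADS")
  end

  # Concurrency for each Sidekiq process: share what is left of the Redis
  # connection limit across the worker dynos, keep some connections back
  # for Sidekiq itself, and stay under the paranoid threshold.
  def server_concurrency
    remaining = env_int("MAX_REDIS_CONNECTION") - web_redis_connections
    per_dyno  = remaining / env_int("NUMBER_OF_WORKER_DYNOS")
    ((per_dyno - RESERVED_SIDEKIQ_CONNECTIONS) / PARANOID_DIVISOR).floor
  end

  private

  def env_int(key)
    Integer(ENV.fetch(key))
  end
end
```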
The sidekiq_calculations.rb file depends on a number of ENV variables to work, so if you do scale your app (web or workers), do remember to update these ENVs:

- MAX_REDIS_CONNECTION
- NUMBER_OF_WEB_DYNOS
- NUMBER_OF_WORKER_DYNOS
At the same time, WEB_CONCURRENCY and WEB_MAX_THREADS should be the identical ENV variables used to set the number of Puma workers and threads in config/initializers/puma.rb. Our puma.rb looks exactly like what Heroku has proposed.

The only difference from @bryanrite's calculation is that Sidekiq now reserves 5 connections instead of 2 (according to this line), and I have also added a paranoid_divisor to bring down the concurrency number and keep it below an 80% threshold.

Let me know how this config works for you. Would love to hear your feedback!
Thank you for reading.
@winston, Jolly Good Code
About Jolly Good Code
We specialise in Agile practices and Ruby, and we love contributing to open source.
Speak to us about your next big idea, or check out our projects.