run multiple server threads #10701
base: master
Conversation
🚀 Updated review app: https://pr-10701-badges-shields.fly.dev
serverModule = await import('./server.js')
server = await startServer()
The core problem I ran into here was that cluster.fork() spawns a new worker process that executes the same Node.js script as the primary process. That didn't have the desired effect under test, because under test the primary process is the test runner, not server.js.
I worked round this by:
a) exporting a function we can test from server.js
b) only starting in multi-threaded mode if we are not under test
This means the actual multi-threaded server startup is not under test, but I couldn't find any way around this. I think it is a tradeoff we would have to accept if we go down this road.
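For illustration, here is a minimal sketch of that shape (the NODE_CONFIG_ENV check and startServer mirror the diff above; the rest is an assumption, not necessarily this PR's exact code):

```js
// entrypoint.js (sketch under the assumptions above)
import cluster from 'node:cluster'
import { availableParallelism } from 'node:os'
// server.js exports startServer() so tests can import and call it directly.
import { startServer } from './server.js'

if (process.env.NODE_CONFIG_ENV === 'test' || cluster.isWorker) {
  // Under test (or inside a forked worker), just run the server in-process.
  await startServer()
} else {
  // Multi-threaded mode: fork one worker per vCPU. This branch is skipped
  // under test, where the primary process would be the test runner.
  for (let i = 0; i < availableParallelism(); i++) {
    cluster.fork()
  }
}
```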
}
if (process.argv[3]) {
config.public.bind.address = process.argv[3]
if (process.env.NODE_CONFIG_ENV !== 'test') {
Switching from a dynamic await import to a module-level import also meant that this was picking up the test runner args here:
run: npm run test:entrypoint -- --reporter json --reporter-option 'output=reports/entrypoint.json'
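To make that concrete: when the test runner imports server.js at module level, process.argv holds mocha's arguments rather than the server's. A hedged sketch of the failure mode and the guard (assuming argv[2] is the port, by analogy with argv[3] and the address in the diff; the config defaults are illustrative):

```js
// Under `npm run test:entrypoint -- --reporter json ...`, process.argv is
// roughly ['/usr/bin/node', '.../mocha', '--reporter', 'json', ...], so
// positional parsing would read '--reporter' as a bind option. The env
// guard avoids that.
const config = { public: { bind: { port: 8080, address: '0.0.0.0' } } }

if (process.env.NODE_CONFIG_ENV !== 'test') {
  if (process.argv[2]) {
    config.public.bind.port = process.argv[2] // assumed: argv[2] is the port
  }
  if (process.argv[3]) {
    config.public.bind.address = process.argv[3]
  }
}
```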
This PR may or may not be a good idea, but I'm throwing what I've done out here - partly for feedback, and partly because it might suddenly become a much better idea on Monday. I'll start off by explaining why I did this:
Firstly
Fly are bringing in throttling on Monday for applications using shared CPUs: https://community.fly.io/t/predictable-processor-performance/22259
We are consistently running over the limit that is going to be imposed
I don't really have a clear view on how bad the performance impact of this will be. We'll find out on Monday. It may or may not be a big deal.
Secondly
If we need to obtain more CPU resources on fly, the cheapest way to do that is to switch from a shared-cpu-1x instance type (which gives you one shared vCPU) to a shared-cpu-2x (which gives you two shared vCPUs).
A shared-cpu-1x with 512MB RAM is $3.19/month. These are what we currently run in production.
A shared-cpu-2x with 512MB RAM is $3.89/month.
If the only thing we need is access to more CPU time (and not more memory), then that is way cheaper than running another instance to gain another CPU core.
Thirdly
Shields is a node app, and currently it runs single-threaded.
This is great if you want to scale out horizontally over lots of small machines.
However, if we wanted to run on a shared-cpu-2x, our current architecture doesn't allow us to take advantage of a second core/vCPU on the same server.
So... tying all those threads together: depending on how big the impact of being throttled is, it may become worthwhile for us to run on a shared-cpu-2x instance type and take advantage of the second vCPU. This PR is a first bash at doing that.
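For reference, the underlying pattern here is Node's cluster module: workers forked by the primary can all listen on the same port, and the primary distributes incoming connections across them (round-robin by default on platforms other than Windows), which is what lets a second vCPU do useful work. A minimal standalone sketch, with a made-up port and handler:

```js
import cluster from 'node:cluster'
import http from 'node:http'
import { availableParallelism } from 'node:os'

if (cluster.isPrimary) {
  // One worker per vCPU; on a shared-cpu-2x this forks two workers.
  for (let i = 0; i < availableParallelism(); i++) {
    cluster.fork()
  }
} else {
  // Each worker listens on the same port; the primary hands
  // connections out across the workers.
  http
    .createServer((req, res) => {
      res.end(`handled by pid ${process.pid}\n`)
    })
    .listen(8080)
}
```

Hitting that repeatedly with curl http://localhost:8080 should show responses from more than one pid.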
Some notes on this, having actually had a go at it:
- We'd probably want to replace `numCPUs` with a setting so you can run single-threaded in dev, or maybe make that the default (a sketch of one possible shape follows after this list). The local ergonomics of this would be a useful thing to get some feedback on.
- Even if `numCPUs` is only one, use of cluster increases our memory overhead. I tried spinning up a shared-cpu-1x with 256MB RAM (which is our standard review app config) and the app immediately crashed and ran out of memory, so we would need 512 for a review app with this. I don't really know what this would look like serving production traffic. I think we'd have to try it and see.
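On the first note, a sketch of what that setting could look like (SERVER_WORKERS is a hypothetical name, not an existing Shields setting):

```js
import { availableParallelism } from 'node:os'

// Default to 1 so local dev and review apps stay single-threaded;
// production could opt in with SERVER_WORKERS=auto to get one worker
// per vCPU.
const numWorkers =
  process.env.SERVER_WORKERS === 'auto'
    ? availableParallelism()
    : Number(process.env.SERVER_WORKERS ?? 1)
```

The fork loop would then only run when numWorkers > 1, which also keeps the extra cluster memory overhead out of dev.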