Scaling on Kubernetes #9290
Comments
@onpaws just remember that if you scale out Parity RPC nodes, you'll need to deal with nonce management on the client side. This is probably the main issue with scaling out Ethereum nodes. You can, however, request the nonce from Parity via RPC before sending the transaction, or deal with it internally within your application (e.g. an increment in a database, or 'sticky' connections to the backend). Hope this helps.
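As a rough illustration of the "request the nonce via RPC first" approach mentioned above, here is a minimal TypeScript sketch using the standard `eth_getTransactionCount` JSON-RPC method with the `"pending"` block parameter. The RPC URL and address are placeholders, and this is only a sketch of one option, not the thread's recommended implementation.

```typescript
// Minimal JSON-RPC helper; assumes a runtime with global fetch (e.g. Node 18+).
async function rpcCall(url: string, method: string, params: unknown[]): Promise<any> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const json = await res.json();
  if (json.error) throw new Error(json.error.message);
  return json.result;
}

// Ask the node for the sender's next nonce before building a transaction.
// Using "pending" includes transactions already queued on that node, which
// reduces (but does not eliminate) nonce collisions across replicas.
async function nextNonce(rpcUrl: string, address: string): Promise<number> {
  const hexNonce = await rpcCall(rpcUrl, "eth_getTransactionCount", [address, "pending"]);
  return parseInt(hexNonce, 16);
}
```

For truly concurrent senders, pairing this with the database-increment or sticky-connection approach mentioned above is still advisable, since two replicas can read the same pending nonce at the same time.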
Thanks @ddorgan, that's exactly the kind of insight I'm curious about!
Anything else, or ok to close?
If I were interested in arranging Horizontal Pod Autoscaling, might there be an existing "health" or "back pressure" endpoint of some kind that Parity already maintains which you'd consider worth investigating?
Thanks @ddorgan, appreciate the follow-up!
Hey, @onpaws. Could you please share your final Parity deployment manifest with autoscaling enabled? How do you deal with scaling, given that an Ethereum client needs to be synchronized before it is operational, which probably takes a very long time? I'd appreciate any answer.
This is a question about scaling a multi-instance Parity deployment with the goal of handling many simultaneous JSON-RPC calls, a topic some previous issues have covered and addressed.
Scenario
My app:
As part of load testing/capacity planning, I'd like to see what it would take to support e.g. Kubernetes horizontal autoscaling. Might there be an existing Parity endpoint I could consider using as part of a Kubernetes readinessProbe?
Thanks to @ddorgan for sharing a manifest the other day - an excellent starting point. Curious what other considerations I should be thinking of?
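The thread does not establish that Parity exposes a dedicated readiness endpoint, so one hypothetical approach is a small script backing a Kubernetes exec probe that calls the standard `eth_syncing` JSON-RPC method and reports ready only once the node is caught up. This is a sketch under that assumption, not Parity's own health check; `PARITY_RPC_URL` is a placeholder environment variable.

```typescript
// Hypothetical readiness check: exit 0 only when the node reports it has
// finished syncing. Intended to back a Kubernetes exec readinessProbe.
const RPC_URL = process.env.PARITY_RPC_URL ?? "http://127.0.0.1:8545";

async function isSynced(): Promise<boolean> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_syncing", params: [] }),
  });
  const { result } = await res.json();
  // eth_syncing returns false once the node is caught up, otherwise an
  // object describing sync progress (currentBlock, highestBlock, ...).
  return result === false;
}

isSynced()
  .then((ok) => process.exit(ok ? 0 : 1))
  .catch(() => process.exit(1)); // treat an unreachable RPC as not ready
```

Note that a freshly scaled-out pod will fail such a probe until its chain data is synced, which is why the comment above about sync time matters for autoscaling: pre-synced volumes or snapshots are usually needed for new replicas to become ready quickly.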