This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

Scaling on Kubernetes #9290

Closed
onpaws opened this issue Aug 6, 2018 · 8 comments

onpaws commented Aug 6, 2018

I'm running:

  • Which Parity version?: 2.1.0-rc1
  • Which operating system?: Linux
  • How installed?: via Docker
  • Are you fully synchronized?: yes
  • Which network are you connected to?: kovan
  • Did you try to restart the node?: n/a

This is a question about scaling a multi-instance Parity deployment with the goal of handling many simultaneous JSON-RPC calls, a topic some previous issues have covered and fixed.

Scenario
My app:

  • Uses Parity's JSON-RPC
  • Has no meaningful local state - all state is 100% on the blockchain
  • Serves many simultaneous users on whose behalf it makes many long-running RPCs (however many are needed to cause back pressure on a single Parity instance)

As part of load testing/capacity planning, I'd like to see what it would take to support e.g. Kubernetes horizontal autoscaling. Might there be an existing Parity endpoint I could consider using as part of a Kubernetes readinessProbe?

Thanks to @ddorgan for sharing a manifest the other day - an excellent starting point. Curious what other considerations I should be thinking of?

@Tbaut Tbaut added Z1-question 🙋‍♀️ Issue is a question. Closer should answer. M2-config 📂 Chain specifications and node configurations. labels Aug 6, 2018
@Tbaut Tbaut added this to the 2.1 milestone Aug 6, 2018

Tbaut commented Aug 6, 2018

pinging @fevo1971 and @gabreal also for this one :)

@ddorgan ddorgan self-assigned this Aug 17, 2018

ddorgan commented Aug 17, 2018

@onpaws just remember that if you scale out Parity RPC nodes, you'll need to deal with nonce management on the client side. This is probably the main issue with scaling out Ethereum nodes.

You can, however, request the nonce from Parity via RPC before sending the transaction, or deal with it internally within your application (e.g. an incrementing counter in a database, or 'sticky' connections to the backend).
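
For illustration, here's a minimal sketch of the "request the nonce over RPC before sending" approach, assuming python-requests and a Parity JSON-RPC endpoint at http://localhost:8545 (the URL and account address are placeholders):

```python
import requests

RPC_URL = "http://localhost:8545"  # placeholder endpoint


def rpc(method, params):
    """Send a single JSON-RPC call and return its result."""
    resp = requests.post(
        RPC_URL,
        json={"jsonrpc": "2.0", "id": 1, "method": method, "params": params},
        timeout=5,
    )
    resp.raise_for_status()
    body = resp.json()
    if "error" in body:
        raise RuntimeError(body["error"])
    return body["result"]


def next_nonce(account):
    # "pending" counts transactions already queued on this node,
    # not just those already mined.
    return int(rpc("eth_getTransactionCount", [account, "pending"]), 16)


# Usage: fetch the nonce just before signing, then submit the signed
# transaction via eth_sendRawTransaction.
# nonce = next_nonce("0x...")
```

With several RPC nodes behind a load balancer, the nonce query and the subsequent send should go to the same node (sticky connections), or the counter should live in a single place such as your own database.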

Hope this helps.

onpaws commented Aug 17, 2018

Thanks @ddorgan that's exactly the kind of insight I'm curious about!

ddorgan commented Aug 17, 2018

Anything else or ok to close?

onpaws commented Aug 17, 2018

If I were interested in arranging Horizontal Pod Autoscaling, might there be an existing "health" or "back pressure" endpoint of some kind that Parity already maintains and that you'd consider worth investigating?

@5chdn 5chdn modified the milestones: 2.1, 2.2 Sep 11, 2018

ddorgan commented Oct 17, 2018

@onpaws there was hostname:8545/api/health, but that actually belonged to the wallet part of the code, which has since been removed.

However, there are now two RPC calls which may help:

  • eth_syncing for sync state
  • parity_netPeers for network state

See: #9119
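
For illustration, a minimal readiness-check sketch built on those two calls, assuming python-requests, a node at http://localhost:8545, and an arbitrary peer threshold (this is not an official Parity health endpoint):

```python
import sys

import requests

RPC_URL = "http://localhost:8545"  # placeholder endpoint
MIN_PEERS = 3  # illustrative threshold, not a Parity default


def rpc(method):
    """Send a parameterless JSON-RPC call and return its result."""
    body = requests.post(
        RPC_URL,
        json={"jsonrpc": "2.0", "id": 1, "method": method, "params": []},
        timeout=5,
    ).json()
    return body["result"]


def ready():
    # eth_syncing returns False once the node has caught up with the chain.
    if rpc("eth_syncing") is not False:
        return False
    # parity_netPeers reports peer counts; require a minimum before serving.
    peers = rpc("parity_netPeers")
    return int(peers.get("connected", 0)) >= MIN_PEERS


if __name__ == "__main__":
    try:
        sys.exit(0 if ready() else 1)
    except Exception:
        sys.exit(1)
```

Exit code 0 means ready and anything else means not ready, so a script like this could back an exec-style readinessProbe or a small sidecar exposing the result over HTTP.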

@ddorgan ddorgan closed this as completed Oct 17, 2018

onpaws commented Oct 17, 2018

Thanks @ddorgan appreciate the follow up!

@ArseniiPetrovich

Hey @onpaws, could you please share your final Parity deployment manifest with autoscaling enabled?

How do you deal with scaling, given that an Ethereum client needs to be synchronized before it is operational, which probably takes a long time?

Appreciate any answer.
Thanks, Arsenii.
