Kubernetes hosting integration #6707
Conversation
Force-pushed from cfbb5f2 to fd3b6b6
Force-pushed from fd3b6b6 to 2b0ed3d
Force-pushed from 2b0ed3d to cba5723
Love this! :D
TODO: annotations instead of labels, include sample
public SiloAddress SiloAddress { get; }
public SiloStatus Status { get; }
public string Name { get; }
This will be a breaking change for 3.3, no?
Shouldn't be, since this isn't sent over the wire
Gotcha, you get the name from MembershipEntry
* Add IClusterMembershipService.TryKill method for unilaterally declaring a silo dead
* Add Orleans.Hosting.Kubernetes extension
* Use PostConfigure for configuring EndpointOptions
* Extras
* Scope pod labels
* Mark Orleans.Hosting.Kubernetes package as beta

(cherry picked from commit 147b214)
Co-authored-by: Benjamin Petit <[email protected]>
Co-authored-by: Reuben Bond <[email protected]>
Might also want to mention that a role binding is required for this on RBAC-enabled clusters, e.g.:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: default   # the namespace the silo pods run in; required for ServiceAccount subjects
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io   # roleRef requires this group when referencing a Role
That works for me in this case!
This PR adds experimental support for integrating more deeply with Kubernetes. Users make a call to siloBuilder.UseKubernetesHosting() to enable the following:

* ClusterOptions.ServiceId and ClusterOptions.ClusterId are set based on labels set on the pod in Kubernetes
* SiloOptions.SiloName is set based on the pod name
* EndpointOptions.AdvertisedIPAddress, EndpointOptions.SiloListeningEndpoint, and EndpointOptions.GatewayListeningEndpoint are set based upon the pod's PodIP and the configured SiloPort and GatewayPort (default values 11111 and 30000; see the port-override sketch after this list)

In a future update, we could consider optionally instructing Kubernetes to delete pods which correspond to dead silos (e.g., because the pod is not responding, perhaps because the process has become a zombie).
To reduce load on Kubernetes' API server (which is apparently a big issue), only a subset of silos will monitor Kubernetes. The default value is 2 silos per cluster, and to reduce churn, the two oldest silos in the cluster are chosen.
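If that number needs tuning, the extension presumably exposes it through an options class. A minimal sketch, assuming an options type named KubernetesHostingOptions with a MaxAgents property; both names are assumptions based on the description above, not confirmed by this excerpt:

// Assumed option name: the number of (oldest) silos that watch the
// Kubernetes API server; 2 per cluster is the stated default.
siloBuilder.Configure<KubernetesHostingOptions>(options =>
{
    options.MaxAgents = 2;
});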
There are some requirements on how Orleans is deployed into Kubernetes when using this plugin. For example:

* Each pod must have a serviceId and a clusterId label which corresponds to the silo's ServiceId and ClusterId. The abovementioned method will propagate those labels into the corresponding options in Orleans from env vars.

Here is an example YAML file to deploy such a silo:
A minimal, do-nothing program then looks like this:
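The listing did not survive extraction either; here is a minimal sketch using the .NET generic host. UseOrleans and RunConsoleAsync are standard hosting APIs; the clustering provider named in the comment is an example, not part of this PR:

using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Orleans.Hosting;

internal static class Program
{
    // A minimal, do-nothing silo host. UseKubernetesHosting picks up the
    // service id, cluster id, silo name, and endpoints from the pod's
    // labels and environment, as described above.
    private static Task Main() =>
        new HostBuilder()
            .UseOrleans(siloBuilder =>
            {
                siloBuilder.UseKubernetesHosting();

                // A membership provider is still required; for example
                // (assumption, not shown in this PR):
                // siloBuilder.UseAzureStorageClustering(
                //     options => options.ConnectionString = "...");
            })
            .RunConsoleAsync();
}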