
Support for StatefulSets #22

Closed
JoelSpeed opened this issue Nov 12, 2018 · 9 comments

Comments

@JoelSpeed
Collaborator

Add support for updating StatefulSets as well as Deployments

@edwardstudy
Contributor

@JoelSpeed Hi, Joel. Do we have any plan or design for a more general way to watch the configurations of StatefulSets/Jobs/DaemonSets, rather than implementing one controller per resource type?

@JoelSpeed
Collaborator Author

JoelSpeed commented Feb 19, 2019

I have a plan but it is very much in my head right now 😅

Basically what we need to do is define an interface with two methods (embedding runtime.Object and metav1.Object so implementations still behave as standard Kubernetes objects):

type PodController interface {
  runtime.Object
  metav1.Object
  GetPodSpec() *corev1.PodSpec
  SetPodSpec(*corev1.PodSpec)
}

We then create the types Deployment, StatefulSet and DaemonSet, which internally hold their respective Kubernetes runtime.Object and implement the interface above, e.g.:

type Deployment struct {
  *appsv1.Deployment
}

func (d *Deployment) GetPodSpec() *corev1.PodSpec {
  return &d.Deployment.Spec.Template.Spec
}

func (d *Deployment) SetPodSpec(spec *corev1.PodSpec) {
  d.Deployment.Spec.Template.Spec = *spec
}

We should then be able to replace the appsv1.Deployment references in the core package with the PodController interface and re-use most of that code.

We would then rename HandleDeployment to HandlePodController in pkg/core/handler.go and create the respective controllers in pkg/controllers that call the HandlePodController method as the Deployment controller does now.

@edwardstudy
Contributor

@JoelSpeed Thanks.

I was thinking about watching all ConfigMaps and Secrets, checking whether they are referenced by kube objects, and then adding an owner reference, which is what we currently do.

Is it OK to watch these kube resources? I'm concerned that watching this much might be too expensive.

@JoelSpeed
Collaborator Author

Is it OK to watch these kube resources? I'm concerned that watching this much might be too expensive.

We already watch all ConfigMaps, Secrets and Deployments; adding more types will increase the memory required to run, and of course this scales with the size of your cluster. There are plans to make Wave work in a namespaced manner (#37), which would help combat this. We could also add an option to enable only a subset of the controllers, if people only wanted to watch Deployments and DaemonSets for instance, or wanted to run a different controller for each type. There are many ways we can split this up.

@edwardstudy
Contributor

That sounds great.

@rmb938

rmb938 commented Mar 6, 2019

@JoelSpeed Do you have any ETA on starting the implementation of this or would you like any help? I run a lot of statefulsets on my clusters so having them supported would be awesome.

@JoelSpeed
Collaborator Author

Unfortunately not; this isn't a majorly important feature internally, so it likely won't be done within the next couple of months.

I described my vision for the implementation above. If you wanted to have a go at starting to implement it (writing the interface and converting the existing Deployment code first, then following up with the other PodController types), do feel free!

@JoelSpeed
Collaborator Author

The ground-work for this has now been done, thanks to @SteveKMin for that 😄

@JoelSpeed
Collaborator Author

Thanks to @SteveKMin and #44 this is now in and will be released in the next release 🎉
