
Deployment Delete #62

Closed · therynamo opened this issue Jan 25, 2017 · 5 comments

Comments

@therynamo

After looking at the k8s docs on deployment delete, it seems like passing orphanDependents: false should delete "everything" associated with the deployment.

Currently, I am trying to use the kubernetes-client API to accomplish full deletes of everything associated with a deployment. For example, when using the following:

const deployment = {
  "apiVersion": "extensions/v1beta1",
  "kind": "Deployment",
  "metadata": {
    "name": "<insert_deployment_name_here>"
  },
  "spec": {
    "replicas": 2
    // ... etc.
  }
};

// Assume k8s is an instance of `kubernetes-client` v3.1
k8s.group(deployment).ns.deployments.delete({ body: deployment, orphanDependents: false });
// or instead:
k8s.group(deployment).ns.deployments.delete({ body: deployment, preservePods: false });

I would expect the above to delete an already existing deployment with the given name, along with all of its pods and replicasets. However, the observed behavior is that the deployment does get deleted, but the pods and replicasets are orphaned (i.e. not deleted). If you run kubectl delete deployment <deployment_name>, by default, all resources associated with the deployment are deleted.

Perhaps you could let me know if I'm using the API incorrectly, or if kubernetes-client does not support this.

What I'm trying to avoid is needing 3 callbacks, each deleting one of the above resources, just to recreate a deployment with the same name (rough sketch below).
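
For reference, this is roughly what that manual cleanup looks like today. It is only a sketch: it assumes the pods and replicasets carry an app=<deployment_name> label, that the replicasets/po collections accept a labelSelector on delete, and that there is a separate core-API (v1) client (called core here) for the pods.

// Sketch of the 3-callback cleanup; the label, `core`, and collection deletes are assumptions
const selector = { qs: { labelSelector: `app=${deployment.metadata.name}` } };

k8s.group(deployment).ns.deployments.delete({ name: deployment.metadata.name }, (err) => {
  if (err) return console.error(err);
  k8s.group(deployment).ns.replicasets.delete(selector, (err) => {
    if (err) return console.error(err);
    core.ns.po.delete(selector, (err) => {
      if (err) return console.error(err);
      console.log('deployment, replicasets, and pods deleted');
    });
  });
});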

Another nicety would be being able to do something like kubectl apply -f programmatically, along the lines of the sketch below. So instead of seeing Error: deployment "insert_deployment_name_here" already exists, it would just overwrite with the new config. However, I may have yet again misread the docs.
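
Roughly what I have in mind (sketch only; the post/put method names and the err.code check are my assumptions about the client, not confirmed API):

// Hypothetical create-or-replace helper; method names and error shape are assumptions
function applyDeployment(k8s, deployment, cb) {
  const deployments = k8s.group(deployment).ns.deployments;
  deployments.post({ body: deployment }, (err) => {
    if (!err) return cb(null);              // created fresh
    if (err.code !== 409) return cb(err);   // some other failure
    // already exists -> replace it with the new config
    deployments.put({ name: deployment.metadata.name, body: deployment }, cb);
  });
}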

Any help is appreciated! Thanks for maintaining this project, it's helped a lot and has allowed me to not have to write my own library to do exactly this. 👍 x 💯

@silasbw
Contributor

silasbw commented Jan 26, 2017

Does this work for you?

k8s.group(deployment).ns.deployments.delete({
  name: deployment.metadata.name,
  qs: { orphanDependents: false }
});

Since this is a recent feature and might be alpha on the version of Kubernetes you're running, ensure you're running with it explicitly enabled. In the past kubectl implemented this cleanup client side (e.g., set replicas to 0, then delete the resource), so I'd expect cleanup behavior if you're using kubectl, regardless of your kubernetes version.
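
If the query string doesn't take effect, another thing to try is passing DeleteOptions in the request body (sketch; this assumes delete forwards body the same way your original call did):

k8s.group(deployment).ns.deployments.delete({
  name: deployment.metadata.name,
  body: {
    kind: 'DeleteOptions',
    apiVersion: 'v1',
    orphanDependents: false
  }
});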

Our README.md had a typo, which might have been misleading (#63).

I created an issue for your kubectl apply suggestion (#64). Thanks for the feedback and suggestion.

@therynamo
Author

Hey @silasbw, thanks for the quick reply.

I tried what you suggested above:

// in Node REPL
k8s.group(deployment).ns.deployments.delete({ name: deployment.metadata.name, qs: { orphanDependents: false }}, (err, res) => console.log(err || res))

> { kind: 'Status',
  apiVersion: 'v1',
  metadata: {},
  status: 'Success',
  code: 200 }

Resulting in:

# kc being an alias for kubectl

# Check deployments
kc get deployments

No resources found.

# Check Replicasets
kc get replicasets

NAME                    DESIRED   CURRENT   READY     AGE
replicaset-12345  2         2         2         14m

# Check Pods
kc get po

NAME                          READY     STATUS    RESTARTS   AGE
podname-12345-7q281   1/1       Running   0          13m
podname-12345-jq9cw   1/1       Running   0          13m

# Check Minikube Version
minikube version

minikube version: v0.14.0

Minikube v0.14.0 Changelog: TL;DR -> K8s version 1.5.1. So Deployments should be available.


In the past kubectl implemented this cleanup client side (e.g., set replicas to 0, then delete the resource), so I'd expect cleanup behavior if you're using kubectl, regardless of your kubernetes version.

Ah, that makes more sense. I was wondering if there was some kubectl magic that wasn't happening via the API. If that's the case, then the 3 callbacks will have to suffice for now.

Thank you for updating the docs!

I created an issue for your kubectl apply suggestion (#64). Thanks for the feedback and suggestion.

Sweet, thank you very much. If I get some time, perhaps I could find a way to put something together.

Thank you again for your quick feedback!

@silasbw
Contributor

silasbw commented Jan 26, 2017

I'd expect Deployments to be available, but I'm not sure about the garbage collection behavior implied by orphanDependents -- that is a recent addition and might still be in alpha for the version of k8s you're running. If that's the case, you need to explicitly enable that alpha feature before getting the garbage collection you're looking for.

@silasbw
Contributor

silasbw commented Jan 27, 2017

I'm going to close -- re-open if you suspect a bug with kubernetes-client, or if you have a feature request.

@silasbw silasbw closed this as completed Jan 27, 2017
@therynamo
Author

therynamo commented Jan 27, 2017

I'm not sure about the garbage collection behavior implied by orphanDependents

Ah, ok, I see what you mean now. I will check on that. Thanks for the suggestion.

I'm going to close

👍 sounds good. Thanks for your feedback.
