[v2] skaffold delete followed by skaffold run does not work. #7054

Closed
tejal29 opened this issue Jan 25, 2022 · 13 comments

Labels: 2.0.0 (Issues related to the 2.0.0 release) · kind/bug (Something isn't working) · priority/p0 (Highest priority. We are actively looking at delivering it.)
Milestone: 2.0.0-alpha

tejal29 commented Jan 25, 2022

Skaffold v2:

skaffold delete does not work after skaffold run on the v2 branch.

Steps to reproduce:

  1. Run skaffold run on the v2 branch:
DEBU[0001] v2 runner invoked                             subtask=-1 task=DevLoop
INFO[0001] manifests hydration will take place in /Users/tejaldesai/workspace/skaffold/integration/examples/getting-started/.kpt-pipeline  subtask=-1 task=DevLoop
  2. Run ../../../out/skaffold delete:
Cleaning up...
WARN[0001] /Users/tejaldesai/workspace/skaffold/integration/examples/getting-started/.kpt-pipeline/* did not match any file  subtask=-1 task=DevLoop
 - No resources found
  3. Note that there are no files in the kpt hydration directory:
 ls -al .kpt-pipeline
total 0
drwxr-xr-x  2 tejaldesai  primarygroup   64 Jan 25 17:38 .
drwxr-xr-x  8 tejaldesai  primarygroup  256 Jan 25 17:40 ..

Yet the pod is still deployed:

kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
getting-started               1/1     Running   0          12m
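
Judging from the warning above, the v2 delete appears to target the hydrated manifests rather than the live resources; a rough sketch of the equivalent kubectl flow (the directory and glob come from the log, the flow itself is an inference):

# v2 delete reads hydrated manifests from the hydration dir:
kubectl delete -f .kpt-pipeline/        # directory is empty, so nothing to delete
# the pod was created during run, so it survives and needs manual cleanup:
kubectl delete pod getting-started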
tejal29 added the 2.0.0 label Jan 25, 2022
tejal29 added this to the 2.0.0-alpha milestone Jan 25, 2022
tejal29 added the kind/bug and priority/p0 labels Jan 25, 2022
aaron-prindle self-assigned this Apr 11, 2022

tejal29 commented Apr 12, 2022

The TestDeployTailDefaultNamespace integration test is failing on the branch for the same reason.

/integration/TestDeployTailDefaultNamespace
    deploy_test.go:152: Running [skaffold deploy --default-repo gcr.io/k8s-skaffold --tail --images busybox:latest --default-repo=] in testdata/deploy-hello-tail
time="2022-04-12T03:17:29Z" level=warning msg="/home/runner/work/skaffold/skaffold/integration/testdata/deploy-hello-tail/.kpt-pipeline/* did not match any file" subtask=-1 task=DevLoop
    util.go:423: timeout
    panic.go:642: Running [skaffold delete --default-repo gcr.io/k8s-skaffold] in testdata/deploy-hello-tail
    helper.go:198: Skaffold log:
         
    helper.go:240: Ran [skaffold delete --default-repo gcr.io/k8s-skaffold] in 354.803µs
--- FAIL: TestDeployTailDefaultNamespace (90.01s)
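
For reference, this test can be run on its own with the usual go test filter (a sketch; the integration suite may additionally require env vars or a running cluster not shown here):

go test -v -timeout 300s ./integration -run TestDeployTailDefaultNamespace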

aaron-prindle commented Apr 12, 2022

Not sure if something was changed recently, but I am currently unable to repro this using integration/examples/getting-started (as done in the initial issue text). Below, a skaffold run followed by a skaffold delete works as expected:

aprindle@aprindle ~/skaffold/integration/examples/getting-started  [v2-v1.37.0]$ skaff run
Generating tags...
 - skaffold-example -> skaffold-example:v1.37.0-110-g3c097ec44
Checking cache...
 - skaffold-example: Found Locally
Starting test...
Tags used in deployment:
 - skaffold-example -> skaffold-example:716aa26dfba5604cd7d6bc4dbcbcb2821dbf520ed51401b392ed075df490a81d
Starting deploy...
WARN[0002] `deploy.kubectl.manifests` (DEPRECATED) are given, skaffold will skip the `manifests` field. If you expect skaffold to render the resources from the `manifests`, please delete the `deploy.kubectl.manifests` field.  subtask=0 task=Deploy
 - pod/getting-started created
Waiting for deployments to stabilize...
 - pods is ready.
Deployments stabilized in 3.107 seconds
You can also run [skaffold run --tail] to get the logs

aprindle@aprindle ~/skaffold/integration/examples/getting-started  [v2-v1.37.0]$ kubectl get po
NAME              READY   STATUS    RESTARTS   AGE
getting-started   1/1     Running   0          12s
aprindle@aprindle ~/skaffold/integration/examples/getting-started  [v2-v1.37.0]$ skaff delete
Cleaning up...
WARN[0000] `deploy.kubectl.manifests` (DEPRECATED) are given, skaffold will skip the `manifests` field. If you expect skaffold to render the resources from the `manifests`, please delete the `deploy.kubectl.manifests` field.  subtask=-1 task=DevLoop
 - pod "getting-started" deleted
aprindle@aprindle ~/skaffold/integration/examples/getting-started  [v2-v1.37.0]$ kubectl get po
No resources found in default namespace.

aaron-prindle commented Apr 12, 2022

@tejal29 do you have a link to the logs for the failing integration test run? Thanks

aaron-prindle commented Apr 12, 2022

I am able to repro using the command from the test, but a vanilla skaffold run + skaffold delete seems to work. We should likely update the bug title; I will dig in more to see what is happening.

aaron-prindle commented Apr 12, 2022

This bug is related to the fact that some yaml default must have changed from v1 -> v2. I do not think the current yaml in the test is a correct skaffold.yaml, so I am not sure what the intended behavior is. Currently the test yaml is:

apiVersion: skaffold/v3alpha2
kind: Config

Changing that yaml to the below resolves the error seen:

apiVersion: skaffold/v3alpha2
kind: Config
deploy:
  kubectl:
    manifests:
      - k8s/pod.yaml
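
A quick way to check that the field is actually picked up is skaffold diagnose, which prints the effective config (the grep is just a convenience; the exact output shape may vary by build):

skaffold diagnose | grep -A 3 manifests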

aaron-prindle commented Apr 12, 2022

The current test yaml is from a very old skaffold version:

aprindle@aprindle ~/skaffold/integration/testdata/deploy-hello-tail  [main]$ cat skaffold.yaml 
apiVersion: skaffold/v1beta16
kind: Config

Is it intended that this should work w/o specifying the manifest(s)?

tejal29 commented Apr 12, 2022

The old-style deploy manifests should be removed; I think I have an issue open to delete them. We now have a kubectl renderer.

deploy:
  kubectl:
    manifests:

The expectation is that the manifest below should work:

apiVersion: skaffold/v3alpha2
kind: Config
manifests:
  rawYaml:
    - k8s/pod.yaml
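
For context, a minimal sketch of the kind of k8s/pod.yaml this would point at; the pod name and command here are assumptions based on the busybox-hello logs later in this thread, not the actual test manifest:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-hello          # assumed name, matching the logs below
spec:
  containers:
    - name: busybox
      image: busybox:latest
      # loop forever printing the line seen in the test's log output
      command: ["sh", "-c", "while true; do echo Hello world!; sleep 2; done"]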

aaron-prindle commented Apr 12, 2022

Ah yes, I see. It is currently broken for the new-style manifests field. I will look into what is happening.

aaron-prindle commented Apr 12, 2022

Additionally, it is not necessarily the delete that is broken; the deploy never actually occurs, which I think is the root issue.

aaron-prindle commented

Hmm, OK, so after switching to a GKE cluster everything appears to work, and the repro steps no longer fail. My logs were identical when using minikube, so I think this might be related to using minikube or another local k8s environment.
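
For anyone else trying to reproduce, it is worth confirming which cluster you are actually pointed at first (standard kubectl commands):

kubectl config current-context   # e.g. minikube vs. a GKE context
kubectl config get-contexts      # lists all configured clusters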

aaron-prindle commented

The issue I saw is that skaffold run and skaffold deploy do not actually do any rendering. As a result, the .kpt-pipeline directory is empty after running either command, and the commands do not behave the way they did in v1.
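
One way to see the missing render step is to hydrate explicitly and compare; a sketch, assuming skaffold render and its --output flag behave as they did in v1:

skaffold render --output=.kpt-pipeline/manifests.yaml   # hydrate manifests explicitly
ls -al .kpt-pipeline                                    # should no longer be empty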

I have filed the following issues from these observations, which are the root cause of this issue:

aaron-prindle added the triage/discuss (Items for discussion) label Apr 18, 2022
aaron-prindle commented

TestDeployTailDefaultNamespace is failing because logs are not correctly found (even with the render phase added):

aprindle@aprindle ~/skaffold/integration/testdata/deploy-hello-tail  [v2-fix-7054]$ skaffold deploy --tail
Starting render...
Tags used in deployment:
Starting deploy...
 - pod/busybox-hello created
Waiting for deployments to stabilize...
Deployments stabilized in 414.974841ms
Press Ctrl+C to exit

^^ hangs forever with no logs, even though logs are coming from the Pod:

aprindle@aprindle ~/verify-test $ kubectl logs busybox-hello
Hello world!
Hello world!
Hello world!
Hello world!
Hello world!
Hello world!
Hello world!
Hello world!
Hello world!
Hello world!
Hello world!
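
As a stopgap while skaffold's log tailing is broken, the pod can be followed directly (standard kubectl flag):

kubectl logs -f busybox-hello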

aaron-prindle commented

This error was not related to deploy + delete but to a combination of the issues filed above.

Closing, as this issue is actually encapsulated in those issues.

aaron-prindle removed the triage/discuss (Items for discussion) label May 2, 2022