This repository has been archived by the owner on Sep 5, 2019. It is now read-only.

Following Readme does not lead to logs #232

Open
sebgoa opened this issue Jul 9, 2018 · 11 comments
Labels
kind/doc Documentation

Comments

@sebgoa
Contributor

sebgoa commented Jul 9, 2018

/area docs
/kind doc

I followed the README step by step and could not get the logs of the simple build. Apparently Kubernetes does not keep logs of init containers that succeed, so you need to artificially make the build fail (one failing step) to get the log.

sebair: ~ $ cat build.yaml 
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: hello-build
spec:
  steps:
  - name: hello
    image: busybox
    args: ['echo', 'hello', 'build']
  - name: fail
    image: busybox
    args: ['foobar']
sebair: ~ $ kubectl get builds
NAME          CREATED AT
hello-build   5m
sebair: ~ $ kubectl get builds hello-build -o yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"build.knative.dev/v1alpha1","kind":"Build","metadata":{"annotations":{},"name":"hello-build","namespace":"default"},"spec":{"steps":[{"args":["echo","hello","build"],"image":"busybox","name":"hello"},{"args":["foobar"],"image":"busybox","name":"fail"}]}}
  clusterName: ""
  creationTimestamp: 2018-07-09T14:07:19Z
  generation: 1
  name: hello-build
  namespace: default
  resourceVersion: "5700"
  selfLink: /apis/build.knative.dev/v1alpha1/namespaces/default/builds/hello-build
  uid: 64521424-8381-11e8-ad77-42010a800038
spec:
  generation: 1
  steps:
  - args:
    - echo
    - hello
    - build
    image: busybox
    name: hello
    resources: {}
  - args:
    - foobar
    image: busybox
    name: fail
    resources: {}
status:
  builder: Cluster
  cluster:
    namespace: default
    podName: hello-build-h6tg5
  completionTime: 2018-07-09T14:07:23Z
  conditions:
  - message: 'build step "build-step-fail" exited with code 127 (image: "docker-pullable://busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335");
      for logs run: kubectl -n default logs hello-build-h6tg5 -c build-step-fail'
    state: Succeeded
    status: "False"
  startTime: 2018-07-09T14:07:19Z
  stepStates:
  - terminated:
      containerID: docker://e575341bcf6ee547f8ed6999799342599f0961238c3e681b86531a58575c2127
      exitCode: 0
      finishedAt: 2018-07-09T14:07:20Z
      reason: Completed
      startedAt: 2018-07-09T14:07:20Z
  - terminated:
      containerID: docker://d097937398429a1e511815414845ea5276dfbdbe1c25e95b32ecbb6c28306f2e
      exitCode: 0
      finishedAt: 2018-07-09T14:07:22Z
      reason: Completed
      startedAt: 2018-07-09T14:07:22Z
  - terminated:
      containerID: docker://b68a30aac544ab6c2fecd0b0389c9d55626aeed1d5201d5d96478f537e610359
      exitCode: 127
      finishedAt: 2018-07-09T14:07:22Z
      message: |
        oci runtime error: container_linux.go:247: starting container process caused "exec: \"foobar\": executable file not found in $PATH"
      reason: ContainerCannotRun
      startedAt: 2018-07-09T14:07:22Z
$ kubectl -n default logs hello-build-h6tg5 -c build-step-hello
hello build

I can fix the README but this seems suboptimal. We should have a better demo in the README or make the logs available more easily.

@bobcatfish
Contributor

Related issue: #9

@tejal29

tejal29 commented Oct 10, 2018

I can see the logs for completed steps:

(demo) kubectl get builds hello-build -o yaml
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"build.knative.dev/v1alpha1","kind":"Build","metadata":{"annotations":{},"name":"hello-build","namespace":"default"},"spec":{"steps":[{"args":["echo","hello","build"],"image":"gcr.io/pipeline-crd-demo/busybox","name":"hello"}]}}
  clusterName: ""
  creationTimestamp: 2018-10-10T19:56:39Z
  generation: 0
  name: hello-build
  namespace: default
  resourceVersion: "3011271"
  selfLink: /apis/build.knative.dev/v1alpha1/namespaces/default/builds/hello-build
  uid: 9a1c41f6-ccc6-11e8-9370-42010a800102
spec:
  generation: 1
  steps:
  - args:
    - echo
    - hello
    - build
    image: gcr.io/pipeline-crd-demo/busybox
    name: hello
    resources: {}
status:
  builder: Cluster
  cluster:
    namespace: default
    podName: hello-build-htwrl
  completionTime: 2018-10-10T19:58:51Z
  conditions:
  - state: Succeeded
    status: "True"
  startTime: 2018-10-10T19:56:39Z
  stepStates:
  - terminated:
      containerID: docker://8e3c5183601cbd29f1fdc4370fbadb0a6a7984a30304917fb53ffba58b82b65e
      exitCode: 0
      finishedAt: 2018-10-10T19:58:48Z
      reason: Completed
      startedAt: 2018-10-10T19:58:48Z
  - terminated:
      containerID: docker://4e2dc1f7a2fe7e41c9e918758819963007c34ad59f01bd39d80055cba50e7a3d
      exitCode: 0
      finishedAt: 2018-10-10T19:58:49Z
      reason: Completed
      startedAt: 2018-10-10T19:58:49Z
(demo) kubectl logs $(kubectl get build hello-build --output jsonpath={.status.cluster.podName}) --container build-step-hello
hello build
(demo) tejaldesai@cloudshell:~ (pipeline-crd-demo)$ kubectl get pod $(kubectl get build hello-build --output jsonpath={.status.cluster.podName})
NAME                READY     STATUS      RESTARTS   AGE
hello-build-htwrl   0/1       Completed   0          1h
tejaldesai@cloudshell:~ (pipeline-crd-demo)$

I followed the instructions here to install the Build CRD:
https://github.com/knative/docs/blob/master/install/Knative-with-GKE.md

@tejal29

tejal29 commented Oct 10, 2018

I installed the Knative Serving and Knative Build components.
Then I uninstalled everything and installed just Knative Build, and now I see the error you are seeing.

@bobcatfish
Contributor

@tejal29 I think it might be a race condition: if the pod hasn't been completely destroyed, you might still be able to get the logs from the init container.
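
A quick way to check, reusing the pod name recorded in the build status (the same jsonpath lookup shown above):

# Look up the pod backing the build; if the pod still exists,
# the logs of its completed init containers should still be retrievable.
POD=$(kubectl get build hello-build --output jsonpath={.status.cluster.podName})
kubectl get pod "$POD" && kubectl logs "$POD" -c build-step-hello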

@tejal29

tejal29 commented Oct 10, 2018

Looking into release.yaml further, I found that the image SHAs for the build controller differ between the Knative Serving release.yaml and the Knative Build release.yaml:

  1. In Knative Serving, it points to:
gcr.io/knative-releases/github.com/knative/build/cmd/controller@sha256:6c88fa5ae54a41182d9a7e9795c3a56f7ef716701137095a08f24ff6a3cca37d
  2. In the build-only instructions, it points to:
gcr.io/knative-releases/github.com/knative/build/cmd/controller@sha256:5d12da76b8ba36548a97b07866fdc9c13c1cb0e499dfdcca97af731b1ad2c488

I changed the image to the one Knative Serving uses to see if that fixes it. However, it does not.

@tejal29

tejal29 commented Oct 10, 2018

@bobcatfish what do you mean by "if it hasn't been completely destroyed"? I could access the logs 30 minutes after the build was completed.

@bobcatfish
Contributor

@tejal29 I think that if a reference to the pod is sticking around (or the container? not sure!), i.e. some controller is still watching these resources, then the pod/container/logs will not be destroyed either.

My understanding of how this works is super hazy, though!

@dlorenc
Contributor

dlorenc commented Oct 12, 2018

@fejta pointed me at a cool solution for this that Prow uses: https://github.com/kubernetes/test-infra/tree/master/prow/cmd/entrypoint

Basically we can hijack the specified cmd/args of each step and replace them with this utility. It then executes the specified process, but with stdout/stderr logged to a real file. We can then dump that at the end, or do whatever we need to with it.
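
A minimal sketch of that hijack in shell, assuming a shared volume mounted at /builder/logs (the path and log file name here are illustrative, not what Prow's entrypoint actually uses):

# Run the step's original command, but tee its combined stdout/stderr
# to a log file that outlives the container's own log stream.
mkdir -p /builder/logs
sh -c '"$@" 2>&1 | tee /builder/logs/step-hello.log' -- echo hello build

A full version would also need to record the process's exit code so step failures can still be reported.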

@fejta

fejta commented Oct 12, 2018

Note that an unfortunate side effect of this strategy is that it ignores any ENTRYPOINT defined by the image. The user will have to be careful to add the entrypoint command to the front of the command/args list.

If you figure out a clean way to extract the entrypoint so we can auto-add it into the entrypoint arg list, please let me know 😁
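
One possible starting point, assuming the image is present on a local Docker daemon (images that only live in a remote registry would need a registry API call instead):

# Read ENTRYPOINT and CMD from the image config; these are what would
# need to be prepended to the wrapper's argument list.
docker image inspect --format '{{json .Config.Entrypoint}}' busybox
docker image inspect --format '{{json .Config.Cmd}}' busybox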

@imjasonh
Member

imjasonh commented Oct 12, 2018

That is a pretty unfortunate side effect: most of the builder images we use and recommend rely on the entrypoint to reduce stutter. It might be a deal-breaker. 😞

Another approach I think we should consider is having the build controller stream logs from the build while it runs (to a build-specified location*), then have the final nop image block until it gets a signal from the controller that logs are done**.

* need to design how to specify a logs destination
** need to authenticate this request

@sebgoa
Contributor Author

sebgoa commented Oct 13, 2018

FWIW, we (TriggerMesh) are indeed able to get the logs from Elasticsearch, so the logs can be retrieved.

From a getting-started perspective, however, I think the docs/READMEs need to be modified so that people know they can't get the logs from kubectl.
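
For reference, a sketch of what such a lookup might look like, assuming the Fluentd-to-Elasticsearch setup from the Knative monitoring docs with its default logstash-* indices (the endpoint, index pattern, and field name are all assumptions about that setup):

# Query Elasticsearch for log lines emitted by the build pod.
curl 'http://elasticsearch:9200/logstash-*/_search?q=kubernetes.pod_name:hello-build-h6tg5'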
