diff --git a/v1.23/eks-a/PRODUCT.yaml b/v1.23/eks-a/PRODUCT.yaml
new file mode 100644
index 0000000000..fe2551de59
--- /dev/null
+++ b/v1.23/eks-a/PRODUCT.yaml
@@ -0,0 +1,9 @@
+vendor: Amazon Web Services
+name: Amazon Elastic Kubernetes Service Anywhere (Amazon EKS Anywhere)
+version: v1.23.7
+website_url: https://aws.amazon.com/eks/eks-anywhere
+repo_url: https://github.com/aws/eks-anywhere
+documentation_url: https://anywhere.eks.amazonaws.com/
+product_logo_url: https://raw.githubusercontent.com/aws/eks-anywhere/main/docs/static/AWS_logo_RGB.svg
+type: installer
+description: Amazon EKS Anywhere is a new deployment option for Amazon EKS that allows customers to create and operate Kubernetes clusters on customer-managed infrastructure, supported by AWS.
diff --git a/v1.23/eks-a/README.md b/v1.23/eks-a/README.md
new file mode 100644
index 0000000000..c7730d050a
--- /dev/null
+++ b/v1.23/eks-a/README.md
@@ -0,0 +1,212 @@
+# Conformance testing Amazon EKS Anywhere
+
+## Setup EKS Anywhere Cluster
+
+Set up an EKS Anywhere cluster according to the [EKS Anywhere documentation](https://anywhere.eks.amazonaws.com/).
+
+Create an [EKS Anywhere production cluster](https://anywhere.eks.amazonaws.com/docs/getting-started/production-environment/) to reproduce the EKS Anywhere Conformance e2e results.
+
+
+## Requirements
+Create the Kubernetes cluster on a target workload environment, running EKS Anywhere from an administrative machine.
+
+### Target Workload Environment
+
+The target workload environment will need:
+
+* A vSphere 7+ environment running vCenter
+* Capacity to deploy 6-10 VMs
+* A DHCP service running in the vSphere environment, on the primary VM network for your workload cluster
+* One network in vSphere to use for the cluster. This network must have inbound access into vCenter
+* An OVA imported into vSphere and converted into a template for the workload VMs
+* User credentials to [create VMs, attach networks, etc.](https://anywhere.eks.amazonaws.com/docs/reference/vsphere/user-permissions/)
+
+Each VM will require:
+
+* 2 vCPU
+* 8GB RAM
+* 25GB Disk
+
+### Administrative Machine
+
+The administrative machine will need:
+
+* Docker 20.x.x
+* macOS (10.15) / Ubuntu (20.04.2 LTS)
+* 4 CPU cores
+* 16GB memory
+* 30GB free disk space
+
+#### Kubectl
+
+On the administrative machine, install and configure the Kubernetes command-line tool
+[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
+
+#### Docker
+
+The method to [install Docker](https://docs.docker.com/get-docker/) depends on your operating system and architecture.
+If you are using Ubuntu, use the [Docker CE](https://docs.docker.com/engine/install/ubuntu/) installation instructions to install Docker, not the Snap installation.
+
+#### EKS Anywhere
+
+Install [EKS Anywhere](https://anywhere.eks.amazonaws.com/docs/getting-started/install/) on your administrative machine.
+
+#### Sonobuoy
+
+Download a binary release of [sonobuoy](https://github.com/vmware-tanzu/sonobuoy/releases/).
+
+If you are on a Mac, you may need to open Security & Privacy settings and approve sonobuoy for
+execution.
+
+```shell
+if [[ "$(uname)" == "Darwin" ]]
+then
+  SONOBUOY=https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.50.0/sonobuoy_0.50.0_darwin_amd64.tar.gz
+else
+  SONOBUOY=https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.50.0/sonobuoy_0.50.0_linux_386.tar.gz
+fi
+wget -qO- ${SONOBUOY} | tar -xz sonobuoy
+chmod 755 sonobuoy
+```
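+
+Optionally, verify the downloaded binary before moving on (a minimal sanity check; `sonobuoy version` prints the client version and the Kubernetes version range it supports):
+
+```shell
+./sonobuoy version
+```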
+
+## Create EKS Anywhere Cluster
+
+1. Generate a cluster configuration:
+
+   ```shell
+   CLUSTER_NAME=prod
+   eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider vsphere >cluster.yaml
+   ```
+
+1. Populate the cluster configuration. For example:
+
+   ```yaml
+   apiVersion: anywhere.eks.amazonaws.com/v1alpha1
+   kind: Cluster
+   metadata:
+     name: prod
+   spec:
+     clusterNetwork:
+       cni: cilium
+       pods:
+         cidrBlocks:
+         - 192.168.0.0/16
+       services:
+         cidrBlocks:
+         - 10.96.0.0/12
+     controlPlaneConfiguration:
+       count: 2
+       endpoint:
+         host: "198.18.100.79"
+       machineGroupRef:
+         kind: VSphereMachineConfig
+         name: prod-cp
+     datacenterRef:
+       kind: VSphereDatacenterConfig
+       name: prod
+     externalEtcdConfiguration:
+       count: 3
+       machineGroupRef:
+         kind: VSphereMachineConfig
+         name: prod-etcd
+     kubernetesVersion: "1.23"
+     managementCluster:
+       name: prod
+     workerNodeGroupConfigurations:
+     - count: 2
+       machineGroupRef:
+         kind: VSphereMachineConfig
+         name: prod
+   ---
+   apiVersion: anywhere.eks.amazonaws.com/v1alpha1
+   kind: VSphereDatacenterConfig
+   metadata:
+     name: prod
+   spec:
+     datacenter: "SDDC-Datacenter"
+     insecure: false
+     network: "/SDDC-Datacenter/network/sddc-cgw-network-1"
+     server: "vcenter.sddc-44-239-186-141.vmwarevmc.com"
+     thumbprint: ""
+   ---
+   apiVersion: anywhere.eks.amazonaws.com/v1alpha1
+   kind: VSphereMachineConfig
+   metadata:
+     name: prod-cp
+   spec:
+     datastore: "/SDDC-Datacenter/datastore/WorkloadDatastore"
+     diskGiB: 25
+     folder: "/SDDC-Datacenter/vm/capv/prod"
+     memoryMiB: 8192
+     numCPUs: 2
+     osFamily: bottlerocket
+     resourcePool: "*/Resources/Compute-ResourcePool"
+     users:
+     - name: ec2-user
+       sshAuthorizedKeys:
+       - "ssh-rsa AAAA..."
+   ---
+   apiVersion: anywhere.eks.amazonaws.com/v1alpha1
+   kind: VSphereMachineConfig
+   metadata:
+     name: prod
+   spec:
+     datastore: "/SDDC-Datacenter/datastore/WorkloadDatastore"
+     diskGiB: 25
+     folder: "/SDDC-Datacenter/vm/capv/prod"
+     memoryMiB: 8192
+     numCPUs: 2
+     osFamily: bottlerocket
+     resourcePool: "*/Resources/Compute-ResourcePool"
+     users:
+     - name: ec2-user
+       sshAuthorizedKeys:
+       - "ssh-rsa AAAA..."
+   ---
+   apiVersion: anywhere.eks.amazonaws.com/v1alpha1
+   kind: VSphereMachineConfig
+   metadata:
+     name: prod-etcd
+   spec:
+     datastore: "/SDDC-Datacenter/datastore/WorkloadDatastore"
+     diskGiB: 25
+     folder: "/SDDC-Datacenter/vm/capv/prod"
+     memoryMiB: 8192
+     numCPUs: 2
+     osFamily: bottlerocket
+     resourcePool: "*/Resources/Compute-ResourcePool"
+     users:
+     - name: ec2-user
+       sshAuthorizedKeys:
+       - "ssh-rsa AAAA..."
+   ```
+
+1. Set the credential environment variables:
+
+   ```shell
+   export EKSA_VSPHERE_USERNAME='billy'
+   export EKSA_VSPHERE_PASSWORD='t0p$ecret'
+   ```
+
+1. Create a cluster:
+
+   ```shell
+   eksctl anywhere create cluster -f cluster.yaml -v 4
+   ```
+
+
+## Run Sonobuoy e2e
+```shell
+./sonobuoy run --mode=certified-conformance --wait --kube-conformance-image k8s.gcr.io/conformance:v1.23.7
+results=$(./sonobuoy retrieve)
+mkdir ./results
+tar xzf $results -C ./results
+./sonobuoy e2e ${results}
+mv results/plugins/e2e/results/global/* .
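+# Optional sanity check (an assumption, not part of the original flow:
+# $results must still point at the tarball retrieved above). This prints
+# sonobuoy's per-plugin pass/fail summary before moving the raw logs.
+./sonobuoy results $results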
+``` + +## Cleanup +```shell +eksctl anywhere delete cluster prod -v 4 +rm -rf cluster.yaml prod *tar.gz results +``` diff --git a/v1.23/eks-a/e2e.log b/v1.23/eks-a/e2e.log new file mode 100644 index 0000000000..ce0ebb3068 --- /dev/null +++ b/v1.23/eks-a/e2e.log @@ -0,0 +1,15213 @@ +I0817 22:38:43.098533 20 e2e.go:132] Starting e2e run "18b3b74a-d7eb-485e-bb7a-38080b026820" on Ginkgo node 1 +{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0} +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1660775922E0817 22:38:45.945157 20 progress.go:119] Failed to post progress update to http://localhost:8099/progress: Post "http://localhost:8099/progress": dial tcp [::1]:8099: connect: connection refused + - Will randomize all specs +Will run 346 of 7044 specs + +Aug 17 22:38:45.945: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 22:38:45.946: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Aug 17 22:38:45.965: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready +Aug 17 22:38:46.063: INFO: 29 / 29 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Aug 17 22:38:46.063: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready. +Aug 17 22:38:46.063: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Aug 17 22:38:46.071: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'cilium' (0 seconds elapsed) +Aug 17 22:38:46.071: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Aug 17 22:38:46.071: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'vsphere-cloud-controller-manager' (0 seconds elapsed) +Aug 17 22:38:46.071: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'vsphere-csi-node' (0 seconds elapsed) +Aug 17 22:38:46.071: INFO: e2e test version: v1.23.7 +Aug 17 22:38:46.072: INFO: kube-apiserver version: v1.23.7-eks-7709a84 +Aug 17 22:38:46.072: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 22:38:46.075: INFO: Cluster IP family: ipv4 +SSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:38:46.075: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +W0817 22:38:46.106313 20 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ +Aug 17 22:38:46.106: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
+STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap configmap-449/configmap-test-7cc2fa46-50e0-4615-977c-17ac713bf718 +STEP: Creating a pod to test consume configMaps +Aug 17 22:38:46.130: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e340e56-db97-4f5a-a1c0-f7a50e3ac0a2" in namespace "configmap-449" to be "Succeeded or Failed" +Aug 17 22:38:46.134: INFO: Pod "pod-configmaps-9e340e56-db97-4f5a-a1c0-f7a50e3ac0a2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.497785ms +Aug 17 22:38:48.139: INFO: Pod "pod-configmaps-9e340e56-db97-4f5a-a1c0-f7a50e3ac0a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008724306s +Aug 17 22:38:50.143: INFO: Pod "pod-configmaps-9e340e56-db97-4f5a-a1c0-f7a50e3ac0a2": Phase="Running", Reason="", readiness=false. Elapsed: 4.013095898s +Aug 17 22:38:52.148: INFO: Pod "pod-configmaps-9e340e56-db97-4f5a-a1c0-f7a50e3ac0a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018032705s +STEP: Saw pod success +Aug 17 22:38:52.149: INFO: Pod "pod-configmaps-9e340e56-db97-4f5a-a1c0-f7a50e3ac0a2" satisfied condition "Succeeded or Failed" +Aug 17 22:38:52.152: INFO: Trying to get logs from node 195.17.65.231 pod pod-configmaps-9e340e56-db97-4f5a-a1c0-f7a50e3ac0a2 container env-test: +STEP: delete the pod +Aug 17 22:38:52.188: INFO: Waiting for pod pod-configmaps-9e340e56-db97-4f5a-a1c0-f7a50e3ac0a2 to disappear +Aug 17 22:38:52.192: INFO: Pod pod-configmaps-9e340e56-db97-4f5a-a1c0-f7a50e3ac0a2 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:38:52.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-449" for this suite. 
+ +• [SLOW TEST:6.128 seconds] +[sig-node] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":1,"skipped":12,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:38:52.204: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:38:52.249: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Aug 17 22:38:57.258: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Aug 17 22:38:59.267: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Aug 17 22:39:01.273: INFO: Creating deployment "test-rollover-deployment" +Aug 17 22:39:01.285: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Aug 17 22:39:03.295: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Aug 17 22:39:03.302: INFO: Ensure that both replica sets have 1 created replica +Aug 17 22:39:03.308: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Aug 17 22:39:03.319: INFO: Updating deployment test-rollover-deployment +Aug 17 22:39:03.319: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Aug 17 22:39:05.328: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Aug 17 22:39:05.336: INFO: Make sure deployment "test-rollover-deployment" is complete +Aug 17 22:39:05.343: INFO: all replica sets need to contain the pod-template-hash label +Aug 17 22:39:05.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 3, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 22:39:07.351: INFO: all replica sets need to contain the pod-template-hash label +Aug 17 22:39:07.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 3, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 22:39:09.353: INFO: all replica sets need to contain the pod-template-hash label +Aug 17 22:39:09.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 7, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 22:39:11.355: INFO: all replica sets need to contain the pod-template-hash label +Aug 17 22:39:11.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 7, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 22:39:13.354: INFO: all replica sets need to contain the pod-template-hash label +Aug 17 22:39:13.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 7, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 22:39:15.354: INFO: all replica sets need to contain the pod-template-hash label +Aug 17 22:39:15.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 7, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 22:39:17.354: INFO: all replica sets need to contain the pod-template-hash label +Aug 17 22:39:17.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 39, 7, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 39, 1, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 22:39:19.353: INFO: +Aug 17 22:39:19.353: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 17 22:39:19.363: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-833 bd21ea9c-8d82-4203-b472-c7d903fcc475 13240 2 2022-08-17 22:39:01 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-08-17 22:39:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} 
} {kube-controller-manager Update apps/v1 2022-08-17 22:39:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002784138 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-08-17 22:39:01 +0000 UTC,LastTransitionTime:2022-08-17 22:39:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668b7f667d" has successfully progressed.,LastUpdateTime:2022-08-17 22:39:18 +0000 UTC,LastTransitionTime:2022-08-17 22:39:01 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Aug 17 22:39:19.366: INFO: New ReplicaSet "test-rollover-deployment-668b7f667d" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:{test-rollover-deployment-668b7f667d deployment-833 8cc081ef-f000-492d-8a7b-3bcbe007f9b5 13228 2 2022-08-17 22:39:03 +0000 UTC map[name:rollover-pod pod-template-hash:668b7f667d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment bd21ea9c-8d82-4203-b472-c7d903fcc475 0xc002784607 0xc002784608}] [] [{kube-controller-manager Update apps/v1 2022-08-17 22:39:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bd21ea9c-8d82-4203-b472-c7d903fcc475\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 22:39:17 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668b7f667d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:668b7f667d] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0027846b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Aug 17 22:39:19.367: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Aug 17 22:39:19.367: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-833 8e11d781-1113-467b-80fd-b735a2ad0adc 13239 2 2022-08-17 22:38:52 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment bd21ea9c-8d82-4203-b472-c7d903fcc475 0xc0027844d7 0xc0027844d8}] [] [{e2e.test Update apps/v1 2022-08-17 22:38:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 22:39:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bd21ea9c-8d82-4203-b472-c7d903fcc475\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-08-17 22:39:18 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002784598 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 17 22:39:19.367: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-784bc44b77 deployment-833 b4ce9f33-086a-4521-a781-7f4475e88ea9 13061 2 2022-08-17 22:39:01 +0000 UTC map[name:rollover-pod pod-template-hash:784bc44b77] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment bd21ea9c-8d82-4203-b472-c7d903fcc475 0xc002784727 0xc002784728}] [] [{kube-controller-manager Update apps/v1 2022-08-17 22:39:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bd21ea9c-8d82-4203-b472-c7d903fcc475\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 22:39:03 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 784bc44b77,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:784bc44b77] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0027847d8 
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 17 22:39:19.370: INFO: Pod "test-rollover-deployment-668b7f667d-zbqqf" is available: +&Pod{ObjectMeta:{test-rollover-deployment-668b7f667d-zbqqf test-rollover-deployment-668b7f667d- deployment-833 d96f10b6-8247-41ad-b227-0f2e2fee3290 13127 0 2022-08-17 22:39:03 +0000 UTC map[name:rollover-pod pod-template-hash:668b7f667d] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668b7f667d 8cc081ef-f000-492d-8a7b-3bcbe007f9b5 0xc002784d27 0xc002784d28}] [] [{kube-controller-manager Update v1 2022-08-17 22:39:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8cc081ef-f000-492d-8a7b-3bcbe007f9b5\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-17 22:39:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.199\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bwzqq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bwzqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Init
ialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:39:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:39:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:192.168.2.199,StartTime:2022-08-17 22:39:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-17 22:39:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://cc0bc74be60e91de1db214fd45f604d25cea920f4f69f1ea4b2a498a9ecdbf30,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.199,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:39:19.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-833" for this suite. + +• [SLOW TEST:27.182 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should support rollover [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":2,"skipped":20,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:39:19.390: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-7621 +[It] should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 
+STEP: Creating statefulset ss in namespace statefulset-7621 +Aug 17 22:39:19.446: INFO: Found 0 stateful pods, waiting for 1 +Aug 17 22:39:29.453: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Patch Statefulset to include a label +STEP: Getting /status +Aug 17 22:39:29.495: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil) +STEP: updating the StatefulSet Status +Aug 17 22:39:29.510: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the statefulset status to be updated +Aug 17 22:39:29.512: INFO: Observed &StatefulSet event: ADDED +Aug 17 22:39:29.512: INFO: Found Statefulset ss in namespace statefulset-7621 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Aug 17 22:39:29.512: INFO: Statefulset ss has an updated status +STEP: patching the Statefulset Status +Aug 17 22:39:29.512: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Aug 17 22:39:29.521: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Statefulset status to be patched +Aug 17 22:39:29.523: INFO: Observed &StatefulSet event: ADDED +Aug 17 22:39:29.523: INFO: Observed Statefulset ss in namespace statefulset-7621 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Aug 17 22:39:29.524: INFO: Observed &StatefulSet event: MODIFIED +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 17 22:39:29.524: INFO: Deleting all statefulset in ns statefulset-7621 +Aug 17 22:39:29.527: INFO: Scaling statefulset ss to 0 +Aug 17 22:39:39.550: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 17 22:39:39.553: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:39:39.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-7621" for this suite. 
+ +• [SLOW TEST:20.189 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + should validate Statefulset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":3,"skipped":42,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:39:39.585: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should run through the lifecycle of a ServiceAccount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a ServiceAccount +STEP: watching for the ServiceAccount to be added +STEP: patching the ServiceAccount +STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) +STEP: deleting the ServiceAccount +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:39:39.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-1620" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":4,"skipped":82,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:39:39.662: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name configmap-test-volume-map-c3c42165-e3c8-442f-9163-34d55d16a10c +STEP: Creating a pod to test consume configMaps +Aug 17 22:39:39.705: INFO: Waiting up to 5m0s for pod "pod-configmaps-6796d16d-378f-49d1-a491-9b3ef132e212" in namespace "configmap-5096" to be "Succeeded or Failed" +Aug 17 22:39:39.709: INFO: Pod "pod-configmaps-6796d16d-378f-49d1-a491-9b3ef132e212": Phase="Pending", Reason="", readiness=false. Elapsed: 3.946395ms +Aug 17 22:39:41.716: INFO: Pod "pod-configmaps-6796d16d-378f-49d1-a491-9b3ef132e212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010563845s +Aug 17 22:39:43.727: INFO: Pod "pod-configmaps-6796d16d-378f-49d1-a491-9b3ef132e212": Phase="Running", Reason="", readiness=true. Elapsed: 4.022016101s +Aug 17 22:39:45.736: INFO: Pod "pod-configmaps-6796d16d-378f-49d1-a491-9b3ef132e212": Phase="Running", Reason="", readiness=false. Elapsed: 6.031321879s +Aug 17 22:39:47.743: INFO: Pod "pod-configmaps-6796d16d-378f-49d1-a491-9b3ef132e212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037911024s +STEP: Saw pod success +Aug 17 22:39:47.743: INFO: Pod "pod-configmaps-6796d16d-378f-49d1-a491-9b3ef132e212" satisfied condition "Succeeded or Failed" +Aug 17 22:39:47.747: INFO: Trying to get logs from node 195.17.65.231 pod pod-configmaps-6796d16d-378f-49d1-a491-9b3ef132e212 container agnhost-container: +STEP: delete the pod +Aug 17 22:39:47.771: INFO: Waiting for pod pod-configmaps-6796d16d-378f-49d1-a491-9b3ef132e212 to disappear +Aug 17 22:39:47.774: INFO: Pod pod-configmaps-6796d16d-378f-49d1-a491-9b3ef132e212 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:39:47.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5096" for this suite. 
+ +• [SLOW TEST:8.123 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":5,"skipped":108,"failed":0} +SSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:39:47.786: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename prestop +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 +[It] should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating server pod server in namespace prestop-5334 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-5334 +STEP: Deleting pre-stop pod +Aug 17 22:39:58.881: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [sig-node] PreStop + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:39:58.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-5334" for this suite. 
+ +• [SLOW TEST:11.127 seconds] +[sig-node] PreStop +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 + should call prestop when killing a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":346,"completed":6,"skipped":113,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:39:58.916: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 22:39:58.959: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bda8989e-761a-4cff-9286-fb1e9ecc7061" in namespace "downward-api-6396" to be "Succeeded or Failed" +Aug 17 22:39:58.963: INFO: Pod "downwardapi-volume-bda8989e-761a-4cff-9286-fb1e9ecc7061": Phase="Pending", Reason="", readiness=false. Elapsed: 4.67385ms +Aug 17 22:40:00.969: INFO: Pod "downwardapi-volume-bda8989e-761a-4cff-9286-fb1e9ecc7061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010501055s +Aug 17 22:40:02.975: INFO: Pod "downwardapi-volume-bda8989e-761a-4cff-9286-fb1e9ecc7061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016724504s +STEP: Saw pod success +Aug 17 22:40:02.976: INFO: Pod "downwardapi-volume-bda8989e-761a-4cff-9286-fb1e9ecc7061" satisfied condition "Succeeded or Failed" +Aug 17 22:40:02.980: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-bda8989e-761a-4cff-9286-fb1e9ecc7061 container client-container: +STEP: delete the pod +Aug 17 22:40:03.001: INFO: Waiting for pod downwardapi-volume-bda8989e-761a-4cff-9286-fb1e9ecc7061 to disappear +Aug 17 22:40:03.004: INFO: Pod downwardapi-volume-bda8989e-761a-4cff-9286-fb1e9ecc7061 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:40:03.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6396" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":7,"skipped":137,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:40:03.016: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:40:03.044: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:40:04.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-1804" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":346,"completed":8,"skipped":142,"failed":0} +S +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:40:04.208: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating the pod +Aug 17 22:40:04.236: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:40:07.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-3256" for this suite. 
+•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":9,"skipped":143,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:40:07.779: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:40:07.836: INFO: Create a RollingUpdate DaemonSet +Aug 17 22:40:07.845: INFO: Check that daemon pods launch on every node of the cluster +Aug 17 22:40:07.852: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:07.852: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:07.856: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 17 22:40:07.856: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 22:40:08.866: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:08.866: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:08.870: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 17 22:40:08.870: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 22:40:09.863: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:09.863: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:09.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 17 22:40:09.867: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 22:40:10.863: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:10.863: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:10.867: INFO: Number of nodes with available pods controlled by daemonset 
daemon-set: 1 +Aug 17 22:40:10.867: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 22:40:11.865: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:11.865: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:11.870: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 17 22:40:11.870: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 22:40:12.862: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:12.862: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:12.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 17 22:40:12.867: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 22:40:13.866: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:13.866: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:13.869: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 17 22:40:13.870: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +Aug 17 22:40:13.870: INFO: Update the DaemonSet to trigger a rollout +Aug 17 22:40:13.880: INFO: Updating DaemonSet daemon-set +Aug 17 22:40:14.904: INFO: Roll back the DaemonSet before rollout is complete +Aug 17 22:40:14.916: INFO: Updating DaemonSet daemon-set +Aug 17 22:40:14.916: INFO: Make sure DaemonSet rollback is complete +Aug 17 22:40:14.922: INFO: Wrong image for pod: daemon-set-2qkbm. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2, got: foo:non-existent. 
+Aug 17 22:40:14.922: INFO: Pod daemon-set-2qkbm is not available +Aug 17 22:40:14.930: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:14.930: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:15.942: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:15.942: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:16.945: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:16.945: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:17.937: INFO: Pod daemon-set-cmmx2 is not available +Aug 17 22:40:17.943: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:40:17.943: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1222, will wait for the garbage collector to delete the pods +Aug 17 22:40:18.012: INFO: Deleting DaemonSet.extensions daemon-set took: 8.726076ms +Aug 17 22:40:18.112: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.432058ms +Aug 17 22:40:20.819: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 17 22:40:20.819: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 17 22:40:20.825: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"14284"},"items":null} + +Aug 17 22:40:20.828: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"14284"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:40:20.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-1222" for this suite. 
+ +• [SLOW TEST:13.090 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should rollback without unnecessary restarts [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":10,"skipped":150,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:40:20.869: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a ResourceQuota with best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not best effort scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a best-effort pod +STEP: Ensuring resource quota with best effort scope captures the pod usage +STEP: Ensuring resource quota with not best effort ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a not best-effort pod +STEP: Ensuring resource quota with not best effort scope captures the pod usage +STEP: Ensuring resource quota with best effort scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:40:37.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-2092" for this suite. + +• [SLOW TEST:16.164 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with best effort scope. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":346,"completed":11,"skipped":156,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:40:37.037: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-5738 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace statefulset-5738 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5738 +Aug 17 22:40:37.094: INFO: Found 0 stateful pods, waiting for 1 +Aug 17 22:40:47.100: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Aug 17 22:40:47.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-5738 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 17 22:40:47.564: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 17 22:40:47.564: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 17 22:40:47.564: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 17 22:40:47.569: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Aug 17 22:40:57.577: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Aug 17 22:40:57.577: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 17 22:40:57.593: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999734s +Aug 17 22:40:58.601: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996387456s +Aug 17 22:40:59.611: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987691765s +Aug 17 22:41:00.617: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.978460143s +Aug 17 22:41:01.623: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.972513703s +Aug 17 22:41:02.630: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.967105284s +Aug 17 22:41:03.635: INFO: Verifying statefulset ss doesn't scale past 1 for 
another 3.958286427s +Aug 17 22:41:04.644: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.953839912s +Aug 17 22:41:05.649: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.945268091s +Aug 17 22:41:06.656: INFO: Verifying statefulset ss doesn't scale past 1 for another 940.865124ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5738 +Aug 17 22:41:07.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-5738 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 17 22:41:07.810: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 17 22:41:07.810: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 17 22:41:07.810: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 17 22:41:07.815: INFO: Found 1 stateful pods, waiting for 3 +Aug 17 22:41:17.825: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 22:41:17.825: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 22:41:17.825: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Aug 17 22:41:17.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-5738 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 17 22:41:17.998: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 17 22:41:17.998: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 17 22:41:17.998: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 17 22:41:17.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-5738 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 17 22:41:18.156: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 17 22:41:18.156: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 17 22:41:18.156: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 17 22:41:18.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-5738 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 17 22:41:18.295: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 17 22:41:18.295: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 17 22:41:18.295: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 17 22:41:18.295: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 17 22:41:18.301: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 +Aug 17 22:41:28.310: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Aug 17 
22:41:28.310: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Aug 17 22:41:28.310: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Aug 17 22:41:28.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999758s +Aug 17 22:41:29.334: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994619527s +Aug 17 22:41:30.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988198686s +Aug 17 22:41:31.347: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981872377s +Aug 17 22:41:32.354: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974610446s +Aug 17 22:41:33.363: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.966785562s +Aug 17 22:41:34.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.957753249s +Aug 17 22:41:35.376: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.952849785s +Aug 17 22:41:36.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.945516474s +Aug 17 22:41:37.390: INFO: Verifying statefulset ss doesn't scale past 3 for another 937.464503ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5738 +Aug 17 22:41:38.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-5738 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 17 22:41:38.527: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 17 22:41:38.527: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 17 22:41:38.527: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 17 22:41:38.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-5738 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 17 22:41:38.664: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 17 22:41:38.664: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 17 22:41:38.664: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 17 22:41:38.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-5738 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 17 22:41:38.797: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 17 22:41:38.797: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 17 22:41:38.797: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 17 22:41:38.797: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 17 22:41:48.816: INFO: Deleting all statefulset in ns statefulset-5738 +Aug 17 22:41:48.820: INFO: Scaling statefulset ss to 0 +Aug 17 22:41:48.832: INFO: Waiting for statefulset status.replicas updated to 
0 +Aug 17 22:41:48.834: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:41:48.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5738" for this suite. + +• [SLOW TEST:71.821 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":12,"skipped":213,"failed":0} +SSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:41:48.859: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:41:50.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-6587" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":13,"skipped":221,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:41:51.003: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 22:41:51.049: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29e7517b-ee02-4ca1-b02d-36dc9b8e0f9c" in namespace "projected-3075" to be "Succeeded or Failed" +Aug 17 22:41:51.054: INFO: Pod "downwardapi-volume-29e7517b-ee02-4ca1-b02d-36dc9b8e0f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.291634ms +Aug 17 22:41:53.061: INFO: Pod "downwardapi-volume-29e7517b-ee02-4ca1-b02d-36dc9b8e0f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011881755s +Aug 17 22:41:55.071: INFO: Pod "downwardapi-volume-29e7517b-ee02-4ca1-b02d-36dc9b8e0f9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021948011s +STEP: Saw pod success +Aug 17 22:41:55.071: INFO: Pod "downwardapi-volume-29e7517b-ee02-4ca1-b02d-36dc9b8e0f9c" satisfied condition "Succeeded or Failed" +Aug 17 22:41:55.075: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-29e7517b-ee02-4ca1-b02d-36dc9b8e0f9c container client-container: +STEP: delete the pod +Aug 17 22:41:55.106: INFO: Waiting for pod downwardapi-volume-29e7517b-ee02-4ca1-b02d-36dc9b8e0f9c to disappear +Aug 17 22:41:55.109: INFO: Pod downwardapi-volume-29e7517b-ee02-4ca1-b02d-36dc9b8e0f9c no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:41:55.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3075" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":14,"skipped":230,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:41:55.121: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating projection with secret that has name projected-secret-test-4f377434-f424-49db-9a24-6b21c5cfce61 +STEP: Creating a pod to test consume secrets +Aug 17 22:41:55.158: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e5c274c7-65e0-4383-9e6d-e799fff34fd3" in namespace "projected-9668" to be "Succeeded or Failed" +Aug 17 22:41:55.161: INFO: Pod "pod-projected-secrets-e5c274c7-65e0-4383-9e6d-e799fff34fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.544362ms +Aug 17 22:41:57.165: INFO: Pod "pod-projected-secrets-e5c274c7-65e0-4383-9e6d-e799fff34fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007268758s +Aug 17 22:41:59.172: INFO: Pod "pod-projected-secrets-e5c274c7-65e0-4383-9e6d-e799fff34fd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014069932s +STEP: Saw pod success +Aug 17 22:41:59.172: INFO: Pod "pod-projected-secrets-e5c274c7-65e0-4383-9e6d-e799fff34fd3" satisfied condition "Succeeded or Failed" +Aug 17 22:41:59.176: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-secrets-e5c274c7-65e0-4383-9e6d-e799fff34fd3 container projected-secret-volume-test: +STEP: delete the pod +Aug 17 22:41:59.202: INFO: Waiting for pod pod-projected-secrets-e5c274c7-65e0-4383-9e6d-e799fff34fd3 to disappear +Aug 17 22:41:59.205: INFO: Pod pod-projected-secrets-e5c274c7-65e0-4383-9e6d-e799fff34fd3 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:41:59.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9668" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":15,"skipped":239,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:41:59.221: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 22:41:59.811: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 22:42:02.853: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:42:02.859: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8122-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:42:06.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-9420" for this suite. +STEP: Destroying namespace "webhook-9420-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:6.878 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource with pruning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":16,"skipped":267,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:42:06.099: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0777 on node default medium +Aug 17 22:42:06.149: INFO: Waiting up to 5m0s for pod "pod-31930f12-df1d-412c-8613-0515608cf36c" in namespace "emptydir-3764" to be "Succeeded or Failed" +Aug 17 22:42:06.159: INFO: Pod "pod-31930f12-df1d-412c-8613-0515608cf36c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.430613ms +Aug 17 22:42:08.168: INFO: Pod "pod-31930f12-df1d-412c-8613-0515608cf36c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018099737s +Aug 17 22:42:10.172: INFO: Pod "pod-31930f12-df1d-412c-8613-0515608cf36c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022840482s +STEP: Saw pod success +Aug 17 22:42:10.172: INFO: Pod "pod-31930f12-df1d-412c-8613-0515608cf36c" satisfied condition "Succeeded or Failed" +Aug 17 22:42:10.175: INFO: Trying to get logs from node 195.17.65.231 pod pod-31930f12-df1d-412c-8613-0515608cf36c container test-container: +STEP: delete the pod +Aug 17 22:42:10.201: INFO: Waiting for pod pod-31930f12-df1d-412c-8613-0515608cf36c to disappear +Aug 17 22:42:10.204: INFO: Pod pod-31930f12-df1d-412c-8613-0515608cf36c no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:42:10.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3764" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":17,"skipped":295,"failed":0} +SSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:42:10.214: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name projected-configmap-test-volume-4590520b-bc6f-476c-a41b-53b435409b0a +STEP: Creating a pod to test consume configMaps +Aug 17 22:42:10.257: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a34ce347-a130-4e89-819f-cbf44a67561c" in namespace "projected-9473" to be "Succeeded or Failed" +Aug 17 22:42:10.263: INFO: Pod "pod-projected-configmaps-a34ce347-a130-4e89-819f-cbf44a67561c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035254ms +Aug 17 22:42:12.270: INFO: Pod "pod-projected-configmaps-a34ce347-a130-4e89-819f-cbf44a67561c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013022084s +Aug 17 22:42:14.276: INFO: Pod "pod-projected-configmaps-a34ce347-a130-4e89-819f-cbf44a67561c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019656293s +STEP: Saw pod success +Aug 17 22:42:14.277: INFO: Pod "pod-projected-configmaps-a34ce347-a130-4e89-819f-cbf44a67561c" satisfied condition "Succeeded or Failed" +Aug 17 22:42:14.280: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-configmaps-a34ce347-a130-4e89-819f-cbf44a67561c container agnhost-container: +STEP: delete the pod +Aug 17 22:42:14.308: INFO: Waiting for pod pod-projected-configmaps-a34ce347-a130-4e89-819f-cbf44a67561c to disappear +Aug 17 22:42:14.311: INFO: Pod pod-projected-configmaps-a34ce347-a130-4e89-819f-cbf44a67561c no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:42:14.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9473" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":18,"skipped":301,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:42:14.324: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0777 on node default medium +Aug 17 22:42:14.361: INFO: Waiting up to 5m0s for pod "pod-7de068d8-3007-4268-bdce-92dbb0b2d533" in namespace "emptydir-5854" to be "Succeeded or Failed" +Aug 17 22:42:14.369: INFO: Pod "pod-7de068d8-3007-4268-bdce-92dbb0b2d533": Phase="Pending", Reason="", readiness=false. Elapsed: 7.406337ms +Aug 17 22:42:16.378: INFO: Pod "pod-7de068d8-3007-4268-bdce-92dbb0b2d533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016022458s +Aug 17 22:42:18.384: INFO: Pod "pod-7de068d8-3007-4268-bdce-92dbb0b2d533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022693844s +STEP: Saw pod success +Aug 17 22:42:18.384: INFO: Pod "pod-7de068d8-3007-4268-bdce-92dbb0b2d533" satisfied condition "Succeeded or Failed" +Aug 17 22:42:18.387: INFO: Trying to get logs from node 195.17.65.231 pod pod-7de068d8-3007-4268-bdce-92dbb0b2d533 container test-container: +STEP: delete the pod +Aug 17 22:42:18.413: INFO: Waiting for pod pod-7de068d8-3007-4268-bdce-92dbb0b2d533 to disappear +Aug 17 22:42:18.415: INFO: Pod pod-7de068d8-3007-4268-bdce-92dbb0b2d533 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:42:18.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-5854" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":19,"skipped":310,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:42:18.427: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should receive events on concurrent watches in same order [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: getting a starting resourceVersion +STEP: starting a background goroutine to produce watch events +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:42:21.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-5206" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":20,"skipped":314,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:42:21.288: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name cm-test-opt-del-0ed4b67f-2c8f-42fd-995e-4a575abf4bd6 +STEP: Creating configMap with name cm-test-opt-upd-0df7c586-f7b4-41f3-9abb-742fd1327750 +STEP: Creating the pod +Aug 17 22:42:21.349: INFO: The status of Pod pod-projected-configmaps-6c7350c2-f13b-4167-8643-aa73bb6071c9 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:42:23.356: INFO: The status of Pod pod-projected-configmaps-6c7350c2-f13b-4167-8643-aa73bb6071c9 is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-0ed4b67f-2c8f-42fd-995e-4a575abf4bd6 +STEP: Updating configmap cm-test-opt-upd-0df7c586-f7b4-41f3-9abb-742fd1327750 +STEP: Creating configMap with name 
cm-test-opt-create-d30398cb-bebe-4c3b-bf69-be06717c2a42 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:42:25.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9505" for this suite. +•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":21,"skipped":387,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:42:25.445: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename taint-multiple-pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 +Aug 17 22:42:25.472: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 17 22:43:25.518: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:43:25.522: INFO: Starting informer... +STEP: Starting pods... +Aug 17 22:43:25.745: INFO: Pod1 is running on 195.17.65.231. Tainting Node +Aug 17 22:43:27.969: INFO: Pod2 is running on 195.17.65.231. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting for Pod1 and Pod2 to be deleted +Aug 17 22:43:33.674: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Aug 17 22:43:53.432: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:43:53.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-multiple-pods-5460" for this suite. 
+ +• [SLOW TEST:88.037 seconds] +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":22,"skipped":403,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:43:53.483: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the rc +STEP: delete the rc +STEP: wait for all pods to be garbage collected +STEP: Gathering metrics +Aug 17 22:44:03.586: INFO: The status of Pod kube-controller-manager-195.17.32.244 is Running (Ready = true) +Aug 17 22:44:03.651: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:44:03.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-5571" for this suite. 
+ +• [SLOW TEST:10.185 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should delete pods created by rc when not orphaning [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":23,"skipped":427,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:44:03.669: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating pod +Aug 17 22:44:03.713: INFO: The status of Pod pod-hostip-167264bf-b099-42b2-bfc3-8cec0943c71c is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:44:05.721: INFO: The status of Pod pod-hostip-167264bf-b099-42b2-bfc3-8cec0943c71c is Running (Ready = true) +Aug 17 22:44:05.728: INFO: Pod pod-hostip-167264bf-b099-42b2-bfc3-8cec0943c71c has hostIP: 195.17.65.231 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:44:05.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-7609" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":24,"skipped":461,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:44:05.741: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota +Aug 17 22:44:05.773: INFO: Pod name sample-pod: Found 0 pods out of 1 +Aug 17 22:44:10.780: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the replicaset Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:44:10.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3610" for this suite. + +• [SLOW TEST:5.087 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Replicaset should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":25,"skipped":495,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:44:10.828: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should update/patch PodDisruptionBudget status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Waiting for the pdb to be processed +STEP: Updating PodDisruptionBudget status +STEP: Waiting for all pods to be running +Aug 17 22:44:12.910: INFO: running pods: 0 < 1 +STEP: 
locating a running pod +STEP: Waiting for the pdb to be processed +STEP: Patching PodDisruptionBudget status +STEP: Waiting for the pdb to be processed +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:44:14.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-5960" for this suite. +•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":26,"skipped":502,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:44:14.962: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 22:44:15.298: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 22:44:18.332: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:44:18.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6192" for this suite. +STEP: Destroying namespace "webhook-6192-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":27,"skipped":511,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:44:18.641: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Aug 17 22:44:18.693: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 17 22:45:18.734: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Create pods that use 4/5 of node resources. +Aug 17 22:45:18.766: INFO: Created pod: pod0-0-sched-preemption-low-priority +Aug 17 22:45:18.773: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Aug 17 22:45:18.795: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Aug 17 22:45:18.807: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a critical pod that use same resources as that of a lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:45:36.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-1960" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:78.298 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates lower priority pod preemption by critical pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":28,"skipped":568,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:45:36.940: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test substitution in container's command +Aug 17 22:45:36.981: INFO: Waiting up to 5m0s for pod "var-expansion-ccc75260-fba1-4c76-a342-3c873afbf0a6" in namespace "var-expansion-9210" to be "Succeeded or Failed" +Aug 17 22:45:36.989: INFO: Pod "var-expansion-ccc75260-fba1-4c76-a342-3c873afbf0a6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.13098ms +Aug 17 22:45:38.997: INFO: Pod "var-expansion-ccc75260-fba1-4c76-a342-3c873afbf0a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015761184s +Aug 17 22:45:41.007: INFO: Pod "var-expansion-ccc75260-fba1-4c76-a342-3c873afbf0a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025094528s +STEP: Saw pod success +Aug 17 22:45:41.007: INFO: Pod "var-expansion-ccc75260-fba1-4c76-a342-3c873afbf0a6" satisfied condition "Succeeded or Failed" +Aug 17 22:45:41.012: INFO: Trying to get logs from node 195.17.65.231 pod var-expansion-ccc75260-fba1-4c76-a342-3c873afbf0a6 container dapi-container: +STEP: delete the pod +Aug 17 22:45:41.052: INFO: Waiting for pod var-expansion-ccc75260-fba1-4c76-a342-3c873afbf0a6 to disappear +Aug 17 22:45:41.055: INFO: Pod var-expansion-ccc75260-fba1-4c76-a342-3c873afbf0a6 no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:45:41.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9210" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":29,"skipped":590,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:45:41.070: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 22:45:41.464: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 22:45:44.497: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Listing all of the created validation webhooks +STEP: Creating a configMap that should be mutated +STEP: Deleting the collection of validation webhooks +STEP: Creating a configMap that should not be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:45:44.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-846" for this suite. +STEP: Destroying namespace "webhook-846-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":30,"skipped":594,"failed":0} +S +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:45:44.760: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating the pod +Aug 17 22:45:44.821: INFO: The status of Pod labelsupdateffe7d265-722b-4ab2-8f38-32d9fa0bf93b is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:45:46.829: INFO: The status of Pod labelsupdateffe7d265-722b-4ab2-8f38-32d9fa0bf93b is Running (Ready = true) +Aug 17 22:45:47.358: INFO: Successfully updated pod "labelsupdateffe7d265-722b-4ab2-8f38-32d9fa0bf93b" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:45:51.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6083" for this suite. 
+ +• [SLOW TEST:6.636 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":31,"skipped":595,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:45:51.396: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:45:53.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-9908" for this suite. +•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":32,"skipped":604,"failed":0} + +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:45:53.471: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should find a service from listing all namespaces [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: fetching services +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:45:53.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8965" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":33,"skipped":604,"failed":0} +SS +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:45:53.513: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should validate Deployment Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a Deployment +Aug 17 22:45:53.540: INFO: Creating simple deployment test-deployment-m4s8k +Aug 17 22:45:53.553: INFO: deployment "test-deployment-m4s8k" doesn't have the required revision set +STEP: Getting /status +Aug 17 22:45:55.573: INFO: Deployment test-deployment-m4s8k has Conditions: [{Available True 2022-08-17 22:45:54 +0000 UTC 2022-08-17 22:45:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2022-08-17 22:45:54 +0000 UTC 2022-08-17 22:45:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-m4s8k-764bc7c4b7" has successfully progressed.}] +STEP: updating Deployment Status +Aug 17 22:45:55.583: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 45, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 45, 54, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 22, 45, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 22, 45, 53, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-m4s8k-764bc7c4b7\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated +Aug 17 22:45:55.585: INFO: Observed &Deployment event: ADDED +Aug 17 22:45:55.585: INFO: Observed Deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-17 22:45:53 +0000 UTC 2022-08-17 22:45:53 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-m4s8k-764bc7c4b7"} +Aug 17 22:45:55.585: INFO: Observed &Deployment event: MODIFIED +Aug 17 22:45:55.585: INFO: Observed Deployment test-deployment-m4s8k in namespace deployment-6304 
with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-17 22:45:53 +0000 UTC 2022-08-17 22:45:53 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-m4s8k-764bc7c4b7"} +Aug 17 22:45:55.585: INFO: Observed Deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-08-17 22:45:53 +0000 UTC 2022-08-17 22:45:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Aug 17 22:45:55.585: INFO: Observed &Deployment event: MODIFIED +Aug 17 22:45:55.585: INFO: Observed Deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-08-17 22:45:53 +0000 UTC 2022-08-17 22:45:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Aug 17 22:45:55.585: INFO: Observed Deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-17 22:45:53 +0000 UTC 2022-08-17 22:45:53 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-m4s8k-764bc7c4b7" is progressing.} +Aug 17 22:45:55.586: INFO: Observed &Deployment event: MODIFIED +Aug 17 22:45:55.586: INFO: Observed Deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-08-17 22:45:54 +0000 UTC 2022-08-17 22:45:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Aug 17 22:45:55.586: INFO: Observed Deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-17 22:45:54 +0000 UTC 2022-08-17 22:45:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-m4s8k-764bc7c4b7" has successfully progressed.} +Aug 17 22:45:55.586: INFO: Observed &Deployment event: MODIFIED +Aug 17 22:45:55.586: INFO: Observed Deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-08-17 22:45:54 +0000 UTC 2022-08-17 22:45:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Aug 17 22:45:55.586: INFO: Observed Deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-17 22:45:54 +0000 UTC 2022-08-17 22:45:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-m4s8k-764bc7c4b7" has successfully progressed.} +Aug 17 22:45:55.586: INFO: Found Deployment test-deployment-m4s8k in namespace deployment-6304 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Aug 17 22:45:55.586: INFO: Deployment test-deployment-m4s8k has an updated status +STEP: patching the Statefulset Status +Aug 17 22:45:55.586: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Aug 17 22:45:55.593: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched +Aug 17 22:45:55.595: INFO: Observed &Deployment event: ADDED +Aug 17 22:45:55.595: INFO: Observed deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-17 22:45:53 +0000 UTC 2022-08-17 22:45:53 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-m4s8k-764bc7c4b7"} +Aug 17 22:45:55.595: INFO: Observed &Deployment event: MODIFIED +Aug 17 22:45:55.595: INFO: Observed deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-17 22:45:53 +0000 UTC 2022-08-17 22:45:53 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-m4s8k-764bc7c4b7"} +Aug 17 22:45:55.595: INFO: Observed deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-08-17 22:45:53 +0000 UTC 2022-08-17 22:45:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Aug 17 22:45:55.596: INFO: Observed &Deployment event: MODIFIED +Aug 17 22:45:55.596: INFO: Observed deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-08-17 22:45:53 +0000 UTC 2022-08-17 22:45:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Aug 17 22:45:55.596: INFO: Observed deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-17 22:45:53 +0000 UTC 2022-08-17 22:45:53 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-m4s8k-764bc7c4b7" is progressing.} +Aug 17 22:45:55.596: INFO: Observed &Deployment event: MODIFIED +Aug 17 22:45:55.596: INFO: Observed deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-08-17 22:45:54 +0000 UTC 2022-08-17 22:45:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Aug 17 22:45:55.596: INFO: Observed deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-17 22:45:54 +0000 UTC 2022-08-17 22:45:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-m4s8k-764bc7c4b7" has successfully progressed.} +Aug 17 22:45:55.596: INFO: Observed &Deployment event: MODIFIED +Aug 17 22:45:55.596: INFO: Observed deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-08-17 22:45:54 +0000 UTC 2022-08-17 22:45:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Aug 17 22:45:55.596: INFO: Observed deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-08-17 22:45:54 +0000 UTC 2022-08-17 22:45:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-m4s8k-764bc7c4b7" has successfully progressed.} +Aug 17 22:45:55.596: INFO: Observed deployment test-deployment-m4s8k in namespace deployment-6304 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 
0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Aug 17 22:45:55.597: INFO: Observed &Deployment event: MODIFIED +Aug 17 22:45:55.597: INFO: Found deployment test-deployment-m4s8k in namespace deployment-6304 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Aug 17 22:45:55.597: INFO: Deployment test-deployment-m4s8k has a patched status +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 17 22:45:55.601: INFO: Deployment "test-deployment-m4s8k": +&Deployment{ObjectMeta:{test-deployment-m4s8k deployment-6304 dcea082b-da0e-4592-a442-f40c56e6a66a 19200 1 2022-08-17 22:45:53 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-08-17 22:45:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 22:45:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2022-08-17 22:45:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0041574a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
+
+Aug 17 22:45:55.604: INFO: New ReplicaSet "test-deployment-m4s8k-764bc7c4b7" of Deployment "test-deployment-m4s8k":
+&ReplicaSet{ObjectMeta:{test-deployment-m4s8k-764bc7c4b7 deployment-6304 a76892d4-5d0d-4a0d-823e-dfae6089d3e0 19190 1 2022-08-17 22:45:53 +0000 UTC map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-m4s8k dcea082b-da0e-4592-a442-f40c56e6a66a 0xc004157850 0xc004157851}] [] [{kube-controller-manager Update apps/v1 2022-08-17 22:45:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dcea082b-da0e-4592-a442-f40c56e6a66a\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 22:45:54 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 764bc7c4b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0041578f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
+Aug 17 22:45:55.607: INFO: Pod "test-deployment-m4s8k-764bc7c4b7-8dgqp" is available: 
+&Pod{ObjectMeta:{test-deployment-m4s8k-764bc7c4b7-8dgqp test-deployment-m4s8k-764bc7c4b7- deployment-6304 af4dd31e-c63f-4282-a1c2-bb6a35d34598 19189 0 2022-08-17 22:45:53 +0000 UTC map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [{apps/v1 ReplicaSet test-deployment-m4s8k-764bc7c4b7 a76892d4-5d0d-4a0d-823e-dfae6089d3e0 0xc004157ca0 0xc004157ca1}] [] [{kube-controller-manager Update v1 2022-08-17 22:45:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a76892d4-5d0d-4a0d-823e-dfae6089d3e0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-17 22:45:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bvlwb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bvlwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation
:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:45:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:45:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:45:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:45:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:192.168.1.39,StartTime:2022-08-17 22:45:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-17 22:45:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://9b72e6e2420a8e5a8c684dec13e450219e6f13cc542cba94ebda28da999560b0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:45:55.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-6304" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":34,"skipped":606,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:45:55.618: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name secret-test-fe1b1403-6218-4048-87ab-8949c4e27458 +STEP: Creating a pod to test consume secrets +Aug 17 22:45:55.655: INFO: Waiting up to 5m0s for pod "pod-secrets-b6ac43b0-a7c5-4714-8da5-1b45734a428c" in namespace "secrets-6030" to be "Succeeded or Failed" +Aug 17 22:45:55.662: INFO: Pod "pod-secrets-b6ac43b0-a7c5-4714-8da5-1b45734a428c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550179ms +Aug 17 22:45:57.668: INFO: Pod "pod-secrets-b6ac43b0-a7c5-4714-8da5-1b45734a428c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012716132s +Aug 17 22:45:59.673: INFO: Pod "pod-secrets-b6ac43b0-a7c5-4714-8da5-1b45734a428c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017551905s +STEP: Saw pod success +Aug 17 22:45:59.673: INFO: Pod "pod-secrets-b6ac43b0-a7c5-4714-8da5-1b45734a428c" satisfied condition "Succeeded or Failed" +Aug 17 22:45:59.676: INFO: Trying to get logs from node 195.17.65.231 pod pod-secrets-b6ac43b0-a7c5-4714-8da5-1b45734a428c container secret-volume-test: +STEP: delete the pod +Aug 17 22:45:59.695: INFO: Waiting for pod pod-secrets-b6ac43b0-a7c5-4714-8da5-1b45734a428c to disappear +Aug 17 22:45:59.700: INFO: Pod pod-secrets-b6ac43b0-a7c5-4714-8da5-1b45734a428c no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:45:59.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-6030" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":35,"skipped":615,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:45:59.713: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 22:45:59.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f70c82b-7b50-4890-bcd5-7cbbfae2b8fa" in namespace "projected-9329" to be "Succeeded or Failed" +Aug 17 22:45:59.754: INFO: Pod "downwardapi-volume-1f70c82b-7b50-4890-bcd5-7cbbfae2b8fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.271289ms +Aug 17 22:46:01.763: INFO: Pod "downwardapi-volume-1f70c82b-7b50-4890-bcd5-7cbbfae2b8fa": Phase="Running", Reason="", readiness=true. Elapsed: 2.012016369s +Aug 17 22:46:03.768: INFO: Pod "downwardapi-volume-1f70c82b-7b50-4890-bcd5-7cbbfae2b8fa": Phase="Running", Reason="", readiness=false. Elapsed: 4.016693933s +Aug 17 22:46:05.780: INFO: Pod "downwardapi-volume-1f70c82b-7b50-4890-bcd5-7cbbfae2b8fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028697771s +STEP: Saw pod success +Aug 17 22:46:05.780: INFO: Pod "downwardapi-volume-1f70c82b-7b50-4890-bcd5-7cbbfae2b8fa" satisfied condition "Succeeded or Failed" +Aug 17 22:46:05.784: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-1f70c82b-7b50-4890-bcd5-7cbbfae2b8fa container client-container: +STEP: delete the pod +Aug 17 22:46:05.809: INFO: Waiting for pod downwardapi-volume-1f70c82b-7b50-4890-bcd5-7cbbfae2b8fa to disappear +Aug 17 22:46:05.812: INFO: Pod downwardapi-volume-1f70c82b-7b50-4890-bcd5-7cbbfae2b8fa no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:46:05.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9329" for this suite. 
+ +• [SLOW TEST:6.112 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":36,"skipped":620,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:46:05.826: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:46:05.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-4529" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":37,"skipped":659,"failed":0} +SSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:46:05.901: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating projection with secret that has name projected-secret-test-map-9d112dae-f377-4355-aa2b-917ce08a07a6 +STEP: Creating a pod to test consume secrets +Aug 17 22:46:05.946: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00724152-9ab2-43f2-826b-41cee267ca8d" in namespace "projected-7827" to be "Succeeded or Failed" +Aug 17 22:46:05.955: INFO: Pod "pod-projected-secrets-00724152-9ab2-43f2-826b-41cee267ca8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.518657ms +Aug 17 22:46:07.962: INFO: Pod "pod-projected-secrets-00724152-9ab2-43f2-826b-41cee267ca8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015284079s +Aug 17 22:46:09.968: INFO: Pod "pod-projected-secrets-00724152-9ab2-43f2-826b-41cee267ca8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022026309s +STEP: Saw pod success +Aug 17 22:46:09.969: INFO: Pod "pod-projected-secrets-00724152-9ab2-43f2-826b-41cee267ca8d" satisfied condition "Succeeded or Failed" +Aug 17 22:46:09.971: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-secrets-00724152-9ab2-43f2-826b-41cee267ca8d container projected-secret-volume-test: +STEP: delete the pod +Aug 17 22:46:09.999: INFO: Waiting for pod pod-projected-secrets-00724152-9ab2-43f2-826b-41cee267ca8d to disappear +Aug 17 22:46:10.002: INFO: Pod pod-projected-secrets-00724152-9ab2-43f2-826b-41cee267ca8d no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:46:10.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7827" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":38,"skipped":664,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:46:10.017: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a test externalName service +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1158.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1158.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local; sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 17 22:46:18.086: INFO: DNS probes using dns-test-97038eb8-055b-497c-aad5-6a8bfa26f04a succeeded + +STEP: deleting the pod +STEP: changing the externalName to bar.example.com +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1158.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1158.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local; sleep 1; done + +STEP: creating a second pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 17 22:46:20.138: INFO: File wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local from pod dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 17 22:46:20.142: INFO: File jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local from pod dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 17 22:46:20.142: INFO: Lookups using dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e failed for: [wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local] + +Aug 17 22:46:25.149: INFO: File wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local from pod dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 17 22:46:25.153: INFO: File jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local from pod dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e contains 'foo.example.com. 
+' instead of 'bar.example.com.' +Aug 17 22:46:25.153: INFO: Lookups using dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e failed for: [wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local] + +Aug 17 22:46:30.149: INFO: File wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local from pod dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 17 22:46:30.153: INFO: File jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local from pod dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 17 22:46:30.153: INFO: Lookups using dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e failed for: [wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local] + +Aug 17 22:46:35.147: INFO: File wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local from pod dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 17 22:46:35.151: INFO: File jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local from pod dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 17 22:46:35.151: INFO: Lookups using dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e failed for: [wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local] + +Aug 17 22:46:40.147: INFO: File wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local from pod dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 17 22:46:40.152: INFO: File jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local from pod dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e contains 'foo.example.com. +' instead of 'bar.example.com.' +Aug 17 22:46:40.152: INFO: Lookups using dns-1158/dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e failed for: [wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local] + +Aug 17 22:46:45.157: INFO: DNS probes using dns-test-2c8b848e-d78a-41e8-9ed2-79429abc371e succeeded + +STEP: deleting the pod +STEP: changing the service to type=ClusterIP +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1158.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1158.svc.cluster.local; sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1158.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1158.svc.cluster.local; sleep 1; done + +STEP: creating a third pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 17 22:46:47.236: INFO: DNS probes using dns-test-e9440109-ef58-404f-8ae8-950b3f6af615 succeeded + +STEP: deleting the pod +STEP: deleting the test externalName service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:46:47.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-1158" for this suite. 
+ +• [SLOW TEST:37.278 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide DNS for ExternalName services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":39,"skipped":675,"failed":0} +SSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:46:47.295: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:46:47.322: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota +STEP: Checking rc "condition-test" has the desired failure condition set +STEP: Scaling down rc "condition-test" to satisfy pod quota +Aug 17 22:46:49.375: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:46:50.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-2988" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":40,"skipped":678,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:46:50.401: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 22:46:50.778: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 22:46:53.821: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod that should be denied by the webhook +STEP: create a pod that causes the webhook to hang +STEP: create a configmap that should be denied by the webhook +STEP: create a configmap that should be admitted by the webhook +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook +STEP: create a namespace that bypass the webhook +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:47:03.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5145" for this suite. +STEP: Destroying namespace "webhook-5145-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:13.623 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny pod and configmap creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":41,"skipped":728,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:47:04.025: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name secret-test-map-bbddc1e2-a601-4e87-bec0-95d662c7751a +STEP: Creating a pod to test consume secrets +Aug 17 22:47:04.082: INFO: Waiting up to 5m0s for pod "pod-secrets-12dd06e6-0151-46de-bbf5-351339eeae21" in namespace "secrets-376" to be "Succeeded or Failed" +Aug 17 22:47:04.085: INFO: Pod "pod-secrets-12dd06e6-0151-46de-bbf5-351339eeae21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.733448ms +Aug 17 22:47:06.090: INFO: Pod "pod-secrets-12dd06e6-0151-46de-bbf5-351339eeae21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00770253s +Aug 17 22:47:08.096: INFO: Pod "pod-secrets-12dd06e6-0151-46de-bbf5-351339eeae21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014235598s +STEP: Saw pod success +Aug 17 22:47:08.096: INFO: Pod "pod-secrets-12dd06e6-0151-46de-bbf5-351339eeae21" satisfied condition "Succeeded or Failed" +Aug 17 22:47:08.100: INFO: Trying to get logs from node 195.17.65.231 pod pod-secrets-12dd06e6-0151-46de-bbf5-351339eeae21 container secret-volume-test: +STEP: delete the pod +Aug 17 22:47:08.123: INFO: Waiting for pod pod-secrets-12dd06e6-0151-46de-bbf5-351339eeae21 to disappear +Aug 17 22:47:08.127: INFO: Pod pod-secrets-12dd06e6-0151-46de-bbf5-351339eeae21 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:47:08.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-376" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":42,"skipped":787,"failed":0} +S +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:47:08.140: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 +STEP: create the container to handle the HTTPGet hook request. +Aug 17 22:47:08.184: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:47:10.190: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the pod with lifecycle hook +Aug 17 22:47:10.202: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:47:12.207: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Aug 17 22:47:12.227: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Aug 17 22:47:12.230: INFO: Pod pod-with-poststart-exec-hook still exists +Aug 17 22:47:14.230: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Aug 17 22:47:14.236: INFO: Pod pod-with-poststart-exec-hook still exists +Aug 17 22:47:16.230: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Aug 17 22:47:16.235: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:47:16.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-6820" for this suite. 
+ +• [SLOW TEST:8.116 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":43,"skipped":788,"failed":0} +SSS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:47:16.256: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename podtemplate +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should delete a collection of pod templates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Create set of pod templates +Aug 17 22:47:16.294: INFO: created test-podtemplate-1 +Aug 17 22:47:16.300: INFO: created test-podtemplate-2 +Aug 17 22:47:16.304: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace +STEP: delete collection of pod templates +Aug 17 22:47:16.308: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity +Aug 17 22:47:16.327: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:47:16.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-7877" for this suite. +•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":44,"skipped":791,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:47:16.341: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicaSet +STEP: Ensuring resource quota status captures replicaset creation +STEP: Deleting a ReplicaSet +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:47:27.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4666" for this suite. + +• [SLOW TEST:11.100 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replica set. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":346,"completed":45,"skipped":863,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:47:27.442: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating service multi-endpoint-test in namespace services-2763 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2763 to expose endpoints map[] +Aug 17 22:47:27.503: INFO: successfully validated that service multi-endpoint-test in namespace services-2763 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-2763 +Aug 17 22:47:27.517: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:47:29.528: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2763 to expose endpoints map[pod1:[100]] +Aug 17 22:47:29.541: INFO: successfully validated that service multi-endpoint-test in namespace services-2763 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-2763 +Aug 17 22:47:29.553: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:47:31.565: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2763 to expose endpoints map[pod1:[100] 
pod2:[101]] +Aug 17 22:47:31.583: INFO: successfully validated that service multi-endpoint-test in namespace services-2763 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods +Aug 17 22:47:31.583: INFO: Creating new exec pod +Aug 17 22:47:34.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-2763 exec execpodxkjlp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' +Aug 17 22:47:34.745: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Aug 17 22:47:34.745: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 22:47:34.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-2763 exec execpodxkjlp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.101.202.144 80' +Aug 17 22:47:34.896: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 10.101.202.144 80\nConnection to 10.101.202.144 80 port [tcp/http] succeeded!\n" +Aug 17 22:47:34.896: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 22:47:34.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-2763 exec execpodxkjlp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' +Aug 17 22:47:35.030: INFO: stderr: "+ nc -v -t+ -w 2echo multi-endpoint-test hostName 81\n\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Aug 17 22:47:35.030: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 22:47:35.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-2763 exec execpodxkjlp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.101.202.144 81' +Aug 17 22:47:35.164: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.101.202.144 81\nConnection to 10.101.202.144 81 port [tcp/*] succeeded!\n" +Aug 17 22:47:35.165: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-2763 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2763 to expose endpoints map[pod2:[101]] +Aug 17 22:47:36.207: INFO: successfully validated that service multi-endpoint-test in namespace services-2763 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-2763 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2763 to expose endpoints map[] +Aug 17 22:47:37.235: INFO: successfully validated that service multi-endpoint-test in namespace services-2763 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:47:37.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-2763" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:9.835 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should serve multiport endpoints from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":346,"completed":46,"skipped":868,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:47:37.278: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name projected-configmap-test-volume-map-13318caf-5542-4173-9203-a4309c278e94 +STEP: Creating a pod to test consume configMaps +Aug 17 22:47:37.320: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f7e22e9-edd8-46c5-96a3-b6dd29be05a5" in namespace "projected-2626" to be "Succeeded or Failed" +Aug 17 22:47:37.323: INFO: Pod "pod-projected-configmaps-0f7e22e9-edd8-46c5-96a3-b6dd29be05a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.549945ms +Aug 17 22:47:39.331: INFO: Pod "pod-projected-configmaps-0f7e22e9-edd8-46c5-96a3-b6dd29be05a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010033433s +Aug 17 22:47:41.338: INFO: Pod "pod-projected-configmaps-0f7e22e9-edd8-46c5-96a3-b6dd29be05a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017042329s +STEP: Saw pod success +Aug 17 22:47:41.338: INFO: Pod "pod-projected-configmaps-0f7e22e9-edd8-46c5-96a3-b6dd29be05a5" satisfied condition "Succeeded or Failed" +Aug 17 22:47:41.340: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-configmaps-0f7e22e9-edd8-46c5-96a3-b6dd29be05a5 container agnhost-container: +STEP: delete the pod +Aug 17 22:47:41.371: INFO: Waiting for pod pod-projected-configmaps-0f7e22e9-edd8-46c5-96a3-b6dd29be05a5 to disappear +Aug 17 22:47:41.376: INFO: Pod pod-projected-configmaps-0f7e22e9-edd8-46c5-96a3-b6dd29be05a5 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:47:41.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2626" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":47,"skipped":882,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:47:41.391: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name secret-test-map-2a6c1342-9dbd-41d8-a311-471eae927f47 +STEP: Creating a pod to test consume secrets +Aug 17 22:47:41.435: INFO: Waiting up to 5m0s for pod "pod-secrets-4f4a8332-625f-40bc-bf85-aea155d919e0" in namespace "secrets-136" to be "Succeeded or Failed" +Aug 17 22:47:41.442: INFO: Pod "pod-secrets-4f4a8332-625f-40bc-bf85-aea155d919e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.472469ms +Aug 17 22:47:43.450: INFO: Pod "pod-secrets-4f4a8332-625f-40bc-bf85-aea155d919e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014549313s +Aug 17 22:47:45.459: INFO: Pod "pod-secrets-4f4a8332-625f-40bc-bf85-aea155d919e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024135627s +STEP: Saw pod success +Aug 17 22:47:45.459: INFO: Pod "pod-secrets-4f4a8332-625f-40bc-bf85-aea155d919e0" satisfied condition "Succeeded or Failed" +Aug 17 22:47:45.466: INFO: Trying to get logs from node 195.17.65.231 pod pod-secrets-4f4a8332-625f-40bc-bf85-aea155d919e0 container secret-volume-test: +STEP: delete the pod +Aug 17 22:47:45.496: INFO: Waiting for pod pod-secrets-4f4a8332-625f-40bc-bf85-aea155d919e0 to disappear +Aug 17 22:47:45.500: INFO: Pod pod-secrets-4f4a8332-625f-40bc-bf85-aea155d919e0 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:47:45.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-136" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":48,"skipped":902,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:47:45.513: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename taint-single-pod +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 +Aug 17 22:47:45.542: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 17 22:48:45.583: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:48:45.588: INFO: Starting informer... +STEP: Starting pod... +Aug 17 22:48:45.806: INFO: Pod is running on 195.17.65.231. Tainting Node +STEP: Trying to apply a taint on the Node +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting short time to make sure Pod is queued for deletion +Aug 17 22:48:45.832: INFO: Pod wasn't evicted. Proceeding +Aug 17 22:48:45.832: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute +STEP: Waiting some time to make sure that toleration time passed. +Aug 17 22:50:00.860: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:50:00.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "taint-single-pod-1338" for this suite. 
+ +• [SLOW TEST:135.368 seconds] +[sig-node] NoExecuteTaintManager Single Pod [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 + removing taint cancels eviction [Disruptive] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":49,"skipped":919,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:50:00.885: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:50:00.915: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Aug 17 22:50:09.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7802 --namespace=crd-publish-openapi-7802 create -f -' +Aug 17 22:50:10.968: INFO: stderr: "" +Aug 17 22:50:10.968: INFO: stdout: "e2e-test-crd-publish-openapi-835-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Aug 17 22:50:10.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7802 --namespace=crd-publish-openapi-7802 delete e2e-test-crd-publish-openapi-835-crds test-cr' +Aug 17 22:50:11.048: INFO: stderr: "" +Aug 17 22:50:11.048: INFO: stdout: "e2e-test-crd-publish-openapi-835-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Aug 17 22:50:11.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7802 --namespace=crd-publish-openapi-7802 apply -f -' +Aug 17 22:50:11.319: INFO: stderr: "" +Aug 17 22:50:11.319: INFO: stdout: "e2e-test-crd-publish-openapi-835-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Aug 17 22:50:11.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7802 --namespace=crd-publish-openapi-7802 delete e2e-test-crd-publish-openapi-835-crds test-cr' +Aug 17 22:50:11.393: INFO: stderr: "" +Aug 17 22:50:11.393: INFO: stdout: "e2e-test-crd-publish-openapi-835-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Aug 17 22:50:11.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7802 explain 
e2e-test-crd-publish-openapi-835-crds' +Aug 17 22:50:11.634: INFO: stderr: "" +Aug 17 22:50:11.634: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-835-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:50:19.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7802" for this suite. + +• [SLOW TEST:18.255 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields at the schema root [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":50,"skipped":925,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:50:19.142: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod pod-subpath-test-projected-8g7m +STEP: Creating a pod to test atomic-volume-subpath +Aug 17 22:50:19.195: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8g7m" in namespace "subpath-7155" to be "Succeeded or Failed" +Aug 17 22:50:19.198: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.979921ms +Aug 17 22:50:21.208: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=true. Elapsed: 2.013309996s +Aug 17 22:50:23.218: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=true. Elapsed: 4.022791476s +Aug 17 22:50:25.227: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=true. Elapsed: 6.031957833s +Aug 17 22:50:27.235: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=true. Elapsed: 8.040613171s +Aug 17 22:50:29.245: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.0504105s +Aug 17 22:50:31.255: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=true. Elapsed: 12.059982553s +Aug 17 22:50:33.264: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=true. Elapsed: 14.069374827s +Aug 17 22:50:35.272: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=true. Elapsed: 16.077524193s +Aug 17 22:50:37.279: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=true. Elapsed: 18.083967723s +Aug 17 22:50:39.288: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=true. Elapsed: 20.093072844s +Aug 17 22:50:41.294: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Running", Reason="", readiness=false. Elapsed: 22.099069475s +Aug 17 22:50:43.304: INFO: Pod "pod-subpath-test-projected-8g7m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.108838218s +STEP: Saw pod success +Aug 17 22:50:43.304: INFO: Pod "pod-subpath-test-projected-8g7m" satisfied condition "Succeeded or Failed" +Aug 17 22:50:43.307: INFO: Trying to get logs from node 195.17.65.231 pod pod-subpath-test-projected-8g7m container test-container-subpath-projected-8g7m: +STEP: delete the pod +Aug 17 22:50:43.339: INFO: Waiting for pod pod-subpath-test-projected-8g7m to disappear +Aug 17 22:50:43.343: INFO: Pod pod-subpath-test-projected-8g7m no longer exists +STEP: Deleting pod pod-subpath-test-projected-8g7m +Aug 17 22:50:43.343: INFO: Deleting pod "pod-subpath-test-projected-8g7m" in namespace "subpath-7155" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:50:43.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-7155" for this suite. 
+ +• [SLOW TEST:24.216 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":346,"completed":51,"skipped":970,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:50:43.359: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating service in namespace services-7636 +STEP: creating service affinity-clusterip in namespace services-7636 +STEP: creating replication controller affinity-clusterip in namespace services-7636 +I0817 22:50:43.420626 20 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-7636, replica count: 3 +I0817 22:50:46.473307 20 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0817 22:50:49.473664 20 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 17 22:50:49.482: INFO: Creating new exec pod +Aug 17 22:50:52.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-7636 exec execpod-affinity4qtw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' +Aug 17 22:50:52.651: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Aug 17 22:50:52.651: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 22:50:52.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-7636 exec execpod-affinity4qtw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.108.233.185 80' +Aug 17 22:50:52.789: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.108.233.185 80\nConnection to 10.108.233.185 80 port [tcp/http] succeeded!\n" +Aug 17 
22:50:52.789: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 22:50:52.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-7636 exec execpod-affinity4qtw6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.233.185:80/ ; done' +Aug 17 22:50:52.984: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.233.185:80/\n" +Aug 17 22:50:52.984: INFO: stdout: "\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh\naffinity-clusterip-cljgh" +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Received response from host: affinity-clusterip-cljgh +Aug 17 22:50:52.984: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-7636, will wait for the garbage collector to delete the pods +Aug 17 22:50:53.080: INFO: Deleting 
ReplicationController affinity-clusterip took: 8.501935ms +Aug 17 22:50:53.181: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.676303ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:50:54.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-7636" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:11.571 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":52,"skipped":1028,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:50:54.932: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 22:50:55.383: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 22:50:58.418: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: fetching the /apis discovery document +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/admissionregistration.k8s.io discovery document +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document +[AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:50:58.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5935" for this suite. +STEP: Destroying namespace "webhook-5935-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":53,"skipped":1044,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:50:58.496: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test override command +Aug 17 22:50:58.534: INFO: Waiting up to 5m0s for pod "client-containers-124d23a4-222c-4fbc-a0b4-c9e30c58dfa3" in namespace "containers-5031" to be "Succeeded or Failed" +Aug 17 22:50:58.542: INFO: Pod "client-containers-124d23a4-222c-4fbc-a0b4-c9e30c58dfa3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.393206ms +Aug 17 22:51:00.549: INFO: Pod "client-containers-124d23a4-222c-4fbc-a0b4-c9e30c58dfa3": Phase="Running", Reason="", readiness=true. Elapsed: 2.014939875s +Aug 17 22:51:02.557: INFO: Pod "client-containers-124d23a4-222c-4fbc-a0b4-c9e30c58dfa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022794756s +STEP: Saw pod success +Aug 17 22:51:02.557: INFO: Pod "client-containers-124d23a4-222c-4fbc-a0b4-c9e30c58dfa3" satisfied condition "Succeeded or Failed" +Aug 17 22:51:02.560: INFO: Trying to get logs from node 195.17.65.231 pod client-containers-124d23a4-222c-4fbc-a0b4-c9e30c58dfa3 container agnhost-container: +STEP: delete the pod +Aug 17 22:51:02.599: INFO: Waiting for pod client-containers-124d23a4-222c-4fbc-a0b4-c9e30c58dfa3 to disappear +Aug 17 22:51:02.606: INFO: Pod client-containers-124d23a4-222c-4fbc-a0b4-c9e30c58dfa3 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:51:02.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-5031" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":54,"skipped":1060,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:51:02.622: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating the pod +Aug 17 22:51:02.672: INFO: The status of Pod labelsupdated7fe533d-8bbf-4d27-8be3-e7c52383a69c is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:51:04.678: INFO: The status of Pod labelsupdated7fe533d-8bbf-4d27-8be3-e7c52383a69c is Running (Ready = true) +Aug 17 22:51:05.205: INFO: Successfully updated pod "labelsupdated7fe533d-8bbf-4d27-8be3-e7c52383a69c" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:51:09.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9095" for this suite. 
+ +• [SLOW TEST:6.630 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":55,"skipped":1148,"failed":0} +SS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:51:09.253: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: validating cluster-info +Aug 17 22:51:09.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-602 cluster-info' +Aug 17 22:51:09.349: INFO: stderr: "" +Aug 17 22:51:09.349: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:51:09.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-602" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":346,"completed":56,"skipped":1150,"failed":0} +SSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:51:09.360: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should provide secure master service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:51:09.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5761" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":346,"completed":57,"skipped":1156,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:51:09.422: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Aug 17 22:51:09.470: INFO: The status of Pod pod-update-activedeadlineseconds-0367388b-e7ff-4c34-998a-b65462afdb85 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:51:11.479: INFO: The status of Pod pod-update-activedeadlineseconds-0367388b-e7ff-4c34-998a-b65462afdb85 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Aug 17 22:51:12.016: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0367388b-e7ff-4c34-998a-b65462afdb85" +Aug 17 22:51:12.016: INFO: Waiting up to 5m0s for pod 
"pod-update-activedeadlineseconds-0367388b-e7ff-4c34-998a-b65462afdb85" in namespace "pods-6360" to be "terminated due to deadline exceeded" +Aug 17 22:51:12.020: INFO: Pod "pod-update-activedeadlineseconds-0367388b-e7ff-4c34-998a-b65462afdb85": Phase="Running", Reason="", readiness=true. Elapsed: 3.912982ms +Aug 17 22:51:14.026: INFO: Pod "pod-update-activedeadlineseconds-0367388b-e7ff-4c34-998a-b65462afdb85": Phase="Running", Reason="", readiness=true. Elapsed: 2.00980802s +Aug 17 22:51:16.031: INFO: Pod "pod-update-activedeadlineseconds-0367388b-e7ff-4c34-998a-b65462afdb85": Phase="Running", Reason="", readiness=true. Elapsed: 4.015040952s +Aug 17 22:51:18.039: INFO: Pod "pod-update-activedeadlineseconds-0367388b-e7ff-4c34-998a-b65462afdb85": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.023142601s +Aug 17 22:51:18.039: INFO: Pod "pod-update-activedeadlineseconds-0367388b-e7ff-4c34-998a-b65462afdb85" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:51:18.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-6360" for this suite. + +• [SLOW TEST:8.635 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":58,"skipped":1163,"failed":0} +S +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:51:18.057: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename security-context +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Aug 17 22:51:18.091: INFO: Waiting up to 5m0s for pod "security-context-1033d0f5-7612-4218-ba99-d7f315ca426b" in namespace "security-context-3337" to be "Succeeded or Failed" +Aug 17 22:51:18.098: INFO: Pod "security-context-1033d0f5-7612-4218-ba99-d7f315ca426b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.449828ms +Aug 17 22:51:20.104: INFO: Pod "security-context-1033d0f5-7612-4218-ba99-d7f315ca426b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013021954s +Aug 17 22:51:22.110: INFO: Pod "security-context-1033d0f5-7612-4218-ba99-d7f315ca426b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019155028s +STEP: Saw pod success +Aug 17 22:51:22.110: INFO: Pod "security-context-1033d0f5-7612-4218-ba99-d7f315ca426b" satisfied condition "Succeeded or Failed" +Aug 17 22:51:22.117: INFO: Trying to get logs from node 195.17.65.231 pod security-context-1033d0f5-7612-4218-ba99-d7f315ca426b container test-container: +STEP: delete the pod +Aug 17 22:51:22.142: INFO: Waiting for pod security-context-1033d0f5-7612-4218-ba99-d7f315ca426b to disappear +Aug 17 22:51:22.145: INFO: Pod security-context-1033d0f5-7612-4218-ba99-d7f315ca426b no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:51:22.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-3337" for this suite. +•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":59,"skipped":1164,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:51:22.157: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 22:51:22.713: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 22:51:25.751: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API +STEP: create a configmap that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:51:25.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2454" for this suite. +STEP: Destroying namespace "webhook-2454-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":60,"skipped":1169,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:51:26.096: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating replication controller my-hostname-basic-669008e1-0f2e-49a0-a848-e19bb953f27c +Aug 17 22:51:26.135: INFO: Pod name my-hostname-basic-669008e1-0f2e-49a0-a848-e19bb953f27c: Found 0 pods out of 1 +Aug 17 22:51:31.142: INFO: Pod name my-hostname-basic-669008e1-0f2e-49a0-a848-e19bb953f27c: Found 1 pods out of 1 +Aug 17 22:51:31.142: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-669008e1-0f2e-49a0-a848-e19bb953f27c" are running +Aug 17 22:51:31.145: INFO: Pod "my-hostname-basic-669008e1-0f2e-49a0-a848-e19bb953f27c-jwhql" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-17 22:51:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-17 22:51:27 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-17 22:51:27 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-17 22:51:26 +0000 UTC Reason: Message:}]) +Aug 17 22:51:31.146: INFO: Trying to dial the pod +Aug 17 22:51:36.162: INFO: Controller my-hostname-basic-669008e1-0f2e-49a0-a848-e19bb953f27c: Got expected result from replica 1 [my-hostname-basic-669008e1-0f2e-49a0-a848-e19bb953f27c-jwhql]: "my-hostname-basic-669008e1-0f2e-49a0-a848-e19bb953f27c-jwhql", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:51:36.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-2885" for this suite. 
+ +• [SLOW TEST:10.077 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":61,"skipped":1185,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:51:36.175: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name projected-configmap-test-volume-map-0bb7b78c-fe36-4943-a933-527ce1ab1145 +STEP: Creating a pod to test consume configMaps +Aug 17 22:51:36.214: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cb9ad7a4-5a03-4888-8544-24fdbf38871e" in namespace "projected-7617" to be "Succeeded or Failed" +Aug 17 22:51:36.217: INFO: Pod "pod-projected-configmaps-cb9ad7a4-5a03-4888-8544-24fdbf38871e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.600188ms +Aug 17 22:51:38.223: INFO: Pod "pod-projected-configmaps-cb9ad7a4-5a03-4888-8544-24fdbf38871e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008531488s +Aug 17 22:51:40.229: INFO: Pod "pod-projected-configmaps-cb9ad7a4-5a03-4888-8544-24fdbf38871e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014347522s +STEP: Saw pod success +Aug 17 22:51:40.229: INFO: Pod "pod-projected-configmaps-cb9ad7a4-5a03-4888-8544-24fdbf38871e" satisfied condition "Succeeded or Failed" +Aug 17 22:51:40.233: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-configmaps-cb9ad7a4-5a03-4888-8544-24fdbf38871e container agnhost-container: +STEP: delete the pod +Aug 17 22:51:40.257: INFO: Waiting for pod pod-projected-configmaps-cb9ad7a4-5a03-4888-8544-24fdbf38871e to disappear +Aug 17 22:51:40.262: INFO: Pod pod-projected-configmaps-cb9ad7a4-5a03-4888-8544-24fdbf38871e no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:51:40.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7617" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":62,"skipped":1208,"failed":0} +SSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:51:40.275: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation +Aug 17 22:51:40.302: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation +Aug 17 22:52:06.857: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 22:52:14.525: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:52:42.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1175" for this suite. 
+ +• [SLOW TEST:62.463 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group but different versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":63,"skipped":1211,"failed":0} +SSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:52:42.740: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 22:52:42.780: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e64f3b1-56e9-4173-8bea-f956f4d46ab9" in namespace "downward-api-4400" to be "Succeeded or Failed" +Aug 17 22:52:42.783: INFO: Pod "downwardapi-volume-6e64f3b1-56e9-4173-8bea-f956f4d46ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.059674ms +Aug 17 22:52:44.793: INFO: Pod "downwardapi-volume-6e64f3b1-56e9-4173-8bea-f956f4d46ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012602402s +Aug 17 22:52:46.798: INFO: Pod "downwardapi-volume-6e64f3b1-56e9-4173-8bea-f956f4d46ab9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017558626s +STEP: Saw pod success +Aug 17 22:52:46.798: INFO: Pod "downwardapi-volume-6e64f3b1-56e9-4173-8bea-f956f4d46ab9" satisfied condition "Succeeded or Failed" +Aug 17 22:52:46.801: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-6e64f3b1-56e9-4173-8bea-f956f4d46ab9 container client-container: +STEP: delete the pod +Aug 17 22:52:46.837: INFO: Waiting for pod downwardapi-volume-6e64f3b1-56e9-4173-8bea-f956f4d46ab9 to disappear +Aug 17 22:52:46.840: INFO: Pod downwardapi-volume-6e64f3b1-56e9-4173-8bea-f956f4d46ab9 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:52:46.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4400" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":64,"skipped":1218,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:52:46.852: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation +Aug 17 22:52:46.874: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 22:52:53.784: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:20.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4908" for this suite. 
+ +• [SLOW TEST:33.851 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of different groups [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":65,"skipped":1224,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:20.718: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating the pod +Aug 17 22:53:20.758: INFO: The status of Pod annotationupdate2723815e-272a-4fc9-985e-6dc9a1b2f8a9 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:53:22.767: INFO: The status of Pod annotationupdate2723815e-272a-4fc9-985e-6dc9a1b2f8a9 is Running (Ready = true) +Aug 17 22:53:23.302: INFO: Successfully updated pod "annotationupdate2723815e-272a-4fc9-985e-6dc9a1b2f8a9" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:27.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8599" for this suite. 
+ +• [SLOW TEST:6.621 seconds] +[sig-storage] Projected downwardAPI +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":66,"skipped":1388,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:27.340: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl can dry-run update Pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Aug 17 22:53:27.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7932 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Aug 17 22:53:27.449: INFO: stderr: "" +Aug 17 22:53:27.449: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run +Aug 17 22:53:27.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7932 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server' +Aug 17 22:53:28.871: INFO: stderr: "" +Aug 17 22:53:28.871: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Aug 17 22:53:28.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7932 delete pods e2e-test-httpd-pod' +Aug 17 22:53:30.010: INFO: stderr: "" +Aug 17 22:53:30.010: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:30.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7932" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":67,"skipped":1424,"failed":0} +S +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:30.032: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name configmap-test-volume-map-c055e39b-c2f4-4ae0-b805-e922388b86a4 +STEP: Creating a pod to test consume configMaps +Aug 17 22:53:30.074: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4e69137-83dc-43b8-80ba-e3498cabbceb" in namespace "configmap-1531" to be "Succeeded or Failed" +Aug 17 22:53:30.077: INFO: Pod "pod-configmaps-c4e69137-83dc-43b8-80ba-e3498cabbceb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.914918ms +Aug 17 22:53:32.086: INFO: Pod "pod-configmaps-c4e69137-83dc-43b8-80ba-e3498cabbceb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01121293s +Aug 17 22:53:34.095: INFO: Pod "pod-configmaps-c4e69137-83dc-43b8-80ba-e3498cabbceb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020974615s +STEP: Saw pod success +Aug 17 22:53:34.095: INFO: Pod "pod-configmaps-c4e69137-83dc-43b8-80ba-e3498cabbceb" satisfied condition "Succeeded or Failed" +Aug 17 22:53:34.098: INFO: Trying to get logs from node 195.17.65.231 pod pod-configmaps-c4e69137-83dc-43b8-80ba-e3498cabbceb container agnhost-container: +STEP: delete the pod +Aug 17 22:53:34.121: INFO: Waiting for pod pod-configmaps-c4e69137-83dc-43b8-80ba-e3498cabbceb to disappear +Aug 17 22:53:34.124: INFO: Pod pod-configmaps-c4e69137-83dc-43b8-80ba-e3498cabbceb no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:34.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1531" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":68,"skipped":1425,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:34.135: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name secret-test-92851c9d-324d-4918-8418-09b5cbf00f68 +STEP: Creating a pod to test consume secrets +Aug 17 22:53:34.198: INFO: Waiting up to 5m0s for pod "pod-secrets-2343881b-fbbf-48f0-863e-a973e90018ec" in namespace "secrets-9365" to be "Succeeded or Failed" +Aug 17 22:53:34.203: INFO: Pod "pod-secrets-2343881b-fbbf-48f0-863e-a973e90018ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476377ms +Aug 17 22:53:36.212: INFO: Pod "pod-secrets-2343881b-fbbf-48f0-863e-a973e90018ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01413039s +Aug 17 22:53:38.217: INFO: Pod "pod-secrets-2343881b-fbbf-48f0-863e-a973e90018ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019182249s +STEP: Saw pod success +Aug 17 22:53:38.217: INFO: Pod "pod-secrets-2343881b-fbbf-48f0-863e-a973e90018ec" satisfied condition "Succeeded or Failed" +Aug 17 22:53:38.222: INFO: Trying to get logs from node 195.17.65.231 pod pod-secrets-2343881b-fbbf-48f0-863e-a973e90018ec container secret-volume-test: +STEP: delete the pod +Aug 17 22:53:38.239: INFO: Waiting for pod pod-secrets-2343881b-fbbf-48f0-863e-a973e90018ec to disappear +Aug 17 22:53:38.244: INFO: Pod pod-secrets-2343881b-fbbf-48f0-863e-a973e90018ec no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:38.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-9365" for this suite. +STEP: Destroying namespace "secret-namespace-3505" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":69,"skipped":1437,"failed":0} + +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:38.264: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Aug 17 22:53:42.337: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:42.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-9758" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":70,"skipped":1437,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:42.375: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0644 on node default medium +Aug 17 22:53:42.411: INFO: Waiting up to 5m0s for pod "pod-cf110469-6b91-4c74-9ec7-a4c50ff0ad29" in namespace "emptydir-6569" to be "Succeeded or Failed" +Aug 17 22:53:42.433: INFO: Pod "pod-cf110469-6b91-4c74-9ec7-a4c50ff0ad29": Phase="Pending", Reason="", readiness=false. Elapsed: 22.048051ms +Aug 17 22:53:44.443: INFO: Pod "pod-cf110469-6b91-4c74-9ec7-a4c50ff0ad29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031656772s +Aug 17 22:53:46.450: INFO: Pod "pod-cf110469-6b91-4c74-9ec7-a4c50ff0ad29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039005844s +STEP: Saw pod success +Aug 17 22:53:46.450: INFO: Pod "pod-cf110469-6b91-4c74-9ec7-a4c50ff0ad29" satisfied condition "Succeeded or Failed" +Aug 17 22:53:46.454: INFO: Trying to get logs from node 195.17.65.231 pod pod-cf110469-6b91-4c74-9ec7-a4c50ff0ad29 container test-container: +STEP: delete the pod +Aug 17 22:53:46.475: INFO: Waiting for pod pod-cf110469-6b91-4c74-9ec7-a4c50ff0ad29 to disappear +Aug 17 22:53:46.478: INFO: Pod pod-cf110469-6b91-4c74-9ec7-a4c50ff0ad29 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:46.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6569" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":71,"skipped":1476,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:46.491: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:53:46.537: INFO: Waiting up to 5m0s for pod "busybox-user-65534-dba1a337-af72-44ae-8bdb-b9fb5d07c780" in namespace "security-context-test-6336" to be "Succeeded or Failed" +Aug 17 22:53:46.539: INFO: Pod "busybox-user-65534-dba1a337-af72-44ae-8bdb-b9fb5d07c780": Phase="Pending", Reason="", readiness=false. Elapsed: 2.523002ms +Aug 17 22:53:48.546: INFO: Pod "busybox-user-65534-dba1a337-af72-44ae-8bdb-b9fb5d07c780": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009280318s +Aug 17 22:53:50.554: INFO: Pod "busybox-user-65534-dba1a337-af72-44ae-8bdb-b9fb5d07c780": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017597506s +Aug 17 22:53:50.555: INFO: Pod "busybox-user-65534-dba1a337-af72-44ae-8bdb-b9fb5d07c780" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:50.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-6336" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":72,"skipped":1490,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:50.569: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:53:50.613: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Aug 17 22:53:55.623: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Aug 17 22:53:55.623: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 17 22:53:55.649: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-8122 8b879ffc-8e75-4147-a655-7aedc97d9eb9 25708 1 2022-08-17 22:53:55 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2022-08-17 22:53:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003dd9b78 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + +Aug 17 22:53:55.652: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. +Aug 17 22:53:55.652: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Aug 17 22:53:55.652: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-8122 53a97d7b-c33f-421c-a8e2-fbf96bf08383 25710 1 2022-08-17 22:53:50 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 8b879ffc-8e75-4147-a655-7aedc97d9eb9 0xc000671087 0xc000671088}] [] [{e2e.test Update apps/v1 2022-08-17 22:53:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 22:53:52 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2022-08-17 22:53:55 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"8b879ffc-8e75-4147-a655-7aedc97d9eb9\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0041f0088 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Aug 17 22:53:55.655: INFO: Pod "test-cleanup-controller-q6pjb" is available: +&Pod{ObjectMeta:{test-cleanup-controller-q6pjb test-cleanup-controller- deployment-8122 9e5a90bc-deb5-4f7a-a423-fcc237e98ed0 25665 0 2022-08-17 22:53:50 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 53a97d7b-c33f-421c-a8e2-fbf96bf08383 0xc003b76e47 0xc003b76e48}] [] [{kube-controller-manager Update v1 2022-08-17 22:53:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53a97d7b-c33f-421c-a8e2-fbf96bf08383\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-17 22:53:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.49\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gmdnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gmdnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePu
llSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:53:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:53:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:53:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 22:53:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:192.168.1.49,StartTime:2022-08-17 22:53:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-17 22:53:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://9285b3ad6d625f2aa3f15a2d42dde355aa2c8a03ccae1cd09d22147eb540ab86,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:55.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-8122" for this suite. 
+ +• [SLOW TEST:5.102 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":73,"skipped":1519,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:55.674: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:53:57.727: INFO: Deleting pod "var-expansion-de2dbb66-400e-4c8f-bb30-264ca45abbbb" in namespace "var-expansion-3972" +Aug 17 22:53:57.740: INFO: Wait up to 5m0s for pod "var-expansion-de2dbb66-400e-4c8f-bb30-264ca45abbbb" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:59.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-3972" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":74,"skipped":1615,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] Ingress API + should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:59.768: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename ingress +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support creating Ingress API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Aug 17 22:53:59.828: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Aug 17 22:53:59.833: INFO: starting watch +STEP: patching +STEP: updating +Aug 17 22:53:59.848: INFO: waiting for watch events with expected annotations +Aug 17 22:53:59.849: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] Ingress API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:53:59.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingress-6665" for this suite. 
+•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":75,"skipped":1628,"failed":0} + +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:53:59.908: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 17 22:54:03.985: INFO: DNS probes using dns-9770/dns-test-82d6dd33-5957-4343-9196-992c94879d79 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:54:04.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9770" for this suite. 
+•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":346,"completed":76,"skipped":1628,"failed":0} +SSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:54:04.022: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name cm-test-opt-del-b1a252ca-7235-4318-b731-d9582d3d0d21 +STEP: Creating configMap with name cm-test-opt-upd-33add91e-e27c-4a4c-8b78-6969652f9ece +STEP: Creating the pod +Aug 17 22:54:04.077: INFO: The status of Pod pod-configmaps-5a9d061c-ff9b-4c04-a364-002866669cb6 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:54:06.084: INFO: The status of Pod pod-configmaps-5a9d061c-ff9b-4c04-a364-002866669cb6 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:54:08.083: INFO: The status of Pod pod-configmaps-5a9d061c-ff9b-4c04-a364-002866669cb6 is Running (Ready = true) +STEP: Deleting configmap cm-test-opt-del-b1a252ca-7235-4318-b731-d9582d3d0d21 +STEP: Updating configmap cm-test-opt-upd-33add91e-e27c-4a4c-8b78-6969652f9ece +STEP: Creating configMap with name cm-test-opt-create-661ce2d1-b730-430e-a3d6-09b729555405 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:55:12.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7196" for this suite. 
+ +• [SLOW TEST:68.424 seconds] +[sig-storage] ConfigMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":77,"skipped":1633,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:55:12.446: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should be updated [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Aug 17 22:55:12.492: INFO: The status of Pod pod-update-b88f8751-4d0c-42ef-b757-ad55337d3017 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:55:14.498: INFO: The status of Pod pod-update-b88f8751-4d0c-42ef-b757-ad55337d3017 is Running (Ready = true) +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Aug 17 22:55:15.024: INFO: Successfully updated pod "pod-update-b88f8751-4d0c-42ef-b757-ad55337d3017" +STEP: verifying the updated pod is in kubernetes +Aug 17 22:55:15.038: INFO: Pod update OK +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:55:15.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-6663" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":78,"skipped":1680,"failed":0} +SSS +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:55:15.050: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename security-context +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser +Aug 17 22:55:15.099: INFO: Waiting up to 5m0s for pod "security-context-5011c74f-9a26-407b-be2a-90d68f808ed5" in namespace "security-context-6489" to be "Succeeded or Failed" +Aug 17 22:55:15.108: INFO: Pod "security-context-5011c74f-9a26-407b-be2a-90d68f808ed5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.606388ms +Aug 17 22:55:17.117: INFO: Pod "security-context-5011c74f-9a26-407b-be2a-90d68f808ed5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017434627s +Aug 17 22:55:19.125: INFO: Pod "security-context-5011c74f-9a26-407b-be2a-90d68f808ed5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025826962s +STEP: Saw pod success +Aug 17 22:55:19.125: INFO: Pod "security-context-5011c74f-9a26-407b-be2a-90d68f808ed5" satisfied condition "Succeeded or Failed" +Aug 17 22:55:19.128: INFO: Trying to get logs from node 195.17.65.231 pod security-context-5011c74f-9a26-407b-be2a-90d68f808ed5 container test-container: +STEP: delete the pod +Aug 17 22:55:19.151: INFO: Waiting for pod security-context-5011c74f-9a26-407b-be2a-90d68f808ed5 to disappear +Aug 17 22:55:19.154: INFO: Pod security-context-5011c74f-9a26-407b-be2a-90d68f808ed5 no longer exists +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:55:19.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-6489" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":79,"skipped":1683,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:55:19.173: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8662 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8662;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8662 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8662;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8662.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8662.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8662.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8662.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8662.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8662.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8662.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8662.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8662.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8662.svc;check="$$(dig +notcp +noall +answer +search 202.47.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.47.202_udp@PTR;check="$$(dig +tcp +noall +answer +search 202.47.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.47.202_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8662 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8662;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8662 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8662;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8662.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8662.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8662.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8662.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8662.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8662.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8662.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8662.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8662.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8662.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8662.svc;check="$$(dig +notcp +noall +answer +search 202.47.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.47.202_udp@PTR;check="$$(dig +tcp +noall +answer +search 202.47.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.47.202_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 17 22:55:23.268: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.272: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.277: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.280: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.283: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.287: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.290: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.293: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.310: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.313: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.317: INFO: Unable to read jessie_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.320: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.323: INFO: Unable to read jessie_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.326: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.329: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.332: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:23.346: INFO: Lookups using dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8662 wheezy_tcp@dns-test-service.dns-8662 wheezy_udp@dns-test-service.dns-8662.svc wheezy_tcp@dns-test-service.dns-8662.svc wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8662 jessie_tcp@dns-test-service.dns-8662 jessie_udp@dns-test-service.dns-8662.svc jessie_tcp@dns-test-service.dns-8662.svc jessie_udp@_http._tcp.dns-test-service.dns-8662.svc jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc] + +Aug 17 22:55:28.351: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.356: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.359: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.364: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.368: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.371: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.374: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.378: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.396: 
INFO: Unable to read jessie_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.400: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.403: INFO: Unable to read jessie_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.408: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.412: INFO: Unable to read jessie_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.415: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.419: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.423: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:28.436: INFO: Lookups using dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8662 wheezy_tcp@dns-test-service.dns-8662 wheezy_udp@dns-test-service.dns-8662.svc wheezy_tcp@dns-test-service.dns-8662.svc wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8662 jessie_tcp@dns-test-service.dns-8662 jessie_udp@dns-test-service.dns-8662.svc jessie_tcp@dns-test-service.dns-8662.svc jessie_udp@_http._tcp.dns-test-service.dns-8662.svc jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc] + +Aug 17 22:55:33.350: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.354: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.358: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.362: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.366: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.369: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.373: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.376: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.395: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.399: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.402: INFO: Unable to read jessie_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.405: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.408: INFO: Unable to read jessie_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.412: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.415: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.419: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:33.433: INFO: Lookups using dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8662 wheezy_tcp@dns-test-service.dns-8662 
wheezy_udp@dns-test-service.dns-8662.svc wheezy_tcp@dns-test-service.dns-8662.svc wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8662 jessie_tcp@dns-test-service.dns-8662 jessie_udp@dns-test-service.dns-8662.svc jessie_tcp@dns-test-service.dns-8662.svc jessie_udp@_http._tcp.dns-test-service.dns-8662.svc jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc] + +Aug 17 22:55:38.351: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.356: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.361: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.365: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.370: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.373: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.376: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.381: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.400: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.405: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.409: INFO: Unable to read jessie_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.419: INFO: Unable 
to read jessie_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.425: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.428: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.431: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:38.447: INFO: Lookups using dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8662 wheezy_tcp@dns-test-service.dns-8662 wheezy_udp@dns-test-service.dns-8662.svc wheezy_tcp@dns-test-service.dns-8662.svc wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8662 jessie_tcp@dns-test-service.dns-8662 jessie_udp@dns-test-service.dns-8662.svc jessie_tcp@dns-test-service.dns-8662.svc jessie_udp@_http._tcp.dns-test-service.dns-8662.svc jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc] + +Aug 17 22:55:43.352: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.356: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.360: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.364: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.367: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.371: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.375: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.378: INFO: Unable to 
read wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.395: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.399: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.402: INFO: Unable to read jessie_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.406: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.409: INFO: Unable to read jessie_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.416: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.419: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:43.435: INFO: Lookups using dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8662 wheezy_tcp@dns-test-service.dns-8662 wheezy_udp@dns-test-service.dns-8662.svc wheezy_tcp@dns-test-service.dns-8662.svc wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8662 jessie_tcp@dns-test-service.dns-8662 jessie_udp@dns-test-service.dns-8662.svc jessie_tcp@dns-test-service.dns-8662.svc jessie_udp@_http._tcp.dns-test-service.dns-8662.svc jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc] + +Aug 17 22:55:48.351: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.355: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.359: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.362: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.366: INFO: Unable to read wheezy_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.370: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.375: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.379: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.396: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.400: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.403: INFO: Unable to read jessie_udp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.407: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662 from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.414: INFO: Unable to read jessie_udp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.418: INFO: Unable to read jessie_tcp@dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.422: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.425: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc from pod dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9: the server could not find the requested resource (get pods dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9) +Aug 17 22:55:48.439: 
INFO: Lookups using dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8662 wheezy_tcp@dns-test-service.dns-8662 wheezy_udp@dns-test-service.dns-8662.svc wheezy_tcp@dns-test-service.dns-8662.svc wheezy_udp@_http._tcp.dns-test-service.dns-8662.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8662.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8662 jessie_tcp@dns-test-service.dns-8662 jessie_udp@dns-test-service.dns-8662.svc jessie_tcp@dns-test-service.dns-8662.svc jessie_udp@_http._tcp.dns-test-service.dns-8662.svc jessie_tcp@_http._tcp.dns-test-service.dns-8662.svc] + +Aug 17 22:55:53.432: INFO: DNS probes using dns-8662/dns-test-c24a10af-11da-4756-a93c-a20bb3263fd9 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:55:53.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-8662" for this suite. + +• [SLOW TEST:34.343 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":80,"skipped":1699,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:55:53.517: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name configmap-test-volume-c1e25705-bfea-4bbc-9d29-b7c283435ed2 +STEP: Creating a pod to test consume configMaps +Aug 17 22:55:53.562: INFO: Waiting up to 5m0s for pod "pod-configmaps-29c353de-c6fc-4179-9870-469cbee5caca" in namespace "configmap-651" to be "Succeeded or Failed" +Aug 17 22:55:53.569: INFO: Pod "pod-configmaps-29c353de-c6fc-4179-9870-469cbee5caca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.826873ms +Aug 17 22:55:55.578: INFO: Pod "pod-configmaps-29c353de-c6fc-4179-9870-469cbee5caca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015167435s +Aug 17 22:55:57.586: INFO: Pod "pod-configmaps-29c353de-c6fc-4179-9870-469cbee5caca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023117382s +STEP: Saw pod success +Aug 17 22:55:57.586: INFO: Pod "pod-configmaps-29c353de-c6fc-4179-9870-469cbee5caca" satisfied condition "Succeeded or Failed" +Aug 17 22:55:57.589: INFO: Trying to get logs from node 195.17.65.231 pod pod-configmaps-29c353de-c6fc-4179-9870-469cbee5caca container agnhost-container: +STEP: delete the pod +Aug 17 22:55:57.611: INFO: Waiting for pod pod-configmaps-29c353de-c6fc-4179-9870-469cbee5caca to disappear +Aug 17 22:55:57.614: INFO: Pod pod-configmaps-29c353de-c6fc-4179-9870-469cbee5caca no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:55:57.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-651" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":81,"skipped":1719,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:55:57.624: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +Aug 17 22:56:37.749: INFO: The status of Pod kube-controller-manager-195.17.32.244 is Running (Ready = true) +Aug 17 22:56:37.819: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Aug 17 22:56:37.819: INFO: Deleting pod "simpletest.rc-2gw8q" in namespace "gc-5744" +Aug 17 22:56:37.884: INFO: Deleting pod "simpletest.rc-2lz7p" in namespace "gc-5744" +Aug 17 22:56:37.981: INFO: Deleting pod "simpletest.rc-2ws2g" in namespace "gc-5744" +Aug 17 22:56:38.003: INFO: Deleting pod 
"simpletest.rc-44g8k" in namespace "gc-5744" +Aug 17 22:56:38.023: INFO: Deleting pod "simpletest.rc-47v8f" in namespace "gc-5744" +Aug 17 22:56:38.041: INFO: Deleting pod "simpletest.rc-4hrdc" in namespace "gc-5744" +Aug 17 22:56:38.060: INFO: Deleting pod "simpletest.rc-5clnp" in namespace "gc-5744" +Aug 17 22:56:38.080: INFO: Deleting pod "simpletest.rc-5hgss" in namespace "gc-5744" +Aug 17 22:56:38.097: INFO: Deleting pod "simpletest.rc-5qxqn" in namespace "gc-5744" +Aug 17 22:56:38.120: INFO: Deleting pod "simpletest.rc-65v2b" in namespace "gc-5744" +Aug 17 22:56:38.140: INFO: Deleting pod "simpletest.rc-6p9xt" in namespace "gc-5744" +Aug 17 22:56:38.160: INFO: Deleting pod "simpletest.rc-6r2qh" in namespace "gc-5744" +Aug 17 22:56:38.186: INFO: Deleting pod "simpletest.rc-6t25f" in namespace "gc-5744" +Aug 17 22:56:38.199: INFO: Deleting pod "simpletest.rc-7bvxw" in namespace "gc-5744" +Aug 17 22:56:38.216: INFO: Deleting pod "simpletest.rc-7d9db" in namespace "gc-5744" +Aug 17 22:56:38.229: INFO: Deleting pod "simpletest.rc-7f9r4" in namespace "gc-5744" +Aug 17 22:56:38.247: INFO: Deleting pod "simpletest.rc-7m2tw" in namespace "gc-5744" +Aug 17 22:56:38.266: INFO: Deleting pod "simpletest.rc-7s5rm" in namespace "gc-5744" +Aug 17 22:56:38.281: INFO: Deleting pod "simpletest.rc-8dbhm" in namespace "gc-5744" +Aug 17 22:56:38.300: INFO: Deleting pod "simpletest.rc-96kns" in namespace "gc-5744" +Aug 17 22:56:38.315: INFO: Deleting pod "simpletest.rc-b48hs" in namespace "gc-5744" +Aug 17 22:56:38.335: INFO: Deleting pod "simpletest.rc-bbvk8" in namespace "gc-5744" +Aug 17 22:56:38.349: INFO: Deleting pod "simpletest.rc-bd8tw" in namespace "gc-5744" +Aug 17 22:56:38.364: INFO: Deleting pod "simpletest.rc-blrvl" in namespace "gc-5744" +Aug 17 22:56:38.377: INFO: Deleting pod "simpletest.rc-bq8f2" in namespace "gc-5744" +Aug 17 22:56:38.392: INFO: Deleting pod "simpletest.rc-bscnn" in namespace "gc-5744" +Aug 17 22:56:38.416: INFO: Deleting pod "simpletest.rc-bwz85" in namespace "gc-5744" +Aug 17 22:56:38.432: INFO: Deleting pod "simpletest.rc-bxhxt" in namespace "gc-5744" +Aug 17 22:56:38.450: INFO: Deleting pod "simpletest.rc-c58pg" in namespace "gc-5744" +Aug 17 22:56:38.467: INFO: Deleting pod "simpletest.rc-cdqmv" in namespace "gc-5744" +Aug 17 22:56:38.480: INFO: Deleting pod "simpletest.rc-cqj45" in namespace "gc-5744" +Aug 17 22:56:38.497: INFO: Deleting pod "simpletest.rc-czzjg" in namespace "gc-5744" +Aug 17 22:56:38.515: INFO: Deleting pod "simpletest.rc-d77tl" in namespace "gc-5744" +Aug 17 22:56:38.530: INFO: Deleting pod "simpletest.rc-dbwvz" in namespace "gc-5744" +Aug 17 22:56:38.547: INFO: Deleting pod "simpletest.rc-dhhvh" in namespace "gc-5744" +Aug 17 22:56:38.563: INFO: Deleting pod "simpletest.rc-f7m99" in namespace "gc-5744" +Aug 17 22:56:38.580: INFO: Deleting pod "simpletest.rc-ftzwk" in namespace "gc-5744" +Aug 17 22:56:38.598: INFO: Deleting pod "simpletest.rc-fx9dc" in namespace "gc-5744" +Aug 17 22:56:38.609: INFO: Deleting pod "simpletest.rc-g7r6m" in namespace "gc-5744" +Aug 17 22:56:38.622: INFO: Deleting pod "simpletest.rc-gzjt7" in namespace "gc-5744" +Aug 17 22:56:38.642: INFO: Deleting pod "simpletest.rc-h9svz" in namespace "gc-5744" +Aug 17 22:56:38.656: INFO: Deleting pod "simpletest.rc-hc4bh" in namespace "gc-5744" +Aug 17 22:56:38.677: INFO: Deleting pod "simpletest.rc-hd8dn" in namespace "gc-5744" +Aug 17 22:56:38.695: INFO: Deleting pod "simpletest.rc-hfsd4" in namespace "gc-5744" +Aug 17 22:56:38.710: INFO: Deleting pod "simpletest.rc-hjnsz" in 
namespace "gc-5744" +Aug 17 22:56:38.728: INFO: Deleting pod "simpletest.rc-hm8hg" in namespace "gc-5744" +Aug 17 22:56:38.741: INFO: Deleting pod "simpletest.rc-hswmz" in namespace "gc-5744" +Aug 17 22:56:38.754: INFO: Deleting pod "simpletest.rc-hw2kh" in namespace "gc-5744" +Aug 17 22:56:38.772: INFO: Deleting pod "simpletest.rc-hzqrz" in namespace "gc-5744" +Aug 17 22:56:38.790: INFO: Deleting pod "simpletest.rc-j8knh" in namespace "gc-5744" +Aug 17 22:56:38.805: INFO: Deleting pod "simpletest.rc-jmnzf" in namespace "gc-5744" +Aug 17 22:56:38.819: INFO: Deleting pod "simpletest.rc-jzvss" in namespace "gc-5744" +Aug 17 22:56:38.831: INFO: Deleting pod "simpletest.rc-kmdtn" in namespace "gc-5744" +Aug 17 22:56:38.847: INFO: Deleting pod "simpletest.rc-kw284" in namespace "gc-5744" +Aug 17 22:56:38.861: INFO: Deleting pod "simpletest.rc-l44zj" in namespace "gc-5744" +Aug 17 22:56:38.875: INFO: Deleting pod "simpletest.rc-l585w" in namespace "gc-5744" +Aug 17 22:56:38.893: INFO: Deleting pod "simpletest.rc-ljkqh" in namespace "gc-5744" +Aug 17 22:56:38.911: INFO: Deleting pod "simpletest.rc-lwqwn" in namespace "gc-5744" +Aug 17 22:56:38.924: INFO: Deleting pod "simpletest.rc-m5r2s" in namespace "gc-5744" +Aug 17 22:56:38.940: INFO: Deleting pod "simpletest.rc-m8wv7" in namespace "gc-5744" +Aug 17 22:56:38.959: INFO: Deleting pod "simpletest.rc-mb9ll" in namespace "gc-5744" +Aug 17 22:56:38.972: INFO: Deleting pod "simpletest.rc-mjnfg" in namespace "gc-5744" +Aug 17 22:56:38.995: INFO: Deleting pod "simpletest.rc-mxk5r" in namespace "gc-5744" +Aug 17 22:56:39.011: INFO: Deleting pod "simpletest.rc-mxwks" in namespace "gc-5744" +Aug 17 22:56:39.030: INFO: Deleting pod "simpletest.rc-mzt2w" in namespace "gc-5744" +Aug 17 22:56:39.049: INFO: Deleting pod "simpletest.rc-nhgq2" in namespace "gc-5744" +Aug 17 22:56:39.067: INFO: Deleting pod "simpletest.rc-nsbzx" in namespace "gc-5744" +Aug 17 22:56:39.084: INFO: Deleting pod "simpletest.rc-nzb7m" in namespace "gc-5744" +Aug 17 22:56:39.104: INFO: Deleting pod "simpletest.rc-p8j9c" in namespace "gc-5744" +Aug 17 22:56:39.131: INFO: Deleting pod "simpletest.rc-pq2hr" in namespace "gc-5744" +Aug 17 22:56:39.151: INFO: Deleting pod "simpletest.rc-pstz5" in namespace "gc-5744" +Aug 17 22:56:39.168: INFO: Deleting pod "simpletest.rc-ptgql" in namespace "gc-5744" +Aug 17 22:56:39.184: INFO: Deleting pod "simpletest.rc-q5hnk" in namespace "gc-5744" +Aug 17 22:56:39.198: INFO: Deleting pod "simpletest.rc-qg5hw" in namespace "gc-5744" +Aug 17 22:56:39.211: INFO: Deleting pod "simpletest.rc-qmxmw" in namespace "gc-5744" +Aug 17 22:56:39.231: INFO: Deleting pod "simpletest.rc-qs5sg" in namespace "gc-5744" +Aug 17 22:56:39.251: INFO: Deleting pod "simpletest.rc-r25px" in namespace "gc-5744" +Aug 17 22:56:39.268: INFO: Deleting pod "simpletest.rc-rv6fc" in namespace "gc-5744" +Aug 17 22:56:39.308: INFO: Deleting pod "simpletest.rc-s6x4x" in namespace "gc-5744" +Aug 17 22:56:39.364: INFO: Deleting pod "simpletest.rc-shcqq" in namespace "gc-5744" +Aug 17 22:56:39.418: INFO: Deleting pod "simpletest.rc-sk672" in namespace "gc-5744" +Aug 17 22:56:39.471: INFO: Deleting pod "simpletest.rc-swjwm" in namespace "gc-5744" +Aug 17 22:56:39.517: INFO: Deleting pod "simpletest.rc-thmt2" in namespace "gc-5744" +Aug 17 22:56:39.563: INFO: Deleting pod "simpletest.rc-tkbn8" in namespace "gc-5744" +Aug 17 22:56:39.616: INFO: Deleting pod "simpletest.rc-tmtn4" in namespace "gc-5744" +Aug 17 22:56:39.668: INFO: Deleting pod "simpletest.rc-tsjms" in namespace "gc-5744" +Aug 17 
22:56:39.713: INFO: Deleting pod "simpletest.rc-txsjt" in namespace "gc-5744" +Aug 17 22:56:39.762: INFO: Deleting pod "simpletest.rc-vdm65" in namespace "gc-5744" +Aug 17 22:56:39.815: INFO: Deleting pod "simpletest.rc-vhcgk" in namespace "gc-5744" +Aug 17 22:56:39.865: INFO: Deleting pod "simpletest.rc-vvqbl" in namespace "gc-5744" +Aug 17 22:56:39.924: INFO: Deleting pod "simpletest.rc-wj74z" in namespace "gc-5744" +Aug 17 22:56:39.970: INFO: Deleting pod "simpletest.rc-wxdfs" in namespace "gc-5744" +Aug 17 22:56:40.014: INFO: Deleting pod "simpletest.rc-x9d4x" in namespace "gc-5744" +Aug 17 22:56:40.064: INFO: Deleting pod "simpletest.rc-x9rkw" in namespace "gc-5744" +Aug 17 22:56:40.109: INFO: Deleting pod "simpletest.rc-xbpxx" in namespace "gc-5744" +Aug 17 22:56:40.168: INFO: Deleting pod "simpletest.rc-xht6p" in namespace "gc-5744" +Aug 17 22:56:40.215: INFO: Deleting pod "simpletest.rc-xjqg2" in namespace "gc-5744" +Aug 17 22:56:40.267: INFO: Deleting pod "simpletest.rc-xpx75" in namespace "gc-5744" +Aug 17 22:56:40.312: INFO: Deleting pod "simpletest.rc-zhwtq" in namespace "gc-5744" +Aug 17 22:56:40.365: INFO: Deleting pod "simpletest.rc-znj9x" in namespace "gc-5744" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:56:40.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-5744" for this suite. + +• [SLOW TEST:42.885 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should orphan pods created by rc if delete options say so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":82,"skipped":1748,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:56:40.510: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating service in namespace services-1108 +STEP: creating service affinity-nodeport-transition in namespace services-1108 +STEP: creating replication controller affinity-nodeport-transition in namespace services-1108 +I0817 22:56:40.586327 20 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-1108, replica count: 
3 +I0817 22:56:43.637772 20 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0817 22:56:46.638036 20 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0817 22:56:49.639318 20 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 17 22:56:49.656: INFO: Creating new exec pod +Aug 17 22:56:54.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-1108 exec execpod-affinitycktwk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' +Aug 17 22:56:54.841: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Aug 17 22:56:54.842: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 22:56:54.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-1108 exec execpod-affinitycktwk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.106.36.202 80' +Aug 17 22:56:54.979: INFO: stderr: "+ + echonc hostName -v\n -t -w 2 10.106.36.202 80\nConnection to 10.106.36.202 80 port [tcp/http] succeeded!\n" +Aug 17 22:56:54.979: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 22:56:54.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-1108 exec execpod-affinitycktwk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.131.205 31193' +Aug 17 22:56:55.121: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 195.17.131.205 31193\nConnection to 195.17.131.205 31193 port [tcp/*] succeeded!\n" +Aug 17 22:56:55.121: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 22:56:55.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-1108 exec execpod-affinitycktwk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.65.231 31193' +Aug 17 22:56:55.257: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.65.231 31193\nConnection to 195.17.65.231 31193 port [tcp/*] succeeded!\n" +Aug 17 22:56:55.257: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 22:56:55.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-1108 exec execpod-affinitycktwk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://195.17.131.205:31193/ ; done' +Aug 17 22:56:55.497: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n" +Aug 17 22:56:55.497: INFO: stdout: "\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-4krkh\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-4krkh\naffinity-nodeport-transition-4krkh\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-4krkh\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-9rwb4\naffinity-nodeport-transition-9rwb4\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-9rwb4\naffinity-nodeport-transition-wl6q9" +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-4krkh +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-4krkh +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-4krkh +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-4krkh +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-9rwb4 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-9rwb4 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-9rwb4 +Aug 17 22:56:55.497: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-1108 exec execpod-affinitycktwk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://195.17.131.205:31193/ ; done' +Aug 17 22:56:55.768: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31193/\n" +Aug 17 22:56:55.768: INFO: stdout: "\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9\naffinity-nodeport-transition-wl6q9" +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Received response from host: affinity-nodeport-transition-wl6q9 +Aug 17 22:56:55.768: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1108, will wait for the garbage collector to delete the pods +Aug 17 22:56:55.848: INFO: Deleting ReplicationController affinity-nodeport-transition took: 7.914888ms +Aug 17 22:56:55.949: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.740257ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:56:58.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-1108" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:17.591 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":83,"skipped":1753,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:56:58.101: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 22:56:59.073: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 22:57:02.119: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a validating webhook configuration +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Updating a validating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +STEP: Patching a validating webhook configuration's rules to include the create operation +STEP: Creating a configMap that does not comply to the validation webhook rules +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:02.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3648" for this suite. +STEP: Destroying namespace "webhook-3648-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":84,"skipped":1759,"failed":0} +SSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:02.283: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 22:57:02.696: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 22:57:05.734: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Setting timeout (1s) shorter than webhook latency (5s) +STEP: Registering slow webhook via the AdmissionRegistration API +Aug 17 22:57:05.762: INFO: Waiting for webhook configuration to be ready... +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is longer than webhook latency +STEP: Registering slow webhook via the AdmissionRegistration API +STEP: Having no error when timeout is empty (defaulted to 10s in v1) +STEP: Registering slow webhook via the AdmissionRegistration API +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:17.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-4059" for this suite. +STEP: Destroying namespace "webhook-4059-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:15.781 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should honor timeout [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":85,"skipped":1763,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:18.068: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:57:20.129: INFO: Deleting pod "var-expansion-6a5d758c-8433-4fdb-b3d4-f028bb56feeb" in namespace "var-expansion-9707" +Aug 17 22:57:20.141: INFO: Wait up to 5m0s for pod "var-expansion-6a5d758c-8433-4fdb-b3d4-f028bb56feeb" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:22.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9707" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":86,"skipped":1779,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:22.176: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should test the lifecycle of an Endpoint [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating an Endpoint +STEP: waiting for available Endpoint +STEP: listing all Endpoints +STEP: updating the Endpoint +STEP: fetching the Endpoint +STEP: patching the Endpoint +STEP: fetching the Endpoint +STEP: deleting the Endpoint by Collection +STEP: waiting for Endpoint deletion +STEP: fetching the Endpoint +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:22.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8316" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":87,"skipped":1808,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:22.262: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a replication controller +Aug 17 22:57:22.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 create -f -' +Aug 17 22:57:23.580: INFO: stderr: "" +Aug 17 22:57:23.580: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Aug 17 22:57:23.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 17 22:57:23.664: INFO: stderr: "" +Aug 17 22:57:23.664: INFO: stdout: "update-demo-nautilus-6q67j update-demo-nautilus-kk5ns " +Aug 17 22:57:23.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 get pods update-demo-nautilus-6q67j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 17 22:57:23.729: INFO: stderr: "" +Aug 17 22:57:23.730: INFO: stdout: "" +Aug 17 22:57:23.730: INFO: update-demo-nautilus-6q67j is created but not running +Aug 17 22:57:28.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 17 22:57:28.802: INFO: stderr: "" +Aug 17 22:57:28.802: INFO: stdout: "update-demo-nautilus-6q67j update-demo-nautilus-kk5ns " +Aug 17 22:57:28.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 get pods update-demo-nautilus-6q67j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 17 22:57:28.867: INFO: stderr: "" +Aug 17 22:57:28.868: INFO: stdout: "true" +Aug 17 22:57:28.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 get pods update-demo-nautilus-6q67j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 17 22:57:28.933: INFO: stderr: "" +Aug 17 22:57:28.934: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 17 22:57:28.934: INFO: validating pod update-demo-nautilus-6q67j +Aug 17 22:57:28.938: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 17 22:57:28.938: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 17 22:57:28.938: INFO: update-demo-nautilus-6q67j is verified up and running +Aug 17 22:57:28.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 get pods update-demo-nautilus-kk5ns -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 17 22:57:29.005: INFO: stderr: "" +Aug 17 22:57:29.005: INFO: stdout: "true" +Aug 17 22:57:29.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 get pods update-demo-nautilus-kk5ns -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 17 22:57:29.068: INFO: stderr: "" +Aug 17 22:57:29.068: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 17 22:57:29.068: INFO: validating pod update-demo-nautilus-kk5ns +Aug 17 22:57:29.074: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 17 22:57:29.074: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 17 22:57:29.074: INFO: update-demo-nautilus-kk5ns is verified up and running +STEP: using delete to clean up resources +Aug 17 22:57:29.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 delete --grace-period=0 --force -f -' +Aug 17 22:57:29.165: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 17 22:57:29.165: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Aug 17 22:57:29.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 get rc,svc -l name=update-demo --no-headers' +Aug 17 22:57:29.251: INFO: stderr: "No resources found in kubectl-6222 namespace.\n" +Aug 17 22:57:29.251: INFO: stdout: "" +Aug 17 22:57:29.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6222 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Aug 17 22:57:29.318: INFO: stderr: "" +Aug 17 22:57:29.318: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:29.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6222" for this suite. 
+ +• [SLOW TEST:7.069 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 + should create and stop a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":346,"completed":88,"skipped":1811,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:29.331: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:57:29.358: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Aug 17 22:57:29.381: INFO: The status of Pod pod-exec-websocket-6e076446-6e35-4302-9a97-494429adea27 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 22:57:31.387: INFO: The status of Pod pod-exec-websocket-6e076446-6e35-4302-9a97-494429adea27 is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:31.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8676" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":89,"skipped":1826,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:31.475: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Aug 17 22:57:35.542: INFO: Expected: &{OK} to match Container's Termination Message: OK -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:35.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-5842" for this suite. +•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":346,"completed":90,"skipped":1854,"failed":0} +SSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:35.577: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should verify changes to a daemon set status [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Aug 17 22:57:35.642: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:57:35.642: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:57:35.646: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 17 22:57:35.646: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 22:57:36.653: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:57:36.653: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:57:36.660: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 17 22:57:36.660: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 22:57:37.654: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:57:37.654: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 22:57:37.660: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 17 22:57:37.660: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Getting /status +Aug 17 22:57:37.669: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status +Aug 17 22:57:37.680: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated +Aug 17 22:57:37.681: INFO: Observed &DaemonSet event: ADDED +Aug 17 22:57:37.682: INFO: Observed &DaemonSet event: MODIFIED +Aug 17 22:57:37.682: INFO: Observed &DaemonSet event: MODIFIED +Aug 17 22:57:37.682: INFO: Observed &DaemonSet event: MODIFIED +Aug 17 22:57:37.682: INFO: Found daemon set daemon-set in namespace daemonsets-2841 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Aug 17 22:57:37.682: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status +STEP: watching for the daemon set status to be patched +Aug 17 22:57:37.692: INFO: Observed &DaemonSet event: ADDED +Aug 17 22:57:37.692: INFO: Observed &DaemonSet event: MODIFIED +Aug 17 22:57:37.692: INFO: Observed &DaemonSet event: MODIFIED +Aug 17 22:57:37.693: INFO: Observed &DaemonSet event: MODIFIED +Aug 17 22:57:37.693: INFO: Observed daemon set daemon-set in namespace daemonsets-2841 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Aug 17 22:57:37.693: INFO: Observed &DaemonSet event: MODIFIED +Aug 17 22:57:37.693: INFO: Found daemon set daemon-set in namespace daemonsets-2841 with labels: map[daemonset-name:daemon-set] annotations: 
map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Aug 17 22:57:37.693: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2841, will wait for the garbage collector to delete the pods +Aug 17 22:57:37.759: INFO: Deleting DaemonSet.extensions daemon-set took: 8.879301ms +Aug 17 22:57:37.860: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.976043ms +Aug 17 22:57:40.267: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 17 22:57:40.267: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 17 22:57:40.271: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"30770"},"items":null} + +Aug 17 22:57:40.274: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"30770"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:40.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-2841" for this suite. +•{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":91,"skipped":1860,"failed":0} +SS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:40.302: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Aug 17 22:57:40.329: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Aug 17 22:57:40.338: INFO: Waiting for terminating namespaces to be deleted... 
+Aug 17 22:57:40.342: INFO: +Logging pods the apiserver thinks is on node 195.17.131.205 before test +Aug 17 22:57:40.350: INFO: capi-kubeadm-bootstrap-controller-manager-6f58b86764-4snx7 from capi-kubeadm-bootstrap-system started at 2022-08-17 22:22:29 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container manager ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: capi-kubeadm-control-plane-controller-manager-7b679446f7-x2d65 from capi-kubeadm-control-plane-system started at 2022-08-17 22:22:49 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container manager ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: capi-controller-manager-6ff75d8789-8fldg from capi-system started at 2022-08-17 22:22:22 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container manager ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: cert-manager-67565ccf5d-zf6kt from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container cert-manager ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: cert-manager-cainjector-654854cb95-cb6v8 from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container cert-manager ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: cert-manager-webhook-fc46785b4-gvkf6 from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container cert-manager ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: eks-anywhere-packages-ddfc7b44-8zssk from eksa-packages started at 2022-08-17 22:24:50 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container controller ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: etcdadm-bootstrap-provider-controller-manager-7d898b8f77-xgmtd from etcdadm-bootstrap-provider-system started at 2022-08-17 22:22:35 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container manager ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: etcdadm-controller-controller-manager-b6f674477-6lsxb from etcdadm-controller-system started at 2022-08-17 22:22:40 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container manager ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: cilium-hvkwp from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container cilium-agent ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: cilium-operator-5799bc594c-b9rnk from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container cilium-operator ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: kube-proxy-pdhjb from kube-system started at 2022-08-17 22:19:15 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container kube-proxy ready: true, restart count 0 +Aug 17 22:57:40.350: INFO: vsphere-cloud-controller-manager-s5246 from kube-system started at 2022-08-17 22:19:15 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.350: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 1 +Aug 17 22:57:40.350: INFO: vsphere-csi-controller-f67d5c78c-l8hxm from kube-system started at 2022-08-17 22:43:28 +0000 UTC (5 container statuses recorded) +Aug 17 22:57:40.351: INFO: Container csi-attacher ready: true, restart count 0 +Aug 17 22:57:40.351: INFO: Container 
csi-provisioner ready: true, restart count 0 +Aug 17 22:57:40.351: INFO: Container liveness-probe ready: true, restart count 0 +Aug 17 22:57:40.351: INFO: Container vsphere-csi-controller ready: true, restart count 0 +Aug 17 22:57:40.351: INFO: Container vsphere-syncer ready: true, restart count 0 +Aug 17 22:57:40.351: INFO: vsphere-csi-node-f9msr from kube-system started at 2022-08-17 22:19:15 +0000 UTC (3 container statuses recorded) +Aug 17 22:57:40.351: INFO: Container liveness-probe ready: true, restart count 0 +Aug 17 22:57:40.351: INFO: Container node-driver-registrar ready: true, restart count 0 +Aug 17 22:57:40.351: INFO: Container vsphere-csi-node ready: true, restart count 0 +Aug 17 22:57:40.351: INFO: sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-v7n4s from sonobuoy started at 2022-08-17 22:38:36 +0000 UTC (2 container statuses recorded) +Aug 17 22:57:40.351: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 17 22:57:40.351: INFO: Container systemd-logs ready: true, restart count 0 +Aug 17 22:57:40.351: INFO: +Logging pods the apiserver thinks is on node 195.17.65.231 before test +Aug 17 22:57:40.360: INFO: cilium-f7vw5 from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.360: INFO: Container cilium-agent ready: true, restart count 0 +Aug 17 22:57:40.360: INFO: cilium-operator-5799bc594c-fpwfg from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.360: INFO: Container cilium-operator ready: true, restart count 0 +Aug 17 22:57:40.360: INFO: kube-proxy-xc469 from kube-system started at 2022-08-17 22:19:12 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.360: INFO: Container kube-proxy ready: true, restart count 0 +Aug 17 22:57:40.360: INFO: vsphere-cloud-controller-manager-49t6p from kube-system started at 2022-08-17 22:48:46 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.360: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 +Aug 17 22:57:40.360: INFO: vsphere-csi-node-lhjjp from kube-system started at 2022-08-17 22:19:12 +0000 UTC (3 container statuses recorded) +Aug 17 22:57:40.360: INFO: Container liveness-probe ready: true, restart count 0 +Aug 17 22:57:40.360: INFO: Container node-driver-registrar ready: true, restart count 0 +Aug 17 22:57:40.360: INFO: Container vsphere-csi-node ready: true, restart count 0 +Aug 17 22:57:40.360: INFO: pod-exec-websocket-6e076446-6e35-4302-9a97-494429adea27 from pods-8676 started at 2022-08-17 22:57:29 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.360: INFO: Container main ready: true, restart count 0 +Aug 17 22:57:40.360: INFO: sonobuoy from sonobuoy started at 2022-08-17 22:38:32 +0000 UTC (1 container statuses recorded) +Aug 17 22:57:40.360: INFO: Container kube-sonobuoy ready: true, restart count 0 +Aug 17 22:57:40.360: INFO: sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-lppfn from sonobuoy started at 2022-08-17 22:38:36 +0000 UTC (2 container statuses recorded) +Aug 17 22:57:40.360: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 17 22:57:40.360: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: verifying the node has the label node 195.17.131.205 +STEP: verifying the node has the label node 195.17.65.231 +Aug 17 
22:57:40.427: INFO: Pod capi-kubeadm-bootstrap-controller-manager-6f58b86764-4snx7 requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.427: INFO: Pod capi-kubeadm-control-plane-controller-manager-7b679446f7-x2d65 requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod capi-controller-manager-6ff75d8789-8fldg requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod cert-manager-67565ccf5d-zf6kt requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod cert-manager-cainjector-654854cb95-cb6v8 requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod cert-manager-webhook-fc46785b4-gvkf6 requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod eks-anywhere-packages-ddfc7b44-8zssk requesting resource cpu=100m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod etcdadm-bootstrap-provider-controller-manager-7d898b8f77-xgmtd requesting resource cpu=100m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod etcdadm-controller-controller-manager-b6f674477-6lsxb requesting resource cpu=100m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod cilium-f7vw5 requesting resource cpu=0m on Node 195.17.65.231 +Aug 17 22:57:40.428: INFO: Pod cilium-hvkwp requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod cilium-operator-5799bc594c-b9rnk requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod cilium-operator-5799bc594c-fpwfg requesting resource cpu=0m on Node 195.17.65.231 +Aug 17 22:57:40.428: INFO: Pod kube-proxy-pdhjb requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod kube-proxy-xc469 requesting resource cpu=0m on Node 195.17.65.231 +Aug 17 22:57:40.428: INFO: Pod vsphere-cloud-controller-manager-49t6p requesting resource cpu=200m on Node 195.17.65.231 +Aug 17 22:57:40.428: INFO: Pod vsphere-cloud-controller-manager-s5246 requesting resource cpu=200m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod vsphere-csi-controller-f67d5c78c-l8hxm requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod vsphere-csi-node-f9msr requesting resource cpu=0m on Node 195.17.131.205 +Aug 17 22:57:40.428: INFO: Pod vsphere-csi-node-lhjjp requesting resource cpu=0m on Node 195.17.65.231 +Aug 17 22:57:40.428: INFO: Pod pod-exec-websocket-6e076446-6e35-4302-9a97-494429adea27 requesting resource cpu=0m on Node 195.17.65.231 +Aug 17 22:57:40.428: INFO: Pod sonobuoy requesting resource cpu=0m on Node 195.17.65.231 +Aug 17 22:57:40.428: INFO: Pod sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-lppfn requesting resource cpu=0m on Node 195.17.65.231 +Aug 17 22:57:40.428: INFO: Pod sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-v7n4s requesting resource cpu=0m on Node 195.17.131.205 +STEP: Starting Pods to consume most of the cluster CPU. +Aug 17 22:57:40.428: INFO: Creating a pod which consumes cpu=1001m on Node 195.17.131.205 +Aug 17 22:57:40.437: INFO: Creating a pod which consumes cpu=1211m on Node 195.17.65.231 +STEP: Creating another pod that requires unavailable amount of CPU. 
+STEP: Considering event: +Type = [Normal], Name = [filler-pod-cef76dbe-9973-4437-b64f-87c81a6a522b.170c43eff3723979], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8834/filler-pod-cef76dbe-9973-4437-b64f-87c81a6a522b to 195.17.65.231] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-cef76dbe-9973-4437-b64f-87c81a6a522b.170c43f0271461a7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.6" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-cef76dbe-9973-4437-b64f-87c81a6a522b.170c43f028813beb], Reason = [Created], Message = [Created container filler-pod-cef76dbe-9973-4437-b64f-87c81a6a522b] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-cef76dbe-9973-4437-b64f-87c81a6a522b.170c43f02d9b246d], Reason = [Started], Message = [Started container filler-pod-cef76dbe-9973-4437-b64f-87c81a6a522b] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-db92b51c-5f91-4786-87c6-a9f185173abb.170c43eff2ecc9ab], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8834/filler-pod-db92b51c-5f91-4786-87c6-a9f185173abb to 195.17.131.205] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-db92b51c-5f91-4786-87c6-a9f185173abb.170c43f027703a99], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.6" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-db92b51c-5f91-4786-87c6-a9f185173abb.170c43f0291eb35c], Reason = [Created], Message = [Created container filler-pod-db92b51c-5f91-4786-87c6-a9f185173abb] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-db92b51c-5f91-4786-87c6-a9f185173abb.170c43f02e2261a3], Reason = [Started], Message = [Started container filler-pod-db92b51c-5f91-4786-87c6-a9f185173abb] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.170c43f06c00c795], Reason = [FailedScheduling], Message = [0/4 nodes are available: 2 Insufficient cpu, 2 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] +STEP: removing the label node off the node 195.17.131.205 +STEP: verifying the node doesn't have the label node +STEP: removing the label node off the node 195.17.65.231 +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:43.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-8834" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":346,"completed":92,"skipped":1862,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:43.535: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to update and delete ResourceQuota. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a ResourceQuota +STEP: Getting a ResourceQuota +STEP: Updating a ResourceQuota +STEP: Verifying a ResourceQuota was modified +STEP: Deleting a ResourceQuota +STEP: Verifying the deleted ResourceQuota +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:43.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3942" for this suite. +•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":93,"skipped":1877,"failed":0} +SSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:43.620: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support proxy with --port 0 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: starting the proxy server +Aug 17 22:57:43.656: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8143 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:43.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8143" for this suite. 
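
> The proxy test above starts `kubectl proxy` with port `0` so the operating system assigns a free port, then curls `/api/` through it. A minimal manual sketch, assuming a configured kubeconfig (`--disable-filter` appears in the logged command; it disables request filtering and is unsafe outside test environments):

```shell
# Ask for port 0 so the OS assigns an ephemeral port, as the e2e test does.
# kubectl prints "Starting to serve on 127.0.0.1:<PORT>".
kubectl proxy --port=0 --disable-filter=true &
# With the printed port substituted, the API root is then reachable via:
#   curl http://127.0.0.1:<PORT>/api/
```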
+•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":346,"completed":94,"skipped":1884,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:43.725: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:43.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5488" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":95,"skipped":1953,"failed":0} +SSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:43.823: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:57:43.854: INFO: Creating pod... +Aug 17 22:57:43.869: INFO: Pod Quantity: 1 Status: Pending +Aug 17 22:57:44.875: INFO: Pod Quantity: 1 Status: Pending +Aug 17 22:57:45.876: INFO: Pod Status: Running +Aug 17 22:57:45.876: INFO: Creating service... 
+Aug 17 22:57:45.897: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/pods/agnhost/proxy/some/path/with/DELETE +Aug 17 22:57:45.902: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Aug 17 22:57:45.902: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/pods/agnhost/proxy/some/path/with/GET +Aug 17 22:57:45.912: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Aug 17 22:57:45.912: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/pods/agnhost/proxy/some/path/with/HEAD +Aug 17 22:57:45.916: INFO: http.Client request:HEAD | StatusCode:200 +Aug 17 22:57:45.916: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/pods/agnhost/proxy/some/path/with/OPTIONS +Aug 17 22:57:45.919: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Aug 17 22:57:45.919: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/pods/agnhost/proxy/some/path/with/PATCH +Aug 17 22:57:45.924: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Aug 17 22:57:45.924: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/pods/agnhost/proxy/some/path/with/POST +Aug 17 22:57:45.927: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Aug 17 22:57:45.927: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/pods/agnhost/proxy/some/path/with/PUT +Aug 17 22:57:45.931: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Aug 17 22:57:45.931: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/services/test-service/proxy/some/path/with/DELETE +Aug 17 22:57:45.939: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Aug 17 22:57:45.939: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/services/test-service/proxy/some/path/with/GET +Aug 17 22:57:45.944: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Aug 17 22:57:45.944: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/services/test-service/proxy/some/path/with/HEAD +Aug 17 22:57:45.951: INFO: http.Client request:HEAD | StatusCode:200 +Aug 17 22:57:45.951: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/services/test-service/proxy/some/path/with/OPTIONS +Aug 17 22:57:45.958: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Aug 17 22:57:45.958: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/services/test-service/proxy/some/path/with/PATCH +Aug 17 22:57:45.962: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Aug 17 22:57:45.962: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/services/test-service/proxy/some/path/with/POST +Aug 17 22:57:45.967: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Aug 17 22:57:45.967: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-7934/services/test-service/proxy/some/path/with/PUT +Aug 17 22:57:45.973: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:45.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-7934" for this suite. +•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":96,"skipped":1959,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:45.991: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should run through a ConfigMap lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a ConfigMap +STEP: fetching the ConfigMap +STEP: patching the ConfigMap +STEP: listing all ConfigMaps in all namespaces with a label selector +STEP: deleting the ConfigMap by collection with a label selector +STEP: listing all ConfigMaps in test namespace +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:57:46.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6241" for this suite. 
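
> The lifecycle this test walks through (create, fetch, patch, list by label selector, delete by collection) can be approximated with plain kubectl; the names and label below are illustrative, not taken from the test:

```shell
kubectl create configmap demo-cm --from-literal=key1=value1
kubectl label configmap demo-cm test=lifecycle
kubectl get configmap demo-cm -o yaml                      # fetch
kubectl patch configmap demo-cm --type=merge -p '{"data":{"key1":"patched"}}'
kubectl get configmaps --all-namespaces -l test=lifecycle  # list with a label selector
kubectl delete configmaps -l test=lifecycle                # delete by collection
```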
+•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":97,"skipped":2001,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:57:46.067: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: set up a multi version CRD +Aug 17 22:57:46.092: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: rename a version +STEP: check the new version name is served +STEP: check the old version name is removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:58:24.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1160" for this suite. + +• [SLOW TEST:38.004 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + updates the published spec when one version gets renamed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":98,"skipped":2021,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:58:24.071: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should verify ResourceQuota with terminating scopes. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a ResourceQuota with terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a ResourceQuota with not terminating scope +STEP: Ensuring ResourceQuota status is calculated +STEP: Creating a long running pod +STEP: Ensuring resource quota with not terminating scope captures the pod usage +STEP: Ensuring resource quota with terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +STEP: Creating a terminating pod +STEP: Ensuring resource quota with terminating scope captures the pod usage +STEP: Ensuring resource quota with not terminating scope ignored the pod usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:58:40.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9875" for this suite. + +• [SLOW TEST:16.168 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with terminating scopes. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":346,"completed":99,"skipped":2032,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:58:40.241: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 22:58:40.266: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Creating first CR +Aug 17 22:58:42.847: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-17T22:58:42Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-08-17T22:58:42Z]] name:name1 resourceVersion:31652 uid:1f7a23c0-cd65-46fe-9d67-9e8a67f723c4] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR +Aug 17 22:58:52.861: INFO: Got : ADDED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-17T22:58:52Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-08-17T22:58:52Z]] name:name2 resourceVersion:31766 uid:c8b91812-75f0-4e0f-9e9a-8dbacad42bf7] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR +Aug 17 22:59:02.874: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-17T22:58:42Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-08-17T22:59:02Z]] name:name1 resourceVersion:31866 uid:1f7a23c0-cd65-46fe-9d67-9e8a67f723c4] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR +Aug 17 22:59:12.886: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-17T22:58:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-08-17T22:59:12Z]] name:name2 resourceVersion:31969 uid:c8b91812-75f0-4e0f-9e9a-8dbacad42bf7] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR +Aug 17 22:59:22.900: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-17T22:58:42Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-08-17T22:59:02Z]] name:name1 resourceVersion:32072 uid:1f7a23c0-cd65-46fe-9d67-9e8a67f723c4] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR +Aug 17 22:59:32.914: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-08-17T22:58:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-08-17T22:59:12Z]] name:name2 resourceVersion:32172 uid:c8b91812-75f0-4e0f-9e9a-8dbacad42bf7] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:59:43.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-watch-1203" for this suite. 
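
> The watch behavior asserted above (ADDED, MODIFIED, DELETED notifications for custom resources) can be observed from the CLI once the CRD is installed. The resource name below is an assumption inferred from the `mygroup.example.com` group and `WishIHadChosenNoxu` kind in the log:

```shell
# Stream watch events for a custom resource, assuming its CRD already exists.
# --output-watch-events prints the event type (ADDED/MODIFIED/DELETED) per object.
kubectl get wishihadchosennoxus.mygroup.example.com --watch --output-watch-events
```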
+ +• [SLOW TEST:63.214 seconds] +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + CustomResourceDefinition Watch + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 + watch on custom resource definition objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":100,"skipped":2065,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:59:43.469: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 22:59:43.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c67768c-1741-4d92-aea6-e8f81f4a74fa" in namespace "downward-api-5327" to be "Succeeded or Failed" +Aug 17 22:59:43.523: INFO: Pod "downwardapi-volume-7c67768c-1741-4d92-aea6-e8f81f4a74fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447296ms +Aug 17 22:59:45.532: INFO: Pod "downwardapi-volume-7c67768c-1741-4d92-aea6-e8f81f4a74fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012225555s +Aug 17 22:59:47.542: INFO: Pod "downwardapi-volume-7c67768c-1741-4d92-aea6-e8f81f4a74fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021409711s +STEP: Saw pod success +Aug 17 22:59:47.542: INFO: Pod "downwardapi-volume-7c67768c-1741-4d92-aea6-e8f81f4a74fa" satisfied condition "Succeeded or Failed" +Aug 17 22:59:47.545: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-7c67768c-1741-4d92-aea6-e8f81f4a74fa container client-container: +STEP: delete the pod +Aug 17 22:59:47.580: INFO: Waiting for pod downwardapi-volume-7c67768c-1741-4d92-aea6-e8f81f4a74fa to disappear +Aug 17 22:59:47.583: INFO: Pod downwardapi-volume-7c67768c-1741-4d92-aea6-e8f81f4a74fa no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 22:59:47.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5327" for this suite. 
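
> A minimal pod that mirrors what this test checks, exposing only the pod name through a downward API volume (pod, container, and image names are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.35
    command: ["sh", "-c", "cat /etc/podinfo/podname"]  # prints the pod's own name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
```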
+•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":101,"skipped":2125,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 22:59:47.599: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:00:01.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-3972" for this suite. +STEP: Destroying namespace "nsdeletetest-5005" for this suite. +Aug 17 23:00:01.748: INFO: Namespace nsdeletetest-5005 was already deleted +STEP: Destroying namespace "nsdeletetest-9981" for this suite. 
+ +• [SLOW TEST:14.156 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":102,"skipped":2153,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:00:01.756: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:00:01.835: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4e7290b9-e3e0-4aea-b3c1-261bc8be7960", Controller:(*bool)(0xc006dd7f2a), BlockOwnerDeletion:(*bool)(0xc006dd7f2b)}} +Aug 17 23:00:01.845: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8e8e5af8-3607-4d6d-88a1-20ce6f823143", Controller:(*bool)(0xc006dfc25e), BlockOwnerDeletion:(*bool)(0xc006dfc25f)}} +Aug 17 23:00:01.855: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4299ec09-b559-4062-8f14-bb2b70e40e37", Controller:(*bool)(0xc006dfc58e), BlockOwnerDeletion:(*bool)(0xc006dfc58f)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:00:06.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-789" for this suite. 
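
> The garbage-collector test above wires three pods into an ownership circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and confirms GC is not deadlocked by it. A sketch of constructing such a circle by patching in the server-assigned UIDs (all names illustrative):

```shell
for p in pod1 pod2 pod3; do
  kubectl run "$p" --image=k8s.gcr.io/pause:3.6
done
UID1=$(kubectl get pod pod1 -o jsonpath='{.metadata.uid}')
UID2=$(kubectl get pod pod2 -o jsonpath='{.metadata.uid}')
UID3=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
# Each pod claims another as its owner, closing the circle.
kubectl patch pod pod1 --type=merge \
  -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$UID3\"}]}}"
kubectl patch pod pod2 --type=merge \
  -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod1\",\"uid\":\"$UID1\"}]}}"
kubectl patch pod pod3 --type=merge \
  -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod2\",\"uid\":\"$UID2\"}]}}"
```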
+ +• [SLOW TEST:5.134 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not be blocked by dependency circle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":103,"skipped":2180,"failed":0} +SSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:00:06.890: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod pod-subpath-test-configmap-pqz6 +STEP: Creating a pod to test atomic-volume-subpath +Aug 17 23:00:06.960: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pqz6" in namespace "subpath-4290" to be "Succeeded or Failed" +Aug 17 23:00:06.964: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.794109ms +Aug 17 23:00:08.972: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=true. Elapsed: 2.011719795s +Aug 17 23:00:10.976: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=true. Elapsed: 4.016304682s +Aug 17 23:00:12.985: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=true. Elapsed: 6.024690602s +Aug 17 23:00:14.991: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=true. Elapsed: 8.0312997s +Aug 17 23:00:16.998: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=true. Elapsed: 10.038187566s +Aug 17 23:00:19.005: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=true. Elapsed: 12.044768562s +Aug 17 23:00:21.009: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=true. Elapsed: 14.049244418s +Aug 17 23:00:23.015: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=true. Elapsed: 16.055087824s +Aug 17 23:00:25.024: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=true. Elapsed: 18.06358911s +Aug 17 23:00:27.032: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=true. Elapsed: 20.071941874s +Aug 17 23:00:29.038: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.077605793s +Aug 17 23:00:31.045: INFO: Pod "pod-subpath-test-configmap-pqz6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.084570158s +STEP: Saw pod success +Aug 17 23:00:31.045: INFO: Pod "pod-subpath-test-configmap-pqz6" satisfied condition "Succeeded or Failed" +Aug 17 23:00:31.048: INFO: Trying to get logs from node 195.17.65.231 pod pod-subpath-test-configmap-pqz6 container test-container-subpath-configmap-pqz6: +STEP: delete the pod +Aug 17 23:00:31.071: INFO: Waiting for pod pod-subpath-test-configmap-pqz6 to disappear +Aug 17 23:00:31.075: INFO: Pod pod-subpath-test-configmap-pqz6 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-pqz6 +Aug 17 23:00:31.075: INFO: Deleting pod "pod-subpath-test-configmap-pqz6" in namespace "subpath-4290" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:00:31.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4290" for this suite. + +• [SLOW TEST:24.199 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":346,"completed":104,"skipped":2185,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:00:31.090: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6258.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6258.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6258.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6258.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod 
+STEP: looking for the results for each expected name from probers +Aug 17 23:00:33.160: INFO: DNS probes using dns-6258/dns-test-7819f8fd-649d-4c7b-980e-ab2b094683d2 succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:00:33.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-6258" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":105,"skipped":2220,"failed":0} +SSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:00:33.187: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should mount projected service account token [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test service account token: +Aug 17 23:00:33.222: INFO: Waiting up to 5m0s for pod "test-pod-af770777-0a97-4516-bb36-475d8a538586" in namespace "svcaccounts-5086" to be "Succeeded or Failed" +Aug 17 23:00:33.232: INFO: Pod "test-pod-af770777-0a97-4516-bb36-475d8a538586": Phase="Pending", Reason="", readiness=false. Elapsed: 9.898027ms +Aug 17 23:00:35.238: INFO: Pod "test-pod-af770777-0a97-4516-bb36-475d8a538586": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016648732s +Aug 17 23:00:37.246: INFO: Pod "test-pod-af770777-0a97-4516-bb36-475d8a538586": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024561076s +STEP: Saw pod success +Aug 17 23:00:37.246: INFO: Pod "test-pod-af770777-0a97-4516-bb36-475d8a538586" satisfied condition "Succeeded or Failed" +Aug 17 23:00:37.249: INFO: Trying to get logs from node 195.17.65.231 pod test-pod-af770777-0a97-4516-bb36-475d8a538586 container agnhost-container: +STEP: delete the pod +Aug 17 23:00:37.282: INFO: Waiting for pod test-pod-af770777-0a97-4516-bb36-475d8a538586 to disappear +Aug 17 23:00:37.284: INFO: Pod test-pod-af770777-0a97-4516-bb36-475d8a538586 no longer exists +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:00:37.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-5086" for this suite. 
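
> What this test mounts is a projected volume whose source is a short-lived service account token. A minimal sketch (names, image, and expiry below are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-token-demo
spec:
  restartPolicy: Never
  containers:
  - name: token-reader
    image: busybox:1.35
    command: ["sh", "-c", "wc -c /var/run/secrets/tokens/sa-token"]  # token file exists, non-empty
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: sa-token
          expirationSeconds: 3600
EOF
```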
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":106,"skipped":2227,"failed":0} +SSSSSSSS +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:00:37.297: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a cronjob +STEP: Ensuring more than one job is running at a time +STEP: Ensuring at least two running jobs exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:02:01.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-1581" for this suite. + +• [SLOW TEST:84.073 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should schedule multiple jobs concurrently [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":107,"skipped":2235,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:02:01.371: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0644 on node default medium +Aug 17 23:02:01.426: INFO: Waiting up to 5m0s for pod "pod-8bdbc57e-bd6a-4a30-92f3-8a251eae011f" in namespace "emptydir-255" to be "Succeeded or Failed" +Aug 17 23:02:01.436: INFO: Pod "pod-8bdbc57e-bd6a-4a30-92f3-8a251eae011f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.467161ms +Aug 17 23:02:03.442: INFO: Pod "pod-8bdbc57e-bd6a-4a30-92f3-8a251eae011f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015973435s +Aug 17 23:02:05.448: INFO: Pod "pod-8bdbc57e-bd6a-4a30-92f3-8a251eae011f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021922564s +STEP: Saw pod success +Aug 17 23:02:05.448: INFO: Pod "pod-8bdbc57e-bd6a-4a30-92f3-8a251eae011f" satisfied condition "Succeeded or Failed" +Aug 17 23:02:05.451: INFO: Trying to get logs from node 195.17.65.231 pod pod-8bdbc57e-bd6a-4a30-92f3-8a251eae011f container test-container: +STEP: delete the pod +Aug 17 23:02:05.480: INFO: Waiting for pod pod-8bdbc57e-bd6a-4a30-92f3-8a251eae011f to disappear +Aug 17 23:02:05.483: INFO: Pod pod-8bdbc57e-bd6a-4a30-92f3-8a251eae011f no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:02:05.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-255" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":108,"skipped":2237,"failed":0} +SSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:02:05.494: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Aug 17 23:02:05.535: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 17 23:03:05.711: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:03:05.716: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 +[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:03:05.771: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. +Aug 17 23:03:05.775: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. 
+[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:03:05.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-2677" for this suite. +[AfterEach] PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:03:05.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-6759" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:60.405 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + PriorityClass endpoints + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 + verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":109,"skipped":2245,"failed":0} +SS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:03:05.899: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating the pod +Aug 17 23:03:05.929: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:03:11.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-1304" for this suite. 
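
> The init-container test creates a `restartPolicy: Never` pod and verifies both init containers run to completion, in order, before the app container starts. A minimal sketch (names and images are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:   # executed sequentially; each must succeed before the next starts
  - name: init-1
    image: busybox:1.35
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox:1.35
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: app
    image: busybox:1.35
    command: ["sh", "-c", "echo app ran after both init containers"]
EOF
```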
+ +• [SLOW TEST:5.891 seconds] +[sig-node] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":110,"skipped":2247,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:03:11.791: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward api env vars +Aug 17 23:03:11.826: INFO: Waiting up to 5m0s for pod "downward-api-9a05bcf9-10a4-4465-9b51-8fca077cb20a" in namespace "downward-api-9724" to be "Succeeded or Failed" +Aug 17 23:03:11.828: INFO: Pod "downward-api-9a05bcf9-10a4-4465-9b51-8fca077cb20a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.722066ms +Aug 17 23:03:13.834: INFO: Pod "downward-api-9a05bcf9-10a4-4465-9b51-8fca077cb20a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00835792s +Aug 17 23:03:15.843: INFO: Pod "downward-api-9a05bcf9-10a4-4465-9b51-8fca077cb20a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017346518s +STEP: Saw pod success +Aug 17 23:03:15.843: INFO: Pod "downward-api-9a05bcf9-10a4-4465-9b51-8fca077cb20a" satisfied condition "Succeeded or Failed" +Aug 17 23:03:15.848: INFO: Trying to get logs from node 195.17.65.231 pod downward-api-9a05bcf9-10a4-4465-9b51-8fca077cb20a container dapi-container: +STEP: delete the pod +Aug 17 23:03:15.872: INFO: Waiting for pod downward-api-9a05bcf9-10a4-4465-9b51-8fca077cb20a to disappear +Aug 17 23:03:15.876: INFO: Pod downward-api-9a05bcf9-10a4-4465-9b51-8fca077cb20a no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:03:15.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9724" for this suite. 
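
> Exposing the pod UID as an environment variable, as this test verifies, needs only a `fieldRef` on `metadata.uid`; a minimal sketch with illustrative names:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.35
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
```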
+•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":111,"skipped":2270,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:03:15.888: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating service nodeport-test with type=NodePort in namespace services-5203 +STEP: creating replication controller nodeport-test in namespace services-5203 +I0817 23:03:15.946668 20 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-5203, replica count: 2 +I0817 23:03:18.998083 20 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 17 23:03:18.998: INFO: Creating new exec pod +Aug 17 23:03:22.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5203 exec execpodsg8pt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' +Aug 17 23:03:22.579: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Aug 17 23:03:22.579: INFO: stdout: "nodeport-test-jf6pq" +Aug 17 23:03:22.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5203 exec execpodsg8pt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.105.103.149 80' +Aug 17 23:03:22.717: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.105.103.149 80\nConnection to 10.105.103.149 80 port [tcp/http] succeeded!\n" +Aug 17 23:03:22.717: INFO: stdout: "nodeport-test-jf6pq" +Aug 17 23:03:22.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5203 exec execpodsg8pt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.131.205 32486' +Aug 17 23:03:22.854: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.131.205 32486\nConnection to 195.17.131.205 32486 port [tcp/*] succeeded!\n" +Aug 17 23:03:22.854: INFO: stdout: "" +Aug 17 23:03:23.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5203 exec execpodsg8pt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.131.205 32486' +Aug 17 23:03:23.989: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.131.205 32486\nConnection to 195.17.131.205 32486 port [tcp/*] succeeded!\n" +Aug 17 23:03:23.989: INFO: stdout: "nodeport-test-8skm7" +Aug 17 23:03:23.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5203 exec 
execpodsg8pt -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.65.231 32486' +Aug 17 23:03:24.129: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.65.231 32486\nConnection to 195.17.65.231 32486 port [tcp/*] succeeded!\n" +Aug 17 23:03:24.129: INFO: stdout: "nodeport-test-jf6pq" +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:03:24.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5203" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:8.254 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to create a functioning NodePort service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":112,"skipped":2287,"failed":0} +SSSSS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:03:24.143: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +STEP: listing events with field selection filtering on source +STEP: listing events with field selection filtering on reportingController +STEP: getting the test event +STEP: patching the test event +STEP: getting the test event +STEP: updating the test event +STEP: getting the test event +STEP: deleting the test event +STEP: listing events in all namespaces +STEP: listing events in test namespace +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:03:24.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-3452" for this suite. 
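
> The Events API operations exercised here (listing in all namespaces, listing in one namespace, field-selector filtering) have rough kubectl equivalents against the `events.k8s.io/v1` group; the controller name below is illustrative:

```shell
kubectl get events.v1.events.k8s.io --all-namespaces
kubectl get events.v1.events.k8s.io -n default
kubectl get events.v1.events.k8s.io -n default \
  --field-selector reportingController=demo-controller
```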
+•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":113,"skipped":2292,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:03:24.262: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 23:03:24.293: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a71764a6-3cef-49ed-86ed-9b659092ad10" in namespace "projected-5248" to be "Succeeded or Failed" +Aug 17 23:03:24.297: INFO: Pod "downwardapi-volume-a71764a6-3cef-49ed-86ed-9b659092ad10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.685871ms +Aug 17 23:03:26.305: INFO: Pod "downwardapi-volume-a71764a6-3cef-49ed-86ed-9b659092ad10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012014902s +Aug 17 23:03:28.310: INFO: Pod "downwardapi-volume-a71764a6-3cef-49ed-86ed-9b659092ad10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01692458s +STEP: Saw pod success +Aug 17 23:03:28.310: INFO: Pod "downwardapi-volume-a71764a6-3cef-49ed-86ed-9b659092ad10" satisfied condition "Succeeded or Failed" +Aug 17 23:03:28.314: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-a71764a6-3cef-49ed-86ed-9b659092ad10 container client-container: +STEP: delete the pod +Aug 17 23:03:28.334: INFO: Waiting for pod downwardapi-volume-a71764a6-3cef-49ed-86ed-9b659092ad10 to disappear +Aug 17 23:03:28.336: INFO: Pod downwardapi-volume-a71764a6-3cef-49ed-86ed-9b659092ad10 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:03:28.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5248" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":114,"skipped":2314,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:03:28.353: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test override all +Aug 17 23:03:28.385: INFO: Waiting up to 5m0s for pod "client-containers-08c2b89e-1d5e-4ae1-b750-58ef339baf26" in namespace "containers-2136" to be "Succeeded or Failed" +Aug 17 23:03:28.389: INFO: Pod "client-containers-08c2b89e-1d5e-4ae1-b750-58ef339baf26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.660332ms +Aug 17 23:03:30.397: INFO: Pod "client-containers-08c2b89e-1d5e-4ae1-b750-58ef339baf26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012363941s +Aug 17 23:03:32.402: INFO: Pod "client-containers-08c2b89e-1d5e-4ae1-b750-58ef339baf26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017431103s +STEP: Saw pod success +Aug 17 23:03:32.403: INFO: Pod "client-containers-08c2b89e-1d5e-4ae1-b750-58ef339baf26" satisfied condition "Succeeded or Failed" +Aug 17 23:03:32.405: INFO: Trying to get logs from node 195.17.65.231 pod client-containers-08c2b89e-1d5e-4ae1-b750-58ef339baf26 container agnhost-container: +STEP: delete the pod +Aug 17 23:03:32.440: INFO: Waiting for pod client-containers-08c2b89e-1d5e-4ae1-b750-58ef339baf26 to disappear +Aug 17 23:03:32.443: INFO: Pod client-containers-08c2b89e-1d5e-4ae1-b750-58ef339baf26 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:03:32.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-2136" for this suite. 
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":115,"skipped":2332,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:03:32.458: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward api env vars +Aug 17 23:03:32.493: INFO: Waiting up to 5m0s for pod "downward-api-3a9db8cc-cb74-4aeb-8bd9-e5a918f10f24" in namespace "downward-api-6740" to be "Succeeded or Failed" +Aug 17 23:03:32.495: INFO: Pod "downward-api-3a9db8cc-cb74-4aeb-8bd9-e5a918f10f24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.916322ms +Aug 17 23:03:34.502: INFO: Pod "downward-api-3a9db8cc-cb74-4aeb-8bd9-e5a918f10f24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009385279s +Aug 17 23:03:36.509: INFO: Pod "downward-api-3a9db8cc-cb74-4aeb-8bd9-e5a918f10f24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016781018s +STEP: Saw pod success +Aug 17 23:03:36.509: INFO: Pod "downward-api-3a9db8cc-cb74-4aeb-8bd9-e5a918f10f24" satisfied condition "Succeeded or Failed" +Aug 17 23:03:36.513: INFO: Trying to get logs from node 195.17.65.231 pod downward-api-3a9db8cc-cb74-4aeb-8bd9-e5a918f10f24 container dapi-container: +STEP: delete the pod +Aug 17 23:03:36.534: INFO: Waiting for pod downward-api-3a9db8cc-cb74-4aeb-8bd9-e5a918f10f24 to disappear +Aug 17 23:03:36.538: INFO: Pod downward-api-3a9db8cc-cb74-4aeb-8bd9-e5a918f10f24 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:03:36.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6740" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":116,"skipped":2417,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:03:36.549: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9579.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9579.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9579.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 23.1.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.1.23_udp@PTR;check="$$(dig +tcp +noall +answer +search 23.1.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.1.23_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9579.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9579.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9579.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9579.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9579.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 23.1.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.1.23_udp@PTR;check="$$(dig +tcp +noall +answer +search 23.1.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.1.23_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 17 23:03:38.632: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:38.636: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:38.639: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:38.643: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:38.663: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:38.667: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:38.670: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:38.673: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:38.686: INFO: Lookups using dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local] + +Aug 17 23:03:43.692: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:43.696: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:43.700: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:43.703: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:43.721: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:43.726: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:43.729: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:43.733: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:43.748: INFO: Lookups using dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local 
wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local] + +Aug 17 23:03:48.691: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:48.695: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:48.699: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:48.702: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:48.723: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:48.726: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:48.730: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:48.734: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:48.747: INFO: Lookups using dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local] + +Aug 17 23:03:53.692: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:53.696: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod 
dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:53.700: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:53.703: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:53.720: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:53.723: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:53.727: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:53.730: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:53.745: INFO: Lookups using dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local] + +Aug 17 23:03:58.694: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:58.698: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:58.701: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:58.705: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:58.722: INFO: Unable to 
read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:58.725: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:58.729: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:58.732: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:03:58.745: INFO: Lookups using dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local] + +Aug 17 23:04:03.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:04:03.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:04:03.702: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:04:03.705: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:04:03.723: INFO: Unable to read jessie_udp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:04:03.726: INFO: Unable to read jessie_tcp@dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:04:03.729: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods 
dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:04:03.733: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local from pod dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680: the server could not find the requested resource (get pods dns-test-dd1c9215-c085-4302-baeb-159f3c849680) +Aug 17 23:04:03.748: INFO: Lookups using dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680 failed for: [wheezy_udp@dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@dns-test-service.dns-9579.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_udp@dns-test-service.dns-9579.svc.cluster.local jessie_tcp@dns-test-service.dns-9579.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9579.svc.cluster.local] + +Aug 17 23:04:08.746: INFO: DNS probes using dns-9579/dns-test-dd1c9215-c085-4302-baeb-159f3c849680 succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:04:08.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9579" for this suite. + +• [SLOW TEST:32.274 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide DNS for services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":346,"completed":117,"skipped":2433,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:04:08.823: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-8187 +STEP: changing the ExternalName service to type=NodePort +STEP: creating replication controller externalname-service in namespace services-8187 +I0817 23:04:08.948378 20 runners.go:193] Created replication controller with name: externalname-service, namespace: services-8187, replica count: 2 +Aug 17 23:04:12.000: INFO: Creating new exec pod +I0817 23:04:12.000382 20 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 17 23:04:15.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-8187 exec execpodmbgw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 17 23:04:15.157: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 17 23:04:15.157: INFO: stdout: "" +Aug 17 23:04:16.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-8187 exec execpodmbgw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 17 23:04:16.310: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 17 23:04:16.310: INFO: stdout: "" +Aug 17 23:04:17.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-8187 exec execpodmbgw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 17 23:04:17.293: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 17 23:04:17.294: INFO: stdout: "" +Aug 17 23:04:18.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-8187 exec execpodmbgw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 17 23:04:18.296: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 17 23:04:18.296: INFO: stdout: "externalname-service-86sjl" +Aug 17 23:04:18.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-8187 exec execpodmbgw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.109.173.184 80' +Aug 17 23:04:18.437: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.109.173.184 80\nConnection to 10.109.173.184 80 port [tcp/http] succeeded!\n" +Aug 17 23:04:18.437: INFO: stdout: "externalname-service-fm92k" +Aug 17 23:04:18.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-8187 exec execpodmbgw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.131.205 32437' +Aug 17 23:04:18.568: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.131.205 32437\nConnection to 195.17.131.205 32437 port [tcp/*] succeeded!\n" +Aug 17 23:04:18.568: INFO: stdout: "externalname-service-86sjl" +Aug 17 23:04:18.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-8187 exec execpodmbgw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.65.231 32437' +Aug 17 23:04:18.717: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 195.17.65.231 32437\nConnection to 195.17.65.231 32437 port [tcp/*] succeeded!\n" +Aug 17 23:04:18.717: INFO: stdout: "" +Aug 17 23:04:19.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-8187 exec execpodmbgw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.65.231 32437' +Aug 17 23:04:19.851: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.65.231 32437\nConnection to 195.17.65.231 32437 port [tcp/*] succeeded!\n" +Aug 17 23:04:19.851: INFO: stdout: "" +Aug 17 23:04:20.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-8187 exec 
execpodmbgw4 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.65.231 32437' +Aug 17 23:04:20.846: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.65.231 32437\nConnection to 195.17.65.231 32437 port [tcp/*] succeeded!\n" +Aug 17 23:04:20.846: INFO: stdout: "externalname-service-fm92k" +Aug 17 23:04:20.846: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:04:20.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8187" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:12.069 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to NodePort [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":118,"skipped":2438,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:04:20.892: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 23:04:21.411: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 23:04:24.455: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API +Aug 17 23:04:24.484: INFO: Waiting for webhook configuration to be ready... 
+STEP: create a pod that should be updated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:04:24.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-2427" for this suite. +STEP: Destroying namespace "webhook-2427-markers" for this suite. +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":119,"skipped":2444,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:04:24.739: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating all guestbook components +Aug 17 23:04:24.772: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Aug 17 23:04:24.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 create -f -' +Aug 17 23:04:26.700: INFO: stderr: "" +Aug 17 23:04:26.700: INFO: stdout: "service/agnhost-replica created\n" +Aug 17 23:04:26.700: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Aug 17 23:04:26.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 create -f -' +Aug 17 23:04:28.519: INFO: stderr: "" +Aug 17 23:04:28.519: INFO: stdout: "service/agnhost-primary created\n" +Aug 17 23:04:28.519: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Aug 17 23:04:28.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 create -f -' +Aug 17 23:04:28.805: INFO: stderr: "" +Aug 17 23:04:28.805: INFO: stdout: "service/frontend created\n" +Aug 17 23:04:28.805: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: k8s.gcr.io/e2e-test-images/agnhost:2.33 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Aug 17 23:04:28.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 create -f -' +Aug 17 23:04:29.049: INFO: stderr: "" +Aug 17 23:04:29.049: INFO: stdout: "deployment.apps/frontend created\n" +Aug 17 23:04:29.049: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: k8s.gcr.io/e2e-test-images/agnhost:2.33 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Aug 17 23:04:29.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 create -f -' +Aug 17 23:04:29.309: INFO: stderr: "" +Aug 17 23:04:29.309: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Aug 17 23:04:29.309: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: k8s.gcr.io/e2e-test-images/agnhost:2.33 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Aug 17 23:04:29.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 create -f -' +Aug 17 23:04:29.559: INFO: stderr: "" +Aug 17 23:04:29.559: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app +Aug 17 23:04:29.559: INFO: Waiting for all frontend pods to be Running. +Aug 17 23:04:34.610: INFO: Waiting for frontend to serve content. +Aug 17 23:04:34.624: INFO: Trying to add a new entry to the guestbook. +Aug 17 23:04:34.640: INFO: Verifying that added entry can be retrieved. +STEP: using delete to clean up resources +Aug 17 23:04:34.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 delete --grace-period=0 --force -f -' +Aug 17 23:04:34.735: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Aug 17 23:04:34.735: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources +Aug 17 23:04:34.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 delete --grace-period=0 --force -f -' +Aug 17 23:04:34.948: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 17 23:04:34.948: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Aug 17 23:04:34.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 delete --grace-period=0 --force -f -' +Aug 17 23:04:35.044: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 17 23:04:35.044: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Aug 17 23:04:35.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 delete --grace-period=0 --force -f -' +Aug 17 23:04:35.115: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 17 23:04:35.115: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources +Aug 17 23:04:35.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 delete --grace-period=0 --force -f -' +Aug 17 23:04:35.184: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 17 23:04:35.184: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources +Aug 17 23:04:35.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7753 delete --grace-period=0 --force -f -' +Aug 17 23:04:35.249: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 17 23:04:35.249: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:04:35.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7753" for this suite. 
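+
+The guestbook test above streams each manifest to `kubectl create -f -` over stdin and tears everything down with forced deletes; the "Immediate deletion does not wait..." warnings are expected with `--grace-period=0 --force`. An equivalent one-shot cleanup by name, using the resource names from this run's log:
+
+```shell
+kubectl --namespace kubectl-7753 delete --grace-period=0 --force \
+  service/agnhost-replica service/agnhost-primary service/frontend \
+  deployment.apps/frontend deployment.apps/agnhost-primary \
+  deployment.apps/agnhost-replica
+```
+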
+ +• [SLOW TEST:10.525 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Guestbook application + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339 + should create and stop a working application [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":346,"completed":120,"skipped":2454,"failed":0} +SSSSS +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:04:35.267: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:04:35.290: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename disruption-2 +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be processed +STEP: listing a collection of PDBs across all namespaces +STEP: listing a collection of PDBs in namespace disruption-1854 +STEP: deleting a collection of PDBs +STEP: Waiting for the PDB collection to be deleted +[AfterEach] Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:04:41.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-2-5611" for this suite. +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:04:41.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-1854" for this suite. 
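+
+The DisruptionController test above creates several PodDisruptionBudgets, lists them across all namespaces and within one, then deletes them as a collection. A hedged kubectl replay of that flow, with an illustrative namespace and selector:
+
+```shell
+kubectl create namespace pdb-demo
+kubectl --namespace pdb-demo create poddisruptionbudget demo \
+  --selector=app=demo --min-available=1
+kubectl get poddisruptionbudgets --all-namespaces      # list across namespaces
+kubectl --namespace pdb-demo get poddisruptionbudgets  # list in one namespace
+kubectl --namespace pdb-demo delete poddisruptionbudgets --all  # delete the collection
+kubectl delete namespace pdb-demo
+```
+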
+ +• [SLOW TEST:6.151 seconds] +[sig-apps] DisruptionController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Listing PodDisruptionBudgets for all namespaces + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 + should list and delete a collection of PodDisruptionBudgets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":121,"skipped":2459,"failed":0} +SSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:04:41.419: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test env composition +Aug 17 23:04:41.455: INFO: Waiting up to 5m0s for pod "var-expansion-e09b4c9e-24d2-4a12-aa58-498d271200ee" in namespace "var-expansion-4434" to be "Succeeded or Failed" +Aug 17 23:04:41.464: INFO: Pod "var-expansion-e09b4c9e-24d2-4a12-aa58-498d271200ee": Phase="Pending", Reason="", readiness=false. Elapsed: 9.751712ms +Aug 17 23:04:43.469: INFO: Pod "var-expansion-e09b4c9e-24d2-4a12-aa58-498d271200ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014862422s +Aug 17 23:04:45.478: INFO: Pod "var-expansion-e09b4c9e-24d2-4a12-aa58-498d271200ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02325454s +STEP: Saw pod success +Aug 17 23:04:45.478: INFO: Pod "var-expansion-e09b4c9e-24d2-4a12-aa58-498d271200ee" satisfied condition "Succeeded or Failed" +Aug 17 23:04:45.482: INFO: Trying to get logs from node 195.17.65.231 pod var-expansion-e09b4c9e-24d2-4a12-aa58-498d271200ee container dapi-container: +STEP: delete the pod +Aug 17 23:04:45.506: INFO: Waiting for pod var-expansion-e09b4c9e-24d2-4a12-aa58-498d271200ee to disappear +Aug 17 23:04:45.512: INFO: Pod var-expansion-e09b4c9e-24d2-4a12-aa58-498d271200ee no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:04:45.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4434" for this suite. 
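+
+The Variable Expansion test above relies on Kubernetes' `$(VAR)` dependent-env-var syntax: a later `env` entry may reference any variable declared earlier in the same list. A sketch with illustrative names and `busybox:1.29` as an assumed image:
+
+```shell
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: var-expansion-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox:1.29
+    command: ["sh", "-c", "echo $COMPOSED"]
+    env:
+    - name: FOO
+      value: foo-value
+    - name: COMPOSED
+      value: "prefix-$(FOO)-suffix"   # expands to prefix-foo-value-suffix
+EOF
+```
+
+Note that ordering matters: `$(FOO)` only expands because `FOO` is declared before `COMPOSED`.
+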
+•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":122,"skipped":2465,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:04:45.524: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward api env vars +Aug 17 23:04:45.566: INFO: Waiting up to 5m0s for pod "downward-api-201d6b2f-d37f-4aff-a5a7-5019eb6ff8dd" in namespace "downward-api-5857" to be "Succeeded or Failed" +Aug 17 23:04:45.569: INFO: Pod "downward-api-201d6b2f-d37f-4aff-a5a7-5019eb6ff8dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.959212ms +Aug 17 23:04:47.579: INFO: Pod "downward-api-201d6b2f-d37f-4aff-a5a7-5019eb6ff8dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013038872s +Aug 17 23:04:49.587: INFO: Pod "downward-api-201d6b2f-d37f-4aff-a5a7-5019eb6ff8dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020875747s +STEP: Saw pod success +Aug 17 23:04:49.587: INFO: Pod "downward-api-201d6b2f-d37f-4aff-a5a7-5019eb6ff8dd" satisfied condition "Succeeded or Failed" +Aug 17 23:04:49.591: INFO: Trying to get logs from node 195.17.65.231 pod downward-api-201d6b2f-d37f-4aff-a5a7-5019eb6ff8dd container dapi-container: +STEP: delete the pod +Aug 17 23:04:49.621: INFO: Waiting for pod downward-api-201d6b2f-d37f-4aff-a5a7-5019eb6ff8dd to disappear +Aug 17 23:04:49.624: INFO: Pod downward-api-201d6b2f-d37f-4aff-a5a7-5019eb6ff8dd no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:04:49.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5857" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":123,"skipped":2476,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:04:49.636: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0666 on node default medium +Aug 17 23:04:49.677: INFO: Waiting up to 5m0s for pod "pod-93f66425-a4b7-4f0e-ad87-f09cd844a531" in namespace "emptydir-7219" to be "Succeeded or Failed" +Aug 17 23:04:49.685: INFO: Pod "pod-93f66425-a4b7-4f0e-ad87-f09cd844a531": Phase="Pending", Reason="", readiness=false. Elapsed: 7.947727ms +Aug 17 23:04:51.690: INFO: Pod "pod-93f66425-a4b7-4f0e-ad87-f09cd844a531": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013419293s +Aug 17 23:04:53.697: INFO: Pod "pod-93f66425-a4b7-4f0e-ad87-f09cd844a531": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020355808s +STEP: Saw pod success +Aug 17 23:04:53.697: INFO: Pod "pod-93f66425-a4b7-4f0e-ad87-f09cd844a531" satisfied condition "Succeeded or Failed" +Aug 17 23:04:53.701: INFO: Trying to get logs from node 195.17.65.231 pod pod-93f66425-a4b7-4f0e-ad87-f09cd844a531 container test-container: +STEP: delete the pod +Aug 17 23:04:53.727: INFO: Waiting for pod pod-93f66425-a4b7-4f0e-ad87-f09cd844a531 to disappear +Aug 17 23:04:53.731: INFO: Pod pod-93f66425-a4b7-4f0e-ad87-f09cd844a531 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:04:53.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7219" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":124,"skipped":2484,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:04:53.742: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name s-test-opt-del-7c03c789-d6d9-4270-9a59-6e3e9b2c0e6b +STEP: Creating secret with name s-test-opt-upd-1214b9bf-43f1-4d28-b7e1-895c0170ffa6 +STEP: Creating the pod +Aug 17 23:04:53.814: INFO: The status of Pod pod-secrets-c8d1e099-cbab-45c2-9163-600d3a4be430 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:04:55.823: INFO: The status of Pod pod-secrets-c8d1e099-cbab-45c2-9163-600d3a4be430 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-7c03c789-d6d9-4270-9a59-6e3e9b2c0e6b +STEP: Updating secret s-test-opt-upd-1214b9bf-43f1-4d28-b7e1-895c0170ffa6 +STEP: Creating secret with name s-test-opt-create-4a64a6b8-3a97-4b01-aa0f-630fbee105e4 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:04:57.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-801" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":125,"skipped":2496,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:04:57.923: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-4515 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-4515 +STEP: creating replication controller externalsvc in namespace services-4515 +I0817 23:04:58.010921 20 runners.go:193] Created replication controller with name: externalsvc, namespace: services-4515, replica count: 2 +I0817 23:05:01.062625 20 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName +Aug 17 23:05:01.108: INFO: Creating new exec pod +Aug 17 23:05:03.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-4515 exec execpodw64ln -- /bin/sh -x -c nslookup nodeport-service.services-4515.svc.cluster.local' +Aug 17 23:05:03.316: INFO: stderr: "+ nslookup nodeport-service.services-4515.svc.cluster.local\n" +Aug 17 23:05:03.316: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4515.svc.cluster.local\tcanonical name = externalsvc.services-4515.svc.cluster.local.\nName:\texternalsvc.services-4515.svc.cluster.local\nAddress: 10.101.197.0\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-4515, will wait for the garbage collector to delete the pods +Aug 17 23:05:03.384: INFO: Deleting ReplicationController externalsvc took: 13.405794ms +Aug 17 23:05:03.485: INFO: Terminating ReplicationController externalsvc pods took: 100.959278ms +Aug 17 23:05:05.616: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:05:05.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4515" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:7.731 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":126,"skipped":2522,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:05:05.656: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating service in namespace services-5294 +STEP: creating service affinity-clusterip-transition in namespace services-5294 +STEP: creating replication controller affinity-clusterip-transition in namespace services-5294 +I0817 23:05:05.718875 20 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-5294, replica count: 3 +I0817 23:05:08.771488 20 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 17 23:05:08.781: INFO: Creating new exec pod +Aug 17 23:05:11.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5294 exec execpod-affinity4gwnk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' +Aug 17 23:05:11.950: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Aug 17 23:05:11.950: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:05:11.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5294 exec execpod-affinity4gwnk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.102.222.24 80' +Aug 17 23:05:12.087: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.102.222.24 80\nConnection to 10.102.222.24 80 port [tcp/http] succeeded!\n" +Aug 17 23:05:12.087: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; 
charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:05:12.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5294 exec execpod-affinity4gwnk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.222.24:80/ ; done' +Aug 17 23:05:12.302: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n" +Aug 17 23:05:12.302: INFO: stdout: "\naffinity-clusterip-transition-czh5v\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-czh5v\naffinity-clusterip-transition-4phhl\naffinity-clusterip-transition-czh5v\naffinity-clusterip-transition-4phhl\naffinity-clusterip-transition-czh5v\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-czh5v\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-czh5v\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-4phhl\naffinity-clusterip-transition-4phhl\naffinity-clusterip-transition-czh5v" +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-czh5v +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-czh5v +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-4phhl +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-czh5v +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-4phhl +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-czh5v +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-czh5v +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-czh5v +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-4phhl +Aug 17 23:05:12.302: INFO: Received response from host: affinity-clusterip-transition-4phhl +Aug 17 23:05:12.302: INFO: Received response from host: 
affinity-clusterip-transition-czh5v +Aug 17 23:05:12.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5294 exec execpod-affinity4gwnk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.222.24:80/ ; done' +Aug 17 23:05:12.529: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.222.24:80/\n" +Aug 17 23:05:12.529: INFO: stdout: "\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7\naffinity-clusterip-transition-n46j7" +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 23:05:12.529: INFO: Received response from host: affinity-clusterip-transition-n46j7 +Aug 17 
23:05:12.529: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-5294, will wait for the garbage collector to delete the pods +Aug 17 23:05:12.614: INFO: Deleting ReplicationController affinity-clusterip-transition took: 8.504895ms +Aug 17 23:05:12.715: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.764356ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:05:14.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5294" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:9.014 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":127,"skipped":2576,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:05:14.671: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename tables +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:05:14.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "tables-7820" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":128,"skipped":2581,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:05:14.722: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 23:05:15.219: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 23:05:18.257: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:05:18.262: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:05:21.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7389" for this suite. +STEP: Destroying namespace "webhook-7389-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:6.872 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":129,"skipped":2588,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:05:21.603: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 23:05:21.690: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56be6ec2-8315-4efc-98fb-005694f92901" in namespace "downward-api-4505" to be "Succeeded or Failed" +Aug 17 23:05:21.697: INFO: Pod "downwardapi-volume-56be6ec2-8315-4efc-98fb-005694f92901": Phase="Pending", Reason="", readiness=false. Elapsed: 6.491428ms +Aug 17 23:05:23.703: INFO: Pod "downwardapi-volume-56be6ec2-8315-4efc-98fb-005694f92901": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012570103s +Aug 17 23:05:25.711: INFO: Pod "downwardapi-volume-56be6ec2-8315-4efc-98fb-005694f92901": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02085197s +STEP: Saw pod success +Aug 17 23:05:25.711: INFO: Pod "downwardapi-volume-56be6ec2-8315-4efc-98fb-005694f92901" satisfied condition "Succeeded or Failed" +Aug 17 23:05:25.715: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-56be6ec2-8315-4efc-98fb-005694f92901 container client-container: +STEP: delete the pod +Aug 17 23:05:25.736: INFO: Waiting for pod downwardapi-volume-56be6ec2-8315-4efc-98fb-005694f92901 to disappear +Aug 17 23:05:25.739: INFO: Pod downwardapi-volume-56be6ec2-8315-4efc-98fb-005694f92901 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:05:25.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4505" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":130,"skipped":2634,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:05:25.752: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test substitution in container's args +Aug 17 23:05:25.788: INFO: Waiting up to 5m0s for pod "var-expansion-ea1aa40a-0011-4e78-8aa9-a66d2e1780bf" in namespace "var-expansion-3660" to be "Succeeded or Failed" +Aug 17 23:05:25.790: INFO: Pod "var-expansion-ea1aa40a-0011-4e78-8aa9-a66d2e1780bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.474414ms +Aug 17 23:05:27.795: INFO: Pod "var-expansion-ea1aa40a-0011-4e78-8aa9-a66d2e1780bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00676977s +Aug 17 23:05:29.802: INFO: Pod "var-expansion-ea1aa40a-0011-4e78-8aa9-a66d2e1780bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014359779s +STEP: Saw pod success +Aug 17 23:05:29.802: INFO: Pod "var-expansion-ea1aa40a-0011-4e78-8aa9-a66d2e1780bf" satisfied condition "Succeeded or Failed" +Aug 17 23:05:29.808: INFO: Trying to get logs from node 195.17.65.231 pod var-expansion-ea1aa40a-0011-4e78-8aa9-a66d2e1780bf container dapi-container: +STEP: delete the pod +Aug 17 23:05:29.831: INFO: Waiting for pod var-expansion-ea1aa40a-0011-4e78-8aa9-a66d2e1780bf to disappear +Aug 17 23:05:29.834: INFO: Pod var-expansion-ea1aa40a-0011-4e78-8aa9-a66d2e1780bf no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:05:29.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-3660" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":131,"skipped":2642,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:05:29.850: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 23:05:29.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0950b02d-23d0-49a9-a0e8-22a7b3e37de1" in namespace "downward-api-5370" to be "Succeeded or Failed" +Aug 17 23:05:29.902: INFO: Pod "downwardapi-volume-0950b02d-23d0-49a9-a0e8-22a7b3e37de1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.246761ms +Aug 17 23:05:31.912: INFO: Pod "downwardapi-volume-0950b02d-23d0-49a9-a0e8-22a7b3e37de1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013517327s +Aug 17 23:05:33.918: INFO: Pod "downwardapi-volume-0950b02d-23d0-49a9-a0e8-22a7b3e37de1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019432645s +STEP: Saw pod success +Aug 17 23:05:33.918: INFO: Pod "downwardapi-volume-0950b02d-23d0-49a9-a0e8-22a7b3e37de1" satisfied condition "Succeeded or Failed" +Aug 17 23:05:33.923: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-0950b02d-23d0-49a9-a0e8-22a7b3e37de1 container client-container: +STEP: delete the pod +Aug 17 23:05:33.948: INFO: Waiting for pod downwardapi-volume-0950b02d-23d0-49a9-a0e8-22a7b3e37de1 to disappear +Aug 17 23:05:33.953: INFO: Pod downwardapi-volume-0950b02d-23d0-49a9-a0e8-22a7b3e37de1 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:05:33.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5370" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":132,"skipped":2690,"failed":0} +SSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:05:33.970: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 +STEP: create the container to handle the HTTPGet hook request. +Aug 17 23:05:34.019: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:05:36.026: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the pod with lifecycle hook +Aug 17 23:05:36.038: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:05:38.044: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Aug 17 23:05:38.057: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Aug 17 23:05:38.061: INFO: Pod pod-with-prestop-exec-hook still exists +Aug 17 23:05:40.062: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Aug 17 23:05:40.071: INFO: Pod pod-with-prestop-exec-hook still exists +Aug 17 23:05:42.062: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Aug 17 23:05:42.070: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:05:42.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-8966" for this suite. 
+ +• [SLOW TEST:8.119 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":133,"skipped":2697,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:05:42.091: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 23:05:43.104: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 23:05:46.136: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:05:46.143: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7290-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource while v1 is storage version +STEP: Patching Custom Resource Definition to set v2 as storage +STEP: Patching the custom resource while v2 is storage version +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:05:49.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-5317" for this suite. +STEP: Destroying namespace "webhook-5317-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:7.411 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource with different stored version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":134,"skipped":2729,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:05:49.502: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-126 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating stateful set ss in namespace statefulset-126 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-126 +Aug 17 23:05:49.562: INFO: Found 0 stateful pods, waiting for 1 +Aug 17 23:05:59.572: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Aug 17 23:05:59.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-126 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 17 23:05:59.716: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 17 23:05:59.716: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 17 23:05:59.716: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 17 23:05:59.721: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Aug 17 23:06:09.784: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Aug 17 23:06:09.784: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 17 23:06:09.887: INFO: POD NODE PHASE 
GRACE CONDITIONS +Aug 17 23:06:09.887: INFO: ss-0 195.17.65.231 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:05:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:05:49 +0000 UTC }] +Aug 17 23:06:09.887: INFO: +Aug 17 23:06:09.887: INFO: StatefulSet ss has not reached scale 3, at 1 +Aug 17 23:06:10.892: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995975031s +Aug 17 23:06:11.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990296905s +Aug 17 23:06:12.906: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983296182s +Aug 17 23:06:13.911: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97626533s +Aug 17 23:06:14.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971509185s +Aug 17 23:06:15.928: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.964764653s +Aug 17 23:06:16.934: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.955097076s +Aug 17 23:06:17.940: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.949484267s +Aug 17 23:06:18.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 942.654469ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-126 +Aug 17 23:06:19.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-126 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 17 23:06:20.087: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 17 23:06:20.088: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 17 23:06:20.088: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 17 23:06:20.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-126 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 17 23:06:20.227: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Aug 17 23:06:20.227: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 17 23:06:20.227: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 17 23:06:20.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-126 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 17 23:06:20.362: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Aug 17 23:06:20.362: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 17 23:06:20.362: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Aug 17 23:06:20.367: INFO: Waiting for pod ss-0 to enter Running - 
Ready=true, currently Running - Ready=false +Aug 17 23:06:30.377: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 23:06:30.377: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 23:06:30.377: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Aug 17 23:06:30.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-126 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 17 23:06:30.514: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 17 23:06:30.514: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 17 23:06:30.514: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 17 23:06:30.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-126 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 17 23:06:30.665: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 17 23:06:30.666: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 17 23:06:30.666: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 17 23:06:30.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-126 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 17 23:06:30.798: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 17 23:06:30.798: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 17 23:06:30.798: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 17 23:06:30.798: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 17 23:06:30.803: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 +Aug 17 23:06:40.814: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Aug 17 23:06:40.814: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Aug 17 23:06:40.814: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Aug 17 23:06:40.829: INFO: POD NODE PHASE GRACE CONDITIONS +Aug 17 23:06:40.830: INFO: ss-0 195.17.65.231 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:05:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:05:49 +0000 UTC }] +Aug 17 23:06:40.830: INFO: ss-1 195.17.131.205 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2022-08-17 23:06:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:09 +0000 UTC }] +Aug 17 23:06:40.830: INFO: ss-2 195.17.65.231 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:06:09 +0000 UTC }] +Aug 17 23:06:40.830: INFO: +Aug 17 23:06:40.830: INFO: StatefulSet ss has not reached scale 0, at 3 +Aug 17 23:06:41.837: INFO: Verifying statefulset ss doesn't scale past 0 for another 8.993918469s +Aug 17 23:06:42.841: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.987665401s +Aug 17 23:06:43.849: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.983073668s +Aug 17 23:06:44.854: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.975501497s +Aug 17 23:06:45.860: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.970320206s +Aug 17 23:06:46.867: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.964016332s +Aug 17 23:06:47.872: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.956428741s +Aug 17 23:06:48.877: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.95167946s +Aug 17 23:06:49.883: INFO: Verifying statefulset ss doesn't scale past 0 for another 947.630277ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-126 +Aug 17 23:06:50.888: INFO: Scaling statefulset ss to 0 +Aug 17 23:06:50.900: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 17 23:06:50.903: INFO: Deleting all statefulset in ns statefulset-126 +Aug 17 23:06:50.906: INFO: Scaling statefulset ss to 0 +Aug 17 23:06:50.916: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 17 23:06:50.919: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:06:50.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-126" for this suite. 
+ +• [SLOW TEST:61.442 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":135,"skipped":2747,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:06:50.944: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name configmap-test-volume-e0de4d4b-8a74-48e7-b78a-63efeca31689 +STEP: Creating a pod to test consume configMaps +Aug 17 23:06:50.995: INFO: Waiting up to 5m0s for pod "pod-configmaps-03a85bc5-77d7-47b1-9057-c79b1debad98" in namespace "configmap-4235" to be "Succeeded or Failed" +Aug 17 23:06:50.997: INFO: Pod "pod-configmaps-03a85bc5-77d7-47b1-9057-c79b1debad98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.675011ms +Aug 17 23:06:53.004: INFO: Pod "pod-configmaps-03a85bc5-77d7-47b1-9057-c79b1debad98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008964431s +Aug 17 23:06:55.011: INFO: Pod "pod-configmaps-03a85bc5-77d7-47b1-9057-c79b1debad98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015787352s +STEP: Saw pod success +Aug 17 23:06:55.011: INFO: Pod "pod-configmaps-03a85bc5-77d7-47b1-9057-c79b1debad98" satisfied condition "Succeeded or Failed" +Aug 17 23:06:55.014: INFO: Trying to get logs from node 195.17.65.231 pod pod-configmaps-03a85bc5-77d7-47b1-9057-c79b1debad98 container agnhost-container: +STEP: delete the pod +Aug 17 23:06:55.036: INFO: Waiting for pod pod-configmaps-03a85bc5-77d7-47b1-9057-c79b1debad98 to disappear +Aug 17 23:06:55.040: INFO: Pod pod-configmaps-03a85bc5-77d7-47b1-9057-c79b1debad98 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:06:55.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4235" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":136,"skipped":2778,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:06:55.054: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 23:06:55.426: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 23:06:58.458: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a mutating webhook configuration +Aug 17 23:06:58.481: INFO: Waiting for webhook configuration to be ready... +STEP: Updating a mutating webhook configuration's rules to not include the create operation +STEP: Creating a configMap that should not be mutated +STEP: Patching a mutating webhook configuration's rules to include the create operation +STEP: Creating a configMap that should be mutated +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:06:58.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-532" for this suite. +STEP: Destroying namespace "webhook-532-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":137,"skipped":2796,"failed":0} +S +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:06:58.856: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating service endpoint-test2 in namespace services-5416 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5416 to expose endpoints map[] +Aug 17 23:06:58.905: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found +Aug 17 23:06:59.916: INFO: successfully validated that service endpoint-test2 in namespace services-5416 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-5416 +Aug 17 23:06:59.928: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:07:01.935: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5416 to expose endpoints map[pod1:[80]] +Aug 17 23:07:01.948: INFO: successfully validated that service endpoint-test2 in namespace services-5416 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 +Aug 17 23:07:01.949: INFO: Creating new exec pod +Aug 17 23:07:04.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5416 exec execpodslpx7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Aug 17 23:07:05.119: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Aug 17 23:07:05.119: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:07:05.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5416 exec execpodslpx7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.107.161.98 80' +Aug 17 23:07:05.258: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 10.107.161.98 80\nConnection to 10.107.161.98 80 port [tcp/http] succeeded!\n" +Aug 17 23:07:05.258: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Creating pod pod2 in namespace services-5416 +Aug 17 23:07:05.269: INFO: The status of Pod pod2 is Pending, waiting for it 
to be Running (with Ready = true) +Aug 17 23:07:07.276: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5416 to expose endpoints map[pod1:[80] pod2:[80]] +Aug 17 23:07:07.295: INFO: successfully validated that service endpoint-test2 in namespace services-5416 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 +Aug 17 23:07:08.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5416 exec execpodslpx7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Aug 17 23:07:08.426: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Aug 17 23:07:08.426: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:07:08.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5416 exec execpodslpx7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.107.161.98 80' +Aug 17 23:07:08.562: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.107.161.98 80\nConnection to 10.107.161.98 80 port [tcp/http] succeeded!\n" +Aug 17 23:07:08.562: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod1 in namespace services-5416 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5416 to expose endpoints map[pod2:[80]] +Aug 17 23:07:08.604: INFO: successfully validated that service endpoint-test2 in namespace services-5416 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 +Aug 17 23:07:09.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5416 exec execpodslpx7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' +Aug 17 23:07:09.729: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Aug 17 23:07:09.729: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:07:09.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5416 exec execpodslpx7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.107.161.98 80' +Aug 17 23:07:09.868: INFO: stderr: "+ nc -v -t -w 2 10.107.161.98 80\n+ echo hostName\nConnection to 10.107.161.98 80 port [tcp/http] succeeded!\n" +Aug 17 23:07:09.868: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +STEP: Deleting pod pod2 in namespace services-5416 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5416 to expose endpoints map[] +Aug 17 23:07:11.895: INFO: successfully validated that service endpoint-test2 in namespace services-5416 exposes endpoints map[] +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:07:11.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5416" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:13.089 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should serve a basic endpoint from pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":346,"completed":138,"skipped":2797,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:07:11.945: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating projection with secret that has name projected-secret-test-b801dba4-0b3d-4307-ab7c-0cbc702360ce +STEP: Creating a pod to test consume secrets +Aug 17 23:07:11.992: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e10d1d19-e196-4f56-8df2-9dacdf142857" in namespace "projected-6043" to be "Succeeded or Failed" +Aug 17 23:07:11.995: INFO: Pod "pod-projected-secrets-e10d1d19-e196-4f56-8df2-9dacdf142857": Phase="Pending", Reason="", readiness=false. Elapsed: 3.258873ms +Aug 17 23:07:14.002: INFO: Pod "pod-projected-secrets-e10d1d19-e196-4f56-8df2-9dacdf142857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009726236s +Aug 17 23:07:16.009: INFO: Pod "pod-projected-secrets-e10d1d19-e196-4f56-8df2-9dacdf142857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016913872s +STEP: Saw pod success +Aug 17 23:07:16.009: INFO: Pod "pod-projected-secrets-e10d1d19-e196-4f56-8df2-9dacdf142857" satisfied condition "Succeeded or Failed" +Aug 17 23:07:16.012: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-secrets-e10d1d19-e196-4f56-8df2-9dacdf142857 container projected-secret-volume-test: +STEP: delete the pod +Aug 17 23:07:16.034: INFO: Waiting for pod pod-projected-secrets-e10d1d19-e196-4f56-8df2-9dacdf142857 to disappear +Aug 17 23:07:16.036: INFO: Pod pod-projected-secrets-e10d1d19-e196-4f56-8df2-9dacdf142857 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:07:16.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6043" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":139,"skipped":2819,"failed":0} +SSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:07:16.047: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Create set of events +STEP: get a list of Events with a label in the current namespace +STEP: delete a list of events +Aug 17 23:07:16.098: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +[AfterEach] [sig-instrumentation] Events API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:07:16.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-8053" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":140,"skipped":2825,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:07:16.142: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:07:16.164: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: client-side validation (kubectl create and apply) allows request with known and required properties +Aug 17 23:07:23.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 --namespace=crd-publish-openapi-7460 create -f -' +Aug 17 23:07:25.531: INFO: stderr: "" +Aug 17 23:07:25.531: INFO: stdout: "e2e-test-crd-publish-openapi-1460-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Aug 17 23:07:25.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 --namespace=crd-publish-openapi-7460 delete e2e-test-crd-publish-openapi-1460-crds test-foo' +Aug 17 23:07:25.609: INFO: stderr: "" +Aug 17 23:07:25.609: INFO: stdout: "e2e-test-crd-publish-openapi-1460-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Aug 17 23:07:25.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 --namespace=crd-publish-openapi-7460 apply -f -' +Aug 17 23:07:25.883: INFO: stderr: "" +Aug 17 23:07:25.883: INFO: stdout: "e2e-test-crd-publish-openapi-1460-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Aug 17 23:07:25.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 --namespace=crd-publish-openapi-7460 delete e2e-test-crd-publish-openapi-1460-crds test-foo' +Aug 17 23:07:25.956: INFO: stderr: "" +Aug 17 23:07:25.956: INFO: stdout: "e2e-test-crd-publish-openapi-1460-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: client-side validation (kubectl create and apply) rejects request with value outside defined enum values +Aug 17 23:07:25.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 --namespace=crd-publish-openapi-7460 create -f -' +Aug 17 23:07:27.088: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema +Aug 17 23:07:27.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 --namespace=crd-publish-openapi-7460 create -f -' +Aug 17 23:07:27.320: 
INFO: rc: 1 +Aug 17 23:07:27.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 --namespace=crd-publish-openapi-7460 apply -f -' +Aug 17 23:07:27.569: INFO: rc: 1 +STEP: client-side validation (kubectl create and apply) rejects request without required properties +Aug 17 23:07:27.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 --namespace=crd-publish-openapi-7460 create -f -' +Aug 17 23:07:27.799: INFO: rc: 1 +Aug 17 23:07:27.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 --namespace=crd-publish-openapi-7460 apply -f -' +Aug 17 23:07:28.051: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties +Aug 17 23:07:28.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 explain e2e-test-crd-publish-openapi-1460-crds' +Aug 17 23:07:28.291: INFO: stderr: "" +Aug 17 23:07:28.291: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1460-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively +Aug 17 23:07:28.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 explain e2e-test-crd-publish-openapi-1460-crds.metadata' +Aug 17 23:07:28.534: INFO: stderr: "" +Aug 17 23:07:28.534: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1460-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. 
It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Aug 17 23:07:28.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 explain e2e-test-crd-publish-openapi-1460-crds.spec' +Aug 17 23:07:28.778: INFO: stderr: "" +Aug 17 23:07:28.778: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1460-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Aug 17 23:07:28.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 explain e2e-test-crd-publish-openapi-1460-crds.spec.bars' +Aug 17 23:07:29.045: INFO: stderr: "" +Aug 17 23:07:29.045: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1460-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t\n Whether Bar is feeling great.\n\n name\t -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist +Aug 17 23:07:29.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7460 explain e2e-test-crd-publish-openapi-1460-crds.spec.bars2' +Aug 17 23:07:29.291: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:07:36.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7460" for this suite. 
+ +• [SLOW TEST:20.195 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD with validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":141,"skipped":2860,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:07:36.339: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 23:07:36.806: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 23:07:39.837: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:07:39.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-6012" for this suite. +STEP: Destroying namespace "webhook-6012-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":142,"skipped":2912,"failed":0} +SSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:07:39.999: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create deployment with httpd image +Aug 17 23:07:40.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8067 create -f -' +Aug 17 23:07:41.404: INFO: stderr: "" +Aug 17 23:07:41.404: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image +Aug 17 23:07:41.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8067 diff -f -' +Aug 17 23:07:42.565: INFO: rc: 1 +Aug 17 23:07:42.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8067 delete -f -' +Aug 17 23:07:42.628: INFO: stderr: "" +Aug 17 23:07:42.628: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:07:42.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8067" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":143,"skipped":2921,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:07:42.641: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating service in namespace services-9425 +STEP: creating service affinity-nodeport in namespace services-9425 +STEP: creating replication controller affinity-nodeport in namespace services-9425 +I0817 23:07:42.715633 20 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-9425, replica count: 3 +I0817 23:07:45.767234 20 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 17 23:07:45.782: INFO: Creating new exec pod +Aug 17 23:07:48.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-9425 exec execpod-affinity7d7d9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' +Aug 17 23:07:48.942: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Aug 17 23:07:48.942: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:07:48.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-9425 exec execpod-affinity7d7d9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.105.62.169 80' +Aug 17 23:07:49.090: INFO: stderr: "+ nc -v -t -w 2 10.105.62.169 80\n+ echo hostName\nConnection to 10.105.62.169 80 port [tcp/http] succeeded!\n" +Aug 17 23:07:49.090: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:07:49.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-9425 exec execpod-affinity7d7d9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.131.205 32598' +Aug 17 23:07:49.232: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.131.205 32598\nConnection to 195.17.131.205 32598 port [tcp/*] succeeded!\n" +Aug 17 23:07:49.232: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:07:49.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 
--namespace=services-9425 exec execpod-affinity7d7d9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.65.231 32598' +Aug 17 23:07:49.370: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.65.231 32598\nConnection to 195.17.65.231 32598 port [tcp/*] succeeded!\n" +Aug 17 23:07:49.370: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:07:49.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-9425 exec execpod-affinity7d7d9 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://195.17.131.205:32598/ ; done' +Aug 17 23:07:49.588: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:32598/\n" +Aug 17 23:07:49.589: INFO: stdout: "\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb\naffinity-nodeport-wbhvb" +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: 
INFO: Received response from host: affinity-nodeport-wbhvb +Aug 17 23:07:49.589: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-9425, will wait for the garbage collector to delete the pods +Aug 17 23:07:49.673: INFO: Deleting ReplicationController affinity-nodeport took: 10.384422ms +Aug 17 23:07:49.773: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.8627ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:07:52.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9425" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:9.389 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":144,"skipped":2951,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:07:52.031: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating the pod +Aug 17 23:07:52.072: INFO: PodSpec: initContainers in spec.initContainers +Aug 17 23:08:30.967: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8402a3ab-3123-4134-8754-753be99a2788", GenerateName:"", Namespace:"init-container-7803", SelfLink:"", UID:"d50eaa04-0f3a-4a54-ad98-c7255e3a23d6", ResourceVersion:"40275", Generation:0, CreationTimestamp:time.Date(2022, time.August, 17, 23, 7, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"72481149"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, 
time.August, 17, 23, 7, 52, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004a46090), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.August, 17, 23, 7, 53, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004a460c0), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-z7hw7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0054b3180), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-z7hw7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-z7hw7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-z7hw7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003a78af8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"195.17.65.231", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0036f9a40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003a78b80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003a78ba0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003a78ba8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003a78bac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003271960), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.August, 17, 23, 7, 52, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.August, 17, 23, 7, 52, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.August, 17, 23, 7, 52, 0, time.Local), Reason:"ContainersNotReady", 
Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.August, 17, 23, 7, 52, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"195.17.65.231", PodIP:"192.168.1.160", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.1.160"}}, StartTime:time.Date(2022, time.August, 17, 23, 7, 52, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0036f9b20)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0036f9b90)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://0c3d629da0f4d3c8a0e39dbd9a014f8a84e044bd14739280e0f21921cff300b6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0054b3200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0054b31e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.6", ImageID:"", ContainerID:"", Started:(*bool)(0xc003a78c44)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:08:30.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-7803" for this suite. 
+ +• [SLOW TEST:38.953 seconds] +[sig-node] InitContainer [NodeConformance] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":145,"skipped":2968,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:08:30.986: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating the pod +STEP: waiting for pod running +STEP: creating a file in subpath +Aug 17 23:08:33.042: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3777 PodName:var-expansion-e7abb83b-9821-4f1a-9c97-7507d235e18a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:08:33.042: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:08:33.042: INFO: ExecWithOptions: Clientset creation +Aug 17 23:08:33.042: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/var-expansion-3777/pods/var-expansion-e7abb83b-9821-4f1a-9c97-7507d235e18a/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING)) +STEP: test for file in mounted path +Aug 17 23:08:33.116: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3777 PodName:var-expansion-e7abb83b-9821-4f1a-9c97-7507d235e18a ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:08:33.116: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:08:33.116: INFO: ExecWithOptions: Clientset creation +Aug 17 23:08:33.116: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/var-expansion-3777/pods/var-expansion-e7abb83b-9821-4f1a-9c97-7507d235e18a/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING)) +STEP: updating the annotation value +Aug 17 23:08:33.703: INFO: Successfully updated pod "var-expansion-e7abb83b-9821-4f1a-9c97-7507d235e18a" +STEP: waiting for annotated pod running +STEP: deleting the pod gracefully +Aug 17 23:08:33.707: INFO: Deleting 
pod "var-expansion-e7abb83b-9821-4f1a-9c97-7507d235e18a" in namespace "var-expansion-3777" +Aug 17 23:08:33.715: INFO: Wait up to 5m0s for pod "var-expansion-e7abb83b-9821-4f1a-9c97-7507d235e18a" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:09:07.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-3777" for this suite. + +• [SLOW TEST:36.754 seconds] +[sig-node] Variable Expansion +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should succeed in writing subpaths in container [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":146,"skipped":3016,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:09:07.740: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] Deployment should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:09:07.772: INFO: Creating simple deployment test-new-deployment +Aug 17 23:09:07.793: INFO: deployment "test-new-deployment" doesn't have the required revision set +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the deployment Spec.Replicas was modified +STEP: Patch a scale subresource +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 17 23:09:09.854: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-1849 5233504d-fcd8-45ae-86f1-62d8b44d7a97 40731 3 2022-08-17 23:09:07 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2022-08-17 23:09:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 23:09:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002396ed8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-08-17 23:09:09 +0000 UTC,LastTransitionTime:2022-08-17 23:09:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-5d9fdcc779" has successfully progressed.,LastUpdateTime:2022-08-17 23:09:09 +0000 UTC,LastTransitionTime:2022-08-17 23:09:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Aug 17 23:09:09.868: INFO: New ReplicaSet "test-new-deployment-5d9fdcc779" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-5d9fdcc779 deployment-1849 5bbd0805-4c2a-47af-87f6-445aae6847f6 40736 3 2022-08-17 23:09:07 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 
5233504d-fcd8-45ae-86f1-62d8b44d7a97 0xc002397307 0xc002397308}] [] [{kube-controller-manager Update apps/v1 2022-08-17 23:09:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5233504d-fcd8-45ae-86f1-62d8b44d7a97\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 23:09:09 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002397398 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Aug 17 23:09:09.875: INFO: Pod "test-new-deployment-5d9fdcc779-g7r5s" is not available: +&Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-g7r5s test-new-deployment-5d9fdcc779- deployment-1849 4abc8367-4c3c-4927-aa51-e362c00272b8 40735 0 2022-08-17 23:09:09 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 5bbd0805-4c2a-47af-87f6-445aae6847f6 0xc0033ac247 0xc0033ac248}] [] [{kube-controller-manager Update v1 2022-08-17 23:09:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bbd0805-4c2a-47af-87f6-445aae6847f6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z7d8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7d8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedul
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:09:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 17 23:09:09.875: INFO: Pod "test-new-deployment-5d9fdcc779-z57xd" is available: +&Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-z57xd test-new-deployment-5d9fdcc779- deployment-1849 c720d564-5270-4466-8a97-4163143ea0a6 40725 0 2022-08-17 23:09:07 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 5bbd0805-4c2a-47af-87f6-445aae6847f6 0xc0033ac3b0 0xc0033ac3b1}] [] [{kube-controller-manager Update v1 2022-08-17 23:09:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bbd0805-4c2a-47af-87f6-445aae6847f6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-17 23:09:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.155\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5q8hb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5q8hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initi
alized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:09:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:09:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:09:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:09:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:192.168.1.155,StartTime:2022-08-17 23:09:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-17 23:09:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://ec51c76cad6c4759113aa1f29de65c53950bd7794e83a665e046dae59229256b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.155,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:09:09.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-1849" for this suite. +•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":147,"skipped":3026,"failed":0} +SS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:09:09.899: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod busybox-cd52c1ae-2216-483b-900b-48b6a0ac4868 in namespace container-probe-6621 +Aug 17 23:09:11.962: INFO: Started pod busybox-cd52c1ae-2216-483b-900b-48b6a0ac4868 in namespace container-probe-6621 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 17 23:09:11.966: INFO: Initial restart count of pod busybox-cd52c1ae-2216-483b-900b-48b6a0ac4868 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:13:12.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-6621" for this suite. + +• [SLOW TEST:242.892 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":148,"skipped":3028,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:13:12.794: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should delete a collection of events [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Create set of events +Aug 17 23:13:12.826: INFO: created test-event-1 +Aug 17 23:13:12.831: INFO: created test-event-2 +Aug 17 23:13:12.836: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace +STEP: delete collection of events +Aug 17 23:13:12.842: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity +Aug 17 23:13:12.868: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:13:12.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-3099" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":149,"skipped":3044,"failed":0} +S +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:13:12.883: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Performing setup for networking test in namespace pod-network-test-5065 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Aug 17 23:13:12.907: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Aug 17 23:13:12.941: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:13:14.948: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:13:16.948: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:13:18.947: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:13:20.947: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:13:22.948: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:13:24.949: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:13:26.950: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:13:28.946: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:13:30.947: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:13:32.947: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:13:34.947: INFO: The status of Pod netserver-0 is Running (Ready = true) +Aug 17 23:13:34.959: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Aug 17 23:13:36.987: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Aug 17 23:13:36.987: INFO: Breadth first check of 192.168.2.3 on host 195.17.131.205... 
+Aug 17 23:13:36.991: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.88:9080/dial?request=hostname&protocol=http&host=192.168.2.3&port=8083&tries=1'] Namespace:pod-network-test-5065 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:13:36.991: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:13:36.992: INFO: ExecWithOptions: Clientset creation +Aug 17 23:13:36.992: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-5065/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.1.88%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.3%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:13:37.069: INFO: Waiting for responses: map[] +Aug 17 23:13:37.069: INFO: reached 192.168.2.3 after 0/1 tries +Aug 17 23:13:37.069: INFO: Breadth first check of 192.168.1.186 on host 195.17.65.231... +Aug 17 23:13:37.073: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.88:9080/dial?request=hostname&protocol=http&host=192.168.1.186&port=8083&tries=1'] Namespace:pod-network-test-5065 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:13:37.073: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:13:37.074: INFO: ExecWithOptions: Clientset creation +Aug 17 23:13:37.074: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-5065/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.1.88%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.186%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:13:37.146: INFO: Waiting for responses: map[] +Aug 17 23:13:37.147: INFO: reached 192.168.1.186 after 0/1 tries +Aug 17 23:13:37.147: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:13:37.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-5065" for this suite. + +• [SLOW TEST:24.276 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for intra-pod communication: http [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":150,"skipped":3045,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:13:37.159: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:13:50.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4170" for this suite. + +• [SLOW TEST:13.127 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":346,"completed":151,"skipped":3052,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:13:50.286: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:13:50.322: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: creating the pod +STEP: submitting the pod to kubernetes +Aug 17 23:13:50.352: INFO: The status of Pod pod-logs-websocket-18fb5de7-ede9-4376-b70d-ffd059fcf1ce is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:13:52.358: INFO: The status of Pod pod-logs-websocket-18fb5de7-ede9-4376-b70d-ffd059fcf1ce is Running (Ready = true) +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:13:52.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2931" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":152,"skipped":3062,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:13:52.404: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod pod-subpath-test-configmap-6qb4 +STEP: Creating a pod to test atomic-volume-subpath +Aug 17 23:13:52.452: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6qb4" in namespace "subpath-8818" to be "Succeeded or Failed" +Aug 17 23:13:52.457: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.112793ms +Aug 17 23:13:54.461: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=true. Elapsed: 2.009399735s +Aug 17 23:13:56.468: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=true. Elapsed: 4.015950007s +Aug 17 23:13:58.475: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=true. Elapsed: 6.022885108s +Aug 17 23:14:00.480: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=true. Elapsed: 8.02772403s +Aug 17 23:14:02.487: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=true. Elapsed: 10.035193697s +Aug 17 23:14:04.493: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=true. Elapsed: 12.040655014s +Aug 17 23:14:06.498: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=true. Elapsed: 14.046465383s +Aug 17 23:14:08.505: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=true. Elapsed: 16.053553157s +Aug 17 23:14:10.510: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=true. Elapsed: 18.058234783s +Aug 17 23:14:12.518: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=true. Elapsed: 20.065654994s +Aug 17 23:14:14.523: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Running", Reason="", readiness=false. Elapsed: 22.071028632s +Aug 17 23:14:16.530: INFO: Pod "pod-subpath-test-configmap-6qb4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.07858294s +STEP: Saw pod success +Aug 17 23:14:16.530: INFO: Pod "pod-subpath-test-configmap-6qb4" satisfied condition "Succeeded or Failed" +Aug 17 23:14:16.535: INFO: Trying to get logs from node 195.17.65.231 pod pod-subpath-test-configmap-6qb4 container test-container-subpath-configmap-6qb4: +STEP: delete the pod +Aug 17 23:14:16.569: INFO: Waiting for pod pod-subpath-test-configmap-6qb4 to disappear +Aug 17 23:14:16.572: INFO: Pod pod-subpath-test-configmap-6qb4 no longer exists +STEP: Deleting pod pod-subpath-test-configmap-6qb4 +Aug 17 23:14:16.572: INFO: Deleting pod "pod-subpath-test-configmap-6qb4" in namespace "subpath-8818" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:14:16.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-8818" for this suite. + +• [SLOW TEST:24.187 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":346,"completed":153,"skipped":3094,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:14:16.592: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name projected-configmap-test-volume-map-c299e73b-ff22-4ec6-a13a-db9b0f32ff0c +STEP: Creating a pod to test consume configMaps +Aug 17 23:14:16.640: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-09b5db39-d9d3-41db-9940-729a439272c5" in namespace "projected-5295" to be "Succeeded or Failed" +Aug 17 23:14:16.644: INFO: Pod "pod-projected-configmaps-09b5db39-d9d3-41db-9940-729a439272c5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.907009ms +Aug 17 23:14:18.650: INFO: Pod "pod-projected-configmaps-09b5db39-d9d3-41db-9940-729a439272c5": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.010008121s +Aug 17 23:14:20.660: INFO: Pod "pod-projected-configmaps-09b5db39-d9d3-41db-9940-729a439272c5": Phase="Running", Reason="", readiness=false. Elapsed: 4.019699574s +Aug 17 23:14:22.666: INFO: Pod "pod-projected-configmaps-09b5db39-d9d3-41db-9940-729a439272c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025654866s +STEP: Saw pod success +Aug 17 23:14:22.666: INFO: Pod "pod-projected-configmaps-09b5db39-d9d3-41db-9940-729a439272c5" satisfied condition "Succeeded or Failed" +Aug 17 23:14:22.669: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-configmaps-09b5db39-d9d3-41db-9940-729a439272c5 container agnhost-container: +STEP: delete the pod +Aug 17 23:14:22.700: INFO: Waiting for pod pod-projected-configmaps-09b5db39-d9d3-41db-9940-729a439272c5 to disappear +Aug 17 23:14:22.703: INFO: Pod pod-projected-configmaps-09b5db39-d9d3-41db-9940-729a439272c5 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:14:22.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5295" for this suite. + +• [SLOW TEST:6.121 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":154,"skipped":3151,"failed":0} +SSS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:14:22.713: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod busybox-b56bdbb6-42da-4e17-a500-220e3ecaf95a in namespace container-probe-9784 +Aug 17 23:14:26.763: INFO: Started pod busybox-b56bdbb6-42da-4e17-a500-220e3ecaf95a in namespace container-probe-9784 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 17 23:14:26.767: INFO: Initial restart count of pod busybox-b56bdbb6-42da-4e17-a500-220e3ecaf95a is 0 +Aug 17 23:15:14.927: INFO: Restart count of pod container-probe-9784/busybox-b56bdbb6-42da-4e17-a500-220e3ecaf95a is now 1 
(48.160089392s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:15:14.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-9784" for this suite. + +• [SLOW TEST:52.268 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":155,"skipped":3154,"failed":0} +S +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:15:14.984: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a ForbidConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring no more jobs are scheduled +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:21:01.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-4043" for this suite. 
+ +• [SLOW TEST:346.086 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":156,"skipped":3155,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:21:01.070: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating secret secrets-1822/secret-test-70867ae1-8f06-42a9-aae7-9822e8da984b +STEP: Creating a pod to test consume secrets +Aug 17 23:21:01.122: INFO: Waiting up to 5m0s for pod "pod-configmaps-98605da0-2f94-4639-b839-27c119d53380" in namespace "secrets-1822" to be "Succeeded or Failed" +Aug 17 23:21:01.126: INFO: Pod "pod-configmaps-98605da0-2f94-4639-b839-27c119d53380": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018143ms +Aug 17 23:21:03.132: INFO: Pod "pod-configmaps-98605da0-2f94-4639-b839-27c119d53380": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010119779s +Aug 17 23:21:05.140: INFO: Pod "pod-configmaps-98605da0-2f94-4639-b839-27c119d53380": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018329914s +STEP: Saw pod success +Aug 17 23:21:05.140: INFO: Pod "pod-configmaps-98605da0-2f94-4639-b839-27c119d53380" satisfied condition "Succeeded or Failed" +Aug 17 23:21:05.145: INFO: Trying to get logs from node 195.17.65.231 pod pod-configmaps-98605da0-2f94-4639-b839-27c119d53380 container env-test: +STEP: delete the pod +Aug 17 23:21:05.180: INFO: Waiting for pod pod-configmaps-98605da0-2f94-4639-b839-27c119d53380 to disappear +Aug 17 23:21:05.184: INFO: Pod pod-configmaps-98605da0-2f94-4639-b839-27c119d53380 no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:21:05.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1822" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":157,"skipped":3193,"failed":0} +SSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:21:05.196: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-9148 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a new StatefulSet +Aug 17 23:21:05.244: INFO: Found 0 stateful pods, waiting for 3 +Aug 17 23:21:15.255: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 23:21:15.255: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 23:21:15.255: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 23:21:15.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-9148 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 17 23:21:15.810: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 17 23:21:15.810: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 17 23:21:15.810: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 +Aug 17 23:21:25.857: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Aug 17 23:21:35.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-9148 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 17 23:21:36.025: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 17 23:21:36.025: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 17 23:21:36.025: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +STEP: Rolling back to a previous revision +Aug 17 23:21:56.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 
--namespace=statefulset-9148 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Aug 17 23:21:56.197: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Aug 17 23:21:56.197: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Aug 17 23:21:56.197: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Aug 17 23:22:06.244: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Aug 17 23:22:16.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=statefulset-9148 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Aug 17 23:22:16.403: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Aug 17 23:22:16.403: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Aug 17 23:22:16.403: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 17 23:22:26.430: INFO: Deleting all statefulset in ns statefulset-9148 +Aug 17 23:22:26.434: INFO: Scaling statefulset ss2 to 0 +Aug 17 23:22:36.461: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 17 23:22:36.464: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:22:36.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9148" for this suite. 
+ +• [SLOW TEST:91.303 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":158,"skipped":3197,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:22:36.500: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name s-test-opt-del-92e7638a-a18f-491f-92fe-37062e6c5259 +STEP: Creating secret with name s-test-opt-upd-3ac2b006-de7b-4d2d-a725-49edf7f7135c +STEP: Creating the pod +Aug 17 23:22:36.573: INFO: The status of Pod pod-projected-secrets-c75a8c0b-4215-4b28-997e-547a26cdb3f8 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:22:38.581: INFO: The status of Pod pod-projected-secrets-c75a8c0b-4215-4b28-997e-547a26cdb3f8 is Running (Ready = true) +STEP: Deleting secret s-test-opt-del-92e7638a-a18f-491f-92fe-37062e6c5259 +STEP: Updating secret s-test-opt-upd-3ac2b006-de7b-4d2d-a725-49edf7f7135c +STEP: Creating secret with name s-test-opt-create-1e7389ea-f464-4162-92d7-a3464b5c85e7 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:22:40.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4682" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":159,"skipped":3246,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:22:40.682: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward api env vars +Aug 17 23:22:40.730: INFO: Waiting up to 5m0s for pod "downward-api-d6d00281-db85-4c0a-8c92-33f668823770" in namespace "downward-api-7764" to be "Succeeded or Failed" +Aug 17 23:22:40.734: INFO: Pod "downward-api-d6d00281-db85-4c0a-8c92-33f668823770": Phase="Pending", Reason="", readiness=false. Elapsed: 3.814628ms +Aug 17 23:22:42.738: INFO: Pod "downward-api-d6d00281-db85-4c0a-8c92-33f668823770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008057941s +Aug 17 23:22:44.743: INFO: Pod "downward-api-d6d00281-db85-4c0a-8c92-33f668823770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013452573s +STEP: Saw pod success +Aug 17 23:22:44.744: INFO: Pod "downward-api-d6d00281-db85-4c0a-8c92-33f668823770" satisfied condition "Succeeded or Failed" +Aug 17 23:22:44.747: INFO: Trying to get logs from node 195.17.65.231 pod downward-api-d6d00281-db85-4c0a-8c92-33f668823770 container dapi-container: +STEP: delete the pod +Aug 17 23:22:44.769: INFO: Waiting for pod downward-api-d6d00281-db85-4c0a-8c92-33f668823770 to disappear +Aug 17 23:22:44.772: INFO: Pod downward-api-d6d00281-db85-4c0a-8c92-33f668823770 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:22:44.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7764" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":160,"skipped":3261,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:22:44.783: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-5001 +STEP: changing the ExternalName service to type=ClusterIP +STEP: creating replication controller externalname-service in namespace services-5001 +I0817 23:22:44.852948 20 runners.go:193] Created replication controller with name: externalname-service, namespace: services-5001, replica count: 2 +Aug 17 23:22:47.903: INFO: Creating new exec pod +I0817 23:22:47.903678 20 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 17 23:22:50.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5001 exec execpodwkp27 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 17 23:22:51.073: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 17 23:22:51.073: INFO: stdout: "" +Aug 17 23:22:52.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5001 exec execpodwkp27 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 17 23:22:52.204: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 17 23:22:52.204: INFO: stdout: "" +Aug 17 23:22:53.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5001 exec execpodwkp27 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 17 23:22:53.209: INFO: stderr: "+ + nc -v -techo -w 2 hostName externalname-service\n 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 17 23:22:53.209: INFO: stdout: "" +Aug 17 23:22:54.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5001 exec execpodwkp27 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 17 23:22:54.228: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 17 
23:22:54.228: INFO: stdout: "" +Aug 17 23:22:55.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5001 exec execpodwkp27 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' +Aug 17 23:22:55.204: INFO: stderr: "+ + echonc hostName -v\n -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Aug 17 23:22:55.204: INFO: stdout: "externalname-service-hn56x" +Aug 17 23:22:55.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5001 exec execpodwkp27 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.104.177.158 80' +Aug 17 23:22:55.342: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.104.177.158 80\nConnection to 10.104.177.158 80 port [tcp/http] succeeded!\n" +Aug 17 23:22:55.342: INFO: stdout: "externalname-service-lzdcf" +Aug 17 23:22:55.342: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:22:55.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5001" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:10.623 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to ClusterIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":161,"skipped":3273,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:22:55.407: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename server-version +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should find the server version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Request ServerVersion +STEP: Confirm major version +Aug 17 23:22:55.431: INFO: Major version: 1 +STEP: Confirm minor version +Aug 17 23:22:55.431: INFO: cleanMinorVersion: 23 +Aug 17 23:22:55.431: INFO: Minor version: 23 +[AfterEach] [sig-api-machinery] server version + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:22:55.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "server-version-1043" for this suite. 
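+
+(Sketch of the type flip the [sig-network] Services entry above performs; deployment/service names, image tag and ports here are illustrative. One way to make the transition is a merge patch that sets the type, clears externalName and supplies selector/ports.)
+
+```shell
+kubectl create service externalname externalname-demo --external-name=example.com
+kubectl create deployment externalname-backend --replicas=2 \
+  --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname
+kubectl patch service externalname-demo --type=merge -p '{
+  "spec": {
+    "type": "ClusterIP",
+    "externalName": null,
+    "selector": {"app": "externalname-backend"},
+    "ports": [{"port": 80, "targetPort": 9376}]
+  }
+}'
+# Same reachability check the test loops on, from an exec pod:
+kubectl run execpod --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --restart=Never -- pause
+kubectl exec execpod -- sh -c 'echo hostName | nc -v -t -w 2 externalname-demo 80'
+```
+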
+•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":162,"skipped":3284,"failed":0} +SSSS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:22:55.444: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: referencing a single matching pod +STEP: referencing matching pods with named port +STEP: creating empty Endpoints and EndpointSlices for no matching Pods +STEP: recreating EndpointSlices after they've been deleted +Aug 17 23:23:15.662: INFO: EndpointSlice for Service endpointslice-2606/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:23:25.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-2606" for this suite. 
+ +• [SLOW TEST:30.247 seconds] +[sig-network] EndpointSlice +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":163,"skipped":3288,"failed":0} +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:23:25.693: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Aug 17 23:23:25.726: INFO: Waiting up to 5m0s for pod "pod-313c5c55-e070-4153-a561-820ad27663dc" in namespace "emptydir-2026" to be "Succeeded or Failed" +Aug 17 23:23:25.733: INFO: Pod "pod-313c5c55-e070-4153-a561-820ad27663dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764096ms +Aug 17 23:23:27.741: INFO: Pod "pod-313c5c55-e070-4153-a561-820ad27663dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014024558s +Aug 17 23:23:29.748: INFO: Pod "pod-313c5c55-e070-4153-a561-820ad27663dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021603352s +STEP: Saw pod success +Aug 17 23:23:29.748: INFO: Pod "pod-313c5c55-e070-4153-a561-820ad27663dc" satisfied condition "Succeeded or Failed" +Aug 17 23:23:29.751: INFO: Trying to get logs from node 195.17.65.231 pod pod-313c5c55-e070-4153-a561-820ad27663dc container test-container: +STEP: delete the pod +Aug 17 23:23:29.781: INFO: Waiting for pod pod-313c5c55-e070-4153-a561-820ad27663dc to disappear +Aug 17 23:23:29.785: INFO: Pod pod-313c5c55-e070-4153-a561-820ad27663dc no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:23:29.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2026" for this suite. 
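+
+(Standalone sketch of the (non-root,0644,tmpfs) variant above: medium: Memory makes the emptyDir tmpfs-backed, and the pod writes a 0644 file as a non-root UID. Image tag and UID are assumptions.)
+
+```shell
+kubectl create -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-demo
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1001              # non-root writer
+  containers:
+  - name: test-container
+    image: busybox:1.36
+    command: ["sh", "-c", "echo content > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && mount | grep ' /mnt '"]
+    volumeMounts:
+    - name: scratch
+      mountPath: /mnt
+  volumes:
+  - name: scratch
+    emptyDir:
+      medium: Memory             # tmpfs-backed
+EOF
+kubectl logs emptydir-demo       # expect -rw-r--r-- and a tmpfs mount line
+```
+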
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":164,"skipped":3290,"failed":0} + +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:23:29.800: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename certificates +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support CSR API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: getting /apis +STEP: getting /apis/certificates.k8s.io +STEP: getting /apis/certificates.k8s.io/v1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Aug 17 23:23:30.854: INFO: starting watch +STEP: patching +STEP: updating +Aug 17 23:23:30.871: INFO: waiting for watch events with expected annotations +Aug 17 23:23:30.871: INFO: saw patched and updated annotations +STEP: getting /approval +STEP: patching /approval +STEP: updating /approval +STEP: getting /status +STEP: patching /status +STEP: updating /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:23:30.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "certificates-1237" for this suite. +•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":165,"skipped":3290,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:23:30.953: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a service. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Service +STEP: Creating a NodePort Service +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota +STEP: Ensuring resource quota status captures service creation +STEP: Deleting Services +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:23:42.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-6305" for this suite. + +• [SLOW TEST:11.226 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a service. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":346,"completed":166,"skipped":3300,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:23:42.180: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1539 +[It] should create a pod from an image when restart is Never [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Aug 17 23:23:42.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-2515 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2' +Aug 17 23:23:42.285: INFO: stderr: "" +Aug 17 23:23:42.285: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created +[AfterEach] Kubectl run pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543 +Aug 17 23:23:42.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-2515 delete pods e2e-test-httpd-pod' +Aug 17 23:23:45.095: INFO: stderr: "" +Aug 17 23:23:45.096: INFO: stdout: "pod 
\"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:23:45.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-2515" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":346,"completed":167,"skipped":3316,"failed":0} +SSSSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:23:45.110: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename limitrange +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a LimitRange +STEP: Setting up watch +STEP: Submitting a LimitRange +Aug 17 23:23:45.135: INFO: observed the limitRanges list +STEP: Verifying LimitRange creation was observed +STEP: Fetching the LimitRange to ensure it has proper values +Aug 17 23:23:45.149: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Aug 17 23:23:45.149: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements +STEP: Ensuring Pod has resource requirements applied from LimitRange +Aug 17 23:23:45.163: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Aug 17 23:23:45.164: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements +STEP: Ensuring Pod has merged resource requirements applied from LimitRange +Aug 17 23:23:45.174: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Aug 17 23:23:45.174: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources +STEP: Failing to create a Pod with more than max resources +STEP: Updating a LimitRange +STEP: Verifying LimitRange updating is effective +STEP: Creating a Pod with less than former min resources +STEP: Failing to create a Pod with more than max resources +STEP: Deleting a LimitRange +STEP: Verifying the LimitRange was deleted +Aug 17 23:23:52.235: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources +[AfterEach] [sig-scheduling] LimitRange + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:23:52.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "limitrange-2445" for this suite. + +• [SLOW TEST:7.155 seconds] +[sig-scheduling] LimitRange +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":346,"completed":168,"skipped":3325,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:23:52.265: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Aug 17 23:23:52.305: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4040 f1945ab1-9c91-4d20-b5c2-d7469607a042 50873 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 17 23:23:52.305: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4040 f1945ab1-9c91-4d20-b5c2-d7469607a042 50873 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Aug 17 23:23:52.315: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4040 f1945ab1-9c91-4d20-b5c2-d7469607a042 50876 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 17 23:23:52.315: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4040 f1945ab1-9c91-4d20-b5c2-d7469607a042 50876 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Aug 17 23:23:52.323: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4040 f1945ab1-9c91-4d20-b5c2-d7469607a042 50878 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 
2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 17 23:23:52.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4040 f1945ab1-9c91-4d20-b5c2-d7469607a042 50878 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Aug 17 23:23:52.337: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4040 f1945ab1-9c91-4d20-b5c2-d7469607a042 50881 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 17 23:23:52.337: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4040 f1945ab1-9c91-4d20-b5c2-d7469607a042 50881 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Aug 17 23:23:52.347: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4040 c8a25a4e-badd-45e6-9845-2ae623353afc 50883 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 17 23:23:52.348: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4040 c8a25a4e-badd-45e6-9845-2ae623353afc 50883 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Aug 17 23:24:02.363: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4040 c8a25a4e-badd-45e6-9845-2ae623353afc 50991 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 17 23:24:02.363: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4040 c8a25a4e-badd-45e6-9845-2ae623353afc 50991 0 2022-08-17 23:23:52 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-08-17 23:23:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:24:12.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-4040" for this suite. + +• [SLOW TEST:20.114 seconds] +[sig-api-machinery] Watchers +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":169,"skipped":3332,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:24:12.382: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name configmap-test-volume-936706dc-741c-4d4c-8db8-5ee7d4dbf1e7 +STEP: Creating a pod to test consume configMaps +Aug 17 23:24:12.430: INFO: Waiting up to 5m0s for pod "pod-configmaps-2c2a0208-a99c-414d-bce4-17724604f308" in namespace "configmap-1974" to be "Succeeded or Failed" +Aug 17 23:24:12.435: INFO: Pod "pod-configmaps-2c2a0208-a99c-414d-bce4-17724604f308": Phase="Pending", Reason="", readiness=false. Elapsed: 5.304433ms +Aug 17 23:24:14.442: INFO: Pod "pod-configmaps-2c2a0208-a99c-414d-bce4-17724604f308": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011691108s +Aug 17 23:24:16.449: INFO: Pod "pod-configmaps-2c2a0208-a99c-414d-bce4-17724604f308": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019266226s +STEP: Saw pod success +Aug 17 23:24:16.450: INFO: Pod "pod-configmaps-2c2a0208-a99c-414d-bce4-17724604f308" satisfied condition "Succeeded or Failed" +Aug 17 23:24:16.452: INFO: Trying to get logs from node 195.17.65.231 pod pod-configmaps-2c2a0208-a99c-414d-bce4-17724604f308 container configmap-volume-test: +STEP: delete the pod +Aug 17 23:24:16.474: INFO: Waiting for pod pod-configmaps-2c2a0208-a99c-414d-bce4-17724604f308 to disappear +Aug 17 23:24:16.477: INFO: Pod pod-configmaps-2c2a0208-a99c-414d-bce4-17724604f308 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:24:16.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1974" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":170,"skipped":3340,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:24:16.490: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name configmap-test-upd-1603523a-5217-4b4a-8eb0-8b6381ef7e5f +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:24:18.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9743" for this suite. 
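+
+(Sketch of the data/binaryData split the ConfigMap test above covers. kubectl puts a --from-file key under binaryData automatically when the content is not valid UTF-8; names here are illustrative.)
+
+```shell
+printf '\x01\x02\x03' > payload.bin
+kubectl create configmap cm-binary --from-literal=text=hello --from-file=payload.bin
+kubectl get configmap cm-binary -o yaml    # payload.bin shows up under binaryData
+kubectl create -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cm-binary-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: c
+    image: busybox:1.36
+    command: ["sh", "-c", "cat /etc/cm/text && od -c /etc/cm/payload.bin"]
+    volumeMounts:
+    - name: cm
+      mountPath: /etc/cm
+  volumes:
+  - name: cm
+    configMap:
+      name: cm-binary
+EOF
+```
+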
+•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":171,"skipped":3357,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:24:18.579: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod liveness-5f0dfab9-520b-4e89-a8f4-5bf3272a08f7 in namespace container-probe-531 +Aug 17 23:24:20.626: INFO: Started pod liveness-5f0dfab9-520b-4e89-a8f4-5bf3272a08f7 in namespace container-probe-531 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 17 23:24:20.629: INFO: Initial restart count of pod liveness-5f0dfab9-520b-4e89-a8f4-5bf3272a08f7 is 0 +Aug 17 23:24:40.707: INFO: Restart count of pod container-probe-531/liveness-5f0dfab9-520b-4e89-a8f4-5bf3272a08f7 is now 1 (20.07840448s elapsed) +Aug 17 23:25:00.786: INFO: Restart count of pod container-probe-531/liveness-5f0dfab9-520b-4e89-a8f4-5bf3272a08f7 is now 2 (40.157022634s elapsed) +Aug 17 23:25:20.858: INFO: Restart count of pod container-probe-531/liveness-5f0dfab9-520b-4e89-a8f4-5bf3272a08f7 is now 3 (1m0.229271874s elapsed) +Aug 17 23:25:40.926: INFO: Restart count of pod container-probe-531/liveness-5f0dfab9-520b-4e89-a8f4-5bf3272a08f7 is now 4 (1m20.297467503s elapsed) +Aug 17 23:26:41.148: INFO: Restart count of pod container-probe-531/liveness-5f0dfab9-520b-4e89-a8f4-5bf3272a08f7 is now 5 (2m20.519013134s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:26:41.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-531" for this suite. 
+ +• [SLOW TEST:142.621 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":172,"skipped":3370,"failed":0} +SS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:26:41.202: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:26:41.246: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. 
+Aug 17 23:26:41.258: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:41.258: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:41.265: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 17 23:26:41.265: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 23:26:42.273: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:42.273: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:42.280: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 17 23:26:42.280: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 23:26:43.274: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:43.274: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:43.277: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 17 23:26:43.277: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Aug 17 23:26:43.313: INFO: Wrong image for pod: daemon-set-4gjhs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 17 23:26:43.313: INFO: Wrong image for pod: daemon-set-srv5m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 17 23:26:43.317: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:43.317: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:44.324: INFO: Wrong image for pod: daemon-set-4gjhs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 17 23:26:44.329: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:44.329: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:45.335: INFO: Wrong image for pod: daemon-set-4gjhs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. 
+Aug 17 23:26:45.344: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:45.344: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:46.323: INFO: Wrong image for pod: daemon-set-4gjhs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.33, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. +Aug 17 23:26:46.323: INFO: Pod daemon-set-sw7td is not available +Aug 17 23:26:46.327: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:46.327: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:47.323: INFO: Pod daemon-set-hzbmq is not available +Aug 17 23:26:47.327: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:47.327: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +STEP: Check that daemon pods are still running on every node of the cluster. +Aug 17 23:26:47.331: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:47.331: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:47.335: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 17 23:26:47.335: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 23:26:48.344: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:48.344: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:48.349: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 17 23:26:48.349: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 17 23:26:49.343: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:49.343: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 17 23:26:49.347: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 17 23:26:49.347: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7001, will wait for the garbage collector to delete the 
pods +Aug 17 23:26:49.426: INFO: Deleting DaemonSet.extensions daemon-set took: 9.158666ms +Aug 17 23:26:49.527: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.065936ms +Aug 17 23:26:52.034: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 17 23:26:52.034: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 17 23:26:52.037: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"52922"},"items":null} + +Aug 17 23:26:52.040: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"52922"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:26:52.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-7001" for this suite. + +• [SLOW TEST:10.861 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":173,"skipped":3372,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:26:52.064: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should include custom resource definition resources in discovery documents [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: fetching the /apis discovery document +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document +STEP: fetching the /apis/apiextensions.k8s.io discovery document +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:26:52.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace 
"custom-resource-definition-9835" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":174,"skipped":3374,"failed":0} +SSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:26:52.107: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 23:26:52.413: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 23:26:55.463: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Registering the webhook via the AdmissionRegistration API +STEP: create a pod +STEP: 'kubectl attach' the pod, should be denied by the webhook +Aug 17 23:26:57.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=webhook-1786 attach --namespace=webhook-1786 to-be-attached-pod -i -c=container1' +Aug 17 23:26:57.591: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:26:57.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1786" for this suite. +STEP: Destroying namespace "webhook-1786-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:5.567 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny attaching pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":175,"skipped":3377,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:26:57.673: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a watch on configmaps +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: closing the watch once it receives two notifications +Aug 17 23:26:57.729: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3408 6e189e42-fb76-4e06-8cc4-dac8d3a2c9af 53093 0 2022-08-17 23:26:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-08-17 23:26:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 17 23:26:57.729: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3408 6e189e42-fb76-4e06-8cc4-dac8d3a2c9af 53098 0 2022-08-17 23:26:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-08-17 23:26:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed +STEP: creating a new watch on configmaps from the last resource version observed by the first watch +STEP: deleting the configmap +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed +Aug 17 23:26:57.747: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3408 6e189e42-fb76-4e06-8cc4-dac8d3a2c9af 53099 0 2022-08-17 23:26:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-08-17 23:26:57 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 17 23:26:57.747: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3408 6e189e42-fb76-4e06-8cc4-dac8d3a2c9af 53100 0 2022-08-17 23:26:57 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-08-17 23:26:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:26:57.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-3408" for this suite. +•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":176,"skipped":3382,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:26:57.759: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support CronJob API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a cronjob +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Aug 17 23:26:57.801: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Aug 17 23:26:57.806: INFO: starting watch +STEP: patching +STEP: updating +Aug 17 23:26:57.823: INFO: waiting for watch events with expected annotations +Aug 17 23:26:57.823: INFO: saw patched and updated annotations +STEP: patching /status +STEP: updating /status +STEP: get /status +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:26:57.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-5165" for this suite. 
+•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":177,"skipped":3415,"failed":0} + +------------------------------ +[sig-network] IngressClass API + should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:26:57.886: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename ingressclass +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:186 +[It] should support creating IngressClass API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: getting /apis +STEP: getting /apis/networking.k8s.io +STEP: getting /apis/networking.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Aug 17 23:26:57.946: INFO: starting watch +STEP: patching +STEP: updating +Aug 17 23:26:57.958: INFO: waiting for watch events with expected annotations +Aug 17 23:26:57.958: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] IngressClass API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:26:57.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "ingressclass-9259" for this suite. 
+•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":346,"completed":178,"skipped":3415,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:26:58.010: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should support creating EndpointSlice API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: getting /apis +STEP: getting /apis/discovery.k8s.io +STEP: getting /apis/discovery.k8s.iov1 +STEP: creating +STEP: getting +STEP: listing +STEP: watching +Aug 17 23:26:58.064: INFO: starting watch +STEP: cluster-wide listing +STEP: cluster-wide watching +Aug 17 23:26:58.070: INFO: starting watch +STEP: patching +STEP: updating +Aug 17 23:26:58.086: INFO: waiting for watch events with expected annotations +Aug 17 23:26:58.086: INFO: saw patched and updated annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:26:58.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-726" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":179,"skipped":3441,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:26:58.128: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Setting up the test +STEP: Creating hostNetwork=false pod +Aug 17 23:26:58.170: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:27:00.175: INFO: The status of Pod test-pod is Running (Ready = true) +STEP: Creating hostNetwork=true pod +Aug 17 23:27:00.197: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:27:02.205: INFO: The status of Pod test-host-network-pod is Running (Ready = true) +STEP: Running the test +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false +Aug 17 23:27:02.209: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4308 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:02.210: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:27:02.210: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:02.210: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-4308/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:02.276: INFO: Exec stderr: "" +Aug 17 23:27:02.276: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4308 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:02.276: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:27:02.277: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:02.277: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-4308/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:02.346: INFO: Exec stderr: "" +Aug 17 23:27:02.346: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4308 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:02.346: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:27:02.347: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:02.347: INFO: 
ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-4308/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:02.409: INFO: Exec stderr: "" +Aug 17 23:27:02.409: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4308 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:02.409: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:27:02.410: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:02.410: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-4308/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:02.490: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount +Aug 17 23:27:02.490: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4308 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:02.490: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:27:02.491: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:02.491: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-4308/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:02.559: INFO: Exec stderr: "" +Aug 17 23:27:02.559: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4308 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:02.559: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:27:02.560: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:02.560: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-4308/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:02.633: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true +Aug 17 23:27:02.633: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4308 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:02.633: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:27:02.634: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:02.634: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-4308/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:02.706: INFO: Exec stderr: "" +Aug 17 23:27:02.706: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4308 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:02.706: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 
23:27:02.707: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:02.707: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-4308/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:02.786: INFO: Exec stderr: "" +Aug 17 23:27:02.786: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4308 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:02.786: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:27:02.787: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:02.787: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-4308/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:02.854: INFO: Exec stderr: "" +Aug 17 23:27:02.854: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4308 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:02.854: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:27:02.855: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:02.855: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-4308/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:02.934: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:27:02.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-kubelet-etc-hosts-4308" for this suite. 
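+
+The three groups of exec checks above assert that the kubelet manages /etc/hosts for ordinary pods, leaves it alone when a container mounts its own /etc/hosts, and leaves it alone under hostNetwork: true. The managed case is easy to spot by hand (pod name hypothetical):
+
+```shell
+kubectl run etc-hosts-demo --image=busybox:1.35 --restart=Never -- sleep 3600
+kubectl wait --for=condition=Ready pod/etc-hosts-demo
+kubectl exec etc-hosts-demo -- head -n 1 /etc/hosts   # "# Kubernetes-managed hosts file."
+kubectl delete pod etc-hosts-demo --now
+```
+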
+•{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":180,"skipped":3476,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:27:02.968: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating Pod +STEP: Reading file content from the nginx-container +Aug 17 23:27:05.034: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-750 PodName:pod-sharedvolume-5646c470-ae40-4a90-b938-e39f233aec98 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:27:05.034: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:27:05.035: INFO: ExecWithOptions: Clientset creation +Aug 17 23:27:05.035: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/emptydir-750/pods/pod-sharedvolume-5646c470-ae40-4a90-b938-e39f233aec98/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:27:05.102: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:27:05.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-750" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":181,"skipped":3523,"failed":0} +SSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:27:05.118: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Aug 17 23:27:05.166: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 17 23:28:05.214: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:28:05.218: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sched-preemption-path +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 +STEP: Finding an available node +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +Aug 17 23:28:07.291: INFO: found a healthy node: 195.17.65.231 +[It] runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:28:17.406: INFO: pods created so far: [1 1 1] +Aug 17 23:28:17.406: INFO: length of pods created so far: 3 +Aug 17 23:28:21.420: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:28:28.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-path-2699" for this suite. +[AfterEach] PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:28:28.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-4337" for this suite. 
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:83.398 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + PreemptionExecutionPath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 + runs ReplicaSets to verify preemption running path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":182,"skipped":3527,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:28:28.518: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating Agnhost RC +Aug 17 23:28:28.544: INFO: namespace kubectl-6424 +Aug 17 23:28:28.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6424 create -f -' +Aug 17 23:28:28.807: INFO: stderr: "" +Aug 17 23:28:28.807: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Aug 17 23:28:29.813: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 17 23:28:29.813: INFO: Found 0 / 1 +Aug 17 23:28:30.816: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 17 23:28:30.816: INFO: Found 1 / 1 +Aug 17 23:28:30.817: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Aug 17 23:28:30.823: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 17 23:28:30.823: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Aug 17 23:28:30.823: INFO: wait on agnhost-primary startup in kubectl-6424 +Aug 17 23:28:30.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6424 logs agnhost-primary-ckcb7 agnhost-primary' +Aug 17 23:28:30.907: INFO: stderr: "" +Aug 17 23:28:30.907: INFO: stdout: "Paused\n" +STEP: exposing RC +Aug 17 23:28:30.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6424 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Aug 17 23:28:30.998: INFO: stderr: "" +Aug 17 23:28:30.998: INFO: stdout: "service/rm2 exposed\n" +Aug 17 23:28:31.002: INFO: Service rm2 in namespace kubectl-6424 found. +STEP: exposing service +Aug 17 23:28:33.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6424 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Aug 17 23:28:33.096: INFO: stderr: "" +Aug 17 23:28:33.096: INFO: stdout: "service/rm3 exposed\n" +Aug 17 23:28:33.102: INFO: Service rm3 in namespace kubectl-6424 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:28:35.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6424" for this suite. + +• [SLOW TEST:6.604 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl expose + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1248 + should create services for rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":346,"completed":183,"skipped":3568,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:28:35.122: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5043.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5043.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5043.svc.cluster.local)" && echo OK > 
/results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5043.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 17 23:28:37.205: INFO: DNS probes using dns-5043/dns-test-c345c2d8-4625-4e3d-b035-204665880a51 succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:28:37.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5043" for this suite. +•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":184,"skipped":3589,"failed":0} +SSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:28:37.250: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sysctl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl +STEP: Watching for error events or started pod +STEP: Waiting for pod completion +STEP: Checking that the pod succeeded +STEP: Getting logs from the pod +STEP: Checking that the sysctl is actually updated +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:28:41.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-3177" for this suite. 
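+
+kernel.shm_rmid_forced is one of the "safe" sysctls the kubelet allows by default, which is why the record above needs no extra node configuration. A by-hand version (pod name hypothetical):
+
+```shell
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sysctl-demo          # hypothetical
+spec:
+  restartPolicy: Never
+  securityContext:
+    sysctls:
+    - name: kernel.shm_rmid_forced
+      value: "1"
+  containers:
+  - name: main
+    image: busybox:1.35
+    command: ["sh", "-c", "sysctl kernel.shm_rmid_forced"]
+EOF
+kubectl logs sysctl-demo     # once it completes: kernel.shm_rmid_forced = 1
+```
+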
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":185,"skipped":3595,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:28:41.332: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename podtemplate +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should run the lifecycle of PodTemplates [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-node] PodTemplates + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:28:41.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "podtemplate-2650" for this suite. +•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":186,"skipped":3607,"failed":0} +SSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:28:41.408: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should adopt matching pods on creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Given a Pod with a 'name' label pod-adoption is created +Aug 17 23:28:41.446: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:28:43.452: INFO: The status of Pod pod-adoption is Running (Ready = true) +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:28:44.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-5001" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":187,"skipped":3611,"failed":0} +S +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:28:44.492: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:28:44.525: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a8960a52-c206-4857-aaaf-cec8d3aaca3d" in namespace "security-context-test-8099" to be "Succeeded or Failed" +Aug 17 23:28:44.529: INFO: Pod "busybox-privileged-false-a8960a52-c206-4857-aaaf-cec8d3aaca3d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.464369ms +Aug 17 23:28:46.533: INFO: Pod "busybox-privileged-false-a8960a52-c206-4857-aaaf-cec8d3aaca3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00769779s +Aug 17 23:28:48.538: INFO: Pod "busybox-privileged-false-a8960a52-c206-4857-aaaf-cec8d3aaca3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012547082s +Aug 17 23:28:48.538: INFO: Pod "busybox-privileged-false-a8960a52-c206-4857-aaaf-cec8d3aaca3d" satisfied condition "Succeeded or Failed" +Aug 17 23:28:48.545: INFO: Got logs for pod "busybox-privileged-false-a8960a52-c206-4857-aaaf-cec8d3aaca3d": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:28:48.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-8099" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":188,"skipped":3612,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:28:48.556: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name configmap-test-volume-fcfac87f-8c0a-4957-866d-ea6e850a57c7 +STEP: Creating a pod to test consume configMaps +Aug 17 23:28:48.599: INFO: Waiting up to 5m0s for pod "pod-configmaps-333e9ecc-554c-45eb-9ee6-cc240de2b14a" in namespace "configmap-4556" to be "Succeeded or Failed" +Aug 17 23:28:48.604: INFO: Pod "pod-configmaps-333e9ecc-554c-45eb-9ee6-cc240de2b14a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.425101ms +Aug 17 23:28:50.609: INFO: Pod "pod-configmaps-333e9ecc-554c-45eb-9ee6-cc240de2b14a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0095672s +Aug 17 23:28:52.615: INFO: Pod "pod-configmaps-333e9ecc-554c-45eb-9ee6-cc240de2b14a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015605406s +STEP: Saw pod success +Aug 17 23:28:52.615: INFO: Pod "pod-configmaps-333e9ecc-554c-45eb-9ee6-cc240de2b14a" satisfied condition "Succeeded or Failed" +Aug 17 23:28:52.619: INFO: Trying to get logs from node 195.17.65.231 pod pod-configmaps-333e9ecc-554c-45eb-9ee6-cc240de2b14a container agnhost-container: +STEP: delete the pod +Aug 17 23:28:52.641: INFO: Waiting for pod pod-configmaps-333e9ecc-554c-45eb-9ee6-cc240de2b14a to disappear +Aug 17 23:28:52.645: INFO: Pod pod-configmaps-333e9ecc-554c-45eb-9ee6-cc240de2b14a no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:28:52.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-4556" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":189,"skipped":3633,"failed":0} +S +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:28:52.658: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support configurable pod DNS nameservers [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... +Aug 17 23:28:52.695: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-9569 3c93df56-a5f7-4dec-aea6-ac5aef68007e 54883 0 2022-08-17 23:28:52 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2022-08-17 23:28:52 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bww97,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bww97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,
},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 17 23:28:52.700: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:28:54.706: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) +STEP: Verifying customized DNS suffix list is configured on pod... +Aug 17 23:28:54.706: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9569 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:28:54.706: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:28:54.707: INFO: ExecWithOptions: Clientset creation +Aug 17 23:28:54.707: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/dns-9569/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +STEP: Verifying customized DNS server is configured on pod... +Aug 17 23:28:54.790: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9569 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:28:54.790: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:28:54.791: INFO: ExecWithOptions: Clientset creation +Aug 17 23:28:54.791: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/dns-9569/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:28:54.877: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:28:54.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-9569" for this suite. 
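+
+dnsPolicy: None tells the kubelet to build the pod's resolv.conf purely from dnsConfig, which is what the dns-suffix and dns-server-list probes above verify. A by-hand version using the same values the test sets (pod name hypothetical):
+
+```shell
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dns-config-demo      # hypothetical
+spec:
+  restartPolicy: Never
+  dnsPolicy: None
+  dnsConfig:
+    nameservers: ["1.1.1.1"]
+    searches: ["resolv.conf.local"]
+  containers:
+  - name: main
+    image: busybox:1.35
+    command: ["sh", "-c", "cat /etc/resolv.conf"]
+EOF
+kubectl logs dns-config-demo   # once it completes: nameserver 1.1.1.1 / search resolv.conf.local
+```
+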
+•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":190,"skipped":3634,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:28:54.930: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:28:54.968: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-bd9205da-d21d-4ec8-bbf6-824e70917642" in namespace "security-context-test-3846" to be "Succeeded or Failed" +Aug 17 23:28:54.972: INFO: Pod "alpine-nnp-false-bd9205da-d21d-4ec8-bbf6-824e70917642": Phase="Pending", Reason="", readiness=false. Elapsed: 3.304627ms +Aug 17 23:28:56.977: INFO: Pod "alpine-nnp-false-bd9205da-d21d-4ec8-bbf6-824e70917642": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009130973s +Aug 17 23:28:58.983: INFO: Pod "alpine-nnp-false-bd9205da-d21d-4ec8-bbf6-824e70917642": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014865857s +Aug 17 23:29:00.989: INFO: Pod "alpine-nnp-false-bd9205da-d21d-4ec8-bbf6-824e70917642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020747748s +Aug 17 23:29:00.989: INFO: Pod "alpine-nnp-false-bd9205da-d21d-4ec8-bbf6-824e70917642" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:29:00.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-3846" for this suite. 
+ +• [SLOW TEST:6.078 seconds] +[sig-node] Security Context +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when creating containers with AllowPrivilegeEscalation + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":191,"skipped":3642,"failed":0} +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:29:01.009: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:29:01.040: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Aug 17 23:29:08.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-4342 --namespace=crd-publish-openapi-4342 create -f -' +Aug 17 23:29:10.199: INFO: stderr: "" +Aug 17 23:29:10.199: INFO: stdout: "e2e-test-crd-publish-openapi-1401-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Aug 17 23:29:10.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-4342 --namespace=crd-publish-openapi-4342 delete e2e-test-crd-publish-openapi-1401-crds test-cr' +Aug 17 23:29:10.270: INFO: stderr: "" +Aug 17 23:29:10.270: INFO: stdout: "e2e-test-crd-publish-openapi-1401-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Aug 17 23:29:10.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-4342 --namespace=crd-publish-openapi-4342 apply -f -' +Aug 17 23:29:11.604: INFO: stderr: "" +Aug 17 23:29:11.604: INFO: stdout: "e2e-test-crd-publish-openapi-1401-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Aug 17 23:29:11.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-4342 --namespace=crd-publish-openapi-4342 delete e2e-test-crd-publish-openapi-1401-crds test-cr' +Aug 17 23:29:11.682: INFO: stderr: "" +Aug 17 23:29:11.682: INFO: stdout: 
"e2e-test-crd-publish-openapi-1401-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Aug 17 23:29:11.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-4342 explain e2e-test-crd-publish-openapi-1401-crds' +Aug 17 23:29:11.936: INFO: stderr: "" +Aug 17 23:29:11.936: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1401-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:29:18.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-4342" for this suite. 
+ +• [SLOW TEST:17.853 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":192,"skipped":3642,"failed":0} +SSSSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:29:18.863: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should test the lifecycle of a ReplicationController [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a ReplicationController +STEP: waiting for RC to be added +STEP: waiting for available Replicas +STEP: patching ReplicationController +STEP: waiting for RC to be modified +STEP: patching ReplicationController status +STEP: waiting for RC to be modified +STEP: waiting for available Replicas +STEP: fetching ReplicationController status +STEP: patching ReplicationController scale +STEP: waiting for RC to be modified +STEP: waiting for ReplicationController's scale to be the max amount +STEP: fetching ReplicationController; ensuring that it's patched +STEP: updating ReplicationController status +STEP: waiting for RC to be modified +STEP: listing all ReplicationControllers +STEP: checking that ReplicationController has expected values +STEP: deleting ReplicationControllers by collection +STEP: waiting for ReplicationController to have a DELETED watchEvent +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:29:21.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-9714" for this suite. 
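+
+The ReplicationController lifecycle record covers create, watch, patch (object, /status, and scale), list, and deletion by collection. A short hand replay of the patch-and-scale portion (names hypothetical):
+
+```shell
+kubectl create -f - <<'EOF'
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: rc-demo              # hypothetical
+spec:
+  replicas: 1
+  selector:
+    app: rc-demo
+  template:
+    metadata:
+      labels:
+        app: rc-demo
+    spec:
+      containers:
+      - name: main
+        image: nginx:1.21
+EOF
+kubectl patch rc rc-demo --type merge -p '{"spec":{"replicas":2}}'   # scale via patch
+kubectl get rc rc-demo -o jsonpath='{.status.replicas}'              # read /status
+kubectl delete rc rc-demo
+```
+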
+•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":193,"skipped":3647,"failed":0} + +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:29:21.526: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 17 23:29:22.336: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 17 23:29:25.367: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API +STEP: create a namespace for the webhook +STEP: create a configmap should be unconditionally rejected by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:29:25.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7589" for this suite. +STEP: Destroying namespace "webhook-7589-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":194,"skipped":3647,"failed":0} +SS +------------------------------ +[sig-network] Services + should delete a collection of services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:29:25.490: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should delete a collection of services [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a collection of services +Aug 17 23:29:25.522: INFO: Creating e2e-svc-a-xw6dr +Aug 17 23:29:25.544: INFO: Creating e2e-svc-b-m2dkg +Aug 17 23:29:25.562: INFO: Creating e2e-svc-c-cmbx7 +STEP: deleting service collection +Aug 17 23:29:25.629: INFO: Collection of services has been deleted +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:29:25.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9604" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":346,"completed":195,"skipped":3649,"failed":0} +SSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:29:25.646: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not conflict [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:29:25.698: INFO: The status of Pod pod-secrets-1013f63c-a245-4e84-819c-f9427eb17fe6 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:29:27.703: INFO: The status of Pod pod-secrets-1013f63c-a245-4e84-819c-f9427eb17fe6 is Running (Ready = true) +STEP: Cleaning up the secret +STEP: Cleaning up the configmap +STEP: Cleaning up the pod +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:29:27.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-5858" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":196,"skipped":3654,"failed":0} + +------------------------------ +[sig-instrumentation] Events + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:29:27.780: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a test event +STEP: listing all events in all namespaces +STEP: patching the test event +STEP: fetching the test event +STEP: deleting the test event +STEP: listing all events in all namespaces +[AfterEach] [sig-instrumentation] Events + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:29:27.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-3135" for this suite. 
+•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":197,"skipped":3654,"failed":0} +SSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:29:27.881: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Aug 17 23:29:28.237: INFO: Pod name wrapped-volume-race-ed5172ed-4cbf-4e84-a66b-256e4f274516: Found 0 pods out of 5 +Aug 17 23:29:33.247: INFO: Pod name wrapped-volume-race-ed5172ed-4cbf-4e84-a66b-256e4f274516: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-ed5172ed-4cbf-4e84-a66b-256e4f274516 in namespace emptydir-wrapper-79, will wait for the garbage collector to delete the pods +Aug 17 23:29:43.338: INFO: Deleting ReplicationController wrapped-volume-race-ed5172ed-4cbf-4e84-a66b-256e4f274516 took: 10.569632ms +Aug 17 23:29:43.438: INFO: Terminating ReplicationController wrapped-volume-race-ed5172ed-4cbf-4e84-a66b-256e4f274516 pods took: 100.337088ms +STEP: Creating RC which spawns configmap-volume pods +Aug 17 23:29:47.176: INFO: Pod name wrapped-volume-race-bb335415-ff64-42cf-aaa0-b090aa0b2402: Found 1 pods out of 5 +Aug 17 23:29:52.185: INFO: Pod name wrapped-volume-race-bb335415-ff64-42cf-aaa0-b090aa0b2402: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-bb335415-ff64-42cf-aaa0-b090aa0b2402 in namespace emptydir-wrapper-79, will wait for the garbage collector to delete the pods +Aug 17 23:30:02.283: INFO: Deleting ReplicationController wrapped-volume-race-bb335415-ff64-42cf-aaa0-b090aa0b2402 took: 10.428766ms +Aug 17 23:30:02.385: INFO: Terminating ReplicationController wrapped-volume-race-bb335415-ff64-42cf-aaa0-b090aa0b2402 pods took: 101.642177ms +STEP: Creating RC which spawns configmap-volume pods +Aug 17 23:30:06.211: INFO: Pod name wrapped-volume-race-0b8bc242-9feb-4e61-8110-4370268c4d4a: Found 0 pods out of 5 +Aug 17 23:30:11.220: INFO: Pod name wrapped-volume-race-0b8bc242-9feb-4e61-8110-4370268c4d4a: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-0b8bc242-9feb-4e61-8110-4370268c4d4a in namespace emptydir-wrapper-79, will wait for the garbage collector to delete the pods +Aug 17 23:30:21.310: INFO: Deleting ReplicationController wrapped-volume-race-0b8bc242-9feb-4e61-8110-4370268c4d4a took: 9.13425ms +Aug 17 23:30:21.410: INFO: Terminating ReplicationController wrapped-volume-race-0b8bc242-9feb-4e61-8110-4370268c4d4a pods took: 100.797099ms +STEP: Cleaning up 
the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:30:25.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-79" for this suite. + +• [SLOW TEST:57.672 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":198,"skipped":3662,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:30:25.554: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Aug 17 23:30:25.573: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Aug 17 23:30:25.581: INFO: Waiting for terminating namespaces to be deleted... 
+Aug 17 23:30:25.585: INFO: +Logging pods the apiserver thinks is on node 195.17.131.205 before test +Aug 17 23:30:25.594: INFO: capi-kubeadm-bootstrap-controller-manager-6f58b86764-4snx7 from capi-kubeadm-bootstrap-system started at 2022-08-17 22:22:29 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container manager ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: capi-kubeadm-control-plane-controller-manager-7b679446f7-x2d65 from capi-kubeadm-control-plane-system started at 2022-08-17 22:22:49 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container manager ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: capi-controller-manager-6ff75d8789-8fldg from capi-system started at 2022-08-17 22:22:22 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container manager ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: cert-manager-67565ccf5d-zf6kt from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container cert-manager ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: cert-manager-cainjector-654854cb95-cb6v8 from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container cert-manager ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: cert-manager-webhook-fc46785b4-gvkf6 from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container cert-manager ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: eks-anywhere-packages-ddfc7b44-8zssk from eksa-packages started at 2022-08-17 22:24:50 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container controller ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: etcdadm-bootstrap-provider-controller-manager-7d898b8f77-xgmtd from etcdadm-bootstrap-provider-system started at 2022-08-17 22:22:35 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container manager ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: etcdadm-controller-controller-manager-b6f674477-6lsxb from etcdadm-controller-system started at 2022-08-17 22:22:40 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container manager ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: cilium-hvkwp from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container cilium-agent ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: cilium-operator-5799bc594c-b9rnk from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container cilium-operator ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: kube-proxy-pdhjb from kube-system started at 2022-08-17 22:19:15 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container kube-proxy ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: vsphere-cloud-controller-manager-s5246 from kube-system started at 2022-08-17 22:19:15 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 1 +Aug 17 23:30:25.594: INFO: vsphere-csi-controller-f67d5c78c-l8hxm from kube-system started at 2022-08-17 22:43:28 +0000 UTC (5 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container csi-attacher ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: Container 
csi-provisioner ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: Container liveness-probe ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: Container vsphere-csi-controller ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: Container vsphere-syncer ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: vsphere-csi-node-f9msr from kube-system started at 2022-08-17 22:19:15 +0000 UTC (3 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container liveness-probe ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: Container node-driver-registrar ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: Container vsphere-csi-node ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-v7n4s from sonobuoy started at 2022-08-17 22:38:36 +0000 UTC (2 container statuses recorded) +Aug 17 23:30:25.594: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: Container systemd-logs ready: true, restart count 0 +Aug 17 23:30:25.594: INFO: +Logging pods the apiserver thinks is on node 195.17.65.231 before test +Aug 17 23:30:25.604: INFO: cilium-f7vw5 from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.604: INFO: Container cilium-agent ready: true, restart count 0 +Aug 17 23:30:25.604: INFO: cilium-operator-5799bc594c-fpwfg from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.604: INFO: Container cilium-operator ready: true, restart count 0 +Aug 17 23:30:25.604: INFO: kube-proxy-xc469 from kube-system started at 2022-08-17 22:19:12 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.604: INFO: Container kube-proxy ready: true, restart count 0 +Aug 17 23:30:25.604: INFO: vsphere-cloud-controller-manager-49t6p from kube-system started at 2022-08-17 22:48:46 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.604: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 +Aug 17 23:30:25.604: INFO: vsphere-csi-node-lhjjp from kube-system started at 2022-08-17 22:19:12 +0000 UTC (3 container statuses recorded) +Aug 17 23:30:25.604: INFO: Container liveness-probe ready: true, restart count 0 +Aug 17 23:30:25.604: INFO: Container node-driver-registrar ready: true, restart count 0 +Aug 17 23:30:25.604: INFO: Container vsphere-csi-node ready: true, restart count 0 +Aug 17 23:30:25.604: INFO: sonobuoy from sonobuoy started at 2022-08-17 22:38:32 +0000 UTC (1 container statuses recorded) +Aug 17 23:30:25.604: INFO: Container kube-sonobuoy ready: true, restart count 0 +Aug 17 23:30:25.604: INFO: sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-lppfn from sonobuoy started at 2022-08-17 22:38:36 +0000 UTC (2 container statuses recorded) +Aug 17 23:30:25.604: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 17 23:30:25.604: INFO: Container systemd-logs ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.170c45b9815590f7], Reason = [FailedScheduling], Message = [0/4 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:30:26.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-2015" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 +•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":346,"completed":199,"skipped":3691,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:30:26.657: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Aug 17 23:30:26.711: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-639 d0c4fd52-7fde-47e2-bb2c-b321d8e99f24 56861 0 2022-08-17 23:30:26 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-08-17 23:30:26 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Aug 17 23:30:26.711: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-639 d0c4fd52-7fde-47e2-bb2c-b321d8e99f24 56862 0 2022-08-17 23:30:26 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-08-17 23:30:26 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:30:26.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-639" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":200,"skipped":3696,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:30:26.727: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:30:26.765: INFO: The status of Pod server-envvars-f31b101d-b34a-46af-a2de-c7c5a3f9ca94 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:30:28.771: INFO: The status of Pod server-envvars-f31b101d-b34a-46af-a2de-c7c5a3f9ca94 is Running (Ready = true) +Aug 17 23:30:28.819: INFO: Waiting up to 5m0s for pod "client-envvars-eef795ef-2ec4-473d-b0bb-37d5f66e4b99" in namespace "pods-272" to be "Succeeded or Failed" +Aug 17 23:30:28.827: INFO: Pod "client-envvars-eef795ef-2ec4-473d-b0bb-37d5f66e4b99": Phase="Pending", Reason="", readiness=false. Elapsed: 7.448872ms +Aug 17 23:30:30.832: INFO: Pod "client-envvars-eef795ef-2ec4-473d-b0bb-37d5f66e4b99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013136707s +Aug 17 23:30:32.839: INFO: Pod "client-envvars-eef795ef-2ec4-473d-b0bb-37d5f66e4b99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020232703s +STEP: Saw pod success +Aug 17 23:30:32.840: INFO: Pod "client-envvars-eef795ef-2ec4-473d-b0bb-37d5f66e4b99" satisfied condition "Succeeded or Failed" +Aug 17 23:30:32.842: INFO: Trying to get logs from node 195.17.65.231 pod client-envvars-eef795ef-2ec4-473d-b0bb-37d5f66e4b99 container env3cont: +STEP: delete the pod +Aug 17 23:30:32.879: INFO: Waiting for pod client-envvars-eef795ef-2ec4-473d-b0bb-37d5f66e4b99 to disappear +Aug 17 23:30:32.884: INFO: Pod client-envvars-eef795ef-2ec4-473d-b0bb-37d5f66e4b99 no longer exists +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:30:32.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-272" for this suite. 
+ +• [SLOW TEST:6.174 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":201,"skipped":3717,"failed":0} +SSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:30:32.901: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-5597 +[It] should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:30:32.952: INFO: Found 0 stateful pods, waiting for 1 +Aug 17 23:30:42.958: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: patching the StatefulSet +Aug 17 23:30:43.008: INFO: Found 1 stateful pods, waiting for 2 +Aug 17 23:30:53.015: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 23:30:53.015: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true +STEP: Listing all StatefulSets +STEP: Delete all of the StatefulSets +STEP: Verify that StatefulSets have been deleted +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 17 23:30:53.055: INFO: Deleting all statefulset in ns statefulset-5597 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:30:53.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5597" for this suite. 
+ +• [SLOW TEST:20.186 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + should list, patch and delete a collection of StatefulSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":202,"skipped":3722,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:30:53.088: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: getting the auto-created API token +STEP: reading a file in the container +Aug 17 23:30:55.650: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4441 pod-service-account-c10c5e8c-ec01-4530-a5fd-366d94834d68 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Aug 17 23:30:55.787: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4441 pod-service-account-c10c5e8c-ec01-4530-a5fd-366d94834d68 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Aug 17 23:30:55.931: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4441 pod-service-account-c10c5e8c-ec01-4530-a5fd-366d94834d68 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:30:56.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-4441" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":346,"completed":203,"skipped":3758,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:30:56.084: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: getting the auto-created API token +Aug 17 23:30:56.637: INFO: created pod pod-service-account-defaultsa +Aug 17 23:30:56.637: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Aug 17 23:30:56.646: INFO: created pod pod-service-account-mountsa +Aug 17 23:30:56.646: INFO: pod pod-service-account-mountsa service account token volume mount: true +Aug 17 23:30:56.654: INFO: created pod pod-service-account-nomountsa +Aug 17 23:30:56.654: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Aug 17 23:30:56.663: INFO: created pod pod-service-account-defaultsa-mountspec +Aug 17 23:30:56.663: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Aug 17 23:30:56.670: INFO: created pod pod-service-account-mountsa-mountspec +Aug 17 23:30:56.670: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Aug 17 23:30:56.678: INFO: created pod pod-service-account-nomountsa-mountspec +Aug 17 23:30:56.679: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Aug 17 23:30:56.687: INFO: created pod pod-service-account-defaultsa-nomountspec +Aug 17 23:30:56.687: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Aug 17 23:30:56.696: INFO: created pod pod-service-account-mountsa-nomountspec +Aug 17 23:30:56.696: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Aug 17 23:30:56.701: INFO: created pod pod-service-account-nomountsa-nomountspec +Aug 17 23:30:56.701: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:30:56.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-2464" for this suite. 
+•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":346,"completed":204,"skipped":3782,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:30:56.717: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 +[It] should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Aug 17 23:30:56.752: INFO: Pod name pod-release: Found 0 pods out of 1 +Aug 17 23:31:01.758: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:31:02.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-8841" for this suite. 
+ +• [SLOW TEST:6.098 seconds] +[sig-apps] ReplicationController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":205,"skipped":3796,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:31:02.819: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 +Aug 17 23:31:02.849: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Aug 17 23:31:02.864: INFO: Waiting for terminating namespaces to be deleted... +Aug 17 23:31:02.871: INFO: +Logging pods the apiserver thinks is on node 195.17.131.205 before test +Aug 17 23:31:02.890: INFO: capi-kubeadm-bootstrap-controller-manager-6f58b86764-4snx7 from capi-kubeadm-bootstrap-system started at 2022-08-17 22:22:29 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.890: INFO: Container manager ready: true, restart count 0 +Aug 17 23:31:02.890: INFO: capi-kubeadm-control-plane-controller-manager-7b679446f7-x2d65 from capi-kubeadm-control-plane-system started at 2022-08-17 22:22:49 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.890: INFO: Container manager ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: capi-controller-manager-6ff75d8789-8fldg from capi-system started at 2022-08-17 22:22:22 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container manager ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: cert-manager-67565ccf5d-zf6kt from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container cert-manager ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: cert-manager-cainjector-654854cb95-cb6v8 from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container cert-manager ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: cert-manager-webhook-fc46785b4-gvkf6 from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container cert-manager ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: eks-anywhere-packages-ddfc7b44-8zssk from eksa-packages started at 2022-08-17 22:24:50 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container controller 
ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: etcdadm-bootstrap-provider-controller-manager-7d898b8f77-xgmtd from etcdadm-bootstrap-provider-system started at 2022-08-17 22:22:35 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container manager ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: etcdadm-controller-controller-manager-b6f674477-6lsxb from etcdadm-controller-system started at 2022-08-17 22:22:40 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container manager ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: cilium-hvkwp from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container cilium-agent ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: cilium-operator-5799bc594c-b9rnk from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container cilium-operator ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: kube-proxy-pdhjb from kube-system started at 2022-08-17 22:19:15 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container kube-proxy ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: vsphere-cloud-controller-manager-s5246 from kube-system started at 2022-08-17 22:19:15 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 1 +Aug 17 23:31:02.891: INFO: vsphere-csi-controller-f67d5c78c-l8hxm from kube-system started at 2022-08-17 22:43:28 +0000 UTC (5 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container csi-attacher ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: Container csi-provisioner ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: Container liveness-probe ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: Container vsphere-csi-controller ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: Container vsphere-syncer ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: vsphere-csi-node-f9msr from kube-system started at 2022-08-17 22:19:15 +0000 UTC (3 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container liveness-probe ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: Container node-driver-registrar ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: Container vsphere-csi-node ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-v7n4s from sonobuoy started at 2022-08-17 22:38:36 +0000 UTC (2 container statuses recorded) +Aug 17 23:31:02.891: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: Container systemd-logs ready: true, restart count 0 +Aug 17 23:31:02.891: INFO: +Logging pods the apiserver thinks is on node 195.17.65.231 before test +Aug 17 23:31:02.903: INFO: cilium-f7vw5 from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container cilium-agent ready: true, restart count 0 +Aug 17 23:31:02.903: INFO: cilium-operator-5799bc594c-fpwfg from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container cilium-operator ready: true, restart count 0 +Aug 17 23:31:02.903: INFO: kube-proxy-xc469 from kube-system started at 2022-08-17 22:19:12 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container kube-proxy ready: true, restart count 0 +Aug 17 
23:31:02.903: INFO: vsphere-cloud-controller-manager-49t6p from kube-system started at 2022-08-17 22:48:46 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0 +Aug 17 23:31:02.903: INFO: vsphere-csi-node-lhjjp from kube-system started at 2022-08-17 22:19:12 +0000 UTC (3 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container liveness-probe ready: true, restart count 0 +Aug 17 23:31:02.903: INFO: Container node-driver-registrar ready: true, restart count 0 +Aug 17 23:31:02.903: INFO: Container vsphere-csi-node ready: true, restart count 0 +Aug 17 23:31:02.903: INFO: pod-release-44xzc from replication-controller-8841 started at 2022-08-17 23:30:56 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container pod-release ready: false, restart count 0 +Aug 17 23:31:02.903: INFO: pod-release-5r9f9 from replication-controller-8841 started at (0 container statuses recorded) +Aug 17 23:31:02.903: INFO: sonobuoy from sonobuoy started at 2022-08-17 22:38:32 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container kube-sonobuoy ready: true, restart count 0 +Aug 17 23:31:02.903: INFO: sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-lppfn from sonobuoy started at 2022-08-17 22:38:36 +0000 UTC (2 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container sonobuoy-worker ready: true, restart count 0 +Aug 17 23:31:02.903: INFO: Container systemd-logs ready: true, restart count 0 +Aug 17 23:31:02.903: INFO: pod-service-account-defaultsa from svcaccounts-2464 started at 2022-08-17 23:30:56 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container token-test ready: false, restart count 0 +Aug 17 23:31:02.903: INFO: pod-service-account-defaultsa-mountspec from svcaccounts-2464 started at 2022-08-17 23:30:56 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container token-test ready: false, restart count 0 +Aug 17 23:31:02.903: INFO: pod-service-account-defaultsa-nomountspec from svcaccounts-2464 started at 2022-08-17 23:30:56 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container token-test ready: false, restart count 0 +Aug 17 23:31:02.903: INFO: pod-service-account-mountsa from svcaccounts-2464 started at 2022-08-17 23:30:56 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container token-test ready: false, restart count 0 +Aug 17 23:31:02.903: INFO: pod-service-account-mountsa-mountspec from svcaccounts-2464 started at 2022-08-17 23:30:56 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container token-test ready: false, restart count 0 +Aug 17 23:31:02.903: INFO: pod-service-account-mountsa-nomountspec from svcaccounts-2464 started at 2022-08-17 23:30:56 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container token-test ready: false, restart count 0 +Aug 17 23:31:02.903: INFO: pod-service-account-nomountsa from svcaccounts-2464 started at 2022-08-17 23:30:56 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container token-test ready: false, restart count 0 +Aug 17 23:31:02.903: INFO: pod-service-account-nomountsa-mountspec from svcaccounts-2464 started at 2022-08-17 23:30:56 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container token-test ready: false, restart count 0 +Aug 17 23:31:02.903: INFO: pod-service-account-nomountsa-nomountspec from svcaccounts-2464 started at 
2022-08-17 23:30:56 +0000 UTC (1 container statuses recorded) +Aug 17 23:31:02.903: INFO: Container token-test ready: false, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-7956c2a2-e53b-4dd8-8c46-1a01620bd90e 95 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 195.17.65.231 on the node which pod4 resides and expect not scheduled +STEP: removing the label kubernetes.io/e2e-7956c2a2-e53b-4dd8-8c46-1a01620bd90e off the node 195.17.65.231 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-7956c2a2-e53b-4dd8-8c46-1a01620bd90e +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:36:13.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-1839" for this suite. +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 + +• [SLOW TEST:310.249 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":206,"skipped":3878,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:36:13.068: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating the pod +Aug 17 23:36:13.108: INFO: The status of 
Pod annotationupdate964bbf81-26d2-40c4-8550-112975d42d2c is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:36:15.113: INFO: The status of Pod annotationupdate964bbf81-26d2-40c4-8550-112975d42d2c is Running (Ready = true) +Aug 17 23:36:15.655: INFO: Successfully updated pod "annotationupdate964bbf81-26d2-40c4-8550-112975d42d2c" +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:36:19.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-8954" for this suite. + +• [SLOW TEST:6.637 seconds] +[sig-storage] Downward API volume +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":207,"skipped":3897,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:36:19.707: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name secret-test-27a2aebb-7af9-49f3-832c-de7b483ca887 +STEP: Creating a pod to test consume secrets +Aug 17 23:36:19.752: INFO: Waiting up to 5m0s for pod "pod-secrets-277f2c81-524e-41a7-90ff-57c329e8a2b6" in namespace "secrets-1562" to be "Succeeded or Failed" +Aug 17 23:36:19.757: INFO: Pod "pod-secrets-277f2c81-524e-41a7-90ff-57c329e8a2b6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.20066ms +Aug 17 23:36:21.763: INFO: Pod "pod-secrets-277f2c81-524e-41a7-90ff-57c329e8a2b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010822845s +Aug 17 23:36:23.769: INFO: Pod "pod-secrets-277f2c81-524e-41a7-90ff-57c329e8a2b6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017375694s +STEP: Saw pod success +Aug 17 23:36:23.769: INFO: Pod "pod-secrets-277f2c81-524e-41a7-90ff-57c329e8a2b6" satisfied condition "Succeeded or Failed" +Aug 17 23:36:23.772: INFO: Trying to get logs from node 195.17.65.231 pod pod-secrets-277f2c81-524e-41a7-90ff-57c329e8a2b6 container secret-volume-test: +STEP: delete the pod +Aug 17 23:36:23.797: INFO: Waiting for pod pod-secrets-277f2c81-524e-41a7-90ff-57c329e8a2b6 to disappear +Aug 17 23:36:23.800: INFO: Pod pod-secrets-277f2c81-524e-41a7-90ff-57c329e8a2b6 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:36:23.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1562" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":208,"skipped":3930,"failed":0} +SSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:36:23.815: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir volume type on tmpfs +Aug 17 23:36:23.855: INFO: Waiting up to 5m0s for pod "pod-47f19179-a34a-4f58-94d7-dc1c2d04df78" in namespace "emptydir-396" to be "Succeeded or Failed" +Aug 17 23:36:23.862: INFO: Pod "pod-47f19179-a34a-4f58-94d7-dc1c2d04df78": Phase="Pending", Reason="", readiness=false. Elapsed: 7.221266ms +Aug 17 23:36:25.871: INFO: Pod "pod-47f19179-a34a-4f58-94d7-dc1c2d04df78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015780414s +Aug 17 23:36:27.880: INFO: Pod "pod-47f19179-a34a-4f58-94d7-dc1c2d04df78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024884272s +STEP: Saw pod success +Aug 17 23:36:27.880: INFO: Pod "pod-47f19179-a34a-4f58-94d7-dc1c2d04df78" satisfied condition "Succeeded or Failed" +Aug 17 23:36:27.884: INFO: Trying to get logs from node 195.17.65.231 pod pod-47f19179-a34a-4f58-94d7-dc1c2d04df78 container test-container: +STEP: delete the pod +Aug 17 23:36:27.908: INFO: Waiting for pod pod-47f19179-a34a-4f58-94d7-dc1c2d04df78 to disappear +Aug 17 23:36:27.911: INFO: Pod pod-47f19179-a34a-4f58-94d7-dc1c2d04df78 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:36:27.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-396" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":209,"skipped":3935,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] version v1 + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:36:27.927: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-bz6s7 in namespace proxy-8819 +I0817 23:36:27.985257 20 runners.go:193] Created replication controller with name: proxy-service-bz6s7, namespace: proxy-8819, replica count: 1 +I0817 23:36:29.035820 20 runners.go:193] proxy-service-bz6s7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0817 23:36:30.036143 20 runners.go:193] proxy-service-bz6s7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 17 23:36:30.042: INFO: setup took 2.09103773s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Aug 17 23:36:30.050: INFO: (0) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 6.667935ms) +Aug 17 23:36:30.050: INFO: (0) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 7.322445ms) +Aug 17 23:36:30.051: INFO: (0) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 8.228978ms) +Aug 17 23:36:30.052: INFO: (0) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 9.278096ms) +Aug 17 23:36:30.052: INFO: (0) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 9.421881ms) +Aug 17 23:36:30.053: INFO: (0) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 9.662208ms) +Aug 17 23:36:30.056: INFO: (0) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 13.26901ms) +Aug 17 23:36:30.057: INFO: (0) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 14.442886ms) +Aug 17 23:36:30.057: INFO: (0) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... (200; 15.151589ms) +Aug 17 23:36:30.057: INFO: (0) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 14.851417ms) +Aug 17 23:36:30.058: INFO: (0) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 14.90257ms) +Aug 17 23:36:30.058: INFO: (0) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: ... 
(200; 5.556441ms) +Aug 17 23:36:30.067: INFO: (1) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 5.853516ms) +Aug 17 23:36:30.067: INFO: (1) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 5.74869ms) +Aug 17 23:36:30.068: INFO: (1) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 7.626602ms) +Aug 17 23:36:30.069: INFO: (1) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 7.546511ms) +Aug 17 23:36:30.069: INFO: (1) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 7.861389ms) +Aug 17 23:36:30.069: INFO: (1) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 7.938746ms) +Aug 17 23:36:30.069: INFO: (1) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 7.746023ms) +Aug 17 23:36:30.069: INFO: (1) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test (200; 8.083109ms) +Aug 17 23:36:30.069: INFO: (1) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.735582ms) +Aug 17 23:36:30.069: INFO: (1) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 8.623787ms) +Aug 17 23:36:30.070: INFO: (1) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 8.901305ms) +Aug 17 23:36:30.070: INFO: (1) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 9.245024ms) +Aug 17 23:36:30.075: INFO: (2) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 4.112228ms) +Aug 17 23:36:30.075: INFO: (2) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 4.432776ms) +Aug 17 23:36:30.076: INFO: (2) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 5.227158ms) +Aug 17 23:36:30.077: INFO: (2) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: ... (200; 7.120398ms) +Aug 17 23:36:30.078: INFO: (2) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... 
(200; 7.040472ms) +Aug 17 23:36:30.079: INFO: (2) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 8.597373ms) +Aug 17 23:36:30.079: INFO: (2) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 7.940507ms) +Aug 17 23:36:30.079: INFO: (2) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 8.157831ms) +Aug 17 23:36:30.079: INFO: (2) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 7.85144ms) +Aug 17 23:36:30.080: INFO: (2) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 8.870151ms) +Aug 17 23:36:30.080: INFO: (2) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 9.372601ms) +Aug 17 23:36:30.080: INFO: (2) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 9.462609ms) +Aug 17 23:36:30.081: INFO: (2) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 9.135146ms) +Aug 17 23:36:30.081: INFO: (2) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 9.690162ms) +Aug 17 23:36:30.085: INFO: (3) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 4.541189ms) +Aug 17 23:36:30.085: INFO: (3) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 4.421695ms) +Aug 17 23:36:30.087: INFO: (3) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 5.87553ms) +Aug 17 23:36:30.088: INFO: (3) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 6.517516ms) +Aug 17 23:36:30.089: INFO: (3) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 7.67031ms) +Aug 17 23:36:30.090: INFO: (3) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: ... (200; 10.273829ms) +Aug 17 23:36:30.092: INFO: (3) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 10.706263ms) +Aug 17 23:36:30.092: INFO: (3) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 10.915247ms) +Aug 17 23:36:30.092: INFO: (3) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 10.743541ms) +Aug 17 23:36:30.092: INFO: (3) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 10.985092ms) +Aug 17 23:36:30.093: INFO: (3) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 11.298055ms) +Aug 17 23:36:30.093: INFO: (3) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 12.206315ms) +Aug 17 23:36:30.093: INFO: (3) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 12.301766ms) +Aug 17 23:36:30.094: INFO: (3) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 12.537845ms) +Aug 17 23:36:30.094: INFO: (3) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 12.588411ms) +Aug 17 23:36:30.099: INFO: (4) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test<... (200; 5.092418ms) +Aug 17 23:36:30.099: INFO: (4) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... 
(200; 5.177649ms) +Aug 17 23:36:30.099: INFO: (4) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 5.219053ms) +Aug 17 23:36:30.100: INFO: (4) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 5.842898ms) +Aug 17 23:36:30.100: INFO: (4) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 5.911678ms) +Aug 17 23:36:30.101: INFO: (4) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 6.25882ms) +Aug 17 23:36:30.101: INFO: (4) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 6.484114ms) +Aug 17 23:36:30.101: INFO: (4) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 6.964523ms) +Aug 17 23:36:30.101: INFO: (4) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 7.151898ms) +Aug 17 23:36:30.102: INFO: (4) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 7.899442ms) +Aug 17 23:36:30.102: INFO: (4) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 7.851845ms) +Aug 17 23:36:30.102: INFO: (4) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 8.33651ms) +Aug 17 23:36:30.102: INFO: (4) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.540728ms) +Aug 17 23:36:30.102: INFO: (4) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 8.378174ms) +Aug 17 23:36:30.103: INFO: (4) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 8.638868ms) +Aug 17 23:36:30.108: INFO: (5) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 4.222038ms) +Aug 17 23:36:30.108: INFO: (5) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: ... 
(200; 8.081924ms) +Aug 17 23:36:30.112: INFO: (5) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 9.194492ms) +Aug 17 23:36:30.112: INFO: (5) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 8.435073ms) +Aug 17 23:36:30.112: INFO: (5) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 8.733012ms) +Aug 17 23:36:30.113: INFO: (5) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 9.354763ms) +Aug 17 23:36:30.113: INFO: (5) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 9.46722ms) +Aug 17 23:36:30.113: INFO: (5) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 9.15472ms) +Aug 17 23:36:30.113: INFO: (5) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 9.158098ms) +Aug 17 23:36:30.113: INFO: (5) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 8.767466ms) +Aug 17 23:36:30.113: INFO: (5) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 9.101637ms) +Aug 17 23:36:30.118: INFO: (6) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 3.953755ms) +Aug 17 23:36:30.118: INFO: (6) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 4.440903ms) +Aug 17 23:36:30.118: INFO: (6) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... (200; 4.539912ms) +Aug 17 23:36:30.119: INFO: (6) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 4.323539ms) +Aug 17 23:36:30.119: INFO: (6) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 4.949106ms) +Aug 17 23:36:30.120: INFO: (6) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test (200; 5.763355ms) +Aug 17 23:36:30.121: INFO: (6) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 6.175728ms) +Aug 17 23:36:30.121: INFO: (6) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 7.071142ms) +Aug 17 23:36:30.121: INFO: (6) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 6.717853ms) +Aug 17 23:36:30.121: INFO: (6) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 7.074323ms) +Aug 17 23:36:30.121: INFO: (6) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 6.921319ms) +Aug 17 23:36:30.122: INFO: (6) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 7.463933ms) +Aug 17 23:36:30.122: INFO: (6) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 7.439434ms) +Aug 17 23:36:30.123: INFO: (6) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.254879ms) +Aug 17 23:36:30.123: INFO: (6) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 8.519029ms) +Aug 17 23:36:30.127: INFO: (7) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 4.24933ms) +Aug 17 23:36:30.128: INFO: (7) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... 
(200; 4.405335ms) +Aug 17 23:36:30.128: INFO: (7) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 4.320284ms) +Aug 17 23:36:30.129: INFO: (7) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 5.151053ms) +Aug 17 23:36:30.129: INFO: (7) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 6.066904ms) +Aug 17 23:36:30.130: INFO: (7) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test<... (200; 6.484818ms) +Aug 17 23:36:30.130: INFO: (7) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 6.673837ms) +Aug 17 23:36:30.131: INFO: (7) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 7.284832ms) +Aug 17 23:36:30.132: INFO: (7) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 8.338604ms) +Aug 17 23:36:30.132: INFO: (7) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 7.642752ms) +Aug 17 23:36:30.132: INFO: (7) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 7.689973ms) +Aug 17 23:36:30.132: INFO: (7) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 8.006682ms) +Aug 17 23:36:30.132: INFO: (7) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 8.87781ms) +Aug 17 23:36:30.132: INFO: (7) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 8.509911ms) +Aug 17 23:36:30.132: INFO: (7) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.450912ms) +Aug 17 23:36:30.136: INFO: (8) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 3.544194ms) +Aug 17 23:36:30.136: INFO: (8) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 3.749296ms) +Aug 17 23:36:30.137: INFO: (8) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 5.070755ms) +Aug 17 23:36:30.138: INFO: (8) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test (200; 6.582891ms) +Aug 17 23:36:30.139: INFO: (8) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 6.791062ms) +Aug 17 23:36:30.140: INFO: (8) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... 
(200; 7.239611ms) +Aug 17 23:36:30.140: INFO: (8) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.025358ms) +Aug 17 23:36:30.141: INFO: (8) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 8.093577ms) +Aug 17 23:36:30.142: INFO: (8) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 8.982877ms) +Aug 17 23:36:30.142: INFO: (8) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 9.220202ms) +Aug 17 23:36:30.142: INFO: (8) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 9.551824ms) +Aug 17 23:36:30.142: INFO: (8) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 9.510499ms) +Aug 17 23:36:30.143: INFO: (8) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 10.095068ms) +Aug 17 23:36:30.147: INFO: (9) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 4.494578ms) +Aug 17 23:36:30.148: INFO: (9) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 4.636667ms) +Aug 17 23:36:30.148: INFO: (9) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 4.311724ms) +Aug 17 23:36:30.148: INFO: (9) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 5.03676ms) +Aug 17 23:36:30.149: INFO: (9) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 5.972814ms) +Aug 17 23:36:30.149: INFO: (9) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 5.094397ms) +Aug 17 23:36:30.149: INFO: (9) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... (200; 5.421795ms) +Aug 17 23:36:30.150: INFO: (9) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 6.056685ms) +Aug 17 23:36:30.151: INFO: (9) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 7.583628ms) +Aug 17 23:36:30.151: INFO: (9) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 7.804143ms) +Aug 17 23:36:30.151: INFO: (9) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 7.754936ms) +Aug 17 23:36:30.152: INFO: (9) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 8.172389ms) +Aug 17 23:36:30.152: INFO: (9) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 8.973757ms) +Aug 17 23:36:30.153: INFO: (9) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test<... 
(200; 6.282486ms) +Aug 17 23:36:30.161: INFO: (10) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 6.640208ms) +Aug 17 23:36:30.161: INFO: (10) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 7.07784ms) +Aug 17 23:36:30.162: INFO: (10) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 7.423115ms) +Aug 17 23:36:30.162: INFO: (10) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 8.061504ms) +Aug 17 23:36:30.162: INFO: (10) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 8.156793ms) +Aug 17 23:36:30.162: INFO: (10) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... (200; 8.316436ms) +Aug 17 23:36:30.162: INFO: (10) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 8.337064ms) +Aug 17 23:36:30.163: INFO: (10) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.550149ms) +Aug 17 23:36:30.163: INFO: (10) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 9.214101ms) +Aug 17 23:36:30.164: INFO: (10) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 9.752785ms) +Aug 17 23:36:30.164: INFO: (10) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 9.894184ms) +Aug 17 23:36:30.165: INFO: (10) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 10.547346ms) +Aug 17 23:36:30.170: INFO: (11) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test (200; 6.033901ms) +Aug 17 23:36:30.172: INFO: (11) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 7.106161ms) +Aug 17 23:36:30.172: INFO: (11) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 6.497953ms) +Aug 17 23:36:30.173: INFO: (11) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 7.939149ms) +Aug 17 23:36:30.173: INFO: (11) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 7.794832ms) +Aug 17 23:36:30.174: INFO: (11) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 8.822366ms) +Aug 17 23:36:30.174: INFO: (11) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... 
(200; 8.023021ms) +Aug 17 23:36:30.174: INFO: (11) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 8.716102ms) +Aug 17 23:36:30.174: INFO: (11) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 8.300371ms) +Aug 17 23:36:30.174: INFO: (11) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.920599ms) +Aug 17 23:36:30.174: INFO: (11) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 8.278905ms) +Aug 17 23:36:30.175: INFO: (11) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 8.775274ms) +Aug 17 23:36:30.175: INFO: (11) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 9.091736ms) +Aug 17 23:36:30.175: INFO: (11) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 9.241978ms) +Aug 17 23:36:30.179: INFO: (12) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 3.944302ms) +Aug 17 23:36:30.179: INFO: (12) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 4.466146ms) +Aug 17 23:36:30.180: INFO: (12) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 4.941481ms) +Aug 17 23:36:30.181: INFO: (12) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 6.015884ms) +Aug 17 23:36:30.181: INFO: (12) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... (200; 5.751722ms) +Aug 17 23:36:30.181: INFO: (12) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 5.961017ms) +Aug 17 23:36:30.182: INFO: (12) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 6.891192ms) +Aug 17 23:36:30.182: INFO: (12) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 7.390399ms) +Aug 17 23:36:30.183: INFO: (12) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 7.659174ms) +Aug 17 23:36:30.183: INFO: (12) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test<... (200; 8.413211ms) +Aug 17 23:36:30.184: INFO: (12) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 8.586002ms) +Aug 17 23:36:30.184: INFO: (12) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 8.591036ms) +Aug 17 23:36:30.184: INFO: (12) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 8.780451ms) +Aug 17 23:36:30.184: INFO: (12) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 8.924875ms) +Aug 17 23:36:30.188: INFO: (13) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: ... (200; 4.732384ms) +Aug 17 23:36:30.190: INFO: (13) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... 
(200; 5.774925ms) +Aug 17 23:36:30.191: INFO: (13) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 6.183471ms) +Aug 17 23:36:30.191: INFO: (13) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 6.272358ms) +Aug 17 23:36:30.192: INFO: (13) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 7.9339ms) +Aug 17 23:36:30.193: INFO: (13) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 8.329903ms) +Aug 17 23:36:30.193: INFO: (13) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 8.114892ms) +Aug 17 23:36:30.193: INFO: (13) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.930816ms) +Aug 17 23:36:30.193: INFO: (13) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 8.492705ms) +Aug 17 23:36:30.193: INFO: (13) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 9.24475ms) +Aug 17 23:36:30.195: INFO: (13) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 10.019253ms) +Aug 17 23:36:30.195: INFO: (13) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 10.002601ms) +Aug 17 23:36:30.195: INFO: (13) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 10.701115ms) +Aug 17 23:36:30.200: INFO: (14) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 4.577689ms) +Aug 17 23:36:30.200: INFO: (14) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... (200; 4.675654ms) +Aug 17 23:36:30.200: INFO: (14) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 4.525975ms) +Aug 17 23:36:30.201: INFO: (14) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 5.949172ms) +Aug 17 23:36:30.201: INFO: (14) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 5.714669ms) +Aug 17 23:36:30.202: INFO: (14) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 6.412252ms) +Aug 17 23:36:30.202: INFO: (14) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 6.591369ms) +Aug 17 23:36:30.204: INFO: (14) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 8.728665ms) +Aug 17 23:36:30.204: INFO: (14) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.966376ms) +Aug 17 23:36:30.204: INFO: (14) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 8.769858ms) +Aug 17 23:36:30.204: INFO: (14) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test (200; 4.675374ms) +Aug 17 23:36:30.213: INFO: (15) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 5.891608ms) +Aug 17 23:36:30.213: INFO: (15) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 6.363616ms) +Aug 17 23:36:30.213: INFO: (15) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 5.867535ms) +Aug 17 23:36:30.213: INFO: (15) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: ... 
(200; 6.516376ms) +Aug 17 23:36:30.214: INFO: (15) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 7.346245ms) +Aug 17 23:36:30.214: INFO: (15) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 7.055679ms) +Aug 17 23:36:30.220: INFO: (15) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 13.784274ms) +Aug 17 23:36:30.221: INFO: (15) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 13.634397ms) +Aug 17 23:36:30.221: INFO: (15) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 14.884518ms) +Aug 17 23:36:30.221: INFO: (15) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 14.329677ms) +Aug 17 23:36:30.221: INFO: (15) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 14.627103ms) +Aug 17 23:36:30.222: INFO: (15) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 15.476889ms) +Aug 17 23:36:30.227: INFO: (16) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test<... (200; 9.368941ms) +Aug 17 23:36:30.232: INFO: (16) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 10.01704ms) +Aug 17 23:36:30.233: INFO: (16) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... (200; 10.911622ms) +Aug 17 23:36:30.234: INFO: (16) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 11.099949ms) +Aug 17 23:36:30.234: INFO: (16) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 11.495835ms) +Aug 17 23:36:30.236: INFO: (16) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 13.140864ms) +Aug 17 23:36:30.238: INFO: (16) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 14.924355ms) +Aug 17 23:36:30.238: INFO: (16) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 15.275432ms) +Aug 17 23:36:30.239: INFO: (16) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 16.544556ms) +Aug 17 23:36:30.239: INFO: (16) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 16.804928ms) +Aug 17 23:36:30.240: INFO: (16) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 17.108364ms) +Aug 17 23:36:30.246: INFO: (17) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 5.666659ms) +Aug 17 23:36:30.246: INFO: (17) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 6.208755ms) +Aug 17 23:36:30.247: INFO: (17) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 6.403858ms) +Aug 17 23:36:30.247: INFO: (17) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 6.853964ms) +Aug 17 23:36:30.248: INFO: (17) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 7.88938ms) +Aug 17 23:36:30.248: INFO: (17) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test<... 
(200; 8.291082ms) +Aug 17 23:36:30.249: INFO: (17) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.864531ms) +Aug 17 23:36:30.249: INFO: (17) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... (200; 8.554295ms) +Aug 17 23:36:30.249: INFO: (17) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 8.937237ms) +Aug 17 23:36:30.250: INFO: (17) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 9.216206ms) +Aug 17 23:36:30.250: INFO: (17) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 9.534039ms) +Aug 17 23:36:30.250: INFO: (17) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 9.588897ms) +Aug 17 23:36:30.250: INFO: (17) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 9.794975ms) +Aug 17 23:36:30.250: INFO: (17) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname2/proxy/: bar (200; 9.961889ms) +Aug 17 23:36:30.251: INFO: (17) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 10.582153ms) +Aug 17 23:36:30.256: INFO: (18) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 4.725514ms) +Aug 17 23:36:30.256: INFO: (18) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 4.844394ms) +Aug 17 23:36:30.256: INFO: (18) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 4.894938ms) +Aug 17 23:36:30.257: INFO: (18) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 5.930612ms) +Aug 17 23:36:30.258: INFO: (18) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 6.300609ms) +Aug 17 23:36:30.258: INFO: (18) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 6.703381ms) +Aug 17 23:36:30.258: INFO: (18) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:1080/proxy/: test<... (200; 6.486896ms) +Aug 17 23:36:30.259: INFO: (18) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 7.428911ms) +Aug 17 23:36:30.259: INFO: (18) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 7.969887ms) +Aug 17 23:36:30.259: INFO: (18) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 8.04425ms) +Aug 17 23:36:30.259: INFO: (18) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... (200; 8.017053ms) +Aug 17 23:36:30.260: INFO: (18) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 8.440049ms) +Aug 17 23:36:30.260: INFO: (18) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: test<... (200; 6.160611ms) +Aug 17 23:36:30.270: INFO: (19) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:1080/proxy/: ... 
(200; 8.820298ms) +Aug 17 23:36:30.272: INFO: (19) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk/proxy/: test (200; 10.165344ms) +Aug 17 23:36:30.272: INFO: (19) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname1/proxy/: tls baz (200; 10.578644ms) +Aug 17 23:36:30.272: INFO: (19) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:460/proxy/: tls baz (200; 10.87933ms) +Aug 17 23:36:30.273: INFO: (19) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname2/proxy/: bar (200; 11.281033ms) +Aug 17 23:36:30.274: INFO: (19) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:160/proxy/: foo (200; 12.243245ms) +Aug 17 23:36:30.274: INFO: (19) /api/v1/namespaces/proxy-8819/services/https:proxy-service-bz6s7:tlsportname2/proxy/: tls qux (200; 12.497518ms) +Aug 17 23:36:30.275: INFO: (19) /api/v1/namespaces/proxy-8819/pods/proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 13.102255ms) +Aug 17 23:36:30.275: INFO: (19) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:462/proxy/: tls qux (200; 13.271608ms) +Aug 17 23:36:30.276: INFO: (19) /api/v1/namespaces/proxy-8819/services/http:proxy-service-bz6s7:portname1/proxy/: foo (200; 14.452702ms) +Aug 17 23:36:30.276: INFO: (19) /api/v1/namespaces/proxy-8819/pods/http:proxy-service-bz6s7-g8nfk:162/proxy/: bar (200; 14.415671ms) +Aug 17 23:36:30.276: INFO: (19) /api/v1/namespaces/proxy-8819/services/proxy-service-bz6s7:portname1/proxy/: foo (200; 14.62299ms) +Aug 17 23:36:30.276: INFO: (19) /api/v1/namespaces/proxy-8819/pods/https:proxy-service-bz6s7-g8nfk:443/proxy/: >> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod liveness-bb6d9fbf-f25a-49e1-b491-a0a7cef659cd in namespace container-probe-2469 +Aug 17 23:36:35.318: INFO: Started pod liveness-bb6d9fbf-f25a-49e1-b491-a0a7cef659cd in namespace container-probe-2469 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 17 23:36:35.322: INFO: Initial restart count of pod liveness-bb6d9fbf-f25a-49e1-b491-a0a7cef659cd is 0 +Aug 17 23:36:55.381: INFO: Restart count of pod container-probe-2469/liveness-bb6d9fbf-f25a-49e1-b491-a0a7cef659cd is now 1 (20.059475064s elapsed) +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:36:55.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-2469" for this suite. 
+ +• [SLOW TEST:22.154 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":211,"skipped":4016,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:36:55.422: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:37:02.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-1004" for this suite. + +• [SLOW TEST:7.068 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":346,"completed":212,"skipped":4030,"failed":0} +SSSSS +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:37:02.490: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a job +STEP: Ensuring job reaches completions +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:37:12.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-8375" for this suite. + +• [SLOW TEST:10.066 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":213,"skipped":4035,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:37:12.558: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:37:12.599: INFO: The status of Pod busybox-scheduling-80ed2406-5104-460a-823e-868766fc96f8 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:37:14.606: INFO: The status of Pod busybox-scheduling-80ed2406-5104-460a-823e-868766fc96f8 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:37:14.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying 
namespace "kubelet-test-1313" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":214,"skipped":4047,"failed":0} +SSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:37:14.629: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:37:14.652: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-5870 +I0817 23:37:14.664247 20 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5870, replica count: 1 +I0817 23:37:15.714811 20 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0817 23:37:16.714992 20 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 17 23:37:16.839: INFO: Created: latency-svc-rl4l6 +Aug 17 23:37:16.847: INFO: Got endpoints: latency-svc-rl4l6 [31.462589ms] +Aug 17 23:37:16.868: INFO: Created: latency-svc-24vgd +Aug 17 23:37:16.874: INFO: Got endpoints: latency-svc-24vgd [26.694337ms] +Aug 17 23:37:16.891: INFO: Created: latency-svc-ddzhx +Aug 17 23:37:16.898: INFO: Got endpoints: latency-svc-ddzhx [50.059318ms] +Aug 17 23:37:16.912: INFO: Created: latency-svc-dqv4f +Aug 17 23:37:16.921: INFO: Got endpoints: latency-svc-dqv4f [73.230437ms] +Aug 17 23:37:16.929: INFO: Created: latency-svc-767mz +Aug 17 23:37:16.936: INFO: Got endpoints: latency-svc-767mz [87.499348ms] +Aug 17 23:37:17.171: INFO: Created: latency-svc-465fc +Aug 17 23:37:17.173: INFO: Created: latency-svc-ck5vm +Aug 17 23:37:17.173: INFO: Created: latency-svc-tcjl5 +Aug 17 23:37:17.173: INFO: Created: latency-svc-pks9d +Aug 17 23:37:17.174: INFO: Created: latency-svc-ncn9n +Aug 17 23:37:17.174: INFO: Created: latency-svc-d2kpj +Aug 17 23:37:17.174: INFO: Created: latency-svc-tc5rv +Aug 17 23:37:17.174: INFO: Created: latency-svc-x7cst +Aug 17 23:37:17.174: INFO: Created: latency-svc-gq7bq +Aug 17 23:37:17.174: INFO: Created: latency-svc-zfn9t +Aug 17 23:37:17.174: INFO: Got endpoints: latency-svc-465fc [325.682942ms] +Aug 17 23:37:17.174: INFO: Created: latency-svc-726z2 +Aug 17 23:37:17.174: INFO: Created: latency-svc-gqqh9 +Aug 17 23:37:17.175: INFO: Created: latency-svc-fptgl +Aug 17 23:37:17.175: INFO: Created: latency-svc-nsh6d +Aug 17 23:37:17.175: INFO: Created: latency-svc-mkxjv +Aug 17 23:37:17.188: INFO: Got endpoints: latency-svc-mkxjv [339.843524ms] +Aug 17 23:37:17.188: INFO: Got endpoints: latency-svc-d2kpj [252.56348ms] +Aug 17 23:37:17.189: INFO: Got endpoints: latency-svc-gqqh9 [340.435258ms] +Aug 17 
23:37:17.193: INFO: Got endpoints: latency-svc-nsh6d [344.790679ms] +Aug 17 23:37:17.193: INFO: Got endpoints: latency-svc-ncn9n [345.011124ms] +Aug 17 23:37:17.198: INFO: Got endpoints: latency-svc-fptgl [349.782438ms] +Aug 17 23:37:17.203: INFO: Got endpoints: latency-svc-tcjl5 [354.446433ms] +Aug 17 23:37:17.203: INFO: Got endpoints: latency-svc-zfn9t [354.794278ms] +Aug 17 23:37:17.210: INFO: Got endpoints: latency-svc-pks9d [335.623533ms] +Aug 17 23:37:17.211: INFO: Got endpoints: latency-svc-gq7bq [290.069303ms] +Aug 17 23:37:17.216: INFO: Got endpoints: latency-svc-ck5vm [317.956097ms] +Aug 17 23:37:17.216: INFO: Got endpoints: latency-svc-x7cst [368.155192ms] +Aug 17 23:37:17.217: INFO: Got endpoints: latency-svc-tc5rv [368.395769ms] +Aug 17 23:37:17.222: INFO: Created: latency-svc-2dwbr +Aug 17 23:37:17.223: INFO: Got endpoints: latency-svc-726z2 [374.225869ms] +Aug 17 23:37:17.229: INFO: Got endpoints: latency-svc-2dwbr [54.115081ms] +Aug 17 23:37:17.242: INFO: Created: latency-svc-r8bc7 +Aug 17 23:37:17.250: INFO: Got endpoints: latency-svc-r8bc7 [61.334284ms] +Aug 17 23:37:17.262: INFO: Created: latency-svc-s4v4d +Aug 17 23:37:17.268: INFO: Got endpoints: latency-svc-s4v4d [80.444024ms] +Aug 17 23:37:17.275: INFO: Created: latency-svc-mt6rj +Aug 17 23:37:17.283: INFO: Got endpoints: latency-svc-mt6rj [93.998741ms] +Aug 17 23:37:17.294: INFO: Created: latency-svc-crrbl +Aug 17 23:37:17.301: INFO: Got endpoints: latency-svc-crrbl [108.059125ms] +Aug 17 23:37:17.309: INFO: Created: latency-svc-xqw6j +Aug 17 23:37:17.316: INFO: Got endpoints: latency-svc-xqw6j [122.490778ms] +Aug 17 23:37:17.325: INFO: Created: latency-svc-g4r5q +Aug 17 23:37:17.333: INFO: Got endpoints: latency-svc-g4r5q [134.665058ms] +Aug 17 23:37:17.339: INFO: Created: latency-svc-7w4wg +Aug 17 23:37:17.347: INFO: Got endpoints: latency-svc-7w4wg [143.843023ms] +Aug 17 23:37:17.356: INFO: Created: latency-svc-8lgn6 +Aug 17 23:37:17.370: INFO: Got endpoints: latency-svc-8lgn6 [166.862698ms] +Aug 17 23:37:17.374: INFO: Created: latency-svc-88n9z +Aug 17 23:37:17.379: INFO: Got endpoints: latency-svc-88n9z [168.596593ms] +Aug 17 23:37:17.391: INFO: Created: latency-svc-hd224 +Aug 17 23:37:17.399: INFO: Got endpoints: latency-svc-hd224 [187.919001ms] +Aug 17 23:37:17.403: INFO: Created: latency-svc-f4f48 +Aug 17 23:37:17.413: INFO: Got endpoints: latency-svc-f4f48 [196.669168ms] +Aug 17 23:37:17.418: INFO: Created: latency-svc-mgqsx +Aug 17 23:37:17.422: INFO: Got endpoints: latency-svc-mgqsx [205.320653ms] +Aug 17 23:37:17.436: INFO: Created: latency-svc-jbf4c +Aug 17 23:37:17.445: INFO: Created: latency-svc-5lh8g +Aug 17 23:37:17.449: INFO: Got endpoints: latency-svc-jbf4c [232.34169ms] +Aug 17 23:37:17.458: INFO: Got endpoints: latency-svc-5lh8g [235.284165ms] +Aug 17 23:37:17.463: INFO: Created: latency-svc-wjcdj +Aug 17 23:37:17.468: INFO: Got endpoints: latency-svc-wjcdj [239.339285ms] +Aug 17 23:37:17.475: INFO: Created: latency-svc-dl2vq +Aug 17 23:37:17.480: INFO: Got endpoints: latency-svc-dl2vq [230.169908ms] +Aug 17 23:37:17.490: INFO: Created: latency-svc-p5hgm +Aug 17 23:37:17.499: INFO: Got endpoints: latency-svc-p5hgm [230.494551ms] +Aug 17 23:37:17.515: INFO: Created: latency-svc-r7rp8 +Aug 17 23:37:17.515: INFO: Got endpoints: latency-svc-r7rp8 [231.640513ms] +Aug 17 23:37:17.521: INFO: Created: latency-svc-bdl2l +Aug 17 23:37:17.529: INFO: Got endpoints: latency-svc-bdl2l [227.186984ms] +Aug 17 23:37:17.544: INFO: Created: latency-svc-kzflw +Aug 17 23:37:17.555: INFO: Got endpoints: 
latency-svc-kzflw [238.399936ms]
+Aug 17 23:37:17.562: INFO: Created: latency-svc-6mldr
+Aug 17 23:37:17.571: INFO: Got endpoints: latency-svc-6mldr [238.192689ms]
+Aug 17 23:37:17.589: INFO: Created: latency-svc-ssh2j
+Aug 17 23:37:17.610: INFO: Got endpoints: latency-svc-ssh2j [262.789992ms]
+Aug 17 23:37:17.620: INFO: Created: latency-svc-4kdfp
+Aug 17 23:37:17.629: INFO: Got endpoints: latency-svc-4kdfp [259.035441ms]
+Aug 17 23:37:17.636: INFO: Created: latency-svc-x7p4s
+Aug 17 23:37:17.667: INFO: Created: latency-svc-g8fnv
+Aug 17 23:37:17.667: INFO: Got endpoints: latency-svc-x7p4s [288.361094ms]
+Aug 17 23:37:17.681: INFO: Created: latency-svc-shn2l
+Aug 17 23:37:17.706: INFO: Created: latency-svc-sjckf
+Aug 17 23:37:17.716: INFO: Created: latency-svc-q8ht9
+Aug 17 23:37:17.718: INFO: Got endpoints: latency-svc-g8fnv [318.52063ms]
+Aug 17 23:37:17.727: INFO: Created: latency-svc-crlpt
+Aug 17 23:37:17.743: INFO: Created: latency-svc-bxtgr
+Aug 17 23:37:17.765: INFO: Created: latency-svc-6jvc6
+Aug 17 23:37:17.776: INFO: Got endpoints: latency-svc-shn2l [362.374716ms]
+Aug 17 23:37:17.786: INFO: Created: latency-svc-dr8cs
+Aug 17 23:37:17.801: INFO: Created: latency-svc-xtnkv
+Aug 17 23:37:17.818: INFO: Got endpoints: latency-svc-sjckf [395.760342ms]
+Aug 17 23:37:17.822: INFO: Created: latency-svc-45rf4
+Aug 17 23:37:17.838: INFO: Created: latency-svc-b6gzg
+Aug 17 23:37:17.856: INFO: Created: latency-svc-hbdtd
+Aug 17 23:37:17.869: INFO: Got endpoints: latency-svc-q8ht9 [419.588236ms]
+Aug 17 23:37:17.882: INFO: Created: latency-svc-p8thn
+Aug 17 23:37:17.902: INFO: Created: latency-svc-nzf45
+Aug 17 23:37:17.922: INFO: Created: latency-svc-v9blt
+Aug 17 23:37:17.922: INFO: Got endpoints: latency-svc-crlpt [463.707952ms]
+Aug 17 23:37:17.953: INFO: Created: latency-svc-mmr6g
+Aug 17 23:37:17.968: INFO: Got endpoints: latency-svc-bxtgr [499.783294ms]
+Aug 17 23:37:17.973: INFO: Created: latency-svc-pct9q
+Aug 17 23:37:17.995: INFO: Created: latency-svc-xcl89
+Aug 17 23:37:18.013: INFO: Created: latency-svc-lchsn
+Aug 17 23:37:18.021: INFO: Got endpoints: latency-svc-6jvc6 [540.684718ms]
+Aug 17 23:37:18.029: INFO: Created: latency-svc-mfdwf
+Aug 17 23:37:18.042: INFO: Created: latency-svc-5qssv
+Aug 17 23:37:18.060: INFO: Created: latency-svc-lwzr6
+Aug 17 23:37:18.066: INFO: Got endpoints: latency-svc-dr8cs [566.116154ms]
+Aug 17 23:37:18.088: INFO: Created: latency-svc-mwx9z
+Aug 17 23:37:18.116: INFO: Got endpoints: latency-svc-xtnkv [600.920918ms]
+Aug 17 23:37:18.133: INFO: Created: latency-svc-cmmhq
+Aug 17 23:37:18.168: INFO: Got endpoints: latency-svc-45rf4 [638.456199ms]
+Aug 17 23:37:18.187: INFO: Created: latency-svc-rdkpl
+Aug 17 23:37:18.218: INFO: Got endpoints: latency-svc-b6gzg [663.176295ms]
+Aug 17 23:37:18.240: INFO: Created: latency-svc-jbztt
+Aug 17 23:37:18.267: INFO: Got endpoints: latency-svc-hbdtd [695.905005ms]
+Aug 17 23:37:18.293: INFO: Created: latency-svc-f22f4
+Aug 17 23:37:18.317: INFO: Got endpoints: latency-svc-p8thn [707.0243ms]
+Aug 17 23:37:18.337: INFO: Created: latency-svc-t75mh
+Aug 17 23:37:18.366: INFO: Got endpoints: latency-svc-nzf45 [737.284595ms]
+Aug 17 23:37:18.389: INFO: Created: latency-svc-cqrkm
+Aug 17 23:37:18.416: INFO: Got endpoints: latency-svc-v9blt [749.21368ms]
+Aug 17 23:37:18.442: INFO: Created: latency-svc-bnn2p
+Aug 17 23:37:18.468: INFO: Got endpoints: latency-svc-mmr6g [750.234624ms]
+Aug 17 23:37:18.492: INFO: Created: latency-svc-dkwq5
+Aug 17 23:37:18.519: INFO: Got endpoints: latency-svc-pct9q [742.515908ms]
+Aug 17 23:37:18.539: INFO: Created: latency-svc-wbwzk
+Aug 17 23:37:18.566: INFO: Got endpoints: latency-svc-xcl89 [748.006618ms]
+Aug 17 23:37:18.585: INFO: Created: latency-svc-rbrwm
+Aug 17 23:37:18.623: INFO: Got endpoints: latency-svc-lchsn [754.007248ms]
+Aug 17 23:37:18.648: INFO: Created: latency-svc-h7dpw
+Aug 17 23:37:18.668: INFO: Got endpoints: latency-svc-mfdwf [746.213697ms]
+Aug 17 23:37:18.693: INFO: Created: latency-svc-wbd2q
+Aug 17 23:37:18.717: INFO: Got endpoints: latency-svc-5qssv [748.513245ms]
+Aug 17 23:37:18.738: INFO: Created: latency-svc-s8gpd
+Aug 17 23:37:18.770: INFO: Got endpoints: latency-svc-lwzr6 [749.177741ms]
+Aug 17 23:37:18.795: INFO: Created: latency-svc-v2mtp
+Aug 17 23:37:18.814: INFO: Got endpoints: latency-svc-mwx9z [747.862988ms]
+Aug 17 23:37:18.840: INFO: Created: latency-svc-2zllj
+Aug 17 23:37:18.869: INFO: Got endpoints: latency-svc-cmmhq [753.516636ms]
+Aug 17 23:37:18.895: INFO: Created: latency-svc-7wwzp
+Aug 17 23:37:18.921: INFO: Got endpoints: latency-svc-rdkpl [752.79935ms]
+Aug 17 23:37:18.967: INFO: Created: latency-svc-s4x9w
+Aug 17 23:37:18.968: INFO: Got endpoints: latency-svc-jbztt [749.754469ms]
+Aug 17 23:37:18.987: INFO: Created: latency-svc-298px
+Aug 17 23:37:19.019: INFO: Got endpoints: latency-svc-f22f4 [752.01862ms]
+Aug 17 23:37:19.040: INFO: Created: latency-svc-c6ms6
+Aug 17 23:37:19.068: INFO: Got endpoints: latency-svc-t75mh [750.762826ms]
+Aug 17 23:37:19.095: INFO: Created: latency-svc-c4qjz
+Aug 17 23:37:19.116: INFO: Got endpoints: latency-svc-cqrkm [749.750137ms]
+Aug 17 23:37:19.138: INFO: Created: latency-svc-vxlf8
+Aug 17 23:37:19.168: INFO: Got endpoints: latency-svc-bnn2p [751.466333ms]
+Aug 17 23:37:19.189: INFO: Created: latency-svc-6kts8
+Aug 17 23:37:19.218: INFO: Got endpoints: latency-svc-dkwq5 [749.870884ms]
+Aug 17 23:37:19.239: INFO: Created: latency-svc-j575d
+Aug 17 23:37:19.273: INFO: Got endpoints: latency-svc-wbwzk [753.841185ms]
+Aug 17 23:37:19.292: INFO: Created: latency-svc-rb8lp
+Aug 17 23:37:19.319: INFO: Got endpoints: latency-svc-rbrwm [752.915831ms]
+Aug 17 23:37:19.343: INFO: Created: latency-svc-zgzll
+Aug 17 23:37:19.366: INFO: Got endpoints: latency-svc-h7dpw [742.471868ms]
+Aug 17 23:37:19.391: INFO: Created: latency-svc-jdnvg
+Aug 17 23:37:19.416: INFO: Got endpoints: latency-svc-wbd2q [748.128251ms]
+Aug 17 23:37:19.438: INFO: Created: latency-svc-l7g69
+Aug 17 23:37:19.468: INFO: Got endpoints: latency-svc-s8gpd [751.295721ms]
+Aug 17 23:37:19.496: INFO: Created: latency-svc-jjvwz
+Aug 17 23:37:19.517: INFO: Got endpoints: latency-svc-v2mtp [747.257173ms]
+Aug 17 23:37:19.546: INFO: Created: latency-svc-wblls
+Aug 17 23:37:19.569: INFO: Got endpoints: latency-svc-2zllj [754.503082ms]
+Aug 17 23:37:19.597: INFO: Created: latency-svc-9x2bz
+Aug 17 23:37:19.620: INFO: Got endpoints: latency-svc-7wwzp [750.877057ms]
+Aug 17 23:37:19.645: INFO: Created: latency-svc-wxnft
+Aug 17 23:37:19.667: INFO: Got endpoints: latency-svc-s4x9w [746.20684ms]
+Aug 17 23:37:19.695: INFO: Created: latency-svc-hlv82
+Aug 17 23:37:19.716: INFO: Got endpoints: latency-svc-298px [748.378892ms]
+Aug 17 23:37:19.741: INFO: Created: latency-svc-blt7n
+Aug 17 23:37:19.770: INFO: Got endpoints: latency-svc-c6ms6 [750.747022ms]
+Aug 17 23:37:19.796: INFO: Created: latency-svc-dmd9p
+Aug 17 23:37:19.818: INFO: Got endpoints: latency-svc-c4qjz [750.501942ms]
+Aug 17 23:37:19.835: INFO: Created: latency-svc-rd6g9
+Aug 17 23:37:19.865: INFO: Got endpoints: latency-svc-vxlf8 [748.903409ms]
+Aug 17 23:37:19.892: INFO: Created: latency-svc-bh7bn
+Aug 17 23:37:19.916: INFO: Got endpoints: latency-svc-6kts8 [747.572698ms]
+Aug 17 23:37:19.940: INFO: Created: latency-svc-pdsls
+Aug 17 23:37:19.972: INFO: Got endpoints: latency-svc-j575d [753.624292ms]
+Aug 17 23:37:19.998: INFO: Created: latency-svc-8qm5l
+Aug 17 23:37:20.015: INFO: Got endpoints: latency-svc-rb8lp [742.133889ms]
+Aug 17 23:37:20.037: INFO: Created: latency-svc-5mt55
+Aug 17 23:37:20.066: INFO: Got endpoints: latency-svc-zgzll [747.029742ms]
+Aug 17 23:37:20.084: INFO: Created: latency-svc-mcltk
+Aug 17 23:37:20.118: INFO: Got endpoints: latency-svc-jdnvg [751.956187ms]
+Aug 17 23:37:20.140: INFO: Created: latency-svc-xvnlv
+Aug 17 23:37:20.167: INFO: Got endpoints: latency-svc-l7g69 [750.793664ms]
+Aug 17 23:37:20.204: INFO: Created: latency-svc-cjkhv
+Aug 17 23:37:20.221: INFO: Got endpoints: latency-svc-jjvwz [753.175456ms]
+Aug 17 23:37:20.270: INFO: Got endpoints: latency-svc-wblls [753.162406ms]
+Aug 17 23:37:20.272: INFO: Created: latency-svc-kmpbc
+Aug 17 23:37:20.294: INFO: Created: latency-svc-wvtxl
+Aug 17 23:37:20.319: INFO: Got endpoints: latency-svc-9x2bz [750.056947ms]
+Aug 17 23:37:20.344: INFO: Created: latency-svc-rc28d
+Aug 17 23:37:20.366: INFO: Got endpoints: latency-svc-wxnft [746.378523ms]
+Aug 17 23:37:20.392: INFO: Created: latency-svc-twg5n
+Aug 17 23:37:20.418: INFO: Got endpoints: latency-svc-hlv82 [750.936253ms]
+Aug 17 23:37:20.444: INFO: Created: latency-svc-r2zff
+Aug 17 23:37:20.474: INFO: Got endpoints: latency-svc-blt7n [757.586827ms]
+Aug 17 23:37:20.502: INFO: Created: latency-svc-8jjzj
+Aug 17 23:37:20.516: INFO: Got endpoints: latency-svc-dmd9p [745.882129ms]
+Aug 17 23:37:20.536: INFO: Created: latency-svc-8kjgp
+Aug 17 23:37:20.570: INFO: Got endpoints: latency-svc-rd6g9 [751.164051ms]
+Aug 17 23:37:20.599: INFO: Created: latency-svc-rwxbp
+Aug 17 23:37:20.619: INFO: Got endpoints: latency-svc-bh7bn [753.488009ms]
+Aug 17 23:37:20.640: INFO: Created: latency-svc-wlvr8
+Aug 17 23:37:20.674: INFO: Got endpoints: latency-svc-pdsls [757.513029ms]
+Aug 17 23:37:20.691: INFO: Created: latency-svc-59g7c
+Aug 17 23:37:20.719: INFO: Got endpoints: latency-svc-8qm5l [747.323231ms]
+Aug 17 23:37:20.740: INFO: Created: latency-svc-nkw8z
+Aug 17 23:37:20.771: INFO: Got endpoints: latency-svc-5mt55 [754.784947ms]
+Aug 17 23:37:20.796: INFO: Created: latency-svc-rvrv8
+Aug 17 23:37:20.818: INFO: Got endpoints: latency-svc-mcltk [751.582207ms]
+Aug 17 23:37:20.839: INFO: Created: latency-svc-w682n
+Aug 17 23:37:20.870: INFO: Got endpoints: latency-svc-xvnlv [751.748076ms]
+Aug 17 23:37:20.896: INFO: Created: latency-svc-rgj4h
+Aug 17 23:37:20.915: INFO: Got endpoints: latency-svc-cjkhv [747.120037ms]
+Aug 17 23:37:20.936: INFO: Created: latency-svc-jjr8x
+Aug 17 23:37:20.967: INFO: Got endpoints: latency-svc-kmpbc [745.2378ms]
+Aug 17 23:37:20.994: INFO: Created: latency-svc-kl6sk
+Aug 17 23:37:21.019: INFO: Got endpoints: latency-svc-wvtxl [748.504141ms]
+Aug 17 23:37:21.052: INFO: Created: latency-svc-4sbm2
+Aug 17 23:37:21.070: INFO: Got endpoints: latency-svc-rc28d [750.314035ms]
+Aug 17 23:37:21.088: INFO: Created: latency-svc-zjbzk
+Aug 17 23:37:21.119: INFO: Got endpoints: latency-svc-twg5n [752.803221ms]
+Aug 17 23:37:21.146: INFO: Created: latency-svc-ckkn9
+Aug 17 23:37:21.168: INFO: Got endpoints: latency-svc-r2zff [750.040361ms]
+Aug 17 23:37:21.188: INFO: Created: latency-svc-fv6b9
+Aug 17 23:37:21.215: INFO: Got endpoints: latency-svc-8jjzj [741.486295ms]
+Aug 17 23:37:21.234: INFO: Created: latency-svc-gmqp8
+Aug 17 23:37:21.271: INFO: Got endpoints: latency-svc-8kjgp [754.842454ms]
+Aug 17 23:37:21.287: INFO: Created: latency-svc-jpfr7
+Aug 17 23:37:21.318: INFO: Got endpoints: latency-svc-rwxbp [747.412349ms]
+Aug 17 23:37:21.344: INFO: Created: latency-svc-drwds
+Aug 17 23:37:21.369: INFO: Got endpoints: latency-svc-wlvr8 [749.908758ms]
+Aug 17 23:37:21.388: INFO: Created: latency-svc-xr58n
+Aug 17 23:37:21.421: INFO: Got endpoints: latency-svc-59g7c [746.911972ms]
+Aug 17 23:37:21.439: INFO: Created: latency-svc-wb85w
+Aug 17 23:37:21.471: INFO: Got endpoints: latency-svc-nkw8z [750.995706ms]
+Aug 17 23:37:21.492: INFO: Created: latency-svc-98vd2
+Aug 17 23:37:21.522: INFO: Got endpoints: latency-svc-rvrv8 [751.394849ms]
+Aug 17 23:37:21.542: INFO: Created: latency-svc-dskch
+Aug 17 23:37:21.567: INFO: Got endpoints: latency-svc-w682n [748.630273ms]
+Aug 17 23:37:21.583: INFO: Created: latency-svc-2jwj9
+Aug 17 23:37:21.617: INFO: Got endpoints: latency-svc-rgj4h [746.849656ms]
+Aug 17 23:37:21.636: INFO: Created: latency-svc-jt49g
+Aug 17 23:37:21.671: INFO: Got endpoints: latency-svc-jjr8x [756.429562ms]
+Aug 17 23:37:21.690: INFO: Created: latency-svc-bwz56
+Aug 17 23:37:21.719: INFO: Got endpoints: latency-svc-kl6sk [751.895831ms]
+Aug 17 23:37:21.742: INFO: Created: latency-svc-55wpm
+Aug 17 23:37:21.765: INFO: Got endpoints: latency-svc-4sbm2 [745.671392ms]
+Aug 17 23:37:21.791: INFO: Created: latency-svc-kzs5m
+Aug 17 23:37:21.815: INFO: Got endpoints: latency-svc-zjbzk [744.601069ms]
+Aug 17 23:37:21.834: INFO: Created: latency-svc-pq5zv
+Aug 17 23:37:21.869: INFO: Got endpoints: latency-svc-ckkn9 [749.296278ms]
+Aug 17 23:37:21.892: INFO: Created: latency-svc-c6dqd
+Aug 17 23:37:21.920: INFO: Got endpoints: latency-svc-fv6b9 [751.794599ms]
+Aug 17 23:37:21.941: INFO: Created: latency-svc-qqphj
+Aug 17 23:37:21.970: INFO: Got endpoints: latency-svc-gmqp8 [754.308763ms]
+Aug 17 23:37:21.992: INFO: Created: latency-svc-9gpcs
+Aug 17 23:37:22.020: INFO: Got endpoints: latency-svc-jpfr7 [748.685024ms]
+Aug 17 23:37:22.039: INFO: Created: latency-svc-vbjh6
+Aug 17 23:37:22.067: INFO: Got endpoints: latency-svc-drwds [749.224209ms]
+Aug 17 23:37:22.090: INFO: Created: latency-svc-vlhmx
+Aug 17 23:37:22.117: INFO: Got endpoints: latency-svc-xr58n [747.847405ms]
+Aug 17 23:37:22.136: INFO: Created: latency-svc-p69x2
+Aug 17 23:37:22.168: INFO: Got endpoints: latency-svc-wb85w [747.431999ms]
+Aug 17 23:37:22.184: INFO: Created: latency-svc-kzzwr
+Aug 17 23:37:22.215: INFO: Got endpoints: latency-svc-98vd2 [744.382159ms]
+Aug 17 23:37:22.234: INFO: Created: latency-svc-x99bm
+Aug 17 23:37:22.268: INFO: Got endpoints: latency-svc-dskch [745.389207ms]
+Aug 17 23:37:22.284: INFO: Created: latency-svc-gcbdt
+Aug 17 23:37:22.318: INFO: Got endpoints: latency-svc-2jwj9 [750.922651ms]
+Aug 17 23:37:22.334: INFO: Created: latency-svc-cnqck
+Aug 17 23:37:22.370: INFO: Got endpoints: latency-svc-jt49g [752.695835ms]
+Aug 17 23:37:22.390: INFO: Created: latency-svc-jx54c
+Aug 17 23:37:22.417: INFO: Got endpoints: latency-svc-bwz56 [745.984919ms]
+Aug 17 23:37:22.434: INFO: Created: latency-svc-hqttm
+Aug 17 23:37:22.469: INFO: Got endpoints: latency-svc-55wpm [750.312535ms]
+Aug 17 23:37:22.488: INFO: Created: latency-svc-2bb7z
+Aug 17 23:37:22.520: INFO: Got endpoints: latency-svc-kzs5m [755.127672ms]
+Aug 17 23:37:22.545: INFO: Created: latency-svc-x87zc
+Aug 17 23:37:22.572: INFO: Got endpoints: latency-svc-pq5zv [757.160307ms]
+Aug 17 23:37:22.596: INFO: Created: latency-svc-njzwc
+Aug 17 23:37:22.616: INFO: Got endpoints: latency-svc-c6dqd [747.383222ms]
+Aug 17 23:37:22.643: INFO: Created: latency-svc-tcdmm
+Aug 17 23:37:22.669: INFO: Got endpoints: latency-svc-qqphj [749.101535ms]
+Aug 17 23:37:22.692: INFO: Created: latency-svc-wzp4t
+Aug 17 23:37:22.717: INFO: Got endpoints: latency-svc-9gpcs [746.958002ms]
+Aug 17 23:37:22.737: INFO: Created: latency-svc-6fvq6
+Aug 17 23:37:22.765: INFO: Got endpoints: latency-svc-vbjh6 [745.239554ms]
+Aug 17 23:37:22.782: INFO: Created: latency-svc-lwkm9
+Aug 17 23:37:22.819: INFO: Got endpoints: latency-svc-vlhmx [752.293852ms]
+Aug 17 23:37:22.843: INFO: Created: latency-svc-nxl6g
+Aug 17 23:37:22.868: INFO: Got endpoints: latency-svc-p69x2 [751.382005ms]
+Aug 17 23:37:22.898: INFO: Created: latency-svc-7gljt
+Aug 17 23:37:22.921: INFO: Got endpoints: latency-svc-kzzwr [752.485414ms]
+Aug 17 23:37:22.938: INFO: Created: latency-svc-g84jx
+Aug 17 23:37:22.972: INFO: Got endpoints: latency-svc-x99bm [757.198562ms]
+Aug 17 23:37:22.989: INFO: Created: latency-svc-g6bl6
+Aug 17 23:37:23.019: INFO: Got endpoints: latency-svc-gcbdt [751.266243ms]
+Aug 17 23:37:23.037: INFO: Created: latency-svc-6gzdr
+Aug 17 23:37:23.071: INFO: Got endpoints: latency-svc-cnqck [752.891559ms]
+Aug 17 23:37:23.091: INFO: Created: latency-svc-4ckss
+Aug 17 23:37:23.122: INFO: Got endpoints: latency-svc-jx54c [752.138851ms]
+Aug 17 23:37:23.143: INFO: Created: latency-svc-hgbkc
+Aug 17 23:37:23.168: INFO: Got endpoints: latency-svc-hqttm [750.963176ms]
+Aug 17 23:37:23.187: INFO: Created: latency-svc-9thvz
+Aug 17 23:37:23.216: INFO: Got endpoints: latency-svc-2bb7z [747.136169ms]
+Aug 17 23:37:23.234: INFO: Created: latency-svc-pfv24
+Aug 17 23:37:23.270: INFO: Got endpoints: latency-svc-x87zc [749.318757ms]
+Aug 17 23:37:23.293: INFO: Created: latency-svc-pdzpg
+Aug 17 23:37:23.316: INFO: Got endpoints: latency-svc-njzwc [744.466612ms]
+Aug 17 23:37:23.335: INFO: Created: latency-svc-knqkc
+Aug 17 23:37:23.369: INFO: Got endpoints: latency-svc-tcdmm [752.145477ms]
+Aug 17 23:37:23.387: INFO: Created: latency-svc-w6nk5
+Aug 17 23:37:23.419: INFO: Got endpoints: latency-svc-wzp4t [749.986586ms]
+Aug 17 23:37:23.441: INFO: Created: latency-svc-p5sxc
+Aug 17 23:37:23.469: INFO: Got endpoints: latency-svc-6fvq6 [751.762806ms]
+Aug 17 23:37:23.487: INFO: Created: latency-svc-w99jh
+Aug 17 23:37:23.519: INFO: Got endpoints: latency-svc-lwkm9 [753.867734ms]
+Aug 17 23:37:23.539: INFO: Created: latency-svc-cksc7
+Aug 17 23:37:23.567: INFO: Got endpoints: latency-svc-nxl6g [747.298128ms]
+Aug 17 23:37:23.588: INFO: Created: latency-svc-8d42h
+Aug 17 23:37:23.622: INFO: Got endpoints: latency-svc-7gljt [753.274963ms]
+Aug 17 23:37:23.641: INFO: Created: latency-svc-mmvjw
+Aug 17 23:37:23.668: INFO: Got endpoints: latency-svc-g84jx [746.292968ms]
+Aug 17 23:37:23.688: INFO: Created: latency-svc-qccxr
+Aug 17 23:37:23.720: INFO: Got endpoints: latency-svc-g6bl6 [747.804009ms]
+Aug 17 23:37:23.741: INFO: Created: latency-svc-gcb2m
+Aug 17 23:37:23.769: INFO: Got endpoints: latency-svc-6gzdr [750.246636ms]
+Aug 17 23:37:23.787: INFO: Created: latency-svc-v9v6r
+Aug 17 23:37:23.816: INFO: Got endpoints: latency-svc-4ckss [744.122852ms]
+Aug 17 23:37:23.837: INFO: Created: latency-svc-cbpjp
+Aug 17 23:37:23.867: INFO: Got endpoints: latency-svc-hgbkc [744.50921ms]
+Aug 17 23:37:23.886: INFO: Created: latency-svc-5cztc
+Aug 17 23:37:23.916: INFO: Got endpoints: latency-svc-9thvz [747.72985ms]
+Aug 17 23:37:23.939: INFO: Created: latency-svc-rxlgg
+Aug 17 23:37:23.970: INFO: Got endpoints: latency-svc-pfv24 [753.271223ms]
+Aug 17 23:37:23.987: INFO: Created: latency-svc-qp7rt
+Aug 17 23:37:24.016: INFO: Got endpoints: latency-svc-pdzpg [745.952732ms]
+Aug 17 23:37:24.038: INFO: Created: latency-svc-b84bf
+Aug 17 23:37:24.067: INFO: Got endpoints: latency-svc-knqkc [750.754964ms]
+Aug 17 23:37:24.090: INFO: Created: latency-svc-2677j
+Aug 17 23:37:24.116: INFO: Got endpoints: latency-svc-w6nk5 [747.460289ms]
+Aug 17 23:37:24.134: INFO: Created: latency-svc-97z6j
+Aug 17 23:37:24.171: INFO: Got endpoints: latency-svc-p5sxc [751.368998ms]
+Aug 17 23:37:24.190: INFO: Created: latency-svc-6zwfd
+Aug 17 23:37:24.215: INFO: Got endpoints: latency-svc-w99jh [746.177795ms]
+Aug 17 23:37:24.235: INFO: Created: latency-svc-dcnvf
+Aug 17 23:37:24.266: INFO: Got endpoints: latency-svc-cksc7 [746.552205ms]
+Aug 17 23:37:24.282: INFO: Created: latency-svc-wj8mt
+Aug 17 23:37:24.316: INFO: Got endpoints: latency-svc-8d42h [749.060087ms]
+Aug 17 23:37:24.338: INFO: Created: latency-svc-9b86q
+Aug 17 23:37:24.370: INFO: Got endpoints: latency-svc-mmvjw [747.789243ms]
+Aug 17 23:37:24.390: INFO: Created: latency-svc-p9zgp
+Aug 17 23:37:24.419: INFO: Got endpoints: latency-svc-qccxr [750.564054ms]
+Aug 17 23:37:24.450: INFO: Created: latency-svc-gn9d4
+Aug 17 23:37:24.466: INFO: Got endpoints: latency-svc-gcb2m [745.631498ms]
+Aug 17 23:37:24.499: INFO: Created: latency-svc-xqt72
+Aug 17 23:37:24.518: INFO: Got endpoints: latency-svc-v9v6r [748.862351ms]
+Aug 17 23:37:24.539: INFO: Created: latency-svc-69vpq
+Aug 17 23:37:24.570: INFO: Got endpoints: latency-svc-cbpjp [753.910613ms]
+Aug 17 23:37:24.592: INFO: Created: latency-svc-t8tfs
+Aug 17 23:37:24.617: INFO: Got endpoints: latency-svc-5cztc [750.172211ms]
+Aug 17 23:37:24.641: INFO: Created: latency-svc-xgg48
+Aug 17 23:37:24.669: INFO: Got endpoints: latency-svc-rxlgg [752.5413ms]
+Aug 17 23:37:24.697: INFO: Created: latency-svc-j7gsz
+Aug 17 23:37:24.717: INFO: Got endpoints: latency-svc-qp7rt [747.408152ms]
+Aug 17 23:37:24.745: INFO: Created: latency-svc-78qsp
+Aug 17 23:37:24.769: INFO: Got endpoints: latency-svc-b84bf [752.801319ms]
+Aug 17 23:37:24.818: INFO: Got endpoints: latency-svc-2677j [750.860129ms]
+Aug 17 23:37:24.867: INFO: Got endpoints: latency-svc-97z6j [750.725827ms]
+Aug 17 23:37:24.921: INFO: Got endpoints: latency-svc-6zwfd [749.971564ms]
+Aug 17 23:37:24.968: INFO: Got endpoints: latency-svc-dcnvf [753.18063ms]
+Aug 17 23:37:25.017: INFO: Got endpoints: latency-svc-wj8mt [751.234136ms]
+Aug 17 23:37:25.069: INFO: Got endpoints: latency-svc-9b86q [753.464716ms]
+Aug 17 23:37:25.117: INFO: Got endpoints: latency-svc-p9zgp [747.541228ms]
+Aug 17 23:37:25.168: INFO: Got endpoints: latency-svc-gn9d4 [748.997937ms]
+Aug 17 23:37:25.219: INFO: Got endpoints: latency-svc-xqt72 [752.319388ms]
+Aug 17 23:37:25.267: INFO: Got endpoints: latency-svc-69vpq [748.374024ms]
+Aug 17 23:37:25.317: INFO: Got endpoints: latency-svc-t8tfs [747.445563ms]
+Aug 17 23:37:25.365: INFO: Got endpoints: latency-svc-xgg48 [748.096676ms]
+Aug 17 23:37:25.419: INFO: Got endpoints: latency-svc-j7gsz [749.729218ms]
+Aug 17 23:37:25.468: INFO: Got endpoints: latency-svc-78qsp [751.045383ms]
+Aug 17 23:37:25.469: INFO: Latencies: [26.694337ms 50.059318ms 54.115081ms 61.334284ms 73.230437ms 80.444024ms 87.499348ms 93.998741ms 108.059125ms 122.490778ms 134.665058ms 143.843023ms 166.862698ms 168.596593ms 187.919001ms 196.669168ms 205.320653ms 227.186984ms 230.169908ms 230.494551ms 231.640513ms 232.34169ms 235.284165ms 238.192689ms 238.399936ms 239.339285ms 252.56348ms 259.035441ms 262.789992ms 288.361094ms 290.069303ms 317.956097ms 318.52063ms 325.682942ms 335.623533ms 339.843524ms 340.435258ms 344.790679ms 345.011124ms 349.782438ms 354.446433ms 354.794278ms 362.374716ms 368.155192ms 368.395769ms 374.225869ms 395.760342ms 419.588236ms 463.707952ms 499.783294ms 540.684718ms 566.116154ms 600.920918ms 638.456199ms 663.176295ms 695.905005ms 707.0243ms 737.284595ms 741.486295ms 742.133889ms 742.471868ms 742.515908ms 744.122852ms 744.382159ms 744.466612ms 744.50921ms 744.601069ms 745.2378ms 745.239554ms 745.389207ms 745.631498ms 745.671392ms 745.882129ms 745.952732ms 745.984919ms 746.177795ms 746.20684ms 746.213697ms 746.292968ms 746.378523ms 746.552205ms 746.849656ms 746.911972ms 746.958002ms 747.029742ms 747.120037ms 747.136169ms 747.257173ms 747.298128ms 747.323231ms 747.383222ms 747.408152ms 747.412349ms 747.431999ms 747.445563ms 747.460289ms 747.541228ms 747.572698ms 747.72985ms 747.789243ms 747.804009ms 747.847405ms 747.862988ms 748.006618ms 748.096676ms 748.128251ms 748.374024ms 748.378892ms 748.504141ms 748.513245ms 748.630273ms 748.685024ms 748.862351ms 748.903409ms 748.997937ms 749.060087ms 749.101535ms 749.177741ms 749.21368ms 749.224209ms 749.296278ms 749.318757ms 749.729218ms 749.750137ms 749.754469ms 749.870884ms 749.908758ms 749.971564ms 749.986586ms 750.040361ms 750.056947ms 750.172211ms 750.234624ms 750.246636ms 750.312535ms 750.314035ms 750.501942ms 750.564054ms 750.725827ms 750.747022ms 750.754964ms 750.762826ms 750.793664ms 750.860129ms 750.877057ms 750.922651ms 750.936253ms 750.963176ms 750.995706ms 751.045383ms 751.164051ms 751.234136ms 751.266243ms 751.295721ms 751.368998ms 751.382005ms 751.394849ms 751.466333ms 751.582207ms 751.748076ms 751.762806ms 751.794599ms 751.895831ms 751.956187ms 752.01862ms 752.138851ms 752.145477ms 752.293852ms 752.319388ms 752.485414ms 752.5413ms 752.695835ms 752.79935ms 752.801319ms 752.803221ms 752.891559ms 752.915831ms 753.162406ms 753.175456ms 753.18063ms 753.271223ms 753.274963ms 753.464716ms 753.488009ms 753.516636ms 753.624292ms 753.841185ms 753.867734ms 753.910613ms 754.007248ms 754.308763ms 754.503082ms 754.784947ms 754.842454ms 755.127672ms 756.429562ms 757.160307ms 757.198562ms 757.513029ms 757.586827ms]
+Aug 17 23:37:25.469: INFO: 50 %ile: 747.804009ms
+Aug 17 23:37:25.469: INFO: 90 %ile: 753.271223ms
+Aug 17 23:37:25.469: INFO: 99 %ile: 757.513029ms
+Aug 17 23:37:25.469: INFO: Total sample count: 200
+[AfterEach] [sig-network] Service endpoints latency
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 17 23:37:25.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "svc-latency-5870" for this suite.
+
+• [SLOW TEST:10.855 seconds]
+[sig-network] Service endpoints latency
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
+ should not be very high [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+------------------------------
+{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":346,"completed":215,"skipped":4051,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+ should be able to convert a non homogeneous list of CRs [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 17 23:37:25.484: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391
+STEP: Building a namespace api object, basename crd-webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
+STEP: Setting up server cert
+STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
+STEP: Deploying the custom resource conversion webhook pod
+STEP: Wait for the deployment to be ready
+Aug 17 23:37:25.786: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Aug 17 23:37:28.835: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
+[It] should be able to convert a non homogeneous list of CRs [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+Aug 17 23:37:28.843: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391
+STEP: Creating a v1 custom resource
+STEP: Create a v2 custom resource
+STEP: List CRs in v1
+STEP: List CRs in v2
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 17 23:37:32.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-webhook-9015" for this suite.
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
+
+• [SLOW TEST:7.011 seconds]
+[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+ should be able to convert a non homogeneous list of CRs [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+------------------------------
+{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":216,"skipped":4084,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Pods
+ should run through the lifecycle of Pods and PodStatus [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+[BeforeEach] [sig-node] Pods
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 17 23:37:32.497: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
+[BeforeEach] [sig-node] Pods
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
+[It] should run through the lifecycle of Pods and PodStatus [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+STEP: creating a Pod with a static label
+STEP: watching for Pod to be ready
+Aug 17 23:37:32.577: INFO: observed Pod pod-test in namespace pods-8307 in phase Pending with labels: map[test-pod-static:true] & conditions []
+Aug 17 23:37:32.583: INFO: observed Pod pod-test in namespace pods-8307 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:37:32 +0000 UTC }]
+Aug 17 23:37:32.601: INFO: observed Pod pod-test in namespace pods-8307 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:37:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:37:32 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:37:32 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:37:32 +0000 UTC }]
+Aug 17 23:37:34.368: INFO: Found Pod pod-test in namespace pods-8307 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:37:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:37:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:37:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-08-17 23:37:32 +0000 UTC }]
+STEP: patching the Pod with a new Label and updated data
+Aug 17 23:37:34.384: INFO: observed event type ADDED
+STEP: getting the Pod and ensuring that it's patched
+STEP: replacing the Pod's status Ready condition to False
+STEP: check the Pod again to ensure its Ready conditions are False
+STEP: deleting the Pod via a Collection with a LabelSelector
+STEP: watching for the Pod to be deleted
+Aug 17 23:37:34.427: INFO: observed event type ADDED
+Aug 17 23:37:34.427: INFO: observed event type MODIFIED
+Aug 17 23:37:34.428: INFO: observed event type MODIFIED
+Aug 17 23:37:34.428: INFO: observed event type MODIFIED
+Aug 17 23:37:34.428: INFO: observed event type MODIFIED
+Aug 17 23:37:34.428: INFO: observed event type MODIFIED
+Aug 17 23:37:34.428: INFO: observed event type MODIFIED
+Aug 17 23:37:36.387: INFO: observed event type MODIFIED
+Aug 17 23:37:37.391: INFO: observed event type MODIFIED
+Aug 17 23:37:37.398: INFO: observed event type MODIFIED
+[AfterEach] [sig-node] Pods
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 17 23:37:37.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-8307" for this suite.
+•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":217,"skipped":4107,"failed":0}
+SSSSS
+------------------------------
+[sig-api-machinery] Garbage collector
+ should delete RS created by deployment when not orphaning [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+[BeforeEach] [sig-api-machinery] Garbage collector
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 17 23:37:37.445: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
+[It] should delete RS created by deployment when not orphaning [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for all rs to be garbage collected
+STEP: expected 0 pods, got 2 pods
+STEP: Gathering metrics
+Aug 17 23:37:38.612: INFO: The status of Pod kube-controller-manager-195.17.32.244 is Running (Ready = true)
+Aug 17 23:37:38.686: INFO: For apiserver_request_total:
+For apiserver_request_latency_seconds:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 17 23:37:38.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-3784" for this suite.
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":218,"skipped":4112,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Variable Expansion
+ should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+[BeforeEach] [sig-node] Variable Expansion
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 17 23:37:38.703: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391
+STEP: Building a namespace api object, basename var-expansion
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
+[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+STEP: creating the pod with failed condition
+STEP: updating the pod
+Aug 17 23:39:39.295: INFO: Successfully updated pod "var-expansion-b95ff91e-dbd6-4d51-aa6a-a93560cd0b83"
+STEP: waiting for pod running
+STEP: deleting the pod gracefully
+Aug 17 23:39:41.310: INFO: Deleting pod "var-expansion-b95ff91e-dbd6-4d51-aa6a-a93560cd0b83" in namespace "var-expansion-9814"
+Aug 17 23:39:41.321: INFO: Wait up to 5m0s for pod "var-expansion-b95ff91e-dbd6-4d51-aa6a-a93560cd0b83" to be fully deleted
+[AfterEach] [sig-node] Variable Expansion
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 17 23:40:13.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "var-expansion-9814" for this suite.
+
+• [SLOW TEST:154.642 seconds]
+[sig-node] Variable Expansion
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
+ should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+------------------------------
+{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":219,"skipped":4142,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers
+ should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+[BeforeEach] [sig-api-machinery] Watchers
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 17 23:40:13.347: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
+[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+STEP: creating a watch on configmaps with a certain label
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: changing the label value of the configmap
+STEP: Expecting to observe a delete notification for the watched object
+Aug 17 23:40:13.402: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1106 203508c6-d5ce-45d2-838f-cc52918f9d2b 65870 0 2022-08-17 23:40:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-08-17 23:40:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
+Aug 17 23:40:13.402: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1106 203508c6-d5ce-45d2-838f-cc52918f9d2b 65872 0 2022-08-17 23:40:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-08-17 23:40:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
+Aug 17 23:40:13.402: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1106 203508c6-d5ce-45d2-838f-cc52918f9d2b 65873 0 2022-08-17 23:40:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-08-17 23:40:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
+STEP: modifying the configmap a second time
+STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
+STEP: changing the label value of the configmap back
+STEP: modifying the configmap a third time
+STEP: deleting the configmap
+STEP: Expecting to observe an add notification for the watched object when the label value was restored
+Aug 17 23:40:23.452: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1106 203508c6-d5ce-45d2-838f-cc52918f9d2b 65996 0 2022-08-17 23:40:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-08-17 23:40:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
+Aug 17 23:40:23.452: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1106 203508c6-d5ce-45d2-838f-cc52918f9d2b 65997 0 2022-08-17 23:40:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-08-17 23:40:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
+Aug 17 23:40:23.452: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1106 203508c6-d5ce-45d2-838f-cc52918f9d2b 65998 0 2022-08-17 23:40:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-08-17 23:40:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
+[AfterEach] [sig-api-machinery] Watchers
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 17 23:40:23.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-1106" for this suite.
+
+• [SLOW TEST:10.120 seconds]
+[sig-api-machinery] Watchers
+/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+ should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+------------------------------
+{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":220,"skipped":4192,"failed":0}
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial]
+ validates that NodeSelector is respected if matching [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 17 23:40:23.468: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
+Aug 17 23:40:23.497: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Aug 17 23:40:23.510: INFO: Waiting for terminating namespaces to be deleted...
+Aug 17 23:40:23.517: INFO:
+Logging pods the apiserver thinks is on node 195.17.131.205 before test
+Aug 17 23:40:23.527: INFO: capi-kubeadm-bootstrap-controller-manager-6f58b86764-4snx7 from capi-kubeadm-bootstrap-system started at 2022-08-17 22:22:29 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container manager ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: capi-kubeadm-control-plane-controller-manager-7b679446f7-x2d65 from capi-kubeadm-control-plane-system started at 2022-08-17 22:22:49 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container manager ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: capi-controller-manager-6ff75d8789-8fldg from capi-system started at 2022-08-17 22:22:22 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container manager ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: cert-manager-67565ccf5d-zf6kt from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container cert-manager ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: cert-manager-cainjector-654854cb95-cb6v8 from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container cert-manager ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: cert-manager-webhook-fc46785b4-gvkf6 from cert-manager started at 2022-08-17 22:21:55 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container cert-manager ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: eks-anywhere-packages-ddfc7b44-8zssk from eksa-packages started at 2022-08-17 22:24:50 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container controller ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: etcdadm-bootstrap-provider-controller-manager-7d898b8f77-xgmtd from etcdadm-bootstrap-provider-system started at 2022-08-17 22:22:35 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container manager ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: etcdadm-controller-controller-manager-b6f674477-6lsxb from etcdadm-controller-system started at 2022-08-17 22:22:40 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container manager ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: cilium-hvkwp from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container cilium-agent ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: cilium-operator-5799bc594c-b9rnk from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.527: INFO: Container cilium-operator ready: true, restart count 0
+Aug 17 23:40:23.527: INFO: kube-proxy-pdhjb from kube-system started at 2022-08-17 22:19:15 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.528: INFO: Container kube-proxy ready: true, restart count 0
+Aug 17 23:40:23.528: INFO: vsphere-cloud-controller-manager-s5246 from kube-system started at 2022-08-17 22:19:15 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.528: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 1
+Aug 17 23:40:23.528: INFO: vsphere-csi-controller-f67d5c78c-l8hxm from kube-system started at 2022-08-17 22:43:28 +0000 UTC (5 container statuses recorded)
+Aug 17 23:40:23.528: INFO: Container csi-attacher ready: true, restart count 0
+Aug 17 23:40:23.528: INFO: Container csi-provisioner ready: true, restart count 0
+Aug 17 23:40:23.528: INFO: Container liveness-probe ready: true, restart count 0
+Aug 17 23:40:23.528: INFO: Container vsphere-csi-controller ready: true, restart count 0
+Aug 17 23:40:23.528: INFO: Container vsphere-syncer ready: true, restart count 0
+Aug 17 23:40:23.528: INFO: vsphere-csi-node-f9msr from kube-system started at 2022-08-17 22:19:15 +0000 UTC (3 container statuses recorded)
+Aug 17 23:40:23.528: INFO: Container liveness-probe ready: true, restart count 0
+Aug 17 23:40:23.528: INFO: Container node-driver-registrar ready: true, restart count 0
+Aug 17 23:40:23.528: INFO: Container vsphere-csi-node ready: true, restart count 0
+Aug 17 23:40:23.528: INFO: sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-v7n4s from sonobuoy started at 2022-08-17 22:38:36 +0000 UTC (2 container statuses recorded)
+Aug 17 23:40:23.528: INFO: Container sonobuoy-worker ready: false, restart count 4
+Aug 17 23:40:23.528: INFO: Container systemd-logs ready: true, restart count 0
+Aug 17 23:40:23.528: INFO:
+Logging pods the apiserver thinks is on node 195.17.65.231 before test
+Aug 17 23:40:23.538: INFO: cilium-f7vw5 from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.538: INFO: Container cilium-agent ready: true, restart count 0
+Aug 17 23:40:23.538: INFO: cilium-operator-5799bc594c-fpwfg from kube-system started at 2022-08-17 22:21:25 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.538: INFO: Container cilium-operator ready: true, restart count 0
+Aug 17 23:40:23.538: INFO: kube-proxy-xc469 from kube-system started at 2022-08-17 22:19:12 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.538: INFO: Container kube-proxy ready: true, restart count 0
+Aug 17 23:40:23.538: INFO: vsphere-cloud-controller-manager-49t6p from kube-system started at 2022-08-17 22:48:46 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.538: INFO: Container vsphere-cloud-controller-manager ready: true, restart count 0
+Aug 17 23:40:23.538: INFO: vsphere-csi-node-lhjjp from kube-system started at 2022-08-17 22:19:12 +0000 UTC (3 container statuses recorded)
+Aug 17 23:40:23.538: INFO: Container liveness-probe ready: true, restart count 0
+Aug 17 23:40:23.538: INFO: Container node-driver-registrar ready: true, restart count 0
+Aug 17 23:40:23.538: INFO: Container vsphere-csi-node ready: true, restart count 0
+Aug 17 23:40:23.538: INFO: sonobuoy from sonobuoy started at 2022-08-17 22:38:32 +0000 UTC (1 container statuses recorded)
+Aug 17 23:40:23.538: INFO: Container kube-sonobuoy ready: true, restart count 0
+Aug 17 23:40:23.538: INFO: sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-lppfn from sonobuoy started at 2022-08-17 22:38:36 +0000 UTC (2 container statuses recorded)
+Aug 17 23:40:23.538: INFO: Container sonobuoy-worker ready: false, restart count 3
+Aug 17 23:40:23.538: INFO: Container systemd-logs ready: true, restart count 0
+[It] validates that NodeSelector is respected if matching [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+STEP: Trying to launch a pod without a label to get a node which can launch it.
+STEP: Explicitly delete pod here to free the resource it takes.
+STEP: Trying to apply a random label on the found node.
+STEP: verifying the node has the label kubernetes.io/e2e-ce774b31-6239-46f0-ab62-af1a86952d71 42
+STEP: Trying to relaunch the pod, now with labels.
+STEP: removing the label kubernetes.io/e2e-ce774b31-6239-46f0-ab62-af1a86952d71 off the node 195.17.65.231
+STEP: verifying the node doesn't have the label kubernetes.io/e2e-ce774b31-6239-46f0-ab62-af1a86952d71
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 17 23:40:27.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-2865" for this suite.
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
+•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":346,"completed":221,"skipped":4233,"failed":0}
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret
+ should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+[BeforeEach] [sig-storage] Projected secret
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
+STEP: Creating a kubernetes client
+Aug 17 23:40:27.675: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
+STEP: Creating projection with secret that has name projected-secret-test-35e770ed-9bfb-4ddf-a6d1-d22ea29b8253
+STEP: Creating a pod to test consume secrets
+Aug 17 23:40:27.712: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c12c2441-409b-4f9f-8324-e39cf8e1f0a0" in namespace "projected-2183" to be "Succeeded or Failed"
+Aug 17 23:40:27.716: INFO: Pod "pod-projected-secrets-c12c2441-409b-4f9f-8324-e39cf8e1f0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.301323ms
+Aug 17 23:40:29.727: INFO: Pod "pod-projected-secrets-c12c2441-409b-4f9f-8324-e39cf8e1f0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014482367s
+Aug 17 23:40:31.735: INFO: Pod "pod-projected-secrets-c12c2441-409b-4f9f-8324-e39cf8e1f0a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022730525s
+STEP: Saw pod success
+Aug 17 23:40:31.735: INFO: Pod "pod-projected-secrets-c12c2441-409b-4f9f-8324-e39cf8e1f0a0" satisfied condition "Succeeded or Failed"
+Aug 17 23:40:31.739: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-secrets-c12c2441-409b-4f9f-8324-e39cf8e1f0a0 container projected-secret-volume-test:
+STEP: delete the pod
+Aug 17 23:40:31.776: INFO: Waiting for pod pod-projected-secrets-c12c2441-409b-4f9f-8324-e39cf8e1f0a0 to disappear
+Aug 17 23:40:31.779: INFO: Pod pod-projected-secrets-c12c2441-409b-4f9f-8324-e39cf8e1f0a0 no longer exists
+[AfterEach] [sig-storage] Projected secret
+ /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
+Aug 17 23:40:31.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-2183" for this suite.
+•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":222,"skipped":4251,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:40:31.791: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:40:31.823: INFO: Creating deployment "test-recreate-deployment" +Aug 17 23:40:31.831: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Aug 17 23:40:31.839: INFO: deployment "test-recreate-deployment" doesn't have the required revision set +Aug 17 23:40:33.858: INFO: Waiting deployment "test-recreate-deployment" to complete +Aug 17 23:40:33.865: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Aug 17 23:40:33.877: INFO: Updating deployment test-recreate-deployment +Aug 17 23:40:33.877: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 17 23:40:34.029: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-6411 601923bd-c626-4385-9e62-78a6bf0c389b 66250 2 2022-08-17 23:40:31 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-08-17 23:40:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 23:40:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002f29e28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-08-17 23:40:33 +0000 UTC,LastTransitionTime:2022-08-17 23:40:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5b99bd5487" is progressing.,LastUpdateTime:2022-08-17 23:40:34 +0000 UTC,LastTransitionTime:2022-08-17 23:40:31 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Aug 17 23:40:34.033: INFO: New ReplicaSet "test-recreate-deployment-5b99bd5487" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-5b99bd5487 deployment-6411 9c5133e7-6372-431f-9fb2-b3deb062abf1 66249 1 2022-08-17 23:40:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 601923bd-c626-4385-9e62-78a6bf0c389b 0xc0043b2c87 0xc0043b2c88}] [] [{kube-controller-manager Update apps/v1 2022-08-17 23:40:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"601923bd-c626-4385-9e62-78a6bf0c389b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 23:40:34 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5b99bd5487,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0043b2d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 17 23:40:34.033: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Aug 17 23:40:34.033: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-7d659f7dc9 deployment-6411 d527e443-858c-4cb8-bfba-c83c4c2d01cd 66238 2 2022-08-17 23:40:31 +0000 UTC map[name:sample-pod-3 pod-template-hash:7d659f7dc9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 601923bd-c626-4385-9e62-78a6bf0c389b 0xc0043b2d97 0xc0043b2d98}] [] [{kube-controller-manager Update apps/v1 2022-08-17 23:40:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"601923bd-c626-4385-9e62-78a6bf0c389b\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 23:40:33 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 7d659f7dc9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:7d659f7dc9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0043b2e48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 17 23:40:34.037: INFO: Pod "test-recreate-deployment-5b99bd5487-xqx9h" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-5b99bd5487-xqx9h test-recreate-deployment-5b99bd5487- deployment-6411 c9d410b6-88fd-416e-9275-71f654dcb725 66248 0 2022-08-17 23:40:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5b99bd5487 9c5133e7-6372-431f-9fb2-b3deb062abf1 0xc0043b32c7 0xc0043b32c8}] [] [{kube-controller-manager Update v1 2022-08-17 23:40:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c5133e7-6372-431f-9fb2-b3deb062abf1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-17 23:40:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zm9jm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm9jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExe
cute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:40:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:40:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:40:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:40:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:,StartTime:2022-08-17 23:40:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:40:34.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-6411" for this suite. 
+•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":223,"skipped":4262,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:40:34.055: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name configmap-projected-all-test-volume-e8e2bb3f-f1e4-4667-8971-4b3030247d6c +STEP: Creating secret with name secret-projected-all-test-volume-00621c20-ff53-4c37-9acb-7a27793bfc1c +STEP: Creating a pod to test Check all projections for projected volume plugin +Aug 17 23:40:34.095: INFO: Waiting up to 5m0s for pod "projected-volume-b4aee191-c68e-4980-a966-15cd8b596003" in namespace "projected-6025" to be "Succeeded or Failed" +Aug 17 23:40:34.105: INFO: Pod "projected-volume-b4aee191-c68e-4980-a966-15cd8b596003": Phase="Pending", Reason="", readiness=false. Elapsed: 10.432477ms +Aug 17 23:40:36.114: INFO: Pod "projected-volume-b4aee191-c68e-4980-a966-15cd8b596003": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019319338s +Aug 17 23:40:38.119: INFO: Pod "projected-volume-b4aee191-c68e-4980-a966-15cd8b596003": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024185678s +STEP: Saw pod success +Aug 17 23:40:38.119: INFO: Pod "projected-volume-b4aee191-c68e-4980-a966-15cd8b596003" satisfied condition "Succeeded or Failed" +Aug 17 23:40:38.123: INFO: Trying to get logs from node 195.17.65.231 pod projected-volume-b4aee191-c68e-4980-a966-15cd8b596003 container projected-all-volume-test: +STEP: delete the pod +Aug 17 23:40:38.151: INFO: Waiting for pod projected-volume-b4aee191-c68e-4980-a966-15cd8b596003 to disappear +Aug 17 23:40:38.154: INFO: Pod projected-volume-b4aee191-c68e-4980-a966-15cd8b596003 no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:40:38.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6025" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":224,"skipped":4282,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:40:38.168: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] custom resource defaulting for requests and from storage works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:40:38.191: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:40:41.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-5301" for this suite. +•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":346,"completed":225,"skipped":4289,"failed":0} +SSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:40:41.479: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Discovering how many secrets are in namespace by default +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Secret +STEP: Ensuring resource quota status captures secret creation +STEP: Deleting a secret +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:40:58.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-574" for this suite. 
+ +• [SLOW TEST:17.092 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a secret. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":346,"completed":226,"skipped":4298,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:40:58.572: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Create set of pods +Aug 17 23:40:58.605: INFO: created test-pod-1 +Aug 17 23:41:00.614: INFO: running and ready test-pod-1 +Aug 17 23:41:00.621: INFO: created test-pod-2 +Aug 17 23:41:02.631: INFO: running and ready test-pod-2 +Aug 17 23:41:02.640: INFO: created test-pod-3 +Aug 17 23:41:04.661: INFO: running and ready test-pod-3 +STEP: waiting for all 3 pods to be located +STEP: waiting for all pods to be deleted +Aug 17 23:41:04.708: INFO: Pod quantity 3 is different from expected quantity 0 +Aug 17 23:41:05.717: INFO: Pod quantity 3 is different from expected quantity 0 +Aug 17 23:41:06.714: INFO: Pod quantity 1 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:41:07.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-9348" for this suite. 
+ +• [SLOW TEST:9.163 seconds] +[sig-node] Pods +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should delete a collection of pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":227,"skipped":4318,"failed":0} +SSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:41:07.735: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Aug 17 23:41:07.765: INFO: Waiting up to 5m0s for pod "pod-5832c229-4e4a-4d47-83d9-81eea1aee500" in namespace "emptydir-4172" to be "Succeeded or Failed" +Aug 17 23:41:07.771: INFO: Pod "pod-5832c229-4e4a-4d47-83d9-81eea1aee500": Phase="Pending", Reason="", readiness=false. Elapsed: 5.329795ms +Aug 17 23:41:09.780: INFO: Pod "pod-5832c229-4e4a-4d47-83d9-81eea1aee500": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014527159s +Aug 17 23:41:11.788: INFO: Pod "pod-5832c229-4e4a-4d47-83d9-81eea1aee500": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022267701s +STEP: Saw pod success +Aug 17 23:41:11.788: INFO: Pod "pod-5832c229-4e4a-4d47-83d9-81eea1aee500" satisfied condition "Succeeded or Failed" +Aug 17 23:41:11.791: INFO: Trying to get logs from node 195.17.65.231 pod pod-5832c229-4e4a-4d47-83d9-81eea1aee500 container test-container: +STEP: delete the pod +Aug 17 23:41:11.820: INFO: Waiting for pod pod-5832c229-4e4a-4d47-83d9-81eea1aee500 to disappear +Aug 17 23:41:11.823: INFO: Pod pod-5832c229-4e4a-4d47-83d9-81eea1aee500 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:41:11.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4172" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":228,"skipped":4327,"failed":0} +SSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:41:11.839: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ReplicationController +STEP: Ensuring resource quota status captures replication controller creation +STEP: Deleting a ReplicationController +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:41:22.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-4519" for this suite. + +• [SLOW TEST:11.108 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":346,"completed":229,"skipped":4332,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:41:22.947: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should patch a secret [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a secret +STEP: listing secrets in all namespaces to ensure that there are more than zero +STEP: patching the secret +STEP: deleting the secret using a LabelSelector +STEP: listing secrets in all namespaces, searching for label name and value in patch +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:41:23.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1327" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":230,"skipped":4342,"failed":0} +SSS +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:41:23.055: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:41:23.095: INFO: created pod +Aug 17 23:41:23.095: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-3995" to be "Succeeded or Failed" +Aug 17 23:41:23.104: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.688686ms +Aug 17 23:41:25.109: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013820972s +Aug 17 23:41:27.113: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01778148s +STEP: Saw pod success +Aug 17 23:41:27.113: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Aug 17 23:41:57.114: INFO: polling logs +Aug 17 23:41:57.126: INFO: Pod logs: +2022/08/17 23:41:24 OK: Got token +2022/08/17 23:41:24 validating with in-cluster discovery +2022/08/17 23:41:24 OK: got issuer https://kubernetes.default.svc.cluster.local +2022/08/17 23:41:24 Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3995:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1660780283, NotBefore:1660779683, IssuedAt:1660779683, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3995", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"9431610b-64ea-4ad6-accf-9c7057666120"}}} +2022/08/17 23:41:24 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local +2022/08/17 23:41:24 OK: Validated signature on JWT +2022/08/17 23:41:24 OK: Got valid claims from token! 
+2022/08/17 23:41:24 Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-3995:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1660780283, NotBefore:1660779683, IssuedAt:1660779683, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-3995", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"9431610b-64ea-4ad6-accf-9c7057666120"}}} + +Aug 17 23:41:57.126: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:41:57.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-3995" for this suite. + +• [SLOW TEST:34.093 seconds] +[sig-auth] ServiceAccounts +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":231,"skipped":4345,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:41:57.150: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:41:57.176: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:41:57.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-6022" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":346,"completed":232,"skipped":4365,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:41:57.782: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:41:57.834: INFO: The status of Pod busybox-host-aliasesd4ad2558-ddb8-43c1-8745-ba881acf0bbb is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:41:59.840: INFO: The status of Pod busybox-host-aliasesd4ad2558-ddb8-43c1-8745-ba881acf0bbb is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:41:59.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-3402" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":233,"skipped":4402,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:41:59.868: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:42:03.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-4499" for this suite. +•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":234,"skipped":4432,"failed":0} +SSSSSS +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:42:03.945: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Create a Replicaset +STEP: Verify that the required pods have come up. 
+Aug 17 23:42:03.988: INFO: Pod name sample-pod: Found 0 pods out of 1 +Aug 17 23:42:08.994: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Getting /status +Aug 17 23:42:08.996: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status +Aug 17 23:42:09.006: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated +Aug 17 23:42:09.007: INFO: Observed &ReplicaSet event: ADDED +Aug 17 23:42:09.007: INFO: Observed &ReplicaSet event: MODIFIED +Aug 17 23:42:09.007: INFO: Observed &ReplicaSet event: MODIFIED +Aug 17 23:42:09.008: INFO: Observed &ReplicaSet event: MODIFIED +Aug 17 23:42:09.008: INFO: Found replicaset test-rs in namespace replicaset-3523 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Aug 17 23:42:09.008: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status +Aug 17 23:42:09.008: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Aug 17 23:42:09.015: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched +Aug 17 23:42:09.016: INFO: Observed &ReplicaSet event: ADDED +Aug 17 23:42:09.016: INFO: Observed &ReplicaSet event: MODIFIED +Aug 17 23:42:09.016: INFO: Observed &ReplicaSet event: MODIFIED +Aug 17 23:42:09.016: INFO: Observed &ReplicaSet event: MODIFIED +Aug 17 23:42:09.016: INFO: Observed replicaset test-rs in namespace replicaset-3523 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Aug 17 23:42:09.016: INFO: Observed &ReplicaSet event: MODIFIED +Aug 17 23:42:09.016: INFO: Found replicaset test-rs in namespace replicaset-3523 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } +Aug 17 23:42:09.016: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:42:09.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3523" for this suite. 
+ +• [SLOW TEST:5.084 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should validate Replicaset Status endpoints [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":235,"skipped":4438,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl label + should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:42:09.029: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 +STEP: creating the pod +Aug 17 23:42:09.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-3423 create -f -' +Aug 17 23:42:11.401: INFO: stderr: "" +Aug 17 23:42:11.401: INFO: stdout: "pod/pause created\n" +Aug 17 23:42:11.401: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Aug 17 23:42:11.401: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3423" to be "running and ready" +Aug 17 23:42:11.407: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.838236ms +Aug 17 23:42:13.412: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.010648773s +Aug 17 23:42:13.412: INFO: Pod "pause" satisfied condition "running and ready" +Aug 17 23:42:13.412: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: adding the label testing-label with value testing-label-value to a pod +Aug 17 23:42:13.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-3423 label pods pause testing-label=testing-label-value' +Aug 17 23:42:13.486: INFO: stderr: "" +Aug 17 23:42:13.486: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Aug 17 23:42:13.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-3423 get pod pause -L testing-label' +Aug 17 23:42:13.550: INFO: stderr: "" +Aug 17 23:42:13.550: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" +STEP: removing the label testing-label of a pod +Aug 17 23:42:13.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-3423 label pods pause testing-label-' +Aug 17 23:42:13.624: INFO: stderr: "" +Aug 17 23:42:13.624: INFO: stdout: "pod/pause unlabeled\n" +STEP: verifying the pod doesn't have the label testing-label +Aug 17 23:42:13.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-3423 get pod pause -L testing-label' +Aug 17 23:42:13.693: INFO: stderr: "" +Aug 17 23:42:13.693: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" +[AfterEach] Kubectl label + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1339 +STEP: using delete to clean up resources +Aug 17 23:42:13.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-3423 delete --grace-period=0 --force -f -' +Aug 17 23:42:13.767: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 17 23:42:13.767: INFO: stdout: "pod \"pause\" force deleted\n" +Aug 17 23:42:13.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-3423 get rc,svc -l name=pause --no-headers' +Aug 17 23:42:13.843: INFO: stderr: "No resources found in kubectl-3423 namespace.\n" +Aug 17 23:42:13.843: INFO: stdout: "" +Aug 17 23:42:13.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-3423 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Aug 17 23:42:13.909: INFO: stderr: "" +Aug 17 23:42:13.909: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:42:13.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3423" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":346,"completed":236,"skipped":4477,"failed":0} +SSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:42:13.924: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-1946 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a new StatefulSet +Aug 17 23:42:13.974: INFO: Found 0 stateful pods, waiting for 3 +Aug 17 23:42:23.980: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 23:42:23.980: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 23:42:23.980: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 +Aug 17 23:42:24.011: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Not applying an update when the partition is greater than the number of replicas +STEP: Performing a canary update +Aug 17 23:42:34.050: INFO: Updating stateful set ss2 +Aug 17 23:42:34.057: INFO: Waiting for Pod statefulset-1946/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +STEP: Restoring Pods to the correct revision when they are deleted +Aug 17 23:42:44.127: INFO: Found 1 stateful pods, waiting for 3 +Aug 17 23:42:54.134: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 23:42:54.134: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Aug 17 23:42:54.134: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update +Aug 17 23:42:54.163: INFO: Updating stateful set ss2 +Aug 17 23:42:54.174: INFO: Waiting for Pod statefulset-1946/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +Aug 17 23:43:04.206: INFO: Updating stateful set ss2 +Aug 17 23:43:04.224: INFO: Waiting for StatefulSet statefulset-1946/ss2 to complete update +Aug 17 23:43:04.224: INFO: Waiting for Pod statefulset-1946/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb +[AfterEach] Basic StatefulSet functionality 
[StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 17 23:43:14.239: INFO: Deleting all statefulset in ns statefulset-1946 +Aug 17 23:43:14.243: INFO: Scaling statefulset ss2 to 0 +Aug 17 23:43:24.266: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 17 23:43:24.269: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:43:24.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-1946" for this suite. + +• [SLOW TEST:70.387 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":237,"skipped":4482,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:24.311: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should adopt matching pods on creation and release no longer matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Given a Pod with a 'name' label pod-adoption-release is created +Aug 17 23:43:24.355: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:43:26.361: INFO: The status of Pod pod-adoption-release is Running (Ready = true) +STEP: When a replicaset with a matching selector is created +STEP: Then the orphan pod is adopted +STEP: When the matched label of one of its pods change +Aug 17 23:43:27.390: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:43:28.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3154" for this suite. 
+•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":238,"skipped":4497,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:28.428: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name projected-configmap-test-volume-6c19d65e-2817-4853-849f-b67f7eba4b89 +STEP: Creating a pod to test consume configMaps +Aug 17 23:43:28.464: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-81739c6b-a8c7-4ba5-b8f4-a224161ec02e" in namespace "projected-2965" to be "Succeeded or Failed" +Aug 17 23:43:28.467: INFO: Pod "pod-projected-configmaps-81739c6b-a8c7-4ba5-b8f4-a224161ec02e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.317443ms +Aug 17 23:43:30.472: INFO: Pod "pod-projected-configmaps-81739c6b-a8c7-4ba5-b8f4-a224161ec02e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008316497s +Aug 17 23:43:32.477: INFO: Pod "pod-projected-configmaps-81739c6b-a8c7-4ba5-b8f4-a224161ec02e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013605405s +STEP: Saw pod success +Aug 17 23:43:32.477: INFO: Pod "pod-projected-configmaps-81739c6b-a8c7-4ba5-b8f4-a224161ec02e" satisfied condition "Succeeded or Failed" +Aug 17 23:43:32.481: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-configmaps-81739c6b-a8c7-4ba5-b8f4-a224161ec02e container projected-configmap-volume-test: +STEP: delete the pod +Aug 17 23:43:32.517: INFO: Waiting for pod pod-projected-configmaps-81739c6b-a8c7-4ba5-b8f4-a224161ec02e to disappear +Aug 17 23:43:32.520: INFO: Pod pod-projected-configmaps-81739c6b-a8c7-4ba5-b8f4-a224161ec02e no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:43:32.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2965" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":239,"skipped":4564,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:32.534: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 23:43:32.572: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27e74a48-13ea-4d12-88df-4de259795459" in namespace "projected-9037" to be "Succeeded or Failed" +Aug 17 23:43:32.575: INFO: Pod "downwardapi-volume-27e74a48-13ea-4d12-88df-4de259795459": Phase="Pending", Reason="", readiness=false. Elapsed: 3.31032ms +Aug 17 23:43:34.581: INFO: Pod "downwardapi-volume-27e74a48-13ea-4d12-88df-4de259795459": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008952166s +Aug 17 23:43:36.585: INFO: Pod "downwardapi-volume-27e74a48-13ea-4d12-88df-4de259795459": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012704746s +STEP: Saw pod success +Aug 17 23:43:36.585: INFO: Pod "downwardapi-volume-27e74a48-13ea-4d12-88df-4de259795459" satisfied condition "Succeeded or Failed" +Aug 17 23:43:36.589: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-27e74a48-13ea-4d12-88df-4de259795459 container client-container: +STEP: delete the pod +Aug 17 23:43:36.610: INFO: Waiting for pod downwardapi-volume-27e74a48-13ea-4d12-88df-4de259795459 to disappear +Aug 17 23:43:36.613: INFO: Pod downwardapi-volume-27e74a48-13ea-4d12-88df-4de259795459 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:43:36.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9037" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":240,"skipped":4587,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:36.628: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 +STEP: create the container to handle the HTTPGet hook request. +Aug 17 23:43:36.673: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:43:38.680: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the pod with lifecycle hook +Aug 17 23:43:38.696: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:43:40.703: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Aug 17 23:43:40.740: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Aug 17 23:43:40.744: INFO: Pod pod-with-poststart-http-hook still exists +Aug 17 23:43:42.745: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Aug 17 23:43:42.750: INFO: Pod pod-with-poststart-http-hook still exists +Aug 17 23:43:44.744: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Aug 17 23:43:44.750: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:43:44.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-936" for this suite. 
+ +• [SLOW TEST:8.137 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":241,"skipped":4604,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:44.765: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should patch a Namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a Namespace +STEP: patching the Namespace +STEP: get the Namespace and ensuring it has the label +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:43:44.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-8114" for this suite. +STEP: Destroying namespace "nspatchtest-7d9aeaf9-f208-4349-be00-a2689fbd6b6a-5904" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":242,"skipped":4610,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:44.845: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name projected-configmap-test-volume-c45effe1-cb32-4a20-bf85-57ef5a4f4472 +STEP: Creating a pod to test consume configMaps +Aug 17 23:43:44.889: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-126bbb49-6655-4fd9-8112-73b2710db5f8" in namespace "projected-9455" to be "Succeeded or Failed" +Aug 17 23:43:44.903: INFO: Pod "pod-projected-configmaps-126bbb49-6655-4fd9-8112-73b2710db5f8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.082715ms +Aug 17 23:43:46.909: INFO: Pod "pod-projected-configmaps-126bbb49-6655-4fd9-8112-73b2710db5f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019882214s +Aug 17 23:43:48.915: INFO: Pod "pod-projected-configmaps-126bbb49-6655-4fd9-8112-73b2710db5f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025907802s +STEP: Saw pod success +Aug 17 23:43:48.915: INFO: Pod "pod-projected-configmaps-126bbb49-6655-4fd9-8112-73b2710db5f8" satisfied condition "Succeeded or Failed" +Aug 17 23:43:48.919: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-configmaps-126bbb49-6655-4fd9-8112-73b2710db5f8 container agnhost-container: +STEP: delete the pod +Aug 17 23:43:48.942: INFO: Waiting for pod pod-projected-configmaps-126bbb49-6655-4fd9-8112-73b2710db5f8 to disappear +Aug 17 23:43:48.945: INFO: Pod pod-projected-configmaps-126bbb49-6655-4fd9-8112-73b2710db5f8 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:43:48.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9455" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":243,"skipped":4624,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:48.962: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +Aug 17 23:43:50.084: INFO: The status of Pod kube-controller-manager-195.17.32.244 is Running (Ready = true) +Aug 17 23:43:50.159: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:43:50.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-7913" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":244,"skipped":4658,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:50.176: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: Orphaning one of the Job's Pods +Aug 17 23:43:52.749: INFO: Successfully updated pod "adopt-release-8c6kr" +STEP: Checking that the Job readopts the Pod +Aug 17 23:43:52.749: INFO: Waiting up to 15m0s for pod "adopt-release-8c6kr" in namespace "job-9940" to be "adopted" +Aug 17 23:43:52.753: INFO: Pod "adopt-release-8c6kr": Phase="Running", Reason="", readiness=true. Elapsed: 3.939768ms +Aug 17 23:43:54.757: INFO: Pod "adopt-release-8c6kr": Phase="Running", Reason="", readiness=true. Elapsed: 2.008305992s +Aug 17 23:43:54.757: INFO: Pod "adopt-release-8c6kr" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod +Aug 17 23:43:55.271: INFO: Successfully updated pod "adopt-release-8c6kr" +STEP: Checking that the Job releases the Pod +Aug 17 23:43:55.271: INFO: Waiting up to 15m0s for pod "adopt-release-8c6kr" in namespace "job-9940" to be "released" +Aug 17 23:43:55.285: INFO: Pod "adopt-release-8c6kr": Phase="Running", Reason="", readiness=true. Elapsed: 13.487585ms +Aug 17 23:43:55.285: INFO: Pod "adopt-release-8c6kr" satisfied condition "released" +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:43:55.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-9940" for this suite. 
+ +• [SLOW TEST:5.130 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should adopt matching orphans and release non-matching pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":245,"skipped":4670,"failed":0} +SSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:55.307: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if v1 is in available api versions [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: validating api versions +Aug 17 23:43:55.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7985 api-versions' +Aug 17 23:43:55.392: INFO: stderr: "" +Aug 17 23:43:55.392: INFO: stdout: "acme.cert-manager.io/v1\naddons.cluster.x-k8s.io/v1alpha3\naddons.cluster.x-k8s.io/v1alpha4\naddons.cluster.x-k8s.io/v1beta1\nadmissionregistration.k8s.io/v1\nanywhere.eks.amazonaws.com/v1alpha1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\nbootstrap.cluster.x-k8s.io/v1alpha3\nbootstrap.cluster.x-k8s.io/v1alpha4\nbootstrap.cluster.x-k8s.io/v1beta1\ncert-manager.io/v1\ncertificates.k8s.io/v1\ncilium.io/v2\ncilium.io/v2alpha1\ncluster.x-k8s.io/v1alpha3\ncluster.x-k8s.io/v1alpha4\ncluster.x-k8s.io/v1beta1\nclusterctl.cluster.x-k8s.io/v1alpha3\ncontrolplane.cluster.x-k8s.io/v1alpha3\ncontrolplane.cluster.x-k8s.io/v1alpha4\ncontrolplane.cluster.x-k8s.io/v1beta1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\ndistro.eks.amazonaws.com/v1alpha1\netcdcluster.cluster.x-k8s.io/v1alpha3\netcdcluster.cluster.x-k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\ninfrastructure.cluster.x-k8s.io/v1alpha3\ninfrastructure.cluster.x-k8s.io/v1alpha4\ninfrastructure.cluster.x-k8s.io/v1beta1\nipam.cluster.x-k8s.io/v1alpha1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npackages.eks.amazonaws.com/v1alpha1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nruntime.cluster.x-k8s.io/v1alpha1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 
23:43:55.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7985" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":346,"completed":246,"skipped":4676,"failed":0} +SSS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:55.404: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check is all data is printed [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:43:55.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-5124 version' +Aug 17 23:43:55.483: INFO: stderr: "" +Aug 17 23:43:55.483: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.7\", GitCommit:\"42c05a547468804b2053ecf60a3bd15560362fc2\", GitTreeState:\"clean\", BuildDate:\"2022-05-24T12:30:55Z\", GoVersion:\"go1.17.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.7-eks-7709a84\", GitCommit:\"7709a84959a3677cf457b60e86014e22feb6ed20\", GitTreeState:\"archive\", BuildDate:\"2022-05-24T12:23:29Z\", GoVersion:\"go1.17.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:43:55.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5124" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":346,"completed":247,"skipped":4679,"failed":0} +SSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:43:55.497: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating service in namespace services-4531 +Aug 17 23:43:55.535: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:43:57.542: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Aug 17 23:43:57.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-4531 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Aug 17 23:43:57.678: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Aug 17 23:43:57.678: INFO: stdout: "iptables" +Aug 17 23:43:57.678: INFO: proxyMode: iptables +Aug 17 23:43:57.692: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Aug 17 23:43:57.696: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-clusterip-timeout in namespace services-4531 +STEP: creating replication controller affinity-clusterip-timeout in namespace services-4531 +I0817 23:43:57.730110 20 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-4531, replica count: 3 +I0817 23:44:00.781400 20 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 17 23:44:00.793: INFO: Creating new exec pod +Aug 17 23:44:03.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-4531 exec execpod-affinityx97wr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' +Aug 17 23:44:03.970: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" +Aug 17 23:44:03.970: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:44:03.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-4531 exec execpod-affinityx97wr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.97.77.173 80' +Aug 17 23:44:04.105: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 
10.97.77.173 80\nConnection to 10.97.77.173 80 port [tcp/http] succeeded!\n" +Aug 17 23:44:04.105: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 17 23:44:04.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-4531 exec execpod-affinityx97wr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.77.173:80/ ; done' +Aug 17 23:44:04.308: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n" +Aug 17 23:44:04.308: INFO: stdout: "\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77\naffinity-clusterip-timeout-2mm77" +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: 
INFO: Received response from host: affinity-clusterip-timeout-2mm77 +Aug 17 23:44:04.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-4531 exec execpod-affinityx97wr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.77.173:80/' +Aug 17 23:44:04.445: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n" +Aug 17 23:44:04.445: INFO: stdout: "affinity-clusterip-timeout-2mm77" +Aug 17 23:44:24.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-4531 exec execpod-affinityx97wr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.97.77.173:80/' +Aug 17 23:44:24.583: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.97.77.173:80/\n" +Aug 17 23:44:24.583: INFO: stdout: "affinity-clusterip-timeout-s5br9" +Aug 17 23:44:24.583: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-4531, will wait for the garbage collector to delete the pods +Aug 17 23:44:24.699: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 9.16249ms +Aug 17 23:44:24.800: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.793037ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:44:27.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-4531" for this suite. +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:31.564 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":248,"skipped":4684,"failed":0} +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:44:27.061: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 +STEP: create the container to handle the HTTPGet hook request. 
+Aug 17 23:44:27.106: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:44:29.114: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the pod with lifecycle hook +Aug 17 23:44:29.129: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:44:31.137: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) +STEP: delete the pod with lifecycle hook +Aug 17 23:44:31.150: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Aug 17 23:44:31.155: INFO: Pod pod-with-prestop-http-hook still exists +Aug 17 23:44:33.155: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Aug 17 23:44:33.161: INFO: Pod pod-with-prestop-http-hook still exists +Aug 17 23:44:35.155: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Aug 17 23:44:35.164: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [sig-node] Container Lifecycle Hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:44:35.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-4294" for this suite. + +• [SLOW TEST:8.129 seconds] +[sig-node] Container Lifecycle Hook +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44 + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":249,"skipped":4706,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:44:35.191: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Performing setup for networking test in namespace pod-network-test-8577 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Aug 17 23:44:35.220: INFO: 
Waiting up to 10m0s for all (but 0) nodes to be schedulable +Aug 17 23:44:35.251: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:44:37.258: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:44:39.256: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:44:41.258: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:44:43.265: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:44:45.259: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:44:47.261: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:44:49.259: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:44:51.257: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:44:53.260: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:44:55.259: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:44:57.258: INFO: The status of Pod netserver-0 is Running (Ready = true) +Aug 17 23:44:57.266: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Aug 17 23:44:59.307: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Aug 17 23:44:59.307: INFO: Going to poll 192.168.2.98 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Aug 17 23:44:59.311: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.2.98 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8577 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:44:59.311: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:44:59.312: INFO: ExecWithOptions: Clientset creation +Aug 17 23:44:59.312: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-8577/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.2.98+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:45:00.380: INFO: Found all 1 expected endpoints: [netserver-0] +Aug 17 23:45:00.380: INFO: Going to poll 192.168.1.38 on port 8081 at least 0 times, with a maximum of 34 tries before failing +Aug 17 23:45:00.385: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.1.38 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8577 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:45:00.385: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:45:00.386: INFO: ExecWithOptions: Clientset creation +Aug 17 23:45:00.386: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-8577/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.1.38+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:45:01.455: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:45:01.455: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-8577" for this suite. + +• [SLOW TEST:26.281 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":250,"skipped":4755,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:45:01.473: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:45:01.498: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Aug 17 23:45:01.509: INFO: Pod name sample-pod: Found 0 pods out of 1 +Aug 17 23:45:06.513: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Aug 17 23:45:06.513: INFO: Creating deployment "test-rolling-update-deployment" +Aug 17 23:45:06.521: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Aug 17 23:45:06.526: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Aug 17 23:45:08.537: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Aug 17 23:45:08.539: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 17 23:45:08.549: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7323 c02c3f7c-0c79-4ae8-9deb-ad6bf86ec85c 70578 1 2022-08-17 23:45:06 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-08-17 23:45:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 23:45:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006e1e058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-08-17 23:45:06 +0000 UTC,LastTransitionTime:2022-08-17 23:45:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-796dbc4547" has successfully progressed.,LastUpdateTime:2022-08-17 23:45:07 +0000 UTC,LastTransitionTime:2022-08-17 23:45:06 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Aug 17 23:45:08.552: INFO: New ReplicaSet "test-rolling-update-deployment-796dbc4547" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-796dbc4547 deployment-7323 f838c979-595c-48dd-9f50-36f05ec2cb5b 70564 1 2022-08-17 23:45:06 +0000 UTC map[name:sample-pod pod-template-hash:796dbc4547] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c02c3f7c-0c79-4ae8-9deb-ad6bf86ec85c 0xc006e1e517 0xc006e1e518}] [] [{kube-controller-manager Update apps/v1 2022-08-17 23:45:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c02c3f7c-0c79-4ae8-9deb-ad6bf86ec85c\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 23:45:07 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 796dbc4547,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:796dbc4547] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc006e1e5c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Aug 17 23:45:08.552: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Aug 17 23:45:08.552: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7323 c65d3b04-b2ef-4c79-905c-1c388e80bae9 70576 2 2022-08-17 23:45:01 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c02c3f7c-0c79-4ae8-9deb-ad6bf86ec85c 0xc006e1e3e7 0xc006e1e3e8}] [] [{e2e.test Update apps/v1 2022-08-17 23:45:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-17 23:45:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c02c3f7c-0c79-4ae8-9deb-ad6bf86ec85c\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-08-17 23:45:07 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006e1e4a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 17 23:45:08.555: INFO: Pod "test-rolling-update-deployment-796dbc4547-nmbm7" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-796dbc4547-nmbm7 test-rolling-update-deployment-796dbc4547- deployment-7323 ea668b6c-320e-4f7b-9682-a7916830ab7d 70563 0 2022-08-17 23:45:06 +0000 UTC map[name:sample-pod pod-template-hash:796dbc4547] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-796dbc4547 f838c979-595c-48dd-9f50-36f05ec2cb5b 0xc006e1ea57 0xc006e1ea58}] [] [{kube-controller-manager Update v1 2022-08-17 23:45:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f838c979-595c-48dd-9f50-36f05ec2cb5b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-17 23:45:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.120\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8tsgn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8tsgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachab
le,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:45:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:45:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:45:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-17 23:45:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:192.168.1.120,StartTime:2022-08-17 23:45:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-17 23:45:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://ae573b0dd3a0e4096f564f08fa0984205a3d422598ba034bc0c84a2a75f967ee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:45:08.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7323" for this suite. 
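+
+Reassembled from the object dump above, the Deployment under test reduces to roughly this manifest (defaulted fields elided):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: test-rolling-update-deployment
+  labels:
+    name: sample-pod
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      name: sample-pod
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxSurge: 25%
+      maxUnavailable: 25%
+  template:
+    metadata:
+      labels:
+        name: sample-pod
+    spec:
+      containers:
+      - name: agnhost
+        image: k8s.gcr.io/e2e-test-images/agnhost:2.33
+```
+Because this template differs from the adopted `test-rolling-update-controller` ReplicaSet, the rolling update scales a new ReplicaSet up and the old one down to zero replicas, exactly as the dumps above record.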
+ +• [SLOW TEST:7.094 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":251,"skipped":4821,"failed":0} +SSSSS +------------------------------ +[sig-apps] DisruptionController + should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:45:08.568: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating the pdb +STEP: Waiting for the pdb to be processed +STEP: updating the pdb +STEP: Waiting for the pdb to be processed +STEP: patching the pdb +STEP: Waiting for the pdb to be processed +STEP: Waiting for the pdb to be deleted +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:45:14.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-3353" for this suite. 
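+
+The object created, updated, and patched above is a PodDisruptionBudget; a minimal example of the kind (name, selector, and threshold are illustrative):
+
+```yaml
+apiVersion: policy/v1   # policy/v1 is listed in the api-versions output earlier in this log
+kind: PodDisruptionBudget
+metadata:
+  name: sample-pdb
+spec:
+  minAvailable: 1
+  selector:
+    matchLabels:
+      app: sample
+```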
+ +• [SLOW TEST:6.118 seconds] +[sig-apps] DisruptionController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should create a PodDisruptionBudget [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":252,"skipped":4826,"failed":0} +SSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:45:14.686: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1573 +[It] should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 +Aug 17 23:45:14.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6244 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Aug 17 23:45:14.812: INFO: stderr: "" +Aug 17 23:45:14.812: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running +STEP: verifying the pod e2e-test-httpd-pod was created +Aug 17 23:45:19.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6244 get pod e2e-test-httpd-pod -o json' +Aug 17 23:45:19.931: INFO: stderr: "" +Aug 17 23:45:19.931: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2022-08-17T23:45:14Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6244\",\n \"resourceVersion\": \"70744\",\n \"uid\": \"c0eaa910-70e5-433c-9de5-4261f43d66df\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-wrww6\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"195.17.65.231\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n 
\"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-wrww6\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-08-17T23:45:14Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-08-17T23:45:16Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-08-17T23:45:16Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-08-17T23:45:14Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://cb07726dde5983902fcba8dab3dbaedc06daa8c78047d93ce27752002dda0796\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-08-17T23:45:15Z\"\n }\n }\n }\n ],\n \"hostIP\": \"195.17.65.231\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.1.90\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.1.90\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-08-17T23:45:14Z\"\n }\n}\n" +STEP: replace the image in the pod +Aug 17 23:45:19.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6244 replace -f -' +Aug 17 23:45:20.195: INFO: stderr: "" +Aug 17 23:45:20.195: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2 +[AfterEach] Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 +Aug 17 23:45:20.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-6244 delete pods e2e-test-httpd-pod' +Aug 17 23:45:21.690: INFO: stderr: "" +Aug 17 23:45:21.690: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:45:21.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-6244" for this suite. 
+ +• [SLOW TEST:7.018 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl replace + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1570 + should update a single-container pod's image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":346,"completed":253,"skipped":4831,"failed":0} +SSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:45:21.706: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:45:21.741: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f631c499-35db-4da6-9f8a-128531eeebb9" in namespace "security-context-test-1468" to be "Succeeded or Failed" +Aug 17 23:45:21.745: INFO: Pod "busybox-readonly-false-f631c499-35db-4da6-9f8a-128531eeebb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124348ms +Aug 17 23:45:23.750: INFO: Pod "busybox-readonly-false-f631c499-35db-4da6-9f8a-128531eeebb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008631162s +Aug 17 23:45:25.758: INFO: Pod "busybox-readonly-false-f631c499-35db-4da6-9f8a-128531eeebb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017101166s +Aug 17 23:45:25.758: INFO: Pod "busybox-readonly-false-f631c499-35db-4da6-9f8a-128531eeebb9" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:45:25.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-1468" for this suite. 
+•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":254,"skipped":4834,"failed":0} +SSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:45:25.773: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename endpointslice +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:45:25.809: INFO: Endpoints addresses: [195.17.131.206 195.17.32.244] , ports: [6443] +Aug 17 23:45:25.809: INFO: EndpointSlices addresses: [195.17.131.206 195.17.32.244] , ports: [6443] +[AfterEach] [sig-network] EndpointSlice + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:45:25.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslice-1752" for this suite. 
+•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":255,"skipped":4837,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:45:25.820: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:45:25.859: INFO: Pod name sample-pod: Found 0 pods out of 1 +Aug 17 23:45:30.863: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +STEP: Scaling up "test-rs" replicaset +Aug 17 23:45:30.873: INFO: Updating replica set "test-rs" +STEP: patching the ReplicaSet +Aug 17 23:45:30.886: INFO: observed ReplicaSet test-rs in namespace replicaset-5191 with ReadyReplicas 1, AvailableReplicas 1 +Aug 17 23:45:30.903: INFO: observed ReplicaSet test-rs in namespace replicaset-5191 with ReadyReplicas 1, AvailableReplicas 1 +Aug 17 23:45:30.935: INFO: observed ReplicaSet test-rs in namespace replicaset-5191 with ReadyReplicas 1, AvailableReplicas 1 +Aug 17 23:45:30.950: INFO: observed ReplicaSet test-rs in namespace replicaset-5191 with ReadyReplicas 1, AvailableReplicas 1 +Aug 17 23:45:32.168: INFO: observed ReplicaSet test-rs in namespace replicaset-5191 with ReadyReplicas 2, AvailableReplicas 2 +Aug 17 23:45:32.721: INFO: observed Replicaset test-rs in namespace replicaset-5191 with ReadyReplicas 3 found true +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:45:32.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-5191" for this suite. 
+ +• [SLOW TEST:6.921 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Replace and Patch tests [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":256,"skipped":4855,"failed":0} +SSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:45:32.741: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod liveness-85958012-2854-4e6e-ad4f-cd9f86087b98 in namespace container-probe-7143 +Aug 17 23:45:34.797: INFO: Started pod liveness-85958012-2854-4e6e-ad4f-cd9f86087b98 in namespace container-probe-7143 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 17 23:45:34.800: INFO: Initial restart count of pod liveness-85958012-2854-4e6e-ad4f-cd9f86087b98 is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:49:35.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7143" for this suite. 
+ +• [SLOW TEST:242.943 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":257,"skipped":4864,"failed":0} +S +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:49:35.685: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name projected-configmap-test-volume-7c1204f5-78be-4351-b810-68db9cc27069 +STEP: Creating a pod to test consume configMaps +Aug 17 23:49:35.730: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d77c3447-8563-432d-8545-1e9e053a6f87" in namespace "projected-3812" to be "Succeeded or Failed" +Aug 17 23:49:35.734: INFO: Pod "pod-projected-configmaps-d77c3447-8563-432d-8545-1e9e053a6f87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35342ms +Aug 17 23:49:37.743: INFO: Pod "pod-projected-configmaps-d77c3447-8563-432d-8545-1e9e053a6f87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012681475s +Aug 17 23:49:39.749: INFO: Pod "pod-projected-configmaps-d77c3447-8563-432d-8545-1e9e053a6f87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018499778s +STEP: Saw pod success +Aug 17 23:49:39.749: INFO: Pod "pod-projected-configmaps-d77c3447-8563-432d-8545-1e9e053a6f87" satisfied condition "Succeeded or Failed" +Aug 17 23:49:39.752: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-configmaps-d77c3447-8563-432d-8545-1e9e053a6f87 container agnhost-container: +STEP: delete the pod +Aug 17 23:49:39.783: INFO: Waiting for pod pod-projected-configmaps-d77c3447-8563-432d-8545-1e9e053a6f87 to disappear +Aug 17 23:49:39.786: INFO: Pod pod-projected-configmaps-d77c3447-8563-432d-8545-1e9e053a6f87 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:49:39.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3812" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":258,"skipped":4865,"failed":0} +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:49:39.804: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0666 on node default medium +Aug 17 23:49:39.846: INFO: Waiting up to 5m0s for pod "pod-55926913-5a92-4b52-8698-bdcbc823f177" in namespace "emptydir-4100" to be "Succeeded or Failed" +Aug 17 23:49:39.856: INFO: Pod "pod-55926913-5a92-4b52-8698-bdcbc823f177": Phase="Pending", Reason="", readiness=false. Elapsed: 9.498289ms +Aug 17 23:49:41.862: INFO: Pod "pod-55926913-5a92-4b52-8698-bdcbc823f177": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016146339s +Aug 17 23:49:43.868: INFO: Pod "pod-55926913-5a92-4b52-8698-bdcbc823f177": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021397748s +STEP: Saw pod success +Aug 17 23:49:43.868: INFO: Pod "pod-55926913-5a92-4b52-8698-bdcbc823f177" satisfied condition "Succeeded or Failed" +Aug 17 23:49:43.871: INFO: Trying to get logs from node 195.17.65.231 pod pod-55926913-5a92-4b52-8698-bdcbc823f177 container test-container: +STEP: delete the pod +Aug 17 23:49:43.907: INFO: Waiting for pod pod-55926913-5a92-4b52-8698-bdcbc823f177 to disappear +Aug 17 23:49:43.916: INFO: Pod pod-55926913-5a92-4b52-8698-bdcbc823f177 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:49:43.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4100" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":259,"skipped":4866,"failed":0} + +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:49:43.931: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 23:49:43.971: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fcfc806-17fb-45a8-825f-83c773cb2c71" in namespace "downward-api-6382" to be "Succeeded or Failed" +Aug 17 23:49:43.976: INFO: Pod "downwardapi-volume-2fcfc806-17fb-45a8-825f-83c773cb2c71": Phase="Pending", Reason="", readiness=false. Elapsed: 3.020074ms +Aug 17 23:49:45.982: INFO: Pod "downwardapi-volume-2fcfc806-17fb-45a8-825f-83c773cb2c71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008968917s +Aug 17 23:49:47.990: INFO: Pod "downwardapi-volume-2fcfc806-17fb-45a8-825f-83c773cb2c71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016916138s +STEP: Saw pod success +Aug 17 23:49:47.990: INFO: Pod "downwardapi-volume-2fcfc806-17fb-45a8-825f-83c773cb2c71" satisfied condition "Succeeded or Failed" +Aug 17 23:49:47.993: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-2fcfc806-17fb-45a8-825f-83c773cb2c71 container client-container: +STEP: delete the pod +Aug 17 23:49:48.026: INFO: Waiting for pod downwardapi-volume-2fcfc806-17fb-45a8-825f-83c773cb2c71 to disappear +Aug 17 23:49:48.030: INFO: Pod downwardapi-volume-2fcfc806-17fb-45a8-825f-83c773cb2c71 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:49:48.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6382" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":260,"skipped":4866,"failed":0} +SS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:49:48.044: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Aug 17 23:49:48.069: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:49:56.239: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:50:23.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1906" for this suite. 
+ +• [SLOW TEST:35.688 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":261,"skipped":4868,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSliceMirroring + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:50:23.732: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename endpointslicemirroring +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 +[It] should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: mirroring a new custom Endpoint +Aug 17 23:50:23.792: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 +STEP: mirroring an update to a custom Endpoint +Aug 17 23:50:25.811: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 +STEP: mirroring deletion of a custom Endpoint +Aug 17 23:50:27.828: INFO: Waiting for 0 EndpointSlices to exist, got 1 +[AfterEach] [sig-network] EndpointSliceMirroring + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:50:29.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "endpointslicemirroring-8461" for this suite. 
+ +• [SLOW TEST:6.118 seconds] +[sig-network] EndpointSliceMirroring +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should mirror a custom Endpoints resource through create update and delete [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":262,"skipped":4886,"failed":0} +SSSSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:50:29.851: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename job +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a job +STEP: Ensuring active pods == parallelism +STEP: delete a job +STEP: deleting Job.batch foo in namespace job-5945, will wait for the garbage collector to delete the pods +Aug 17 23:50:31.949: INFO: Deleting Job.batch foo took: 7.065943ms +Aug 17 23:50:32.050: INFO: Terminating Job.batch foo pods took: 100.925679ms +STEP: Ensuring job was deleted +[AfterEach] [sig-apps] Job + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:51:04.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "job-5945" for this suite. 
+ +• [SLOW TEST:34.720 seconds] +[sig-apps] Job +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should delete a job [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":263,"skipped":4891,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:51:04.571: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Performing setup for networking test in namespace pod-network-test-9447 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Aug 17 23:51:04.600: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Aug 17 23:51:04.633: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:51:06.638: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:08.638: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:10.638: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:12.640: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:14.638: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:16.642: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:18.641: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:20.644: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:22.643: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:24.641: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:26.643: INFO: The status of Pod netserver-0 is Running (Ready = true) +Aug 17 23:51:26.651: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Aug 17 23:51:28.691: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Aug 17 23:51:28.691: INFO: Going to poll 192.168.2.110 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Aug 17 23:51:28.694: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.2.110:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9447 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:51:28.694: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:51:28.695: INFO: 
ExecWithOptions: Clientset creation +Aug 17 23:51:28.695: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-9447/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.2.110%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:51:28.784: INFO: Found all 1 expected endpoints: [netserver-0] +Aug 17 23:51:28.784: INFO: Going to poll 192.168.1.234 on port 8083 at least 0 times, with a maximum of 34 tries before failing +Aug 17 23:51:28.789: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.1.234:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9447 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:51:28.789: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:51:28.790: INFO: ExecWithOptions: Clientset creation +Aug 17 23:51:28.790: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-9447/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F192.168.1.234%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:51:28.861: INFO: Found all 1 expected endpoints: [netserver-1] +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:51:28.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-9447" for this suite. 
+ +• [SLOW TEST:24.305 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":264,"skipped":4904,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:51:28.879: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Performing setup for networking test in namespace pod-network-test-4239 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Aug 17 23:51:28.911: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Aug 17 23:51:28.955: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:51:30.960: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:32.962: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:34.960: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:36.963: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:38.960: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:40.959: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:42.961: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:44.960: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:46.959: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:48.960: INFO: The status of Pod netserver-0 is Running (Ready = false) +Aug 17 23:51:50.961: INFO: The status of Pod netserver-0 is Running (Ready = true) +Aug 17 23:51:50.968: INFO: The status of Pod netserver-1 is Running (Ready = true) +STEP: Creating test pods +Aug 17 23:51:52.988: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 +Aug 17 23:51:52.988: INFO: Breadth first check of 192.168.2.202 on host 195.17.131.205... 
+Aug 17 23:51:52.992: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.187:9080/dial?request=hostname&protocol=udp&host=192.168.2.202&port=8081&tries=1'] Namespace:pod-network-test-4239 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:51:52.992: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:51:52.993: INFO: ExecWithOptions: Clientset creation +Aug 17 23:51:52.993: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4239/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.1.187%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.2.202%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:51:53.062: INFO: Waiting for responses: map[] +Aug 17 23:51:53.062: INFO: reached 192.168.2.202 after 0/1 tries +Aug 17 23:51:53.062: INFO: Breadth first check of 192.168.1.125 on host 195.17.65.231... +Aug 17 23:51:53.066: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.1.187:9080/dial?request=hostname&protocol=udp&host=192.168.1.125&port=8081&tries=1'] Namespace:pod-network-test-4239 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 17 23:51:53.066: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 17 23:51:53.067: INFO: ExecWithOptions: Clientset creation +Aug 17 23:51:53.067: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/pod-network-test-4239/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.1.187%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.1.125%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) +Aug 17 23:51:53.137: INFO: Waiting for responses: map[] +Aug 17 23:51:53.137: INFO: reached 192.168.1.125 after 0/1 tries +Aug 17 23:51:53.137: INFO: Going to retry 0 out of 2 pods.... +[AfterEach] [sig-network] Networking + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:51:53.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-4239" for this suite. 
+ +• [SLOW TEST:24.273 seconds] +[sig-network] Networking +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 + Granular Checks: Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":265,"skipped":4971,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:51:53.152: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename aggregator +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 +Aug 17 23:51:53.180: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Registering the sample API server. 
+Aug 17 23:51:53.941: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created +Aug 17 23:51:55.999: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7b4b967944\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 23:51:58.005: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7b4b967944\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 23:52:00.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7b4b967944\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 23:52:02.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), LastTransitionTime:time.Date(2022, time.August, 17, 23, 51, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7b4b967944\" is progressing."}}, CollisionCount:(*int32)(nil)} +Aug 17 23:52:04.308: INFO: Waited 286.317268ms for the sample-apiserver to be ready to handle requests. +STEP: Read Status for v1alpha1.wardle.example.com +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' +STEP: List APIServices +Aug 17 23:52:05.253: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:52:06.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-7078" for this suite. + +• [SLOW TEST:12.997 seconds] +[sig-api-machinery] Aggregator +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":266,"skipped":4982,"failed":0} +SSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:52:06.150: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Aug 17 23:52:06.205: INFO: Waiting up to 5m0s for pod "pod-2722f844-a7f8-4931-be4d-9fe70a7adda3" in namespace "emptydir-6308" to be "Succeeded or Failed" +Aug 17 23:52:06.211: INFO: Pod "pod-2722f844-a7f8-4931-be4d-9fe70a7adda3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.454361ms +Aug 17 23:52:08.217: INFO: Pod "pod-2722f844-a7f8-4931-be4d-9fe70a7adda3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012443764s +Aug 17 23:52:10.225: INFO: Pod "pod-2722f844-a7f8-4931-be4d-9fe70a7adda3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020550633s +STEP: Saw pod success +Aug 17 23:52:10.225: INFO: Pod "pod-2722f844-a7f8-4931-be4d-9fe70a7adda3" satisfied condition "Succeeded or Failed" +Aug 17 23:52:10.229: INFO: Trying to get logs from node 195.17.65.231 pod pod-2722f844-a7f8-4931-be4d-9fe70a7adda3 container test-container: +STEP: delete the pod +Aug 17 23:52:10.261: INFO: Waiting for pod pod-2722f844-a7f8-4931-be4d-9fe70a7adda3 to disappear +Aug 17 23:52:10.265: INFO: Pod pod-2722f844-a7f8-4931-be4d-9fe70a7adda3 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:52:10.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6308" for this suite. +•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":267,"skipped":4985,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:52:10.277: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +Aug 17 23:52:20.781: INFO: 80 pods remaining +Aug 17 23:52:20.781: INFO: 68 pods has nil DeletionTimestamp +Aug 17 23:52:20.781: INFO: +Aug 17 23:52:25.778: INFO: 57 pods remaining +Aug 17 23:52:25.778: INFO: 50 pods has nil DeletionTimestamp +Aug 17 23:52:25.778: INFO: +STEP: Gathering metrics +Aug 17 23:52:30.818: INFO: The status of Pod kube-controller-manager-195.17.32.244 is Running (Ready = true) +Aug 17 23:52:30.894: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For 
namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Aug 17 23:52:30.894: INFO: Deleting pod "simpletest-rc-to-be-deleted-24m9j" in namespace "gc-4066" +Aug 17 23:52:30.933: INFO: Deleting pod "simpletest-rc-to-be-deleted-25x7w" in namespace "gc-4066" +Aug 17 23:52:30.959: INFO: Deleting pod "simpletest-rc-to-be-deleted-2cs4k" in namespace "gc-4066" +Aug 17 23:52:30.978: INFO: Deleting pod "simpletest-rc-to-be-deleted-2dpnv" in namespace "gc-4066" +Aug 17 23:52:31.003: INFO: Deleting pod "simpletest-rc-to-be-deleted-2dz67" in namespace "gc-4066" +Aug 17 23:52:31.021: INFO: Deleting pod "simpletest-rc-to-be-deleted-2m4cm" in namespace "gc-4066" +Aug 17 23:52:31.033: INFO: Deleting pod "simpletest-rc-to-be-deleted-4gdbf" in namespace "gc-4066" +Aug 17 23:52:31.056: INFO: Deleting pod "simpletest-rc-to-be-deleted-4k5xq" in namespace "gc-4066" +Aug 17 23:52:31.072: INFO: Deleting pod "simpletest-rc-to-be-deleted-5c782" in namespace "gc-4066" +Aug 17 23:52:31.090: INFO: Deleting pod "simpletest-rc-to-be-deleted-5cfrl" in namespace "gc-4066" +Aug 17 23:52:31.106: INFO: Deleting pod "simpletest-rc-to-be-deleted-5gz6n" in namespace "gc-4066" +Aug 17 23:52:31.122: INFO: Deleting pod "simpletest-rc-to-be-deleted-5mg8k" in namespace "gc-4066" +Aug 17 23:52:31.137: INFO: Deleting pod "simpletest-rc-to-be-deleted-64w6m" in namespace "gc-4066" +Aug 17 23:52:31.155: INFO: Deleting pod "simpletest-rc-to-be-deleted-66nwh" in namespace "gc-4066" +Aug 17 23:52:31.171: INFO: Deleting pod "simpletest-rc-to-be-deleted-6b5xh" in namespace "gc-4066" +Aug 17 23:52:31.192: INFO: Deleting pod "simpletest-rc-to-be-deleted-6bsbz" in namespace "gc-4066" +Aug 17 23:52:31.205: INFO: Deleting pod "simpletest-rc-to-be-deleted-6dlkd" in namespace "gc-4066" +Aug 17 23:52:31.218: INFO: Deleting pod "simpletest-rc-to-be-deleted-726mx" in namespace "gc-4066" +Aug 17 23:52:31.240: INFO: Deleting pod "simpletest-rc-to-be-deleted-7g5h9" in namespace "gc-4066" +Aug 17 23:52:31.264: INFO: Deleting pod "simpletest-rc-to-be-deleted-7gjvj" in namespace "gc-4066" +Aug 17 23:52:31.287: INFO: Deleting pod "simpletest-rc-to-be-deleted-7wjlw" in namespace "gc-4066" +Aug 17 23:52:31.308: INFO: Deleting pod "simpletest-rc-to-be-deleted-84pzx" in namespace "gc-4066" +Aug 17 23:52:31.322: INFO: Deleting pod "simpletest-rc-to-be-deleted-8jf5v" in namespace "gc-4066" +Aug 17 23:52:31.336: INFO: Deleting pod "simpletest-rc-to-be-deleted-8jmk9" in namespace "gc-4066" +Aug 17 23:52:31.354: INFO: Deleting pod "simpletest-rc-to-be-deleted-8qf2m" in namespace "gc-4066" +Aug 17 23:52:31.373: INFO: Deleting pod "simpletest-rc-to-be-deleted-8xgnw" in namespace "gc-4066" +Aug 17 23:52:31.398: INFO: Deleting pod "simpletest-rc-to-be-deleted-926d9" in namespace "gc-4066" +Aug 17 23:52:31.412: INFO: Deleting pod "simpletest-rc-to-be-deleted-94gfm" in namespace "gc-4066" +Aug 17 23:52:31.423: INFO: Deleting pod "simpletest-rc-to-be-deleted-9c9zh" in namespace "gc-4066" +Aug 17 23:52:31.440: INFO: Deleting pod "simpletest-rc-to-be-deleted-b9v6n" in namespace "gc-4066" +Aug 17 23:52:31.461: INFO: Deleting pod "simpletest-rc-to-be-deleted-bjt7w" in namespace "gc-4066" +Aug 17 23:52:31.486: INFO: Deleting pod "simpletest-rc-to-be-deleted-bq2n6" in namespace "gc-4066" +Aug 17 23:52:31.501: INFO: Deleting pod "simpletest-rc-to-be-deleted-bt6l2" in namespace "gc-4066" +Aug 17 23:52:31.517: INFO: Deleting pod "simpletest-rc-to-be-deleted-bxbgk" in namespace "gc-4066" +Aug 17 23:52:31.529: INFO: Deleting pod 
"simpletest-rc-to-be-deleted-c2gtv" in namespace "gc-4066" +Aug 17 23:52:31.550: INFO: Deleting pod "simpletest-rc-to-be-deleted-c6l6c" in namespace "gc-4066" +Aug 17 23:52:31.564: INFO: Deleting pod "simpletest-rc-to-be-deleted-ccllb" in namespace "gc-4066" +Aug 17 23:52:31.579: INFO: Deleting pod "simpletest-rc-to-be-deleted-ccm6v" in namespace "gc-4066" +Aug 17 23:52:31.595: INFO: Deleting pod "simpletest-rc-to-be-deleted-ck5g7" in namespace "gc-4066" +Aug 17 23:52:31.618: INFO: Deleting pod "simpletest-rc-to-be-deleted-cpjf4" in namespace "gc-4066" +Aug 17 23:52:31.641: INFO: Deleting pod "simpletest-rc-to-be-deleted-d2ndt" in namespace "gc-4066" +Aug 17 23:52:31.659: INFO: Deleting pod "simpletest-rc-to-be-deleted-d66lp" in namespace "gc-4066" +Aug 17 23:52:31.677: INFO: Deleting pod "simpletest-rc-to-be-deleted-dmbtw" in namespace "gc-4066" +Aug 17 23:52:31.704: INFO: Deleting pod "simpletest-rc-to-be-deleted-f42j6" in namespace "gc-4066" +Aug 17 23:52:31.721: INFO: Deleting pod "simpletest-rc-to-be-deleted-fmccf" in namespace "gc-4066" +Aug 17 23:52:31.739: INFO: Deleting pod "simpletest-rc-to-be-deleted-frbzn" in namespace "gc-4066" +Aug 17 23:52:31.755: INFO: Deleting pod "simpletest-rc-to-be-deleted-ftd94" in namespace "gc-4066" +Aug 17 23:52:31.778: INFO: Deleting pod "simpletest-rc-to-be-deleted-g42ql" in namespace "gc-4066" +Aug 17 23:52:31.794: INFO: Deleting pod "simpletest-rc-to-be-deleted-gbd9z" in namespace "gc-4066" +Aug 17 23:52:31.811: INFO: Deleting pod "simpletest-rc-to-be-deleted-gkjtp" in namespace "gc-4066" +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:52:31.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-4066" for this suite. 
+ +• [SLOW TEST:21.570 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":268,"skipped":5005,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:52:31.849: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name projected-secret-test-c9c9e1f1-25f6-4eda-8336-404ec1c642d9 +STEP: Creating a pod to test consume secrets +Aug 17 23:52:31.897: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3" in namespace "projected-5797" to be "Succeeded or Failed" +Aug 17 23:52:31.902: INFO: Pod "pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.939162ms +Aug 17 23:52:33.923: INFO: Pod "pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026177941s +Aug 17 23:52:35.939: INFO: Pod "pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041931969s +Aug 17 23:52:37.945: INFO: Pod "pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048606486s +Aug 17 23:52:39.953: INFO: Pod "pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056299145s +Aug 17 23:52:41.961: INFO: Pod "pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064074979s +Aug 17 23:52:43.969: INFO: Pod "pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.072621671s +STEP: Saw pod success +Aug 17 23:52:43.969: INFO: Pod "pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3" satisfied condition "Succeeded or Failed" +Aug 17 23:52:43.973: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3 container secret-volume-test: +STEP: delete the pod +Aug 17 23:52:43.996: INFO: Waiting for pod pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3 to disappear +Aug 17 23:52:43.999: INFO: Pod pod-projected-secrets-c01b6871-047e-4b61-bb1d-09de5e23c3f3 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:52:44.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5797" for this suite. + +• [SLOW TEST:12.162 seconds] +[sig-storage] Projected secret +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":269,"skipped":5057,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:52:44.012: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:52:44.053: INFO: The status of Pod busybox-readonly-fsf50bb2fa-7089-40eb-994f-981863ca9609 is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:52:46.062: INFO: The status of Pod busybox-readonly-fsf50bb2fa-7089-40eb-994f-981863ca9609 is Running (Ready = true) +[AfterEach] [sig-node] Kubelet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:52:46.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-6921" for this suite. 
+•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":270,"skipped":5067,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:52:46.090: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should add annotations for pods in rc [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating Agnhost RC +Aug 17 23:52:46.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-1331 create -f -' +Aug 17 23:52:48.443: INFO: stderr: "" +Aug 17 23:52:48.443: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Aug 17 23:52:49.450: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 17 23:52:49.450: INFO: Found 0 / 1 +Aug 17 23:52:50.447: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 17 23:52:50.447: INFO: Found 1 / 1 +Aug 17 23:52:50.447: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Aug 17 23:52:50.451: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 17 23:52:50.451: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Aug 17 23:52:50.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-1331 patch pod agnhost-primary-nkvml -p {"metadata":{"annotations":{"x":"y"}}}' +Aug 17 23:52:50.532: INFO: stderr: "" +Aug 17 23:52:50.532: INFO: stdout: "pod/agnhost-primary-nkvml patched\n" +STEP: checking annotations +Aug 17 23:52:50.536: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 17 23:52:50.536: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:52:50.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1331" for this suite. 
+•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":346,"completed":271,"skipped":5095,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:52:50.551: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should support --unix-socket=/path [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Starting the proxy +Aug 17 23:52:50.577: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-5243 proxy --unix-socket=/tmp/kubectl-proxy-unix3047802488/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:52:50.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5243" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":346,"completed":272,"skipped":5109,"failed":0} +SSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:52:50.635: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:52:50.669: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:52:56.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-6557" for this suite. 
+ +• [SLOW TEST:5.692 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 + listing custom resource definition objects works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":346,"completed":273,"skipped":5115,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:52:56.332: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 23:52:56.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d21dc21-ce02-4bf8-a841-b5980e97cda5" in namespace "downward-api-4426" to be "Succeeded or Failed" +Aug 17 23:52:56.381: INFO: Pod "downwardapi-volume-9d21dc21-ce02-4bf8-a841-b5980e97cda5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642445ms +Aug 17 23:52:58.392: INFO: Pod "downwardapi-volume-9d21dc21-ce02-4bf8-a841-b5980e97cda5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017385792s +Aug 17 23:53:00.399: INFO: Pod "downwardapi-volume-9d21dc21-ce02-4bf8-a841-b5980e97cda5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024883962s +STEP: Saw pod success +Aug 17 23:53:00.400: INFO: Pod "downwardapi-volume-9d21dc21-ce02-4bf8-a841-b5980e97cda5" satisfied condition "Succeeded or Failed" +Aug 17 23:53:00.403: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-9d21dc21-ce02-4bf8-a841-b5980e97cda5 container client-container: +STEP: delete the pod +Aug 17 23:53:00.426: INFO: Waiting for pod downwardapi-volume-9d21dc21-ce02-4bf8-a841-b5980e97cda5 to disappear +Aug 17 23:53:00.429: INFO: Pod downwardapi-volume-9d21dc21-ce02-4bf8-a841-b5980e97cda5 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:53:00.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4426" for this suite. +•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":274,"skipped":5138,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:53:00.443: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:53:00.473: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Aug 17 23:53:08.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7348 --namespace=crd-publish-openapi-7348 create -f -' +Aug 17 23:53:10.349: INFO: stderr: "" +Aug 17 23:53:10.349: INFO: stdout: "e2e-test-crd-publish-openapi-8701-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Aug 17 23:53:10.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7348 --namespace=crd-publish-openapi-7348 delete e2e-test-crd-publish-openapi-8701-crds test-cr' +Aug 17 23:53:10.453: INFO: stderr: "" +Aug 17 23:53:10.453: INFO: stdout: "e2e-test-crd-publish-openapi-8701-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Aug 17 23:53:10.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7348 --namespace=crd-publish-openapi-7348 apply -f -' +Aug 17 23:53:10.740: INFO: stderr: "" +Aug 17 23:53:10.740: INFO: stdout: "e2e-test-crd-publish-openapi-8701-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Aug 17 23:53:10.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 
--namespace=crd-publish-openapi-7348 --namespace=crd-publish-openapi-7348 delete e2e-test-crd-publish-openapi-8701-crds test-cr' +Aug 17 23:53:10.813: INFO: stderr: "" +Aug 17 23:53:10.813: INFO: stdout: "e2e-test-crd-publish-openapi-8701-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema +Aug 17 23:53:10.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=crd-publish-openapi-7348 explain e2e-test-crd-publish-openapi-8701-crds' +Aug 17 23:53:11.085: INFO: stderr: "" +Aug 17 23:53:11.085: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8701-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:53:18.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7348" for this suite. + +• [SLOW TEST:18.498 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD without validation schema [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":275,"skipped":5197,"failed":0} +SSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:53:18.942: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a suspended cronjob +STEP: Ensuring no jobs are scheduled +STEP: Ensuring no job exists by listing jobs explicitly +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:58:19.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-3461" for this suite. 
+ +• [SLOW TEST:300.072 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":276,"skipped":5206,"failed":0} +S +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:58:19.015: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename lease-test +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] lease API should be available [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-node] Lease + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:58:19.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "lease-test-3406" for this suite. +•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":277,"skipped":5207,"failed":0} +S +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:58:19.146: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should observe PodDisruptionBudget status updated [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +Aug 17 23:58:21.234: INFO: running pods: 0 < 3 +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:58:23.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-3262" for this suite. 
+•{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":278,"skipped":5208,"failed":0} +SS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:58:23.255: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name secret-test-6279b1e9-5a96-4776-ad75-b7a5518185ed +STEP: Creating a pod to test consume secrets +Aug 17 23:58:23.302: INFO: Waiting up to 5m0s for pod "pod-secrets-3592e931-197b-41b9-80e4-463b66d2327d" in namespace "secrets-1130" to be "Succeeded or Failed" +Aug 17 23:58:23.305: INFO: Pod "pod-secrets-3592e931-197b-41b9-80e4-463b66d2327d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20151ms +Aug 17 23:58:25.311: INFO: Pod "pod-secrets-3592e931-197b-41b9-80e4-463b66d2327d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008611121s +Aug 17 23:58:27.317: INFO: Pod "pod-secrets-3592e931-197b-41b9-80e4-463b66d2327d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014754828s +STEP: Saw pod success +Aug 17 23:58:27.317: INFO: Pod "pod-secrets-3592e931-197b-41b9-80e4-463b66d2327d" satisfied condition "Succeeded or Failed" +Aug 17 23:58:27.321: INFO: Trying to get logs from node 195.17.65.231 pod pod-secrets-3592e931-197b-41b9-80e4-463b66d2327d container secret-volume-test: +STEP: delete the pod +Aug 17 23:58:27.354: INFO: Waiting for pod pod-secrets-3592e931-197b-41b9-80e4-463b66d2327d to disappear +Aug 17 23:58:27.357: INFO: Pod pod-secrets-3592e931-197b-41b9-80e4-463b66d2327d no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:58:27.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-1130" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":279,"skipped":5210,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:58:27.369: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename discovery +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 +STEP: Setting up server cert +[It] should validate PreferredVersion for each APIGroup [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 17 23:58:27.795: INFO: Checking APIGroup: apiregistration.k8s.io +Aug 17 23:58:27.796: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Aug 17 23:58:27.796: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Aug 17 23:58:27.796: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Aug 17 23:58:27.796: INFO: Checking APIGroup: apps +Aug 17 23:58:27.797: INFO: PreferredVersion.GroupVersion: apps/v1 +Aug 17 23:58:27.797: INFO: Versions found [{apps/v1 v1}] +Aug 17 23:58:27.797: INFO: apps/v1 matches apps/v1 +Aug 17 23:58:27.797: INFO: Checking APIGroup: events.k8s.io +Aug 17 23:58:27.798: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Aug 17 23:58:27.798: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] +Aug 17 23:58:27.798: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Aug 17 23:58:27.798: INFO: Checking APIGroup: authentication.k8s.io +Aug 17 23:58:27.798: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Aug 17 23:58:27.798: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Aug 17 23:58:27.798: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Aug 17 23:58:27.798: INFO: Checking APIGroup: authorization.k8s.io +Aug 17 23:58:27.799: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 +Aug 17 23:58:27.799: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Aug 17 23:58:27.799: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Aug 17 23:58:27.799: INFO: Checking APIGroup: autoscaling +Aug 17 23:58:27.800: INFO: PreferredVersion.GroupVersion: autoscaling/v2 +Aug 17 23:58:27.800: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] +Aug 17 23:58:27.800: INFO: autoscaling/v2 matches autoscaling/v2 +Aug 17 23:58:27.800: INFO: Checking APIGroup: batch +Aug 17 23:58:27.801: INFO: PreferredVersion.GroupVersion: batch/v1 +Aug 17 23:58:27.801: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] +Aug 17 23:58:27.801: INFO: batch/v1 matches batch/v1 +Aug 17 23:58:27.801: INFO: Checking APIGroup: certificates.k8s.io +Aug 17 23:58:27.802: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Aug 17 23:58:27.802: INFO: Versions found 
[{certificates.k8s.io/v1 v1}] +Aug 17 23:58:27.802: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Aug 17 23:58:27.802: INFO: Checking APIGroup: networking.k8s.io +Aug 17 23:58:27.802: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Aug 17 23:58:27.802: INFO: Versions found [{networking.k8s.io/v1 v1}] +Aug 17 23:58:27.802: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Aug 17 23:58:27.802: INFO: Checking APIGroup: policy +Aug 17 23:58:27.803: INFO: PreferredVersion.GroupVersion: policy/v1 +Aug 17 23:58:27.803: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] +Aug 17 23:58:27.803: INFO: policy/v1 matches policy/v1 +Aug 17 23:58:27.803: INFO: Checking APIGroup: rbac.authorization.k8s.io +Aug 17 23:58:27.804: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Aug 17 23:58:27.804: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Aug 17 23:58:27.804: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Aug 17 23:58:27.804: INFO: Checking APIGroup: storage.k8s.io +Aug 17 23:58:27.805: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Aug 17 23:58:27.805: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Aug 17 23:58:27.805: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Aug 17 23:58:27.805: INFO: Checking APIGroup: admissionregistration.k8s.io +Aug 17 23:58:27.806: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Aug 17 23:58:27.806: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Aug 17 23:58:27.806: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Aug 17 23:58:27.806: INFO: Checking APIGroup: apiextensions.k8s.io +Aug 17 23:58:27.807: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Aug 17 23:58:27.807: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Aug 17 23:58:27.807: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Aug 17 23:58:27.807: INFO: Checking APIGroup: scheduling.k8s.io +Aug 17 23:58:27.807: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Aug 17 23:58:27.807: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Aug 17 23:58:27.807: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Aug 17 23:58:27.807: INFO: Checking APIGroup: coordination.k8s.io +Aug 17 23:58:27.808: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Aug 17 23:58:27.808: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Aug 17 23:58:27.808: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Aug 17 23:58:27.808: INFO: Checking APIGroup: node.k8s.io +Aug 17 23:58:27.809: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 +Aug 17 23:58:27.809: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] +Aug 17 23:58:27.809: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Aug 17 23:58:27.809: INFO: Checking APIGroup: discovery.k8s.io +Aug 17 23:58:27.810: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Aug 17 23:58:27.810: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] +Aug 17 23:58:27.810: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Aug 17 23:58:27.810: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Aug 17 23:58:27.811: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2 +Aug 17 23:58:27.811: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] +Aug 17 23:58:27.811: INFO: 
flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2 +Aug 17 23:58:27.811: INFO: Checking APIGroup: acme.cert-manager.io +Aug 17 23:58:27.812: INFO: PreferredVersion.GroupVersion: acme.cert-manager.io/v1 +Aug 17 23:58:27.812: INFO: Versions found [{acme.cert-manager.io/v1 v1}] +Aug 17 23:58:27.812: INFO: acme.cert-manager.io/v1 matches acme.cert-manager.io/v1 +Aug 17 23:58:27.812: INFO: Checking APIGroup: cert-manager.io +Aug 17 23:58:27.813: INFO: PreferredVersion.GroupVersion: cert-manager.io/v1 +Aug 17 23:58:27.813: INFO: Versions found [{cert-manager.io/v1 v1}] +Aug 17 23:58:27.813: INFO: cert-manager.io/v1 matches cert-manager.io/v1 +Aug 17 23:58:27.813: INFO: Checking APIGroup: anywhere.eks.amazonaws.com +Aug 17 23:58:27.813: INFO: PreferredVersion.GroupVersion: anywhere.eks.amazonaws.com/v1alpha1 +Aug 17 23:58:27.813: INFO: Versions found [{anywhere.eks.amazonaws.com/v1alpha1 v1alpha1}] +Aug 17 23:58:27.813: INFO: anywhere.eks.amazonaws.com/v1alpha1 matches anywhere.eks.amazonaws.com/v1alpha1 +Aug 17 23:58:27.813: INFO: Checking APIGroup: distro.eks.amazonaws.com +Aug 17 23:58:27.814: INFO: PreferredVersion.GroupVersion: distro.eks.amazonaws.com/v1alpha1 +Aug 17 23:58:27.814: INFO: Versions found [{distro.eks.amazonaws.com/v1alpha1 v1alpha1}] +Aug 17 23:58:27.814: INFO: distro.eks.amazonaws.com/v1alpha1 matches distro.eks.amazonaws.com/v1alpha1 +Aug 17 23:58:27.814: INFO: Checking APIGroup: ipam.cluster.x-k8s.io +Aug 17 23:58:27.815: INFO: PreferredVersion.GroupVersion: ipam.cluster.x-k8s.io/v1alpha1 +Aug 17 23:58:27.815: INFO: Versions found [{ipam.cluster.x-k8s.io/v1alpha1 v1alpha1}] +Aug 17 23:58:27.815: INFO: ipam.cluster.x-k8s.io/v1alpha1 matches ipam.cluster.x-k8s.io/v1alpha1 +Aug 17 23:58:27.815: INFO: Checking APIGroup: packages.eks.amazonaws.com +Aug 17 23:58:27.816: INFO: PreferredVersion.GroupVersion: packages.eks.amazonaws.com/v1alpha1 +Aug 17 23:58:27.816: INFO: Versions found [{packages.eks.amazonaws.com/v1alpha1 v1alpha1}] +Aug 17 23:58:27.816: INFO: packages.eks.amazonaws.com/v1alpha1 matches packages.eks.amazonaws.com/v1alpha1 +Aug 17 23:58:27.816: INFO: Checking APIGroup: runtime.cluster.x-k8s.io +Aug 17 23:58:27.816: INFO: PreferredVersion.GroupVersion: runtime.cluster.x-k8s.io/v1alpha1 +Aug 17 23:58:27.816: INFO: Versions found [{runtime.cluster.x-k8s.io/v1alpha1 v1alpha1}] +Aug 17 23:58:27.816: INFO: runtime.cluster.x-k8s.io/v1alpha1 matches runtime.cluster.x-k8s.io/v1alpha1 +Aug 17 23:58:27.816: INFO: Checking APIGroup: addons.cluster.x-k8s.io +Aug 17 23:58:27.817: INFO: PreferredVersion.GroupVersion: addons.cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.817: INFO: Versions found [{addons.cluster.x-k8s.io/v1beta1 v1beta1} {addons.cluster.x-k8s.io/v1alpha4 v1alpha4} {addons.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Aug 17 23:58:27.817: INFO: addons.cluster.x-k8s.io/v1beta1 matches addons.cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.817: INFO: Checking APIGroup: bootstrap.cluster.x-k8s.io +Aug 17 23:58:27.818: INFO: PreferredVersion.GroupVersion: bootstrap.cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.818: INFO: Versions found [{bootstrap.cluster.x-k8s.io/v1beta1 v1beta1} {bootstrap.cluster.x-k8s.io/v1alpha4 v1alpha4} {bootstrap.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Aug 17 23:58:27.818: INFO: bootstrap.cluster.x-k8s.io/v1beta1 matches bootstrap.cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.818: INFO: Checking APIGroup: cluster.x-k8s.io +Aug 17 23:58:27.819: INFO: PreferredVersion.GroupVersion: cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.819: INFO: Versions 
found [{cluster.x-k8s.io/v1beta1 v1beta1} {cluster.x-k8s.io/v1alpha4 v1alpha4} {cluster.x-k8s.io/v1alpha3 v1alpha3}] +Aug 17 23:58:27.819: INFO: cluster.x-k8s.io/v1beta1 matches cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.819: INFO: Checking APIGroup: clusterctl.cluster.x-k8s.io +Aug 17 23:58:27.819: INFO: PreferredVersion.GroupVersion: clusterctl.cluster.x-k8s.io/v1alpha3 +Aug 17 23:58:27.819: INFO: Versions found [{clusterctl.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Aug 17 23:58:27.819: INFO: clusterctl.cluster.x-k8s.io/v1alpha3 matches clusterctl.cluster.x-k8s.io/v1alpha3 +Aug 17 23:58:27.819: INFO: Checking APIGroup: controlplane.cluster.x-k8s.io +Aug 17 23:58:27.820: INFO: PreferredVersion.GroupVersion: controlplane.cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.820: INFO: Versions found [{controlplane.cluster.x-k8s.io/v1beta1 v1beta1} {controlplane.cluster.x-k8s.io/v1alpha4 v1alpha4} {controlplane.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Aug 17 23:58:27.820: INFO: controlplane.cluster.x-k8s.io/v1beta1 matches controlplane.cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.820: INFO: Checking APIGroup: etcdcluster.cluster.x-k8s.io +Aug 17 23:58:27.821: INFO: PreferredVersion.GroupVersion: etcdcluster.cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.821: INFO: Versions found [{etcdcluster.cluster.x-k8s.io/v1beta1 v1beta1} {etcdcluster.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Aug 17 23:58:27.821: INFO: etcdcluster.cluster.x-k8s.io/v1beta1 matches etcdcluster.cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.821: INFO: Checking APIGroup: infrastructure.cluster.x-k8s.io +Aug 17 23:58:27.822: INFO: PreferredVersion.GroupVersion: infrastructure.cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.822: INFO: Versions found [{infrastructure.cluster.x-k8s.io/v1beta1 v1beta1} {infrastructure.cluster.x-k8s.io/v1alpha4 v1alpha4} {infrastructure.cluster.x-k8s.io/v1alpha3 v1alpha3}] +Aug 17 23:58:27.822: INFO: infrastructure.cluster.x-k8s.io/v1beta1 matches infrastructure.cluster.x-k8s.io/v1beta1 +Aug 17 23:58:27.822: INFO: Checking APIGroup: cilium.io +Aug 17 23:58:27.823: INFO: PreferredVersion.GroupVersion: cilium.io/v2 +Aug 17 23:58:27.823: INFO: Versions found [{cilium.io/v2 v2} {cilium.io/v2alpha1 v2alpha1}] +Aug 17 23:58:27.823: INFO: cilium.io/v2 matches cilium.io/v2 +[AfterEach] [sig-api-machinery] Discovery + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:58:27.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "discovery-2920" for this suite. 
+•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":280,"skipped":5221,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:58:27.835: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name configmap-test-upd-08883c59-5a8b-491f-a0b9-962fb0609e86 +STEP: Creating the pod +Aug 17 23:58:27.883: INFO: The status of Pod pod-configmaps-c7ec1872-4118-4ff8-94a4-b56c600e5bde is Pending, waiting for it to be Running (with Ready = true) +Aug 17 23:58:29.889: INFO: The status of Pod pod-configmaps-c7ec1872-4118-4ff8-94a4-b56c600e5bde is Running (Ready = true) +STEP: Updating configmap configmap-test-upd-08883c59-5a8b-491f-a0b9-962fb0609e86 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:58:31.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-6227" for this suite. 
+•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":281,"skipped":5242,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:58:31.943: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 +[It] should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a replication controller +Aug 17 23:58:31.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 create -f -' +Aug 17 23:58:33.324: INFO: stderr: "" +Aug 17 23:58:33.324: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Aug 17 23:58:33.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 17 23:58:33.403: INFO: stderr: "" +Aug 17 23:58:33.403: INFO: stdout: "update-demo-nautilus-pwc82 update-demo-nautilus-vwfq5 " +Aug 17 23:58:33.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-pwc82 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 17 23:58:33.474: INFO: stderr: "" +Aug 17 23:58:33.474: INFO: stdout: "" +Aug 17 23:58:33.474: INFO: update-demo-nautilus-pwc82 is created but not running +Aug 17 23:58:38.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 17 23:58:38.544: INFO: stderr: "" +Aug 17 23:58:38.545: INFO: stdout: "update-demo-nautilus-pwc82 update-demo-nautilus-vwfq5 " +Aug 17 23:58:38.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-pwc82 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 17 23:58:38.610: INFO: stderr: "" +Aug 17 23:58:38.610: INFO: stdout: "true" +Aug 17 23:58:38.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-pwc82 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 17 23:58:38.678: INFO: stderr: "" +Aug 17 23:58:38.678: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 17 23:58:38.678: INFO: validating pod update-demo-nautilus-pwc82 +Aug 17 23:58:38.684: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 17 23:58:38.684: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 17 23:58:38.684: INFO: update-demo-nautilus-pwc82 is verified up and running +Aug 17 23:58:38.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-vwfq5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 17 23:58:38.751: INFO: stderr: "" +Aug 17 23:58:38.751: INFO: stdout: "true" +Aug 17 23:58:38.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-vwfq5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 17 23:58:38.815: INFO: stderr: "" +Aug 17 23:58:38.815: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 17 23:58:38.815: INFO: validating pod update-demo-nautilus-vwfq5 +Aug 17 23:58:38.821: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 17 23:58:38.821: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 17 23:58:38.821: INFO: update-demo-nautilus-vwfq5 is verified up and running +STEP: scaling down the replication controller +Aug 17 23:58:38.824: INFO: scanned /root for discovery docs: +Aug 17 23:58:38.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Aug 17 23:58:39.929: INFO: stderr: "" +Aug 17 23:58:39.929: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Aug 17 23:58:39.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 17 23:58:39.998: INFO: stderr: "" +Aug 17 23:58:39.998: INFO: stdout: "update-demo-nautilus-pwc82 update-demo-nautilus-vwfq5 " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Aug 17 23:58:44.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 17 23:58:45.071: INFO: stderr: "" +Aug 17 23:58:45.071: INFO: stdout: "update-demo-nautilus-pwc82 " +Aug 17 23:58:45.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-pwc82 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 17 23:58:45.138: INFO: stderr: "" +Aug 17 23:58:45.138: INFO: stdout: "true" +Aug 17 23:58:45.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-pwc82 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 17 23:58:45.209: INFO: stderr: "" +Aug 17 23:58:45.209: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 17 23:58:45.209: INFO: validating pod update-demo-nautilus-pwc82 +Aug 17 23:58:45.213: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 17 23:58:45.213: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 17 23:58:45.213: INFO: update-demo-nautilus-pwc82 is verified up and running +STEP: scaling up the replication controller +Aug 17 23:58:45.217: INFO: scanned /root for discovery docs: +Aug 17 23:58:45.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Aug 17 23:58:46.314: INFO: stderr: "" +Aug 17 23:58:46.314: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Aug 17 23:58:46.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 17 23:58:46.385: INFO: stderr: "" +Aug 17 23:58:46.385: INFO: stdout: "update-demo-nautilus-bq2sd update-demo-nautilus-pwc82 " +Aug 17 23:58:46.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-bq2sd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 17 23:58:46.450: INFO: stderr: "" +Aug 17 23:58:46.451: INFO: stdout: "" +Aug 17 23:58:46.451: INFO: update-demo-nautilus-bq2sd is created but not running +Aug 17 23:58:51.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Aug 17 23:58:51.524: INFO: stderr: "" +Aug 17 23:58:51.524: INFO: stdout: "update-demo-nautilus-bq2sd update-demo-nautilus-pwc82 " +Aug 17 23:58:51.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-bq2sd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 17 23:58:51.594: INFO: stderr: "" +Aug 17 23:58:51.594: INFO: stdout: "true" +Aug 17 23:58:51.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-bq2sd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 17 23:58:51.665: INFO: stderr: "" +Aug 17 23:58:51.665: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 17 23:58:51.665: INFO: validating pod update-demo-nautilus-bq2sd +Aug 17 23:58:51.671: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 17 23:58:51.671: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 17 23:58:51.671: INFO: update-demo-nautilus-bq2sd is verified up and running +Aug 17 23:58:51.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-pwc82 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Aug 17 23:58:51.741: INFO: stderr: "" +Aug 17 23:58:51.741: INFO: stdout: "true" +Aug 17 23:58:51.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods update-demo-nautilus-pwc82 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Aug 17 23:58:51.833: INFO: stderr: "" +Aug 17 23:58:51.833: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" +Aug 17 23:58:51.833: INFO: validating pod update-demo-nautilus-pwc82 +Aug 17 23:58:51.837: INFO: got data: { + "image": "nautilus.jpg" +} + +Aug 17 23:58:51.837: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Aug 17 23:58:51.837: INFO: update-demo-nautilus-pwc82 is verified up and running +STEP: using delete to clean up resources +Aug 17 23:58:51.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 delete --grace-period=0 --force -f -' +Aug 17 23:58:51.916: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Aug 17 23:58:51.916: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Aug 17 23:58:51.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get rc,svc -l name=update-demo --no-headers' +Aug 17 23:58:52.000: INFO: stderr: "No resources found in kubectl-7319 namespace.\n" +Aug 17 23:58:52.000: INFO: stdout: "" +Aug 17 23:58:52.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-7319 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Aug 17 23:58:52.072: INFO: stderr: "" +Aug 17 23:58:52.072: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:58:52.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7319" for this suite. 
+ +• [SLOW TEST:20.145 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Update Demo + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 + should scale a replication controller [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":346,"completed":282,"skipped":5283,"failed":0} +SSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:58:52.089: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:59:17.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-762" for this suite. 
+ +• [SLOW TEST:25.349 seconds] +[sig-node] Container Runtime +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + blackbox test + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 + when starting a container that exits + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 + should run with the expected status [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":283,"skipped":5286,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:59:17.439: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be immutable if `immutable` field is set [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:59:17.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-4854" for this suite. 
+•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":284,"skipped":5311,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:59:17.520: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-4678 +[It] should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating statefulset ss in namespace statefulset-4678 +Aug 17 23:59:17.564: INFO: Found 0 stateful pods, waiting for 1 +Aug 17 23:59:27.570: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource +STEP: updating a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +STEP: Patch a scale subresource +STEP: verifying the statefulset Spec.Replicas was modified +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 17 23:59:27.602: INFO: Deleting all statefulset in ns statefulset-4678 +Aug 17 23:59:27.609: INFO: Scaling statefulset ss to 0 +Aug 17 23:59:37.634: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 17 23:59:37.638: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:59:37.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-4678" for this suite. 
+ +• [SLOW TEST:20.147 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + should have a working scale subresource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":285,"skipped":5318,"failed":0} +SSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:59:37.669: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the container +STEP: wait for the container to reach Failed +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Aug 17 23:59:41.743: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:59:41.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-8956" for this suite. 
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":346,"completed":286,"skipped":5330,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:59:41.775: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap configmap-183/configmap-test-2e3facd0-c4da-4d22-965d-1e702afc2acf +STEP: Creating a pod to test consume configMaps +Aug 17 23:59:41.816: INFO: Waiting up to 5m0s for pod "pod-configmaps-70e08435-46ec-4c1f-a8e5-99e3b46e3e53" in namespace "configmap-183" to be "Succeeded or Failed" +Aug 17 23:59:41.822: INFO: Pod "pod-configmaps-70e08435-46ec-4c1f-a8e5-99e3b46e3e53": Phase="Pending", Reason="", readiness=false. Elapsed: 5.669298ms +Aug 17 23:59:43.828: INFO: Pod "pod-configmaps-70e08435-46ec-4c1f-a8e5-99e3b46e3e53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011487192s +Aug 17 23:59:45.833: INFO: Pod "pod-configmaps-70e08435-46ec-4c1f-a8e5-99e3b46e3e53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016465895s +STEP: Saw pod success +Aug 17 23:59:45.833: INFO: Pod "pod-configmaps-70e08435-46ec-4c1f-a8e5-99e3b46e3e53" satisfied condition "Succeeded or Failed" +Aug 17 23:59:45.836: INFO: Trying to get logs from node 195.17.65.231 pod pod-configmaps-70e08435-46ec-4c1f-a8e5-99e3b46e3e53 container env-test: +STEP: delete the pod +Aug 17 23:59:45.856: INFO: Waiting for pod pod-configmaps-70e08435-46ec-4c1f-a8e5-99e3b46e3e53 to disappear +Aug 17 23:59:45.859: INFO: Pod pod-configmaps-70e08435-46ec-4c1f-a8e5-99e3b46e3e53 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:59:45.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-183" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":287,"skipped":5382,"failed":0} +SSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:59:45.874: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 17 23:59:45.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd4fe937-1bbf-4f23-96b2-c55890c4a029" in namespace "downward-api-130" to be "Succeeded or Failed" +Aug 17 23:59:45.920: INFO: Pod "downwardapi-volume-cd4fe937-1bbf-4f23-96b2-c55890c4a029": Phase="Pending", Reason="", readiness=false. Elapsed: 7.226338ms +Aug 17 23:59:47.926: INFO: Pod "downwardapi-volume-cd4fe937-1bbf-4f23-96b2-c55890c4a029": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013762995s +Aug 17 23:59:49.932: INFO: Pod "downwardapi-volume-cd4fe937-1bbf-4f23-96b2-c55890c4a029": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019958656s +STEP: Saw pod success +Aug 17 23:59:49.932: INFO: Pod "downwardapi-volume-cd4fe937-1bbf-4f23-96b2-c55890c4a029" satisfied condition "Succeeded or Failed" +Aug 17 23:59:49.936: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-cd4fe937-1bbf-4f23-96b2-c55890c4a029 container client-container: +STEP: delete the pod +Aug 17 23:59:49.963: INFO: Waiting for pod downwardapi-volume-cd4fe937-1bbf-4f23-96b2-c55890c4a029 to disappear +Aug 17 23:59:49.967: INFO: Pod downwardapi-volume-cd4fe937-1bbf-4f23-96b2-c55890c4a029 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 17 23:59:49.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-130" for this suite. 
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":288,"skipped":5387,"failed":0} +SSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 17 23:59:49.994: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:00:50.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5710" for this suite. + +• [SLOW TEST:60.116 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":289,"skipped":5390,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:00:50.111: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating the pod +Aug 18 00:00:50.139: INFO: PodSpec: initContainers 
in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:00:54.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-2019" for this suite. +•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":290,"skipped":5401,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:00:54.474: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 18 00:00:54.515: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Pending, waiting for it to be Running (with Ready = true) +Aug 18 00:00:56.522: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = false) +Aug 18 00:00:58.521: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = false) +Aug 18 00:01:00.523: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = false) +Aug 18 00:01:02.522: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = false) +Aug 18 00:01:04.521: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = false) +Aug 18 00:01:06.521: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = false) +Aug 18 00:01:08.522: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = false) +Aug 18 00:01:10.523: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = false) +Aug 18 00:01:12.522: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = false) +Aug 18 00:01:14.522: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = false) +Aug 18 00:01:16.521: INFO: The status of Pod test-webserver-f0130a48-6cf4-40d4-99ef-a4a70706acdd is Running (Ready = true) +Aug 18 00:01:16.524: INFO: Container started at 2022-08-18 00:00:55 +0000 UTC, pod became ready at 2022-08-18 00:01:14 +0000 UTC +[AfterEach] [sig-node] Probing container + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:01:16.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-528" for this suite. + +• [SLOW TEST:22.064 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":291,"skipped":5471,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:01:16.538: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 18 00:01:17.185: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 18 00:01:20.226: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 18 00:01:20.231: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3722-crds.webhook.example.com via the AdmissionRegistration API +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:01:23.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-1522" for this suite. +STEP: Destroying namespace "webhook-1522-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 + +• [SLOW TEST:6.885 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":292,"skipped":5478,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:01:23.425: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 18 00:01:23.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69f67f92-8656-4429-ab83-ff77be7d2714" in namespace "projected-1459" to be "Succeeded or Failed" +Aug 18 00:01:23.486: INFO: Pod "downwardapi-volume-69f67f92-8656-4429-ab83-ff77be7d2714": Phase="Pending", Reason="", readiness=false. Elapsed: 3.511272ms +Aug 18 00:01:25.492: INFO: Pod "downwardapi-volume-69f67f92-8656-4429-ab83-ff77be7d2714": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008924476s +Aug 18 00:01:27.499: INFO: Pod "downwardapi-volume-69f67f92-8656-4429-ab83-ff77be7d2714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016162492s +STEP: Saw pod success +Aug 18 00:01:27.499: INFO: Pod "downwardapi-volume-69f67f92-8656-4429-ab83-ff77be7d2714" satisfied condition "Succeeded or Failed" +Aug 18 00:01:27.502: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-69f67f92-8656-4429-ab83-ff77be7d2714 container client-container: +STEP: delete the pod +Aug 18 00:01:27.528: INFO: Waiting for pod downwardapi-volume-69f67f92-8656-4429-ab83-ff77be7d2714 to disappear +Aug 18 00:01:27.531: INFO: Pod downwardapi-volume-69f67f92-8656-4429-ab83-ff77be7d2714 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:01:27.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1459" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":293,"skipped":5535,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:01:27.545: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Aug 18 00:01:28.181: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 18 00:01:31.214: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Registering the crd webhook via the AdmissionRegistration API +STEP: Creating a custom resource definition that should be denied by the webhook +Aug 18 00:01:31.243: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:01:31.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7551" for this suite. +STEP: Destroying namespace "webhook-7551-markers" for this suite. 
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 +•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":294,"skipped":5597,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:01:31.334: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating service in namespace services-5933 +Aug 18 00:01:31.373: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) +Aug 18 00:01:33.377: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) +Aug 18 00:01:33.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5933 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' +Aug 18 00:01:33.539: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" +Aug 18 00:01:33.539: INFO: stdout: "iptables" +Aug 18 00:01:33.539: INFO: proxyMode: iptables +Aug 18 00:01:33.554: INFO: Waiting for pod kube-proxy-mode-detector to disappear +Aug 18 00:01:33.557: INFO: Pod kube-proxy-mode-detector no longer exists +STEP: creating service affinity-nodeport-timeout in namespace services-5933 +STEP: creating replication controller affinity-nodeport-timeout in namespace services-5933 +I0818 00:01:33.587927 20 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-5933, replica count: 3 +I0818 00:01:36.639163 20 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Aug 18 00:01:36.649: INFO: Creating new exec pod +Aug 18 00:01:39.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5933 exec execpod-affinity7ncvf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' +Aug 18 00:01:39.815: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" +Aug 18 00:01:39.815: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 18 00:01:39.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 
--namespace=services-5933 exec execpod-affinity7ncvf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.97.27.65 80' +Aug 18 00:01:39.957: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 10.97.27.65 80\nConnection to 10.97.27.65 80 port [tcp/http] succeeded!\n" +Aug 18 00:01:39.957: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 18 00:01:39.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5933 exec execpod-affinity7ncvf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.131.205 31721' +Aug 18 00:01:40.090: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.131.205 31721\nConnection to 195.17.131.205 31721 port [tcp/*] succeeded!\n" +Aug 18 00:01:40.090: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 18 00:01:40.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5933 exec execpod-affinity7ncvf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 195.17.65.231 31721' +Aug 18 00:01:40.224: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 195.17.65.231 31721\nConnection to 195.17.65.231 31721 port [tcp/*] succeeded!\n" +Aug 18 00:01:40.224: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" +Aug 18 00:01:40.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5933 exec execpod-affinity7ncvf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://195.17.131.205:31721/ ; done' +Aug 18 00:01:40.436: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n+ echo\n+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n" +Aug 18 00:01:40.437: INFO: stdout: "\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl\naffinity-nodeport-timeout-hf4nl" +Aug 18 00:01:40.437: INFO: Received response from host: 
affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Received response from host: affinity-nodeport-timeout-hf4nl +Aug 18 00:01:40.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5933 exec execpod-affinity7ncvf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://195.17.131.205:31721/' +Aug 18 00:01:40.581: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n" +Aug 18 00:01:40.581: INFO: stdout: "affinity-nodeport-timeout-hf4nl" +Aug 18 00:02:00.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-5933 exec execpod-affinity7ncvf -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://195.17.131.205:31721/' +Aug 18 00:02:00.721: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://195.17.131.205:31721/\n" +Aug 18 00:02:00.721: INFO: stdout: "affinity-nodeport-timeout-6qvlm" +Aug 18 00:02:00.721: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-5933, will wait for the garbage collector to delete the pods +Aug 18 00:02:00.801: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 8.29043ms +Aug 18 00:02:00.903: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 102.646126ms +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:02:02.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5933" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:31.514 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":295,"skipped":5615,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:02:02.849: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod test-webserver-2a1db918-6696-4554-84e9-2b1f54f4eeda in namespace container-probe-7403 +Aug 18 00:02:04.895: INFO: Started pod test-webserver-2a1db918-6696-4554-84e9-2b1f54f4eeda in namespace container-probe-7403 +STEP: checking the pod's current state and verifying that restartCount is present +Aug 18 00:02:04.899: INFO: Initial restart count of pod test-webserver-2a1db918-6696-4554-84e9-2b1f54f4eeda is 0 +STEP: deleting the pod +[AfterEach] [sig-node] Probing container + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:06:05.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7403" for this suite. 
+ +• [SLOW TEST:242.982 seconds] +[sig-node] Probing container +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":296,"skipped":5664,"failed":0} +S +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:06:05.834: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename disruption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pdb that targets all three pods in a test replica set +STEP: Waiting for the pdb to be processed +STEP: First trying to evict a pod which shouldn't be evictable +STEP: Waiting for all pods to be running +Aug 18 00:06:07.903: INFO: pods: 0 < 3 +STEP: locating a running pod +STEP: Updating the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +STEP: Waiting for the pdb to observed all healthy pods +STEP: Patching the pdb to disallow a pod to be evicted +STEP: Waiting for the pdb to be processed +STEP: Waiting for all pods to be running +STEP: locating a running pod +STEP: Deleting the pdb to allow a pod to be evicted +STEP: Waiting for the pdb to be deleted +STEP: Trying to evict the same pod we tried earlier which should now be evictable +STEP: Waiting for all pods to be running +[AfterEach] [sig-apps] DisruptionController + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:06:12.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "disruption-5083" for this suite. 
+ +• [SLOW TEST:6.435 seconds] +[sig-apps] DisruptionController +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should block an eviction until the PDB is updated to allow it [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":297,"skipped":5665,"failed":0} +SSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:06:12.268: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a Deployment +STEP: waiting for Deployment to be created +STEP: waiting for all Replicas to be Ready +Aug 18 00:06:12.314: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 18 00:06:12.315: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 18 00:06:12.332: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 18 00:06:12.333: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 18 00:06:12.350: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 18 00:06:12.350: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 18 00:06:12.387: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 18 00:06:12.387: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 and labels map[test-deployment-static:true] +Aug 18 00:06:14.044: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Aug 18 00:06:14.044: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 and labels map[test-deployment-static:true] +Aug 18 00:06:14.220: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 and labels map[test-deployment-static:true] +STEP: patching the Deployment +Aug 18 00:06:14.245: INFO: observed event type ADDED +STEP: 
waiting for Replicas to scale +Aug 18 00:06:14.247: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 +Aug 18 00:06:14.247: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 +Aug 18 00:06:14.247: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 +Aug 18 00:06:14.247: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 +Aug 18 00:06:14.247: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 +Aug 18 00:06:14.247: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 +Aug 18 00:06:14.247: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 +Aug 18 00:06:14.248: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 0 +Aug 18 00:06:14.248: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 +Aug 18 00:06:14.248: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 +Aug 18 00:06:14.248: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:14.248: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:14.248: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:14.248: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:14.259: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:14.259: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:14.283: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:14.284: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:14.314: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:14.314: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:14.322: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 +Aug 18 00:06:14.322: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 +Aug 18 00:06:16.239: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:16.239: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:16.273: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 +STEP: listing Deployments +Aug 18 00:06:16.278: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment +Aug 18 00:06:16.293: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus +Aug 18 00:06:16.305: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Aug 18 00:06:16.311: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 and labels map[test-deployment:updated 
test-deployment-static:true] +Aug 18 00:06:16.344: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Aug 18 00:06:16.365: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Aug 18 00:06:18.072: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Aug 18 00:06:18.244: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Aug 18 00:06:18.287: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Aug 18 00:06:18.297: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Aug 18 00:06:20.066: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus +STEP: fetching the DeploymentStatus +Aug 18 00:06:20.122: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 +Aug 18 00:06:20.123: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 +Aug 18 00:06:20.123: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 +Aug 18 00:06:20.123: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 1 +Aug 18 00:06:20.123: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:20.123: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 3 +Aug 18 00:06:20.123: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 3 +Aug 18 00:06:20.123: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 2 +Aug 18 00:06:20.123: INFO: observed Deployment test-deployment in namespace deployment-9904 with ReadyReplicas 3 +STEP: deleting the Deployment +Aug 18 00:06:20.138: INFO: observed event type MODIFIED +Aug 18 00:06:20.138: INFO: observed event type MODIFIED +Aug 18 00:06:20.138: INFO: observed event type MODIFIED +Aug 18 00:06:20.138: INFO: observed event type MODIFIED +Aug 18 00:06:20.138: INFO: observed event type MODIFIED +Aug 18 00:06:20.138: INFO: observed event type MODIFIED +Aug 18 00:06:20.138: INFO: observed event type MODIFIED +Aug 18 00:06:20.138: INFO: observed event type MODIFIED +Aug 18 00:06:20.138: INFO: observed event type MODIFIED +Aug 18 00:06:20.138: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 18 00:06:20.142: INFO: Log out all the ReplicaSets if there is no deployment created +Aug 18 00:06:20.146: INFO: ReplicaSet "test-deployment-5ddd8b47d8": +&ReplicaSet{ObjectMeta:{test-deployment-5ddd8b47d8 deployment-9904 b6e57eb0-ba56-4668-be25-38fda05f0182 87235 4 2022-08-18 00:06:14 +0000 UTC map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] 
map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 69995283-2848-4168-aaaa-91b192a3141f 0xc008f02f17 0xc008f02f18}] [] [{kube-controller-manager Update apps/v1 2022-08-18 00:06:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69995283-2848-4168-aaaa-91b192a3141f\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-18 00:06:20 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 5ddd8b47d8,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.6 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc008f02fa0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Aug 18 00:06:20.150: INFO: pod: "test-deployment-5ddd8b47d8-cztxs": +&Pod{ObjectMeta:{test-deployment-5ddd8b47d8-cztxs test-deployment-5ddd8b47d8- deployment-9904 ac03d733-846e-40dd-8e60-60397f54ca91 87231 0 2022-08-18 00:06:16 +0000 UTC 2022-08-18 00:06:21 +0000 UTC 0xc0034b6648 map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-5ddd8b47d8 b6e57eb0-ba56-4668-be25-38fda05f0182 0xc0034b6677 0xc0034b6678}] [] [{kube-controller-manager Update v1 2022-08-18 00:06:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6e57eb0-ba56-4668-be25-38fda05f0182\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:06:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7dkjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.6,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7dkjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.
205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:192.168.2.39,StartTime:2022-08-18 00:06:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:06:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.6,ImageID:k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,ContainerID:containerd://8838fd6ca68ea1c1af6eeb9b616c1652017deac970e14001369ed2a526fee450,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Aug 18 00:06:20.150: INFO: ReplicaSet "test-deployment-6cdc5bc678": +&ReplicaSet{ObjectMeta:{test-deployment-6cdc5bc678 deployment-9904 8faaf56e-1f66-49a7-a7f0-d5fe6008b445 87066 3 2022-08-18 00:06:12 +0000 UTC map[pod-template-hash:6cdc5bc678 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 69995283-2848-4168-aaaa-91b192a3141f 0xc008f03007 0xc008f03008}] [] [{kube-controller-manager Update apps/v1 2022-08-18 00:06:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69995283-2848-4168-aaaa-91b192a3141f\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-18 00:06:16 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 6cdc5bc678,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:6cdc5bc678 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc008f03090 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Aug 18 00:06:20.154: INFO: ReplicaSet "test-deployment-854fdc678": +&ReplicaSet{ObjectMeta:{test-deployment-854fdc678 deployment-9904 2b0d020b-c28e-4310-84e6-8d5de97fccf1 87227 2 2022-08-18 00:06:16 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 69995283-2848-4168-aaaa-91b192a3141f 0xc008f030f7 0xc008f030f8}] [] [{kube-controller-manager Update apps/v1 2022-08-18 00:06:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"69995283-2848-4168-aaaa-91b192a3141f\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-18 00:06:18 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 854fdc678,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc008f03180 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Aug 18 00:06:20.158: INFO: pod: "test-deployment-854fdc678-5tx4l": +&Pod{ObjectMeta:{test-deployment-854fdc678-5tx4l test-deployment-854fdc678- deployment-9904 0ee9fef9-dcc6-49a5-9774-040b7cd0ecc5 87174 0 2022-08-18 00:06:16 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-854fdc678 2b0d020b-c28e-4310-84e6-8d5de97fccf1 0xc0034b7517 0xc0034b7518}] [] [{kube-controller-manager Update v1 2022-08-18 00:06:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b0d020b-c28e-4310-84e6-8d5de97fccf1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:06:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.226\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k67l4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k67l4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io
/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:192.168.1.226,StartTime:2022-08-18 00:06:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:06:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://1062f7c14a5e78d7de13cb7bb0eaf2647619aacc5f428dc4a94dec79ec17155d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Aug 18 00:06:20.158: INFO: pod: "test-deployment-854fdc678-pv7lg": +&Pod{ObjectMeta:{test-deployment-854fdc678-pv7lg test-deployment-854fdc678- deployment-9904 8a3760d4-b39a-442e-9af3-6c02cd9e0215 87226 0 2022-08-18 00:06:18 +0000 UTC map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-854fdc678 2b0d020b-c28e-4310-84e6-8d5de97fccf1 0xc0034b7707 0xc0034b7708}] [] [{kube-controller-manager Update v1 2022-08-18 00:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b0d020b-c28e-4310-84e6-8d5de97fccf1\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:06:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.115\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qwk62,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qwk62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.i
o/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:06:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:192.168.2.115,StartTime:2022-08-18 00:06:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:06:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://4eecbb300ae9acca2d6ff45c460e55fcb6c2d633ad9ca0c55c4b24c96afc78b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.115,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:06:20.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9904" for this suite. 
+ +• [SLOW TEST:7.902 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run the lifecycle of a Deployment [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":298,"skipped":5681,"failed":0} +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:06:20.171: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating projection with secret that has name projected-secret-test-map-2acfac23-badf-4947-9b9b-400d96d61515 +STEP: Creating a pod to test consume secrets +Aug 18 00:06:20.215: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-78019629-7978-4c2f-a2cd-0054a2e675d1" in namespace "projected-3145" to be "Succeeded or Failed" +Aug 18 00:06:20.218: INFO: Pod "pod-projected-secrets-78019629-7978-4c2f-a2cd-0054a2e675d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.831011ms +Aug 18 00:06:22.225: INFO: Pod "pod-projected-secrets-78019629-7978-4c2f-a2cd-0054a2e675d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009071656s +Aug 18 00:06:24.231: INFO: Pod "pod-projected-secrets-78019629-7978-4c2f-a2cd-0054a2e675d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01593703s +Aug 18 00:06:26.238: INFO: Pod "pod-projected-secrets-78019629-7978-4c2f-a2cd-0054a2e675d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022502349s +STEP: Saw pod success +Aug 18 00:06:26.238: INFO: Pod "pod-projected-secrets-78019629-7978-4c2f-a2cd-0054a2e675d1" satisfied condition "Succeeded or Failed" +Aug 18 00:06:26.242: INFO: Trying to get logs from node 195.17.65.231 pod pod-projected-secrets-78019629-7978-4c2f-a2cd-0054a2e675d1 container projected-secret-volume-test: +STEP: delete the pod +Aug 18 00:06:26.284: INFO: Waiting for pod pod-projected-secrets-78019629-7978-4c2f-a2cd-0054a2e675d1 to disappear +Aug 18 00:06:26.288: INFO: Pod pod-projected-secrets-78019629-7978-4c2f-a2cd-0054a2e675d1 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:06:26.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-3145" for this suite. 
+ +• [SLOW TEST:6.129 seconds] +[sig-storage] Projected secret +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":299,"skipped":5681,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:06:26.304: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Aug 18 00:06:26.369: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:06:26.369: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:06:26.375: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:06:26.375: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:06:27.384: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:06:27.385: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:06:27.389: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:06:27.389: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:06:28.383: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:06:28.383: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:06:28.387: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 18 00:06:28.387: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Aug 18 00:06:28.407: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:06:28.407: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:06:28.412: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 18 00:06:28.412: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Wait for the failed daemon pod to be completely deleted. 
+[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7766, will wait for the garbage collector to delete the pods +Aug 18 00:06:29.497: INFO: Deleting DaemonSet.extensions daemon-set took: 10.817653ms +Aug 18 00:06:29.598: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.942553ms +Aug 18 00:06:31.303: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:06:31.303: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 18 00:06:31.305: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"87505"},"items":null} + +Aug 18 00:06:31.308: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"87505"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:06:31.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-7766" for this suite. + +• [SLOW TEST:5.027 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should retry creating failed daemon pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":300,"skipped":5695,"failed":0} +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:06:31.331: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-webhook +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 +STEP: Setting up server cert +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication +STEP: Deploying the custom resource conversion webhook pod +STEP: Wait for the deployment to be ready +Aug 18 00:06:32.147: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Aug 18 00:06:35.183: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 18 00:06:35.189: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Creating a v1 custom resource +STEP: v2 custom resource should be converted +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:06:38.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-webhook-6161" for this suite. +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 + +• [SLOW TEST:7.277 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to convert from CR v1 to CR v2 [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":301,"skipped":5709,"failed":0} +SSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:06:38.632: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name secret-test-dd179ca7-fe7b-4c20-9ee7-401c10b46d3e +STEP: Creating a pod to test consume secrets +Aug 18 00:06:38.695: INFO: Waiting up to 5m0s for pod "pod-secrets-66213107-6f66-4f0b-8497-7c6fb79b8930" in namespace "secrets-3700" to be "Succeeded or Failed" +Aug 18 00:06:38.708: INFO: Pod "pod-secrets-66213107-6f66-4f0b-8497-7c6fb79b8930": Phase="Pending", Reason="", readiness=false. Elapsed: 13.28188ms +Aug 18 00:06:40.713: INFO: Pod "pod-secrets-66213107-6f66-4f0b-8497-7c6fb79b8930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018401035s +Aug 18 00:06:42.719: INFO: Pod "pod-secrets-66213107-6f66-4f0b-8497-7c6fb79b8930": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023946518s +STEP: Saw pod success +Aug 18 00:06:42.719: INFO: Pod "pod-secrets-66213107-6f66-4f0b-8497-7c6fb79b8930" satisfied condition "Succeeded or Failed" +Aug 18 00:06:42.723: INFO: Trying to get logs from node 195.17.65.231 pod pod-secrets-66213107-6f66-4f0b-8497-7c6fb79b8930 container secret-volume-test: +STEP: delete the pod +Aug 18 00:06:42.750: INFO: Waiting for pod pod-secrets-66213107-6f66-4f0b-8497-7c6fb79b8930 to disappear +Aug 18 00:06:42.753: INFO: Pod pod-secrets-66213107-6f66-4f0b-8497-7c6fb79b8930 no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:06:42.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3700" for this suite. +•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":302,"skipped":5713,"failed":0} +SSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:06:42.765: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should allow substituting values in a volume subpath [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test substitution in volume subpath +Aug 18 00:06:42.807: INFO: Waiting up to 5m0s for pod "var-expansion-900a3e71-f56b-4130-b44b-d9862bc8b53b" in namespace "var-expansion-9020" to be "Succeeded or Failed" +Aug 18 00:06:42.811: INFO: Pod "var-expansion-900a3e71-f56b-4130-b44b-d9862bc8b53b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.55445ms +Aug 18 00:06:44.817: INFO: Pod "var-expansion-900a3e71-f56b-4130-b44b-d9862bc8b53b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009924609s +Aug 18 00:06:46.822: INFO: Pod "var-expansion-900a3e71-f56b-4130-b44b-d9862bc8b53b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014580501s +STEP: Saw pod success +Aug 18 00:06:46.822: INFO: Pod "var-expansion-900a3e71-f56b-4130-b44b-d9862bc8b53b" satisfied condition "Succeeded or Failed" +Aug 18 00:06:46.825: INFO: Trying to get logs from node 195.17.65.231 pod var-expansion-900a3e71-f56b-4130-b44b-d9862bc8b53b container dapi-container: +STEP: delete the pod +Aug 18 00:06:46.846: INFO: Waiting for pod var-expansion-900a3e71-f56b-4130-b44b-d9862bc8b53b to disappear +Aug 18 00:06:46.849: INFO: Pod var-expansion-900a3e71-f56b-4130-b44b-d9862bc8b53b no longer exists +[AfterEach] [sig-node] Variable Expansion + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:06:46.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-9020" for this suite. 
+•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":303,"skipped":5726,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:06:46.864: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 18 00:06:46.890: INFO: Creating ReplicaSet my-hostname-basic-cbadfc5e-9522-4ff5-a715-c73f3651c3e1 +Aug 18 00:06:46.904: INFO: Pod name my-hostname-basic-cbadfc5e-9522-4ff5-a715-c73f3651c3e1: Found 0 pods out of 1 +Aug 18 00:06:51.908: INFO: Pod name my-hostname-basic-cbadfc5e-9522-4ff5-a715-c73f3651c3e1: Found 1 pods out of 1 +Aug 18 00:06:51.908: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-cbadfc5e-9522-4ff5-a715-c73f3651c3e1" is running +Aug 18 00:06:51.912: INFO: Pod "my-hostname-basic-cbadfc5e-9522-4ff5-a715-c73f3651c3e1-l4xlv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-18 00:06:46 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-18 00:06:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-18 00:06:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-08-18 00:06:46 +0000 UTC Reason: Message:}]) +Aug 18 00:06:51.912: INFO: Trying to dial the pod +Aug 18 00:06:56.926: INFO: Controller my-hostname-basic-cbadfc5e-9522-4ff5-a715-c73f3651c3e1: Got expected result from replica 1 [my-hostname-basic-cbadfc5e-9522-4ff5-a715-c73f3651c3e1-l4xlv]: "my-hostname-basic-cbadfc5e-9522-4ff5-a715-c73f3651c3e1-l4xlv", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:06:56.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-822" for this suite. 
+ +• [SLOW TEST:10.072 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":346,"completed":304,"skipped":5733,"failed":0} +SSSSS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:06:56.937: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename cronjob +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a ReplaceConcurrent cronjob +STEP: Ensuring a job is scheduled +STEP: Ensuring exactly one is scheduled +STEP: Ensuring exactly one running job exists by listing jobs explicitly +STEP: Ensuring the job is replaced with a new one +STEP: Removing cronjob +[AfterEach] [sig-apps] CronJob + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:08:01.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "cronjob-1140" for this suite. 
+ +• [SLOW TEST:64.081 seconds] +[sig-apps] CronJob +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should replace jobs when ReplaceConcurrent [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":305,"skipped":5738,"failed":0} +S +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:08:01.019: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Create a ReplicaSet +STEP: Verify that the required pods have come up +Aug 18 00:08:01.051: INFO: Pod name sample-pod: Found 0 pods out of 3 +Aug 18 00:08:06.058: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running +Aug 18 00:08:06.062: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets +STEP: DeleteCollection of the ReplicaSets +STEP: After DeleteCollection verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] ReplicaSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:08:06.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-1344" for this suite. 
+ +• [SLOW TEST:5.081 seconds] +[sig-apps] ReplicaSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should list and delete a collection of ReplicaSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":306,"skipped":5739,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:08:06.100: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod pod-subpath-test-secret-7x2d +STEP: Creating a pod to test atomic-volume-subpath +Aug 18 00:08:06.161: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7x2d" in namespace "subpath-726" to be "Succeeded or Failed" +Aug 18 00:08:06.165: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.62641ms +Aug 18 00:08:08.172: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=true. Elapsed: 2.010447785s +Aug 18 00:08:10.180: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=true. Elapsed: 4.019073177s +Aug 18 00:08:12.185: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=true. Elapsed: 6.024023335s +Aug 18 00:08:14.193: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=true. Elapsed: 8.032102764s +Aug 18 00:08:16.201: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=true. Elapsed: 10.039720266s +Aug 18 00:08:18.208: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=true. Elapsed: 12.046851593s +Aug 18 00:08:20.214: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=true. Elapsed: 14.052545269s +Aug 18 00:08:22.219: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=true. Elapsed: 16.057676538s +Aug 18 00:08:24.225: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=true. Elapsed: 18.063672677s +Aug 18 00:08:26.230: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=true. Elapsed: 20.069214234s +Aug 18 00:08:28.238: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.07631725s +Aug 18 00:08:30.245: INFO: Pod "pod-subpath-test-secret-7x2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.083293879s +STEP: Saw pod success +Aug 18 00:08:30.245: INFO: Pod "pod-subpath-test-secret-7x2d" satisfied condition "Succeeded or Failed" +Aug 18 00:08:30.248: INFO: Trying to get logs from node 195.17.65.231 pod pod-subpath-test-secret-7x2d container test-container-subpath-secret-7x2d: +STEP: delete the pod +Aug 18 00:08:30.275: INFO: Waiting for pod pod-subpath-test-secret-7x2d to disappear +Aug 18 00:08:30.278: INFO: Pod pod-subpath-test-secret-7x2d no longer exists +STEP: Deleting pod pod-subpath-test-secret-7x2d +Aug 18 00:08:30.278: INFO: Deleting pod "pod-subpath-test-secret-7x2d" in namespace "subpath-726" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:08:30.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-726" for this suite. + +• [SLOW TEST:24.192 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]","total":346,"completed":307,"skipped":5765,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:08:30.293: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap with name configmap-test-volume-map-586a53c8-2eed-4c98-8177-1b00fed37be2 +STEP: Creating a pod to test consume configMaps +Aug 18 00:08:30.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-70049dff-025a-4d07-b509-5dee5b8cbef5" in namespace "configmap-507" to be "Succeeded or Failed" +Aug 18 00:08:30.340: INFO: Pod "pod-configmaps-70049dff-025a-4d07-b509-5dee5b8cbef5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.723375ms +Aug 18 00:08:32.346: INFO: Pod "pod-configmaps-70049dff-025a-4d07-b509-5dee5b8cbef5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008934886s +Aug 18 00:08:34.354: INFO: Pod "pod-configmaps-70049dff-025a-4d07-b509-5dee5b8cbef5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016767806s +STEP: Saw pod success +Aug 18 00:08:34.354: INFO: Pod "pod-configmaps-70049dff-025a-4d07-b509-5dee5b8cbef5" satisfied condition "Succeeded or Failed" +Aug 18 00:08:34.358: INFO: Trying to get logs from node 195.17.65.231 pod pod-configmaps-70049dff-025a-4d07-b509-5dee5b8cbef5 container agnhost-container: +STEP: delete the pod +Aug 18 00:08:34.379: INFO: Waiting for pod pod-configmaps-70049dff-025a-4d07-b509-5dee5b8cbef5 to disappear +Aug 18 00:08:34.383: INFO: Pod pod-configmaps-70049dff-025a-4d07-b509-5dee5b8cbef5 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:08:34.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-507" for this suite. +•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":308,"skipped":5792,"failed":0} +SSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:08:34.396: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should fail to create ConfigMap with empty key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating configMap that has name configmap-test-emptyKey-53294125-6cdb-46bc-a61e-32d01c3f8099 +[AfterEach] [sig-node] ConfigMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:08:34.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7688" for this suite. 
+•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":309,"skipped":5803,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:08:34.449: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 18 00:08:34.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff0a6774-11ac-4462-9beb-109096b862aa" in namespace "projected-4317" to be "Succeeded or Failed" +Aug 18 00:08:34.493: INFO: Pod "downwardapi-volume-ff0a6774-11ac-4462-9beb-109096b862aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.665878ms +Aug 18 00:08:36.500: INFO: Pod "downwardapi-volume-ff0a6774-11ac-4462-9beb-109096b862aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01173468s +Aug 18 00:08:38.508: INFO: Pod "downwardapi-volume-ff0a6774-11ac-4462-9beb-109096b862aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020076586s +STEP: Saw pod success +Aug 18 00:08:38.508: INFO: Pod "downwardapi-volume-ff0a6774-11ac-4462-9beb-109096b862aa" satisfied condition "Succeeded or Failed" +Aug 18 00:08:38.512: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-ff0a6774-11ac-4462-9beb-109096b862aa container client-container: +STEP: delete the pod +Aug 18 00:08:38.539: INFO: Waiting for pod downwardapi-volume-ff0a6774-11ac-4462-9beb-109096b862aa to disappear +Aug 18 00:08:38.551: INFO: Pod downwardapi-volume-ff0a6774-11ac-4462-9beb-109096b862aa no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:08:38.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4317" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":310,"skipped":5829,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:08:38.564: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating pod pod-subpath-test-downwardapi-gf2m +STEP: Creating a pod to test atomic-volume-subpath +Aug 18 00:08:38.611: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gf2m" in namespace "subpath-6868" to be "Succeeded or Failed" +Aug 18 00:08:38.619: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113057ms +Aug 18 00:08:40.628: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=true. Elapsed: 2.016819296s +Aug 18 00:08:42.635: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=true. Elapsed: 4.024530606s +Aug 18 00:08:44.645: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=true. Elapsed: 6.034092586s +Aug 18 00:08:46.652: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=true. Elapsed: 8.04126101s +Aug 18 00:08:48.667: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=true. Elapsed: 10.055836448s +Aug 18 00:08:50.674: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=true. Elapsed: 12.062714422s +Aug 18 00:08:52.680: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=true. Elapsed: 14.068797595s +Aug 18 00:08:54.686: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=true. Elapsed: 16.075357526s +Aug 18 00:08:56.694: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=true. Elapsed: 18.082807067s +Aug 18 00:08:58.701: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=true. Elapsed: 20.090351801s +Aug 18 00:09:00.710: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Running", Reason="", readiness=false. Elapsed: 22.098575782s +Aug 18 00:09:02.717: INFO: Pod "pod-subpath-test-downwardapi-gf2m": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.105678601s +STEP: Saw pod success +Aug 18 00:09:02.717: INFO: Pod "pod-subpath-test-downwardapi-gf2m" satisfied condition "Succeeded or Failed" +Aug 18 00:09:02.721: INFO: Trying to get logs from node 195.17.65.231 pod pod-subpath-test-downwardapi-gf2m container test-container-subpath-downwardapi-gf2m: +STEP: delete the pod +Aug 18 00:09:02.744: INFO: Waiting for pod pod-subpath-test-downwardapi-gf2m to disappear +Aug 18 00:09:02.747: INFO: Pod pod-subpath-test-downwardapi-gf2m no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-gf2m +Aug 18 00:09:02.747: INFO: Deleting pod "pod-subpath-test-downwardapi-gf2m" in namespace "subpath-6868" +[AfterEach] [sig-storage] Subpath + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:09:02.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-6868" for this suite. + +• [SLOW TEST:24.196 seconds] +[sig-storage] Subpath +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":346,"completed":311,"skipped":5868,"failed":0} +SSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:09:02.761: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: set up a multi version CRD +Aug 18 00:09:02.787: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: mark a version not serverd +STEP: check the unserved version gets removed +STEP: check the other version is not changed +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:09:39.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-8480" for this suite. 
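The OpenAPI behavior verified above can be observed against any multi-version CRD: flipping a version's `served` flag to false removes that version's definitions from the aggregated spec while the other version stays published. A sketch, assuming a two-version CRD `widgets.example.com` with kind `Widget` already exists (all names hypothetical):

```shell
# v1 definitions are present while v1 is served:
kubectl get --raw /openapi/v2 | grep -c 'com.example.v1.Widget'
# Mark v1 as no longer served (index 0 assumed to be v1):
kubectl patch crd widgets.example.com --type=json \
  -p='[{"op": "replace", "path": "/spec/versions/0/served", "value": false}]'
# After the spec republishes, the v1 definition disappears; v2 is unchanged:
kubectl get --raw /openapi/v2 | grep -c 'com.example.v1.Widget' || echo "v1 removed"
```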
+ +• [SLOW TEST:37.165 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + removes definition from spec when one version gets changed to not be served [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":312,"skipped":5875,"failed":0} +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:09:39.928: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +Aug 18 00:09:45.995: INFO: 85 pods remaining +Aug 18 00:09:45.995: INFO: 80 pods has nil DeletionTimestamp +Aug 18 00:09:45.995: INFO: +Aug 18 00:09:47.032: INFO: 79 pods remaining +Aug 18 00:09:47.032: INFO: 73 pods has nil DeletionTimestamp +Aug 18 00:09:47.032: INFO: +Aug 18 00:09:47.991: INFO: 68 pods remaining +Aug 18 00:09:47.991: INFO: 60 pods has nil DeletionTimestamp +Aug 18 00:09:47.991: INFO: +Aug 18 00:09:48.989: INFO: 53 pods remaining +Aug 18 00:09:48.989: INFO: 40 pods has nil DeletionTimestamp +Aug 18 00:09:48.989: INFO: +Aug 18 00:09:49.991: INFO: 50 pods remaining +Aug 18 00:09:49.991: INFO: 33 pods has nil DeletionTimestamp +Aug 18 00:09:49.991: INFO: +Aug 18 00:09:50.992: INFO: 41 pods remaining +Aug 18 00:09:50.992: INFO: 20 pods has nil DeletionTimestamp +Aug 18 00:09:50.992: INFO: +Aug 18 00:09:51.990: INFO: 31 pods remaining +Aug 18 00:09:51.990: INFO: 0 pods has nil DeletionTimestamp +Aug 18 00:09:51.990: INFO: +Aug 18 00:09:52.988: INFO: 28 pods remaining +Aug 18 00:09:52.988: INFO: 0 pods has nil DeletionTimestamp +Aug 18 00:09:52.988: INFO: +Aug 18 00:09:53.991: INFO: 20 pods remaining +Aug 18 00:09:53.991: INFO: 0 pods has nil DeletionTimestamp +Aug 18 00:09:53.991: INFO: +Aug 18 00:09:54.987: INFO: 13 pods remaining +Aug 18 00:09:54.987: INFO: 0 pods has nil DeletionTimestamp +Aug 18 00:09:54.987: INFO: +Aug 18 00:09:55.992: INFO: 9 pods remaining +Aug 18 00:09:55.992: INFO: 0 pods has nil DeletionTimestamp +Aug 18 00:09:55.992: INFO: +Aug 18 00:09:56.989: INFO: 1 pods remaining +Aug 18 00:09:56.989: INFO: 0 pods has nil DeletionTimestamp +Aug 18 00:09:56.989: INFO: +STEP: Gathering metrics +Aug 18 00:09:58.028: INFO: The status of Pod kube-controller-manager-195.17.32.244 is Running (Ready = true) +Aug 18 
00:09:58.115: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:09:58.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-5019" for this suite. + +• [SLOW TEST:18.199 seconds] +[sig-api-machinery] Garbage collector +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":313,"skipped":5892,"failed":0} +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:09:58.127: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a ConfigMap +STEP: Ensuring resource quota status captures configMap creation +STEP: Deleting a ConfigMap +STEP: Ensuring resource quota status released usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:10:26.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-9620" for this suite. 
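The quota lifecycle above replays easily by hand: a ResourceQuota with a `count/configmaps` hard limit shows usage rise when a ConfigMap is created and fall when it is deleted. Usage is recalculated asynchronously, so allow a few seconds between steps (names below are hypothetical):

```shell
kubectl create namespace quota-demo
kubectl create quota cm-quota --hard=count/configmaps=2 -n quota-demo
kubectl create configmap tracked -n quota-demo --from-literal=k=v
# Shows count/configmaps usage of 1 once the quota controller catches up:
kubectl get quota cm-quota -n quota-demo -o jsonpath='{.status.used}'; echo
kubectl delete configmap tracked -n quota-demo
kubectl delete namespace quota-demo
```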
+ +• [SLOW TEST:28.102 seconds] +[sig-api-machinery] ResourceQuota +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a configMap. [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":346,"completed":314,"skipped":5892,"failed":0} +SS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:10:26.229: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename container-runtime +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: create the container +STEP: wait for the container to reach Succeeded +STEP: get the container status +STEP: the container should be terminated +STEP: the termination message should be set +Aug 18 00:10:30.306: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container +[AfterEach] [sig-node] Container Runtime + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:10:30.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-runtime-471" for this suite. 
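The odd-looking assertion above (`Expected: &{} to match ...`) is checking for an empty termination message: with `terminationMessagePolicy: FallbackToLogsOnError`, container logs are copied into the message only when the container fails, so a successful container that never writes `/dev/termination-log` reports an empty message. A hand-run sketch (image and names illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.36
    command: ["true"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the container terminates successfully, this prints nothing,
# because the termination message is empty:
kubectl get pod termmsg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
```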
+•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":346,"completed":315,"skipped":5894,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:10:30.341: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 18 00:10:30.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54b20de0-5294-4c8b-82a1-13903d3bcc28" in namespace "projected-4565" to be "Succeeded or Failed" +Aug 18 00:10:30.379: INFO: Pod "downwardapi-volume-54b20de0-5294-4c8b-82a1-13903d3bcc28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427611ms +Aug 18 00:10:32.386: INFO: Pod "downwardapi-volume-54b20de0-5294-4c8b-82a1-13903d3bcc28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011020223s +Aug 18 00:10:34.391: INFO: Pod "downwardapi-volume-54b20de0-5294-4c8b-82a1-13903d3bcc28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016934137s +STEP: Saw pod success +Aug 18 00:10:34.392: INFO: Pod "downwardapi-volume-54b20de0-5294-4c8b-82a1-13903d3bcc28" satisfied condition "Succeeded or Failed" +Aug 18 00:10:34.396: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-54b20de0-5294-4c8b-82a1-13903d3bcc28 container client-container: +STEP: delete the pod +Aug 18 00:10:34.425: INFO: Waiting for pod downwardapi-volume-54b20de0-5294-4c8b-82a1-13903d3bcc28 to disappear +Aug 18 00:10:34.429: INFO: Pod downwardapi-volume-54b20de0-5294-4c8b-82a1-13903d3bcc28 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:10:34.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-4565" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":316,"skipped":5920,"failed":0} +SSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:10:34.445: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-715 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-715 +STEP: creating replication controller externalsvc in namespace services-715 +I0818 00:10:34.520735 20 runners.go:193] Created replication controller with name: externalsvc, namespace: services-715, replica count: 2 +I0818 00:10:37.572403 20 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName +Aug 18 00:10:37.605: INFO: Creating new exec pod +Aug 18 00:10:39.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=services-715 exec execpodwws7t -- /bin/sh -x -c nslookup clusterip-service.services-715.svc.cluster.local' +Aug 18 00:10:40.089: INFO: stderr: "+ nslookup clusterip-service.services-715.svc.cluster.local\n" +Aug 18 00:10:40.089: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-715.svc.cluster.local\tcanonical name = externalsvc.services-715.svc.cluster.local.\nName:\texternalsvc.services-715.svc.cluster.local\nAddress: 10.102.214.146\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-715, will wait for the garbage collector to delete the pods +Aug 18 00:10:40.153: INFO: Deleting ReplicationController externalsvc took: 9.397339ms +Aug 18 00:10:40.354: INFO: Terminating ReplicationController externalsvc pods took: 201.427838ms +Aug 18 00:10:41.789: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:10:41.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-715" for this suite. 
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 + +• [SLOW TEST:7.374 seconds] +[sig-network] Services +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should be able to change the type from ClusterIP to ExternalName [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":317,"skipped":5924,"failed":0} +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:10:41.819: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 18 00:10:41.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b871ee68-7cb1-4448-afab-f3fc6735c8cb" in namespace "downward-api-9998" to be "Succeeded or Failed" +Aug 18 00:10:41.865: INFO: Pod "downwardapi-volume-b871ee68-7cb1-4448-afab-f3fc6735c8cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.347982ms +Aug 18 00:10:43.871: INFO: Pod "downwardapi-volume-b871ee68-7cb1-4448-afab-f3fc6735c8cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012619522s +Aug 18 00:10:45.886: INFO: Pod "downwardapi-volume-b871ee68-7cb1-4448-afab-f3fc6735c8cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027487199s +STEP: Saw pod success +Aug 18 00:10:45.886: INFO: Pod "downwardapi-volume-b871ee68-7cb1-4448-afab-f3fc6735c8cb" satisfied condition "Succeeded or Failed" +Aug 18 00:10:45.890: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-b871ee68-7cb1-4448-afab-f3fc6735c8cb container client-container: +STEP: delete the pod +Aug 18 00:10:45.909: INFO: Waiting for pod downwardapi-volume-b871ee68-7cb1-4448-afab-f3fc6735c8cb to disappear +Aug 18 00:10:45.911: INFO: Pod downwardapi-volume-b871ee68-7cb1-4448-afab-f3fc6735c8cb no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:10:45.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-9998" for this suite. 
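This is the same downward API volume plugin as the memory-limit test earlier, but reading `requests.memory`, and a `divisor` can rescale the reported value. A sketch (image and names illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mem-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi
EOF
# Once the pod completes, the file holds "32" (the request in Mi):
kubectl logs mem-request-demo
```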
+•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":318,"skipped":5942,"failed":0} +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:10:45.927: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 +[It] should complete a service status lifecycle [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating a Service +STEP: watching for the Service to be added +Aug 18 00:10:45.983: INFO: Found Service test-service-zrzf4 in namespace services-263 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Aug 18 00:10:45.983: INFO: Service test-service-zrzf4 created +STEP: Getting /status +Aug 18 00:10:45.986: INFO: Service test-service-zrzf4 has LoadBalancer: {[]} +STEP: patching the ServiceStatus +STEP: watching for the Service to be patched +Aug 18 00:10:45.996: INFO: observed Service test-service-zrzf4 in namespace services-263 with annotations: map[] & LoadBalancer: {[]} +Aug 18 00:10:45.996: INFO: Found Service test-service-zrzf4 in namespace services-263 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Aug 18 00:10:45.996: INFO: Service test-service-zrzf4 has service status patched +STEP: updating the ServiceStatus +Aug 18 00:10:46.006: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated +Aug 18 00:10:46.008: INFO: Observed Service test-service-zrzf4 in namespace services-263 with annotations: map[] & Conditions: {[]} +Aug 18 00:10:46.009: INFO: Observed event: &Service{ObjectMeta:{test-service-zrzf4 services-263 37dd9717-e02d-49be-a2cc-cb7e1962a01e 92049 0 2022-08-18 00:10:45 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-08-18 00:10:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2022-08-18 00:10:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.101.3.3,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.101.3.3],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Aug 18 00:10:46.009: INFO: Found Service test-service-zrzf4 in namespace services-263 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Aug 18 00:10:46.009: INFO: Service test-service-zrzf4 has service status updated +STEP: patching the service +STEP: watching for the Service to be patched +Aug 18 00:10:46.024: INFO: observed Service test-service-zrzf4 in namespace services-263 with labels: map[test-service-static:true] +Aug 18 00:10:46.024: INFO: observed Service test-service-zrzf4 in namespace services-263 with labels: map[test-service-static:true] +Aug 18 00:10:46.024: INFO: observed Service test-service-zrzf4 in namespace services-263 with labels: map[test-service-static:true] +Aug 18 00:10:46.024: INFO: Found Service test-service-zrzf4 in namespace services-263 with labels: map[test-service:patched test-service-static:true] +Aug 18 00:10:46.024: INFO: Service test-service-zrzf4 patched +STEP: deleting the service +STEP: watching for the Service to be deleted +Aug 18 00:10:46.053: INFO: Observed event: ADDED +Aug 18 00:10:46.053: INFO: Observed event: MODIFIED +Aug 18 00:10:46.053: INFO: Observed event: MODIFIED +Aug 18 00:10:46.054: INFO: Observed event: MODIFIED +Aug 18 00:10:46.054: INFO: Found Service test-service-zrzf4 in namespace services-263 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Aug 18 00:10:46.054: INFO: Service test-service-zrzf4 deleted +[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:10:46.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-263" for this suite. 
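The status manipulation above goes through the Service's `/status` subresource, which a plain `kubectl apply` never touches; 203.0.113.1 comes from a reserved documentation range, which is why the test can set it without owning the address. One way to replay the patch is through `kubectl proxy` (port and names hypothetical):

```shell
kubectl proxy --port=8001 &
sleep 1
# Merge-patch only the status subresource of the service:
curl -s -X PATCH \
  -H 'Content-Type: application/merge-patch+json' \
  --data '{"status":{"loadBalancer":{"ingress":[{"ip":"203.0.113.1"}]}}}' \
  http://127.0.0.1:8001/api/v1/namespaces/default/services/demo-svc/status
kill %1
```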
+[AfterEach] [sig-network] Services + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 +•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":319,"skipped":5961,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:10:46.070: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating secret with name secret-test-4a78b405-7218-4129-8cc9-e3b3a89fe7e5 +STEP: Creating a pod to test consume secrets +Aug 18 00:10:46.113: INFO: Waiting up to 5m0s for pod "pod-secrets-ae49bf4c-3bc1-4e64-8108-f1804ee2e5ff" in namespace "secrets-3611" to be "Succeeded or Failed" +Aug 18 00:10:46.119: INFO: Pod "pod-secrets-ae49bf4c-3bc1-4e64-8108-f1804ee2e5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 5.926152ms +Aug 18 00:10:48.125: INFO: Pod "pod-secrets-ae49bf4c-3bc1-4e64-8108-f1804ee2e5ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011535119s +Aug 18 00:10:50.130: INFO: Pod "pod-secrets-ae49bf4c-3bc1-4e64-8108-f1804ee2e5ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016773131s +STEP: Saw pod success +Aug 18 00:10:50.130: INFO: Pod "pod-secrets-ae49bf4c-3bc1-4e64-8108-f1804ee2e5ff" satisfied condition "Succeeded or Failed" +Aug 18 00:10:50.134: INFO: Trying to get logs from node 195.17.65.231 pod pod-secrets-ae49bf4c-3bc1-4e64-8108-f1804ee2e5ff container secret-env-test: +STEP: delete the pod +Aug 18 00:10:50.159: INFO: Waiting for pod pod-secrets-ae49bf4c-3bc1-4e64-8108-f1804ee2e5ff to disappear +Aug 18 00:10:50.163: INFO: Pod pod-secrets-ae49bf4c-3bc1-4e64-8108-f1804ee2e5ff no longer exists +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:10:50.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3611" for this suite. 
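The secret consumption above maps one Secret key into an environment variable through `secretKeyRef`, and the test then reads the variable back from the container. Minimal sketch (names illustrative):

```shell
kubectl create secret generic env-demo-secret --from-literal=SECRET_DATA=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.36
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-demo-secret
          key: SECRET_DATA
EOF
kubectl logs secret-env-demo   # prints value-1 once the pod completes
```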
+•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":320,"skipped":5976,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:10:50.177: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward api env vars +Aug 18 00:10:50.212: INFO: Waiting up to 5m0s for pod "downward-api-541fb911-a90d-4d86-9aa9-3e8e637cceb6" in namespace "downward-api-5901" to be "Succeeded or Failed" +Aug 18 00:10:50.217: INFO: Pod "downward-api-541fb911-a90d-4d86-9aa9-3e8e637cceb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.855681ms +Aug 18 00:10:52.222: INFO: Pod "downward-api-541fb911-a90d-4d86-9aa9-3e8e637cceb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009968454s +Aug 18 00:10:54.231: INFO: Pod "downward-api-541fb911-a90d-4d86-9aa9-3e8e637cceb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018770374s +STEP: Saw pod success +Aug 18 00:10:54.231: INFO: Pod "downward-api-541fb911-a90d-4d86-9aa9-3e8e637cceb6" satisfied condition "Succeeded or Failed" +Aug 18 00:10:54.234: INFO: Trying to get logs from node 195.17.65.231 pod downward-api-541fb911-a90d-4d86-9aa9-3e8e637cceb6 container dapi-container: +STEP: delete the pod +Aug 18 00:10:54.263: INFO: Waiting for pod downward-api-541fb911-a90d-4d86-9aa9-3e8e637cceb6 to disappear +Aug 18 00:10:54.266: INFO: Pod downward-api-541fb911-a90d-4d86-9aa9-3e8e637cceb6 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:10:54.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5901" for this suite. 
+•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":321,"skipped":6039,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:10:54.278: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] Pods Set QOS Class + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [sig-node] Pods Extended + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:10:54.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1213" for this suite. +•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":322,"skipped":6116,"failed":0} +S +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:10:54.335: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 +STEP: Creating service test in namespace statefulset-3727 +[It] Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Looking for a node to schedule stateful set and pod +STEP: Creating pod with conflicting port in namespace statefulset-3727 +STEP: Waiting until pod test-pod will start running in namespace statefulset-3727 +STEP: Creating statefulset with conflicting port in 
namespace statefulset-3727 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3727 +Aug 18 00:10:56.427: INFO: Observed stateful pod in namespace: statefulset-3727, name: ss-0, uid: 5a14651c-d88a-4b8e-8e35-76250cafd967, status phase: Pending. Waiting for statefulset controller to delete. +Aug 18 00:10:56.448: INFO: Observed stateful pod in namespace: statefulset-3727, name: ss-0, uid: 5a14651c-d88a-4b8e-8e35-76250cafd967, status phase: Failed. Waiting for statefulset controller to delete. +Aug 18 00:10:56.459: INFO: Observed stateful pod in namespace: statefulset-3727, name: ss-0, uid: 5a14651c-d88a-4b8e-8e35-76250cafd967, status phase: Failed. Waiting for statefulset controller to delete. +Aug 18 00:10:56.469: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3727 +STEP: Removing pod with conflicting port in namespace statefulset-3727 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3727 and will be in running state +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 +Aug 18 00:10:58.504: INFO: Deleting all statefulset in ns statefulset-3727 +Aug 18 00:10:58.508: INFO: Scaling statefulset ss to 0 +Aug 18 00:11:08.532: INFO: Waiting for statefulset status.replicas updated to 0 +Aug 18 00:11:08.537: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:11:08.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-3727" for this suite. 
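The sequence above is driven by a hostPort conflict: a bare pod already holds the port on the target node, so `ss-0` lands in `Failed`, the StatefulSet controller deletes and recreates it, and only after the conflicting pod is removed can the replacement reach `Running`. The two moving parts, by hand (hypothetical names):

```shell
# Watch the controller repeatedly delete and recreate ss-0 while the
# port is still held by the bare pod:
kubectl get pod ss-0 -n statefulset-demo -w &
# Free the port; the next recreated ss-0 should reach Running:
kubectl delete pod test-pod -n statefulset-demo
```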
+ +• [SLOW TEST:14.226 seconds] +[sig-apps] StatefulSet +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 + Should recreate evicted statefulset [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":323,"skipped":6117,"failed":0} +S +------------------------------ +[sig-node] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:11:08.562: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test override arguments +Aug 18 00:11:08.600: INFO: Waiting up to 5m0s for pod "client-containers-3a666acf-4a11-4004-bd25-b6f224333782" in namespace "containers-6891" to be "Succeeded or Failed" +Aug 18 00:11:08.603: INFO: Pod "client-containers-3a666acf-4a11-4004-bd25-b6f224333782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.556826ms +Aug 18 00:11:10.610: INFO: Pod "client-containers-3a666acf-4a11-4004-bd25-b6f224333782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009595914s +Aug 18 00:11:12.617: INFO: Pod "client-containers-3a666acf-4a11-4004-bd25-b6f224333782": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01684665s +STEP: Saw pod success +Aug 18 00:11:12.617: INFO: Pod "client-containers-3a666acf-4a11-4004-bd25-b6f224333782" satisfied condition "Succeeded or Failed" +Aug 18 00:11:12.620: INFO: Trying to get logs from node 195.17.65.231 pod client-containers-3a666acf-4a11-4004-bd25-b6f224333782 container agnhost-container: +STEP: delete the pod +Aug 18 00:11:12.643: INFO: Waiting for pod client-containers-3a666acf-4a11-4004-bd25-b6f224333782 to disappear +Aug 18 00:11:12.653: INFO: Pod client-containers-3a666acf-4a11-4004-bd25-b6f224333782 no longer exists +[AfterEach] [sig-node] Docker Containers + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:11:12.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-6891" for this suite. 
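The "docker cmd" override above is just `args:` without `command:`: the image's default CMD is replaced while any ENTRYPOINT is preserved. With an image that defines no ENTRYPOINT, the supplied args run directly (image illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox:1.36
    # busybox's default CMD is "sh"; these args replace it.
    args: ["echo", "overridden", "arguments"]
EOF
kubectl logs args-override-demo   # prints: overridden arguments
```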
+•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":324,"skipped":6118,"failed":0} +SSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:11:12.670: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 18 00:11:12.703: INFO: Got root ca configmap in namespace "svcaccounts-3624" +Aug 18 00:11:12.709: INFO: Deleted root ca configmap in namespace "svcaccounts-3624" +STEP: waiting for a new root ca configmap created +Aug 18 00:11:13.217: INFO: Recreated root ca configmap in namespace "svcaccounts-3624" +Aug 18 00:11:13.223: INFO: Updated root ca configmap in namespace "svcaccounts-3624" +STEP: waiting for the root ca configmap reconciled +Aug 18 00:11:13.728: INFO: Reconciled root ca configmap in namespace "svcaccounts-3624" +[AfterEach] [sig-auth] ServiceAccounts + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:11:13.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-3624" for this suite. +•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":325,"skipped":6121,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:11:13.743: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Aug 18 00:11:13.778: INFO: Waiting up to 5m0s for pod "pod-c6478546-2c68-443f-b3ef-f3b00e0fdd44" in namespace "emptydir-8312" to be "Succeeded or Failed" +Aug 18 00:11:13.784: INFO: Pod "pod-c6478546-2c68-443f-b3ef-f3b00e0fdd44": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.58034ms +Aug 18 00:11:15.792: INFO: Pod "pod-c6478546-2c68-443f-b3ef-f3b00e0fdd44": Phase="Running", Reason="", readiness=true. Elapsed: 2.013383738s +Aug 18 00:11:17.799: INFO: Pod "pod-c6478546-2c68-443f-b3ef-f3b00e0fdd44": Phase="Running", Reason="", readiness=false. Elapsed: 4.020837366s +Aug 18 00:11:19.807: INFO: Pod "pod-c6478546-2c68-443f-b3ef-f3b00e0fdd44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028276111s +STEP: Saw pod success +Aug 18 00:11:19.807: INFO: Pod "pod-c6478546-2c68-443f-b3ef-f3b00e0fdd44" satisfied condition "Succeeded or Failed" +Aug 18 00:11:19.812: INFO: Trying to get logs from node 195.17.65.231 pod pod-c6478546-2c68-443f-b3ef-f3b00e0fdd44 container test-container: +STEP: delete the pod +Aug 18 00:11:19.837: INFO: Waiting for pod pod-c6478546-2c68-443f-b3ef-f3b00e0fdd44 to disappear +Aug 18 00:11:19.843: INFO: Pod pod-c6478546-2c68-443f-b3ef-f3b00e0fdd44 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:11:19.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-8312" for this suite. + +• [SLOW TEST:6.112 seconds] +[sig-storage] EmptyDir volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":326,"skipped":6168,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:11:19.858: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[BeforeEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 +STEP: creating an pod +Aug 18 00:11:19.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8513 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Aug 18 00:11:19.972: INFO: stderr: "" +Aug 18 00:11:19.972: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Waiting for log generator to start. 
+Aug 18 00:11:19.972: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Aug 18 00:11:19.972: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8513" to be "running and ready, or succeeded" +Aug 18 00:11:19.980: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.566643ms +Aug 18 00:11:21.990: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.01721538s +Aug 18 00:11:21.990: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Aug 18 00:11:21.990: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] +STEP: checking for a matching strings +Aug 18 00:11:21.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8513 logs logs-generator logs-generator' +Aug 18 00:11:22.067: INFO: stderr: "" +Aug 18 00:11:22.067: INFO: stdout: "I0818 00:11:21.069189 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/h6nq 520\nI0818 00:11:21.269335 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/fqs6 394\nI0818 00:11:21.469511 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/qthg 296\nI0818 00:11:21.669970 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/xsq6 367\nI0818 00:11:21.869294 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/gjj 252\n" +STEP: limiting log lines +Aug 18 00:11:22.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8513 logs logs-generator logs-generator --tail=1' +Aug 18 00:11:22.143: INFO: stderr: "" +Aug 18 00:11:22.143: INFO: stdout: "I0818 00:11:22.069698 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/29tn 382\n" +Aug 18 00:11:22.143: INFO: got output "I0818 00:11:22.069698 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/29tn 382\n" +STEP: limiting log bytes +Aug 18 00:11:22.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8513 logs logs-generator logs-generator --limit-bytes=1' +Aug 18 00:11:22.233: INFO: stderr: "" +Aug 18 00:11:22.233: INFO: stdout: "I" +Aug 18 00:11:22.233: INFO: got output "I" +STEP: exposing timestamps +Aug 18 00:11:22.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8513 logs logs-generator logs-generator --tail=1 --timestamps' +Aug 18 00:11:22.307: INFO: stderr: "" +Aug 18 00:11:22.307: INFO: stdout: "2022-08-18T00:11:22.270230203Z I0818 00:11:22.270047 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/ddfl 243\n" +Aug 18 00:11:22.307: INFO: got output "2022-08-18T00:11:22.270230203Z I0818 00:11:22.270047 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/ddfl 243\n" +STEP: restricting to a time range +Aug 18 00:11:24.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8513 logs logs-generator logs-generator --since=1s' +Aug 18 00:11:24.914: INFO: stderr: "" +Aug 18 00:11:24.914: INFO: stdout: "I0818 00:11:24.069404 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/56v 350\nI0818 00:11:24.269782 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/kznf 507\nI0818 00:11:24.470166 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/jjxs 293\nI0818 00:11:24.669309 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/lhmj 
331\nI0818 00:11:24.869667 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/ptl9 477\n" +Aug 18 00:11:24.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8513 logs logs-generator logs-generator --since=24h' +Aug 18 00:11:25.144: INFO: stderr: "" +Aug 18 00:11:25.144: INFO: stdout: "I0818 00:11:21.069189 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/h6nq 520\nI0818 00:11:21.269335 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/fqs6 394\nI0818 00:11:21.469511 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/qthg 296\nI0818 00:11:21.669970 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/xsq6 367\nI0818 00:11:21.869294 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/gjj 252\nI0818 00:11:22.069698 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/29tn 382\nI0818 00:11:22.270047 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/ddfl 243\nI0818 00:11:22.469333 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/cxs 318\nI0818 00:11:22.669813 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/cmj 299\nI0818 00:11:22.870154 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/x948 425\nI0818 00:11:23.069601 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/4dk4 457\nI0818 00:11:23.269989 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/v4kb 369\nI0818 00:11:23.469393 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/fz8 372\nI0818 00:11:23.669739 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/hr8 380\nI0818 00:11:23.870138 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/gmj 420\nI0818 00:11:24.069404 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/56v 350\nI0818 00:11:24.269782 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/kznf 507\nI0818 00:11:24.470166 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/jjxs 293\nI0818 00:11:24.669309 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/lhmj 331\nI0818 00:11:24.869667 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/ptl9 477\n" +[AfterEach] Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416 +Aug 18 00:11:25.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8513 delete pod logs-generator' +Aug 18 00:11:25.941: INFO: stderr: "" +Aug 18 00:11:25.941: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:11:25.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8513" for this suite. 
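The filtering pass above walks through the main `kubectl logs` selectors, all of which compose freely against any running pod (here the test's own `logs-generator`):

```shell
kubectl logs logs-generator                        # full log
kubectl logs logs-generator --tail=1               # last line only
kubectl logs logs-generator --limit-bytes=1        # first byte only
kubectl logs logs-generator --tail=1 --timestamps  # prefix RFC3339 timestamps
kubectl logs logs-generator --since=1s             # entries from the last second
kubectl logs logs-generator --since=24h            # effectively everything
```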
+ +• [SLOW TEST:6.098 seconds] +[sig-cli] Kubectl client +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl logs + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1408 + should be able to retrieve and filter logs [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":346,"completed":327,"skipped":6220,"failed":0} +SSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:11:25.956: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Aug 18 00:11:26.017: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:26.017: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:26.021: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:11:26.021: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:11:27.030: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:27.030: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:27.034: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:11:27.034: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:11:28.032: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:28.032: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:28.037: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 18 00:11:28.037: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Aug 18 00:11:28.070: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:28.070: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:28.074: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 18 00:11:28.074: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:11:29.082: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:29.082: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:29.089: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 18 00:11:29.089: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:11:30.084: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:30.084: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:30.087: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 18 00:11:30.087: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:11:31.081: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:31.081: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:31.084: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 18 00:11:31.084: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:11:32.083: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:32.083: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:32.087: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 18 00:11:32.087: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:11:33.084: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:33.084: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:11:33.088: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 18 00:11:33.088: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6647, will wait for the garbage collector to delete the pods +Aug 18 00:11:33.153: INFO: Deleting DaemonSet.extensions daemon-set took: 8.962349ms +Aug 18 00:11:33.254: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.30292ms +Aug 18 00:11:35.962: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:11:35.962: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 18 00:11:35.965: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"92942"},"items":null} + +Aug 18 00:11:35.968: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"92942"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:11:35.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-6647" for this suite. + +• [SLOW TEST:10.035 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run and stop simple daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":328,"skipped":6227,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:11:35.992: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename hostport +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled +Aug 18 00:11:36.039: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Aug 18 00:11:38.046: INFO: The status of Pod pod1 is Running (Ready = true) +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 195.17.65.231 on the node which pod1 resides and expect scheduled +Aug 18 00:11:38.061: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Aug 18 00:11:40.072: INFO: The 
status of Pod pod2 is Running (Ready = false) +Aug 18 00:11:42.069: INFO: The status of Pod pod2 is Running (Ready = true) +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 195.17.65.231 but use UDP protocol on the node which pod2 resides +Aug 18 00:11:42.081: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Aug 18 00:11:44.090: INFO: The status of Pod pod3 is Running (Ready = true) +Aug 18 00:11:44.102: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Aug 18 00:11:46.113: INFO: The status of Pod e2e-host-exec is Running (Ready = true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 +Aug 18 00:11:46.117: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 195.17.65.231 http://127.0.0.1:54323/hostname] Namespace:hostport-3339 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 18 00:11:46.117: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 18 00:11:46.119: INFO: ExecWithOptions: Clientset creation +Aug 18 00:11:46.119: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-3339/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+195.17.65.231+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 195.17.65.231, port: 54323 +Aug 18 00:11:46.198: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://195.17.65.231:54323/hostname] Namespace:hostport-3339 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 18 00:11:46.198: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 18 00:11:46.199: INFO: ExecWithOptions: Clientset creation +Aug 18 00:11:46.199: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-3339/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F195.17.65.231%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 195.17.65.231, port: 54323 UDP +Aug 18 00:11:46.277: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 195.17.65.231 54323] Namespace:hostport-3339 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Aug 18 00:11:46.277: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +Aug 18 00:11:46.278: INFO: ExecWithOptions: Clientset creation +Aug 18 00:11:46.278: INFO: ExecWithOptions: execute(POST https://10.96.0.1:443/api/v1/namespaces/hostport-3339/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+195.17.65.231+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) +[AfterEach] [sig-network] HostPort + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:11:51.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostport-3339" for this suite. 
+ +• [SLOW TEST:15.384 seconds] +[sig-network] HostPort +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":329,"skipped":6287,"failed":0} +SSSSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:11:51.378: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sysctl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod with one valid and two invalid sysctls +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:11:51.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sysctl-5550" for this suite. 
+•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":330,"skipped":6295,"failed":0} + +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:11:51.419: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Aug 18 00:11:51.455: INFO: Waiting up to 5m0s for pod "pod-324ace13-a7e6-406b-a7a4-8ad8ade32cee" in namespace "emptydir-7927" to be "Succeeded or Failed" +Aug 18 00:11:51.460: INFO: Pod "pod-324ace13-a7e6-406b-a7a4-8ad8ade32cee": Phase="Pending", Reason="", readiness=false. Elapsed: 5.598033ms +Aug 18 00:11:53.471: INFO: Pod "pod-324ace13-a7e6-406b-a7a4-8ad8ade32cee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01662278s +Aug 18 00:11:55.478: INFO: Pod "pod-324ace13-a7e6-406b-a7a4-8ad8ade32cee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022717196s +STEP: Saw pod success +Aug 18 00:11:55.478: INFO: Pod "pod-324ace13-a7e6-406b-a7a4-8ad8ade32cee" satisfied condition "Succeeded or Failed" +Aug 18 00:11:55.481: INFO: Trying to get logs from node 195.17.65.231 pod pod-324ace13-a7e6-406b-a7a4-8ad8ade32cee container test-container: +STEP: delete the pod +Aug 18 00:11:55.511: INFO: Waiting for pod pod-324ace13-a7e6-406b-a7a4-8ad8ade32cee to disappear +Aug 18 00:11:55.514: INFO: Pod pod-324ace13-a7e6-406b-a7a4-8ad8ade32cee no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:11:55.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-7927" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":331,"skipped":6295,"failed":0} +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:11:55.527: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Aug 18 00:11:55.566: INFO: Waiting up to 5m0s for pod "pod-c55afacb-4df6-4b12-a2cf-b6bae37e3969" in namespace "emptydir-1469" to be "Succeeded or Failed" +Aug 18 00:11:55.569: INFO: Pod "pod-c55afacb-4df6-4b12-a2cf-b6bae37e3969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.920872ms +Aug 18 00:11:57.576: INFO: Pod "pod-c55afacb-4df6-4b12-a2cf-b6bae37e3969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010448368s +Aug 18 00:11:59.581: INFO: Pod "pod-c55afacb-4df6-4b12-a2cf-b6bae37e3969": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015326156s +Aug 18 00:12:01.589: INFO: Pod "pod-c55afacb-4df6-4b12-a2cf-b6bae37e3969": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023370747s +STEP: Saw pod success +Aug 18 00:12:01.589: INFO: Pod "pod-c55afacb-4df6-4b12-a2cf-b6bae37e3969" satisfied condition "Succeeded or Failed" +Aug 18 00:12:01.592: INFO: Trying to get logs from node 195.17.65.231 pod pod-c55afacb-4df6-4b12-a2cf-b6bae37e3969 container test-container: +STEP: delete the pod +Aug 18 00:12:01.618: INFO: Waiting for pod pod-c55afacb-4df6-4b12-a2cf-b6bae37e3969 to disappear +Aug 18 00:12:01.621: INFO: Pod pod-c55afacb-4df6-4b12-a2cf-b6bae37e3969 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:12:01.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1469" for this suite. 
+ +• [SLOW TEST:6.107 seconds] +[sig-storage] EmptyDir volumes +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":332,"skipped":6310,"failed":0} +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:12:01.634: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 18 00:12:01.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49bf68a1-6e8f-4471-a8a3-9681c8160649" in namespace "projected-1794" to be "Succeeded or Failed" +Aug 18 00:12:01.680: INFO: Pod "downwardapi-volume-49bf68a1-6e8f-4471-a8a3-9681c8160649": Phase="Pending", Reason="", readiness=false. Elapsed: 2.68911ms +Aug 18 00:12:03.686: INFO: Pod "downwardapi-volume-49bf68a1-6e8f-4471-a8a3-9681c8160649": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008838215s +Aug 18 00:12:05.692: INFO: Pod "downwardapi-volume-49bf68a1-6e8f-4471-a8a3-9681c8160649": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015282s +STEP: Saw pod success +Aug 18 00:12:05.692: INFO: Pod "downwardapi-volume-49bf68a1-6e8f-4471-a8a3-9681c8160649" satisfied condition "Succeeded or Failed" +Aug 18 00:12:05.699: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-49bf68a1-6e8f-4471-a8a3-9681c8160649 container client-container: +STEP: delete the pod +Aug 18 00:12:05.719: INFO: Waiting for pod downwardapi-volume-49bf68a1-6e8f-4471-a8a3-9681c8160649 to disappear +Aug 18 00:12:05.722: INFO: Pod downwardapi-volume-49bf68a1-6e8f-4471-a8a3-9681c8160649 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:12:05.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1794" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":333,"skipped":6310,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:12:05.739: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 18 00:12:05.786: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. +Aug 18 00:12:05.797: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:05.797: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Change node label to blue, check that daemon pod is launched. +Aug 18 00:12:05.824: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:05.824: INFO: Node 195.17.65.231 is running 0 daemon pod, expected 1 +Aug 18 00:12:06.829: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:06.830: INFO: Node 195.17.65.231 is running 0 daemon pod, expected 1 +Aug 18 00:12:07.831: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 18 00:12:07.831: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +STEP: Update the node label to green, and wait for daemons to be unscheduled +Aug 18 00:12:07.859: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 18 00:12:07.859: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set +Aug 18 00:12:08.866: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:08.866: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate +Aug 18 00:12:08.889: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:08.889: INFO: Node 195.17.65.231 is running 0 daemon pod, expected 1 +Aug 18 00:12:09.895: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:09.895: INFO: Node 195.17.65.231 is running 0 daemon pod, expected 1 +Aug 18 00:12:10.896: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:10.896: INFO: Node 195.17.65.231 is running 0 daemon pod, expected 1 +Aug 18 00:12:11.897: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:11.897: INFO: Node 195.17.65.231 is 
running 0 daemon pod, expected 1 +Aug 18 00:12:12.897: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Aug 18 00:12:12.897: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2761, will wait for the garbage collector to delete the pods +Aug 18 00:12:12.970: INFO: Deleting DaemonSet.extensions daemon-set took: 9.428547ms +Aug 18 00:12:13.071: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.930255ms +Aug 18 00:12:15.176: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:15.176: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Aug 18 00:12:15.178: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"93615"},"items":null} + +Aug 18 00:12:15.181: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"93615"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:12:15.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-2761" for this suite. + +• [SLOW TEST:9.485 seconds] +[sig-apps] Daemon set [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should run and stop complex daemon [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":334,"skipped":6349,"failed":0} +SSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:12:15.224: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:143 +[It] should list and delete a collection of DaemonSets [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Aug 18 00:12:15.297: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:12:15.297: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:12:15.300: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:15.300: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:12:16.314: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:12:16.314: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:12:16.319: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Aug 18 00:12:16.319: INFO: Node 195.17.131.205 is running 0 daemon pod, expected 1 +Aug 18 00:12:17.308: INFO: DaemonSet pods can't tolerate node 195.17.131.206 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:12:17.308: INFO: DaemonSet pods can't tolerate node 195.17.32.244 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Aug 18 00:12:17.312: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Aug 18 00:12:17.312: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set +STEP: listing all DeamonSets +STEP: DeleteCollection of the DaemonSets +STEP: Verify that ReplicaSets have been deleted +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:109 +Aug 18 00:12:17.352: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"93674"},"items":null} + +Aug 18 00:12:17.359: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"93674"},"items":[{"metadata":{"name":"daemon-set-tbzsv","generateName":"daemon-set-","namespace":"daemonsets-4913","uid":"71f63bc7-ab4a-49b6-957a-b2e753e2e323","resourceVersion":"93672","creationTimestamp":"2022-08-18T00:12:15Z","labels":{"controller-revision-hash":"5b46c58f6f","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"52421cc2-779c-4cd8-9705-f3a3157c0b3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-08-18T00:12:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52421cc2-779c-4cd8-9705-f3a3157c0b3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-08-18T00:12:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.111\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-rbmrf","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-rbmrf","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"195.17.65.231","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["195.17.65.231"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","o
perator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-18T00:12:15Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-18T00:12:17Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-18T00:12:17Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-18T00:12:15Z"}],"hostIP":"195.17.65.231","podIP":"192.168.1.111","podIPs":[{"ip":"192.168.1.111"}],"startTime":"2022-08-18T00:12:15Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-08-18T00:12:16Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://4055f1bab7792f7bf4da658a6e503a79676050cd15dee82c8dd96ee5170bb60c","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-tfnsk","generateName":"daemon-set-","namespace":"daemonsets-4913","uid":"e17dc103-c906-4614-bf81-2e1391a7e64c","resourceVersion":"93658","creationTimestamp":"2022-08-18T00:12:15Z","labels":{"controller-revision-hash":"5b46c58f6f","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"52421cc2-779c-4cd8-9705-f3a3157c0b3c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-08-18T00:12:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52421cc2-779c-4cd8-9705-f3a3157c0b3c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-08-18T00:12:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.
2.5\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-nfkl9","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-nfkl9","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"195.17.131.205","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["195.17.131.205"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-18T00:12:15Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-18T00:12:16Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-18T00:12:16Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-08-18T00:12:15Z"}],"hostIP":"195.17.131.205","podIP":"192.168.2.5","podIPs":[{"ip":"192.168.2.5"}],"startTime":"2022-08-18T00:12:15Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-08-18T00:12:16Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://a64d008fc7318358215180aac2b868560b4da90dbe313e2b5d02a90ae49cad85","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:12:17.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-4913" for this suite. 
+•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":335,"skipped":6359,"failed":0} +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:12:17.387: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test emptydir volume type on node default medium +Aug 18 00:12:17.417: INFO: Waiting up to 5m0s for pod "pod-a092c6fd-f404-4ba6-8067-4181909ee831" in namespace "emptydir-1826" to be "Succeeded or Failed" +Aug 18 00:12:17.420: INFO: Pod "pod-a092c6fd-f404-4ba6-8067-4181909ee831": Phase="Pending", Reason="", readiness=false. Elapsed: 2.99373ms +Aug 18 00:12:19.427: INFO: Pod "pod-a092c6fd-f404-4ba6-8067-4181909ee831": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009597711s +Aug 18 00:12:21.435: INFO: Pod "pod-a092c6fd-f404-4ba6-8067-4181909ee831": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01730219s +STEP: Saw pod success +Aug 18 00:12:21.435: INFO: Pod "pod-a092c6fd-f404-4ba6-8067-4181909ee831" satisfied condition "Succeeded or Failed" +Aug 18 00:12:21.438: INFO: Trying to get logs from node 195.17.65.231 pod pod-a092c6fd-f404-4ba6-8067-4181909ee831 container test-container: +STEP: delete the pod +Aug 18 00:12:21.464: INFO: Waiting for pod pod-a092c6fd-f404-4ba6-8067-4181909ee831 to disappear +Aug 18 00:12:21.468: INFO: Pod pod-a092c6fd-f404-4ba6-8067-4181909ee831 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:12:21.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1826" for this suite. 
+•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":336,"skipped":6365,"failed":0} +SSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:12:21.478: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:12:27.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-7405" for this suite. +STEP: Destroying namespace "nsdeletetest-6528" for this suite. +Aug 18 00:12:27.599: INFO: Namespace nsdeletetest-6528 was already deleted +STEP: Destroying namespace "nsdeletetest-8635" for this suite. 
+ +• [SLOW TEST:6.127 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":337,"skipped":6368,"failed":0} +SS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:12:27.606: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5366.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5366.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5366.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5366.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Aug 18 00:12:29.686: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:29.691: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:29.696: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:29.700: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:29.704: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:29.709: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:29.712: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:29.716: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:29.717: INFO: Lookups using dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local] + +Aug 18 00:12:34.723: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:34.727: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:34.737: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server 
could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:34.744: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:34.749: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:34.754: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:34.758: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:34.763: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:34.763: INFO: Lookups using dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local] + +Aug 18 00:12:39.723: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:39.730: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:39.734: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:39.738: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:39.742: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:39.745: INFO: Unable to read 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:39.750: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:39.754: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:39.754: INFO: Lookups using dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local] + +Aug 18 00:12:44.722: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:44.726: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:44.732: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:44.735: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:44.739: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:44.743: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:44.747: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:44.751: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find 
the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:44.751: INFO: Lookups using dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local] + +Aug 18 00:12:49.723: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:49.728: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:49.733: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:49.737: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:49.741: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:49.745: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:49.749: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:49.755: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:49.755: INFO: Lookups using dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local] + 
+Aug 18 00:12:54.723: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:54.729: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:54.734: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:54.738: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:54.743: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:54.747: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:54.750: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:54.755: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local from pod dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683: the server could not find the requested resource (get pods dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683) +Aug 18 00:12:54.755: INFO: Lookups using dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5366.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5366.svc.cluster.local jessie_udp@dns-test-service-2.dns-5366.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5366.svc.cluster.local] + +Aug 18 00:12:59.758: INFO: DNS probes using dns-5366/dns-test-18ff3f1c-cfa6-4fde-be73-0ee1b5222683 succeeded + +STEP: deleting the pod +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:12:59.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-5366" for this suite. 
+ +• [SLOW TEST:32.221 seconds] +[sig-network] DNS +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 + should provide DNS for pods for Subdomain [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":338,"skipped":6370,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:12:59.831: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should fail to create secret due to empty secret key [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating projection with secret that has name secret-emptykey-test-413b6619-cab5-446c-afd1-93d8869e5714 +[AfterEach] [sig-node] Secrets + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:12:59.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-3246" for this suite. 
+•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":339,"skipped":6396,"failed":0} +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl describe + should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:12:59.869: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 +[It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 18 00:12:59.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8506 create -f -' +Aug 18 00:13:01.227: INFO: stderr: "" +Aug 18 00:13:01.227: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +Aug 18 00:13:01.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8506 create -f -' +Aug 18 00:13:02.629: INFO: stderr: "" +Aug 18 00:13:02.629: INFO: stdout: "service/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. +Aug 18 00:13:03.634: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 18 00:13:03.634: INFO: Found 1 / 1 +Aug 18 00:13:03.634: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Aug 18 00:13:03.637: INFO: Selector matched 1 pods for map[app:agnhost] +Aug 18 00:13:03.637: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Aug 18 00:13:03.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8506 describe pod agnhost-primary-vkg69' +Aug 18 00:13:03.721: INFO: stderr: "" +Aug 18 00:13:03.721: INFO: stdout: "Name: agnhost-primary-vkg69\nNamespace: kubectl-8506\nPriority: 0\nNode: 195.17.65.231/195.17.65.231\nStart Time: Thu, 18 Aug 2022 00:13:01 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 192.168.1.132\nIPs:\n IP: 192.168.1.132\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://de41c86d530d242eb5a6eede3b08db8d7d6b33103d643e2794394464e9eb1e6c\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 18 Aug 2022 00:13:02 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5zh8f (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-5zh8f:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-8506/agnhost-primary-vkg69 to 195.17.65.231\n Normal Pulled 1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.33\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" +Aug 18 00:13:03.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8506 describe rc agnhost-primary' +Aug 18 00:13:03.815: INFO: stderr: "" +Aug 18 00:13:03.815: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8506\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-primary-vkg69\n" +Aug 18 00:13:03.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8506 describe service agnhost-primary' +Aug 18 00:13:03.901: INFO: stderr: "" +Aug 18 00:13:03.901: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8506\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.105.88.178\nIPs: 10.105.88.178\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.1.132:6379\nSession Affinity: None\nEvents: \n" +Aug 18 00:13:03.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8506 
describe node 195.17.131.205' +Aug 18 00:13:04.011: INFO: stderr: "" +Aug 18 00:13:04.011: INFO: stdout: "Name: 195.17.131.205\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=vsphere-vm.cpu-2.mem-8gb.os-unknown\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=195.17.131.205\n kubernetes.io/os=linux\n node.kubernetes.io/instance-type=vsphere-vm.cpu-2.mem-8gb.os-unknown\nAnnotations: alpha.kubernetes.io/provided-node-ip: 195.17.131.205\n cluster.x-k8s.io/cluster-name: prod\n cluster.x-k8s.io/cluster-namespace: eksa-system\n cluster.x-k8s.io/machine: prod-md-0-5b9857b694-g7fn8\n cluster.x-k8s.io/owner-kind: MachineSet\n cluster.x-k8s.io/owner-name: prod-md-0-5b9857b694\n csi.volume.kubernetes.io/nodeid: {\"csi.vsphere.vmware.com\":\"195.17.131.205\"}\n io.cilium.network.ipv4-cilium-host: 192.168.2.27\n io.cilium.network.ipv4-pod-cidr: 192.168.2.0/24\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 17 Aug 2022 22:19:14 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: 195.17.131.205\n AcquireTime: \n RenewTime: Thu, 18 Aug 2022 00:12:56 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 18 Aug 2022 00:08:19 +0000 Wed, 17 Aug 2022 22:19:14 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 18 Aug 2022 00:08:19 +0000 Wed, 17 Aug 2022 22:19:14 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 18 Aug 2022 00:08:19 +0000 Wed, 17 Aug 2022 22:19:14 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 18 Aug 2022 00:08:19 +0000 Wed, 17 Aug 2022 22:21:47 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n Hostname: 195.17.131.205\n InternalIP: 195.17.131.205\n ExternalIP: 195.17.131.205\nCapacity:\n cpu: 2\n ephemeral-storage: 20604116Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7872476Ki\n pods: 110\nAllocatable:\n cpu: 1930m\n ephemeral-storage: 17915011451\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 7770076Ki\n pods: 110\nSystem Info:\n Machine ID: 22701a42fcf6ab95ca697c44500dd844\n System UUID: 22701a42-fcf6-ab95-ca69-7c44500dd844\n Boot ID: 156ba1e8-369b-4a6a-9a30-89c6d36935a6\n Kernel Version: 5.10.130\n OS Image: Bottlerocket OS 1.9.0 (vmware-k8s-1.23)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.6+bottlerocket\n Kubelet Version: v1.23.7-eks-7709a84\n Kube-Proxy Version: v1.23.7-eks-7709a84\nPodCIDR: 192.168.2.0/24\nPodCIDRs: 192.168.2.0/24\nProviderID: vsphere://421a7022-f6fc-95ab-ca69-7c44500dd844\nNon-terminated Pods: (16 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-6f58b86764-4snx7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 110m\n capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-7b679446f7-x2d65 0 (0%) 0 (0%) 0 (0%) 0 (0%) 110m\n capi-system capi-controller-manager-6ff75d8789-8fldg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 110m\n cert-manager cert-manager-67565ccf5d-zf6kt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 111m\n cert-manager cert-manager-cainjector-654854cb95-cb6v8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 111m\n cert-manager cert-manager-webhook-fc46785b4-gvkf6 
0 (0%) 0 (0%) 0 (0%) 0 (0%) 111m\n eksa-packages eks-anywhere-packages-ddfc7b44-8zssk 100m (5%) 500m (25%) 50Mi (0%) 300Mi (3%) 108m\n etcdadm-bootstrap-provider-system etcdadm-bootstrap-provider-controller-manager-7d898b8f77-xgmtd 100m (5%) 100m (5%) 50Mi (0%) 100Mi (1%) 110m\n etcdadm-controller-system etcdadm-controller-controller-manager-b6f674477-6lsxb 100m (5%) 100m (5%) 50Mi (0%) 100Mi (1%) 110m\n kube-system cilium-hvkwp 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 111m\n kube-system cilium-operator-5799bc594c-b9rnk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 111m\n kube-system kube-proxy-pdhjb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 113m\n kube-system vsphere-cloud-controller-manager-s5246 200m (10%) 0 (0%) 0 (0%) 0 (0%) 113m\n kube-system vsphere-csi-controller-f67d5c78c-l8hxm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89m\n kube-system vsphere-csi-node-f9msr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 113m\n sonobuoy sonobuoy-systemd-logs-daemon-set-77cbce2d26fa4eea-v7n4s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 94m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 600m (31%) 700m (36%)\n memory 250Mi (3%) 500Mi (6%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" +Aug 18 00:13:04.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-3062978391 --namespace=kubectl-8506 describe namespace kubectl-8506' +Aug 18 00:13:04.088: INFO: stderr: "" +Aug 18 00:13:04.089: INFO: stdout: "Name: kubectl-8506\nLabels: e2e-framework=kubectl\n e2e-run=18b3b74a-d7eb-485e-bb7a-38080b026820\n kubernetes.io/metadata.name=kubectl-8506\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:13:04.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8506" for this suite. +•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":346,"completed":340,"skipped":6417,"failed":0} +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:13:04.100: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename sched-preemption +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 +Aug 18 00:13:04.141: INFO: Waiting up to 1m0s for all nodes to be ready +Aug 18 00:14:04.184: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Create pods that use 4/5 of node resources. 
+Aug 18 00:14:04.215: INFO: Created pod: pod0-0-sched-preemption-low-priority +Aug 18 00:14:04.232: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Aug 18 00:14:04.257: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Aug 18 00:14:04.271: INFO: Created pod: pod1-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. +STEP: Run a high priority pod that has same requirements as that of lower priority pod +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:14:22.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-preemption-4210" for this suite. +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 + +• [SLOW TEST:78.290 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 + validates basic preemption works [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":341,"skipped":6437,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:14:22.390: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-9fd007f1-4625-4680-89b2-880f4e0af843 +STEP: Creating the pod +Aug 18 00:14:22.457: INFO: The status of Pod pod-projected-configmaps-1166d57d-b5e4-47a9-9070-4499badcd88b is Pending, waiting for it to be Running (with Ready = true) +Aug 18 00:14:24.464: INFO: The status of Pod pod-projected-configmaps-1166d57d-b5e4-47a9-9070-4499badcd88b is Pending, waiting for it to be Running (with Ready = true) +Aug 18 00:14:26.465: INFO: The status of Pod pod-projected-configmaps-1166d57d-b5e4-47a9-9070-4499badcd88b is Running (Ready = true) +STEP: Updating configmap projected-configmap-test-upd-9fd007f1-4625-4680-89b2-880f4e0af843 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:15:52.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1787" for this suite. 
+ +• [SLOW TEST:90.546 seconds] +[sig-storage] Projected configMap +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":342,"skipped":6474,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods + should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:15:52.937: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 +[It] should be submitted and removed [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: creating the pod +STEP: setting up watch +STEP: submitting the pod to kubernetes +Aug 18 00:15:52.971: INFO: observed the pod list +STEP: verifying the pod is in kubernetes +STEP: verifying pod creation was observed +STEP: deleting the pod gracefully +STEP: verifying pod deletion was observed +[AfterEach] [sig-node] Pods + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:15:57.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-8191" for this suite. 
+•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":343,"skipped":6597,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:15:57.745: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: Creating a pod to test downward API volume plugin +Aug 18 00:15:57.780: INFO: Waiting up to 5m0s for pod "downwardapi-volume-079783d1-0b29-449b-b665-eeea7d6761a2" in namespace "projected-7670" to be "Succeeded or Failed" +Aug 18 00:15:57.784: INFO: Pod "downwardapi-volume-079783d1-0b29-449b-b665-eeea7d6761a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043932ms +Aug 18 00:15:59.791: INFO: Pod "downwardapi-volume-079783d1-0b29-449b-b665-eeea7d6761a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010978842s +Aug 18 00:16:01.803: INFO: Pod "downwardapi-volume-079783d1-0b29-449b-b665-eeea7d6761a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022456756s +STEP: Saw pod success +Aug 18 00:16:01.803: INFO: Pod "downwardapi-volume-079783d1-0b29-449b-b665-eeea7d6761a2" satisfied condition "Succeeded or Failed" +Aug 18 00:16:01.807: INFO: Trying to get logs from node 195.17.65.231 pod downwardapi-volume-079783d1-0b29-449b-b665-eeea7d6761a2 container client-container: +STEP: delete the pod +Aug 18 00:16:01.837: INFO: Waiting for pod downwardapi-volume-079783d1-0b29-449b-b665-eeea7d6761a2 to disappear +Aug 18 00:16:01.841: INFO: Pod downwardapi-volume-079783d1-0b29-449b-b665-eeea7d6761a2 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:16:01.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-7670" for this suite. 
+•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":344,"skipped":6631,"failed":0} +SS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:16:01.855: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename runtimeclass +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[It] should support RuntimeClasses API operations [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +STEP: getting /apis +STEP: getting /apis/node.k8s.io +STEP: getting /apis/node.k8s.io/v1 +STEP: creating +STEP: watching +Aug 18 00:16:01.909: INFO: starting watch +STEP: getting +STEP: listing +STEP: patching +STEP: updating +Aug 18 00:16:01.949: INFO: waiting for watch events with expected annotations +STEP: deleting +STEP: deleting a collection +[AfterEach] [sig-node] RuntimeClass + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:16:01.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "runtimeclass-4839" for this suite. +•{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":346,"completed":345,"skipped":6633,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 +STEP: Creating a kubernetes client +Aug 18 00:16:02.019: INFO: >>> kubeConfig: /tmp/kubeconfig-3062978391 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 +[It] deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +Aug 18 00:16:02.055: INFO: Creating deployment "webserver-deployment" +Aug 18 00:16:02.062: INFO: Waiting for observed generation 1 +Aug 18 00:16:04.082: INFO: Waiting for all required pods to come up +Aug 18 00:16:04.089: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running +Aug 18 00:16:06.103: INFO: Waiting for deployment "webserver-deployment" to complete +Aug 18 00:16:06.111: INFO: Updating deployment "webserver-deployment" with a non-existent image +Aug 18 00:16:06.123: INFO: Updating deployment webserver-deployment +Aug 18 
00:16:06.123: INFO: Waiting for observed generation 2 +Aug 18 00:16:08.134: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Aug 18 00:16:08.139: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Aug 18 00:16:08.142: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Aug 18 00:16:08.154: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Aug 18 00:16:08.154: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Aug 18 00:16:08.157: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Aug 18 00:16:08.164: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Aug 18 00:16:08.164: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Aug 18 00:16:08.177: INFO: Updating deployment webserver-deployment +Aug 18 00:16:08.177: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Aug 18 00:16:08.184: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Aug 18 00:16:08.187: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 +Aug 18 00:16:10.203: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-4499 cf220fb8-1487-4bcc-b911-e0334f0858ae 96781 3 2022-08-18 00:16:02 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-08-18 00:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0051df2b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-08-18 00:16:08 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-566f96c878" is progressing.,LastUpdateTime:2022-08-18 00:16:08 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Aug 18 00:16:10.207: INFO: New ReplicaSet "webserver-deployment-566f96c878" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-566f96c878 deployment-4499 541d0044-7741-40b9-8eb1-da90d76ba8ad 96777 3 2022-08-18 00:16:06 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment cf220fb8-1487-4bcc-b911-e0334f0858ae 0xc0051df6c7 0xc0051df6c8}] [] [{kube-controller-manager Update apps/v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf220fb8-1487-4bcc-b911-e0334f0858ae\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 566f96c878,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0051df768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Aug 18 00:16:10.207: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Aug 18 00:16:10.207: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-4499 096a01ad-f685-4ad6-a3e3-915465c52322 96771 3 2022-08-18 00:16:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment cf220fb8-1487-4bcc-b911-e0334f0858ae 0xc0051df7c7 0xc0051df7c8}] [] [{kube-controller-manager Update apps/v1 2022-08-18 00:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf220fb8-1487-4bcc-b911-e0334f0858ae\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-08-18 00:16:04 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0051df858 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Aug 18 00:16:10.215: 
INFO: Pod "webserver-deployment-566f96c878-8qjcr" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-8qjcr webserver-deployment-566f96c878- deployment-4499 58c29bcd-75a7-44c6-907f-dd6d565dc901 96743 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc00ab01377 0xc00ab01378}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wl5b7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wl5b7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,S
upplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.215: INFO: Pod "webserver-deployment-566f96c878-bsz88" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-bsz88 webserver-deployment-566f96c878- deployment-4499 aa3979e4-d8b1-4bad-9402-edc568725fe4 96629 0 2022-08-18 00:16:06 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc00ab014f0 0xc00ab014f1}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-98npn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-98npn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProb
eTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:,StartTime:2022-08-18 00:16:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.215: INFO: Pod "webserver-deployment-566f96c878-gpvgd" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-gpvgd webserver-deployment-566f96c878- deployment-4499 3a7c34f0-4139-4842-aca8-b2a2853304b7 96662 0 2022-08-18 00:16:06 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc00ab016d7 0xc00ab016d8}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w55kg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w55kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:,StartTime:2022-08-18 00:16:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.216: INFO: Pod "webserver-deployment-566f96c878-h6szx" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-h6szx webserver-deployment-566f96c878- deployment-4499 db194169-ee3d-41d7-b6de-10c1d0817c9e 96647 0 2022-08-18 00:16:06 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc00ab018c7 0xc00ab018c8}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h4phd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4phd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProb
eTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:,StartTime:2022-08-18 00:16:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.216: INFO: Pod "webserver-deployment-566f96c878-lk6jp" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-lk6jp webserver-deployment-566f96c878- deployment-4499 a0c73c4a-1225-478e-a5d0-604cf006ddec 96630 0 2022-08-18 00:16:06 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc00ab01ab7 0xc00ab01ab8}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zbxl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zbxl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:,StartTime:2022-08-18 00:16:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.216: INFO: Pod "webserver-deployment-566f96c878-qqxng" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-qqxng webserver-deployment-566f96c878- deployment-4499 bf715c75-1939-4903-9dbb-78e6f6acafbc 96731 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc00ab01ca7 0xc00ab01ca8}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-55wtb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-55wtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.216: INFO: Pod "webserver-deployment-566f96c878-r4l6g" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-r4l6g webserver-deployment-566f96c878- deployment-4499 8c3cb91c-1c6b-4b55-9887-712d6aad0012 96768 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc00ab01e20 0xc00ab01e21}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-czpc9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czpc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProb
eTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:,StartTime:2022-08-18 00:16:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.217: INFO: Pod "webserver-deployment-566f96c878-rh9fb" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-rh9fb webserver-deployment-566f96c878- deployment-4499 476bd4e3-86b6-461a-bc39-5e0b772ca208 96770 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc003b76087 0xc003b76088}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-phfs5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-phfs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.217: INFO: Pod "webserver-deployment-566f96c878-t48j7" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-t48j7 webserver-deployment-566f96c878- deployment-4499 47a14a5d-3e07-4e01-9376-854d4762f235 96760 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc003b76210 0xc003b76211}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rm4v2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rm4v2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},
RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.217: INFO: Pod "webserver-deployment-566f96c878-tmfz8" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-tmfz8 webserver-deployment-566f96c878- deployment-4499 faa2f5c1-46de-483a-a673-62c1e16ae5ed 96751 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc003b76590 0xc003b76591}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5qq7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5qq7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProb
eTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:,StartTime:2022-08-18 00:16:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.218: INFO: Pod "webserver-deployment-566f96c878-vrk9w" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-vrk9w webserver-deployment-566f96c878- deployment-4499 b0d7f533-3ab0-4587-858f-21444f1061e2 96811 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc003b76c67 0xc003b76c68}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qhb6g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhb6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProb
eTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:,StartTime:2022-08-18 00:16:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.218: INFO: Pod "webserver-deployment-566f96c878-w4wr2" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-w4wr2 webserver-deployment-566f96c878- deployment-4499 ba0eec65-faea-4825-a4bc-5f5a0f47b42d 96756 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc003b76f47 0xc003b76f48}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tqxgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqxgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.218: INFO: Pod "webserver-deployment-566f96c878-zt82h" is not available: +&Pod{ObjectMeta:{webserver-deployment-566f96c878-zt82h webserver-deployment-566f96c878- deployment-4499 4dcde4d9-cd3c-4fcb-9247-f8fe3cf0a12e 96621 0 2022-08-18 00:16:06 +0000 UTC map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 541d0044-7741-40b9-8eb1-da90d76ba8ad 0xc003b770c0 0xc003b770c1}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"541d0044-7741-40b9-8eb1-da90d76ba8ad\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7njkp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7njkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:,StartTime:2022-08-18 00:16:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.218: INFO: Pod "webserver-deployment-5d9fdcc779-6764k" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-6764k webserver-deployment-5d9fdcc779- deployment-4499 29479056-4c03-40ee-b654-bdf45354ba46 96741 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc003b772a7 0xc003b772a8}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6n67d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6n67d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedule
d,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.219: INFO: Pod "webserver-deployment-5d9fdcc779-6fj8g" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-6fj8g webserver-deployment-5d9fdcc779- deployment-4499 66f85b17-8c7f-4ffa-92cb-5f1e22ad8cb7 96589 0 2022-08-18 00:16:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc003b77410 0xc003b77411}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.232\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lsk6h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lsk6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Init
ialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:192.168.2.232,StartTime:2022-08-18 00:16:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:16:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://1ea5d773237cbbdf2d6ceb3edcca160d559e8680a35cf728ca56250426254456,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.219: INFO: Pod "webserver-deployment-5d9fdcc779-7p4bg" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-7p4bg webserver-deployment-5d9fdcc779- deployment-4499 d477490f-4452-428a-ad28-dd0ffc87d44c 96753 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc003b775f7 0xc003b775f8}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rbsjg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rbsjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedule
d,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.220: INFO: Pod "webserver-deployment-5d9fdcc779-9dgd2" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-9dgd2 webserver-deployment-5d9fdcc779- deployment-4499 0f090159-a527-4672-84b7-075eeb24499f 96758 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc003b77760 0xc003b77761}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lhjrw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhjrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.220: INFO: Pod "webserver-deployment-5d9fdcc779-9j49d" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-9j49d webserver-deployment-5d9fdcc779- deployment-4499 224767c8-a2b4-46c3-8330-331928eda827 96562 0 2022-08-18 00:16:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc003b778c0 0xc003b778c1}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.79\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ddtl5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ddtl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initi
alized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:192.168.1.79,StartTime:2022-08-18 00:16:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:16:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://32f1bb12b7da4ca1b3467e87fb82f684e878e40b05fb69838f9149402302a695,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.220: INFO: Pod "webserver-deployment-5d9fdcc779-9xxbq" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-9xxbq webserver-deployment-5d9fdcc779- deployment-4499 d07924d7-5bbd-4e92-993e-7b20e92e2125 96586 0 2022-08-18 00:16:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc003b77aa7 0xc003b77aa8}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.214\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-47bn4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47bn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Init
ialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:192.168.2.214,StartTime:2022-08-18 00:16:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:16:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://5713158dd4abd079cf45bd53acc5c12d63edf4370c4bdf34a3dce2c1dc71a422,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.221: INFO: Pod "webserver-deployment-5d9fdcc779-d8dfj" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-d8dfj webserver-deployment-5d9fdcc779- deployment-4499 cf16b1d9-e298-4bb8-95f7-621768167322 96784 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc003b77c97 0xc003b77c98}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xjws5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjws5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Init
ialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:,StartTime:2022-08-18 00:16:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.221: INFO: Pod "webserver-deployment-5d9fdcc779-gbpj9" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-gbpj9 webserver-deployment-5d9fdcc779- deployment-4499 afbfcbf8-6873-4a64-bb23-2bd536cd31f7 96757 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc003b77e77 0xc003b77e78}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dwjhn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwjhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedule
d,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.221: INFO: Pod "webserver-deployment-5d9fdcc779-l4pzb" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-l4pzb webserver-deployment-5d9fdcc779- deployment-4499 570da8f6-dba5-4a28-af70-204cd15d0578 96779 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc003b77fe0 0xc003b77fe1}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dgm7z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgm7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Init
ialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:,StartTime:2022-08-18 00:16:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.221: INFO: Pod "webserver-deployment-5d9fdcc779-mwm72" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-mwm72 webserver-deployment-5d9fdcc779- deployment-4499 58530f9a-c6bc-422e-8603-4ddcdd754a9b 96719 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717e1c7 0xc00717e1c8}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jv86m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jv86m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedule
d,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.222: INFO: Pod "webserver-deployment-5d9fdcc779-pfmlm" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-pfmlm webserver-deployment-5d9fdcc779- deployment-4499 3e2a62d0-a57b-4e79-a5be-74acdefffbf1 96592 0 2022-08-18 00:16:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717e330 0xc00717e331}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.164\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w997c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w997c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Init
ialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:192.168.2.164,StartTime:2022-08-18 00:16:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:16:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://0b7eb59ec6478c26129ab9fb664b44010ffb4857559754ef3962e8f11e319d38,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.222: INFO: Pod "webserver-deployment-5d9fdcc779-qptmb" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-qptmb webserver-deployment-5d9fdcc779- deployment-4499 eaab85f9-e705-4ea8-a466-ea17e1cb92d6 96748 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717e517 0xc00717e518}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4bwms,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4bwms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initi
alized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:,StartTime:2022-08-18 00:16:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.222: INFO: Pod "webserver-deployment-5d9fdcc779-qxj58" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-qxj58 webserver-deployment-5d9fdcc779- deployment-4499 43bfce7a-70a3-41b3-a546-950fd4702df2 96755 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717e6e7 0xc00717e6e8}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mt5nv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mt5nv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedul
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.223: INFO: Pod "webserver-deployment-5d9fdcc779-r4rcb" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-r4rcb webserver-deployment-5d9fdcc779- deployment-4499 32fcc8c9-caeb-44de-abeb-8facbcdb8a29 96809 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717e860 0xc00717e861}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wvblr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvblr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initi
alized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:,StartTime:2022-08-18 00:16:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.223: INFO: Pod "webserver-deployment-5d9fdcc779-v8bsk" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-v8bsk webserver-deployment-5d9fdcc779- deployment-4499 7994c46d-0af3-461e-85af-19801ae99003 96584 0 2022-08-18 00:16:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717ea27 0xc00717ea28}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.69\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zhq2l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhq2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Init
ialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:192.168.2.69,StartTime:2022-08-18 00:16:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:16:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://06fdbd260bea4d71c93360d124746bbef16f1c0bdcf2cbd70a3b3cd0ac0b4c63,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.223: INFO: Pod "webserver-deployment-5d9fdcc779-x6tdd" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-x6tdd webserver-deployment-5d9fdcc779- deployment-4499 9b2bf330-f718-4bdc-8099-10641b888cb5 96555 0 2022-08-18 00:16:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717ec17 0xc00717ec18}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.133\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cn4jj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cn4jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.131.205,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Init
ialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.131.205,PodIP:192.168.2.133,StartTime:2022-08-18 00:16:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:16:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://d6d44350ed6d7f3db2884cd7c4d5a6dfa3edc348c640863edb489bbaf236e6e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.224: INFO: Pod "webserver-deployment-5d9fdcc779-x7nwn" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-x7nwn webserver-deployment-5d9fdcc779- deployment-4499 0f9b4ea9-8c4e-4a25-b7ac-84db0765ded5 96568 0 2022-08-18 00:16:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717ee07 0xc00717ee08}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.215\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pmrxw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pmrxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initi
alized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:192.168.1.215,StartTime:2022-08-18 00:16:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:16:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://998ad5984ee415fab11709297ee46707b8d59e6fa291e1d3457b419003e6d94a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.224: INFO: Pod "webserver-deployment-5d9fdcc779-xxskj" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-xxskj webserver-deployment-5d9fdcc779- deployment-4499 96f75358-5129-408f-919f-110e3668b3a0 96742 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717eff7 0xc00717eff8}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ckp49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ckp49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedule
d,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.224: INFO: Pod "webserver-deployment-5d9fdcc779-z2x62" is not available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-z2x62 webserver-deployment-5d9fdcc779- deployment-4499 32519c5e-3270-4d5a-bd6e-c2b92cd42458 96759 0 2022-08-18 00:16:08 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717f160 0xc00717f161}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ztdg2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztdg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Aug 18 00:16:10.224: INFO: Pod "webserver-deployment-5d9fdcc779-zlg5p" is available: +&Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-zlg5p webserver-deployment-5d9fdcc779- deployment-4499 e72ed2a6-0e80-4724-8181-17e927dc02ee 96564 0 2022-08-18 00:16:02 +0000 UTC map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 096a01ad-f685-4ad6-a3e3-915465c52322 0xc00717f2c0 0xc00717f2c1}] [] [{kube-controller-manager Update v1 2022-08-18 00:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"096a01ad-f685-4ad6-a3e3-915465c52322\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-08-18 00:16:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.146\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2j9d6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2j9d6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:195.17.65.231,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initi
alized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-08-18 00:16:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:195.17.65.231,PodIP:192.168.1.146,StartTime:2022-08-18 00:16:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-08-18 00:16:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://e25a3d383d862ba88e90b2bf173d2f0362d1d212cdecd4c12a7a59844b5eb3a4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 +Aug 18 00:16:10.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-4499" for this suite. + +• [SLOW TEST:8.219 seconds] +[sig-apps] Deployment +/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + deployment should support proportional scaling [Conformance] + /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 +------------------------------ +{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":346,"skipped":6674,"failed":0} +SSSSSSSSSSSSSSSSSSSSSSSSAug 18 00:16:10.239: INFO: Running AfterSuite actions on all nodes +Aug 18 00:16:10.239: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2 +Aug 18 00:16:10.239: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2 +Aug 18 00:16:10.239: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2 +Aug 18 00:16:10.239: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 +Aug 18 00:16:10.239: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 +Aug 18 00:16:10.239: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 +Aug 18 00:16:10.239: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 +Aug 18 00:16:10.239: INFO: Running AfterSuite actions on node 1 +Aug 18 00:16:10.239: INFO: Skipping dumping logs from cluster + +JUnit report was created: /tmp/results/junit_01.xml +{"msg":"Test Suite completed","total":346,"completed":346,"skipped":6698,"failed":0} + +Ran 346 of 7044 Specs in 5844.298 seconds +SUCCESS! 
-- 346 Passed | 0 Failed | 0 Pending | 6698 Skipped +PASS + +Ginkgo ran 1 suite in 1h37m27.296228899s +Test Suite Passed diff --git a/v1.23/eks-a/junit_01.xml b/v1.23/eks-a/junit_01.xml new file mode 100644 index 0000000000..2f383dad82 --- /dev/null +++ b/v1.23/eks-a/junit_01.xml @@ -0,0 +1,20443 @@ +[junit_01.xml: 20,443 lines of JUnit XML conformance test results; element content was stripped during extraction and is omitted here]