diff --git a/participation-form/Certified_Kubernetes_Form.md b/participation-form/Certified_Kubernetes_Form.md deleted file mode 100644 index 793de6b019..0000000000 --- a/participation-form/Certified_Kubernetes_Form.md +++ /dev/null @@ -1,94 +0,0 @@ -### Certified Kubernetes Conformance Program -#### Participation Form - -Complete this form for each Participant (company or other entity) that desires to participate in the Certified Kubernetes Conformance Program and to use the Certified Kubernetes Marks or Participant Kubernetes Combinations. Kubernetes and the Certified Kubernetes Marks are trademarks of The Linux Foundation. Capitalized terms used herein and not otherwise defined shall have the same meanings set forth in the Program Terms. - -By signing below and submitting this form to The Linux Foundation (by [DocuSign](https://na3.docusign.net/Member/PowerFormSigning.aspx?PowerFormId=ba08f93a-65ca-4c5d-8210-d5c858bb9208) or by emailing the [PDF](https://github.com/cncf/k8s-conformance/raw/master/participation-form/Certified_Kubernetes_Form.pdf) to [conformance@cncf.io](mailto:conformance@cncf.io)): - -1. The Participant agrees to the Terms and Conditions of the Certified Kubernetes Conformance Program (the "**Program Terms**"), available at . -2. The Participant confirms that the products and services identified below as Qualifying Offerings have passed all of the self-tests described in the Certification Guide, and are Qualifying Offerings under the Program Terms. -3. The Participant confirms that it has submitted to the Cloud Native Computing Foundation ("**CNCF**") the results of the self-tests prior to its first public use of the Certified Kubernetes Marks associated with the corresponding version of Kubernetes. -4. The Participant confirms that it will either (a) maintain conformance of the Qualifying Offerings with later versions of Kubernetes, or (b) cease use of the Certified Kubernetes Marks and Participant Kubernetes Combinations at the end of the applicable conformance time period described in the Program Terms. -5. The Participant confirms that it has listed below all Participant Kubernetes Combinations that it intends to use with the Qualifying Offerings. -6. The Participant confirms that it will promptly submit an updated Participant Form to The Linux Foundation prior to (a) using the Certified Kubernetes Marks with Qualifying Offerings not listed here, or (b) using Participant Kubernetes Combinations not listed here. -7. I confirm that I am authorized to make the above statements and to submit this form on behalf of the Participant. - - -#### Participant Information - - -Company / entity name: - -\___________________________________________________ - -Contact address: - -\___________________________________________________ - -\___________________________________________________ - -\___________________________________________________ - -\___________________________________________________ - -Contact telephone: - -\___________________________________________________ - -Contact email: - -\___________________________________________________ - - -Select one: - - - [ ] Participant is a member of CNCF. - - [ ] Participant is a non-profit organization. - - [ ] Neither of the above. 
**Please contact CNCF to discuss fees for participation in the Conformance Program.** - -#### Qualifying Offerings - -Name, brief description and URLs for more information: - -\___________________________________________________________________________ - -\___________________________________________________________________________ - -\___________________________________________________________________________ - -\___________________________________________________________________________ - - -#### Participant Kubernetes Combinations - -List all Participant Kubernetes Combinations to be used with the Qualifying Offerings, if any: - -(for example, "XYZ Kubernetes" or "XYZ Kubernetes Platform") - -\___________________________________________________________________________ - -\___________________________________________________________________________ - -\___________________________________________________________________________ - -\___________________________________________________________________________ - - -#### Conformance Details - -Initial Version of Kubernetes for Conformance (e.g., v1.8): _______ - -Conformance Date: __________________ - - -#### Signed on behalf of Participant by: - -``` -Signature: __________________________________ - -Name: __________________________________ - -Title: __________________________________ - -Date: __________________________________ -``` diff --git a/v1.16/ntnx-karbon/PRODUCT.yaml b/v1.16/ntnx-karbon/PRODUCT.yaml new file mode 100644 index 0000000000..eca4023ea2 --- /dev/null +++ b/v1.16/ntnx-karbon/PRODUCT.yaml @@ -0,0 +1,8 @@ +vendor: Nutanix +name: Karbon +version: v2.0.1 +documentation_url: https://portal.nutanix.com/page/documents/details/?targetId=Release-Notes-Karbon-v2_0_1%3ARelease-Notes-Karbon-v2_0_1 +website_url: https://www.nutanix.com +product_logo_url: https://www.nutanix.com/wp-content/uploads/2017/11/nutanix.png +type: hosted +description: Managed Kubernetes offering by Nutanix diff --git a/v1.16/ntnx-karbon/README.md b/v1.16/ntnx-karbon/README.md new file mode 100644 index 0000000000..adc917ab92 --- /dev/null +++ b/v1.16/ntnx-karbon/README.md @@ -0,0 +1,42 @@ +Nutanix Karbon +=== + To Reproduce +--- + On Nutanix Prism Central, enable Karbon (the managed Kubernetes offering of Nutanix) by following +the instructions below: +1. Log on to Prism Central. +2. Click the menu icon. +3. In the Services option, click Karbon. +4. A `Karbon is successfully enabled` message indicates that Karbon has been enabled. +5. In the Karbon Console, a message directs you to download a node OS image. +6. Once the image download is successful, use the UI cluster create wizard + to fill in all the necessary information: + a) Name of the cluster. + b) Network to be used. + c) Choose development cluster or production cluster. + d) Choose the K8s version to deploy (1.16.*). + e) Select the storage container. + f) Initiate cluster creation. + g) The UI will display the status of the deployment. + +Next, follow the steps to install a fresh cluster, choosing `v1.16` as the Kubernetes +version and `centos` as the base image. + Once the cluster is deployed, you can download the kubeconfig to your local machine using +the following steps: +1. In the Clusters view, select a cluster from the list by checking the adjacent box. +2. Click the Actions drop-down. +3. Click Download Kubeconfig. +4. Under Instructions, click Download. +5.
Once the file is downloaded (for example, prod1-kubectl.cfg), run the following command: + `export KUBECONFIG=/path/to/prod1-kubectl.cfg` +6. Run the following command to test the cluster: + `kubectl cluster-info` + +Then, build Sonobuoy (the standard tool for running these tests) by running `go get -u -v github.com/heptio/sonobuoy`. +This builds the latest Sonobuoy release [v0.17.2](https://github.com/heptio/sonobuoy/releases/tag/v0.17.2). + +Deploy a Sonobuoy pod to the deployed cluster using `sonobuoy run`, which initiates the test run on +the cluster. Use `sonobuoy status` to track the status of the test run. Once it completes, you +can inspect the logs using `sonobuoy logs` and/or retrieve the results using `sonobuoy retrieve`, +which copies them from the main Sonobuoy pod to a local directory.
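+
+For convenience, here is a minimal shell sketch of the verification flow above; the kubeconfig path reuses the `prod1-kubectl.cfg` example from the download steps, so substitute your own file name:
+
+```sh
+# Point kubectl at the kubeconfig downloaded from the Karbon console.
+export KUBECONFIG=/path/to/prod1-kubectl.cfg
+
+# Verify connectivity to the Karbon cluster.
+kubectl cluster-info
+
+# Build Sonobuoy; this fetches the latest release (v0.17.2 at the time of writing).
+go get -u -v github.com/heptio/sonobuoy
+
+# Launch the conformance tests, then poll until the run completes.
+sonobuoy run
+sonobuoy status
+
+# Inspect the logs and copy the results tarball to a local directory.
+sonobuoy logs
+sonobuoy retrieve
+```
diff --git a/v1.16/ntnx-karbon/e2e.log b/v1.16/ntnx-karbon/e2e.log new file mode 100644 index 0000000000..20cd16dcb4 --- /dev/null +++ b/v1.16/ntnx-karbon/e2e.log @@ -0,0 +1,13667 @@ +I0603 20:08:34.486084 25 test_context.go:414] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-005848369 +I0603 20:08:34.486360 25 e2e.go:92] Starting e2e run "f3f83a2e-5ec0-40e3-bdab-4bfb0d6ccf94" on Ginkgo node 1 +Running Suite: Kubernetes e2e suite +=================================== +Random Seed: 1591214913 - Will randomize all specs +Will run 276 of 4731 specs + +Jun 3 20:08:34.499: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +Jun 3 20:08:34.502: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable +Jun 3 20:08:34.524: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready +Jun 3 20:08:34.556: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) +Jun 3 20:08:34.556: INFO: expected 1 pod replicas in namespace 'kube-system', 1 are Running and Ready. +Jun 3 20:08:34.556: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Jun 3 20:08:34.566: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds' (0 seconds elapsed) +Jun 3 20:08:34.566: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy-ds' (0 seconds elapsed) +Jun 3 20:08:34.566: INFO: e2e test version: v1.16.8 +Jun 3 20:08:34.567: INFO: kube-apiserver version: v1.16.8 +Jun 3 20:08:34.567: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +Jun 3 20:08:34.574: INFO: Cluster IP family: ipv4 +SSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:08:34.574: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename daemonsets +Jun 3 20:08:34.654: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.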
+STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +Jun 3 20:08:34.684: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Jun 3 20:08:34.703: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:34.703: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:35.713: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:35.713: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:36.712: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:36.712: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:37.712: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:37.712: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:38.721: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:38.721: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:39.712: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:39.712: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:40.712: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:40.712: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:41.750: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:41.750: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:42.712: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:42.712: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:43.712: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:43.712: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:44.713: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:44.713: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:45.712: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:45.712: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:46.714: INFO: Number of nodes with available pods: 0 +Jun 3 20:08:46.714: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod +Jun 3 20:08:47.714: INFO: Number of nodes with available pods: 3 +Jun 3 20:08:47.714: INFO: Node karbon-certification-ff5a6a-k8s-worker-0 is running more than one daemon pod +Jun 3 20:08:48.711: INFO: Number of nodes with available pods: 5 +Jun 3 20:08:48.711: INFO: Number of running nodes: 5, number of available pods: 5 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Jun 3 20:08:48.757: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. 
+Jun 3 20:08:48.757: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:48.757: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:48.757: INFO: Wrong image for pod: daemon-set-lhd64. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:48.757: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:49.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:49.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:49.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:49.765: INFO: Wrong image for pod: daemon-set-lhd64. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:49.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:50.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:50.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:50.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:50.765: INFO: Wrong image for pod: daemon-set-lhd64. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:50.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:51.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:51.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:51.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:51.765: INFO: Wrong image for pod: daemon-set-lhd64. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:51.765: INFO: Pod daemon-set-lhd64 is not available +Jun 3 20:08:51.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:52.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:52.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:52.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. 
+Jun 3 20:08:52.765: INFO: Wrong image for pod: daemon-set-lhd64. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:52.765: INFO: Pod daemon-set-lhd64 is not available +Jun 3 20:08:52.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:53.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:53.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:53.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:53.765: INFO: Wrong image for pod: daemon-set-lhd64. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:53.765: INFO: Pod daemon-set-lhd64 is not available +Jun 3 20:08:53.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:54.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:54.765: INFO: Pod daemon-set-bqphw is not available +Jun 3 20:08:54.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:54.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:54.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:55.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:55.765: INFO: Pod daemon-set-bqphw is not available +Jun 3 20:08:55.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:55.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:55.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:56.766: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:56.766: INFO: Pod daemon-set-bqphw is not available +Jun 3 20:08:56.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:56.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:56.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:57.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. 
+Jun 3 20:08:57.765: INFO: Pod daemon-set-bqphw is not available +Jun 3 20:08:57.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:57.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:57.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:58.766: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:58.766: INFO: Pod daemon-set-bqphw is not available +Jun 3 20:08:58.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:58.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:58.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:59.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:59.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:59.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:08:59.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:00.766: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:00.766: INFO: Pod daemon-set-5m62x is not available +Jun 3 20:09:00.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:00.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:00.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:01.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:01.765: INFO: Pod daemon-set-5m62x is not available +Jun 3 20:09:01.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:01.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:01.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:02.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:02.766: INFO: Pod daemon-set-5m62x is not available +Jun 3 20:09:02.766: INFO: Wrong image for pod: daemon-set-clc2m. 
Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:02.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:02.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:03.765: INFO: Wrong image for pod: daemon-set-5m62x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:03.765: INFO: Pod daemon-set-5m62x is not available +Jun 3 20:09:03.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:03.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:03.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:04.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:04.765: INFO: Pod daemon-set-d9fn9 is not available +Jun 3 20:09:04.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:04.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:05.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:05.766: INFO: Pod daemon-set-d9fn9 is not available +Jun 3 20:09:05.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:05.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:06.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:06.765: INFO: Pod daemon-set-d9fn9 is not available +Jun 3 20:09:06.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:06.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:07.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:07.766: INFO: Pod daemon-set-d9fn9 is not available +Jun 3 20:09:07.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:07.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:08.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:08.765: INFO: Pod daemon-set-d9fn9 is not available +Jun 3 20:09:08.765: INFO: Wrong image for pod: daemon-set-j6lnz. 
Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:08.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:09.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:09.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:09.765: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:10.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:10.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:10.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:10.766: INFO: Pod daemon-set-lwgkb is not available +Jun 3 20:09:11.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:11.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:11.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:11.766: INFO: Pod daemon-set-lwgkb is not available +Jun 3 20:09:12.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:12.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:12.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:12.766: INFO: Pod daemon-set-lwgkb is not available +Jun 3 20:09:13.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:13.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:13.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:13.766: INFO: Pod daemon-set-lwgkb is not available +Jun 3 20:09:14.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:14.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:14.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:14.766: INFO: Pod daemon-set-lwgkb is not available +Jun 3 20:09:15.766: INFO: Wrong image for pod: daemon-set-clc2m. 
Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:15.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:15.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:15.766: INFO: Pod daemon-set-lwgkb is not available +Jun 3 20:09:16.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:16.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:16.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:16.766: INFO: Pod daemon-set-lwgkb is not available +Jun 3 20:09:17.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:17.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:17.766: INFO: Wrong image for pod: daemon-set-lwgkb. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:17.766: INFO: Pod daemon-set-lwgkb is not available +Jun 3 20:09:18.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:18.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:18.765: INFO: Pod daemon-set-tsncc is not available +Jun 3 20:09:19.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:19.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:19.766: INFO: Pod daemon-set-tsncc is not available +Jun 3 20:09:20.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:20.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:20.765: INFO: Pod daemon-set-tsncc is not available +Jun 3 20:09:21.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:21.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:21.766: INFO: Pod daemon-set-tsncc is not available +Jun 3 20:09:22.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:22.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:22.765: INFO: Pod daemon-set-tsncc is not available +Jun 3 20:09:23.766: INFO: Wrong image for pod: daemon-set-clc2m. 
Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:23.766: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:24.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:24.765: INFO: Wrong image for pod: daemon-set-j6lnz. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:24.765: INFO: Pod daemon-set-j6lnz is not available +Jun 3 20:09:25.765: INFO: Pod daemon-set-5qngb is not available +Jun 3 20:09:25.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:26.765: INFO: Pod daemon-set-5qngb is not available +Jun 3 20:09:26.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:27.766: INFO: Pod daemon-set-5qngb is not available +Jun 3 20:09:27.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:28.765: INFO: Pod daemon-set-5qngb is not available +Jun 3 20:09:28.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:29.765: INFO: Pod daemon-set-5qngb is not available +Jun 3 20:09:29.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:30.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:31.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:31.765: INFO: Pod daemon-set-clc2m is not available +Jun 3 20:09:32.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:32.765: INFO: Pod daemon-set-clc2m is not available +Jun 3 20:09:33.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:33.765: INFO: Pod daemon-set-clc2m is not available +Jun 3 20:09:34.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:34.765: INFO: Pod daemon-set-clc2m is not available +Jun 3 20:09:35.766: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:35.766: INFO: Pod daemon-set-clc2m is not available +Jun 3 20:09:36.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. +Jun 3 20:09:36.765: INFO: Pod daemon-set-clc2m is not available +Jun 3 20:09:37.765: INFO: Wrong image for pod: daemon-set-clc2m. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine. 
+Jun 3 20:09:37.765: INFO: Pod daemon-set-clc2m is not available +Jun 3 20:09:38.766: INFO: Pod daemon-set-x7p89 is not available +STEP: Check that daemon pods are still running on every node of the cluster. +Jun 3 20:09:38.777: INFO: Number of nodes with available pods: 4 +Jun 3 20:09:38.777: INFO: Node karbon-certification-ff5a6a-k8s-master-1 is running more than one daemon pod +Jun 3 20:09:39.786: INFO: Number of nodes with available pods: 4 +Jun 3 20:09:39.786: INFO: Node karbon-certification-ff5a6a-k8s-master-1 is running more than one daemon pod +Jun 3 20:09:40.784: INFO: Number of nodes with available pods: 4 +Jun 3 20:09:40.784: INFO: Node karbon-certification-ff5a6a-k8s-master-1 is running more than one daemon pod +Jun 3 20:09:41.787: INFO: Number of nodes with available pods: 4 +Jun 3 20:09:41.787: INFO: Node karbon-certification-ff5a6a-k8s-master-1 is running more than one daemon pod +Jun 3 20:09:42.787: INFO: Number of nodes with available pods: 4 +Jun 3 20:09:42.787: INFO: Node karbon-certification-ff5a6a-k8s-master-1 is running more than one daemon pod +Jun 3 20:09:43.786: INFO: Number of nodes with available pods: 5 +Jun 3 20:09:43.786: INFO: Number of running nodes: 5, number of available pods: 5 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1009, will wait for the garbage collector to delete the pods +Jun 3 20:09:43.864: INFO: Deleting DaemonSet.extensions daemon-set took: 9.016396ms +Jun 3 20:09:44.264: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.314948ms +Jun 3 20:09:54.569: INFO: Number of nodes with available pods: 0 +Jun 3 20:09:54.569: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 3 20:09:54.573: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1009/daemonsets","resourceVersion":"142875"},"items":null} + +Jun 3 20:09:54.576: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1009/pods","resourceVersion":"142875"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:09:54.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-1009" for this suite. 
+Jun 3 20:10:00.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:10:01.193: INFO: namespace daemonsets-1009 deletion completed in 6.594869213s + +• [SLOW TEST:86.619 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:10:01.193: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation +Jun 3 20:10:01.235: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +Jun 3 20:10:04.903: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:10:19.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-1402" for this suite. 
+Jun 3 20:10:25.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:10:25.156: INFO: namespace crd-publish-openapi-1402 deletion completed in 6.104033421s + +• [SLOW TEST:23.963 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group and version but different kinds [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:10:25.157: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating pod test-webserver-8084c810-ebd1-45d7-bd91-0a0b879752e0 in namespace container-probe-4407 +Jun 3 20:10:29.212: INFO: Started pod test-webserver-8084c810-ebd1-45d7-bd91-0a0b879752e0 in namespace container-probe-4407 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 3 20:10:29.215: INFO: Initial restart count of pod test-webserver-8084c810-ebd1-45d7-bd91-0a0b879752e0 is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:14:29.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-4407" for this suite. 
+Jun 3 20:14:35.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:14:35.873: INFO: namespace container-probe-4407 deletion completed in 6.107818933s + +• [SLOW TEST:250.716 seconds] +[k8s.io] Probing container +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:14:35.874: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-9080 +I0603 20:14:35.938760 25 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9080, replica count: 1 +I0603 20:14:36.989300 25 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0603 20:14:37.989583 25 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0603 20:14:38.989828 25 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0603 20:14:39.990102 25 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jun 3 20:14:40.113: INFO: Created: latency-svc-4vmk7 +Jun 3 20:14:40.119: INFO: Got endpoints: latency-svc-4vmk7 [28.837442ms] +Jun 3 20:14:40.139: INFO: Created: latency-svc-qzq7v +Jun 3 20:14:40.147: INFO: Got endpoints: latency-svc-qzq7v [27.174369ms] +Jun 3 20:14:40.149: INFO: Created: latency-svc-rwrkk +Jun 3 20:14:40.155: INFO: Got endpoints: latency-svc-rwrkk [35.24382ms] +Jun 3 20:14:40.159: INFO: Created: latency-svc-4hbgm +Jun 3 20:14:40.165: INFO: Got endpoints: latency-svc-4hbgm [44.868396ms] +Jun 3 20:14:40.167: INFO: Created: latency-svc-965r4 +Jun 3 20:14:40.172: INFO: Got endpoints: latency-svc-965r4 [51.377694ms] +Jun 3 20:14:40.176: INFO: Created: latency-svc-wmhs5 +Jun 3 20:14:40.181: INFO: Got endpoints: latency-svc-wmhs5 [61.200752ms] +Jun 3 20:14:40.188: INFO: Created: latency-svc-kldj8 +Jun 3 20:14:40.191: INFO: Created: latency-svc-xt52n +Jun 3 20:14:40.196: INFO: Got endpoints: latency-svc-kldj8 [75.644613ms] +Jun 3 20:14:40.201: INFO: Got endpoints: 
latency-svc-xt52n [80.759436ms] +Jun 3 20:14:40.205: INFO: Created: latency-svc-j6pt4 +Jun 3 20:14:40.211: INFO: Got endpoints: latency-svc-j6pt4 [90.474707ms] +Jun 3 20:14:40.214: INFO: Created: latency-svc-82qkl +Jun 3 20:14:40.221: INFO: Got endpoints: latency-svc-82qkl [100.649817ms] +Jun 3 20:14:40.224: INFO: Created: latency-svc-v772w +Jun 3 20:14:40.231: INFO: Got endpoints: latency-svc-v772w [110.437637ms] +Jun 3 20:14:40.235: INFO: Created: latency-svc-bjp8k +Jun 3 20:14:40.247: INFO: Got endpoints: latency-svc-bjp8k [126.400075ms] +Jun 3 20:14:40.247: INFO: Created: latency-svc-tmcxj +Jun 3 20:14:40.255: INFO: Got endpoints: latency-svc-tmcxj [134.516146ms] +Jun 3 20:14:40.259: INFO: Created: latency-svc-4w79f +Jun 3 20:14:40.266: INFO: Got endpoints: latency-svc-4w79f [145.847586ms] +Jun 3 20:14:40.273: INFO: Created: latency-svc-xf57x +Jun 3 20:14:40.277: INFO: Got endpoints: latency-svc-xf57x [157.068922ms] +Jun 3 20:14:40.280: INFO: Created: latency-svc-p2ttv +Jun 3 20:14:40.285: INFO: Got endpoints: latency-svc-p2ttv [165.029755ms] +Jun 3 20:14:40.291: INFO: Created: latency-svc-ntnm7 +Jun 3 20:14:40.299: INFO: Got endpoints: latency-svc-ntnm7 [152.333773ms] +Jun 3 20:14:40.301: INFO: Created: latency-svc-9s7wz +Jun 3 20:14:40.307: INFO: Got endpoints: latency-svc-9s7wz [151.583114ms] +Jun 3 20:14:40.309: INFO: Created: latency-svc-5gwfr +Jun 3 20:14:40.316: INFO: Got endpoints: latency-svc-5gwfr [151.325991ms] +Jun 3 20:14:40.318: INFO: Created: latency-svc-jpnvj +Jun 3 20:14:40.325: INFO: Got endpoints: latency-svc-jpnvj [153.512534ms] +Jun 3 20:14:40.333: INFO: Created: latency-svc-gb8d5 +Jun 3 20:14:40.338: INFO: Created: latency-svc-hsrfk +Jun 3 20:14:40.338: INFO: Got endpoints: latency-svc-gb8d5 [157.013841ms] +Jun 3 20:14:40.342: INFO: Got endpoints: latency-svc-hsrfk [145.873727ms] +Jun 3 20:14:40.348: INFO: Created: latency-svc-vfmsn +Jun 3 20:14:40.355: INFO: Got endpoints: latency-svc-vfmsn [154.15907ms] +Jun 3 20:14:40.360: INFO: Created: latency-svc-jzbnl +Jun 3 20:14:40.364: INFO: Got endpoints: latency-svc-jzbnl [22.746079ms] +Jun 3 20:14:40.368: INFO: Created: latency-svc-bgrqq +Jun 3 20:14:40.373: INFO: Got endpoints: latency-svc-bgrqq [162.740014ms] +Jun 3 20:14:40.377: INFO: Created: latency-svc-fbspd +Jun 3 20:14:40.386: INFO: Got endpoints: latency-svc-fbspd [164.792319ms] +Jun 3 20:14:40.391: INFO: Created: latency-svc-g4zhs +Jun 3 20:14:40.399: INFO: Got endpoints: latency-svc-g4zhs [168.224384ms] +Jun 3 20:14:40.404: INFO: Created: latency-svc-dwvmk +Jun 3 20:14:40.411: INFO: Got endpoints: latency-svc-dwvmk [163.793887ms] +Jun 3 20:14:40.413: INFO: Created: latency-svc-9jdpk +Jun 3 20:14:40.419: INFO: Got endpoints: latency-svc-9jdpk [164.069036ms] +Jun 3 20:14:40.421: INFO: Created: latency-svc-mkcwz +Jun 3 20:14:40.428: INFO: Got endpoints: latency-svc-mkcwz [161.301903ms] +Jun 3 20:14:40.433: INFO: Created: latency-svc-82895 +Jun 3 20:14:40.444: INFO: Got endpoints: latency-svc-82895 [166.494444ms] +Jun 3 20:14:40.445: INFO: Created: latency-svc-d82d5 +Jun 3 20:14:40.457: INFO: Got endpoints: latency-svc-d82d5 [171.838056ms] +Jun 3 20:14:40.460: INFO: Created: latency-svc-phfwv +Jun 3 20:14:40.467: INFO: Got endpoints: latency-svc-phfwv [167.575493ms] +Jun 3 20:14:40.470: INFO: Created: latency-svc-j7tdl +Jun 3 20:14:40.479: INFO: Created: latency-svc-q7kdc +Jun 3 20:14:40.484: INFO: Got endpoints: latency-svc-j7tdl [176.820896ms] +Jun 3 20:14:40.485: INFO: Got endpoints: latency-svc-q7kdc [169.23371ms] +Jun 3 20:14:40.490: INFO: Created: 
latency-svc-z9z2j +Jun 3 20:14:40.497: INFO: Got endpoints: latency-svc-z9z2j [171.89336ms] +Jun 3 20:14:40.501: INFO: Created: latency-svc-fk4rd +Jun 3 20:14:40.507: INFO: Got endpoints: latency-svc-fk4rd [168.690309ms] +Jun 3 20:14:40.510: INFO: Created: latency-svc-5vqss +Jun 3 20:14:40.518: INFO: Got endpoints: latency-svc-5vqss [163.303947ms] +Jun 3 20:14:40.522: INFO: Created: latency-svc-n4b9n +Jun 3 20:14:40.535: INFO: Created: latency-svc-vjp5c +Jun 3 20:14:40.543: INFO: Created: latency-svc-wtjvc +Jun 3 20:14:40.561: INFO: Created: latency-svc-q9nxq +Jun 3 20:14:40.569: INFO: Got endpoints: latency-svc-n4b9n [204.0703ms] +Jun 3 20:14:40.571: INFO: Created: latency-svc-8fkdj +Jun 3 20:14:40.580: INFO: Created: latency-svc-9gmqn +Jun 3 20:14:40.589: INFO: Created: latency-svc-rdkmk +Jun 3 20:14:40.598: INFO: Created: latency-svc-8lq7d +Jun 3 20:14:40.606: INFO: Created: latency-svc-ccvdd +Jun 3 20:14:40.614: INFO: Created: latency-svc-h5mp6 +Jun 3 20:14:40.620: INFO: Got endpoints: latency-svc-vjp5c [246.660042ms] +Jun 3 20:14:40.625: INFO: Created: latency-svc-fdzzz +Jun 3 20:14:40.633: INFO: Created: latency-svc-bf8np +Jun 3 20:14:40.642: INFO: Created: latency-svc-mdnwn +Jun 3 20:14:40.655: INFO: Created: latency-svc-w4n48 +Jun 3 20:14:40.669: INFO: Created: latency-svc-lcnvm +Jun 3 20:14:40.670: INFO: Got endpoints: latency-svc-wtjvc [283.946115ms] +Jun 3 20:14:40.678: INFO: Created: latency-svc-nzkp6 +Jun 3 20:14:40.687: INFO: Created: latency-svc-t56g8 +Jun 3 20:14:40.695: INFO: Created: latency-svc-qcsgd +Jun 3 20:14:40.718: INFO: Got endpoints: latency-svc-q9nxq [318.674135ms] +Jun 3 20:14:40.734: INFO: Created: latency-svc-dbr9p +Jun 3 20:14:40.770: INFO: Got endpoints: latency-svc-8fkdj [359.337867ms] +Jun 3 20:14:40.783: INFO: Created: latency-svc-g79n6 +Jun 3 20:14:40.819: INFO: Got endpoints: latency-svc-9gmqn [399.641343ms] +Jun 3 20:14:40.831: INFO: Created: latency-svc-2qkvv +Jun 3 20:14:40.870: INFO: Got endpoints: latency-svc-rdkmk [442.810225ms] +Jun 3 20:14:40.883: INFO: Created: latency-svc-rq5mc +Jun 3 20:14:40.919: INFO: Got endpoints: latency-svc-8lq7d [475.013817ms] +Jun 3 20:14:40.934: INFO: Created: latency-svc-krf7z +Jun 3 20:14:40.974: INFO: Got endpoints: latency-svc-ccvdd [516.488775ms] +Jun 3 20:14:40.987: INFO: Created: latency-svc-csg5v +Jun 3 20:14:41.019: INFO: Got endpoints: latency-svc-h5mp6 [552.460971ms] +Jun 3 20:14:41.036: INFO: Created: latency-svc-khdls +Jun 3 20:14:41.069: INFO: Got endpoints: latency-svc-fdzzz [585.620827ms] +Jun 3 20:14:41.119: INFO: Created: latency-svc-h99zr +Jun 3 20:14:41.121: INFO: Got endpoints: latency-svc-bf8np [635.619464ms] +Jun 3 20:14:41.136: INFO: Created: latency-svc-gl6jf +Jun 3 20:14:41.169: INFO: Got endpoints: latency-svc-mdnwn [671.167653ms] +Jun 3 20:14:41.182: INFO: Created: latency-svc-79p2v +Jun 3 20:14:41.218: INFO: Got endpoints: latency-svc-w4n48 [711.247323ms] +Jun 3 20:14:41.231: INFO: Created: latency-svc-fmffn +Jun 3 20:14:41.269: INFO: Got endpoints: latency-svc-lcnvm [750.40919ms] +Jun 3 20:14:41.285: INFO: Created: latency-svc-f2cxx +Jun 3 20:14:41.320: INFO: Got endpoints: latency-svc-nzkp6 [750.88491ms] +Jun 3 20:14:41.332: INFO: Created: latency-svc-9vs9k +Jun 3 20:14:41.369: INFO: Got endpoints: latency-svc-t56g8 [748.418307ms] +Jun 3 20:14:41.381: INFO: Created: latency-svc-64nxd +Jun 3 20:14:41.418: INFO: Got endpoints: latency-svc-qcsgd [748.664939ms] +Jun 3 20:14:41.431: INFO: Created: latency-svc-sq8tt +Jun 3 20:14:41.469: INFO: Got endpoints: latency-svc-dbr9p [750.933225ms] 
+Jun 3 20:14:41.483: INFO: Created: latency-svc-bjljm +Jun 3 20:14:41.520: INFO: Got endpoints: latency-svc-g79n6 [749.456285ms] +Jun 3 20:14:41.531: INFO: Created: latency-svc-2rnpw +Jun 3 20:14:41.569: INFO: Got endpoints: latency-svc-2qkvv [749.954412ms] +Jun 3 20:14:41.581: INFO: Created: latency-svc-ch4qh +Jun 3 20:14:41.619: INFO: Got endpoints: latency-svc-rq5mc [748.882246ms] +Jun 3 20:14:41.633: INFO: Created: latency-svc-pvtpt +Jun 3 20:14:41.669: INFO: Got endpoints: latency-svc-krf7z [749.446539ms] +Jun 3 20:14:41.686: INFO: Created: latency-svc-cfxzh +Jun 3 20:14:41.721: INFO: Got endpoints: latency-svc-csg5v [747.008705ms] +Jun 3 20:14:41.735: INFO: Created: latency-svc-7r2q2 +Jun 3 20:14:41.769: INFO: Got endpoints: latency-svc-khdls [750.251952ms] +Jun 3 20:14:41.783: INFO: Created: latency-svc-bvrlp +Jun 3 20:14:41.819: INFO: Got endpoints: latency-svc-h99zr [749.809554ms] +Jun 3 20:14:41.833: INFO: Created: latency-svc-wdjvt +Jun 3 20:14:41.868: INFO: Got endpoints: latency-svc-gl6jf [747.385571ms] +Jun 3 20:14:41.883: INFO: Created: latency-svc-8fd52 +Jun 3 20:14:41.920: INFO: Got endpoints: latency-svc-79p2v [751.830523ms] +Jun 3 20:14:41.934: INFO: Created: latency-svc-nr4qq +Jun 3 20:14:41.969: INFO: Got endpoints: latency-svc-fmffn [750.939333ms] +Jun 3 20:14:41.990: INFO: Created: latency-svc-g5tl6 +Jun 3 20:14:42.019: INFO: Got endpoints: latency-svc-f2cxx [750.210529ms] +Jun 3 20:14:42.033: INFO: Created: latency-svc-ds78n +Jun 3 20:14:42.070: INFO: Got endpoints: latency-svc-9vs9k [749.979039ms] +Jun 3 20:14:42.123: INFO: Created: latency-svc-qxvsk +Jun 3 20:14:42.126: INFO: Got endpoints: latency-svc-64nxd [756.920606ms] +Jun 3 20:14:42.138: INFO: Created: latency-svc-cx94j +Jun 3 20:14:42.169: INFO: Got endpoints: latency-svc-sq8tt [750.266072ms] +Jun 3 20:14:42.182: INFO: Created: latency-svc-x27w8 +Jun 3 20:14:42.219: INFO: Got endpoints: latency-svc-bjljm [749.753566ms] +Jun 3 20:14:42.231: INFO: Created: latency-svc-sbqs5 +Jun 3 20:14:42.270: INFO: Got endpoints: latency-svc-2rnpw [750.085692ms] +Jun 3 20:14:42.286: INFO: Created: latency-svc-ftfq9 +Jun 3 20:14:42.318: INFO: Got endpoints: latency-svc-ch4qh [749.398419ms] +Jun 3 20:14:42.332: INFO: Created: latency-svc-bzxrk +Jun 3 20:14:42.369: INFO: Got endpoints: latency-svc-pvtpt [749.127161ms] +Jun 3 20:14:42.381: INFO: Created: latency-svc-r8cgg +Jun 3 20:14:42.421: INFO: Got endpoints: latency-svc-cfxzh [752.108775ms] +Jun 3 20:14:42.434: INFO: Created: latency-svc-242k5 +Jun 3 20:14:42.469: INFO: Got endpoints: latency-svc-7r2q2 [747.85154ms] +Jun 3 20:14:42.482: INFO: Created: latency-svc-7sbtq +Jun 3 20:14:42.519: INFO: Got endpoints: latency-svc-bvrlp [749.940585ms] +Jun 3 20:14:42.532: INFO: Created: latency-svc-qnqk7 +Jun 3 20:14:42.569: INFO: Got endpoints: latency-svc-wdjvt [749.845677ms] +Jun 3 20:14:42.581: INFO: Created: latency-svc-q842f +Jun 3 20:14:42.618: INFO: Got endpoints: latency-svc-8fd52 [749.675895ms] +Jun 3 20:14:42.631: INFO: Created: latency-svc-dc8z4 +Jun 3 20:14:42.669: INFO: Got endpoints: latency-svc-nr4qq [747.968544ms] +Jun 3 20:14:42.682: INFO: Created: latency-svc-pwfzx +Jun 3 20:14:42.720: INFO: Got endpoints: latency-svc-g5tl6 [750.136642ms] +Jun 3 20:14:42.733: INFO: Created: latency-svc-skttd +Jun 3 20:14:42.769: INFO: Got endpoints: latency-svc-ds78n [749.436835ms] +Jun 3 20:14:42.783: INFO: Created: latency-svc-4lc78 +Jun 3 20:14:42.819: INFO: Got endpoints: latency-svc-qxvsk [748.885784ms] +Jun 3 20:14:42.832: INFO: Created: latency-svc-6x9gn +Jun 3 
20:14:42.869: INFO: Got endpoints: latency-svc-cx94j [742.833721ms] +Jun 3 20:14:42.881: INFO: Created: latency-svc-4t9kz +Jun 3 20:14:42.918: INFO: Got endpoints: latency-svc-x27w8 [749.543847ms] +Jun 3 20:14:42.931: INFO: Created: latency-svc-jz6w7 +Jun 3 20:14:42.971: INFO: Got endpoints: latency-svc-sbqs5 [752.007319ms] +Jun 3 20:14:42.982: INFO: Created: latency-svc-nffwk +Jun 3 20:14:43.020: INFO: Got endpoints: latency-svc-ftfq9 [749.760254ms] +Jun 3 20:14:43.035: INFO: Created: latency-svc-r22fm +Jun 3 20:14:43.069: INFO: Got endpoints: latency-svc-bzxrk [751.083839ms] +Jun 3 20:14:43.121: INFO: Created: latency-svc-qkxxj +Jun 3 20:14:43.127: INFO: Got endpoints: latency-svc-r8cgg [758.605473ms] +Jun 3 20:14:43.142: INFO: Created: latency-svc-2rdvv +Jun 3 20:14:43.169: INFO: Got endpoints: latency-svc-242k5 [748.45991ms] +Jun 3 20:14:43.184: INFO: Created: latency-svc-44j7r +Jun 3 20:14:43.220: INFO: Got endpoints: latency-svc-7sbtq [751.456134ms] +Jun 3 20:14:43.232: INFO: Created: latency-svc-frj6z +Jun 3 20:14:43.268: INFO: Got endpoints: latency-svc-qnqk7 [748.690566ms] +Jun 3 20:14:43.281: INFO: Created: latency-svc-h475k +Jun 3 20:14:43.319: INFO: Got endpoints: latency-svc-q842f [749.641646ms] +Jun 3 20:14:43.333: INFO: Created: latency-svc-tlj9m +Jun 3 20:14:43.370: INFO: Got endpoints: latency-svc-dc8z4 [751.686742ms] +Jun 3 20:14:43.384: INFO: Created: latency-svc-ttjpv +Jun 3 20:14:43.418: INFO: Got endpoints: latency-svc-pwfzx [749.605003ms] +Jun 3 20:14:43.434: INFO: Created: latency-svc-wt6km +Jun 3 20:14:43.469: INFO: Got endpoints: latency-svc-skttd [749.083303ms] +Jun 3 20:14:43.483: INFO: Created: latency-svc-rt4h9 +Jun 3 20:14:43.518: INFO: Got endpoints: latency-svc-4lc78 [749.690653ms] +Jun 3 20:14:43.533: INFO: Created: latency-svc-lsc6q +Jun 3 20:14:43.571: INFO: Got endpoints: latency-svc-6x9gn [752.27455ms] +Jun 3 20:14:43.585: INFO: Created: latency-svc-cq7kb +Jun 3 20:14:43.619: INFO: Got endpoints: latency-svc-4t9kz [750.171534ms] +Jun 3 20:14:43.632: INFO: Created: latency-svc-gbtd5 +Jun 3 20:14:43.671: INFO: Got endpoints: latency-svc-jz6w7 [752.227548ms] +Jun 3 20:14:43.684: INFO: Created: latency-svc-qtw2n +Jun 3 20:14:43.719: INFO: Got endpoints: latency-svc-nffwk [747.835307ms] +Jun 3 20:14:43.730: INFO: Created: latency-svc-txrmp +Jun 3 20:14:43.769: INFO: Got endpoints: latency-svc-r22fm [749.148604ms] +Jun 3 20:14:43.782: INFO: Created: latency-svc-4v2fs +Jun 3 20:14:43.818: INFO: Got endpoints: latency-svc-qkxxj [748.86452ms] +Jun 3 20:14:43.831: INFO: Created: latency-svc-ksdrl +Jun 3 20:14:43.868: INFO: Got endpoints: latency-svc-2rdvv [741.038394ms] +Jun 3 20:14:43.881: INFO: Created: latency-svc-sdzsl +Jun 3 20:14:43.919: INFO: Got endpoints: latency-svc-44j7r [749.658136ms] +Jun 3 20:14:43.933: INFO: Created: latency-svc-pbtvt +Jun 3 20:14:43.970: INFO: Got endpoints: latency-svc-frj6z [749.693329ms] +Jun 3 20:14:43.983: INFO: Created: latency-svc-x2bfg +Jun 3 20:14:44.019: INFO: Got endpoints: latency-svc-h475k [750.977329ms] +Jun 3 20:14:44.033: INFO: Created: latency-svc-6c8vf +Jun 3 20:14:44.069: INFO: Got endpoints: latency-svc-tlj9m [749.981915ms] +Jun 3 20:14:44.088: INFO: Created: latency-svc-95zdl +Jun 3 20:14:44.131: INFO: Got endpoints: latency-svc-ttjpv [761.168329ms] +Jun 3 20:14:44.170: INFO: Created: latency-svc-2zvmg +Jun 3 20:14:44.172: INFO: Got endpoints: latency-svc-wt6km [753.452708ms] +Jun 3 20:14:44.189: INFO: Created: latency-svc-szk88 +Jun 3 20:14:44.220: INFO: Got endpoints: latency-svc-rt4h9 [751.032572ms] +Jun 3 
20:14:44.232: INFO: Created: latency-svc-gjs4j +Jun 3 20:14:44.272: INFO: Got endpoints: latency-svc-lsc6q [753.479098ms] +Jun 3 20:14:44.286: INFO: Created: latency-svc-mdwz8 +Jun 3 20:14:44.318: INFO: Got endpoints: latency-svc-cq7kb [747.386315ms] +Jun 3 20:14:44.333: INFO: Created: latency-svc-pxc54 +Jun 3 20:14:44.373: INFO: Got endpoints: latency-svc-gbtd5 [754.656975ms] +Jun 3 20:14:44.386: INFO: Created: latency-svc-7rmnk +Jun 3 20:14:44.419: INFO: Got endpoints: latency-svc-qtw2n [748.508941ms] +Jun 3 20:14:44.432: INFO: Created: latency-svc-khqrf +Jun 3 20:14:44.471: INFO: Got endpoints: latency-svc-txrmp [752.5763ms] +Jun 3 20:14:44.486: INFO: Created: latency-svc-xtmph +Jun 3 20:14:44.519: INFO: Got endpoints: latency-svc-4v2fs [750.053263ms] +Jun 3 20:14:44.533: INFO: Created: latency-svc-wwx2g +Jun 3 20:14:44.569: INFO: Got endpoints: latency-svc-ksdrl [751.011112ms] +Jun 3 20:14:44.585: INFO: Created: latency-svc-grflj +Jun 3 20:14:44.620: INFO: Got endpoints: latency-svc-sdzsl [751.964946ms] +Jun 3 20:14:44.633: INFO: Created: latency-svc-h8s92 +Jun 3 20:14:44.669: INFO: Got endpoints: latency-svc-pbtvt [750.124072ms] +Jun 3 20:14:44.681: INFO: Created: latency-svc-mzmdr +Jun 3 20:14:44.719: INFO: Got endpoints: latency-svc-x2bfg [748.801041ms] +Jun 3 20:14:44.735: INFO: Created: latency-svc-pm9rv +Jun 3 20:14:44.770: INFO: Got endpoints: latency-svc-6c8vf [750.294105ms] +Jun 3 20:14:44.786: INFO: Created: latency-svc-w9vqb +Jun 3 20:14:44.819: INFO: Got endpoints: latency-svc-95zdl [749.875429ms] +Jun 3 20:14:44.833: INFO: Created: latency-svc-js6mt +Jun 3 20:14:44.871: INFO: Got endpoints: latency-svc-2zvmg [740.025546ms] +Jun 3 20:14:44.887: INFO: Created: latency-svc-2mrxn +Jun 3 20:14:44.919: INFO: Got endpoints: latency-svc-szk88 [747.583569ms] +Jun 3 20:14:44.935: INFO: Created: latency-svc-d8xg7 +Jun 3 20:14:44.969: INFO: Got endpoints: latency-svc-gjs4j [749.4638ms] +Jun 3 20:14:44.985: INFO: Created: latency-svc-fb8pq +Jun 3 20:14:45.020: INFO: Got endpoints: latency-svc-mdwz8 [748.011141ms] +Jun 3 20:14:45.035: INFO: Created: latency-svc-pjtk5 +Jun 3 20:14:45.069: INFO: Got endpoints: latency-svc-pxc54 [750.573544ms] +Jun 3 20:14:45.082: INFO: Created: latency-svc-ztgcg +Jun 3 20:14:45.119: INFO: Got endpoints: latency-svc-7rmnk [744.995799ms] +Jun 3 20:14:45.134: INFO: Created: latency-svc-lc4s7 +Jun 3 20:14:45.168: INFO: Got endpoints: latency-svc-khqrf [749.056652ms] +Jun 3 20:14:45.184: INFO: Created: latency-svc-8w8n7 +Jun 3 20:14:45.221: INFO: Got endpoints: latency-svc-xtmph [750.062653ms] +Jun 3 20:14:45.236: INFO: Created: latency-svc-lx4qj +Jun 3 20:14:45.270: INFO: Got endpoints: latency-svc-wwx2g [750.916867ms] +Jun 3 20:14:45.283: INFO: Created: latency-svc-p4cs2 +Jun 3 20:14:45.319: INFO: Got endpoints: latency-svc-grflj [749.660203ms] +Jun 3 20:14:45.333: INFO: Created: latency-svc-wvxxs +Jun 3 20:14:45.369: INFO: Got endpoints: latency-svc-h8s92 [748.363947ms] +Jun 3 20:14:45.383: INFO: Created: latency-svc-8xv8n +Jun 3 20:14:45.421: INFO: Got endpoints: latency-svc-mzmdr [751.295862ms] +Jun 3 20:14:45.435: INFO: Created: latency-svc-6q4f2 +Jun 3 20:14:45.469: INFO: Got endpoints: latency-svc-pm9rv [750.406085ms] +Jun 3 20:14:45.483: INFO: Created: latency-svc-7q74p +Jun 3 20:14:45.525: INFO: Got endpoints: latency-svc-w9vqb [755.753224ms] +Jun 3 20:14:45.542: INFO: Created: latency-svc-v2kwx +Jun 3 20:14:45.570: INFO: Got endpoints: latency-svc-js6mt [750.895113ms] +Jun 3 20:14:45.584: INFO: Created: latency-svc-wp9t2 +Jun 3 20:14:45.619: INFO: 
Got endpoints: latency-svc-2mrxn [747.933531ms] +Jun 3 20:14:45.635: INFO: Created: latency-svc-99ph8 +Jun 3 20:14:45.670: INFO: Got endpoints: latency-svc-d8xg7 [750.751887ms] +Jun 3 20:14:45.686: INFO: Created: latency-svc-68f67 +Jun 3 20:14:45.719: INFO: Got endpoints: latency-svc-fb8pq [749.274334ms] +Jun 3 20:14:45.732: INFO: Created: latency-svc-2bl4m +Jun 3 20:14:45.769: INFO: Got endpoints: latency-svc-pjtk5 [748.638493ms] +Jun 3 20:14:45.782: INFO: Created: latency-svc-f77wg +Jun 3 20:14:45.820: INFO: Got endpoints: latency-svc-ztgcg [750.631201ms] +Jun 3 20:14:45.831: INFO: Created: latency-svc-ghclr +Jun 3 20:14:45.869: INFO: Got endpoints: latency-svc-lc4s7 [750.195081ms] +Jun 3 20:14:45.883: INFO: Created: latency-svc-gr4zq +Jun 3 20:14:45.919: INFO: Got endpoints: latency-svc-8w8n7 [750.845882ms] +Jun 3 20:14:45.941: INFO: Created: latency-svc-nz5ct +Jun 3 20:14:45.969: INFO: Got endpoints: latency-svc-lx4qj [747.933216ms] +Jun 3 20:14:45.984: INFO: Created: latency-svc-wwfh4 +Jun 3 20:14:46.019: INFO: Got endpoints: latency-svc-p4cs2 [749.414559ms] +Jun 3 20:14:46.035: INFO: Created: latency-svc-jxntj +Jun 3 20:14:46.071: INFO: Got endpoints: latency-svc-wvxxs [751.256641ms] +Jun 3 20:14:46.083: INFO: Created: latency-svc-vgd79 +Jun 3 20:14:46.119: INFO: Got endpoints: latency-svc-8xv8n [750.041398ms] +Jun 3 20:14:46.165: INFO: Created: latency-svc-6wccd +Jun 3 20:14:46.170: INFO: Got endpoints: latency-svc-6q4f2 [749.224098ms] +Jun 3 20:14:46.186: INFO: Created: latency-svc-6hhdh +Jun 3 20:14:46.220: INFO: Got endpoints: latency-svc-7q74p [750.165555ms] +Jun 3 20:14:46.234: INFO: Created: latency-svc-khwcr +Jun 3 20:14:46.270: INFO: Got endpoints: latency-svc-v2kwx [743.971154ms] +Jun 3 20:14:46.284: INFO: Created: latency-svc-x865r +Jun 3 20:14:46.319: INFO: Got endpoints: latency-svc-wp9t2 [749.084561ms] +Jun 3 20:14:46.333: INFO: Created: latency-svc-tg7cr +Jun 3 20:14:46.369: INFO: Got endpoints: latency-svc-99ph8 [749.860851ms] +Jun 3 20:14:46.384: INFO: Created: latency-svc-g9drr +Jun 3 20:14:46.419: INFO: Got endpoints: latency-svc-68f67 [748.612748ms] +Jun 3 20:14:46.437: INFO: Created: latency-svc-xb49g +Jun 3 20:14:46.471: INFO: Got endpoints: latency-svc-2bl4m [751.913165ms] +Jun 3 20:14:46.486: INFO: Created: latency-svc-7bbb7 +Jun 3 20:14:46.519: INFO: Got endpoints: latency-svc-f77wg [750.01677ms] +Jun 3 20:14:46.533: INFO: Created: latency-svc-mhlsz +Jun 3 20:14:46.569: INFO: Got endpoints: latency-svc-ghclr [749.334489ms] +Jun 3 20:14:46.585: INFO: Created: latency-svc-z82dj +Jun 3 20:14:46.620: INFO: Got endpoints: latency-svc-gr4zq [750.627362ms] +Jun 3 20:14:46.634: INFO: Created: latency-svc-7r55h +Jun 3 20:14:46.669: INFO: Got endpoints: latency-svc-nz5ct [749.888781ms] +Jun 3 20:14:46.687: INFO: Created: latency-svc-zszc5 +Jun 3 20:14:46.721: INFO: Got endpoints: latency-svc-wwfh4 [751.697971ms] +Jun 3 20:14:46.736: INFO: Created: latency-svc-qxkhg +Jun 3 20:14:46.769: INFO: Got endpoints: latency-svc-jxntj [749.63418ms] +Jun 3 20:14:46.782: INFO: Created: latency-svc-9vpsq +Jun 3 20:14:46.819: INFO: Got endpoints: latency-svc-vgd79 [747.955623ms] +Jun 3 20:14:46.833: INFO: Created: latency-svc-6mn4q +Jun 3 20:14:46.869: INFO: Got endpoints: latency-svc-6wccd [750.407416ms] +Jun 3 20:14:46.881: INFO: Created: latency-svc-zcngz +Jun 3 20:14:46.919: INFO: Got endpoints: latency-svc-6hhdh [749.390207ms] +Jun 3 20:14:46.933: INFO: Created: latency-svc-xqrtz +Jun 3 20:14:46.970: INFO: Got endpoints: latency-svc-khwcr [749.959183ms] +Jun 3 20:14:46.984: 
INFO: Created: latency-svc-bfzjh +Jun 3 20:14:47.021: INFO: Got endpoints: latency-svc-x865r [750.942543ms] +Jun 3 20:14:47.036: INFO: Created: latency-svc-sz5t7 +Jun 3 20:14:47.069: INFO: Got endpoints: latency-svc-tg7cr [749.699748ms] +Jun 3 20:14:47.081: INFO: Created: latency-svc-5j4vt +Jun 3 20:14:47.121: INFO: Got endpoints: latency-svc-g9drr [751.224943ms] +Jun 3 20:14:47.150: INFO: Created: latency-svc-krjmc +Jun 3 20:14:47.168: INFO: Got endpoints: latency-svc-xb49g [749.563499ms] +Jun 3 20:14:47.182: INFO: Created: latency-svc-z4942 +Jun 3 20:14:47.219: INFO: Got endpoints: latency-svc-7bbb7 [748.08779ms] +Jun 3 20:14:47.233: INFO: Created: latency-svc-x762q +Jun 3 20:14:47.269: INFO: Got endpoints: latency-svc-mhlsz [750.053299ms] +Jun 3 20:14:47.283: INFO: Created: latency-svc-px4wv +Jun 3 20:14:47.319: INFO: Got endpoints: latency-svc-z82dj [749.432281ms] +Jun 3 20:14:47.332: INFO: Created: latency-svc-ckwt4 +Jun 3 20:14:47.371: INFO: Got endpoints: latency-svc-7r55h [751.307843ms] +Jun 3 20:14:47.384: INFO: Created: latency-svc-s2pk7 +Jun 3 20:14:47.418: INFO: Got endpoints: latency-svc-zszc5 [749.076996ms] +Jun 3 20:14:47.432: INFO: Created: latency-svc-fq29v +Jun 3 20:14:47.469: INFO: Got endpoints: latency-svc-qxkhg [747.654115ms] +Jun 3 20:14:47.484: INFO: Created: latency-svc-p2888 +Jun 3 20:14:47.520: INFO: Got endpoints: latency-svc-9vpsq [750.641017ms] +Jun 3 20:14:47.533: INFO: Created: latency-svc-frnx8 +Jun 3 20:14:47.569: INFO: Got endpoints: latency-svc-6mn4q [750.807056ms] +Jun 3 20:14:47.583: INFO: Created: latency-svc-4qg85 +Jun 3 20:14:47.619: INFO: Got endpoints: latency-svc-zcngz [749.550648ms] +Jun 3 20:14:47.633: INFO: Created: latency-svc-mw92f +Jun 3 20:14:47.669: INFO: Got endpoints: latency-svc-xqrtz [750.14352ms] +Jun 3 20:14:47.682: INFO: Created: latency-svc-m59s6 +Jun 3 20:14:47.718: INFO: Got endpoints: latency-svc-bfzjh [748.037691ms] +Jun 3 20:14:47.735: INFO: Created: latency-svc-f8bgc +Jun 3 20:14:47.770: INFO: Got endpoints: latency-svc-sz5t7 [749.579041ms] +Jun 3 20:14:47.786: INFO: Created: latency-svc-mmd6g +Jun 3 20:14:47.820: INFO: Got endpoints: latency-svc-5j4vt [750.839337ms] +Jun 3 20:14:47.842: INFO: Created: latency-svc-pmjdc +Jun 3 20:14:47.869: INFO: Got endpoints: latency-svc-krjmc [748.563531ms] +Jun 3 20:14:47.885: INFO: Created: latency-svc-9mj9b +Jun 3 20:14:47.920: INFO: Got endpoints: latency-svc-z4942 [751.200261ms] +Jun 3 20:14:47.933: INFO: Created: latency-svc-5prpx +Jun 3 20:14:47.970: INFO: Got endpoints: latency-svc-x762q [750.658932ms] +Jun 3 20:14:48.019: INFO: Got endpoints: latency-svc-px4wv [749.695111ms] +Jun 3 20:14:48.071: INFO: Got endpoints: latency-svc-ckwt4 [752.238374ms] +Jun 3 20:14:48.120: INFO: Got endpoints: latency-svc-s2pk7 [749.528128ms] +Jun 3 20:14:48.170: INFO: Got endpoints: latency-svc-fq29v [751.438193ms] +Jun 3 20:14:48.228: INFO: Got endpoints: latency-svc-p2888 [759.29946ms] +Jun 3 20:14:48.269: INFO: Got endpoints: latency-svc-frnx8 [748.875247ms] +Jun 3 20:14:48.320: INFO: Got endpoints: latency-svc-4qg85 [750.540903ms] +Jun 3 20:14:48.374: INFO: Got endpoints: latency-svc-mw92f [754.769571ms] +Jun 3 20:14:48.419: INFO: Got endpoints: latency-svc-m59s6 [749.360049ms] +Jun 3 20:14:48.469: INFO: Got endpoints: latency-svc-f8bgc [751.419746ms] +Jun 3 20:14:48.520: INFO: Got endpoints: latency-svc-mmd6g [749.30947ms] +Jun 3 20:14:48.569: INFO: Got endpoints: latency-svc-pmjdc [749.365574ms] +Jun 3 20:14:48.621: INFO: Got endpoints: latency-svc-9mj9b [752.210561ms] +Jun 3 20:14:48.670: 
INFO: Got endpoints: latency-svc-5prpx [750.135596ms] +Jun 3 20:14:48.670: INFO: Latencies: [22.746079ms 27.174369ms 35.24382ms 44.868396ms 51.377694ms 61.200752ms 75.644613ms 80.759436ms 90.474707ms 100.649817ms 110.437637ms 126.400075ms 134.516146ms 145.847586ms 145.873727ms 151.325991ms 151.583114ms 152.333773ms 153.512534ms 154.15907ms 157.013841ms 157.068922ms 161.301903ms 162.740014ms 163.303947ms 163.793887ms 164.069036ms 164.792319ms 165.029755ms 166.494444ms 167.575493ms 168.224384ms 168.690309ms 169.23371ms 171.838056ms 171.89336ms 176.820896ms 204.0703ms 246.660042ms 283.946115ms 318.674135ms 359.337867ms 399.641343ms 442.810225ms 475.013817ms 516.488775ms 552.460971ms 585.620827ms 635.619464ms 671.167653ms 711.247323ms 740.025546ms 741.038394ms 742.833721ms 743.971154ms 744.995799ms 747.008705ms 747.385571ms 747.386315ms 747.583569ms 747.654115ms 747.835307ms 747.85154ms 747.933216ms 747.933531ms 747.955623ms 747.968544ms 748.011141ms 748.037691ms 748.08779ms 748.363947ms 748.418307ms 748.45991ms 748.508941ms 748.563531ms 748.612748ms 748.638493ms 748.664939ms 748.690566ms 748.801041ms 748.86452ms 748.875247ms 748.882246ms 748.885784ms 749.056652ms 749.076996ms 749.083303ms 749.084561ms 749.127161ms 749.148604ms 749.224098ms 749.274334ms 749.30947ms 749.334489ms 749.360049ms 749.365574ms 749.390207ms 749.398419ms 749.414559ms 749.432281ms 749.436835ms 749.446539ms 749.456285ms 749.4638ms 749.528128ms 749.543847ms 749.550648ms 749.563499ms 749.579041ms 749.605003ms 749.63418ms 749.641646ms 749.658136ms 749.660203ms 749.675895ms 749.690653ms 749.693329ms 749.695111ms 749.699748ms 749.753566ms 749.760254ms 749.809554ms 749.845677ms 749.860851ms 749.875429ms 749.888781ms 749.940585ms 749.954412ms 749.959183ms 749.979039ms 749.981915ms 750.01677ms 750.041398ms 750.053263ms 750.053299ms 750.062653ms 750.085692ms 750.124072ms 750.135596ms 750.136642ms 750.14352ms 750.165555ms 750.171534ms 750.195081ms 750.210529ms 750.251952ms 750.266072ms 750.294105ms 750.406085ms 750.407416ms 750.40919ms 750.540903ms 750.573544ms 750.627362ms 750.631201ms 750.641017ms 750.658932ms 750.751887ms 750.807056ms 750.839337ms 750.845882ms 750.88491ms 750.895113ms 750.916867ms 750.933225ms 750.939333ms 750.942543ms 750.977329ms 751.011112ms 751.032572ms 751.083839ms 751.200261ms 751.224943ms 751.256641ms 751.295862ms 751.307843ms 751.419746ms 751.438193ms 751.456134ms 751.686742ms 751.697971ms 751.830523ms 751.913165ms 751.964946ms 752.007319ms 752.108775ms 752.210561ms 752.227548ms 752.238374ms 752.27455ms 752.5763ms 753.452708ms 753.479098ms 754.656975ms 754.769571ms 755.753224ms 756.920606ms 758.605473ms 759.29946ms 761.168329ms] +Jun 3 20:14:48.670: INFO: 50 %ile: 749.436835ms +Jun 3 20:14:48.670: INFO: 90 %ile: 751.697971ms +Jun 3 20:14:48.670: INFO: 99 %ile: 759.29946ms +Jun 3 20:14:48.670: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:14:48.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-9080" for this suite. 
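The summary above shows 200 latency samples with the 50th, 90th, and 99th percentiles all near 750ms, which the `should not be very high` spec checks against its built-in limits. To reproduce a single spec like this outside a full conformance run, the upstream `e2e.test` binary can be pointed at the cluster with a ginkgo focus; a sketch, with flag spellings assumed from the upstream conformance docs and an illustrative kubeconfig path:

```
e2e.test --ginkgo.focus='Service endpoints latency' \
  --provider=skeleton --kubeconfig="$HOME/.kube/config"
```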
+Jun 3 20:15:06.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:15:06.780: INFO: namespace svc-latency-9080 deletion completed in 18.105562108s + +• [SLOW TEST:30.907 seconds] +[sig-network] Service endpoints latency +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should not be very high [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:15:06.781: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should release no longer matching pods [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Jun 3 20:15:06.846: INFO: Pod name pod-release: Found 0 pods out of 1 +Jun 3 20:15:11.851: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:15:12.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-3813" for this suite. 
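The ReplicationController spec above relabels a matched pod and verifies it is released (orphaned) rather than deleted, with the controller creating a replacement. The same behavior can be reproduced by hand; a minimal sketch in which the RC name matches the log but the pod spec itself is illustrative:

```
# Create a one-replica RC, then change the pod's matched label.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    app: pod-release
  template:
    metadata:
      labels:
        app: pod-release
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
EOF
POD=$(kubectl get pods -l app=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" app=released --overwrite  # pod no longer matches the selector
kubectl get pods -l app=pod-release                # the RC has spawned a replacement; the old pod still runs
```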
+Jun 3 20:15:18.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:15:18.993: INFO: namespace replication-controller-3813 deletion completed in 6.111406664s + +• [SLOW TEST:12.213 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:15:18.993: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Jun 3 20:15:19.506: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Jun 3 20:15:21.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 3 20:15:23.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 3 20:15:25.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812119, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Jun 3 20:15:28.541: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +Jun 3 20:15:28.544: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-926-crds.webhook.example.com via the AdmissionRegistration API +Jun 3 20:15:29.096: INFO: Waiting for webhook configuration to be ready... +STEP: Creating a custom resource that should be mutated by the webhook +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:15:29.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-7605" for this suite. +Jun 3 20:15:35.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:15:35.923: INFO: namespace webhook-7605 deletion completed in 6.103836397s +STEP: Destroying namespace "webhook-7605-markers" for this suite. 
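Before the webhook is registered, the framework waits for the sample deployment to become available and for the webhook service to gain an endpoint, which is what the repeated DeploymentStatus dumps above record. The same readiness can be checked interactively; namespace and object names below are taken from this run:

```
kubectl -n webhook-7605 rollout status deployment/sample-webhook-deployment
kubectl -n webhook-7605 get endpoints e2e-test-webhook   # expect one ready address
```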
+Jun 3 20:15:41.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:15:42.024: INFO: namespace webhook-7605-markers deletion completed in 6.100847782s +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 + +• [SLOW TEST:23.046 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:15:42.040: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: create the pod with lifecycle hook +STEP: delete the pod with lifecycle hook +Jun 3 20:15:50.137: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 3 20:15:50.140: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 3 20:15:52.140: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 3 20:15:52.144: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 3 20:15:54.140: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 3 20:15:54.145: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 3 20:15:56.140: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 3 20:15:56.145: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 3 20:15:58.140: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 3 20:15:58.145: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 3 20:16:00.140: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 3 20:16:00.144: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 3 20:16:02.140: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 3 20:16:02.145: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 3 20:16:04.140: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 3 20:16:04.144: INFO: Pod pod-with-prestop-exec-hook still exists +Jun 3 20:16:06.140: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jun 3 20:16:06.143: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:16:06.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-7192" for this suite. +Jun 3 20:16:34.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:16:34.262: INFO: namespace container-lifecycle-hook-7192 deletion completed in 28.098346998s + +• [SLOW TEST:52.222 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 + when create a pod with lifecycle hook + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 + should execute prestop exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. 
[Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:16:34.262: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and capture the life of a pod. [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +STEP: Creating a Pod that fits quota +STEP: Ensuring ResourceQuota status captures the pod usage +STEP: Not allowing a pod to be created that exceeds remaining quota +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) +STEP: Ensuring a pod cannot update its resource requirements +STEP: Ensuring attempts to update pod resource requirements did not change quota usage +STEP: Deleting the pod +STEP: Ensuring resource quota status released the pod usage +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:16:47.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-7280" for this suite. +Jun 3 20:16:53.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:16:53.476: INFO: namespace resourcequota-7280 deletion completed in 6.107136235s + +• [SLOW TEST:19.214 seconds] +[sig-api-machinery] ResourceQuota +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a pod. 
[Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:16:53.477: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Jun 3 20:16:54.034: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Jun 3 20:16:57.060: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API +STEP: Creating a dummy validating-webhook-configuration object +STEP: Deleting the validating-webhook-configuration, which should be possible to remove +STEP: Creating a dummy mutating-webhook-configuration object +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:16:57.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-3549" for this suite. +Jun 3 20:17:03.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:17:03.277: INFO: namespace webhook-3549 deletion completed in 6.110319779s +STEP: Destroying namespace "webhook-3549-markers" for this suite. 
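This spec exercises admission self-protection: operations on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects must not be intercepted by the webhooks the test registers, so the dummy configurations it creates remain freely deletable. These are cluster-scoped objects and can be inspected directly; the object name in the delete command is illustrative:

```
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations
kubectl delete validatingwebhookconfiguration dummy-validating-webhook   # deletion must succeed
```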
+Jun 3 20:17:09.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:17:09.389: INFO: namespace webhook-3549-markers deletion completed in 6.111854403s +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 + +• [SLOW TEST:15.928 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:17:09.405: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename webhook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88 +STEP: Setting up server cert +STEP: Create role binding to let webhook read extension-apiserver-authentication +STEP: Deploying the webhook pod +STEP: Wait for the deployment to be ready +Jun 3 20:17:10.069: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service +STEP: Verifying the service has paired with the endpoint +Jun 3 20:17:13.090: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +Jun 3 20:17:13.094: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Registering the custom resource webhook via the AdmissionRegistration API +STEP: Creating a custom resource that should be denied by the webhook +STEP: Creating a custom resource whose deletion would be denied by the webhook +STEP: Updating the custom resource with disallowed data should be denied +STEP: Deleting the custom resource should be denied +STEP: Remove the offending key and value from the custom resource data +STEP: Deleting the updated custom resource should be successful +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 
3 20:17:14.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "webhook-816" for this suite. +Jun 3 20:17:20.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:17:20.381: INFO: namespace webhook-816 deletion completed in 6.116783501s +STEP: Destroying namespace "webhook-816-markers" for this suite. +Jun 3 20:17:26.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:17:26.477: INFO: namespace webhook-816-markers deletion completed in 6.095761528s +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103 + +• [SLOW TEST:17.086 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should be able to deny custom resource creation, update and deletion [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:17:26.492: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating a pod to test emptydir volume type on tmpfs +Jun 3 20:17:26.552: INFO: Waiting up to 5m0s for pod "pod-74c7a236-63db-46a8-b418-e1a2c6e66bf6" in namespace "emptydir-3884" to be "success or failure" +Jun 3 20:17:26.558: INFO: Pod "pod-74c7a236-63db-46a8-b418-e1a2c6e66bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010405ms +Jun 3 20:17:28.563: INFO: Pod "pod-74c7a236-63db-46a8-b418-e1a2c6e66bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010583687s +Jun 3 20:17:30.567: INFO: Pod "pod-74c7a236-63db-46a8-b418-e1a2c6e66bf6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015046021s +STEP: Saw pod success +Jun 3 20:17:30.568: INFO: Pod "pod-74c7a236-63db-46a8-b418-e1a2c6e66bf6" satisfied condition "success or failure" +Jun 3 20:17:30.570: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-74c7a236-63db-46a8-b418-e1a2c6e66bf6 container test-container: +STEP: delete the pod +Jun 3 20:17:30.602: INFO: Waiting for pod pod-74c7a236-63db-46a8-b418-e1a2c6e66bf6 to disappear +Jun 3 20:17:30.605: INFO: Pod pod-74c7a236-63db-46a8-b418-e1a2c6e66bf6 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:17:30.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-3884" for this suite. +Jun 3 20:17:36.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:17:36.715: INFO: namespace emptydir-3884 deletion completed in 6.105420804s + +• [SLOW TEST:10.223 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:17:36.715: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] pod should support shared volumes between containers [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating Pod +STEP: Waiting for the pod running +STEP: Geting the pod +STEP: Reading file content from the nginx-container +Jun 3 20:17:40.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec pod-sharedvolume-e2c73b90-5130-4c6a-a212-08247af70a90 -c busybox-main-container --namespace=emptydir-9232 -- cat /usr/share/volumeshare/shareddata.txt' +Jun 3 20:17:41.231: INFO: stderr: "" +Jun 3 20:17:41.231: INFO: stdout: "Hello from the busy-box sub-container\n" +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:17:41.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9232" for this suite. 
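The shared-volume spec above starts a pod whose containers mount the same emptyDir, writes a file from the sub-container, and reads it back through `kubectl exec` on the main container. A minimal hand-runnable equivalent, with all names illustrative rather than the test's generated ones:

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: share
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: share
      mountPath: /data
EOF
kubectl exec shared-volume-demo -c reader -- cat /data/msg.txt   # prints "hello"
```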
+Jun 3 20:17:47.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:17:47.346: INFO: namespace emptydir-9232 deletion completed in 6.109588691s + +• [SLOW TEST:10.631 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + pod should support shared volumes between containers [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:17:47.346: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77 +STEP: Creating service test in namespace statefulset-5296 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating a new StatefulSet +Jun 3 20:17:47.418: INFO: Found 0 stateful pods, waiting for 3 +Jun 3 20:17:57.423: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 3 20:17:57.423: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 3 20:17:57.423: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Jun 3 20:17:57.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-5296 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jun 3 20:17:57.672: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jun 3 20:17:57.672: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jun 3 20:17:57.672: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine +Jun 3 20:18:07.708: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Jun 3 20:18:17.730: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-5296 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jun 3 20:18:17.966: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jun 3 20:18:17.966: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jun 3 20:18:17.966: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jun 3 20:18:27.989: INFO: Waiting for StatefulSet statefulset-5296/ss2 to complete update +Jun 3 20:18:27.989: INFO: Waiting for Pod statefulset-5296/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jun 3 20:18:27.989: INFO: Waiting for Pod statefulset-5296/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jun 3 20:18:27.989: INFO: Waiting for Pod statefulset-5296/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jun 3 20:18:37.997: INFO: Waiting for StatefulSet statefulset-5296/ss2 to complete update +Jun 3 20:18:37.998: INFO: Waiting for Pod statefulset-5296/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jun 3 20:18:37.998: INFO: Waiting for Pod statefulset-5296/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jun 3 20:18:47.997: INFO: Waiting for StatefulSet statefulset-5296/ss2 to complete update +Jun 3 20:18:47.997: INFO: Waiting for Pod statefulset-5296/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 +Jun 3 20:18:57.996: INFO: Waiting for StatefulSet statefulset-5296/ss2 to complete update +STEP: Rolling back to a previous revision +Jun 3 20:19:07.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-5296 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jun 3 20:19:08.257: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jun 3 20:19:08.257: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jun 3 20:19:08.257: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jun 3 20:19:08.289: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Jun 3 20:19:18.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-5296 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jun 3 20:19:18.554: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jun 3 20:19:18.554: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jun 3 20:19:18.554: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jun 3 20:19:38.575: INFO: Waiting for StatefulSet statefulset-5296/ss2 to complete update +Jun 3 20:19:38.575: INFO: Waiting for Pod statefulset-5296/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 +Jun 3 20:19:48.583: INFO: Deleting all statefulset in ns statefulset-5296 +Jun 3 20:19:48.586: INFO: Scaling statefulset ss2 to 0 +Jun 3 20:20:18.601: INFO: Waiting for 
statefulset status.replicas updated to 0 +Jun 3 20:20:18.604: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:20:18.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-5296" for this suite. +Jun 3 20:20:24.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:20:24.717: INFO: namespace statefulset-5296 deletion completed in 6.097147447s + +• [SLOW TEST:157.371 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:20:24.718: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 +[It] should create services for rc [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: creating Redis RC +Jun 3 20:20:24.751: INFO: namespace kubectl-7994 +Jun 3 20:20:24.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-7994' +Jun 3 20:20:25.017: INFO: stderr: "" +Jun 3 20:20:25.017: INFO: stdout: "replicationcontroller/redis-master created\n" +STEP: Waiting for Redis master to start. +Jun 3 20:20:26.022: INFO: Selector matched 1 pods for map[app:redis] +Jun 3 20:20:26.022: INFO: Found 0 / 1 +Jun 3 20:20:27.021: INFO: Selector matched 1 pods for map[app:redis] +Jun 3 20:20:27.021: INFO: Found 1 / 1 +Jun 3 20:20:27.021: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jun 3 20:20:27.025: INFO: Selector matched 1 pods for map[app:redis] +Jun 3 20:20:27.025: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Jun 3 20:20:27.025: INFO: wait on redis-master startup in kubectl-7994 +Jun 3 20:20:27.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 logs redis-master-dlpm5 redis-master --namespace=kubectl-7994' +Jun 3 20:20:27.146: INFO: stderr: "" +Jun 3 20:20:27.146: INFO: stdout: "1:C 03 Jun 2020 20:20:26.035 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo\n1:C 03 Jun 2020 20:20:26.035 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=1, just started\n1:C 03 Jun 2020 20:20:26.035 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n1:M 03 Jun 2020 20:20:26.036 * Running mode=standalone, port=6379.\n1:M 03 Jun 2020 20:20:26.036 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Jun 2020 20:20:26.036 # Server initialized\n1:M 03 Jun 2020 20:20:26.036 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Jun 2020 20:20:26.036 * Ready to accept connections\n" +STEP: exposing RC +Jun 3 20:20:27.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7994' +Jun 3 20:20:27.264: INFO: stderr: "" +Jun 3 20:20:27.264: INFO: stdout: "service/rm2 exposed\n" +Jun 3 20:20:27.268: INFO: Service rm2 in namespace kubectl-7994 found. +STEP: exposing service +Jun 3 20:20:29.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7994' +Jun 3 20:20:29.387: INFO: stderr: "" +Jun 3 20:20:29.387: INFO: stdout: "service/rm3 exposed\n" +Jun 3 20:20:29.391: INFO: Service rm3 in namespace kubectl-7994 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:20:31.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7994" for this suite. 
+Jun 3 20:20:43.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:20:43.512: INFO: namespace kubectl-7994 deletion completed in 12.11061496s + +• [SLOW TEST:18.794 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Kubectl expose + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 + should create services for rc [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:20:43.512: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +Jun 3 20:20:43.571: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:20:44.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-4754" for this suite. 
+Jun 3 20:20:50.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:20:50.245: INFO: namespace custom-resource-definition-4754 deletion completed in 6.103427296s + +• [SLOW TEST:6.733 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42 + getting/updating/patching custom resource definition status sub-resource works [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SS +------------------------------ +[k8s.io] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [k8s.io] Security Context + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:20:50.245: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename security-context-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Security Context + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +Jun 3 20:20:50.289: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-fe34fdd2-6e83-4743-99bd-bd3e68727e79" in namespace "security-context-test-131" to be "success or failure" +Jun 3 20:20:50.292: INFO: Pod "busybox-privileged-false-fe34fdd2-6e83-4743-99bd-bd3e68727e79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.926723ms +Jun 3 20:20:52.297: INFO: Pod "busybox-privileged-false-fe34fdd2-6e83-4743-99bd-bd3e68727e79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007906663s +Jun 3 20:20:54.301: INFO: Pod "busybox-privileged-false-fe34fdd2-6e83-4743-99bd-bd3e68727e79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012383488s +Jun 3 20:20:54.301: INFO: Pod "busybox-privileged-false-fe34fdd2-6e83-4743-99bd-bd3e68727e79" satisfied condition "success or failure" +Jun 3 20:20:54.318: INFO: Got logs for pod "busybox-privileged-false-fe34fdd2-6e83-4743-99bd-bd3e68727e79": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [k8s.io] Security Context + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:20:54.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "security-context-test-131" for this suite. 
+Jun 3 20:21:00.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:21:00.422: INFO: namespace security-context-test-131 deletion completed in 6.098593002s + +• [SLOW TEST:10.177 seconds] +[k8s.io] Security Context +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 + When creating a pod with privileged + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:226 + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:21:00.422: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: create the rc1 +STEP: create the rc2 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well +STEP: delete the rc simpletest-rc-to-be-deleted +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Jun 3 20:21:10.542: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +W0603 20:21:10.542618 25 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
+[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:21:10.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-9598" for this suite. +Jun 3 20:21:16.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:21:16.646: INFO: namespace gc-9598 deletion completed in 6.100034788s + +• [SLOW TEST:16.224 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:21:16.647: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating projection with secret that has name projected-secret-test-da6dd4ec-a37e-4872-9b6a-141b71068df1 +STEP: Creating a pod to test consume secrets +Jun 3 20:21:16.696: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4c3fcab0-4083-4ab6-b482-ea0883dc8b07" in namespace "projected-2493" to be "success or failure" +Jun 3 20:21:16.699: INFO: Pod "pod-projected-secrets-4c3fcab0-4083-4ab6-b482-ea0883dc8b07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83211ms +Jun 3 20:21:18.703: INFO: Pod "pod-projected-secrets-4c3fcab0-4083-4ab6-b482-ea0883dc8b07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006366081s +Jun 3 20:21:20.707: INFO: Pod "pod-projected-secrets-4c3fcab0-4083-4ab6-b482-ea0883dc8b07": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011263918s +STEP: Saw pod success +Jun 3 20:21:20.707: INFO: Pod "pod-projected-secrets-4c3fcab0-4083-4ab6-b482-ea0883dc8b07" satisfied condition "success or failure" +Jun 3 20:21:20.711: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-projected-secrets-4c3fcab0-4083-4ab6-b482-ea0883dc8b07 container projected-secret-volume-test: +STEP: delete the pod +Jun 3 20:21:20.733: INFO: Waiting for pod pod-projected-secrets-4c3fcab0-4083-4ab6-b482-ea0883dc8b07 to disappear +Jun 3 20:21:20.737: INFO: Pod pod-projected-secrets-4c3fcab0-4083-4ab6-b482-ea0883dc8b07 no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:21:20.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2493" for this suite. +Jun 3 20:21:26.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:21:26.845: INFO: namespace projected-2493 deletion completed in 6.104015544s + +• [SLOW TEST:10.199 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:21:26.845: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating configMap with name configmap-test-volume-2f94e310-148f-4420-a17e-2b4828802c80 +STEP: Creating a pod to test consume configMaps +Jun 3 20:21:26.909: INFO: Waiting up to 5m0s for pod "pod-configmaps-5670c372-86bf-4388-9a92-70cca00f4201" in namespace "configmap-1340" to be "success or failure" +Jun 3 20:21:26.915: INFO: Pod "pod-configmaps-5670c372-86bf-4388-9a92-70cca00f4201": Phase="Pending", Reason="", readiness=false. Elapsed: 5.859012ms +Jun 3 20:21:28.919: INFO: Pod "pod-configmaps-5670c372-86bf-4388-9a92-70cca00f4201": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010110813s +STEP: Saw pod success +Jun 3 20:21:28.919: INFO: Pod "pod-configmaps-5670c372-86bf-4388-9a92-70cca00f4201" satisfied condition "success or failure" +Jun 3 20:21:28.929: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-configmaps-5670c372-86bf-4388-9a92-70cca00f4201 container configmap-volume-test: +STEP: delete the pod +Jun 3 20:21:28.953: INFO: Waiting for pod pod-configmaps-5670c372-86bf-4388-9a92-70cca00f4201 to disappear +Jun 3 20:21:28.955: INFO: Pod pod-configmaps-5670c372-86bf-4388-9a92-70cca00f4201 no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:21:28.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-1340" for this suite. +Jun 3 20:21:34.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:21:35.073: INFO: namespace configmap-1340 deletion completed in 6.113991671s + +• [SLOW TEST:8.228 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:21:35.074: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Jun 3 20:21:35.119: INFO: Waiting up to 5m0s for pod "pod-a27e4322-1844-4d47-83eb-a32daf4de977" in namespace "emptydir-1950" to be "success or failure" +Jun 3 20:21:35.121: INFO: Pod "pod-a27e4322-1844-4d47-83eb-a32daf4de977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.721306ms +Jun 3 20:21:37.126: INFO: Pod "pod-a27e4322-1844-4d47-83eb-a32daf4de977": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006890145s +STEP: Saw pod success +Jun 3 20:21:37.126: INFO: Pod "pod-a27e4322-1844-4d47-83eb-a32daf4de977" satisfied condition "success or failure" +Jun 3 20:21:37.129: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-a27e4322-1844-4d47-83eb-a32daf4de977 container test-container: +STEP: delete the pod +Jun 3 20:21:37.150: INFO: Waiting for pod pod-a27e4322-1844-4d47-83eb-a32daf4de977 to disappear +Jun 3 20:21:37.153: INFO: Pod pod-a27e4322-1844-4d47-83eb-a32daf4de977 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:21:37.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1950" for this suite. +Jun 3 20:21:43.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:21:43.264: INFO: namespace emptydir-1950 deletion completed in 6.107582312s + +• [SLOW TEST:8.190 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:21:43.264: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: creating the pod +Jun 3 20:21:43.297: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:21:46.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-3503" for this suite. 
+Jun 3 20:21:52.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:21:52.605: INFO: namespace init-container-3503 deletion completed in 6.112646722s + +• [SLOW TEST:9.341 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693 + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:21:52.606: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225 +[BeforeEach] Update Demo + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 +[It] should create and stop a replication controller [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: creating a replication controller +Jun 3 20:21:52.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-1598' +Jun 3 20:21:52.849: INFO: stderr: "" +Jun 3 20:21:52.849: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 3 20:21:52.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1598' +Jun 3 20:21:52.960: INFO: stderr: "" +Jun 3 20:21:52.960: INFO: stdout: "update-demo-nautilus-7qqcw update-demo-nautilus-lc52h " +Jun 3 20:21:52.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-7qqcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1598' +Jun 3 20:21:53.055: INFO: stderr: "" +Jun 3 20:21:53.055: INFO: stdout: "" +Jun 3 20:21:53.055: INFO: update-demo-nautilus-7qqcw is created but not running +Jun 3 20:21:58.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1598' +Jun 3 20:21:58.156: INFO: stderr: "" +Jun 3 20:21:58.156: INFO: stdout: "update-demo-nautilus-7qqcw update-demo-nautilus-lc52h " +Jun 3 20:21:58.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-7qqcw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1598' +Jun 3 20:21:58.253: INFO: stderr: "" +Jun 3 20:21:58.253: INFO: stdout: "true" +Jun 3 20:21:58.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-7qqcw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1598' +Jun 3 20:21:58.347: INFO: stderr: "" +Jun 3 20:21:58.347: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 3 20:21:58.347: INFO: validating pod update-demo-nautilus-7qqcw +Jun 3 20:21:58.353: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 3 20:21:58.353: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 3 20:21:58.353: INFO: update-demo-nautilus-7qqcw is verified up and running +Jun 3 20:21:58.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-lc52h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1598' +Jun 3 20:21:58.446: INFO: stderr: "" +Jun 3 20:21:58.446: INFO: stdout: "true" +Jun 3 20:21:58.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-lc52h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1598' +Jun 3 20:21:58.546: INFO: stderr: "" +Jun 3 20:21:58.546: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 3 20:21:58.546: INFO: validating pod update-demo-nautilus-lc52h +Jun 3 20:21:58.551: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 3 20:21:58.551: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 3 20:21:58.551: INFO: update-demo-nautilus-lc52h is verified up and running +STEP: using delete to clean up resources +Jun 3 20:21:58.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete --grace-period=0 --force -f - --namespace=kubectl-1598' +Jun 3 20:21:58.654: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jun 3 20:21:58.654: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jun 3 20:21:58.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1598' +Jun 3 20:21:58.767: INFO: stderr: "No resources found in kubectl-1598 namespace.\n" +Jun 3 20:21:58.767: INFO: stdout: "" +Jun 3 20:21:58.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -l name=update-demo --namespace=kubectl-1598 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 3 20:21:58.863: INFO: stderr: "" +Jun 3 20:21:58.863: INFO: stdout: "update-demo-nautilus-7qqcw\nupdate-demo-nautilus-lc52h\n" +Jun 3 20:21:59.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1598' +Jun 3 20:21:59.471: INFO: stderr: "No resources found in kubectl-1598 namespace.\n" +Jun 3 20:21:59.471: INFO: stdout: "" +Jun 3 20:21:59.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -l name=update-demo --namespace=kubectl-1598 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 3 20:21:59.575: INFO: stderr: "" +Jun 3 20:21:59.575: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:21:59.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1598" for this suite. 
+Jun 3 20:22:05.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:22:05.682: INFO: namespace kubectl-1598 deletion completed in 6.099822981s + +• [SLOW TEST:13.077 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + Update Demo + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275 + should create and stop a replication controller [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:22:05.683: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating a pod to test downward API volume plugin +Jun 3 20:22:05.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43075a96-4208-4e34-94e3-8c6a918d205b" in namespace "downward-api-3251" to be "success or failure" +Jun 3 20:22:05.732: INFO: Pod "downwardapi-volume-43075a96-4208-4e34-94e3-8c6a918d205b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.936188ms +Jun 3 20:22:07.736: INFO: Pod "downwardapi-volume-43075a96-4208-4e34-94e3-8c6a918d205b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007488084s +STEP: Saw pod success +Jun 3 20:22:07.736: INFO: Pod "downwardapi-volume-43075a96-4208-4e34-94e3-8c6a918d205b" satisfied condition "success or failure" +Jun 3 20:22:07.739: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod downwardapi-volume-43075a96-4208-4e34-94e3-8c6a918d205b container client-container: +STEP: delete the pod +Jun 3 20:22:07.763: INFO: Waiting for pod downwardapi-volume-43075a96-4208-4e34-94e3-8c6a918d205b to disappear +Jun 3 20:22:07.765: INFO: Pod downwardapi-volume-43075a96-4208-4e34-94e3-8c6a918d205b no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:22:07.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-3251" for this suite. 
+Jun 3 20:22:13.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:22:13.869: INFO: namespace downward-api-3251 deletion completed in 6.099373237s + +• [SLOW TEST:8.187 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:22:13.870: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 +[It] should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating a pod to test downward API volume plugin +Jun 3 20:22:13.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-476a1d92-4446-4bc2-bfbe-3bb4bb53b844" in namespace "projected-6856" to be "success or failure" +Jun 3 20:22:13.917: INFO: Pod "downwardapi-volume-476a1d92-4446-4bc2-bfbe-3bb4bb53b844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310989ms +Jun 3 20:22:15.921: INFO: Pod "downwardapi-volume-476a1d92-4446-4bc2-bfbe-3bb4bb53b844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006914021s +STEP: Saw pod success +Jun 3 20:22:15.921: INFO: Pod "downwardapi-volume-476a1d92-4446-4bc2-bfbe-3bb4bb53b844" satisfied condition "success or failure" +Jun 3 20:22:15.925: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod downwardapi-volume-476a1d92-4446-4bc2-bfbe-3bb4bb53b844 container client-container: +STEP: delete the pod +Jun 3 20:22:15.948: INFO: Waiting for pod downwardapi-volume-476a1d92-4446-4bc2-bfbe-3bb4bb53b844 to disappear +Jun 3 20:22:15.951: INFO: Pod downwardapi-volume-476a1d92-4446-4bc2-bfbe-3bb4bb53b844 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:22:15.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6856" for this suite. 
+Jun 3 20:22:21.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:22:22.059: INFO: namespace projected-6856 deletion completed in 6.103282943s + +• [SLOW TEST:8.189 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 + should provide container's memory limit [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-api-machinery] ResourceQuota + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:22:22.059: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename resourcequota +STEP: Waiting for a default service account to be provisioned in namespace +[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Counting existing ResourceQuota +STEP: Creating a ResourceQuota +STEP: Ensuring resource quota status is calculated +[AfterEach] [sig-api-machinery] ResourceQuota + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:22:29.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "resourcequota-3335" for this suite. +Jun 3 20:22:35.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:22:35.216: INFO: namespace resourcequota-3335 deletion completed in 6.101065324s + +• [SLOW TEST:13.157 seconds] +[sig-api-machinery] ResourceQuota +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:22:35.217: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating pod pod-subpath-test-downwardapi-vsl4 +STEP: Creating a pod to test atomic-volume-subpath +Jun 3 20:22:35.272: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vsl4" in namespace "subpath-2433" to be "success or failure" +Jun 3 20:22:35.275: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.248591ms +Jun 3 20:22:37.280: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 2.007407132s +Jun 3 20:22:39.284: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 4.011785841s +Jun 3 20:22:41.288: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 6.015958162s +Jun 3 20:22:43.293: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 8.020369083s +Jun 3 20:22:45.298: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 10.02561877s +Jun 3 20:22:47.302: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 12.030287054s +Jun 3 20:22:49.307: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 14.034518829s +Jun 3 20:22:51.311: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 16.0390319s +Jun 3 20:22:53.316: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 18.043704154s +Jun 3 20:22:55.321: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 20.048507314s +Jun 3 20:22:57.326: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Running", Reason="", readiness=true. Elapsed: 22.053839808s +Jun 3 20:22:59.331: INFO: Pod "pod-subpath-test-downwardapi-vsl4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.058427745s +STEP: Saw pod success +Jun 3 20:22:59.331: INFO: Pod "pod-subpath-test-downwardapi-vsl4" satisfied condition "success or failure" +Jun 3 20:22:59.333: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-subpath-test-downwardapi-vsl4 container test-container-subpath-downwardapi-vsl4: +STEP: delete the pod +Jun 3 20:22:59.365: INFO: Waiting for pod pod-subpath-test-downwardapi-vsl4 to disappear +Jun 3 20:22:59.368: INFO: Pod pod-subpath-test-downwardapi-vsl4 no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-vsl4 +Jun 3 20:22:59.368: INFO: Deleting pod "pod-subpath-test-downwardapi-vsl4" in namespace "subpath-2433" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:22:59.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-2433" for this suite. +Jun 3 20:23:05.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:23:05.474: INFO: namespace subpath-2433 deletion completed in 6.098081924s + +• [SLOW TEST:30.257 seconds] +[sig-storage] Subpath +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:23:05.474: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename crd-publish-openapi +STEP: Waiting for a default service account to be provisioned in namespace +[It] works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +Jun 3 20:23:05.507: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: client-side validation (kubectl create and apply) allows request with any unknown properties +Jun 3 20:23:09.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-7873 create -f -' +Jun 3 20:23:09.788: INFO: stderr: "" +Jun 3 20:23:09.788: INFO: stdout: "e2e-test-crd-publish-openapi-5957-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Jun 3 20:23:09.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 
--namespace=crd-publish-openapi-7873 delete e2e-test-crd-publish-openapi-5957-crds test-cr' +Jun 3 20:23:09.963: INFO: stderr: "" +Jun 3 20:23:09.963: INFO: stdout: "e2e-test-crd-publish-openapi-5957-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +Jun 3 20:23:09.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-7873 apply -f -' +Jun 3 20:23:10.185: INFO: stderr: "" +Jun 3 20:23:10.185: INFO: stdout: "e2e-test-crd-publish-openapi-5957-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" +Jun 3 20:23:10.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-7873 delete e2e-test-crd-publish-openapi-5957-crds test-cr' +Jun 3 20:23:10.290: INFO: stderr: "" +Jun 3 20:23:10.290: INFO: stdout: "e2e-test-crd-publish-openapi-5957-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR +Jun 3 20:23:10.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 explain e2e-test-crd-publish-openapi-5957-crds' +Jun 3 20:23:10.546: INFO: stderr: "" +Jun 3 20:23:10.546: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5957-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:23:14.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "crd-publish-openapi-7873" for this suite. 
+Jun 3 20:23:20.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:23:20.312: INFO: namespace crd-publish-openapi-7873 deletion completed in 6.109138088s + +• [SLOW TEST:14.837 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields in an embedded object [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:23:20.312: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should adopt matching pods on creation [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Given a Pod with a 'name' label pod-adoption is created +STEP: When a replication controller with a matching selector is created +STEP: Then the orphan pod is adopted +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:23:23.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-1072" for this suite. 
+Jun 3 20:23:51.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:23:51.497: INFO: namespace replication-controller-1072 deletion completed in 28.105869691s + +• [SLOW TEST:31.186 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 + should adopt matching pods on creation [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:23:51.498: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: Creating a pod to test downward api env vars +Jun 3 20:23:51.538: INFO: Waiting up to 5m0s for pod "downward-api-658dd99d-be62-4581-8f88-60b51230b021" in namespace "downward-api-4707" to be "success or failure" +Jun 3 20:23:51.541: INFO: Pod "downward-api-658dd99d-be62-4581-8f88-60b51230b021": Phase="Pending", Reason="", readiness=false. Elapsed: 2.537395ms +Jun 3 20:23:53.546: INFO: Pod "downward-api-658dd99d-be62-4581-8f88-60b51230b021": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007558904s +STEP: Saw pod success +Jun 3 20:23:53.546: INFO: Pod "downward-api-658dd99d-be62-4581-8f88-60b51230b021" satisfied condition "success or failure" +Jun 3 20:23:53.548: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod downward-api-658dd99d-be62-4581-8f88-60b51230b021 container dapi-container: +STEP: delete the pod +Jun 3 20:23:53.594: INFO: Waiting for pod downward-api-658dd99d-be62-4581-8f88-60b51230b021 to disappear +Jun 3 20:23:53.597: INFO: Pod downward-api-658dd99d-be62-4581-8f88-60b51230b021 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:23:53.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4707" for this suite. 
+Jun 3 20:23:59.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:23:59.703: INFO: namespace downward-api-4707 deletion completed in 6.101145845s + +• [SLOW TEST:8.205 seconds] +[sig-node] Downward API +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] [sig-network] Services + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:23:59.703: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-5269 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service +STEP: creating service externalsvc in namespace services-5269 +STEP: creating replication controller externalsvc in namespace services-5269 +I0603 20:23:59.776892 25 runners.go:184] Created replication controller with name: externalsvc, namespace: services-5269, replica count: 2 +I0603 20:24:02.827347 25 runners.go:184] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName +Jun 3 20:24:02.855: INFO: Creating new exec pod +Jun 3 20:24:04.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-5269 execpodhzl4b -- /bin/sh -x -c nslookup nodeport-service' +Jun 3 20:24:05.160: INFO: stderr: "+ nslookup nodeport-service\n" +Jun 3 20:24:05.160: INFO: stdout: "Server:\t\t172.19.0.10\nAddress:\t172.19.0.10#53\n\nnodeport-service.services-5269.svc.cluster.local\tcanonical name = externalsvc.services-5269.svc.cluster.local.\nName:\texternalsvc.services-5269.svc.cluster.local\nAddress: 172.19.140.180\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-5269, will wait for the garbage collector to delete the pods +Jun 3 20:24:05.224: INFO: Deleting ReplicationController externalsvc took: 9.986668ms +Jun 3 20:24:05.625: INFO: Terminating ReplicationController externalsvc pods took: 400.342719ms +Jun 3 20:24:14.352: INFO: Cleaning up the NodePort 
to ExternalName test service +[AfterEach] [sig-network] Services + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 +Jun 3 20:24:14.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5269" for this suite. +Jun 3 20:24:20.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 3 20:24:20.482: INFO: namespace services-5269 deletion completed in 6.108215052s +[AfterEach] [sig-network] Services + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95 + +• [SLOW TEST:20.779 seconds] +[sig-network] Services +/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 + should be able to change the type from NodePort to ExternalName [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +[BeforeEach] version v1 + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 +STEP: Creating a kubernetes client +Jun 3 20:24:20.483: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] + /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 +Jun 3 20:24:20.542: INFO: (0) /api/v1/nodes/karbon-certification-ff5a6a-k8s-master-0:10250/proxy/logs/:
+boot.log
+boot.log-20200603.gz
+[... identical kubelet log-directory listing repeated for each of the proxied requests; the remainder of this test's output and the start of the next test are truncated in this capture ...]
+>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:24:26.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-5969'
+Jun  3 20:24:26.991: INFO: stderr: ""
+Jun  3 20:24:26.991: INFO: stdout: "replicationcontroller/redis-master created\n"
+Jun  3 20:24:26.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-5969'
+Jun  3 20:24:27.255: INFO: stderr: ""
+Jun  3 20:24:27.256: INFO: stdout: "service/redis-master created\n"
+STEP: Waiting for Redis master to start.
+Jun  3 20:24:28.260: INFO: Selector matched 1 pods for map[app:redis]
+Jun  3 20:24:28.260: INFO: Found 0 / 1
+Jun  3 20:24:29.260: INFO: Selector matched 1 pods for map[app:redis]
+Jun  3 20:24:29.260: INFO: Found 1 / 1
+Jun  3 20:24:29.260: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+Jun  3 20:24:29.264: INFO: Selector matched 1 pods for map[app:redis]
+Jun  3 20:24:29.264: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Jun  3 20:24:29.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 describe pod redis-master-s64k6 --namespace=kubectl-5969'
+Jun  3 20:24:29.375: INFO: stderr: ""
+Jun  3 20:24:29.375: INFO: stdout: "Name:         redis-master-s64k6\nNamespace:    kubectl-5969\nPriority:     0\nNode:         karbon-certification-ff5a6a-k8s-worker-2/10.45.43.21\nStart Time:   Wed, 03 Jun 2020 20:24:27 +0000\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           172.20.3.35\nIPs:\n  IP:           172.20.3.35\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://a1d27c6599baa964fccba5b7045f38ee2dd78df1ec2fa5c2fda2afb8e12f1c49\n    Image:          docker.io/library/redis:5.0.5-alpine\n    Image ID:       docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 03 Jun 2020 20:24:27 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-htv8j (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-htv8j:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-htv8j\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                                               Message\n  ----    ------     ----       ----                                               -------\n  Normal  Scheduled  <unknown>  default-scheduler                                  Successfully assigned kubectl-5969/redis-master-s64k6 to karbon-certification-ff5a6a-k8s-worker-2\n  Normal  Pulled     2s         kubelet, karbon-certification-ff5a6a-k8s-worker-2  Container image \"docker.io/library/redis:5.0.5-alpine\" already present on machine\n  Normal  Created    2s         kubelet, karbon-certification-ff5a6a-k8s-worker-2  Created container redis-master\n  Normal  Started    2s         kubelet, karbon-certification-ff5a6a-k8s-worker-2  Started container redis-master\n"
+Jun  3 20:24:29.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 describe rc redis-master --namespace=kubectl-5969'
+Jun  3 20:24:29.498: INFO: stderr: ""
+Jun  3 20:24:29.498: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-5969\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        docker.io/library/redis:5.0.5-alpine\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: redis-master-s64k6\n"
+Jun  3 20:24:29.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 describe service redis-master --namespace=kubectl-5969'
+Jun  3 20:24:29.608: INFO: stderr: ""
+Jun  3 20:24:29.608: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-5969\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                172.19.223.223\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         172.20.3.35:6379\nSession Affinity:  None\nEvents:            <none>\n"
+Jun  3 20:24:29.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 describe node karbon-certification-ff5a6a-k8s-master-0'
+Jun  3 20:24:29.737: INFO: stderr: ""
+Jun  3 20:24:29.737: INFO: stdout: "Name:               karbon-certification-ff5a6a-k8s-master-0\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=karbon-certification-ff5a6a-k8s-master-0\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=master\n                    node.kubernetes.io/master=\nAnnotations:        csi.volume.kubernetes.io/nodeid: {\"com.nutanix.csi\":\"karbon-certification-ff5a6a-k8s-master-0\"}\n                    flannel.alpha.coreos.com/backend-data: {\"VtepMAC\":\"5a:30:10:52:55:cf\"}\n                    flannel.alpha.coreos.com/backend-type: vxlan\n                    flannel.alpha.coreos.com/kube-subnet-manager: true\n                    flannel.alpha.coreos.com/public-ip: 10.45.43.24\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Tue, 02 Jun 2020 22:12:14 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 03 Jun 2020 20:23:34 +0000   Tue, 02 Jun 2020 22:12:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 03 Jun 2020 20:23:34 +0000   Tue, 02 Jun 2020 22:12:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 03 Jun 2020 20:23:34 +0000   Tue, 02 Jun 2020 22:12:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 03 Jun 2020 20:23:34 +0000   Tue, 02 Jun 2020 22:12:14 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  10.45.43.24\n  Hostname:    karbon-certification-ff5a6a-k8s-master-0\nCapacity:\n cpu:                4\n ephemeral-storage:  123723328Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             3843996Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  123723328Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             3434396Ki\n pods:               110\nSystem Info:\n Machine ID:                 6273c763d8454b78a0cecacd9243daf5\n System UUID:                6273C763-D845-4B78-A0CE-CACD9243DAF5\n Boot ID:                    ca11ae65-a933-499e-bac6-3a9a429c1d43\n Kernel Version:             3.10.0-1127.el7.x86_64\n OS Image:                   CentOS Linux 7 (Core)\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.8\n Kubelet Version:            v1.16.8\n Kube-Proxy Version:         v1.16.8\nPodCIDR:                     172.20.0.0/24\nPodCIDRs:                    172.20.0.0/24\nNon-terminated Pods:         (7 in total)\n  Namespace                  Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                       ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-apiserver-karbon-certification-ff5a6a-k8s-master-0    300m (7%)     0 (0%)      0 (0%)           0 (0%)         22h\n  kube-system                kube-flannel-ds-hznhg                                      100m (2%)     500m (12%)  50Mi (1%)        50Mi (1%)      22h\n  kube-system                kube-proxy-ds-qrgfl                                        100m (2%)     100m (2%)   70Mi (2%)        70Mi (2%)      22h\n  ntnx-system                csi-node-ntnx-plugin-pdc8c                                 200m (5%)     200m (5%)   400Mi (11%)      400Mi (11%)    18h\n  ntnx-system                fluent-bit-mb264                                           100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      22h\n  ntnx-system                node-exporter-hkj7p                                        112m (2%)     600m (15%)  200Mi (5%)       220Mi (6%)     22h\n  sonobuoy                   sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-58wws    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests     Limits\n  --------           --------     ------\n  cpu                912m (22%)   1500m (37%)\n  memory             770Mi (22%)  790Mi (23%)\n  ephemeral-storage  0 (0%)       0 (0%)\nEvents:              <none>\n"
+Jun  3 20:24:29.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 describe namespace kubectl-5969'
+Jun  3 20:24:29.856: INFO: stderr: ""
+Jun  3 20:24:29.856: INFO: stdout: "Name:         kubectl-5969\nLabels:       e2e-framework=kubectl\n              e2e-run=f3f83a2e-5ec0-40e3-bdab-4bfb0d6ccf94\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:24:29.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-5969" for this suite.
+Jun  3 20:24:41.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:24:41.982: INFO: namespace kubectl-5969 deletion completed in 12.119999035s
+
+• [SLOW TEST:15.264 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl describe
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1000
+    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
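+
+For reference, the ReplicationController exercised above can be reconstructed from the `kubectl describe` output; a minimal sketch (field values taken from the log, manifest shape assumed):
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: redis-master
+spec:
+  replicas: 1
+  selector:
+    app: redis
+    role: master
+  template:
+    metadata:
+      labels:
+        app: redis
+        role: master
+    spec:
+      containers:
+      - name: redis-master
+        image: docker.io/library/redis:5.0.5-alpine
+        ports:
+        - name: redis-server   # matched by the Service's named targetPort
+          containerPort: 6379
+```
+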
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:24:41.983: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0666 on tmpfs
+Jun  3 20:24:42.030: INFO: Waiting up to 5m0s for pod "pod-452d7f1a-148b-48a7-860c-6c706cc1aff2" in namespace "emptydir-4193" to be "success or failure"
+Jun  3 20:24:42.037: INFO: Pod "pod-452d7f1a-148b-48a7-860c-6c706cc1aff2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390685ms
+Jun  3 20:24:44.042: INFO: Pod "pod-452d7f1a-148b-48a7-860c-6c706cc1aff2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011527536s
+Jun  3 20:24:46.047: INFO: Pod "pod-452d7f1a-148b-48a7-860c-6c706cc1aff2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016219517s
+STEP: Saw pod success
+Jun  3 20:24:46.047: INFO: Pod "pod-452d7f1a-148b-48a7-860c-6c706cc1aff2" satisfied condition "success or failure"
+Jun  3 20:24:46.050: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-452d7f1a-148b-48a7-860c-6c706cc1aff2 container test-container: <nil>
+STEP: delete the pod
+Jun  3 20:24:46.074: INFO: Waiting for pod pod-452d7f1a-148b-48a7-860c-6c706cc1aff2 to disappear
+Jun  3 20:24:46.077: INFO: Pod pod-452d7f1a-148b-48a7-860c-6c706cc1aff2 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:24:46.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-4193" for this suite.
+Jun  3 20:24:52.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:24:52.184: INFO: namespace emptydir-4193 deletion completed in 6.103412163s
+
+• [SLOW TEST:10.201 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
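+
+A minimal sketch of the kind of pod this test creates: a memory-backed (tmpfs) emptyDir mounted into a container running as a non-root UID, which then verifies the 0666 file mode (names, image, and paths are illustrative, not taken from the log):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-tmpfs-demo      # illustrative name
+spec:
+  securityContext:
+    runAsUser: 1001              # non-root, per the test variant
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["sh", "-c", "stat -c '%a' /mnt/volume"]
+    volumeMounts:
+    - name: scratch
+      mountPath: /mnt/volume
+  volumes:
+  - name: scratch
+    emptyDir:
+      medium: Memory             # tmpfs-backed emptyDir
+  restartPolicy: Never
+```
+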
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should mutate custom resource with pruning [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:24:52.185: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 20:24:53.163: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jun  3 20:24:55.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812693, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812693, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812693, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812693, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 20:24:58.190: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should mutate custom resource with pruning [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:24:58.194: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8959-crds.webhook.example.com via the AdmissionRegistration API
+STEP: Creating a custom resource that should be mutated by the webhook
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:24:59.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-4472" for this suite.
+Jun  3 20:25:05.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:25:05.499: INFO: namespace webhook-4472 deletion completed in 6.112385075s
+STEP: Destroying namespace "webhook-4472-markers" for this suite.
+Jun  3 20:25:11.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:25:11.596: INFO: namespace webhook-4472-markers deletion completed in 6.096259043s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:19.425 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should mutate custom resource with pruning [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
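+
+The registration step above corresponds roughly to creating a MutatingWebhookConfiguration that routes custom-resource CREATEs through the in-cluster webhook service; a sketch (the webhook path and failure policy are assumptions, caBundle omitted):
+
+```yaml
+apiVersion: admissionregistration.k8s.io/v1beta1
+kind: MutatingWebhookConfiguration
+metadata:
+  name: mutate-custom-resource           # illustrative name
+webhooks:
+- name: mutate-crd.webhook.example.com
+  clientConfig:
+    service:
+      namespace: webhook-4472            # test namespace from the log
+      name: e2e-test-webhook
+      path: /mutating-custom-resource    # assumed path
+    # caBundle: <base64 CA that signs the webhook's serving cert>
+  rules:
+  - apiGroups: ["webhook.example.com"]
+    apiVersions: ["v1"]
+    operations: ["CREATE"]
+    resources: ["e2e-test-webhook-8959-crds"]
+  failurePolicy: Fail                    # assumed
+```
+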
+SSSSSSSSSSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:25:11.610: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Performing setup for networking test in namespace pod-network-test-8876
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun  3 20:25:11.642: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun  3 20:25:37.777: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.20.2.32:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:25:37.777: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:25:37.926: INFO: Found all expected endpoints: [netserver-0]
+Jun  3 20:25:37.930: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.20.4.12:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:25:37.930: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:25:38.070: INFO: Found all expected endpoints: [netserver-1]
+Jun  3 20:25:38.075: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.20.3.38:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:25:38.075: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:25:38.218: INFO: Found all expected endpoints: [netserver-2]
+Jun  3 20:25:38.222: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.20.0.8:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:25:38.222: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:25:38.371: INFO: Found all expected endpoints: [netserver-3]
+Jun  3 20:25:38.375: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://172.20.1.8:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8876 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:25:38.375: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:25:38.507: INFO: Found all expected endpoints: [netserver-4]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:25:38.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-8876" for this suite.
+Jun  3 20:25:50.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:25:50.639: INFO: namespace pod-network-test-8876 deletion completed in 12.12558293s
+
+• [SLOW TEST:39.030 seconds]
+[sig-network] Networking
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
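+
+Each netserver-N endpoint above is a pod running a small HTTP server that returns its hostname on port 8080, which the host-test-container-pod then curls at /hostName; a sketch (image and args are assumptions based on the e2e agnhost helper, not shown in this log):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: netserver-0                                      # one per node
+  labels:
+    selector: pod-network-test                           # illustrative label
+spec:
+  containers:
+  - name: webserver
+    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.6 # assumed image
+    args: ["netexec", "--http-port=8080"]                # serves /hostName
+    ports:
+    - containerPort: 8080
+```
+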
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:25:50.639: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name s-test-opt-del-db30708c-c77e-449a-abb0-b6bb2eb537d9
+STEP: Creating secret with name s-test-opt-upd-e51132b4-d38b-4fe9-be4d-611e0150b65d
+STEP: Creating the pod
+STEP: Deleting secret s-test-opt-del-db30708c-c77e-449a-abb0-b6bb2eb537d9
+STEP: Updating secret s-test-opt-upd-e51132b4-d38b-4fe9-be4d-611e0150b65d
+STEP: Creating secret with name s-test-opt-create-e927d094-96f2-4b60-a8f9-9a1e30985228
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:25:54.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-8431" for this suite.
+Jun  3 20:26:06.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:26:06.912: INFO: namespace projected-8431 deletion completed in 12.11889133s
+
+• [SLOW TEST:16.272 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
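+
+The projected volume above references secrets marked optional, so the pod starts even when one is absent and the kubelet refreshes the mounted files as secrets are deleted, updated, or created; a minimal sketch (pod and mount names illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-secret-demo        # illustrative name
+spec:
+  containers:
+  - name: projected-secret-volume-test
+    image: busybox
+    command: ["sh", "-c", "sleep 3600"]
+    volumeMounts:
+    - name: projected-secrets
+      mountPath: /etc/projected
+  volumes:
+  - name: projected-secrets
+    projected:
+      sources:
+      - secret:
+          name: s-test-opt-del       # deleted mid-test
+          optional: true
+      - secret:
+          name: s-test-opt-upd       # updated mid-test
+          optional: true
+      - secret:
+          name: s-test-opt-create    # created only after the pod starts
+          optional: true
+```
+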
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should receive events on concurrent watches in same order [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:26:06.912: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should receive events on concurrent watches in same order [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: starting a background goroutine to produce watch events
+STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:26:12.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-3338" for this suite.
+Jun  3 20:26:18.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:26:18.465: INFO: namespace watch-3338 deletion completed in 6.191911878s
+
+• [SLOW TEST:11.553 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should receive events on concurrent watches in same order [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:26:18.465: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name projected-configmap-test-volume-map-38f04ee7-34f4-497d-8769-a2e09482247f
+STEP: Creating a pod to test consume configMaps
+Jun  3 20:26:18.518: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a85ec5e8-35fa-4dbd-8e35-98aae7089d88" in namespace "projected-2372" to be "success or failure"
+Jun  3 20:26:18.525: INFO: Pod "pod-projected-configmaps-a85ec5e8-35fa-4dbd-8e35-98aae7089d88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.928724ms
+Jun  3 20:26:20.529: INFO: Pod "pod-projected-configmaps-a85ec5e8-35fa-4dbd-8e35-98aae7089d88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011241943s
+STEP: Saw pod success
+Jun  3 20:26:20.529: INFO: Pod "pod-projected-configmaps-a85ec5e8-35fa-4dbd-8e35-98aae7089d88" satisfied condition "success or failure"
+Jun  3 20:26:20.532: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-projected-configmaps-a85ec5e8-35fa-4dbd-8e35-98aae7089d88 container projected-configmap-volume-test: <nil>
+STEP: delete the pod
+Jun  3 20:26:20.556: INFO: Waiting for pod pod-projected-configmaps-a85ec5e8-35fa-4dbd-8e35-98aae7089d88 to disappear
+Jun  3 20:26:20.559: INFO: Pod pod-projected-configmaps-a85ec5e8-35fa-4dbd-8e35-98aae7089d88 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:26:20.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-2372" for this suite.
+Jun  3 20:26:26.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:26:26.666: INFO: namespace projected-2372 deletion completed in 6.103220892s
+
+• [SLOW TEST:8.201 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
+  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
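+
+Here the configMap is projected with an explicit items mapping and a per-item file mode; a minimal sketch of that shape (key, path, and mode values are illustrative, not taken from the log):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-configmap-demo     # illustrative name
+spec:
+  containers:
+  - name: projected-configmap-volume-test
+    image: busybox
+    command: ["sh", "-c", "cat /etc/projected/path/to/data"]
+    volumeMounts:
+    - name: config
+      mountPath: /etc/projected
+  volumes:
+  - name: config
+    projected:
+      sources:
+      - configMap:
+          name: projected-configmap-test-volume-map
+          items:
+          - key: data-1              # illustrative key
+            path: path/to/data       # the "mapping" under test
+            mode: 0400               # the per-item mode under test
+  restartPolicy: Never
+```
+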
+S
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:26:26.666: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0644 on tmpfs
+Jun  3 20:26:26.706: INFO: Waiting up to 5m0s for pod "pod-162315b4-76ce-4072-a2c3-b0675be12bb7" in namespace "emptydir-9917" to be "success or failure"
+Jun  3 20:26:26.712: INFO: Pod "pod-162315b4-76ce-4072-a2c3-b0675be12bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.645124ms
+Jun  3 20:26:28.717: INFO: Pod "pod-162315b4-76ce-4072-a2c3-b0675be12bb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010352514s
+STEP: Saw pod success
+Jun  3 20:26:28.717: INFO: Pod "pod-162315b4-76ce-4072-a2c3-b0675be12bb7" satisfied condition "success or failure"
+Jun  3 20:26:28.719: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-162315b4-76ce-4072-a2c3-b0675be12bb7 container test-container: <nil>
+STEP: delete the pod
+Jun  3 20:26:28.739: INFO: Waiting for pod pod-162315b4-76ce-4072-a2c3-b0675be12bb7 to disappear
+Jun  3 20:26:28.742: INFO: Pod pod-162315b4-76ce-4072-a2c3-b0675be12bb7 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:26:28.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-9917" for this suite.
+Jun  3 20:26:34.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:26:34.847: INFO: namespace emptydir-9917 deletion completed in 6.101221886s
+
+• [SLOW TEST:8.181 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Docker Containers 
+  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:26:34.847: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test override all
+Jun  3 20:26:34.890: INFO: Waiting up to 5m0s for pod "client-containers-2ff782de-3183-41ee-9786-475a757ca649" in namespace "containers-6961" to be "success or failure"
+Jun  3 20:26:34.893: INFO: Pod "client-containers-2ff782de-3183-41ee-9786-475a757ca649": Phase="Pending", Reason="", readiness=false. Elapsed: 3.750658ms
+Jun  3 20:26:36.898: INFO: Pod "client-containers-2ff782de-3183-41ee-9786-475a757ca649": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008610858s
+STEP: Saw pod success
+Jun  3 20:26:36.898: INFO: Pod "client-containers-2ff782de-3183-41ee-9786-475a757ca649" satisfied condition "success or failure"
+Jun  3 20:26:36.901: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod client-containers-2ff782de-3183-41ee-9786-475a757ca649 container test-container: <nil>
+STEP: delete the pod
+Jun  3 20:26:36.936: INFO: Waiting for pod client-containers-2ff782de-3183-41ee-9786-475a757ca649 to disappear
+Jun  3 20:26:36.939: INFO: Pod client-containers-2ff782de-3183-41ee-9786-475a757ca649 no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:26:36.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "containers-6961" for this suite.
+Jun  3 20:26:42.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:26:43.045: INFO: namespace containers-6961 deletion completed in 6.101920986s
+
+• [SLOW TEST:8.197 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
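+
+"Override all" above means the pod spec sets both command (replacing the image ENTRYPOINT) and args (replacing the image CMD); a minimal sketch (names and values illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: client-containers-demo       # illustrative name
+spec:
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["echo"]                # replaces the image ENTRYPOINT
+    args: ["override", "arguments"]  # replaces the image CMD
+  restartPolicy: Never
+```
+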
+SSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:26:43.045: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name secret-test-f13f93b4-d406-4bf9-971d-9bd49683440f
+STEP: Creating a pod to test consume secrets
+Jun  3 20:26:43.086: INFO: Waiting up to 5m0s for pod "pod-secrets-93e57ab5-c574-43f8-b4cd-9f2aef9a5061" in namespace "secrets-8498" to be "success or failure"
+Jun  3 20:26:43.091: INFO: Pod "pod-secrets-93e57ab5-c574-43f8-b4cd-9f2aef9a5061": Phase="Pending", Reason="", readiness=false. Elapsed: 5.459709ms
+Jun  3 20:26:45.095: INFO: Pod "pod-secrets-93e57ab5-c574-43f8-b4cd-9f2aef9a5061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009428212s
+STEP: Saw pod success
+Jun  3 20:26:45.095: INFO: Pod "pod-secrets-93e57ab5-c574-43f8-b4cd-9f2aef9a5061" satisfied condition "success or failure"
+Jun  3 20:26:45.098: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-secrets-93e57ab5-c574-43f8-b4cd-9f2aef9a5061 container secret-volume-test: <nil>
+STEP: delete the pod
+Jun  3 20:26:45.122: INFO: Waiting for pod pod-secrets-93e57ab5-c574-43f8-b4cd-9f2aef9a5061 to disappear
+Jun  3 20:26:45.126: INFO: Pod pod-secrets-93e57ab5-c574-43f8-b4cd-9f2aef9a5061 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:26:45.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-8498" for this suite.
+Jun  3 20:26:51.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:26:51.233: INFO: namespace secrets-8498 deletion completed in 6.100182291s
+
+• [SLOW TEST:8.188 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
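+
+defaultMode sets the permission bits applied to every file projected from the secret into the volume; a minimal sketch (secret name and mode illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-secrets-demo             # illustrative name
+spec:
+  containers:
+  - name: secret-volume-test
+    image: busybox
+    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/*"]
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+      readOnly: true
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: secret-test        # illustrative name
+      defaultMode: 0400              # applied to each projected file
+  restartPolicy: Never
+```
+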
+SSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should mutate custom resource with different stored version [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:26:51.233: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 20:26:51.910: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jun  3 20:26:53.921: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812811, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812811, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812811, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726812811, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 20:26:56.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should mutate custom resource with different stored version [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:26:56.945: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3830-crds.webhook.example.com via the AdmissionRegistration API
+STEP: Creating a custom resource while v1 is storage version
+STEP: Patching Custom Resource Definition to set v2 as storage
+STEP: Patching the custom resource while v2 is storage version
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:26:58.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-5871" for this suite.
+Jun  3 20:27:04.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:27:04.318: INFO: namespace webhook-5871 deletion completed in 6.134767954s
+STEP: Destroying namespace "webhook-5871-markers" for this suite.
+Jun  3 20:27:10.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:27:10.426: INFO: namespace webhook-5871-markers deletion completed in 6.107067435s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:19.208 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should mutate custom resource with different stored version [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
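+
+The storage-version steps above correspond to a CRD that serves two versions and then flips which one is persisted; a sketch of the relevant fields (group and names illustrative, apiextensions.k8s.io/v1beta1 as served in v1.16):
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: examples.webhook.example.com
+spec:
+  group: webhook.example.com
+  scope: Namespaced
+  names:
+    plural: examples
+    singular: example
+    kind: Example
+  versions:
+  - name: v1
+    served: true
+    storage: false   # was true before the patch step in the test
+  - name: v2
+    served: true
+    storage: true    # patched to become the storage version
+```
+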
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should invoke init containers on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:27:10.442: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
+[It] should invoke init containers on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating the pod
+Jun  3 20:27:10.478: INFO: PodSpec: initContainers in spec.initContainers
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:27:15.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-2612" for this suite.
+Jun  3 20:27:27.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:27:27.565: INFO: namespace init-container-2612 deletion completed in 12.104729128s
+
+• [SLOW TEST:17.123 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should invoke init containers on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
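+
+initContainers run to completion, one at a time and in order, before the app containers start; with restartPolicy: Always the pod then keeps its main container running. A minimal sketch (names and images illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: init-demo                    # illustrative name
+spec:
+  restartPolicy: Always
+  initContainers:
+  - name: init-1
+    image: busybox
+    command: ["sh", "-c", "echo first init container"]
+  - name: init-2
+    image: busybox
+    command: ["sh", "-c", "echo second init container"]
+  containers:
+  - name: run-forever
+    image: busybox
+    command: ["sh", "-c", "sleep 3600"]
+```
+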
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
+  should be possible to delete [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:27:27.565: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[BeforeEach] when scheduling a busybox command that always fails in a pod
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
+[It] should be possible to delete [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:27:27.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-8211" for this suite.
+Jun  3 20:27:33.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:27:33.741: INFO: namespace kubelet-test-8211 deletion completed in 6.107386788s
+
+• [SLOW TEST:6.176 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  when scheduling a busybox command that always fails in a pod
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
+    should be possible to delete [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSS
+------------------------------
+[sig-api-machinery] Secrets 
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:27:33.741: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating secret secrets-7232/secret-test-5245fd60-88a7-44c0-9b46-8304ebd5369b
+STEP: Creating a pod to test consume secrets
+Jun  3 20:27:33.789: INFO: Waiting up to 5m0s for pod "pod-configmaps-3c3fa062-002d-4505-a1e7-bb60f24efcca" in namespace "secrets-7232" to be "success or failure"
+Jun  3 20:27:33.794: INFO: Pod "pod-configmaps-3c3fa062-002d-4505-a1e7-bb60f24efcca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.880312ms
+Jun  3 20:27:35.799: INFO: Pod "pod-configmaps-3c3fa062-002d-4505-a1e7-bb60f24efcca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009840225s
+Jun  3 20:27:37.803: INFO: Pod "pod-configmaps-3c3fa062-002d-4505-a1e7-bb60f24efcca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013849027s
+STEP: Saw pod success
+Jun  3 20:27:37.803: INFO: Pod "pod-configmaps-3c3fa062-002d-4505-a1e7-bb60f24efcca" satisfied condition "success or failure"
+Jun  3 20:27:37.809: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-configmaps-3c3fa062-002d-4505-a1e7-bb60f24efcca container env-test: 
+STEP: delete the pod
+Jun  3 20:27:37.836: INFO: Waiting for pod pod-configmaps-3c3fa062-002d-4505-a1e7-bb60f24efcca to disappear
+Jun  3 20:27:37.839: INFO: Pod pod-configmaps-3c3fa062-002d-4505-a1e7-bb60f24efcca no longer exists
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:27:37.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-7232" for this suite.
+Jun  3 20:27:43.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:27:43.953: INFO: namespace secrets-7232 deletion completed in 6.110087194s
+
+• [SLOW TEST:10.212 seconds]
+[sig-api-machinery] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
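+
+This test consumes a Secret key as a container environment variable and checks the value inside the pod. A minimal sketch of the pattern, assuming a hypothetical Secret and key name:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: demo-secret              # hypothetical
+stringData:
+  SECRET_DATA: "value-1"
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-env-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: env-test
+    image: busybox:1.29
+    command: ["sh", "-c", "env | grep SECRET_DATA"]
+    env:
+    - name: SECRET_DATA
+      valueFrom:
+        secretKeyRef:            # pulls one key out of the Secret
+          name: demo-secret
+          key: SECRET_DATA
+```
+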
+SSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:27:43.953: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name configmap-test-volume-f27caf03-aeec-45d2-b914-87b14fb0989f
+STEP: Creating a pod to test consume configMaps
+Jun  3 20:27:44.007: INFO: Waiting up to 5m0s for pod "pod-configmaps-a377a614-b1b5-454b-8e7e-410ebd03afc5" in namespace "configmap-1122" to be "success or failure"
+Jun  3 20:27:44.010: INFO: Pod "pod-configmaps-a377a614-b1b5-454b-8e7e-410ebd03afc5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.300735ms
+Jun  3 20:27:46.016: INFO: Pod "pod-configmaps-a377a614-b1b5-454b-8e7e-410ebd03afc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008346403s
+Jun  3 20:27:48.020: INFO: Pod "pod-configmaps-a377a614-b1b5-454b-8e7e-410ebd03afc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012480984s
+STEP: Saw pod success
+Jun  3 20:27:48.020: INFO: Pod "pod-configmaps-a377a614-b1b5-454b-8e7e-410ebd03afc5" satisfied condition "success or failure"
+Jun  3 20:27:48.023: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-configmaps-a377a614-b1b5-454b-8e7e-410ebd03afc5 container configmap-volume-test: 
+STEP: delete the pod
+Jun  3 20:27:48.045: INFO: Waiting for pod pod-configmaps-a377a614-b1b5-454b-8e7e-410ebd03afc5 to disappear
+Jun  3 20:27:48.047: INFO: Pod pod-configmaps-a377a614-b1b5-454b-8e7e-410ebd03afc5 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:27:48.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-1122" for this suite.
+Jun  3 20:27:54.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:27:54.157: INFO: namespace configmap-1122 deletion completed in 6.10551545s
+
+• [SLOW TEST:10.204 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
+  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
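+
+Here a ConfigMap is mounted as a volume and read by a container running as a non-root user (hence the [LinuxOnly] tag). A minimal sketch, assuming a hypothetical ConfigMap named demo-configmap with a key `data-1`:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-nonroot-demo   # hypothetical
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1000              # non-root UID
+    fsGroup: 1000
+  containers:
+  - name: configmap-volume-test
+    image: busybox:1.29
+    command: ["sh", "-c", "cat /etc/config/data-1"]
+    volumeMounts:
+    - name: cfg
+      mountPath: /etc/config
+  volumes:
+  - name: cfg
+    configMap:
+      name: demo-configmap
+```
+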
+SS
+------------------------------
+[k8s.io] [sig-node] PreStop 
+  should call prestop when killing a pod  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] [sig-node] PreStop
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:27:54.157: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename prestop
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] [sig-node] PreStop
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:173
+[It] should call prestop when killing a pod  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating server pod server in namespace prestop-915
+STEP: Waiting for pods to come up.
+STEP: Creating tester pod tester in namespace prestop-915
+STEP: Deleting pre-stop pod
+Jun  3 20:28:05.243: INFO: Saw: {
+	"Hostname": "server",
+	"Sent": null,
+	"Received": {
+		"prestop": 1
+	},
+	"Errors": null,
+	"Log": [
+		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
+		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
+	],
+	"StillContactingPeers": true
+}
+STEP: Deleting the server pod
+[AfterEach] [k8s.io] [sig-node] PreStop
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:28:05.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "prestop-915" for this suite.
+Jun  3 20:28:49.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:28:49.375: INFO: namespace prestop-915 deletion completed in 44.117227422s
+
+• [SLOW TEST:55.218 seconds]
+[k8s.io] [sig-node] PreStop
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should call prestop when killing a pod  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
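+
+The PreStop test wires a tester pod's preStop hook to an HTTP endpoint on a peer "server" pod and asserts the call was received before the tester died (the `"prestop": 1` counter in the JSON above). A minimal sketch of a pod with such a hook; the target URL is illustrative:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: prestop-demo             # hypothetical
+spec:
+  containers:
+  - name: tester
+    image: busybox:1.29
+    command: ["sh", "-c", "sleep 3600"]
+    lifecycle:
+      preStop:                   # runs before SIGTERM is delivered
+        exec:
+          command: ["sh", "-c", "wget -qO- http://server.default.svc/prestop || true"]
+```
+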
+SSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:28:49.375: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
+STEP: Setting up data
+[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating pod pod-subpath-test-configmap-ch8t
+STEP: Creating a pod to test atomic-volume-subpath
+Jun  3 20:28:49.434: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ch8t" in namespace "subpath-5294" to be "success or failure"
+Jun  3 20:28:49.438: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Pending", Reason="", readiness=false. Elapsed: 3.999795ms
+Jun  3 20:28:51.442: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00814734s
+Jun  3 20:28:53.447: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Running", Reason="", readiness=true. Elapsed: 4.012694825s
+Jun  3 20:28:55.451: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Running", Reason="", readiness=true. Elapsed: 6.016814939s
+Jun  3 20:28:57.456: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Running", Reason="", readiness=true. Elapsed: 8.021415825s
+Jun  3 20:28:59.461: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Running", Reason="", readiness=true. Elapsed: 10.026347998s
+Jun  3 20:29:01.466: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Running", Reason="", readiness=true. Elapsed: 12.031298938s
+Jun  3 20:29:03.470: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Running", Reason="", readiness=true. Elapsed: 14.036209877s
+Jun  3 20:29:05.475: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Running", Reason="", readiness=true. Elapsed: 16.041000229s
+Jun  3 20:29:07.479: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Running", Reason="", readiness=true. Elapsed: 18.045160095s
+Jun  3 20:29:09.484: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Running", Reason="", readiness=true. Elapsed: 20.050038651s
+Jun  3 20:29:11.488: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Running", Reason="", readiness=true. Elapsed: 22.054215515s
+Jun  3 20:29:13.493: INFO: Pod "pod-subpath-test-configmap-ch8t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.058654923s
+STEP: Saw pod success
+Jun  3 20:29:13.493: INFO: Pod "pod-subpath-test-configmap-ch8t" satisfied condition "success or failure"
+Jun  3 20:29:13.496: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-subpath-test-configmap-ch8t container test-container-subpath-configmap-ch8t: 
+STEP: delete the pod
+Jun  3 20:29:13.529: INFO: Waiting for pod pod-subpath-test-configmap-ch8t to disappear
+Jun  3 20:29:13.532: INFO: Pod pod-subpath-test-configmap-ch8t no longer exists
+STEP: Deleting pod pod-subpath-test-configmap-ch8t
+Jun  3 20:29:13.532: INFO: Deleting pod "pod-subpath-test-configmap-ch8t" in namespace "subpath-5294"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:29:13.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-5294" for this suite.
+Jun  3 20:29:19.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:29:19.642: INFO: namespace subpath-5294 deletion completed in 6.103085521s
+
+• [SLOW TEST:30.266 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
+  Atomic writer volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
+    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
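+
+This subpath test mounts a single ConfigMap key over a file path that already exists in the container image, via `subPath`. A minimal sketch under the same assumptions as before (hypothetical ConfigMap with a key named `hostname`):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: subpath-demo             # hypothetical
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox:1.29
+    command: ["sh", "-c", "cat /etc/hostname"]
+    volumeMounts:
+    - name: cfg
+      mountPath: /etc/hostname   # an existing file inside the container
+      subPath: hostname          # mounts just this key over that file
+  volumes:
+  - name: cfg
+    configMap:
+      name: demo-configmap
+```
+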
+SSS
+------------------------------
+[sig-auth] ServiceAccounts 
+  should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:29:19.642: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename svcaccounts
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: getting the auto-created API token
+STEP: reading a file in the container
+Jun  3 20:29:22.236: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2920 pod-service-account-05e55be8-0908-4bab-b897-3d176af9b086 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
+STEP: reading a file in the container
+Jun  3 20:29:22.482: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2920 pod-service-account-05e55be8-0908-4bab-b897-3d176af9b086 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
+STEP: reading a file in the container
+Jun  3 20:29:22.721: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2920 pod-service-account-05e55be8-0908-4bab-b897-3d176af9b086 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
+[AfterEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:29:22.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "svcaccounts-2920" for this suite.
+Jun  3 20:29:28.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:29:29.087: INFO: namespace svcaccounts-2920 deletion completed in 6.117967272s
+
+• [SLOW TEST:9.445 seconds]
+[sig-auth] ServiceAccounts
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
+  should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
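+
+The ServiceAccounts test relies on the automatic projection of the service-account token, CA bundle, and namespace into every pod, then reads them back with `kubectl exec` (the three commands logged above). A minimal sketch of a pod that gets this mount by default; names are illustrative:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sa-demo                  # hypothetical
+spec:
+  serviceAccountName: default
+  containers:
+  - name: test
+    image: busybox:1.29
+    command: ["sh", "-c", "sleep 3600"]
+# The credentials are then readable in-pod, e.g.:
+#   kubectl exec sa-demo -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
+```
+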
+SSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Variable Expansion 
+  should allow substituting values in a container's command [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:29:29.087: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename var-expansion
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test substitution in container's command
+Jun  3 20:29:29.129: INFO: Waiting up to 5m0s for pod "var-expansion-6f03ab9c-870c-45ec-9b59-d5b0a0f226a1" in namespace "var-expansion-1572" to be "success or failure"
+Jun  3 20:29:29.134: INFO: Pod "var-expansion-6f03ab9c-870c-45ec-9b59-d5b0a0f226a1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.185924ms
+Jun  3 20:29:31.138: INFO: Pod "var-expansion-6f03ab9c-870c-45ec-9b59-d5b0a0f226a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009307105s
+STEP: Saw pod success
+Jun  3 20:29:31.139: INFO: Pod "var-expansion-6f03ab9c-870c-45ec-9b59-d5b0a0f226a1" satisfied condition "success or failure"
+Jun  3 20:29:31.142: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod var-expansion-6f03ab9c-870c-45ec-9b59-d5b0a0f226a1 container dapi-container: 
+STEP: delete the pod
+Jun  3 20:29:31.168: INFO: Waiting for pod var-expansion-6f03ab9c-870c-45ec-9b59-d5b0a0f226a1 to disappear
+Jun  3 20:29:31.171: INFO: Pod var-expansion-6f03ab9c-870c-45ec-9b59-d5b0a0f226a1 no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:29:31.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "var-expansion-1572" for this suite.
+Jun  3 20:29:37.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:29:37.281: INFO: namespace var-expansion-1572 deletion completed in 6.105269191s
+
+• [SLOW TEST:8.193 seconds]
+[k8s.io] Variable Expansion
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should allow substituting values in a container's command [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
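+
+Variable expansion means the kubelet substitutes `$(VAR)` references in a container's `command`/`args` using env vars defined for that container. A minimal sketch with an illustrative variable name:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: var-expansion-demo       # hypothetical
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox:1.29
+    env:
+    - name: MESSAGE
+      value: "hello from the environment"
+    command: ["sh", "-c", "echo $(MESSAGE)"]   # $(MESSAGE) expanded by the kubelet
+```
+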
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:29:37.281: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating projection with secret that has name projected-secret-test-map-ab7d2c98-688d-4d7d-9a87-5fb602dcd51c
+STEP: Creating a pod to test consume secrets
+Jun  3 20:29:37.324: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-640bbfa1-b8d1-412f-9fbc-d56067c5af21" in namespace "projected-8636" to be "success or failure"
+Jun  3 20:29:37.332: INFO: Pod "pod-projected-secrets-640bbfa1-b8d1-412f-9fbc-d56067c5af21": Phase="Pending", Reason="", readiness=false. Elapsed: 7.316018ms
+Jun  3 20:29:39.337: INFO: Pod "pod-projected-secrets-640bbfa1-b8d1-412f-9fbc-d56067c5af21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012419984s
+STEP: Saw pod success
+Jun  3 20:29:39.337: INFO: Pod "pod-projected-secrets-640bbfa1-b8d1-412f-9fbc-d56067c5af21" satisfied condition "success or failure"
+Jun  3 20:29:39.340: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-projected-secrets-640bbfa1-b8d1-412f-9fbc-d56067c5af21 container projected-secret-volume-test: 
+STEP: delete the pod
+Jun  3 20:29:39.364: INFO: Waiting for pod pod-projected-secrets-640bbfa1-b8d1-412f-9fbc-d56067c5af21 to disappear
+Jun  3 20:29:39.367: INFO: Pod pod-projected-secrets-640bbfa1-b8d1-412f-9fbc-d56067c5af21 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:29:39.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-8636" for this suite.
+Jun  3 20:29:45.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:29:45.472: INFO: namespace projected-8636 deletion completed in 6.101094145s
+
+• [SLOW TEST:8.191 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
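+
+A projected secret volume with "mappings and Item Mode" remaps a Secret key to a new file path and sets a per-item file mode. A minimal sketch, assuming a hypothetical Secret with key `data-1`:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-secret-demo    # hypothetical
+spec:
+  restartPolicy: Never
+  containers:
+  - name: projected-secret-volume-test
+    image: busybox:1.29
+    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path"]
+    volumeMounts:
+    - name: proj
+      mountPath: /etc/projected
+      readOnly: true
+  volumes:
+  - name: proj
+    projected:
+      sources:
+      - secret:
+          name: demo-secret
+          items:
+          - key: data-1
+            path: new-path       # key remapped to a new file name
+            mode: 0400           # per-item file mode ([LinuxOnly])
+```
+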
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute poststart http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:29:45.472: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute poststart http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the pod with lifecycle hook
+STEP: check poststart hook
+STEP: delete the pod with lifecycle hook
+Jun  3 20:29:49.557: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun  3 20:29:49.560: INFO: Pod pod-with-poststart-http-hook still exists
+Jun  3 20:29:51.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun  3 20:29:51.564: INFO: Pod pod-with-poststart-http-hook still exists
+Jun  3 20:29:53.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun  3 20:29:53.566: INFO: Pod pod-with-poststart-http-hook still exists
+Jun  3 20:29:55.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun  3 20:29:55.566: INFO: Pod pod-with-poststart-http-hook still exists
+Jun  3 20:29:57.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun  3 20:29:57.565: INFO: Pod pod-with-poststart-http-hook still exists
+Jun  3 20:29:59.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun  3 20:29:59.567: INFO: Pod pod-with-poststart-http-hook still exists
+Jun  3 20:30:01.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun  3 20:30:01.565: INFO: Pod pod-with-poststart-http-hook still exists
+Jun  3 20:30:03.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun  3 20:30:03.565: INFO: Pod pod-with-poststart-http-hook still exists
+Jun  3 20:30:05.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun  3 20:30:05.565: INFO: Pod pod-with-poststart-http-hook no longer exists
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:30:05.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-7405" for this suite.
+Jun  3 20:30:17.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:30:17.686: INFO: namespace container-lifecycle-hook-7405 deletion completed in 12.116000978s
+
+• [SLOW TEST:32.213 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
+    should execute poststart http hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
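+
+A postStart HTTP hook fires an HTTP GET right after the container starts; the suite points it at a helper handler pod and checks the request arrived. A minimal sketch with an illustrative target host, port, and path:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: poststart-http-demo      # hypothetical
+spec:
+  containers:
+  - name: pod-with-poststart-http-hook
+    image: busybox:1.29
+    command: ["sh", "-c", "sleep 3600"]
+    lifecycle:
+      postStart:                 # fired right after the container starts
+        httpGet:
+          host: 10.0.0.10        # illustrative; the suite targets its handler pod's IP
+          port: 8080
+          path: /echo?msg=poststart
+```
+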
+S
+------------------------------
+[k8s.io] Pods 
+  should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:30:17.686: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
+[It] should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating the pod
+STEP: setting up watch
+STEP: submitting the pod to kubernetes
+Jun  3 20:30:17.731: INFO: observed the pod list
+STEP: verifying the pod is in kubernetes
+STEP: verifying pod creation was observed
+STEP: deleting the pod gracefully
+STEP: verifying the kubelet observed the termination notice
+STEP: verifying pod deletion was observed
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:30:34.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-3209" for this suite.
+Jun  3 20:30:40.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:30:40.398: INFO: namespace pods-3209 deletion completed in 6.10612391s
+
+• [SLOW TEST:22.712 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
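+
+This Pods test submits a pod, watches for its creation, then deletes it gracefully and asserts the kubelet observed the termination notice before the deletion event. A rough equivalent with kubectl; the pod name is illustrative:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-submit-remove-demo   # hypothetical
+spec:
+  containers:
+  - name: main
+    image: busybox:1.29
+    command: ["sh", "-c", "sleep 3600"]
+# Observe creation and graceful deletion, roughly as the test does:
+#   kubectl get pods -w &
+#   kubectl apply -f pod.yaml
+#   kubectl delete pod pod-submit-remove-demo --grace-period=30
+```
+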
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Docker Containers 
+  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:30:40.399: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test override arguments
+Jun  3 20:30:40.439: INFO: Waiting up to 5m0s for pod "client-containers-5d719ae1-7da1-4e24-9cfe-f98b65c60820" in namespace "containers-9648" to be "success or failure"
+Jun  3 20:30:40.444: INFO: Pod "client-containers-5d719ae1-7da1-4e24-9cfe-f98b65c60820": Phase="Pending", Reason="", readiness=false. Elapsed: 5.416793ms
+Jun  3 20:30:42.448: INFO: Pod "client-containers-5d719ae1-7da1-4e24-9cfe-f98b65c60820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009497269s
+STEP: Saw pod success
+Jun  3 20:30:42.448: INFO: Pod "client-containers-5d719ae1-7da1-4e24-9cfe-f98b65c60820" satisfied condition "success or failure"
+Jun  3 20:30:42.451: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod client-containers-5d719ae1-7da1-4e24-9cfe-f98b65c60820 container test-container: 
+STEP: delete the pod
+Jun  3 20:30:42.472: INFO: Waiting for pod client-containers-5d719ae1-7da1-4e24-9cfe-f98b65c60820 to disappear
+Jun  3 20:30:42.475: INFO: Pod client-containers-5d719ae1-7da1-4e24-9cfe-f98b65c60820 no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:30:42.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "containers-9648" for this suite.
+Jun  3 20:30:48.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:30:48.580: INFO: namespace containers-9648 deletion completed in 6.100897835s
+
+• [SLOW TEST:8.182 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
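+
+Overriding "the image's default arguments (docker cmd)" maps onto the pod spec as: `args` replaces the image's CMD, while `command` would replace its ENTRYPOINT. A minimal sketch with illustrative names:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: args-override-demo       # hypothetical
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox:1.29
+    # "args" overrides the image's default CMD; ENTRYPOINT is left untouched
+    args: ["echo", "overridden arguments"]
+```
+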
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Services 
+  should be able to change the type from ClusterIP to ExternalName [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:30:48.581: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
+[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6062
+STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
+STEP: creating service externalsvc in namespace services-6062
+STEP: creating replication controller externalsvc in namespace services-6062
+I0603 20:30:48.655414      25 runners.go:184] Created replication controller with name: externalsvc, namespace: services-6062, replica count: 2
+I0603 20:30:51.706861      25 runners.go:184] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+STEP: changing the ClusterIP service to type=ExternalName
+Jun  3 20:30:51.727: INFO: Creating new exec pod
+Jun  3 20:30:53.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-6062 execpod7vbhp -- /bin/sh -x -c nslookup clusterip-service'
+Jun  3 20:30:53.985: INFO: stderr: "+ nslookup clusterip-service\n"
+Jun  3 20:30:53.985: INFO: stdout: "Server:\t\t172.19.0.10\nAddress:\t172.19.0.10#53\n\nclusterip-service.services-6062.svc.cluster.local\tcanonical name = externalsvc.services-6062.svc.cluster.local.\nName:\texternalsvc.services-6062.svc.cluster.local\nAddress: 172.19.193.220\n\n"
+STEP: deleting ReplicationController externalsvc in namespace services-6062, will wait for the garbage collector to delete the pods
+Jun  3 20:30:54.050: INFO: Deleting ReplicationController externalsvc took: 10.704595ms
+Jun  3 20:30:54.450: INFO: Terminating ReplicationController externalsvc pods took: 400.291301ms
+Jun  3 20:31:04.383: INFO: Cleaning up the ClusterIP to ExternalName test service
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:31:04.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-6062" for this suite.
+Jun  3 20:31:10.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:31:10.510: INFO: namespace services-6062 deletion completed in 6.103248956s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
+
+• [SLOW TEST:21.930 seconds]
+[sig-network] Services
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should be able to change the type from ClusterIP to ExternalName [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
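+
+Changing a Service from ClusterIP to ExternalName turns in-cluster lookups of its name into a DNS CNAME to the target, which is exactly what the logged `nslookup` shows. A minimal sketch of the post-change Service; names are illustrative:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: clusterip-service        # hypothetical
+  namespace: default
+spec:
+  type: ExternalName             # changed from ClusterIP
+  externalName: externalsvc.default.svc.cluster.local
+# In-cluster lookups of clusterip-service now return a CNAME, e.g.:
+#   kubectl exec <pod> -- nslookup clusterip-service
+```
+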
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] ReplicaSet 
+  should adopt matching pods on creation and release no longer matching pods [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:31:10.511: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename replicaset
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Given a Pod with a 'name' label pod-adoption-release is created
+STEP: When a replicaset with a matching selector is created
+STEP: Then the orphan pod is adopted
+STEP: When the matched label of one of its pods change
+Jun  3 20:31:13.587: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
+STEP: Then the pod is released
+[AfterEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:31:14.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replicaset-7768" for this suite.
+Jun  3 20:31:26.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:31:26.724: INFO: namespace replicaset-7768 deletion completed in 12.113760256s
+
+• [SLOW TEST:16.213 seconds]
+[sig-apps] ReplicaSet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should adopt matching pods on creation and release no longer matching pods [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
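+
+ReplicaSet adoption and release are both driven by the label selector: a bare pod matching the selector is adopted, and relabeling an owned pod so it no longer matches releases it (and triggers a replacement). A minimal sketch with illustrative names:
+
+```yaml
+apiVersion: apps/v1
+kind: ReplicaSet
+metadata:
+  name: pod-adoption-release-demo   # hypothetical
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      name: pod-adoption-release    # a pre-existing bare pod with this label is adopted
+  template:
+    metadata:
+      labels:
+        name: pod-adoption-release
+    spec:
+      containers:
+      - name: main
+        image: busybox:1.29
+        command: ["sh", "-c", "sleep 3600"]
+# Relabeling the adopted pod (e.g. name=released) takes it out of the selector,
+# so the ReplicaSet releases it and creates a replacement.
+```
+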
+SSS
+------------------------------
+[k8s.io] Container Runtime blackbox test when starting a container that exits 
+  should run with the expected status [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:31:26.724: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-runtime
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should run with the expected status [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
+STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
+STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:31:50.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-4929" for this suite.
+Jun  3 20:31:56.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:31:56.127: INFO: namespace container-runtime-4929 deletion completed in 6.113018968s
+
+• [SLOW TEST:29.404 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  blackbox test
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
+    when starting a container that exits
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
+      should run with the expected status [NodeConformance] [Conformance]
+      /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
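+
+The container-runtime blackbox test runs containers that exit with known codes and checks the resulting `RestartCount`, `Phase`, `Ready`, and `State` against each restart policy. A minimal sketch of one such case; names and the exit code are illustrative:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: terminate-cmd-demo       # hypothetical
+spec:
+  restartPolicy: OnFailure       # the suite cycles Always / OnFailure / Never
+  containers:
+  - name: terminate-cmd
+    image: busybox:1.29
+    command: ["sh", "-c", "exit 1"]   # non-zero exit drives RestartCount and Phase
+```
+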
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Secrets 
+  should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:31:56.128: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name secret-test-98f75712-b4fa-445e-bb35-e394b38aa829
+STEP: Creating a pod to test consume secrets
+Jun  3 20:31:56.173: INFO: Waiting up to 5m0s for pod "pod-secrets-562f6207-a667-44de-8227-66c4613ce0d8" in namespace "secrets-4578" to be "success or failure"
+Jun  3 20:31:56.176: INFO: Pod "pod-secrets-562f6207-a667-44de-8227-66c4613ce0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.318727ms
+Jun  3 20:31:58.183: INFO: Pod "pod-secrets-562f6207-a667-44de-8227-66c4613ce0d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009746482s
+STEP: Saw pod success
+Jun  3 20:31:58.183: INFO: Pod "pod-secrets-562f6207-a667-44de-8227-66c4613ce0d8" satisfied condition "success or failure"
+Jun  3 20:31:58.186: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-secrets-562f6207-a667-44de-8227-66c4613ce0d8 container secret-env-test: 
+STEP: delete the pod
+Jun  3 20:31:58.218: INFO: Waiting for pod pod-secrets-562f6207-a667-44de-8227-66c4613ce0d8 to disappear
+Jun  3 20:31:58.221: INFO: Pod pod-secrets-562f6207-a667-44de-8227-66c4613ce0d8 no longer exists
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:31:58.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-4578" for this suite.
+Jun  3 20:32:04.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:32:04.326: INFO: namespace secrets-4578 deletion completed in 6.100841276s
+
+• [SLOW TEST:8.199 seconds]
+[sig-api-machinery] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
+  should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for the cluster  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:32:04.327: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for the cluster  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jun  3 20:32:18.410: INFO: DNS probes using dns-1644/dns-test-ec746ba5-3ec6-4b8d-a5d7-8b9c8adcfd64 succeeded
+
+STEP: deleting the pod
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:32:18.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-1644" for this suite.
+Jun  3 20:32:24.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:32:24.529: INFO: namespace dns-1644 deletion completed in 6.101975623s
+
+• [SLOW TEST:20.202 seconds]
+[sig-network] DNS
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide DNS for the cluster  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
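+
+The DNS probes above loop `dig` over UDP and TCP until the cluster records resolve. A stripped-down sketch of the same check as a single pod, with an illustrative name:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dns-probe-demo           # hypothetical
+spec:
+  restartPolicy: Never
+  containers:
+  - name: querier
+    image: busybox:1.29
+    # resolves the API server's in-cluster service record, as the probes do
+    command: ["sh", "-c", "nslookup kubernetes.default.svc.cluster.local"]
+```
+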
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Secrets 
+  should fail to create secret due to empty secret key [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:32:24.529: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should fail to create secret due to empty secret key [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating projection with secret that has name secret-emptykey-test-89ab45d0-7e34-4bb3-905f-82eb28d95cb4
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:32:24.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-5612" for this suite.
+Jun  3 20:32:30.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:32:30.668: INFO: namespace secrets-5612 deletion completed in 6.100230398s
+
+• [SLOW TEST:6.139 seconds]
+[sig-api-machinery] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
+  should fail to create secret due to empty secret key [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
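+
+This negative test asserts that API validation rejects a Secret containing an empty key name. A sketch of the kind of manifest that must fail; it is intentionally invalid:
+
+```yaml
+# Intentionally invalid: an empty key name is rejected by API validation,
+# which is exactly what the test asserts.
+apiVersion: v1
+kind: Secret
+metadata:
+  name: secret-emptykey-demo     # hypothetical
+data:
+  "": dmFsdWUtMQ==               # empty key -> the create request fails
+```
+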
+SSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:32:30.668: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
+[It] should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 20:32:30.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b70c28af-ecae-4aeb-bd59-9ada55009c7d" in namespace "projected-6176" to be "success or failure"
+Jun  3 20:32:30.715: INFO: Pod "downwardapi-volume-b70c28af-ecae-4aeb-bd59-9ada55009c7d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.229366ms
+Jun  3 20:32:32.720: INFO: Pod "downwardapi-volume-b70c28af-ecae-4aeb-bd59-9ada55009c7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007603155s
+STEP: Saw pod success
+Jun  3 20:32:32.720: INFO: Pod "downwardapi-volume-b70c28af-ecae-4aeb-bd59-9ada55009c7d" satisfied condition "success or failure"
+Jun  3 20:32:32.723: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod downwardapi-volume-b70c28af-ecae-4aeb-bd59-9ada55009c7d container client-container: 
+STEP: delete the pod
+Jun  3 20:32:32.754: INFO: Waiting for pod downwardapi-volume-b70c28af-ecae-4aeb-bd59-9ada55009c7d to disappear
+Jun  3 20:32:32.760: INFO: Pod downwardapi-volume-b70c28af-ecae-4aeb-bd59-9ada55009c7d no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:32:32.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-6176" for this suite.
+Jun  3 20:32:38.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:32:38.873: INFO: namespace projected-6176 deletion completed in 6.108978422s
+
+• [SLOW TEST:8.205 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:32:38.873: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name configmap-test-upd-b13b06c2-e743-4518-8215-41faf931c4e4
+STEP: Creating the pod
+STEP: Waiting for pod with text data
+STEP: Waiting for pod with binary data
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:32:40.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-218" for this suite.
+Jun  3 20:32:56.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:32:57.073: INFO: namespace configmap-218 deletion completed in 16.117119279s
+
+• [SLOW TEST:18.200 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
+  binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:32:57.073: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for the rc to be deleted
+STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
+STEP: Gathering metrics
+Jun  3 20:33:37.158: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+W0603 20:33:37.158575      25 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:33:37.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-4244" for this suite.
+Jun  3 20:33:43.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:33:43.266: INFO: namespace gc-4244 deletion completed in 6.103791508s
+
+• [SLOW TEST:46.193 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  works for multiple CRDs of different groups [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:33:43.266: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] works for multiple CRDs of different groups [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
+Jun  3 20:33:43.299: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:33:47.015: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:34:01.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-3654" for this suite.
+Jun  3 20:34:07.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:34:07.204: INFO: namespace crd-publish-openapi-3654 deletion completed in 6.100729075s
+
+• [SLOW TEST:23.938 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  works for multiple CRDs of different groups [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:34:07.204: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0666 on node default medium
+Jun  3 20:34:07.246: INFO: Waiting up to 5m0s for pod "pod-bf2aee4a-76f3-42a2-a2cc-c8e4e39156aa" in namespace "emptydir-4284" to be "success or failure"
+Jun  3 20:34:07.248: INFO: Pod "pod-bf2aee4a-76f3-42a2-a2cc-c8e4e39156aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241631ms
+Jun  3 20:34:09.251: INFO: Pod "pod-bf2aee4a-76f3-42a2-a2cc-c8e4e39156aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005549447s
+STEP: Saw pod success
+Jun  3 20:34:09.251: INFO: Pod "pod-bf2aee4a-76f3-42a2-a2cc-c8e4e39156aa" satisfied condition "success or failure"
+Jun  3 20:34:09.255: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-bf2aee4a-76f3-42a2-a2cc-c8e4e39156aa container test-container: 
+STEP: delete the pod
+Jun  3 20:34:09.276: INFO: Waiting for pod pod-bf2aee4a-76f3-42a2-a2cc-c8e4e39156aa to disappear
+Jun  3 20:34:09.279: INFO: Pod pod-bf2aee4a-76f3-42a2-a2cc-c8e4e39156aa no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:34:09.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-4284" for this suite.
+Jun  3 20:34:15.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:34:15.383: INFO: namespace emptydir-4284 deletion completed in 6.100059007s
+
+• [SLOW TEST:8.179 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Downward API 
+  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:34:15.384: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward api env vars
+Jun  3 20:34:15.426: INFO: Waiting up to 5m0s for pod "downward-api-416ca72d-974d-484c-8a00-f270a8f84057" in namespace "downward-api-1322" to be "success or failure"
+Jun  3 20:34:15.429: INFO: Pod "downward-api-416ca72d-974d-484c-8a00-f270a8f84057": Phase="Pending", Reason="", readiness=false. Elapsed: 3.495528ms
+Jun  3 20:34:17.434: INFO: Pod "downward-api-416ca72d-974d-484c-8a00-f270a8f84057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007945915s
+Jun  3 20:34:19.438: INFO: Pod "downward-api-416ca72d-974d-484c-8a00-f270a8f84057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012696464s
+STEP: Saw pod success
+Jun  3 20:34:19.438: INFO: Pod "downward-api-416ca72d-974d-484c-8a00-f270a8f84057" satisfied condition "success or failure"
+Jun  3 20:34:19.441: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downward-api-416ca72d-974d-484c-8a00-f270a8f84057 container dapi-container: 
+STEP: delete the pod
+Jun  3 20:34:19.471: INFO: Waiting for pod downward-api-416ca72d-974d-484c-8a00-f270a8f84057 to disappear
+Jun  3 20:34:19.475: INFO: Pod downward-api-416ca72d-974d-484c-8a00-f270a8f84057 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:34:19.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-1322" for this suite.
+Jun  3 20:34:25.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:34:25.575: INFO: namespace downward-api-1322 deletion completed in 6.095379077s
+
+• [SLOW TEST:10.191 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
+  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy through a service and a pod  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] version v1
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:34:25.575: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy through a service and a pod  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: starting an echo server on multiple ports
+STEP: creating replication controller proxy-service-zv2j8 in namespace proxy-5094
+I0603 20:34:25.635224      25 runners.go:184] Created replication controller with name: proxy-service-zv2j8, namespace: proxy-5094, replica count: 1
+I0603 20:34:26.685706      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I0603 20:34:27.685954      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I0603 20:34:28.686204      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0603 20:34:29.686559      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0603 20:34:30.686770      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0603 20:34:31.687025      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0603 20:34:32.687287      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0603 20:34:33.687654      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0603 20:34:34.687960      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0603 20:34:35.688179      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0603 20:34:36.688390      25 runners.go:184] proxy-service-zv2j8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+Jun  3 20:34:36.696: INFO: setup took 11.086054916s, starting test cases
+STEP: running 16 cases, 20 attempts per case, 320 total attempts
+Jun  3 20:34:36.701: INFO: (0) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 5.441578ms)
+Jun  3 20:34:36.701: INFO: (0) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 5.4992ms)
+Jun  3 20:34:36.702: INFO: (0) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 5.912499ms)
+Jun  3 20:34:36.702: INFO: (0) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 6.440748ms)
+Jun  3 20:34:36.703: INFO: (0) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 6.800685ms)
+Jun  3 20:34:36.703: INFO: (0) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 6.954991ms)
+Jun  3 20:34:36.703: INFO: (0) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 7.318028ms)
+Jun  3 20:34:36.704: INFO: (0) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 7.716201ms)
+Jun  3 20:34:36.704: INFO: (0) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 7.926222ms)
+Jun  3 20:34:36.707: INFO: (0) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 10.980275ms)
+Jun  3 20:34:36.707: INFO: (0) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 11.211612ms)
+Jun  3 20:34:36.708: INFO: (0) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: ... (200; 5.281645ms)
+Jun  3 20:34:36.717: INFO: (1) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 6.565538ms)
+Jun  3 20:34:36.717: INFO: (1) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test<... (200; 8.245972ms)
+Jun  3 20:34:36.719: INFO: (1) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 8.463639ms)
+Jun  3 20:34:36.719: INFO: (1) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 8.724128ms)
+Jun  3 20:34:36.719: INFO: (1) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 8.754675ms)
+Jun  3 20:34:36.720: INFO: (1) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 9.029575ms)
+Jun  3 20:34:36.720: INFO: (1) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 9.244412ms)
+Jun  3 20:34:36.720: INFO: (1) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 9.592987ms)
+Jun  3 20:34:36.720: INFO: (1) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 9.571812ms)
+Jun  3 20:34:36.726: INFO: (2) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 5.541469ms)
+Jun  3 20:34:36.726: INFO: (2) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 5.684479ms)
+Jun  3 20:34:36.727: INFO: (2) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 6.37527ms)
+Jun  3 20:34:36.727: INFO: (2) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 6.456006ms)
+Jun  3 20:34:36.727: INFO: (2) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 6.599573ms)
+Jun  3 20:34:36.728: INFO: (2) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 7.232485ms)
+Jun  3 20:34:36.728: INFO: (2) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 7.928199ms)
+Jun  3 20:34:36.729: INFO: (2) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 8.015174ms)
+Jun  3 20:34:36.731: INFO: (2) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: ... (200; 10.211299ms)
+Jun  3 20:34:36.731: INFO: (2) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 10.802704ms)
+Jun  3 20:34:36.731: INFO: (2) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 10.886681ms)
+Jun  3 20:34:36.732: INFO: (2) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 11.080586ms)
+Jun  3 20:34:36.732: INFO: (2) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 11.382581ms)
+Jun  3 20:34:36.738: INFO: (3) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test (200; 6.869714ms)
+Jun  3 20:34:36.740: INFO: (3) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 7.578843ms)
+Jun  3 20:34:36.740: INFO: (3) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 8.133866ms)
+Jun  3 20:34:36.740: INFO: (3) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 8.209261ms)
+Jun  3 20:34:36.741: INFO: (3) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 8.591264ms)
+Jun  3 20:34:36.741: INFO: (3) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 8.644743ms)
+Jun  3 20:34:36.741: INFO: (3) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 9.141736ms)
+Jun  3 20:34:36.741: INFO: (3) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 9.167489ms)
+Jun  3 20:34:36.741: INFO: (3) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 9.145457ms)
+Jun  3 20:34:36.742: INFO: (3) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 10.02476ms)
+Jun  3 20:34:36.742: INFO: (3) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 10.013414ms)
+Jun  3 20:34:36.747: INFO: (4) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 4.191618ms)
+Jun  3 20:34:36.747: INFO: (4) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 4.285074ms)
+Jun  3 20:34:36.749: INFO: (4) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 6.177382ms)
+Jun  3 20:34:36.749: INFO: (4) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 6.839866ms)
+Jun  3 20:34:36.749: INFO: (4) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 6.246887ms)
+Jun  3 20:34:36.750: INFO: (4) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 7.294559ms)
+Jun  3 20:34:36.750: INFO: (4) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 6.51053ms)
+Jun  3 20:34:36.750: INFO: (4) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 6.841793ms)
+Jun  3 20:34:36.750: INFO: (4) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 7.315303ms)
+Jun  3 20:34:36.750: INFO: (4) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 7.198887ms)
+Jun  3 20:34:36.750: INFO: (4) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test<... (200; 4.295988ms)
+Jun  3 20:34:36.756: INFO: (5) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 4.964853ms)
+Jun  3 20:34:36.756: INFO: (5) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 5.367539ms)
+Jun  3 20:34:36.757: INFO: (5) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 5.74118ms)
+Jun  3 20:34:36.757: INFO: (5) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 6.114166ms)
+Jun  3 20:34:36.757: INFO: (5) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 6.732396ms)
+Jun  3 20:34:36.758: INFO: (5) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 6.67315ms)
+Jun  3 20:34:36.758: INFO: (5) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 7.228578ms)
+Jun  3 20:34:36.758: INFO: (5) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 7.586616ms)
+Jun  3 20:34:36.759: INFO: (5) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 7.769193ms)
+Jun  3 20:34:36.759: INFO: (5) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 8.00763ms)
+Jun  3 20:34:36.759: INFO: (5) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 8.018557ms)
+Jun  3 20:34:36.759: INFO: (5) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 8.032628ms)
+Jun  3 20:34:36.763: INFO: (6) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 3.845743ms)
+Jun  3 20:34:36.764: INFO: (6) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 5.238605ms)
+Jun  3 20:34:36.764: INFO: (6) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 5.267679ms)
+Jun  3 20:34:36.765: INFO: (6) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 5.518792ms)
+Jun  3 20:34:36.766: INFO: (6) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 6.396089ms)
+Jun  3 20:34:36.767: INFO: (6) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test (200; 7.933466ms)
+Jun  3 20:34:36.767: INFO: (6) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 8.042647ms)
+Jun  3 20:34:36.767: INFO: (6) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 8.243373ms)
+Jun  3 20:34:36.767: INFO: (6) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 8.315191ms)
+Jun  3 20:34:36.767: INFO: (6) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 8.21388ms)
+Jun  3 20:34:36.768: INFO: (6) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 8.372321ms)
+Jun  3 20:34:36.768: INFO: (6) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 8.357233ms)
+Jun  3 20:34:36.768: INFO: (6) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 9.141971ms)
+Jun  3 20:34:36.768: INFO: (6) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 9.034254ms)
+Jun  3 20:34:36.771: INFO: (7) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 2.962723ms)
+Jun  3 20:34:36.772: INFO: (7) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 3.381008ms)
+Jun  3 20:34:36.772: INFO: (7) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 3.71002ms)
+Jun  3 20:34:36.772: INFO: (7) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 3.903293ms)
+Jun  3 20:34:36.774: INFO: (7) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 5.033895ms)
+Jun  3 20:34:36.774: INFO: (7) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 5.354571ms)
+Jun  3 20:34:36.774: INFO: (7) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 5.992745ms)
+Jun  3 20:34:36.775: INFO: (7) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 6.335697ms)
+Jun  3 20:34:36.775: INFO: (7) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 6.55901ms)
+Jun  3 20:34:36.775: INFO: (7) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 6.70645ms)
+Jun  3 20:34:36.776: INFO: (7) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 7.157439ms)
+Jun  3 20:34:36.776: INFO: (7) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 7.220293ms)
+Jun  3 20:34:36.776: INFO: (7) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 7.581658ms)
+Jun  3 20:34:36.777: INFO: (7) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: ... (200; 8.388328ms)
+Jun  3 20:34:36.785: INFO: (8) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 8.405772ms)
+Jun  3 20:34:36.785: INFO: (8) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 8.378177ms)
+Jun  3 20:34:36.785: INFO: (8) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 8.481193ms)
+Jun  3 20:34:36.785: INFO: (8) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 8.488387ms)
+Jun  3 20:34:36.786: INFO: (8) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 8.892365ms)
+Jun  3 20:34:36.786: INFO: (8) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 9.139522ms)
+Jun  3 20:34:36.786: INFO: (8) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test (200; 10.385973ms)
+Jun  3 20:34:36.788: INFO: (8) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 10.782457ms)
+Jun  3 20:34:36.789: INFO: (8) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 11.812754ms)
+Jun  3 20:34:36.793: INFO: (9) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 3.90213ms)
+Jun  3 20:34:36.794: INFO: (9) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 4.647542ms)
+Jun  3 20:34:36.794: INFO: (9) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 5.163327ms)
+Jun  3 20:34:36.794: INFO: (9) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 4.709249ms)
+Jun  3 20:34:36.794: INFO: (9) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 4.985292ms)
+Jun  3 20:34:36.795: INFO: (9) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 5.270355ms)
+Jun  3 20:34:36.795: INFO: (9) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 5.753399ms)
+Jun  3 20:34:36.795: INFO: (9) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 5.648039ms)
+Jun  3 20:34:36.795: INFO: (9) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test<... (200; 6.352697ms)
+Jun  3 20:34:36.796: INFO: (9) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 6.938858ms)
+Jun  3 20:34:36.796: INFO: (9) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 6.893118ms)
+Jun  3 20:34:36.796: INFO: (9) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 6.99726ms)
+Jun  3 20:34:36.796: INFO: (9) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 7.057712ms)
+Jun  3 20:34:36.797: INFO: (9) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 7.788196ms)
+Jun  3 20:34:36.801: INFO: (10) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 3.315914ms)
+Jun  3 20:34:36.801: INFO: (10) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 3.635186ms)
+Jun  3 20:34:36.802: INFO: (10) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 5.113982ms)
+Jun  3 20:34:36.802: INFO: (10) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: ... (200; 6.095694ms)
+Jun  3 20:34:36.804: INFO: (10) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 6.084069ms)
+Jun  3 20:34:36.804: INFO: (10) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 6.483389ms)
+Jun  3 20:34:36.804: INFO: (10) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 6.339026ms)
+Jun  3 20:34:36.804: INFO: (10) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 6.150083ms)
+Jun  3 20:34:36.805: INFO: (10) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 7.113543ms)
+Jun  3 20:34:36.805: INFO: (10) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 7.824387ms)
+Jun  3 20:34:36.805: INFO: (10) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 7.558091ms)
+Jun  3 20:34:36.805: INFO: (10) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 7.58017ms)
+Jun  3 20:34:36.807: INFO: (10) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 8.673138ms)
+Jun  3 20:34:36.807: INFO: (10) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 8.770084ms)
+Jun  3 20:34:36.814: INFO: (11) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 7.045609ms)
+Jun  3 20:34:36.814: INFO: (11) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 6.715515ms)
+Jun  3 20:34:36.814: INFO: (11) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 7.174727ms)
+Jun  3 20:34:36.815: INFO: (11) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 7.258349ms)
+Jun  3 20:34:36.815: INFO: (11) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 7.45861ms)
+Jun  3 20:34:36.815: INFO: (11) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 7.608494ms)
+Jun  3 20:34:36.815: INFO: (11) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test<... (200; 8.408625ms)
+Jun  3 20:34:36.817: INFO: (11) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 9.556878ms)
+Jun  3 20:34:36.817: INFO: (11) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 9.57285ms)
+Jun  3 20:34:36.817: INFO: (11) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 9.332466ms)
+Jun  3 20:34:36.817: INFO: (11) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 9.592268ms)
+Jun  3 20:34:36.817: INFO: (11) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 9.391194ms)
+Jun  3 20:34:36.817: INFO: (11) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 9.795871ms)
+Jun  3 20:34:36.818: INFO: (11) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 10.361111ms)
+Jun  3 20:34:36.821: INFO: (12) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 2.938599ms)
+Jun  3 20:34:36.821: INFO: (12) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 3.411809ms)
+Jun  3 20:34:36.821: INFO: (12) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 3.180853ms)
+Jun  3 20:34:36.823: INFO: (12) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 5.067291ms)
+Jun  3 20:34:36.823: INFO: (12) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: ... (200; 5.92773ms)
+Jun  3 20:34:36.824: INFO: (12) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 5.852512ms)
+Jun  3 20:34:36.824: INFO: (12) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 6.44689ms)
+Jun  3 20:34:36.824: INFO: (12) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 6.509762ms)
+Jun  3 20:34:36.824: INFO: (12) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 6.66218ms)
+Jun  3 20:34:36.825: INFO: (12) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 6.681019ms)
+Jun  3 20:34:36.825: INFO: (12) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 7.395521ms)
+Jun  3 20:34:36.825: INFO: (12) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 7.420925ms)
+Jun  3 20:34:36.826: INFO: (12) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 7.636256ms)
+Jun  3 20:34:36.826: INFO: (12) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 7.637979ms)
+Jun  3 20:34:36.826: INFO: (12) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 7.588003ms)
+Jun  3 20:34:36.829: INFO: (13) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: ... (200; 3.862743ms)
+Jun  3 20:34:36.830: INFO: (13) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 3.74156ms)
+Jun  3 20:34:36.830: INFO: (13) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 4.136328ms)
+Jun  3 20:34:36.830: INFO: (13) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 4.363567ms)
+Jun  3 20:34:36.831: INFO: (13) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 5.163385ms)
+Jun  3 20:34:36.831: INFO: (13) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 5.329986ms)
+Jun  3 20:34:36.832: INFO: (13) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 5.282199ms)
+Jun  3 20:34:36.832: INFO: (13) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 5.342346ms)
+Jun  3 20:34:36.832: INFO: (13) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 5.507182ms)
+Jun  3 20:34:36.832: INFO: (13) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 5.847638ms)
+Jun  3 20:34:36.832: INFO: (13) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 5.789695ms)
+Jun  3 20:34:36.832: INFO: (13) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 6.273314ms)
+Jun  3 20:34:36.832: INFO: (13) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 6.227516ms)
+Jun  3 20:34:36.833: INFO: (13) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 7.192718ms)
+Jun  3 20:34:36.834: INFO: (13) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 7.428174ms)
+Jun  3 20:34:36.837: INFO: (14) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 3.526415ms)
+Jun  3 20:34:36.838: INFO: (14) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 4.383908ms)
+Jun  3 20:34:36.839: INFO: (14) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 4.501388ms)
+Jun  3 20:34:36.840: INFO: (14) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test (200; 6.535861ms)
+Jun  3 20:34:36.840: INFO: (14) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 6.30173ms)
+Jun  3 20:34:36.841: INFO: (14) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 7.01345ms)
+Jun  3 20:34:36.842: INFO: (14) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 7.260332ms)
+Jun  3 20:34:36.842: INFO: (14) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 6.953659ms)
+Jun  3 20:34:36.842: INFO: (14) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 7.459554ms)
+Jun  3 20:34:36.842: INFO: (14) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 7.690243ms)
+Jun  3 20:34:36.842: INFO: (14) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 8.486767ms)
+Jun  3 20:34:36.842: INFO: (14) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 7.55598ms)
+Jun  3 20:34:36.843: INFO: (14) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 7.679842ms)
+Jun  3 20:34:36.843: INFO: (14) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 7.702182ms)
+Jun  3 20:34:36.843: INFO: (14) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 7.710699ms)
+Jun  3 20:34:36.847: INFO: (15) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 3.341332ms)
+Jun  3 20:34:36.847: INFO: (15) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test (200; 4.218921ms)
+Jun  3 20:34:36.849: INFO: (15) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 4.981724ms)
+Jun  3 20:34:36.849: INFO: (15) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 4.989663ms)
+Jun  3 20:34:36.849: INFO: (15) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 6.139033ms)
+Jun  3 20:34:36.850: INFO: (15) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 5.895473ms)
+Jun  3 20:34:36.850: INFO: (15) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 6.450577ms)
+Jun  3 20:34:36.850: INFO: (15) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 6.299587ms)
+Jun  3 20:34:36.851: INFO: (15) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 6.97268ms)
+Jun  3 20:34:36.854: INFO: (16) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 3.260806ms)
+Jun  3 20:34:36.855: INFO: (16) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 3.753549ms)
+Jun  3 20:34:36.855: INFO: (16) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 4.169317ms)
+Jun  3 20:34:36.857: INFO: (16) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 5.580484ms)
+Jun  3 20:34:36.857: INFO: (16) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 5.5607ms)
+Jun  3 20:34:36.858: INFO: (16) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 6.910762ms)
+Jun  3 20:34:36.858: INFO: (16) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 6.597201ms)
+Jun  3 20:34:36.858: INFO: (16) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 6.598079ms)
+Jun  3 20:34:36.859: INFO: (16) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 7.214136ms)
+Jun  3 20:34:36.860: INFO: (16) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 7.904179ms)
+Jun  3 20:34:36.860: INFO: (16) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 8.453662ms)
+Jun  3 20:34:36.860: INFO: (16) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 8.386353ms)
+Jun  3 20:34:36.860: INFO: (16) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test (200; 6.282465ms)
+Jun  3 20:34:36.867: INFO: (17) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 6.591777ms)
+Jun  3 20:34:36.867: INFO: (17) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 6.996546ms)
+Jun  3 20:34:36.867: INFO: (17) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:1080/proxy/: test<... (200; 7.008126ms)
+Jun  3 20:34:36.867: INFO: (17) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test (200; 6.350774ms)
+Jun  3 20:34:36.876: INFO: (18) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 6.553406ms)
+Jun  3 20:34:36.877: INFO: (18) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test<... (200; 6.896582ms)
+Jun  3 20:34:36.877: INFO: (18) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 7.126868ms)
+Jun  3 20:34:36.877: INFO: (18) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 7.125153ms)
+Jun  3 20:34:36.877: INFO: (18) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 7.545187ms)
+Jun  3 20:34:36.877: INFO: (18) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 7.161626ms)
+Jun  3 20:34:36.878: INFO: (18) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 7.92536ms)
+Jun  3 20:34:36.879: INFO: (18) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 8.35679ms)
+Jun  3 20:34:36.882: INFO: (19) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:443/proxy/: test<... (200; 3.638475ms)
+Jun  3 20:34:36.884: INFO: (19) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 4.128895ms)
+Jun  3 20:34:36.885: INFO: (19) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:460/proxy/: tls baz (200; 5.589126ms)
+Jun  3 20:34:36.885: INFO: (19) /api/v1/namespaces/proxy-5094/pods/https:proxy-service-zv2j8-mvvs6:462/proxy/: tls qux (200; 5.521106ms)
+Jun  3 20:34:36.885: INFO: (19) /api/v1/namespaces/proxy-5094/pods/proxy-service-zv2j8-mvvs6/proxy/: test (200; 5.300125ms)
+Jun  3 20:34:36.886: INFO: (19) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:162/proxy/: bar (200; 6.145693ms)
+Jun  3 20:34:36.886: INFO: (19) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname2/proxy/: bar (200; 7.196245ms)
+Jun  3 20:34:36.886: INFO: (19) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname2/proxy/: tls qux (200; 6.782636ms)
+Jun  3 20:34:36.887: INFO: (19) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:160/proxy/: foo (200; 6.610301ms)
+Jun  3 20:34:36.887: INFO: (19) /api/v1/namespaces/proxy-5094/pods/http:proxy-service-zv2j8-mvvs6:1080/proxy/: ... (200; 6.820385ms)
+Jun  3 20:34:36.887: INFO: (19) /api/v1/namespaces/proxy-5094/services/proxy-service-zv2j8:portname1/proxy/: foo (200; 7.933856ms)
+Jun  3 20:34:36.887: INFO: (19) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname2/proxy/: bar (200; 8.82368ms)
+Jun  3 20:34:36.887: INFO: (19) /api/v1/namespaces/proxy-5094/services/http:proxy-service-zv2j8:portname1/proxy/: foo (200; 7.778667ms)
+Jun  3 20:34:36.888: INFO: (19) /api/v1/namespaces/proxy-5094/services/https:proxy-service-zv2j8:tlsportname1/proxy/: tls baz (200; 8.266186ms)
+STEP: deleting ReplicationController proxy-service-zv2j8 in namespace proxy-5094, will wait for the garbage collector to delete the pods
+Jun  3 20:34:36.949: INFO: Deleting ReplicationController proxy-service-zv2j8 took: 8.510828ms
+Jun  3 20:34:37.350: INFO: Terminating ReplicationController proxy-service-zv2j8 pods took: 400.267725ms
+[AfterEach] version v1
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:34:39.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "proxy-5094" for this suite.
+Jun  3 20:34:45.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:34:45.171: INFO: namespace proxy-5094 deletion completed in 6.115058931s
+
+• [SLOW TEST:19.596 seconds]
+[sig-network] Proxy
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  version v1
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
+    should proxy through a service and a pod  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should verify ResourceQuota with best effort scope. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:34:45.171: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should verify ResourceQuota with best effort scope. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a ResourceQuota with best effort scope
+STEP: Ensuring ResourceQuota status is calculated
+STEP: Creating a ResourceQuota with not best effort scope
+STEP: Ensuring ResourceQuota status is calculated
+STEP: Creating a best-effort pod
+STEP: Ensuring resource quota with best effort scope captures the pod usage
+STEP: Ensuring resource quota with not best effort ignored the pod usage
+STEP: Deleting the pod
+STEP: Ensuring resource quota status released the pod usage
+STEP: Creating a not best-effort pod
+STEP: Ensuring resource quota with not best effort scope captures the pod usage
+STEP: Ensuring resource quota with best effort scope ignored the pod usage
+STEP: Deleting the pod
+STEP: Ensuring resource quota status released the pod usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:35:01.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-1987" for this suite.
+Jun  3 20:35:07.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:35:07.405: INFO: namespace resourcequota-1987 deletion completed in 6.101274198s
+
+• [SLOW TEST:22.235 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should verify ResourceQuota with best effort scope. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
+  should have an terminated reason [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:35:07.406: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[BeforeEach] when scheduling a busybox command that always fails in a pod
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
+[It] should have an terminated reason [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:35:11.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-8154" for this suite.
+Jun  3 20:35:17.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:35:17.572: INFO: namespace kubelet-test-8154 deletion completed in 6.10889058s
+
+• [SLOW TEST:10.166 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  when scheduling a busybox command that always fails in a pod
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
+    should have an terminated reason [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
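+
+A rough sketch of the scenario above, assuming a busybox image: a pod whose command always exits non-zero, whose container status should then report a terminated state with a populated reason (typically `Error`). Pod and container names are illustrative.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: bin-false                 # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: bin-false
+    image: busybox
+    command: ["/bin/false"]       # always fails
+```
+
+```sh
+# Once the pod has run, inspect the terminated reason:
+kubectl get pod bin-false \
+  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
+```
+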
+SSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should delete old replica sets [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:35:17.572: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
+[It] deployment should delete old replica sets [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:35:17.613: INFO: Pod name cleanup-pod: Found 0 pods out of 1
+Jun  3 20:35:22.618: INFO: Pod name cleanup-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Jun  3 20:35:22.618: INFO: Creating deployment test-cleanup-deployment
+STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
+Jun  3 20:35:22.637: INFO: Deployment "test-cleanup-deployment":
+&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-8116 /apis/apps/v1/namespaces/deployment-8116/deployments/test-cleanup-deployment f9867ed4-d73e-496c-bc1b-7c058c35901e 150317 1 2020-06-03 20:35:22 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004179658  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}
+
+Jun  3 20:35:22.641: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
+Jun  3 20:35:22.641: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
+Jun  3 20:35:22.641: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-8116 /apis/apps/v1/namespaces/deployment-8116/replicasets/test-cleanup-controller 83a72228-4064-4d83-be20-56121b148a8a 150318 1 2020-06-03 20:35:17 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment f9867ed4-d73e-496c-bc1b-7c058c35901e 0xc0041799c7 0xc0041799c8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004179a28  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
+Jun  3 20:35:22.645: INFO: Pod "test-cleanup-controller-jtlq9" is available:
+&Pod{ObjectMeta:{test-cleanup-controller-jtlq9 test-cleanup-controller- deployment-8116 /api/v1/namespaces/deployment-8116/pods/test-cleanup-controller-jtlq9 cc17f5fd-0fe7-461f-ad95-1b8408675f76 150308 0 2020-06-03 20:35:17 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 83a72228-4064-4d83-be20-56121b148a8a 0xc004179d37 0xc004179d38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cw5nz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cw5nz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cw5nz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:35:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:35:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:35:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:35:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:172.20.2.50,StartTime:2020-06-03 20:35:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 20:35:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c17c23c40412f21a7de1605fe4f45b981969c5006bda4b74e0db12e78cecea0e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.2.50,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:35:22.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-8116" for this suite.
+Jun  3 20:35:28.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:35:28.756: INFO: namespace deployment-8116 deletion completed in 6.103881618s
+
+• [SLOW TEST:11.184 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  deployment should delete old replica sets [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
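+
+The Deployment dump above shows `RevisionHistoryLimit:*0`, which is what makes the superseded ReplicaSet eligible for cleanup. A sketch of a Deployment configured the same way (names are illustrative; the image matches the one used in this run):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: cleanup-demo
+spec:
+  replicas: 1
+  revisionHistoryLimit: 0         # keep no superseded ReplicaSets around
+  selector:
+    matchLabels:
+      name: cleanup-pod
+  template:
+    metadata:
+      labels:
+        name: cleanup-pod
+    spec:
+      containers:
+      - name: redis
+        image: docker.io/library/redis:5.0.5-alpine
+```
+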
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:35:28.757: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name secret-test-4125425f-2666-483c-b698-c25bd1b44793
+STEP: Creating a pod to test consume secrets
+Jun  3 20:35:28.831: INFO: Waiting up to 5m0s for pod "pod-secrets-e302709a-fd14-473d-9022-32a472ea0740" in namespace "secrets-725" to be "success or failure"
+Jun  3 20:35:28.834: INFO: Pod "pod-secrets-e302709a-fd14-473d-9022-32a472ea0740": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250869ms
+Jun  3 20:35:30.838: INFO: Pod "pod-secrets-e302709a-fd14-473d-9022-32a472ea0740": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006548793s
+STEP: Saw pod success
+Jun  3 20:35:30.838: INFO: Pod "pod-secrets-e302709a-fd14-473d-9022-32a472ea0740" satisfied condition "success or failure"
+Jun  3 20:35:30.841: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-secrets-e302709a-fd14-473d-9022-32a472ea0740 container secret-volume-test: 
+STEP: delete the pod
+Jun  3 20:35:30.862: INFO: Waiting for pod pod-secrets-e302709a-fd14-473d-9022-32a472ea0740 to disappear
+Jun  3 20:35:30.865: INFO: Pod pod-secrets-e302709a-fd14-473d-9022-32a472ea0740 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:35:30.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-725" for this suite.
+Jun  3 20:35:36.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:35:36.968: INFO: namespace secrets-725 deletion completed in 6.099621415s
+STEP: Destroying namespace "secret-namespace-437" for this suite.
+Jun  3 20:35:42.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:35:43.067: INFO: namespace secret-namespace-437 deletion completed in 6.098846827s
+
+• [SLOW TEST:14.310 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
+  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
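+
+A sketch of the mount being verified, assuming busybox: a Secret consumed as a volume, which must always resolve within the pod's own namespace even if another namespace holds a secret of the same name. Names and data are illustrative.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: test-secret               # the same name may exist in another namespace
+stringData:
+  data-1: value-1
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-secrets
+spec:
+  restartPolicy: Never
+  containers:
+  - name: secret-volume-test
+    image: busybox
+    command: ["cat", "/etc/secret-volume/data-1"]
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+      readOnly: true
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: test-secret     # resolved in the pod's namespace only
+```
+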
+S
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should be able to update and delete ResourceQuota. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:35:43.067: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to update and delete ResourceQuota. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a ResourceQuota
+STEP: Getting a ResourceQuota
+STEP: Updating a ResourceQuota
+STEP: Verifying a ResourceQuota was modified
+STEP: Deleting a ResourceQuota
+STEP: Verifying the deleted ResourceQuota
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:35:43.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-5768" for this suite.
+Jun  3 20:35:49.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:35:49.230: INFO: namespace resourcequota-5768 deletion completed in 6.101447604s
+
+• [SLOW TEST:6.164 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to update and delete ResourceQuota. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
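+
+The same create/get/update/delete lifecycle can be walked through by hand with kubectl; a sketch with an illustrative quota name:
+
+```sh
+kubectl create quota demo-quota --hard=pods=2               # create
+kubectl get resourcequota demo-quota -o yaml                # get, status included
+kubectl patch resourcequota demo-quota \
+  -p '{"spec":{"hard":{"pods":"3"}}}'                       # update
+kubectl delete resourcequota demo-quota                     # delete
+kubectl get resourcequota demo-quota                        # now returns NotFound
+```
+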
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Update Demo 
+  should scale a replication controller  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:35:49.231: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Update Demo
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
+[It] should scale a replication controller  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating a replication controller
+Jun  3 20:35:49.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-9558'
+Jun  3 20:35:49.812: INFO: stderr: ""
+Jun  3 20:35:49.812: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun  3 20:35:49.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9558'
+Jun  3 20:35:49.926: INFO: stderr: ""
+Jun  3 20:35:49.926: INFO: stdout: "update-demo-nautilus-6f2hf update-demo-nautilus-mvlwg "
+Jun  3 20:35:49.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-6f2hf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:35:50.023: INFO: stderr: ""
+Jun  3 20:35:50.023: INFO: stdout: ""
+Jun  3 20:35:50.023: INFO: update-demo-nautilus-6f2hf is created but not running
+Jun  3 20:35:55.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9558'
+Jun  3 20:35:55.126: INFO: stderr: ""
+Jun  3 20:35:55.126: INFO: stdout: "update-demo-nautilus-6f2hf update-demo-nautilus-mvlwg "
+Jun  3 20:35:55.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-6f2hf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:35:55.217: INFO: stderr: ""
+Jun  3 20:35:55.217: INFO: stdout: "true"
+Jun  3 20:35:55.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-6f2hf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:35:55.306: INFO: stderr: ""
+Jun  3 20:35:55.306: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun  3 20:35:55.306: INFO: validating pod update-demo-nautilus-6f2hf
+Jun  3 20:35:55.311: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun  3 20:35:55.311: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun  3 20:35:55.311: INFO: update-demo-nautilus-6f2hf is verified up and running
+Jun  3 20:35:55.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-mvlwg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:35:55.401: INFO: stderr: ""
+Jun  3 20:35:55.401: INFO: stdout: "true"
+Jun  3 20:35:55.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-mvlwg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:35:55.494: INFO: stderr: ""
+Jun  3 20:35:55.494: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun  3 20:35:55.494: INFO: validating pod update-demo-nautilus-mvlwg
+Jun  3 20:35:55.499: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun  3 20:35:55.499: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun  3 20:35:55.499: INFO: update-demo-nautilus-mvlwg is verified up and running
+STEP: scaling down the replication controller
+Jun  3 20:35:55.503: INFO: scanned /root for discovery docs: 
+Jun  3 20:35:55.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9558'
+Jun  3 20:35:56.623: INFO: stderr: ""
+Jun  3 20:35:56.623: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun  3 20:35:56.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9558'
+Jun  3 20:35:56.719: INFO: stderr: ""
+Jun  3 20:35:56.719: INFO: stdout: "update-demo-nautilus-6f2hf update-demo-nautilus-mvlwg "
+STEP: Replicas for name=update-demo: expected=1 actual=2
+Jun  3 20:36:01.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9558'
+Jun  3 20:36:01.826: INFO: stderr: ""
+Jun  3 20:36:01.827: INFO: stdout: "update-demo-nautilus-6f2hf update-demo-nautilus-mvlwg "
+STEP: Replicas for name=update-demo: expected=1 actual=2
+Jun  3 20:36:06.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9558'
+Jun  3 20:36:06.927: INFO: stderr: ""
+Jun  3 20:36:06.927: INFO: stdout: "update-demo-nautilus-6f2hf "
+Jun  3 20:36:06.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-6f2hf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:36:07.021: INFO: stderr: ""
+Jun  3 20:36:07.021: INFO: stdout: "true"
+Jun  3 20:36:07.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-6f2hf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:36:07.126: INFO: stderr: ""
+Jun  3 20:36:07.126: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun  3 20:36:07.126: INFO: validating pod update-demo-nautilus-6f2hf
+Jun  3 20:36:07.130: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun  3 20:36:07.130: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun  3 20:36:07.130: INFO: update-demo-nautilus-6f2hf is verified up and running
+STEP: scaling up the replication controller
+Jun  3 20:36:07.132: INFO: scanned /root for discovery docs: 
+Jun  3 20:36:07.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9558'
+Jun  3 20:36:08.256: INFO: stderr: ""
+Jun  3 20:36:08.256: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun  3 20:36:08.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9558'
+Jun  3 20:36:08.358: INFO: stderr: ""
+Jun  3 20:36:08.358: INFO: stdout: "update-demo-nautilus-6f2hf update-demo-nautilus-qshw7 "
+Jun  3 20:36:08.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-6f2hf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:36:08.458: INFO: stderr: ""
+Jun  3 20:36:08.458: INFO: stdout: "true"
+Jun  3 20:36:08.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-6f2hf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:36:08.559: INFO: stderr: ""
+Jun  3 20:36:08.559: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun  3 20:36:08.559: INFO: validating pod update-demo-nautilus-6f2hf
+Jun  3 20:36:08.563: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun  3 20:36:08.563: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun  3 20:36:08.563: INFO: update-demo-nautilus-6f2hf is verified up and running
+Jun  3 20:36:08.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-qshw7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:36:08.662: INFO: stderr: ""
+Jun  3 20:36:08.662: INFO: stdout: ""
+Jun  3 20:36:08.662: INFO: update-demo-nautilus-qshw7 is created but not running
+Jun  3 20:36:13.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9558'
+Jun  3 20:36:13.766: INFO: stderr: ""
+Jun  3 20:36:13.766: INFO: stdout: "update-demo-nautilus-6f2hf update-demo-nautilus-qshw7 "
+Jun  3 20:36:13.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-6f2hf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:36:13.868: INFO: stderr: ""
+Jun  3 20:36:13.868: INFO: stdout: "true"
+Jun  3 20:36:13.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-6f2hf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:36:13.964: INFO: stderr: ""
+Jun  3 20:36:13.964: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun  3 20:36:13.964: INFO: validating pod update-demo-nautilus-6f2hf
+Jun  3 20:36:13.967: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun  3 20:36:13.967: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun  3 20:36:13.967: INFO: update-demo-nautilus-6f2hf is verified up and running
+Jun  3 20:36:13.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-qshw7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:36:14.072: INFO: stderr: ""
+Jun  3 20:36:14.072: INFO: stdout: "true"
+Jun  3 20:36:14.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-qshw7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9558'
+Jun  3 20:36:14.164: INFO: stderr: ""
+Jun  3 20:36:14.164: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun  3 20:36:14.164: INFO: validating pod update-demo-nautilus-qshw7
+Jun  3 20:36:14.170: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun  3 20:36:14.170: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun  3 20:36:14.170: INFO: update-demo-nautilus-qshw7 is verified up and running
+STEP: using delete to clean up resources
+Jun  3 20:36:14.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete --grace-period=0 --force -f - --namespace=kubectl-9558'
+Jun  3 20:36:14.268: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun  3 20:36:14.268: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
+Jun  3 20:36:14.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9558'
+Jun  3 20:36:14.371: INFO: stderr: "No resources found in kubectl-9558 namespace.\n"
+Jun  3 20:36:14.371: INFO: stdout: ""
+Jun  3 20:36:14.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -l name=update-demo --namespace=kubectl-9558 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jun  3 20:36:14.461: INFO: stderr: ""
+Jun  3 20:36:14.461: INFO: stdout: "update-demo-nautilus-6f2hf\nupdate-demo-nautilus-qshw7\n"
+Jun  3 20:36:14.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9558'
+Jun  3 20:36:15.058: INFO: stderr: "No resources found in kubectl-9558 namespace.\n"
+Jun  3 20:36:15.058: INFO: stdout: ""
+Jun  3 20:36:15.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -l name=update-demo --namespace=kubectl-9558 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jun  3 20:36:15.151: INFO: stderr: ""
+Jun  3 20:36:15.151: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:36:15.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-9558" for this suite.
+Jun  3 20:36:27.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:36:27.254: INFO: namespace kubectl-9558 deletion completed in 12.098580855s
+
+• [SLOW TEST:38.023 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Update Demo
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275
+    should scale a replication controller  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
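+
+Stripped of the per-run kubeconfig and namespace flags, the scaling sequence exercised above reduces to the following (the manifest filename is illustrative; the run itself pipes the manifest in with `-f -`):
+
+```sh
+kubectl create -f update-demo-nautilus.yaml                 # RC with 2 replicas
+kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
+kubectl get pods -l name=update-demo                        # eventually 1 pod
+kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m
+kubectl get pods -l name=update-demo                        # back to 2 pods
+```
+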
+SSSSSSS
+------------------------------
+[sig-apps] Job 
+  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Job
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:36:27.254: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename job
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a job
+STEP: Ensuring job reaches completions
+[AfterEach] [sig-apps] Job
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:36:35.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "job-82" for this suite.
+Jun  3 20:36:41.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:36:41.453: INFO: namespace job-82 deletion completed in 6.102177474s
+
+• [SLOW TEST:14.199 seconds]
+[sig-apps] Job
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
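+
+A sketch of the pattern this test relies on, assuming busybox: with `restartPolicy: OnFailure` the kubelet restarts a failed container in place rather than the Job controller creating a replacement pod, so a task that fails on its first attempt (recorded here on an emptyDir volume) still drives the Job to completion. Names are illustrative.
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: fail-once-local
+spec:
+  completions: 2
+  template:
+    spec:
+      restartPolicy: OnFailure    # restart the container locally on failure
+      containers:
+      - name: worker
+        image: busybox
+        command:
+        - sh
+        - -c
+        - if [ ! -f /data/ran ]; then touch /data/ran; exit 1; fi
+        volumeMounts:
+        - name: data
+          mountPath: /data
+      volumes:
+      - name: data
+        emptyDir: {}              # persists across container restarts in the pod
+```
+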
+S
+------------------------------
+[sig-cli] Kubectl client Kubectl logs 
+  should be able to retrieve and filter logs  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:36:41.453: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Kubectl logs
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1274
+STEP: creating an pod
+Jun  3 20:36:41.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.6 --namespace=kubectl-1349 -- logs-generator --log-lines-total 100 --run-duration 20s'
+Jun  3 20:36:41.598: INFO: stderr: ""
+Jun  3 20:36:41.598: INFO: stdout: "pod/logs-generator created\n"
+[It] should be able to retrieve and filter logs  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Waiting for log generator to start.
+Jun  3 20:36:41.598: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
+Jun  3 20:36:41.599: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1349" to be "running and ready, or succeeded"
+Jun  3 20:36:41.602: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.389542ms
+Jun  3 20:36:43.606: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007112347s
+Jun  3 20:36:45.610: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.011779136s
+Jun  3 20:36:45.610: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
+Jun  3 20:36:45.610: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
+STEP: checking for a matching strings
+Jun  3 20:36:45.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 logs logs-generator logs-generator --namespace=kubectl-1349'
+Jun  3 20:36:45.733: INFO: stderr: ""
+Jun  3 20:36:45.733: INFO: stdout: "I0603 20:36:42.658534       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/m8l 205\nI0603 20:36:42.858810       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/6s78 299\nI0603 20:36:43.058874       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/5vp 590\nI0603 20:36:43.258895       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/fv2 262\nI0603 20:36:43.458758       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/tg57 208\nI0603 20:36:43.658770       1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/2ml9 580\nI0603 20:36:43.858759       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/pmm 223\nI0603 20:36:44.058697       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/6pqt 474\nI0603 20:36:44.258712       1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/tqf 537\nI0603 20:36:44.458735       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/8mgp 330\nI0603 20:36:44.658818       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/vkq 292\nI0603 20:36:44.858729       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/lcw 579\nI0603 20:36:45.058753       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/xjp 390\nI0603 20:36:45.258651       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/ph9m 275\nI0603 20:36:45.458772       1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/6qk7 291\nI0603 20:36:45.658742       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/5mg 319\n"
+STEP: limiting log lines
+Jun  3 20:36:45.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 logs logs-generator logs-generator --namespace=kubectl-1349 --tail=1'
+Jun  3 20:36:45.875: INFO: stderr: ""
+Jun  3 20:36:45.875: INFO: stdout: "I0603 20:36:45.858849       1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/p5xd 380\n"
+STEP: limiting log bytes
+Jun  3 20:36:45.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 logs logs-generator logs-generator --namespace=kubectl-1349 --limit-bytes=1'
+Jun  3 20:36:46.003: INFO: stderr: ""
+Jun  3 20:36:46.003: INFO: stdout: "I"
+STEP: exposing timestamps
+Jun  3 20:36:46.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 logs logs-generator logs-generator --namespace=kubectl-1349 --tail=1 --timestamps'
+Jun  3 20:36:46.128: INFO: stderr: ""
+Jun  3 20:36:46.128: INFO: stdout: "2020-06-03T20:36:46.058941246Z I0603 20:36:46.058748       1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/sr4 520\n"
+STEP: restricting to a time range
+Jun  3 20:36:48.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 logs logs-generator logs-generator --namespace=kubectl-1349 --since=1s'
+Jun  3 20:36:48.739: INFO: stderr: ""
+Jun  3 20:36:48.739: INFO: stdout: "I0603 20:36:47.858801       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/579 533\nI0603 20:36:48.058758       1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/xps 331\nI0603 20:36:48.258691       1 logs_generator.go:76] 28 POST /api/v1/namespaces/kube-system/pods/hnp8 352\nI0603 20:36:48.458776       1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/7s27 258\nI0603 20:36:48.658805       1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/nc5r 275\n"
+Jun  3 20:36:48.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 logs logs-generator logs-generator --namespace=kubectl-1349 --since=24h'
+Jun  3 20:36:48.852: INFO: stderr: ""
+Jun  3 20:36:48.852: INFO: stdout: "I0603 20:36:42.658534       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/m8l 205\nI0603 20:36:42.858810       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/6s78 299\nI0603 20:36:43.058874       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/5vp 590\nI0603 20:36:43.258895       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/fv2 262\nI0603 20:36:43.458758       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/tg57 208\nI0603 20:36:43.658770       1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/2ml9 580\nI0603 20:36:43.858759       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/pmm 223\nI0603 20:36:44.058697       1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/6pqt 474\nI0603 20:36:44.258712       1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/tqf 537\nI0603 20:36:44.458735       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/8mgp 330\nI0603 20:36:44.658818       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/vkq 292\nI0603 20:36:44.858729       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/lcw 579\nI0603 20:36:45.058753       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/xjp 390\nI0603 20:36:45.258651       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/ph9m 275\nI0603 20:36:45.458772       1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/6qk7 291\nI0603 20:36:45.658742       1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/5mg 319\nI0603 20:36:45.858849       1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/p5xd 380\nI0603 20:36:46.058748       1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/sr4 520\nI0603 20:36:46.258782       1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/2mn 248\nI0603 20:36:46.458764       1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/cwkq 317\nI0603 20:36:46.658742       1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/szqg 488\nI0603 20:36:46.858829       1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/j7p 367\nI0603 20:36:47.058777       1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/d22k 321\nI0603 20:36:47.258761       1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/p67 510\nI0603 20:36:47.458868       1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/s6fb 259\nI0603 20:36:47.658703       1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/rrzx 309\nI0603 20:36:47.858801       1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/579 533\nI0603 20:36:48.058758       1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/xps 331\nI0603 20:36:48.258691       1 logs_generator.go:76] 28 POST /api/v1/namespaces/kube-system/pods/hnp8 352\nI0603 20:36:48.458776       1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/7s27 258\nI0603 20:36:48.658805       1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/nc5r 275\n"
+[AfterEach] Kubectl logs
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1280
+Jun  3 20:36:48.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete pod logs-generator --namespace=kubectl-1349'
+Jun  3 20:36:50.869: INFO: stderr: ""
+Jun  3 20:36:50.869: INFO: stdout: "pod \"logs-generator\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:36:50.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-1349" for this suite.
+Jun  3 20:36:56.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:36:56.971: INFO: namespace kubectl-1349 deletion completed in 6.097747526s
+
+• [SLOW TEST:15.518 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl logs
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1270
+    should be able to retrieve and filter logs  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
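+
+Minus the kubeconfig and namespace flags, the log-filtering commands exercised above are (pod name first, container name second):
+
+```sh
+kubectl logs logs-generator logs-generator                  # full log
+kubectl logs logs-generator logs-generator --tail=1         # last line only
+kubectl logs logs-generator logs-generator --limit-bytes=1  # first byte only
+kubectl logs logs-generator logs-generator --tail=1 --timestamps
+kubectl logs logs-generator logs-generator --since=1s       # recent window
+kubectl logs logs-generator logs-generator --since=24h
+```
+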
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:36:56.972: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
+[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 20:36:57.045: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed68ecdb-0408-4404-8875-3475b5b84f87" in namespace "projected-6498" to be "success or failure"
+Jun  3 20:36:57.052: INFO: Pod "downwardapi-volume-ed68ecdb-0408-4404-8875-3475b5b84f87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569593ms
+Jun  3 20:36:59.058: INFO: Pod "downwardapi-volume-ed68ecdb-0408-4404-8875-3475b5b84f87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012595881s
+STEP: Saw pod success
+Jun  3 20:36:59.058: INFO: Pod "downwardapi-volume-ed68ecdb-0408-4404-8875-3475b5b84f87" satisfied condition "success or failure"
+Jun  3 20:36:59.062: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-ed68ecdb-0408-4404-8875-3475b5b84f87 container client-container: 
+STEP: delete the pod
+Jun  3 20:36:59.089: INFO: Waiting for pod downwardapi-volume-ed68ecdb-0408-4404-8875-3475b5b84f87 to disappear
+Jun  3 20:36:59.092: INFO: Pod downwardapi-volume-ed68ecdb-0408-4404-8875-3475b5b84f87 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:36:59.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-6498" for this suite.
+Jun  3 20:37:05.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:37:05.207: INFO: namespace projected-6498 deletion completed in 6.11014561s
+
+• [SLOW TEST:8.236 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
+  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
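+
+A sketch of the volume shape under test, assuming busybox: a projected downwardAPI item carrying an explicit per-file `mode`. Names and the chosen mode are illustrative.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downwardapi-volume-mode
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: podname
+            fieldRef:
+              fieldPath: metadata.name
+            mode: 0400            # the per-item file mode being verified
+```
+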
+[sig-storage] Projected secret 
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:37:05.208: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name projected-secret-test-283316d7-750a-4062-93d8-108f3524db59
+STEP: Creating a pod to test consume secrets
+Jun  3 20:37:05.250: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b213a8a-aafd-4048-a21b-693d5058d85e" in namespace "projected-2148" to be "success or failure"
+Jun  3 20:37:05.255: INFO: Pod "pod-projected-secrets-3b213a8a-aafd-4048-a21b-693d5058d85e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.159089ms
+Jun  3 20:37:07.260: INFO: Pod "pod-projected-secrets-3b213a8a-aafd-4048-a21b-693d5058d85e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010015767s
+STEP: Saw pod success
+Jun  3 20:37:07.260: INFO: Pod "pod-projected-secrets-3b213a8a-aafd-4048-a21b-693d5058d85e" satisfied condition "success or failure"
+Jun  3 20:37:07.262: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-projected-secrets-3b213a8a-aafd-4048-a21b-693d5058d85e container secret-volume-test: 
+STEP: delete the pod
+Jun  3 20:37:07.285: INFO: Waiting for pod pod-projected-secrets-3b213a8a-aafd-4048-a21b-693d5058d85e to disappear
+Jun  3 20:37:07.288: INFO: Pod pod-projected-secrets-3b213a8a-aafd-4048-a21b-693d5058d85e no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:37:07.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-2148" for this suite.
+Jun  3 20:37:13.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:37:13.409: INFO: namespace projected-2148 deletion completed in 6.115461634s
+
+• [SLOW TEST:8.202 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
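+
+A sketch of consuming one secret through two projected volumes in the same pod, assuming busybox and illustrative names:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-projected-secrets
+spec:
+  restartPolicy: Never
+  containers:
+  - name: secret-volume-test
+    image: busybox
+    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
+    volumeMounts:
+    - name: secret-volume-1
+      mountPath: /etc/secret-volume-1
+      readOnly: true
+    - name: secret-volume-2
+      mountPath: /etc/secret-volume-2
+      readOnly: true
+  volumes:
+  - name: secret-volume-1
+    projected:
+      sources:
+      - secret:
+          name: projected-secret-test
+  - name: secret-volume-2
+    projected:
+      sources:
+      - secret:
+          name: projected-secret-test
+```
+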
+S
+------------------------------
+[sig-auth] ServiceAccounts 
+  should allow opting out of API token automount  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:37:13.410: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename svcaccounts
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow opting out of API token automount  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: getting the auto-created API token
+Jun  3 20:37:13.968: INFO: created pod pod-service-account-defaultsa
+Jun  3 20:37:13.968: INFO: pod pod-service-account-defaultsa service account token volume mount: true
+Jun  3 20:37:13.973: INFO: created pod pod-service-account-mountsa
+Jun  3 20:37:13.973: INFO: pod pod-service-account-mountsa service account token volume mount: true
+Jun  3 20:37:13.984: INFO: created pod pod-service-account-nomountsa
+Jun  3 20:37:13.984: INFO: pod pod-service-account-nomountsa service account token volume mount: false
+Jun  3 20:37:13.993: INFO: created pod pod-service-account-defaultsa-mountspec
+Jun  3 20:37:13.993: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
+Jun  3 20:37:14.003: INFO: created pod pod-service-account-mountsa-mountspec
+Jun  3 20:37:14.003: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
+Jun  3 20:37:14.028: INFO: created pod pod-service-account-nomountsa-mountspec
+Jun  3 20:37:14.028: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
+Jun  3 20:37:14.039: INFO: created pod pod-service-account-defaultsa-nomountspec
+Jun  3 20:37:14.039: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
+Jun  3 20:37:14.050: INFO: created pod pod-service-account-mountsa-nomountspec
+Jun  3 20:37:14.050: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
+Jun  3 20:37:14.065: INFO: created pod pod-service-account-nomountsa-nomountspec
+Jun  3 20:37:14.065: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
+[AfterEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:37:14.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "svcaccounts-6878" for this suite.
+Jun  3 20:37:20.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:37:20.197: INFO: namespace svcaccounts-6878 deletion completed in 6.122020761s
+
+• [SLOW TEST:6.788 seconds]
+[sig-auth] ServiceAccounts
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
+  should allow opting out of API token automount  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
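+
+The block above exercises the full automount matrix: a pod-level `automountServiceAccountToken` always wins over the ServiceAccount's setting, which is why `pod-service-account-nomountsa-mountspec` still gets the token volume and `pod-service-account-defaultsa-nomountspec` does not. A minimal sketch of the pod-level opt-out, assuming v1.16-era `k8s.io/api` types (pod and container names here are illustrative, not the test's):
+
+```go
+package main
+
+import (
+    "fmt"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    automount := false // pod-level setting overrides the ServiceAccount default
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountspec"},
+        Spec: corev1.PodSpec{
+            ServiceAccountName:           "default",
+            AutomountServiceAccountToken: &automount,
+            Containers: []corev1.Container{{
+                Name:  "token-test",
+                Image: "busybox",
+            }},
+        },
+    }
+    fmt.Printf("token automount requested: %v\n", *pod.Spec.AutomountServiceAccountToken)
+}
+```
+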
+SSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
+  should include custom resource definition resources in discovery documents [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:37:20.197: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename custom-resource-definition
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should include custom resource definition resources in discovery documents [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: fetching the /apis discovery document
+STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
+STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
+STEP: fetching the /apis/apiextensions.k8s.io discovery document
+STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
+STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
+STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
+[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:37:20.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "custom-resource-definition-1460" for this suite.
+Jun  3 20:37:26.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:37:26.350: INFO: namespace custom-resource-definition-1460 deletion completed in 6.107127627s
+
+• [SLOW TEST:6.153 seconds]
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should include custom resource definition resources in discovery documents [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
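+
+The discovery walk above (/apis → group document → group/version document → resource list) can be reproduced with client-go's discovery client. A sketch under the assumption of a v1.16-era client-go, reusing the kubeconfig path from this run:
+
+```go
+package main
+
+import (
+    "fmt"
+
+    "k8s.io/client-go/discovery"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-005848369")
+    if err != nil {
+        panic(err)
+    }
+    dc, err := discovery.NewDiscoveryClientForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+    // Equivalent of fetching the /apis discovery document.
+    groups, err := dc.ServerGroups()
+    if err != nil {
+        panic(err)
+    }
+    for _, g := range groups.Groups {
+        if g.Name == "apiextensions.k8s.io" {
+            fmt.Println("found group, preferred version:", g.PreferredVersion.GroupVersion)
+        }
+    }
+    // Equivalent of fetching /apis/apiextensions.k8s.io/v1.
+    res, err := dc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
+    if err != nil {
+        panic(err)
+    }
+    for _, r := range res.APIResources {
+        if r.Name == "customresourcedefinitions" {
+            fmt.Println("found resource:", r.Name)
+        }
+    }
+}
+```
+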
+SSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:37:26.351: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
+[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:37:26.383: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:37:28.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-7872" for this suite.
+Jun  3 20:38:16.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:38:16.545: INFO: namespace pods-7872 deletion completed in 48.111222757s
+
+• [SLOW TEST:50.194 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
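+
+The test above retrieves container logs through the same `/log` subresource that `kubectl logs` uses, but upgrades the connection to a websocket inside the e2e framework. A hedged sketch of the plain HTTP streaming path against that same endpoint, assuming v1.16-era client-go (where `Stream` takes no context); the pod name is illustrative:
+
+```go
+package main
+
+import (
+    "fmt"
+    "io/ioutil"
+
+    corev1 "k8s.io/api/core/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-005848369")
+    if err != nil {
+        panic(err)
+    }
+    cs, err := kubernetes.NewForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+    // Same /log subresource the test hits; the conformance test upgrades the
+    // connection to a websocket, while this sketch streams it over HTTP.
+    req := cs.CoreV1().Pods("pods-7872").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{})
+    stream, err := req.Stream()
+    if err != nil {
+        panic(err)
+    }
+    defer stream.Close()
+    out, _ := ioutil.ReadAll(stream)
+    fmt.Println(string(out))
+}
+```
+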
+SSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:38:16.545: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+STEP: Creating a ReplicationController
+STEP: Ensuring resource quota status captures replication controller creation
+STEP: Deleting a ReplicationController
+STEP: Ensuring resource quota status released usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:38:27.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-551" for this suite.
+Jun  3 20:38:33.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:38:33.748: INFO: namespace resourcequota-551 deletion completed in 6.110397463s
+
+• [SLOW TEST:17.203 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
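+
+The quota lifecycle above (create quota → create RC → usage rises → delete RC → usage released) hinges on a hard limit keyed by object count. A minimal sketch of such a quota, assuming v1.16-era `k8s.io/api` types (the quota name is illustrative):
+
+```go
+package main
+
+import (
+    "fmt"
+
+    corev1 "k8s.io/api/core/v1"
+    "k8s.io/apimachinery/pkg/api/resource"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    // Quota counting replication controllers, as exercised by the test above.
+    rq := &corev1.ResourceQuota{
+        ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
+        Spec: corev1.ResourceQuotaSpec{
+            Hard: corev1.ResourceList{
+                corev1.ResourceReplicationControllers: resource.MustParse("1"),
+            },
+        },
+    }
+    fmt.Println(rq.Spec.Hard)
+}
+```
+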
+SSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:38:33.748: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
+[It] should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 20:38:33.793: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb392074-b85d-4a04-9f1d-16f64501ce86" in namespace "projected-1113" to be "success or failure"
+Jun  3 20:38:33.798: INFO: Pod "downwardapi-volume-bb392074-b85d-4a04-9f1d-16f64501ce86": Phase="Pending", Reason="", readiness=false. Elapsed: 5.388769ms
+Jun  3 20:38:35.802: INFO: Pod "downwardapi-volume-bb392074-b85d-4a04-9f1d-16f64501ce86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009186764s
+STEP: Saw pod success
+Jun  3 20:38:35.802: INFO: Pod "downwardapi-volume-bb392074-b85d-4a04-9f1d-16f64501ce86" satisfied condition "success or failure"
+Jun  3 20:38:35.805: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod downwardapi-volume-bb392074-b85d-4a04-9f1d-16f64501ce86 container client-container: 
+STEP: delete the pod
+Jun  3 20:38:35.835: INFO: Waiting for pod downwardapi-volume-bb392074-b85d-4a04-9f1d-16f64501ce86 to disappear
+Jun  3 20:38:35.838: INFO: Pod downwardapi-volume-bb392074-b85d-4a04-9f1d-16f64501ce86 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:38:35.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-1113" for this suite.
+Jun  3 20:38:41.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:38:41.952: INFO: namespace projected-1113 deletion completed in 6.110265871s
+
+• [SLOW TEST:8.204 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
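+
+The downward API volume under test projects the container's own memory request into a file, which the container then reads back for verification. A sketch of the relevant pod spec, assuming v1.16-era `k8s.io/api` types (paths, names, and the request size are illustrative):
+
+```go
+package main
+
+import (
+    corev1 "k8s.io/api/core/v1"
+    "k8s.io/apimachinery/pkg/api/resource"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
+        Spec: corev1.PodSpec{
+            Containers: []corev1.Container{{
+                Name:    "client-container",
+                Image:   "busybox",
+                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
+                Resources: corev1.ResourceRequirements{
+                    Requests: corev1.ResourceList{
+                        corev1.ResourceMemory: resource.MustParse("32Mi"),
+                    },
+                },
+                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
+            }},
+            Volumes: []corev1.Volume{{
+                Name: "podinfo",
+                VolumeSource: corev1.VolumeSource{
+                    Projected: &corev1.ProjectedVolumeSource{
+                        Sources: []corev1.VolumeProjection{{
+                            DownwardAPI: &corev1.DownwardAPIProjection{
+                                Items: []corev1.DownwardAPIVolumeFile{{
+                                    Path: "memory_request",
+                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
+                                        ContainerName: "client-container",
+                                        Resource:      "requests.memory",
+                                    },
+                                }},
+                            },
+                        }},
+                    },
+                },
+            }},
+        },
+    }
+    _ = pod
+}
+```
+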
+SSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:38:41.953: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
+STEP: Gathering metrics
+Jun  3 20:39:12.524: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:39:12.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+W0603 20:39:12.524167      25 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+STEP: Destroying namespace "gc-3833" for this suite.
+Jun  3 20:39:18.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:39:18.625: INFO: namespace gc-3833 deletion completed in 6.097604122s
+
+• [SLOW TEST:36.672 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
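+
+`deleteOptions.propagationPolicy=Orphan` removes the Deployment without cascading, so the ReplicaSet it created must survive the 30-second observation window above. A sketch of that delete call, assuming v1.16-era client-go where `Delete` still takes `(name, *metav1.DeleteOptions)` rather than a context (the Deployment name is illustrative):
+
+```go
+package main
+
+import (
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-005848369")
+    if err != nil {
+        panic(err)
+    }
+    cs, err := kubernetes.NewForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+    // Orphan propagation: the Deployment goes away but its ReplicaSet survives,
+    // which is exactly what the test waits 30 seconds to confirm.
+    orphan := metav1.DeletePropagationOrphan
+    err = cs.AppsV1().Deployments("gc-3833").Delete(
+        "simpletest-deployment", // illustrative name
+        &metav1.DeleteOptions{PropagationPolicy: &orphan},
+    )
+    if err != nil {
+        panic(err)
+    }
+}
+```
+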
+SSSSSSSSSSS
+------------------------------
+[k8s.io] KubeletManagedEtcHosts 
+  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] KubeletManagedEtcHosts
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:39:18.625: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Setting up the test
+STEP: Creating hostNetwork=false pod
+STEP: Creating hostNetwork=true pod
+STEP: Running the test
+STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
+Jun  3 20:39:22.694: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3669 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:39:22.694: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:39:22.830: INFO: Exec stderr: ""
+Jun  3 20:39:22.830: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3669 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:39:22.830: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:39:22.959: INFO: Exec stderr: ""
+Jun  3 20:39:22.959: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3669 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:39:22.959: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:39:23.090: INFO: Exec stderr: ""
+Jun  3 20:39:23.090: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3669 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:39:23.091: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:39:23.226: INFO: Exec stderr: ""
+STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
+Jun  3 20:39:23.226: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3669 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:39:23.226: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:39:23.354: INFO: Exec stderr: ""
+Jun  3 20:39:23.354: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3669 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:39:23.354: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:39:23.485: INFO: Exec stderr: ""
+STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
+Jun  3 20:39:23.485: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3669 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:39:23.486: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:39:23.621: INFO: Exec stderr: ""
+Jun  3 20:39:23.621: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3669 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:39:23.621: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:39:23.751: INFO: Exec stderr: ""
+Jun  3 20:39:23.751: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3669 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:39:23.751: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:39:23.889: INFO: Exec stderr: ""
+Jun  3 20:39:23.889: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3669 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 20:39:23.889: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 20:39:24.036: INFO: Exec stderr: ""
+[AfterEach] [k8s.io] KubeletManagedEtcHosts
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:39:24.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-kubelet-etc-hosts-3669" for this suite.
+Jun  3 20:40:08.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:40:08.146: INFO: namespace e2e-kubelet-etc-hosts-3669 deletion completed in 44.10508302s
+
+• [SLOW TEST:49.521 seconds]
+[k8s.io] KubeletManagedEtcHosts
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
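+
+The kubelet manages `/etc/hosts` only for pods on the cluster network that don't mount the file themselves; `hostNetwork: true` (or an explicit `/etc/hosts` volume mount, as in `busybox-3` above) leaves the node's own file in place. A sketch of the host-network variant, assuming v1.16-era `k8s.io/api` types:
+
+```go
+package main
+
+import (
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    // With HostNetwork set, the kubelet does not rewrite /etc/hosts,
+    // so the container sees the node's original file.
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
+        Spec: corev1.PodSpec{
+            HostNetwork: true,
+            Containers: []corev1.Container{{
+                Name:    "busybox-1",
+                Image:   "busybox",
+                Command: []string{"sleep", "3600"},
+            }},
+        },
+    }
+    _ = pod
+}
+```
+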
+SSSSSSSSSSS
+------------------------------
+[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
+  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Security Context
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:40:08.147: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename security-context-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Security Context
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
+[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:40:08.188: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-10450ebf-7825-49e5-9ef8-0dc9344ab0a2" in namespace "security-context-test-719" to be "success or failure"
+Jun  3 20:40:08.192: INFO: Pod "alpine-nnp-false-10450ebf-7825-49e5-9ef8-0dc9344ab0a2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.285995ms
+Jun  3 20:40:10.196: INFO: Pod "alpine-nnp-false-10450ebf-7825-49e5-9ef8-0dc9344ab0a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007498128s
+Jun  3 20:40:12.200: INFO: Pod "alpine-nnp-false-10450ebf-7825-49e5-9ef8-0dc9344ab0a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011535506s
+Jun  3 20:40:12.200: INFO: Pod "alpine-nnp-false-10450ebf-7825-49e5-9ef8-0dc9344ab0a2" satisfied condition "success or failure"
+[AfterEach] [k8s.io] Security Context
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:40:12.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "security-context-test-719" for this suite.
+Jun  3 20:40:18.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:40:18.323: INFO: namespace security-context-test-719 deletion completed in 6.101006899s
+
+• [SLOW TEST:10.177 seconds]
+[k8s.io] Security Context
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  when creating containers with AllowPrivilegeEscalation
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277
+    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
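+
+With `allowPrivilegeEscalation: false` the container runs with the no-new-privileges flag, so even setuid binaries cannot gain privileges; the test's alpine pod verifies this and exits successfully. A sketch of the container-level setting, assuming v1.16-era `k8s.io/api` types (the pod name is illustrative):
+
+```go
+package main
+
+import (
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    nnp := false // "no new privileges": setuid binaries cannot raise privileges
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false-demo"},
+        Spec: corev1.PodSpec{
+            RestartPolicy: corev1.RestartPolicyNever,
+            Containers: []corev1.Container{{
+                Name:  "main",
+                Image: "alpine",
+                SecurityContext: &corev1.SecurityContext{
+                    AllowPrivilegeEscalation: &nnp,
+                },
+            }},
+        },
+    }
+    _ = pod
+}
+```
+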
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should retry creating failed daemon pods [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:40:18.324: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should retry creating failed daemon pods [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a simple DaemonSet "daemon-set"
+STEP: Check that daemon pods launch on every node of the cluster.
+Jun  3 20:40:18.413: INFO: Number of nodes with available pods: 0
+Jun  3 20:40:18.413: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 20:40:19.421: INFO: Number of nodes with available pods: 0
+Jun  3 20:40:19.421: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 20:40:20.425: INFO: Number of nodes with available pods: 3
+Jun  3 20:40:20.425: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 20:40:21.421: INFO: Number of nodes with available pods: 5
+Jun  3 20:40:21.422: INFO: Number of running nodes: 5, number of available pods: 5
+STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
+Jun  3 20:40:21.443: INFO: Number of nodes with available pods: 4
+Jun  3 20:40:21.443: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 20:40:22.452: INFO: Number of nodes with available pods: 4
+Jun  3 20:40:22.452: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 20:40:23.452: INFO: Number of nodes with available pods: 5
+Jun  3 20:40:23.452: INFO: Number of running nodes: 5, number of available pods: 5
+STEP: Wait for the failed daemon pod to be completely deleted.
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5331, will wait for the garbage collector to delete the pods
+Jun  3 20:40:23.522: INFO: Deleting DaemonSet.extensions daemon-set took: 11.337672ms
+Jun  3 20:40:23.922: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.257483ms
+Jun  3 20:40:34.626: INFO: Number of nodes with available pods: 0
+Jun  3 20:40:34.626: INFO: Number of running nodes: 0, number of available pods: 0
+Jun  3 20:40:34.629: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5331/daemonsets","resourceVersion":"151644"},"items":null}
+
+Jun  3 20:40:34.631: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5331/pods","resourceVersion":"151644"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:40:34.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-5331" for this suite.
+Jun  3 20:40:40.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:40:40.766: INFO: namespace daemonsets-5331 deletion completed in 6.112764498s
+
+• [SLOW TEST:22.443 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should retry creating failed daemon pods [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
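+
+The DaemonSet controller treats a pod forced into phase `Failed` as gone and creates a replacement on that node, which is the revival the test waits for above. A minimal DaemonSet of the shape the test creates, assuming v1.16-era `k8s.io/api` types (the label key is illustrative):
+
+```go
+package main
+
+import (
+    appsv1 "k8s.io/api/apps/v1"
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    labels := map[string]string{"daemonset-name": "daemon-set"}
+    ds := &appsv1.DaemonSet{
+        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
+        Spec: appsv1.DaemonSetSpec{
+            Selector: &metav1.LabelSelector{MatchLabels: labels},
+            Template: corev1.PodTemplateSpec{
+                ObjectMeta: metav1.ObjectMeta{Labels: labels},
+                Spec: corev1.PodSpec{
+                    Containers: []corev1.Container{{
+                        Name:  "app",
+                        Image: "docker.io/library/httpd:2.4.38-alpine",
+                    }},
+                },
+            },
+        },
+    }
+    _ = ds
+}
+```
+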
+SS
+------------------------------
+[sig-cli] Kubectl client Kubectl run pod 
+  should create a pod from an image when restart is Never  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:40:40.766: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Kubectl run pod
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1668
+[It] should create a pod from an image when restart is Never  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: running the image docker.io/library/httpd:2.4.38-alpine
+Jun  3 20:40:40.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9629'
+Jun  3 20:40:40.928: INFO: stderr: ""
+Jun  3 20:40:40.928: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
+STEP: verifying the pod e2e-test-httpd-pod was created
+[AfterEach] Kubectl run pod
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1673
+Jun  3 20:40:40.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete pods e2e-test-httpd-pod --namespace=kubectl-9629'
+Jun  3 20:40:54.274: INFO: stderr: ""
+Jun  3 20:40:54.275: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:40:54.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-9629" for this suite.
+Jun  3 20:41:00.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:41:00.389: INFO: namespace kubectl-9629 deletion completed in 6.108214456s
+
+• [SLOW TEST:19.622 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl run pod
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1664
+    should create a pod from an image when restart is Never  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+S
+------------------------------
+[sig-api-machinery] Namespaces [Serial] 
+  should ensure that all pods are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:41:00.389: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename namespaces
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a test namespace
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Creating a pod in the namespace
+STEP: Waiting for the pod to have running status
+STEP: Deleting the namespace
+STEP: Waiting for the namespace to be removed.
+STEP: Recreating the namespace
+STEP: Verifying there are no pods in the namespace
+[AfterEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:41:29.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "namespaces-2885" for this suite.
+Jun  3 20:41:35.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:41:35.667: INFO: namespace namespaces-2885 deletion completed in 6.098033605s
+STEP: Destroying namespace "nsdeletetest-1555" for this suite.
+Jun  3 20:41:35.669: INFO: Namespace nsdeletetest-1555 was already deleted
+STEP: Destroying namespace "nsdeletetest-9510" for this suite.
+Jun  3 20:41:41.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:41:41.770: INFO: namespace nsdeletetest-9510 deletion completed in 6.100131382s
+
+• [SLOW TEST:41.381 seconds]
+[sig-api-machinery] Namespaces [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should ensure that all pods are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
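+
+Namespace deletion cascades to every pod inside it, and a namespace recreated under the same name must come back empty, which is what the test verifies above. A sketch of the delete-then-list sequence with v1.16-era client-go (names are illustrative; in practice you wait for the deletion to complete before recreating and listing, as the test does):
+
+```go
+package main
+
+import (
+    "fmt"
+
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-005848369")
+    if err != nil {
+        panic(err)
+    }
+    cs, err := kubernetes.NewForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+    // Deleting the namespace cascades to everything inside it.
+    if err := cs.CoreV1().Namespaces().Delete("nsdeletetest-1555", nil); err != nil {
+        panic(err)
+    }
+    // After the namespace is recreated, it must contain no pods.
+    pods, err := cs.CoreV1().Pods("nsdeletetest-1555").List(metav1.ListOptions{})
+    if err != nil {
+        panic(err)
+    }
+    fmt.Printf("pods remaining: %d\n", len(pods.Items))
+}
+```
+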
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl version 
+  should check is all data is printed  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:41:41.771: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[It] should check is all data is printed  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:41:41.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 version'
+Jun  3 20:41:41.916: INFO: stderr: ""
+Jun  3 20:41:41.916: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.8\", GitCommit:\"ec6eb119b81be488b030e849b9e64fda4caaf33c\", GitTreeState:\"clean\", BuildDate:\"2020-03-12T21:00:06Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.8\", GitCommit:\"ec6eb119b81be488b030e849b9e64fda4caaf33c\", GitTreeState:\"clean\", BuildDate:\"2020-03-12T20:52:22Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:41:41.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-75" for this suite.
+Jun  3 20:41:47.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:41:48.027: INFO: namespace kubectl-75 deletion completed in 6.106748742s
+
+• [SLOW TEST:6.257 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl version
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1380
+    should check is all data is printed  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:41:48.027: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0644 on node default medium
+Jun  3 20:41:48.071: INFO: Waiting up to 5m0s for pod "pod-f9401ee9-dee5-4827-89ba-ee5098c9f5d7" in namespace "emptydir-9672" to be "success or failure"
+Jun  3 20:41:48.079: INFO: Pod "pod-f9401ee9-dee5-4827-89ba-ee5098c9f5d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.003909ms
+Jun  3 20:41:50.084: INFO: Pod "pod-f9401ee9-dee5-4827-89ba-ee5098c9f5d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012770757s
+STEP: Saw pod success
+Jun  3 20:41:50.084: INFO: Pod "pod-f9401ee9-dee5-4827-89ba-ee5098c9f5d7" satisfied condition "success or failure"
+Jun  3 20:41:50.087: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-f9401ee9-dee5-4827-89ba-ee5098c9f5d7 container test-container: 
+STEP: delete the pod
+Jun  3 20:41:50.117: INFO: Waiting for pod pod-f9401ee9-dee5-4827-89ba-ee5098c9f5d7 to disappear
+Jun  3 20:41:50.120: INFO: Pod pod-f9401ee9-dee5-4827-89ba-ee5098c9f5d7 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:41:50.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-9672" for this suite.
+Jun  3 20:41:56.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:41:56.229: INFO: namespace emptydir-9672 deletion completed in 6.105898296s
+
+• [SLOW TEST:8.202 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSS
+------------------------------
+[k8s.io] [sig-node] Events 
+  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] [sig-node] Events
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:41:56.229: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename events
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: retrieving the pod
+Jun  3 20:42:00.284: INFO: &Pod{ObjectMeta:{send-events-29bb060b-f6e3-42c1-904e-b5e2d62bb43b  events-2839 /api/v1/namespaces/events-2839/pods/send-events-29bb060b-f6e3-42c1-904e-b5e2d62bb43b c47e28af-5db8-4568-b6ca-e413743e49a6 151951 0 2020-06-03 20:41:56 +0000 UTC   map[name:foo time:262626366] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ddcsb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ddcsb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.6,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ddcsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:41:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:41:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:41:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:41:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:172.20.2.65,StartTime:2020-06-03 20:41:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 20:41:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.6,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727,ContainerID:docker://698db39e56f4987e02255b9687d0ce14be6fda02add6628623d9c29401647e2c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.2.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+
+STEP: checking for scheduler event about the pod
+Jun  3 20:42:02.288: INFO: Saw scheduler event for our pod.
+STEP: checking for kubelet event about the pod
+Jun  3 20:42:04.293: INFO: Saw kubelet event for our pod.
+STEP: deleting the pod
+[AfterEach] [k8s.io] [sig-node] Events
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:42:04.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "events-2839" for this suite.
+Jun  3 20:42:48.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:42:48.405: INFO: namespace events-2839 deletion completed in 44.101853921s
+
+• [SLOW TEST:52.176 seconds]
+[k8s.io] [sig-node] Events
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
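+
+The pod dump above is followed by two event checks: the scheduler and the kubelet each emit events that reference the pod as `involvedObject`, differing in their source component. A sketch of the equivalent event query with v1.16-era client-go (the pod name in the field selector is illustrative):
+
+```go
+package main
+
+import (
+    "fmt"
+
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-005848369")
+    if err != nil {
+        panic(err)
+    }
+    cs, err := kubernetes.NewForConfig(cfg)
+    if err != nil {
+        panic(err)
+    }
+    // Scheduler and kubelet events both name the pod as involvedObject;
+    // they differ in the "source" component, which is what the test checks.
+    events, err := cs.CoreV1().Events("events-2839").List(metav1.ListOptions{
+        FieldSelector: "involvedObject.kind=Pod,involvedObject.name=send-events-demo",
+    })
+    if err != nil {
+        panic(err)
+    }
+    for _, e := range events.Items {
+        fmt.Printf("%s from %s: %s\n", e.Reason, e.Source.Component, e.Message)
+    }
+}
+```
+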
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl run rc 
+  should create an rc from an image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:42:48.405: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Kubectl run rc
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1439
+[It] should create an rc from an image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: running the image docker.io/library/httpd:2.4.38-alpine
+Jun  3 20:42:48.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1004'
+Jun  3 20:42:48.547: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun  3 20:42:48.547: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
+STEP: verifying the rc e2e-test-httpd-rc was created
+STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
+STEP: confirm that you can get logs from an rc
+Jun  3 20:42:48.559: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-k6w64]
+Jun  3 20:42:48.559: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-k6w64" in namespace "kubectl-1004" to be "running and ready"
+Jun  3 20:42:48.565: INFO: Pod "e2e-test-httpd-rc-k6w64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039769ms
+Jun  3 20:42:50.570: INFO: Pod "e2e-test-httpd-rc-k6w64": Phase="Running", Reason="", readiness=true. Elapsed: 2.010867583s
+Jun  3 20:42:50.570: INFO: Pod "e2e-test-httpd-rc-k6w64" satisfied condition "running and ready"
+Jun  3 20:42:50.570: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-k6w64]
+Jun  3 20:42:50.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 logs rc/e2e-test-httpd-rc --namespace=kubectl-1004'
+Jun  3 20:42:50.694: INFO: stderr: ""
+Jun  3 20:42:50.694: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.20.2.66. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.20.2.66. Set the 'ServerName' directive globally to suppress this message\n[Wed Jun 03 20:42:49.628786 2020] [mpm_event:notice] [pid 1:tid 139916860058472] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Jun 03 20:42:49.628842 2020] [core:notice] [pid 1:tid 139916860058472] AH00094: Command line: 'httpd -D FOREGROUND'\n"
+[AfterEach] Kubectl run rc
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
+Jun  3 20:42:50.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete rc e2e-test-httpd-rc --namespace=kubectl-1004'
+Jun  3 20:42:50.800: INFO: stderr: ""
+Jun  3 20:42:50.800: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:42:50.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-1004" for this suite.
+Jun  3 20:42:56.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:42:56.916: INFO: namespace kubectl-1004 deletion completed in 6.110937701s
+
+• [SLOW TEST:8.511 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl run rc
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1435
+    should create an rc from an image  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:42:56.916: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name configmap-test-volume-map-5149fb39-0de7-4102-a1e7-8ee7ef087e5f
+STEP: Creating a pod to test consume configMaps
+Jun  3 20:42:56.966: INFO: Waiting up to 5m0s for pod "pod-configmaps-a03d742a-7ace-4ecd-a9a4-b6ce6c28e50d" in namespace "configmap-1767" to be "success or failure"
+Jun  3 20:42:56.971: INFO: Pod "pod-configmaps-a03d742a-7ace-4ecd-a9a4-b6ce6c28e50d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235039ms
+Jun  3 20:42:58.974: INFO: Pod "pod-configmaps-a03d742a-7ace-4ecd-a9a4-b6ce6c28e50d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007385878s
+STEP: Saw pod success
+Jun  3 20:42:58.974: INFO: Pod "pod-configmaps-a03d742a-7ace-4ecd-a9a4-b6ce6c28e50d" satisfied condition "success or failure"
+Jun  3 20:42:58.977: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-configmaps-a03d742a-7ace-4ecd-a9a4-b6ce6c28e50d container configmap-volume-test: 
+STEP: delete the pod
+Jun  3 20:42:58.998: INFO: Waiting for pod pod-configmaps-a03d742a-7ace-4ecd-a9a4-b6ce6c28e50d to disappear
+Jun  3 20:42:59.001: INFO: Pod pod-configmaps-a03d742a-7ace-4ecd-a9a4-b6ce6c28e50d no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:42:59.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-1767" for this suite.
+Jun  3 20:43:05.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:43:05.119: INFO: namespace configmap-1767 deletion completed in 6.113146778s
+
+• [SLOW TEST:8.203 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
+  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
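+
+The "with mappings as non-root" variant above combines a ConfigMap volume whose `items` remap keys to new paths with a pod that runs under a non-root UID. A sketch of both pieces, assuming v1.16-era `k8s.io/api` types (names, key, path, and UID are illustrative):
+
+```go
+package main
+
+import (
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func main() {
+    uid := int64(1000) // non-root UID
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
+        Spec: corev1.PodSpec{
+            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
+            Containers: []corev1.Container{{
+                Name:         "configmap-volume-test",
+                Image:        "busybox",
+                Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
+                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
+            }},
+            Volumes: []corev1.Volume{{
+                Name: "configmap-volume",
+                VolumeSource: corev1.VolumeSource{
+                    ConfigMap: &corev1.ConfigMapVolumeSource{
+                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
+                        // Mapping: only data-2 is projected, under a remapped path.
+                        Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
+                    },
+                },
+            }},
+        },
+    }
+    _ = pod
+}
+```
+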
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:43:05.119: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name cm-test-opt-del-7bd6ed52-f768-4675-bb9d-477ac0991b78
+STEP: Creating configMap with name cm-test-opt-upd-b9b46c81-68f5-4d45-9e45-44e6c5a2379b
+STEP: Creating the pod
+STEP: Deleting configmap cm-test-opt-del-7bd6ed52-f768-4675-bb9d-477ac0991b78
+STEP: Updating configmap cm-test-opt-upd-b9b46c81-68f5-4d45-9e45-44e6c5a2379b
+STEP: Creating configMap with name cm-test-opt-create-879971cf-7697-4450-b5c6-f41369532a11
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:44:13.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-7950" for this suite.
+Jun  3 20:44:25.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:44:25.722: INFO: namespace projected-7950 deletion completed in 12.102591655s
+
+• [SLOW TEST:80.603 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and capture the life of a service. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:44:25.722: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and capture the life of a service. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+STEP: Creating a Service
+STEP: Ensuring resource quota status captures service creation
+STEP: Deleting a Service
+STEP: Ensuring resource quota status released usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:44:36.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-286" for this suite.
+Jun  3 20:44:42.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:44:42.926: INFO: namespace resourcequota-286 deletion completed in 6.104515663s
+
+• [SLOW TEST:17.204 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and capture the life of a service. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+S
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:44:42.926: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0666 on tmpfs
+Jun  3 20:44:42.967: INFO: Waiting up to 5m0s for pod "pod-f5d9ea80-1527-4cea-97aa-c3d67856f2dc" in namespace "emptydir-4151" to be "success or failure"
+Jun  3 20:44:42.973: INFO: Pod "pod-f5d9ea80-1527-4cea-97aa-c3d67856f2dc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.604035ms
+Jun  3 20:44:44.977: INFO: Pod "pod-f5d9ea80-1527-4cea-97aa-c3d67856f2dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010014027s
+STEP: Saw pod success
+Jun  3 20:44:44.977: INFO: Pod "pod-f5d9ea80-1527-4cea-97aa-c3d67856f2dc" satisfied condition "success or failure"
+Jun  3 20:44:44.979: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-f5d9ea80-1527-4cea-97aa-c3d67856f2dc container test-container: <nil>
+STEP: delete the pod
+Jun  3 20:44:45.022: INFO: Waiting for pod pod-f5d9ea80-1527-4cea-97aa-c3d67856f2dc to disappear
+Jun  3 20:44:45.025: INFO: Pod pod-f5d9ea80-1527-4cea-97aa-c3d67856f2dc no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:44:45.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-4151" for this suite.
+Jun  3 20:44:51.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:44:51.138: INFO: namespace emptydir-4151 deletion completed in 6.107170715s
+
+• [SLOW TEST:8.211 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:44:51.138: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
+[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating pod busybox-7321d17b-4cc9-4434-8d05-27fe3acb73ae in namespace container-probe-7311
+Jun  3 20:44:53.187: INFO: Started pod busybox-7321d17b-4cc9-4434-8d05-27fe3acb73ae in namespace container-probe-7311
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun  3 20:44:53.190: INFO: Initial restart count of pod busybox-7321d17b-4cc9-4434-8d05-27fe3acb73ae is 0
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:48:53.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-7311" for this suite.
+Jun  3 20:48:59.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:48:59.859: INFO: namespace container-probe-7311 deletion completed in 6.110532327s
+
+• [SLOW TEST:248.721 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:48:59.860: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
+[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 20:48:59.898: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9618b6c-79da-4245-ba4f-7736a8961573" in namespace "downward-api-3792" to be "success or failure"
+Jun  3 20:48:59.904: INFO: Pod "downwardapi-volume-d9618b6c-79da-4245-ba4f-7736a8961573": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41817ms
+Jun  3 20:49:01.910: INFO: Pod "downwardapi-volume-d9618b6c-79da-4245-ba4f-7736a8961573": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012161713s
+STEP: Saw pod success
+Jun  3 20:49:01.910: INFO: Pod "downwardapi-volume-d9618b6c-79da-4245-ba4f-7736a8961573" satisfied condition "success or failure"
+Jun  3 20:49:01.913: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod downwardapi-volume-d9618b6c-79da-4245-ba4f-7736a8961573 container client-container: <nil>
+STEP: delete the pod
+Jun  3 20:49:01.947: INFO: Waiting for pod downwardapi-volume-d9618b6c-79da-4245-ba4f-7736a8961573 to disappear
+Jun  3 20:49:01.950: INFO: Pod downwardapi-volume-d9618b6c-79da-4245-ba4f-7736a8961573 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:49:01.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-3792" for this suite.
+Jun  3 20:49:07.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:49:08.071: INFO: namespace downward-api-3792 deletion completed in 6.116521949s
+
+• [SLOW TEST:8.212 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
+  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+S
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should unconditionally reject operations on fail closed webhook [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:49:08.071: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 20:49:08.398: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jun  3 20:49:10.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726814148, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726814148, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726814148, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726814148, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 20:49:13.429: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should unconditionally reject operations on fail closed webhook [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
+STEP: create a namespace for the webhook
+STEP: create a configmap should be unconditionally rejected by the webhook
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:49:13.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-9463" for this suite.
+Jun  3 20:49:19.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:49:19.642: INFO: namespace webhook-9463 deletion completed in 6.102022796s
+STEP: Destroying namespace "webhook-9463-markers" for this suite.
+Jun  3 20:49:25.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:49:25.755: INFO: namespace webhook-9463-markers deletion completed in 6.113106077s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:17.700 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should unconditionally reject operations on fail closed webhook [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
+  should be submitted and removed [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:49:25.772: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Delete Grace Period
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
+[It] should be submitted and removed [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating the pod
+STEP: setting up selector
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+Jun  3 20:49:27.868: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-005848369 proxy -p 0'
+STEP: deleting the pod gracefully
+STEP: verifying the kubelet observed the termination notice
+Jun  3 20:49:37.967: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
+[AfterEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:49:37.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-3296" for this suite.
+Jun  3 20:49:43.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:49:44.090: INFO: namespace pods-3296 deletion completed in 6.115981717s
+
+• [SLOW TEST:18.319 seconds]
+[k8s.io] [sig-node] Pods Extended
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  [k8s.io] Delete Grace Period
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+    should be submitted and removed [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute prestop http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:49:44.090: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute prestop http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the pod with lifecycle hook
+STEP: delete the pod with lifecycle hook
+Jun  3 20:49:48.166: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jun  3 20:49:48.168: INFO: Pod pod-with-prestop-http-hook still exists
+Jun  3 20:49:50.169: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jun  3 20:49:50.173: INFO: Pod pod-with-prestop-http-hook still exists
+Jun  3 20:49:52.169: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jun  3 20:49:52.173: INFO: Pod pod-with-prestop-http-hook still exists
+Jun  3 20:49:54.169: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jun  3 20:49:54.173: INFO: Pod pod-with-prestop-http-hook still exists
+Jun  3 20:49:56.169: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jun  3 20:49:56.174: INFO: Pod pod-with-prestop-http-hook no longer exists
+STEP: check prestop hook
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:49:56.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-2476" for this suite.
+Jun  3 20:50:08.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:50:08.324: INFO: namespace container-lifecycle-hook-2476 deletion completed in 12.1359604s
+
+• [SLOW TEST:24.233 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
+    should execute prestop http hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  works for CRD with validation schema [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:50:08.324: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] works for CRD with validation schema [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:50:08.363: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: client-side validation (kubectl create and apply) allows request with known and required properties
+Jun  3 20:50:12.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-2046 create -f -'
+Jun  3 20:50:12.589: INFO: stderr: ""
+Jun  3 20:50:12.589: INFO: stdout: "e2e-test-crd-publish-openapi-5686-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
+Jun  3 20:50:12.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-2046 delete e2e-test-crd-publish-openapi-5686-crds test-foo'
+Jun  3 20:50:12.742: INFO: stderr: ""
+Jun  3 20:50:12.742: INFO: stdout: "e2e-test-crd-publish-openapi-5686-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
+Jun  3 20:50:12.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-2046 apply -f -'
+Jun  3 20:50:13.019: INFO: stderr: ""
+Jun  3 20:50:13.019: INFO: stdout: "e2e-test-crd-publish-openapi-5686-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
+Jun  3 20:50:13.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-2046 delete e2e-test-crd-publish-openapi-5686-crds test-foo'
+Jun  3 20:50:13.135: INFO: stderr: ""
+Jun  3 20:50:13.135: INFO: stdout: "e2e-test-crd-publish-openapi-5686-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
+STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
+Jun  3 20:50:13.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-2046 create -f -'
+Jun  3 20:50:13.313: INFO: rc: 1
+Jun  3 20:50:13.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-2046 apply -f -'
+Jun  3 20:50:13.555: INFO: rc: 1
+STEP: client-side validation (kubectl create and apply) rejects request without required properties
+Jun  3 20:50:13.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-2046 create -f -'
+Jun  3 20:50:13.819: INFO: rc: 1
+Jun  3 20:50:13.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-2046 apply -f -'
+Jun  3 20:50:14.019: INFO: rc: 1
+STEP: kubectl explain works to explain CR properties
+Jun  3 20:50:14.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 explain e2e-test-crd-publish-openapi-5686-crds'
+Jun  3 20:50:14.273: INFO: stderr: ""
+Jun  3 20:50:14.273: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5686-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
+STEP: kubectl explain works to explain CR properties recursively
+Jun  3 20:50:14.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 explain e2e-test-crd-publish-openapi-5686-crds.metadata'
+Jun  3 20:50:14.514: INFO: stderr: ""
+Jun  3 20:50:14.514: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5686-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
+Jun  3 20:50:14.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 explain e2e-test-crd-publish-openapi-5686-crds.spec'
+Jun  3 20:50:14.711: INFO: stderr: ""
+Jun  3 20:50:14.711: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5686-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
+Jun  3 20:50:14.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 explain e2e-test-crd-publish-openapi-5686-crds.spec.bars'
+Jun  3 20:50:14.906: INFO: stderr: ""
+Jun  3 20:50:14.906: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5686-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
+STEP: kubectl explain works to return error when explain is called on property that doesn't exist
+Jun  3 20:50:14.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 explain e2e-test-crd-publish-openapi-5686-crds.spec.bars2'
+Jun  3 20:50:15.080: INFO: rc: 1
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:50:18.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-2046" for this suite.
+Jun  3 20:50:24.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:50:24.751: INFO: namespace crd-publish-openapi-2046 deletion completed in 6.099280233s
+
+• [SLOW TEST:16.427 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  works for CRD with validation schema [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:50:24.751: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
+[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 20:50:24.788: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa9acce9-0bbf-45ba-948e-ea66f7878647" in namespace "downward-api-4589" to be "success or failure"
+Jun  3 20:50:24.791: INFO: Pod "downwardapi-volume-fa9acce9-0bbf-45ba-948e-ea66f7878647": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319042ms
+Jun  3 20:50:26.795: INFO: Pod "downwardapi-volume-fa9acce9-0bbf-45ba-948e-ea66f7878647": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006436469s
+STEP: Saw pod success
+Jun  3 20:50:26.795: INFO: Pod "downwardapi-volume-fa9acce9-0bbf-45ba-948e-ea66f7878647" satisfied condition "success or failure"
+Jun  3 20:50:26.798: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-fa9acce9-0bbf-45ba-948e-ea66f7878647 container client-container: <nil>
+STEP: delete the pod
+Jun  3 20:50:26.830: INFO: Waiting for pod downwardapi-volume-fa9acce9-0bbf-45ba-948e-ea66f7878647 to disappear
+Jun  3 20:50:26.833: INFO: Pod downwardapi-volume-fa9acce9-0bbf-45ba-948e-ea66f7878647 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:50:26.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-4589" for this suite.
+Jun  3 20:50:32.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:50:32.943: INFO: namespace downward-api-4589 deletion completed in 6.10490094s
+
+• [SLOW TEST:8.192 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
+  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+S
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:50:32.943: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
+[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating the pod
+Jun  3 20:50:32.977: INFO: PodSpec: initContainers in spec.initContainers
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:50:35.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-1092" for this suite.
+Jun  3 20:50:41.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:50:41.430: INFO: namespace init-container-1092 deletion completed in 6.10532669s
+
+• [SLOW TEST:8.487 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:50:41.430: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name configmap-test-volume-b05caf09-e248-4960-b24c-7676de12ac51
+STEP: Creating a pod to test consume configMaps
+Jun  3 20:50:41.476: INFO: Waiting up to 5m0s for pod "pod-configmaps-22980b31-96f3-48db-86e0-f83420cab721" in namespace "configmap-2375" to be "success or failure"
+Jun  3 20:50:41.478: INFO: Pod "pod-configmaps-22980b31-96f3-48db-86e0-f83420cab721": Phase="Pending", Reason="", readiness=false. Elapsed: 2.636846ms
+Jun  3 20:50:43.482: INFO: Pod "pod-configmaps-22980b31-96f3-48db-86e0-f83420cab721": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006050693s
+Jun  3 20:50:45.485: INFO: Pod "pod-configmaps-22980b31-96f3-48db-86e0-f83420cab721": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009728063s
+STEP: Saw pod success
+Jun  3 20:50:45.485: INFO: Pod "pod-configmaps-22980b31-96f3-48db-86e0-f83420cab721" satisfied condition "success or failure"
+Jun  3 20:50:45.488: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-configmaps-22980b31-96f3-48db-86e0-f83420cab721 container configmap-volume-test: <nil>
+STEP: delete the pod
+Jun  3 20:50:45.508: INFO: Waiting for pod pod-configmaps-22980b31-96f3-48db-86e0-f83420cab721 to disappear
+Jun  3 20:50:45.512: INFO: Pod pod-configmaps-22980b31-96f3-48db-86e0-f83420cab721 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:50:45.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-2375" for this suite.
+Jun  3 20:50:51.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:50:51.624: INFO: namespace configmap-2375 deletion completed in 6.107213615s
+
+• [SLOW TEST:10.194 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl run --rm job 
+  should create a job from an image, then delete the job  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:50:51.625: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[It] should create a job from an image, then delete the job  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: executing a command with run --rm and attach with stdin
+Jun  3 20:50:51.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=kubectl-6075 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
+Jun  3 20:50:53.972: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
+Jun  3 20:50:53.972: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
+STEP: verifying the job e2e-test-rm-busybox-job was deleted
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:50:55.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-6075" for this suite.
+Jun  3 20:51:05.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:51:06.083: INFO: namespace kubectl-6075 deletion completed in 10.097230366s
+
+• [SLOW TEST:14.458 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl run --rm job
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1751
+    should create a job from an image, then delete the job  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:51:06.083: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
+[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating pod liveness-a7deb5e4-ec32-405c-80e0-e9a3eb4d3f2a in namespace container-probe-358
+Jun  3 20:51:10.144: INFO: Started pod liveness-a7deb5e4-ec32-405c-80e0-e9a3eb4d3f2a in namespace container-probe-358
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun  3 20:51:10.147: INFO: Initial restart count of pod liveness-a7deb5e4-ec32-405c-80e0-e9a3eb4d3f2a is 0
+Jun  3 20:51:34.207: INFO: Restart count of pod container-probe-358/liveness-a7deb5e4-ec32-405c-80e0-e9a3eb4d3f2a is now 1 (24.059881455s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:51:34.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-358" for this suite.
+Jun  3 20:51:40.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:51:40.323: INFO: namespace container-probe-358 deletion completed in 6.10030187s
+
+• [SLOW TEST:34.240 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:51:40.323: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name configmap-test-volume-map-58650096-157b-4277-b26e-17b2321bd464
+STEP: Creating a pod to test consume configMaps
+Jun  3 20:51:40.369: INFO: Waiting up to 5m0s for pod "pod-configmaps-9d2fc9f2-1014-4e09-b40a-d9cf5cab2875" in namespace "configmap-1955" to be "success or failure"
+Jun  3 20:51:40.372: INFO: Pod "pod-configmaps-9d2fc9f2-1014-4e09-b40a-d9cf5cab2875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.817886ms
+Jun  3 20:51:42.377: INFO: Pod "pod-configmaps-9d2fc9f2-1014-4e09-b40a-d9cf5cab2875": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007459198s
+STEP: Saw pod success
+Jun  3 20:51:42.377: INFO: Pod "pod-configmaps-9d2fc9f2-1014-4e09-b40a-d9cf5cab2875" satisfied condition "success or failure"
+Jun  3 20:51:42.381: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-configmaps-9d2fc9f2-1014-4e09-b40a-d9cf5cab2875 container configmap-volume-test: 
+STEP: delete the pod
+Jun  3 20:51:42.404: INFO: Waiting for pod pod-configmaps-9d2fc9f2-1014-4e09-b40a-d9cf5cab2875 to disappear
+Jun  3 20:51:42.407: INFO: Pod pod-configmaps-9d2fc9f2-1014-4e09-b40a-d9cf5cab2875 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:51:42.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-1955" for this suite.
+Jun  3 20:51:48.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:51:48.525: INFO: namespace configmap-1955 deletion completed in 6.113385552s
+
+• [SLOW TEST:8.202 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
+  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
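+
+For reference, the "mappings and Item mode" behavior verified above maps a
+ConfigMap key to a custom path with an explicit per-file mode. A minimal
+sketch (names, image, and mode are assumptions):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: demo-config
+data:
+  data-1: value-1
+---
+# Hypothetical consumer pod: mounts the key at a mapped path with mode 0400.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-mapped-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: busybox
+    command: ["cat", "/etc/config/path/to/data-1"]
+    volumeMounts:
+    - name: config
+      mountPath: /etc/config
+  volumes:
+  - name: config
+    configMap:
+      name: demo-config
+      items:
+      - key: data-1
+        path: path/to/data-1
+        mode: 0400          # the per-item file mode the test asserts on
+```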
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
+  should be able to convert a non homogeneous list of CRs [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:51:48.526: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename crd-webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
+STEP: Setting up server cert
+STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
+STEP: Deploying the custom resource conversion webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 20:51:48.983: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
+Jun  3 20:51:50.993: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726814308, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726814308, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726814309, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726814308, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 20:51:54.015: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
+[It] should be able to convert a non homogeneous list of CRs [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:51:54.019: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Creating a v1 custom resource
+STEP: Create a v2 custom resource
+STEP: List CRs in v1
+STEP: List CRs in v2
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:51:55.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-webhook-3201" for this suite.
+Jun  3 20:52:01.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:52:01.346: INFO: namespace crd-webhook-3201 deletion completed in 6.105563032s
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
+
+• [SLOW TEST:12.835 seconds]
+[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to convert a non homogeneous list of CRs [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
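+
+The conversion test above relies on a CRD that declares a webhook conversion
+strategy, so the apiserver can return a list mixing v1- and v2-stored objects
+in a single requested version. A hedged sketch of such a CRD (the group,
+names, and webhook service are assumptions; the referenced service must
+actually serve the ConversionReview protocol, as the deployed webhook pod in
+the log does):
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: examples.stable.example.com
+spec:
+  group: stable.example.com
+  scope: Namespaced
+  names:
+    plural: examples
+    singular: example
+    kind: Example
+  versions:
+  - name: v1
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema:
+        type: object
+        x-kubernetes-preserve-unknown-fields: true
+  - name: v2
+    served: true
+    storage: false
+    schema:
+      openAPIV3Schema:
+        type: object
+        x-kubernetes-preserve-unknown-fields: true
+  conversion:
+    strategy: Webhook
+    webhook:
+      conversionReviewVersions: ["v1"]
+      clientConfig:
+        service:
+          namespace: default                   # assumed
+          name: example-conversion-webhook     # assumed
+          path: /crdconvert
+```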
+SSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
+  listing custom resource definition objects works  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:52:01.361: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename custom-resource-definition
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] listing custom resource definition objects works  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:52:01.397: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:52:06.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "custom-resource-definition-9050" for this suite.
+Jun  3 20:52:12.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:52:12.962: INFO: namespace custom-resource-definition-9050 deletion completed in 6.115676581s
+
+• [SLOW TEST:11.601 seconds]
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  Simple CustomResourceDefinition
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42
+    listing custom resource definition objects works  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
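+
+Listing operates on the cluster-scoped customresourcedefinitions resource
+itself: any registered CRD, such as the minimal one sketched below (all names
+are assumptions), appears in the output of `kubectl get crd`:
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+  name: widgets.demo.example.com
+spec:
+  group: demo.example.com
+  scope: Namespaced
+  names:
+    plural: widgets
+    singular: widget
+    kind: Widget
+  versions:
+  - name: v1
+    served: true
+    storage: true
+    schema:
+      openAPIV3Schema:
+        type: object
+        x-kubernetes-preserve-unknown-fields: true
+```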
+SSSSSSSSSS
+------------------------------
+[sig-apps] ReplicaSet 
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:52:12.963: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename replicaset
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:52:12.996: INFO: Creating ReplicaSet my-hostname-basic-eefa7a98-5b8b-4ed1-849c-8c1a6cf36f3f
+Jun  3 20:52:13.004: INFO: Pod name my-hostname-basic-eefa7a98-5b8b-4ed1-849c-8c1a6cf36f3f: Found 0 pods out of 1
+Jun  3 20:52:18.009: INFO: Pod name my-hostname-basic-eefa7a98-5b8b-4ed1-849c-8c1a6cf36f3f: Found 1 pods out of 1
+Jun  3 20:52:18.009: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-eefa7a98-5b8b-4ed1-849c-8c1a6cf36f3f" is running
+Jun  3 20:52:18.014: INFO: Pod "my-hostname-basic-eefa7a98-5b8b-4ed1-849c-8c1a6cf36f3f-274lw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 20:52:13 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 20:52:14 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 20:52:14 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 20:52:13 +0000 UTC Reason: Message:}])
+Jun  3 20:52:18.014: INFO: Trying to dial the pod
+Jun  3 20:52:23.030: INFO: Controller my-hostname-basic-eefa7a98-5b8b-4ed1-849c-8c1a6cf36f3f: Got expected result from replica 1 [my-hostname-basic-eefa7a98-5b8b-4ed1-849c-8c1a6cf36f3f-274lw]: "my-hostname-basic-eefa7a98-5b8b-4ed1-849c-8c1a6cf36f3f-274lw", 1 of 1 required successes so far
+[AfterEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:52:23.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replicaset-8781" for this suite.
+Jun  3 20:52:29.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:52:29.152: INFO: namespace replicaset-8781 deletion completed in 6.117716869s
+
+• [SLOW TEST:16.190 seconds]
+[sig-apps] ReplicaSet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
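+
+The ReplicaSet under test serves each replica's hostname over HTTP, which is
+what the "Trying to dial the pod" step checks. A rough equivalent (the image
+and tag are assumptions; the e2e suite uses its own hostname-serving test
+image):
+
+```yaml
+apiVersion: apps/v1
+kind: ReplicaSet
+metadata:
+  name: hostname-demo
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: hostname-demo
+  template:
+    metadata:
+      labels:
+        app: hostname-demo
+    spec:
+      containers:
+      - name: serve-hostname
+        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.6   # assumed
+        args: ["serve-hostname"]
+        ports:
+        - containerPort: 9376   # responds with the pod's hostname
+```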
+SSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:52:29.152: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating a watch on configmaps
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: closing the watch once it receives two notifications
+Jun  3 20:52:29.205: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8279 /api/v1/namespaces/watch-8279/configmaps/e2e-watch-test-watch-closed 90cc76bf-e0db-4d2f-89a7-ddf475827bab 153784 0 2020-06-03 20:52:29 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Jun  3 20:52:29.205: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8279 /api/v1/namespaces/watch-8279/configmaps/e2e-watch-test-watch-closed 90cc76bf-e0db-4d2f-89a7-ddf475827bab 153785 0 2020-06-03 20:52:29 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+STEP: modifying the configmap a second time, while the watch is closed
+STEP: creating a new watch on configmaps from the last resource version observed by the first watch
+STEP: deleting the configmap
+STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
+Jun  3 20:52:29.226: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8279 /api/v1/namespaces/watch-8279/configmaps/e2e-watch-test-watch-closed 90cc76bf-e0db-4d2f-89a7-ddf475827bab 153786 0 2020-06-03 20:52:29 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Jun  3 20:52:29.226: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8279 /api/v1/namespaces/watch-8279/configmaps/e2e-watch-test-watch-closed 90cc76bf-e0db-4d2f-89a7-ddf475827bab 153787 0 2020-06-03 20:52:29 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:52:29.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-8279" for this suite.
+Jun  3 20:52:35.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:52:35.329: INFO: namespace watch-8279 deletion completed in 6.097283793s
+
+• [SLOW TEST:6.176 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
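+
+The key property here is that a watch can be resumed from the last
+resourceVersion the previous watch delivered, so the MODIFIED and DELETED
+events above are replayed rather than lost. A sketch of the same flow with a
+plain ConfigMap (the object name is an assumption; the raw-API commands in
+the comments are illustrative):
+
+```yaml
+# 1. Create, then note the current resourceVersion:
+#      kubectl get configmap demo-watch -o jsonpath='{.metadata.resourceVersion}'
+# 2. After a watch closes, resume from that version via the raw API:
+#      kubectl get --raw '/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=<RV>'
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: demo-watch
+data:
+  mutation: "0"
+```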
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:52:35.329: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
+Jun  3 20:52:35.362: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jun  3 20:52:35.373: INFO: Waiting for terminating namespaces to be deleted...
+Jun  3 20:52:35.376: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-master-0 before test
+Jun  3 20:52:35.391: INFO: kube-proxy-ds-qrgfl from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.391: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: kube-flannel-ds-hznhg from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.391: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: kube-apiserver-karbon-certification-ff5a6a-k8s-master-0 from kube-system started at 2020-06-02 22:11:48 +0000 UTC (3 container statuses recorded)
+Jun  3 20:52:35.391: INFO: 	Container kube-apiserver ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: 	Container kube-controller-manager ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: 	Container kube-scheduler ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: fluent-bit-mb264 from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.391: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: node-exporter-hkj7p from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.391: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: csi-node-ntnx-plugin-pdc8c from ntnx-system started at 2020-06-03 01:26:50 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.391: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-58wws from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.391: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 20:52:35.391: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-master-1 before test
+Jun  3 20:52:35.407: INFO: kube-flannel-ds-zdlj6 from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.407: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: csi-node-ntnx-plugin-6cg44 from ntnx-system started at 2020-06-03 01:27:02 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.407: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-sz7h8 from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.407: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: kube-apiserver-karbon-certification-ff5a6a-k8s-master-1 from kube-system started at 2020-06-02 22:13:08 +0000 UTC (3 container statuses recorded)
+Jun  3 20:52:35.407: INFO: 	Container kube-apiserver ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: 	Container kube-controller-manager ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: 	Container kube-scheduler ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: kube-proxy-ds-8hv5j from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.407: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: node-exporter-dwrsb from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.407: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: kube-dns-5c64dc6c6b-ls68z from kube-system started at 2020-06-02 22:16:18 +0000 UTC (3 container statuses recorded)
+Jun  3 20:52:35.407: INFO: 	Container dnsmasq ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: 	Container kubedns ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: 	Container sidecar ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: fluent-bit-zcqwz from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.407: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 20:52:35.407: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-0 before test
+Jun  3 20:52:35.426: INFO: kube-flannel-ds-qnlzb from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.427: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: fluent-bit-gb59k from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.427: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: kubernetes-events-printer-5c6d46dfdb-zcvlt from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.427: INFO: 	Container kubernetes-events-printer ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: elasticsearch-logging-0 from ntnx-system started at 2020-06-02 22:17:12 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.427: INFO: 	Container elasticsearch-logging ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: node-exporter-5q9qc from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.427: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-7btt6 from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.427: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: kube-proxy-ds-qt528 from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.427: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: csi-node-ntnx-plugin-zbw4j from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.427: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 20:52:35.427: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-1 before test
+Jun  3 20:52:35.437: INFO: csi-node-ntnx-plugin-vqstr from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: node-exporter-qwbtg from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: alertmanager-main-1 from ntnx-system started at 2020-06-02 22:20:10 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container alertmanager ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: 	Container config-reloader ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-szp8f from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: kube-proxy-ds-fgf9r from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: kibana-logging-54b7d845-tqgh5 from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container kibana-logging ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: 	Container nginxhttp ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: sonobuoy from sonobuoy started at 2020-06-03 20:08:28 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: csi-attacher-ntnx-plugin-0 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container csi-attacher ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: 	Container ntnx-csi-plugin ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: kube-flannel-ds-wvrqx from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: fluent-bit-bqqbz from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: prometheus-k8s-1 from ntnx-system started at 2020-06-02 22:20:26 +0000 UTC (3 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container prometheus ready: true, restart count 1
+Jun  3 20:52:35.437: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: sonobuoy-e2e-job-5435c8b63156474a from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.437: INFO: 	Container e2e ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:52:35.437: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-2 before test
+Jun  3 20:52:35.457: INFO: node-exporter-hs75m from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: alertmanager-main-0 from ntnx-system started at 2020-06-02 22:20:10 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container alertmanager ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: 	Container config-reloader ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: elasticsearch-curator-cron-1591142460-cj4wj from ntnx-system started at 2020-06-03 00:01:05 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container curator ready: false, restart count 0
+Jun  3 20:52:35.457: INFO: fluent-bit-zgt4s from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: prometheus-operator-58f86dddd6-fkbmk from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container prometheus-operator ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: kube-flannel-ds-q4sbl from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: csi-provisioner-ntnx-plugin-0 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container csi-provisioner ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: 	Container ntnx-csi-plugin ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: kube-state-metrics-5d45657948-qkv6t from ntnx-system started at 2020-06-02 22:19:59 +0000 UTC (4 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container addon-resizer ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: 	Container kube-rbac-proxy-main ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: 	Container kube-rbac-proxy-self ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: 	Container kube-state-metrics ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-p8d7c from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: kube-proxy-ds-gn6cv from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: csi-node-ntnx-plugin-wnbs7 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: prometheus-k8s-0 from ntnx-system started at 2020-06-02 22:20:28 +0000 UTC (3 container statuses recorded)
+Jun  3 20:52:35.457: INFO: 	Container prometheus ready: true, restart count 1
+Jun  3 20:52:35.457: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
+Jun  3 20:52:35.457: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
+[It] validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Trying to schedule Pod with nonempty NodeSelector.
+STEP: Considering event: 
+Type = [Warning], Name = [restricted-pod.161523efa3050c56], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
+STEP: Considering event: 
+Type = [Warning], Name = [restricted-pod.161523efa3613611], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:52:36.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-9558" for this suite.
+Jun  3 20:52:42.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:52:42.599: INFO: namespace sched-pred-9558 deletion completed in 6.105984231s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
+
+• [SLOW TEST:7.271 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
+  validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
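+
+The scheduler behavior asserted above is easy to trigger by hand: a pod whose
+nodeSelector matches no node label stays Pending and accumulates
+FailedScheduling events like the ones logged. A minimal sketch (the label key
+and image are assumptions):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: restricted-demo
+spec:
+  nodeSelector:
+    example.com/nonexistent: "true"   # deliberately matches no node
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:3.1
+```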
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:52:42.599: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name projected-configmap-test-volume-b27a1698-34a7-47b2-a4a6-d3d120d7181d
+STEP: Creating a pod to test consume configMaps
+Jun  3 20:52:42.643: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-085d1523-f4e0-4a15-84b9-6c0afa612c98" in namespace "projected-8160" to be "success or failure"
+Jun  3 20:52:42.647: INFO: Pod "pod-projected-configmaps-085d1523-f4e0-4a15-84b9-6c0afa612c98": Phase="Pending", Reason="", readiness=false. Elapsed: 3.799525ms
+Jun  3 20:52:44.652: INFO: Pod "pod-projected-configmaps-085d1523-f4e0-4a15-84b9-6c0afa612c98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008719199s
+STEP: Saw pod success
+Jun  3 20:52:44.652: INFO: Pod "pod-projected-configmaps-085d1523-f4e0-4a15-84b9-6c0afa612c98" satisfied condition "success or failure"
+Jun  3 20:52:44.656: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod pod-projected-configmaps-085d1523-f4e0-4a15-84b9-6c0afa612c98 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun  3 20:52:44.677: INFO: Waiting for pod pod-projected-configmaps-085d1523-f4e0-4a15-84b9-6c0afa612c98 to disappear
+Jun  3 20:52:44.680: INFO: Pod pod-projected-configmaps-085d1523-f4e0-4a15-84b9-6c0afa612c98 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:52:44.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-8160" for this suite.
+Jun  3 20:52:50.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:52:50.795: INFO: namespace projected-8160 deletion completed in 6.111609832s
+
+• [SLOW TEST:8.196 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
+  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
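+
+The projected-volume variant differs from the plain configMap volume test in
+that the files are consumed through a `projected` source and read as a
+non-root UID. A hedged sketch (names, UID, and image are assumptions):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: projected-demo
+data:
+  data-1: value-1
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-nonroot-demo
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1000          # read the projected files as a non-root user
+  containers:
+  - name: test
+    image: busybox
+    command: ["cat", "/etc/projected/data-1"]
+    volumeMounts:
+    - name: config
+      mountPath: /etc/projected
+  volumes:
+  - name: config
+    projected:
+      sources:
+      - configMap:
+          name: projected-demo
+```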
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Services 
+  should serve a basic endpoint from pods  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:52:50.795: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
+[It] should serve a basic endpoint from pods  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating service endpoint-test2 in namespace services-1909
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1909 to expose endpoints map[]
+Jun  3 20:52:50.844: INFO: successfully validated that service endpoint-test2 in namespace services-1909 exposes endpoints map[] (4.742328ms elapsed)
+STEP: Creating pod pod1 in namespace services-1909
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1909 to expose endpoints map[pod1:[80]]
+Jun  3 20:52:52.878: INFO: successfully validated that service endpoint-test2 in namespace services-1909 exposes endpoints map[pod1:[80]] (2.022633922s elapsed)
+STEP: Creating pod pod2 in namespace services-1909
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1909 to expose endpoints map[pod1:[80] pod2:[80]]
+Jun  3 20:52:54.919: INFO: successfully validated that service endpoint-test2 in namespace services-1909 exposes endpoints map[pod1:[80] pod2:[80]] (2.03486148s elapsed)
+STEP: Deleting pod pod1 in namespace services-1909
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1909 to expose endpoints map[pod2:[80]]
+Jun  3 20:52:54.937: INFO: successfully validated that service endpoint-test2 in namespace services-1909 exposes endpoints map[pod2:[80]] (9.563581ms elapsed)
+STEP: Deleting pod pod2 in namespace services-1909
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1909 to expose endpoints map[]
+Jun  3 20:52:55.956: INFO: successfully validated that service endpoint-test2 in namespace services-1909 exposes endpoints map[] (1.009013984s elapsed)
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:52:55.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-1909" for this suite.
+Jun  3 20:53:08.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:53:08.106: INFO: namespace services-1909 deletion completed in 12.118567599s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
+
+• [SLOW TEST:17.310 seconds]
+[sig-network] Services
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should serve a basic endpoint from pods  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
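+
+What the test checks is that the service's Endpoints object tracks matching
+pods as they are created and deleted. A minimal equivalent (names and image
+are assumptions); after applying, `kubectl get endpoints endpoint-demo`
+should list pod1's IP, and deleting pod1 empties the endpoints again:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: endpoint-demo
+spec:
+  selector:
+    app: endpoint-demo
+  ports:
+  - port: 80
+    targetPort: 80
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod1
+  labels:
+    app: endpoint-demo     # matching label adds this pod to the endpoints
+spec:
+  containers:
+  - name: web
+    image: nginx
+    ports:
+    - containerPort: 80
+```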
+SSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:53:08.106: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
+[It] should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 20:53:08.149: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66710201-55ce-4006-9ae3-20d01803a221" in namespace "downward-api-9873" to be "success or failure"
+Jun  3 20:53:08.153: INFO: Pod "downwardapi-volume-66710201-55ce-4006-9ae3-20d01803a221": Phase="Pending", Reason="", readiness=false. Elapsed: 3.250319ms
+Jun  3 20:53:10.157: INFO: Pod "downwardapi-volume-66710201-55ce-4006-9ae3-20d01803a221": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007448408s
+STEP: Saw pod success
+Jun  3 20:53:10.157: INFO: Pod "downwardapi-volume-66710201-55ce-4006-9ae3-20d01803a221" satisfied condition "success or failure"
+Jun  3 20:53:10.159: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-2 pod downwardapi-volume-66710201-55ce-4006-9ae3-20d01803a221 container client-container: 
+STEP: delete the pod
+Jun  3 20:53:10.181: INFO: Waiting for pod downwardapi-volume-66710201-55ce-4006-9ae3-20d01803a221 to disappear
+Jun  3 20:53:10.184: INFO: Pod downwardapi-volume-66710201-55ce-4006-9ae3-20d01803a221 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:53:10.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-9873" for this suite.
+Jun  3 20:53:16.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:53:16.306: INFO: namespace downward-api-9873 deletion completed in 6.118068846s
+
+• [SLOW TEST:8.200 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
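+
+Here the downward API volume exposes the container's own CPU limit as a file.
+A minimal sketch (pod name, image, and divisor are assumptions):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-cpu-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["cat", "/etc/podinfo/cpu_limit"]
+    resources:
+      limits:
+        cpu: 500m
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: cpu_limit
+        resourceFieldRef:
+          containerName: client-container
+          resource: limits.cpu
+          divisor: 1m        # the file then reads "500"
+```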
+SSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:53:16.306: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
+[It] should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating the pod
+Jun  3 20:53:18.935: INFO: Successfully updated pod "annotationupdateecdedbb6-18dc-4d0c-a051-e3ace3260e46"
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:53:20.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-6054" for this suite.
+Jun  3 20:53:32.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:53:33.070: INFO: namespace projected-6054 deletion completed in 12.113037454s
+
+• [SLOW TEST:16.764 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
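+
+Unlike environment variables, downward API files are refreshed by the kubelet
+when pod metadata changes, which is what the "Successfully updated pod" step
+above verifies. A hedged sketch (names and image are assumptions):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: annotation-demo
+  annotations:
+    build: "one"
+spec:
+  containers:
+  - name: client
+    image: busybox
+    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: annotations
+            fieldRef:
+              fieldPath: metadata.annotations
+```
+
+Running `kubectl annotate pod annotation-demo build=two --overwrite` should
+change the file's contents within the kubelet's sync period.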
+SSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:53:33.070: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
+[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating pod busybox-e720bc83-ed13-464d-b7b7-64413a2b10b2 in namespace container-probe-1130
+Jun  3 20:53:35.126: INFO: Started pod busybox-e720bc83-ed13-464d-b7b7-64413a2b10b2 in namespace container-probe-1130
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun  3 20:53:35.130: INFO: Initial restart count of pod busybox-e720bc83-ed13-464d-b7b7-64413a2b10b2 is 0
+Jun  3 20:54:23.244: INFO: Restart count of pod container-probe-1130/busybox-e720bc83-ed13-464d-b7b7-64413a2b10b2 is now 1 (48.113701012s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:54:23.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-1130" for this suite.
+Jun  3 20:54:29.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:54:29.360: INFO: namespace container-probe-1130 deletion completed in 6.09983734s
+
+• [SLOW TEST:56.290 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
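+
+The exec-probe variant restarts the container once `cat /tmp/health` starts
+failing, hence the ~48s gap before the restart above. A minimal sketch
+mirroring the probed command (timings and image are assumptions):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-exec-demo
+spec:
+  containers:
+  - name: liveness
+    image: busybox
+    args:
+    - /bin/sh
+    - -c
+    - touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600
+    livenessProbe:
+      exec:
+        command: ["cat", "/tmp/health"]   # fails once the file is removed
+      initialDelaySeconds: 5
+      periodSeconds: 5
+```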
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:54:29.361: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
+Jun  3 20:54:29.397: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jun  3 20:54:29.410: INFO: Waiting for terminating namespaces to be deleted...
+Jun  3 20:54:29.413: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-master-0 before test
+Jun  3 20:54:29.427: INFO: kube-proxy-ds-qrgfl from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.427: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: kube-flannel-ds-hznhg from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.427: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-58wws from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.427: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: kube-apiserver-karbon-certification-ff5a6a-k8s-master-0 from kube-system started at 2020-06-02 22:11:48 +0000 UTC (3 container statuses recorded)
+Jun  3 20:54:29.427: INFO: 	Container kube-apiserver ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: 	Container kube-controller-manager ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: 	Container kube-scheduler ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: fluent-bit-mb264 from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.427: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: node-exporter-hkj7p from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.427: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: csi-node-ntnx-plugin-pdc8c from ntnx-system started at 2020-06-03 01:26:50 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.427: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 20:54:29.427: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-master-1 before test
+Jun  3 20:54:29.445: INFO: kube-proxy-ds-8hv5j from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.445: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: kube-flannel-ds-zdlj6 from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.445: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: csi-node-ntnx-plugin-6cg44 from ntnx-system started at 2020-06-03 01:27:02 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.445: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-sz7h8 from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.445: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: kube-apiserver-karbon-certification-ff5a6a-k8s-master-1 from kube-system started at 2020-06-02 22:13:08 +0000 UTC (3 container statuses recorded)
+Jun  3 20:54:29.445: INFO: 	Container kube-apiserver ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: 	Container kube-controller-manager ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: 	Container kube-scheduler ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: fluent-bit-zcqwz from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.445: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: node-exporter-dwrsb from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.445: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: kube-dns-5c64dc6c6b-ls68z from kube-system started at 2020-06-02 22:16:18 +0000 UTC (3 container statuses recorded)
+Jun  3 20:54:29.445: INFO: 	Container dnsmasq ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: 	Container kubedns ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: 	Container sidecar ready: true, restart count 0
+Jun  3 20:54:29.445: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-0 before test
+Jun  3 20:54:29.460: INFO: kube-proxy-ds-qt528 from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.460: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: csi-node-ntnx-plugin-zbw4j from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.460: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: kube-flannel-ds-qnlzb from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.460: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: fluent-bit-gb59k from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.460: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: kubernetes-events-printer-5c6d46dfdb-zcvlt from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.460: INFO: 	Container kubernetes-events-printer ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: elasticsearch-logging-0 from ntnx-system started at 2020-06-02 22:17:12 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.460: INFO: 	Container elasticsearch-logging ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: node-exporter-5q9qc from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.460: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-7btt6 from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.460: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 20:54:29.460: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-1 before test
+Jun  3 20:54:29.470: INFO: csi-node-ntnx-plugin-vqstr from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: node-exporter-qwbtg from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: alertmanager-main-1 from ntnx-system started at 2020-06-02 22:20:10 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container alertmanager ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: 	Container config-reloader ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-szp8f from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: kube-proxy-ds-fgf9r from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: kibana-logging-54b7d845-tqgh5 from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container kibana-logging ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: 	Container nginxhttp ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: sonobuoy from sonobuoy started at 2020-06-03 20:08:28 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: csi-attacher-ntnx-plugin-0 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container csi-attacher ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: 	Container ntnx-csi-plugin ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: kube-flannel-ds-wvrqx from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: fluent-bit-bqqbz from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: prometheus-k8s-1 from ntnx-system started at 2020-06-02 22:20:26 +0000 UTC (3 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container prometheus ready: true, restart count 1
+Jun  3 20:54:29.470: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: sonobuoy-e2e-job-5435c8b63156474a from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.470: INFO: 	Container e2e ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:54:29.470: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-2 before test
+Jun  3 20:54:29.478: INFO: fluent-bit-zgt4s from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: prometheus-operator-58f86dddd6-fkbmk from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container prometheus-operator ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-p8d7c from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: kube-flannel-ds-q4sbl from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: csi-provisioner-ntnx-plugin-0 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container csi-provisioner ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: 	Container ntnx-csi-plugin ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: kube-state-metrics-5d45657948-qkv6t from ntnx-system started at 2020-06-02 22:19:59 +0000 UTC (4 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container addon-resizer ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: 	Container kube-rbac-proxy-main ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: 	Container kube-rbac-proxy-self ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: 	Container kube-state-metrics ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: kube-proxy-ds-gn6cv from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: csi-node-ntnx-plugin-wnbs7 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: prometheus-k8s-0 from ntnx-system started at 2020-06-02 22:20:28 +0000 UTC (3 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container prometheus ready: true, restart count 1
+Jun  3 20:54:29.478: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: node-exporter-hs75m from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: alertmanager-main-0 from ntnx-system started at 2020-06-02 22:20:10 +0000 UTC (2 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container alertmanager ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: 	Container config-reloader ready: true, restart count 0
+Jun  3 20:54:29.478: INFO: elasticsearch-curator-cron-1591142460-cj4wj from ntnx-system started at 2020-06-03 00:01:05 +0000 UTC (1 container statuses recorded)
+Jun  3 20:54:29.478: INFO: 	Container curator ready: false, restart count 0
+[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Trying to launch a pod without a label to get a node which can launch it.
+STEP: Explicitly delete pod here to free the resource it takes.
+STEP: Trying to apply a random label on the found node.
+STEP: verifying the node has the label kubernetes.io/e2e-77de89a0-43c6-466a-8cb7-1e84818f44d2 95
+STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
+STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
+STEP: removing the label kubernetes.io/e2e-77de89a0-43c6-466a-8cb7-1e84818f44d2 off the node karbon-certification-ff5a6a-k8s-worker-1
+STEP: verifying the node doesn't have the label kubernetes.io/e2e-77de89a0-43c6-466a-8cb7-1e84818f44d2
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:59:37.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-3075" for this suite.
+Jun  3 20:59:45.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:59:45.691: INFO: namespace sched-pred-3075 deletion completed in 8.106091841s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
+
+• [SLOW TEST:316.331 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
+  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
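The hostPort test above turns on one scheduling rule: a hostPort bound to hostIP 0.0.0.0 claims that port on every address of the node, so a second pod requesting the same port and protocol on 127.0.0.1 cannot be scheduled onto the same node. A minimal hand-run sketch of the same conflict follows; the pod names, the nginx image, and the node value are illustrative assumptions, not artifacts of this run.

```
# Sketch only: pin both pods to one node via its kubernetes.io/hostname label
# (substitute a real node name from `kubectl get nodes`).
NODE=karbon-certification-ff5a6a-k8s-worker-1

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod4          # hypothetical name, mirrors pod4 in the test
spec:
  nodeSelector:
    kubernetes.io/hostname: ${NODE}
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 54322
      hostIP: 0.0.0.0          # the test passes the empty string, which means the same
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod5          # hypothetical name, mirrors pod5 in the test
spec:
  nodeSelector:
    kubernetes.io/hostname: ${NODE}
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 54322          # same port and protocol as above
      hostIP: 127.0.0.1        # still conflicts: 0.0.0.0 already covers this address
      protocol: TCP
EOF

# On a conformant cluster the second pod stays Pending:
kubectl get pod hostport-pod5
```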
+SSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Services 
+  should be able to change the type from ExternalName to NodePort [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:59:45.692: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
+[It] should be able to change the type from ExternalName to NodePort [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating a service externalname-service with the type=ExternalName in namespace services-9763
+STEP: changing the ExternalName service to type=NodePort
+STEP: creating replication controller externalname-service in namespace services-9763
+I0603 20:59:45.760637      25 runners.go:184] Created replication controller with name: externalname-service, namespace: services-9763, replica count: 2
+Jun  3 20:59:48.811: INFO: Creating new exec pod
+I0603 20:59:48.811195      25 runners.go:184] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+Jun  3 20:59:51.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-9763 execpodg86z6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
+Jun  3 20:59:52.070: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
+Jun  3 20:59:52.070: INFO: stdout: ""
+Jun  3 20:59:52.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-9763 execpodg86z6 -- /bin/sh -x -c nc -zv -t -w 2 172.19.42.116 80'
+Jun  3 20:59:52.301: INFO: stderr: "+ nc -zv -t -w 2 172.19.42.116 80\nConnection to 172.19.42.116 80 port [tcp/http] succeeded!\n"
+Jun  3 20:59:52.301: INFO: stdout: ""
+Jun  3 20:59:52.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-9763 execpodg86z6 -- /bin/sh -x -c nc -zv -t -w 2 10.45.43.24 32575'
+Jun  3 20:59:52.530: INFO: stderr: "+ nc -zv -t -w 2 10.45.43.24 32575\nConnection to 10.45.43.24 32575 port [tcp/32575] succeeded!\n"
+Jun  3 20:59:52.530: INFO: stdout: ""
+Jun  3 20:59:52.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-9763 execpodg86z6 -- /bin/sh -x -c nc -zv -t -w 2 10.45.43.10 32575'
+Jun  3 20:59:52.780: INFO: stderr: "+ nc -zv -t -w 2 10.45.43.10 32575\nConnection to 10.45.43.10 32575 port [tcp/32575] succeeded!\n"
+Jun  3 20:59:52.780: INFO: stdout: ""
+Jun  3 20:59:52.780: INFO: Cleaning up the ExternalName to NodePort test service
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 20:59:52.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-9763" for this suite.
+Jun  3 20:59:58.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 20:59:58.917: INFO: namespace services-9763 deletion completed in 6.100211334s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
+
+• [SLOW TEST:13.225 seconds]
+[sig-network] Services
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should be able to change the type from ExternalName to NodePort [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
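The Services test above exercises a mutation, not a fresh create: an ExternalName service is converted in place to type NodePort and must then answer on its DNS name, its ClusterIP, and each node IP at the allocated node port, which is what the nc probes verify. A hand-run sketch of the same flow follows; the namespace, the nginx backend, the example.com target, and the execpod name are assumptions for illustration.

```
# Sketch only: create an ExternalName service, then flip it to NodePort.
kubectl create namespace svc-sketch
kubectl create service externalname externalname-service \
  --external-name example.com -n svc-sketch

# Back the service with real endpoints so the port probes can succeed.
kubectl create deployment externalname-service --image=nginx -n svc-sketch

# Changing the type requires clearing externalName and supplying ports/selector.
kubectl patch service externalname-service -n svc-sketch --type=merge -p \
  '{"spec":{"type":"NodePort","externalName":null,"selector":{"app":"externalname-service"},"ports":[{"port":80,"targetPort":80,"protocol":"TCP"}]}}'

# Verify reachability the way the test does, from a pod that has nc available
# (execpod here stands in for the exec pod the framework creates):
kubectl exec -n svc-sketch execpod -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'
NODEPORT=$(kubectl get svc externalname-service -n svc-sketch \
  -o jsonpath='{.spec.ports[0].nodePort}')
echo "then probe each node IP on port ${NODEPORT} the same way"
```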
+SSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should support proportional scaling [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 20:59:58.917: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
+[It] deployment should support proportional scaling [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 20:59:58.954: INFO: Creating deployment "webserver-deployment"
+Jun  3 20:59:58.961: INFO: Waiting for observed generation 1
+Jun  3 21:00:00.976: INFO: Waiting for all required pods to come up
+Jun  3 21:00:00.980: INFO: Pod name httpd: Found 10 pods out of 10
+STEP: ensuring each pod is running
+Jun  3 21:00:02.989: INFO: Waiting for deployment "webserver-deployment" to complete
+Jun  3 21:00:02.997: INFO: Updating deployment "webserver-deployment" with a non-existent image
+Jun  3 21:00:03.010: INFO: Updating deployment webserver-deployment
+Jun  3 21:00:03.010: INFO: Waiting for observed generation 2
+Jun  3 21:00:05.020: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
+Jun  3 21:00:05.024: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
+Jun  3 21:00:05.028: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
+Jun  3 21:00:05.041: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
+Jun  3 21:00:05.041: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
+Jun  3 21:00:05.045: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
+Jun  3 21:00:05.052: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
+Jun  3 21:00:05.052: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
+Jun  3 21:00:05.064: INFO: Updating deployment webserver-deployment
+Jun  3 21:00:05.064: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
+Jun  3 21:00:05.076: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
+Jun  3 21:00:07.132: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
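The replica counts asserted above fall directly out of the rollout parameters recorded in the Deployment dump below (MaxSurge:3, MaxUnavailable:2). Scaling from 10 to 30 mid-rollout allows 30+3=33 pods in total, and the 20 additional replicas are split across the two ReplicaSets roughly in proportion to their current sizes of 8 and 5, which yields the 20 and 13 being verified. A back-of-the-envelope check (simplified rounding; the controller itself uses a largest-remainder style allocation):

```
# Sketch of the proportional-scaling arithmetic for this run.
awk 'BEGIN {
  allowed = 30 + 3             # desired replicas + maxSurge
  old = 8; new = 5             # current ReplicaSet sizes (sum 13)
  extra = allowed - (old + new)
  printf "old RS -> %d\n", old + int(extra * old / (old + new) + 0.5)   # 20
  printf "new RS -> %d\n", new + int(extra * new / (old + new) + 0.5)   # 13
}'
```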
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
+Jun  3 21:00:07.141: INFO: Deployment "webserver-deployment":
+&Deployment{ObjectMeta:{webserver-deployment  deployment-9693 /apis/apps/v1/namespaces/deployment-9693/deployments/webserver-deployment 7ae8afa5-4654-4976-b646-8aa966d79aa2 155193 3 2020-06-03 20:59:58 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034252e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-03 21:00:05 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-06-03 21:00:05 +0000 UTC,LastTransitionTime:2020-06-03 20:59:58 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
+
+Jun  3 21:00:07.148: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
+&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-9693 /apis/apps/v1/namespaces/deployment-9693/replicasets/webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 155190 3 2020-06-03 21:00:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 7ae8afa5-4654-4976-b646-8aa966d79aa2 0xc0034257f7 0xc0034257f8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003425868  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jun  3 21:00:07.148: INFO: All old ReplicaSets of Deployment "webserver-deployment":
+Jun  3 21:00:07.148: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-9693 /apis/apps/v1/namespaces/deployment-9693/replicasets/webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 155284 3 2020-06-03 20:59:58 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 7ae8afa5-4654-4976-b646-8aa966d79aa2 0xc003425737 0xc003425738}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003425798  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:9,AvailableReplicas:9,Conditions:[]ReplicaSetCondition{},},}
+Jun  3 21:00:07.158: INFO: Pod "webserver-deployment-595b5b9587-44mvq" is available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-44mvq webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-44mvq bcea220e-987d-4690-9e7d-68b3ff2ea52a 155004 0 2020-06-03 20:59:58 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408d077 0xc00408d078}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.21,PodIP:172.20.3.104,StartTime:2020-06-03 20:59:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:00:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c074f0451add1348f224aa797dda03c0ea78c73c95bdf41c1dba37f42d11311e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.3.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.159: INFO: Pod "webserver-deployment-595b5b9587-49m8x" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-49m8x webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-49m8x dd1b3291-1db2-4643-9d41-9445471eb455 155196 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408d1e7 0xc00408d1e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.24,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.159: INFO: Pod "webserver-deployment-595b5b9587-67k79" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-67k79 webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-67k79 6f05c6b1-58c8-4d60-b527-805bbff96d0b 155132 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408d337 0xc00408d338}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.24,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.159: INFO: Pod "webserver-deployment-595b5b9587-6pztc" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6pztc webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-6pztc 1cf895c0-4ade-4335-92b6-8a4c17f6ef3e 155191 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408d487 0xc00408d488}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.21,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.160: INFO: Pod "webserver-deployment-595b5b9587-7kbch" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7kbch webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-7kbch 152efd6d-57d6-4c5b-92d0-bf0e49f98785 155197 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408d5d7 0xc00408d5d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.21,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.160: INFO: Pod "webserver-deployment-595b5b9587-c6fj9" is available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-c6fj9 webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-c6fj9 0b21c3e2-5543-4439-83f0-81f61082b536 154998 0 2020-06-03 20:59:58 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408d727 0xc00408d728}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:172.20.2.78,StartTime:2020-06-03 20:59:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:00:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ce7fd7f3f3fcef677ec073c2ecbd247913e975de1812cbd00faa76d1cf27bac3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.2.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.160: INFO: Pod "webserver-deployment-595b5b9587-fbzwm" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fbzwm webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-fbzwm aa279240-30a9-4c16-9aca-628d8fe8c592 155194 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408d897 0xc00408d898}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.14,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.160: INFO: Pod "webserver-deployment-595b5b9587-fwpgp" is available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fwpgp webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-fwpgp 30046010-5e0b-4f9c-985e-0ca242f918ef 154990 0 2020-06-03 20:59:58 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408d9e7 0xc00408d9e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.14,PodIP:172.20.4.16,StartTime:2020-06-03 20:59:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:00:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e3237904b5c565d485e56331e8f1a322b72b3af2ea1954ed368a2da150707695,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.4.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.160: INFO: Pod "webserver-deployment-595b5b9587-k6lkj" is available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-k6lkj webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-k6lkj 0bdd248f-f8f8-4fdd-83a6-053b329e3db3 155021 0 2020-06-03 20:59:58 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408db57 0xc00408db58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.10,PodIP:172.20.1.12,StartTime:2020-06-03 20:59:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:00:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4be8feae98b59f1de99c229c9774485b92f81760a3555e537655e4f7cf7bbaa3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.1.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.160: INFO: Pod "webserver-deployment-595b5b9587-kgqnf" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kgqnf webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-kgqnf 4f626bb4-2db2-42a4-b2c1-93e65a54d972 155178 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408dcc7 0xc00408dcc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.10,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.161: INFO: Pod "webserver-deployment-595b5b9587-m487h" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-m487h webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-m487h 9237c674-de55-449e-8750-4a3a5de8cedd 155198 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408de17 0xc00408de18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.14,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.161: INFO: Pod "webserver-deployment-595b5b9587-m5mlt" is available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-m5mlt webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-m5mlt e141544f-0a22-492f-a9ac-d594ce98a063 155010 0 2020-06-03 20:59:59 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc00408df67 0xc00408df68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.14,PodIP:172.20.4.17,StartTime:2020-06-03 20:59:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:00:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7afddf80976f10458e4b788138f3c73010d6da7fc44fdc13046d646856e3a98f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.4.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.161: INFO: Pod "webserver-deployment-595b5b9587-nb5bd" is available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nb5bd webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-nb5bd 5363e5f8-f02e-4a70-b0ff-4fc55227c71f 155024 0 2020-06-03 20:59:58 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc000f420d7 0xc000f420d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.21,PodIP:172.20.3.103,StartTime:2020-06-03 20:59:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:00:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a1ae44c095095acc3b9109aa285e08580cebd6eb26cf5b59306269576992ffe9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.3.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.161: INFO: Pod "webserver-deployment-595b5b9587-q9cff" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-q9cff webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-q9cff 79dfe1e7-0180-4968-a0b3-9429d62697ec 155186 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc000f42247 0xc000f42248}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.24,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.161: INFO: Pod "webserver-deployment-595b5b9587-rfncv" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rfncv webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-rfncv ec9d6af1-f5b7-47a8-b991-4bd3a733a085 155162 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc000f42397 0xc000f42398}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.162: INFO: Pod "webserver-deployment-595b5b9587-sfcvh" is available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sfcvh webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-sfcvh 78e67d00-86c9-47d8-919a-364a960b28f9 155026 0 2020-06-03 20:59:58 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc000f42527 0xc000f42528}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:172.20.2.79,StartTime:2020-06-03 20:59:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:00:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ac0b05a0c08108c3a5428574a50379a2280cd22dcfe99a77b4ba7ae94bf7300d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.2.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.162: INFO: Pod "webserver-deployment-595b5b9587-svbpw" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-svbpw webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-svbpw 11f5526d-4363-445b-bfe4-24a29311ab1d 155185 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc000f426a7 0xc000f426a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.162: INFO: Pod "webserver-deployment-595b5b9587-tjwlw" is available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tjwlw webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-tjwlw 8f49f93d-fb12-41cb-b584-d51e4689889e 155283 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc000f427f7 0xc000f427f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:07 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.10,PodIP:172.20.1.15,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:00:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6dbda49f460be773d7fdb2f5a7c34908fdbba3ac05408fc4cb7ffad973bd0b3d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.1.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.163: INFO: Pod "webserver-deployment-595b5b9587-vwzjk" is not available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vwzjk webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-vwzjk 35a86b4c-eda3-4247-bcee-5b4d5b56f75e 155201 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc000f42967 0xc000f42968}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.21,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.163: INFO: Pod "webserver-deployment-595b5b9587-wtcjx" is available:
+&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wtcjx webserver-deployment-595b5b9587- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-595b5b9587-wtcjx 43ecf595-bd9f-4b44-913b-70e0bb1a27f1 155015 0 2020-06-03 20:59:58 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9eaaa023-d995-4036-b5c5-e3a592fadd1a 0xc000f42ac7 0xc000f42ac8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 20:59:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.24,PodIP:172.20.0.13,StartTime:2020-06-03 20:59:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:00:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://28f8eb1dd72975a527e0e1576eaa729a002a0ad5434bfdea7bde1879edad4dd1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.0.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.163: INFO: Pod "webserver-deployment-c7997dcc8-57cxc" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-57cxc webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-57cxc 76ee018e-3a4f-4c91-888a-44d9e12ab9f4 155199 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f42c47 0xc000f42c48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.24,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.163: INFO: Pod "webserver-deployment-c7997dcc8-5dwdj" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5dwdj webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-5dwdj ac4ebe2d-e62c-4d92-a49d-035abf090c39 155269 0 2020-06-03 21:00:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f42dd0 0xc000f42dd1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.21,PodIP:172.20.3.105,StartTime:2020-06-03 21:00:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.3.105,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.164: INFO: Pod "webserver-deployment-c7997dcc8-5zhr4" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5zhr4 webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-5zhr4 f4c445c0-3fd5-43d1-a3ca-16432f332388 155203 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f42f60 0xc000f42f61}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.21,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.164: INFO: Pod "webserver-deployment-c7997dcc8-9l2hs" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9l2hs webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-9l2hs f418daec-154e-4672-b327-716d05b654b8 155206 0 2020-06-03 21:00:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f430d0 0xc000f430d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.14,PodIP:172.20.4.18,StartTime:2020-06-03 21:00:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login',},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.4.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.164: INFO: Pod "webserver-deployment-c7997dcc8-fwct8" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fwct8 webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-fwct8 1e4ff222-e193-4124-81da-9efc8c9b54c7 155209 0 2020-06-03 21:00:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f43260 0xc000f43261}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.24,PodIP:172.20.0.15,StartTime:2020-06-03 21:00:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login',},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.0.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.164: INFO: Pod "webserver-deployment-c7997dcc8-jdds7" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jdds7 webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-jdds7 bf601217-3cba-4052-87e6-bf3d02519aae 155226 0 2020-06-03 21:00:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f43400 0xc000f43401}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.10,PodIP:172.20.1.14,StartTime:2020-06-03 21:00:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login',},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.1.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.164: INFO: Pod "webserver-deployment-c7997dcc8-jgk5j" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jgk5j webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-jgk5j 7d9fdada-2723-4da2-8a93-823073e5a38c 155195 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f435a0 0xc000f435a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.165: INFO: Pod "webserver-deployment-c7997dcc8-nbrk8" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nbrk8 webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-nbrk8 f17e0bab-c3ae-4c1f-93d3-ee3d37b0416e 155243 0 2020-06-03 21:00:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f43710 0xc000f43711}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:172.20.2.80,StartTime:2020-06-03 21:00:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.2.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.165: INFO: Pod "webserver-deployment-c7997dcc8-nhj9f" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nhj9f webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-nhj9f 78d2e5c5-0fa3-4da0-9588-a2eb29a0271a 155177 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f438a0 0xc000f438a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.21,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.165: INFO: Pod "webserver-deployment-c7997dcc8-prkf7" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-prkf7 webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-prkf7 30d72837-2e6b-475f-a057-e7e427b75339 155188 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f43a10 0xc000f43a11}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-master-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.10,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.165: INFO: Pod "webserver-deployment-c7997dcc8-sv9hf" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sv9hf webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-sv9hf ead4ead7-a472-4f85-a442-255f1a963a8d 155187 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f43b70 0xc000f43b71}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.14,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.165: INFO: Pod "webserver-deployment-c7997dcc8-t58j4" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t58j4 webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-t58j4 0b96e5e4-ff66-42f6-9f00-28bb0cb9bc4f 155200 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f43ce0 0xc000f43ce1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+Jun  3 21:00:07.166: INFO: Pod "webserver-deployment-c7997dcc8-wdzhs" is not available:
+&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wdzhs webserver-deployment-c7997dcc8- deployment-9693 /api/v1/namespaces/deployment-9693/pods/webserver-deployment-c7997dcc8-wdzhs f677d9b5-060f-431f-8105-d7d9cc9fda0a 155202 0 2020-06-03 21:00:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 496368cd-9cf1-4234-9d5e-217766c74850 0xc000f43e50 0xc000f43e51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-plxh4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-plxh4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-plxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-0,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.14,PodIP:,StartTime:2020-06-03 21:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:00:07.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-9693" for this suite.
+Jun  3 21:00:15.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:00:15.286: INFO: namespace deployment-9693 deletion completed in 8.113483843s
+
+• [SLOW TEST:16.369 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  deployment should support proportional scaling [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] 
+  evicts pods with minTolerationSeconds [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:00:15.287: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename taint-multiple-pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:345
+Jun  3 21:00:15.320: INFO: Waiting up to 1m0s for all nodes to be ready
+Jun  3 21:01:15.361: INFO: Waiting for terminating namespaces to be deleted...
+[It] evicts pods with minTolerationSeconds [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:01:15.365: INFO: Starting informer...
+STEP: Starting pods...
+Jun  3 21:01:15.583: INFO: Pod1 is running on karbon-certification-ff5a6a-k8s-worker-1. Tainting Node
+Jun  3 21:01:17.804: INFO: Pod2 is running on karbon-certification-ff5a6a-k8s-worker-1. Tainting Node
+STEP: Trying to apply a taint on the Node
+STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
+STEP: Waiting for Pod1 and Pod2 to be deleted
+Jun  3 21:01:25.661: INFO: Noticed Pod "taint-eviction-b1" gets evicted.
+Jun  3 21:01:44.846: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
+STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
+[AfterEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:01:44.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "taint-multiple-pods-6342" for this suite.
+Jun  3 21:01:50.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:01:50.989: INFO: namespace taint-multiple-pods-6342 deletion completed in 6.116014245s
+
+• [SLOW TEST:95.703 seconds]
+[sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
+  evicts pods with minTolerationSeconds [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy logs on node using proxy subresource  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] version v1
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:01:50.990: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node using proxy subresource  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:01:51.050: INFO: (0) /api/v1/nodes/karbon-certification-ff5a6a-k8s-master-0/proxy/logs/: 
+boot.log
+boot.log-20200603.gz
+[identical node log-directory listing repeated for each subsequent proxy request iteration; the remainder of this test's output, and the start of the following [sig-storage] HostPath test, are truncated in the source log]
+>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename hostpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] HostPath
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
+[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test hostPath mode
+Jun  3 21:01:57.268: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4854" to be "success or failure"
+Jun  3 21:01:57.274: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010523ms
+Jun  3 21:01:59.278: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010186102s
+STEP: Saw pod success
+Jun  3 21:01:59.278: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
+Jun  3 21:01:59.281: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-host-path-test container test-container-1: 
+STEP: delete the pod
+Jun  3 21:01:59.312: INFO: Waiting for pod pod-host-path-test to disappear
+Jun  3 21:01:59.315: INFO: Pod pod-host-path-test no longer exists
+[AfterEach] [sig-storage] HostPath
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:01:59.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "hostpath-4854" for this suite.
+Jun  3 21:02:05.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:02:05.424: INFO: namespace hostpath-4854 deletion completed in 6.105114256s
+
+• [SLOW TEST:8.199 seconds]
+[sig-storage] HostPath
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
+  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
+  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Security Context
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:02:05.424: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename security-context-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Security Context
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
+[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:02:05.466: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-0483ebb4-d5e4-430f-a686-9c8d9c0dacef" in namespace "security-context-test-7601" to be "success or failure"
+Jun  3 21:02:05.471: INFO: Pod "busybox-readonly-false-0483ebb4-d5e4-430f-a686-9c8d9c0dacef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.695264ms
+Jun  3 21:02:07.476: INFO: Pod "busybox-readonly-false-0483ebb4-d5e4-430f-a686-9c8d9c0dacef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010320349s
+Jun  3 21:02:07.476: INFO: Pod "busybox-readonly-false-0483ebb4-d5e4-430f-a686-9c8d9c0dacef" satisfied condition "success or failure"
+[AfterEach] [k8s.io] Security Context
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:02:07.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "security-context-test-7601" for this suite.
+Jun  3 21:02:13.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:02:13.581: INFO: namespace security-context-test-7601 deletion completed in 6.100482996s
+
+• [SLOW TEST:8.157 seconds]
+[k8s.io] Security Context
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  When creating a pod with readOnlyRootFilesystem
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:165
+    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:02:13.581: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
+STEP: Creating service test in namespace statefulset-4832
+[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating stateful set ss in namespace statefulset-4832
+STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4832
+Jun  3 21:02:13.630: INFO: Found 0 stateful pods, waiting for 1
+Jun  3 21:02:23.635: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
+Jun  3 21:02:23.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-4832 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jun  3 21:02:24.110: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jun  3 21:02:24.110: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jun  3 21:02:24.110: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jun  3 21:02:24.114: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
+Jun  3 21:02:34.119: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Jun  3 21:02:34.119: INFO: Waiting for statefulset status.replicas updated to 0
+Jun  3 21:02:34.133: INFO: POD   NODE                                      PHASE    GRACE  CONDITIONS
+Jun  3 21:02:34.133: INFO: ss-0  karbon-certification-ff5a6a-k8s-worker-1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  }]
+Jun  3 21:02:34.133: INFO: 
+Jun  3 21:02:34.133: INFO: StatefulSet ss has not reached scale 3, at 1
+Jun  3 21:02:35.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997022448s
+Jun  3 21:02:36.145: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991929175s
+Jun  3 21:02:37.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985160902s
+Jun  3 21:02:38.155: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.981016742s
+Jun  3 21:02:39.159: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97561867s
+Jun  3 21:02:40.165: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970929647s
+Jun  3 21:02:41.168: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.965775182s
+Jun  3 21:02:42.173: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.961809638s
+Jun  3 21:02:43.177: INFO: Verifying statefulset ss doesn't scale past 3 for another 957.658432ms
+STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4832
+Jun  3 21:02:44.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-4832 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jun  3 21:02:44.394: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jun  3 21:02:44.394: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jun  3 21:02:44.394: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jun  3 21:02:44.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-4832 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jun  3 21:02:44.622: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
+Jun  3 21:02:44.622: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jun  3 21:02:44.622: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jun  3 21:02:44.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-4832 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jun  3 21:02:45.064: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
+Jun  3 21:02:45.064: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jun  3 21:02:45.064: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jun  3 21:02:45.069: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun  3 21:02:45.069: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun  3 21:02:45.069: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Scale down will not halt with unhealthy stateful pod
+Jun  3 21:02:45.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-4832 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jun  3 21:02:45.318: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jun  3 21:02:45.318: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jun  3 21:02:45.318: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jun  3 21:02:45.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-4832 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jun  3 21:02:45.556: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jun  3 21:02:45.556: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jun  3 21:02:45.556: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jun  3 21:02:45.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-4832 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jun  3 21:02:45.785: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jun  3 21:02:45.785: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jun  3 21:02:45.785: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jun  3 21:02:45.785: INFO: Waiting for statefulset status.replicas updated to 0
+Jun  3 21:02:45.789: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
+Jun  3 21:02:55.798: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Jun  3 21:02:55.798: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
+Jun  3 21:02:55.798: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
+Jun  3 21:02:55.813: INFO: POD   NODE                                      PHASE    GRACE  CONDITIONS
+Jun  3 21:02:55.813: INFO: ss-0  karbon-certification-ff5a6a-k8s-worker-1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  }]
+Jun  3 21:02:55.813: INFO: ss-1  karbon-certification-ff5a6a-k8s-worker-2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:02:55.813: INFO: ss-2  karbon-certification-ff5a6a-k8s-worker-0  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:02:55.813: INFO: 
+Jun  3 21:02:55.813: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun  3 21:02:56.818: INFO: POD   NODE                                      PHASE    GRACE  CONDITIONS
+Jun  3 21:02:56.818: INFO: ss-0  karbon-certification-ff5a6a-k8s-worker-1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  }]
+Jun  3 21:02:56.818: INFO: ss-1  karbon-certification-ff5a6a-k8s-worker-2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:02:56.818: INFO: ss-2  karbon-certification-ff5a6a-k8s-worker-0  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:02:56.818: INFO: 
+Jun  3 21:02:56.818: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun  3 21:02:57.823: INFO: POD   NODE                                      PHASE    GRACE  CONDITIONS
+Jun  3 21:02:57.823: INFO: ss-0  karbon-certification-ff5a6a-k8s-worker-1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  }]
+Jun  3 21:02:57.823: INFO: ss-1  karbon-certification-ff5a6a-k8s-worker-2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:02:57.823: INFO: ss-2  karbon-certification-ff5a6a-k8s-worker-0  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:02:57.823: INFO: 
+Jun  3 21:02:57.823: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun  3 21:02:58.830: INFO: POD   NODE                                      PHASE    GRACE  CONDITIONS
+Jun  3 21:02:58.830: INFO: ss-0  karbon-certification-ff5a6a-k8s-worker-1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  }]
+Jun  3 21:02:58.830: INFO: ss-1  karbon-certification-ff5a6a-k8s-worker-2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:02:58.830: INFO: ss-2  karbon-certification-ff5a6a-k8s-worker-0  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:02:58.830: INFO: 
+Jun  3 21:02:58.830: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun  3 21:02:59.835: INFO: POD   NODE                                      PHASE    GRACE  CONDITIONS
+Jun  3 21:02:59.836: INFO: ss-0  karbon-certification-ff5a6a-k8s-worker-1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  }]
+Jun  3 21:02:59.836: INFO: ss-1  karbon-certification-ff5a6a-k8s-worker-2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:02:59.836: INFO: ss-2  karbon-certification-ff5a6a-k8s-worker-0  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:02:59.836: INFO: 
+Jun  3 21:02:59.836: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun  3 21:03:00.840: INFO: POD   NODE                                      PHASE    GRACE  CONDITIONS
+Jun  3 21:03:00.840: INFO: ss-0  karbon-certification-ff5a6a-k8s-worker-1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  }]
+Jun  3 21:03:00.840: INFO: ss-1  karbon-certification-ff5a6a-k8s-worker-2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:03:00.840: INFO: ss-2  karbon-certification-ff5a6a-k8s-worker-0  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:03:00.841: INFO: 
+Jun  3 21:03:00.841: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun  3 21:03:01.845: INFO: POD   NODE                                      PHASE    GRACE  CONDITIONS
+Jun  3 21:03:01.846: INFO: ss-0  karbon-certification-ff5a6a-k8s-worker-1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  }]
+Jun  3 21:03:01.846: INFO: ss-1  karbon-certification-ff5a6a-k8s-worker-2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:03:01.846: INFO: ss-2  karbon-certification-ff5a6a-k8s-worker-0  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:03:01.846: INFO: 
+Jun  3 21:03:01.846: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun  3 21:03:02.850: INFO: POD   NODE                                      PHASE    GRACE  CONDITIONS
+Jun  3 21:03:02.850: INFO: ss-0  karbon-certification-ff5a6a-k8s-worker-1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  }]
+Jun  3 21:03:02.850: INFO: ss-1  karbon-certification-ff5a6a-k8s-worker-2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:03:02.850: INFO: ss-2  karbon-certification-ff5a6a-k8s-worker-0  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:03:02.850: INFO: 
+Jun  3 21:03:02.850: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun  3 21:03:03.856: INFO: POD   NODE                                      PHASE    GRACE  CONDITIONS
+Jun  3 21:03:03.856: INFO: ss-0  karbon-certification-ff5a6a-k8s-worker-1  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:13 +0000 UTC  }]
+Jun  3 21:03:03.856: INFO: ss-1  karbon-certification-ff5a6a-k8s-worker-2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:03:03.856: INFO: ss-2  karbon-certification-ff5a6a-k8s-worker-0  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-03 21:02:34 +0000 UTC  }]
+Jun  3 21:03:03.856: INFO: 
+Jun  3 21:03:03.856: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun  3 21:03:04.861: INFO: Verifying statefulset ss doesn't scale past 0 for another 953.556836ms
+STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4832
+Jun  3 21:03:05.866: INFO: Scaling statefulset ss to 0
+Jun  3 21:03:05.876: INFO: Waiting for statefulset status.replicas updated to 0
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
+Jun  3 21:03:05.879: INFO: Deleting all statefulset in ns statefulset-4832
+Jun  3 21:03:05.881: INFO: Scaling statefulset ss to 0
+Jun  3 21:03:05.890: INFO: Waiting for statefulset status.replicas updated to 0
+Jun  3 21:03:05.892: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:03:05.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-4832" for this suite.
+Jun  3 21:03:11.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:03:12.018: INFO: namespace statefulset-4832 deletion completed in 6.107885914s
+
+• [SLOW TEST:58.437 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:03:12.019: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name projected-configmap-test-volume-24874013-1ee5-4b43-9948-1203f6339a7d
+STEP: Creating a pod to test consume configMaps
+Jun  3 21:03:12.072: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9957951a-8f21-469b-910c-1d62f7595dd1" in namespace "projected-4552" to be "success or failure"
+Jun  3 21:03:12.075: INFO: Pod "pod-projected-configmaps-9957951a-8f21-469b-910c-1d62f7595dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.027993ms
+Jun  3 21:03:14.080: INFO: Pod "pod-projected-configmaps-9957951a-8f21-469b-910c-1d62f7595dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007989295s
+Jun  3 21:03:16.089: INFO: Pod "pod-projected-configmaps-9957951a-8f21-469b-910c-1d62f7595dd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017687902s
+STEP: Saw pod success
+Jun  3 21:03:16.089: INFO: Pod "pod-projected-configmaps-9957951a-8f21-469b-910c-1d62f7595dd1" satisfied condition "success or failure"
+Jun  3 21:03:16.093: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-projected-configmaps-9957951a-8f21-469b-910c-1d62f7595dd1 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun  3 21:03:16.118: INFO: Waiting for pod pod-projected-configmaps-9957951a-8f21-469b-910c-1d62f7595dd1 to disappear
+Jun  3 21:03:16.121: INFO: Pod pod-projected-configmaps-9957951a-8f21-469b-910c-1d62f7595dd1 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:03:16.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-4552" for this suite.
+Jun  3 21:03:22.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:03:22.226: INFO: namespace projected-4552 deletion completed in 6.099886054s
+
+• [SLOW TEST:10.207 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl run job 
+  should create a job from an image when restart is OnFailure  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:03:22.226: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Kubectl run job
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1595
+[It] should create a job from an image when restart is OnFailure  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: running the image docker.io/library/httpd:2.4.38-alpine
+Jun  3 21:03:22.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4241'
+Jun  3 21:03:22.371: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun  3 21:03:22.371: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
+STEP: verifying the job e2e-test-httpd-job was created
+[AfterEach] Kubectl run job
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1600
+Jun  3 21:03:22.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete jobs e2e-test-httpd-job --namespace=kubectl-4241'
+Jun  3 21:03:22.472: INFO: stderr: ""
+Jun  3 21:03:22.472: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:03:22.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-4241" for this suite.
+Jun  3 21:03:28.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:03:28.579: INFO: namespace kubectl-4241 deletion completed in 6.101249956s
+
+• [SLOW TEST:6.353 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl run job
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591
+    should create a job from an image when restart is OnFailure  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:03:28.579: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name projected-configmap-test-volume-c67e756e-04fb-477e-954e-0db177709486
+STEP: Creating a pod to test consume configMaps
+Jun  3 21:03:28.630: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-76d99d9d-8a6a-4126-899c-6915feab4ef4" in namespace "projected-4069" to be "success or failure"
+Jun  3 21:03:28.632: INFO: Pod "pod-projected-configmaps-76d99d9d-8a6a-4126-899c-6915feab4ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.841007ms
+Jun  3 21:03:30.636: INFO: Pod "pod-projected-configmaps-76d99d9d-8a6a-4126-899c-6915feab4ef4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006653322s
+STEP: Saw pod success
+Jun  3 21:03:30.636: INFO: Pod "pod-projected-configmaps-76d99d9d-8a6a-4126-899c-6915feab4ef4" satisfied condition "success or failure"
+Jun  3 21:03:30.639: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-projected-configmaps-76d99d9d-8a6a-4126-899c-6915feab4ef4 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun  3 21:03:30.659: INFO: Waiting for pod pod-projected-configmaps-76d99d9d-8a6a-4126-899c-6915feab4ef4 to disappear
+Jun  3 21:03:30.661: INFO: Pod pod-projected-configmaps-76d99d9d-8a6a-4126-899c-6915feab4ef4 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:03:30.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-4069" for this suite.
+Jun  3 21:03:36.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:03:36.769: INFO: namespace projected-4069 deletion completed in 6.104025701s
+
+• [SLOW TEST:8.190 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:03:36.770: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating a watch on configmaps with a certain label
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: changing the label value of the configmap
+STEP: Expecting to observe a delete notification for the watched object
+Jun  3 21:03:36.819: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-749 /api/v1/namespaces/watch-749/configmaps/e2e-watch-test-label-changed e3b9465e-5d8d-43a4-9dac-2e167563c056 156538 0 2020-06-03 21:03:36 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Jun  3 21:03:36.819: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-749 /api/v1/namespaces/watch-749/configmaps/e2e-watch-test-label-changed e3b9465e-5d8d-43a4-9dac-2e167563c056 156539 0 2020-06-03 21:03:36 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+Jun  3 21:03:36.819: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-749 /api/v1/namespaces/watch-749/configmaps/e2e-watch-test-label-changed e3b9465e-5d8d-43a4-9dac-2e167563c056 156540 0 2020-06-03 21:03:36 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+STEP: modifying the configmap a second time
+STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
+STEP: changing the label value of the configmap back
+STEP: modifying the configmap a third time
+STEP: deleting the configmap
+STEP: Expecting to observe an add notification for the watched object when the label value was restored
+Jun  3 21:03:46.852: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-749 /api/v1/namespaces/watch-749/configmaps/e2e-watch-test-label-changed e3b9465e-5d8d-43a4-9dac-2e167563c056 156559 0 2020-06-03 21:03:36 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Jun  3 21:03:46.852: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-749 /api/v1/namespaces/watch-749/configmaps/e2e-watch-test-label-changed e3b9465e-5d8d-43a4-9dac-2e167563c056 156560 0 2020-06-03 21:03:36 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
+Jun  3 21:03:46.852: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-749 /api/v1/namespaces/watch-749/configmaps/e2e-watch-test-label-changed e3b9465e-5d8d-43a4-9dac-2e167563c056 156561 0 2020-06-03 21:03:36 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:03:46.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-749" for this suite.
+Jun  3 21:03:52.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:03:52.961: INFO: namespace watch-749 deletion completed in 6.103710079s
+
+• [SLOW TEST:16.191 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and capture the life of a replica set. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:03:52.961: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+STEP: Creating a ReplicaSet
+STEP: Ensuring resource quota status captures replicaset creation
+STEP: Deleting a ReplicaSet
+STEP: Ensuring resource quota status released usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:04:04.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-8378" for this suite.
+Jun  3 21:04:10.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:04:10.148: INFO: namespace resourcequota-8378 deletion completed in 6.100444093s
+
+• [SLOW TEST:17.187 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and capture the life of a replica set. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
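+
+(A rough equivalent of the quota exercised above, expressed as an
+object-count quota on ReplicaSets; the quota name and limit are illustrative.)
+
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: rs-count-quota
+spec:
+  hard:
+    count/replicasets.apps: "2"   # object-count quota on ReplicaSets
+EOF
+# status.used rises when a ReplicaSet is created and falls again when it is
+# deleted, which is what the test asserts:
+kubectl describe resourcequota rs-count-quota
+```
+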
+SSSSSSSS
+------------------------------
+[sig-network] DNS 
+  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:04:10.148: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3120.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3120.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3120.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3120.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3120.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3120.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
+
+STEP: creating a pod to probe /etc/hosts
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jun  3 21:04:24.237: INFO: DNS probes using dns-3120/dns-test-6eb0b731-fcbe-4254-bb2c-f0423a3c8103 succeeded
+
+STEP: deleting the pod
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:04:24.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-3120" for this suite.
+Jun  3 21:04:30.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:04:30.364: INFO: namespace dns-3120 deletion completed in 6.108125759s
+
+• [SLOW TEST:20.216 seconds]
+[sig-network] DNS
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
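+
+(The probe loop above boils down to resolving the pod's own hostname through
+/etc/hosts. A one-off hand check, assuming a busybox image is pullable; the
+pod name is illustrative.)
+
+```
+kubectl run dns-check --rm -i --restart=Never --image=busybox:1.31 -- \
+  sh -c 'getent hosts $(hostname); cat /etc/hosts'
+```
+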
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  should perform canary updates and phased rolling updates of template modifications [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:04:30.365: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
+STEP: Creating service test in namespace statefulset-1346
+[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a new StatefulSet
+Jun  3 21:04:30.420: INFO: Found 0 stateful pods, waiting for 3
+Jun  3 21:04:40.426: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun  3 21:04:40.426: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun  3 21:04:40.426: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
+Jun  3 21:04:40.456: INFO: Updating stateful set ss2
+STEP: Creating a new revision
+STEP: Not applying an update when the partition is greater than the number of replicas
+STEP: Performing a canary update
+Jun  3 21:04:50.493: INFO: Updating stateful set ss2
+Jun  3 21:04:50.502: INFO: Waiting for Pod statefulset-1346/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+STEP: Restoring Pods to the correct revision when they are deleted
+Jun  3 21:05:00.559: INFO: Found 2 stateful pods, waiting for 3
+Jun  3 21:05:10.566: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun  3 21:05:10.566: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun  3 21:05:10.566: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Performing a phased rolling update
+Jun  3 21:05:10.591: INFO: Updating stateful set ss2
+Jun  3 21:05:10.597: INFO: Waiting for Pod statefulset-1346/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+Jun  3 21:05:20.625: INFO: Updating stateful set ss2
+Jun  3 21:05:20.630: INFO: Waiting for StatefulSet statefulset-1346/ss2 to complete update
+Jun  3 21:05:20.631: INFO: Waiting for Pod statefulset-1346/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
+Jun  3 21:05:30.639: INFO: Deleting all statefulset in ns statefulset-1346
+Jun  3 21:05:30.642: INFO: Scaling statefulset ss2 to 0
+Jun  3 21:05:40.658: INFO: Waiting for statefulset status.replicas updated to 0
+Jun  3 21:05:40.661: INFO: Deleting statefulset ss2
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:05:40.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-1346" for this suite.
+Jun  3 21:05:46.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:05:46.780: INFO: namespace statefulset-1346 deletion completed in 6.099818511s
+
+• [SLOW TEST:76.416 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+    should perform canary updates and phased rolling updates of template modifications [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
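+
+(The canary/phased mechanics above are driven by the RollingUpdate partition.
+A sketch using the names from this run: only ordinals >= partition move to
+the new revision, and lowering the partition phases the rollout.)
+
+```
+# Stage the canary: with partition=2, only ss2-2 is updated.
+kubectl -n statefulset-1346 patch statefulset ss2 \
+  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
+kubectl -n statefulset-1346 patch statefulset ss2 --type=json \
+  -p '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/httpd:2.4.39-alpine"}]'
+# Phase the rollout by lowering the partition (2 -> 1 -> 0):
+kubectl -n statefulset-1346 patch statefulset ss2 \
+  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
+```
+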
+S
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:05:46.781: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name projected-configmap-test-volume-099e164e-03b4-4dde-9f46-411d84889c93
+STEP: Creating a pod to test consume configMaps
+Jun  3 21:05:46.826: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f71f9d9-bfc8-4850-aa59-782f29aba361" in namespace "projected-8460" to be "success or failure"
+Jun  3 21:05:46.830: INFO: Pod "pod-projected-configmaps-0f71f9d9-bfc8-4850-aa59-782f29aba361": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424533ms
+Jun  3 21:05:48.835: INFO: Pod "pod-projected-configmaps-0f71f9d9-bfc8-4850-aa59-782f29aba361": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009571261s
+STEP: Saw pod success
+Jun  3 21:05:48.835: INFO: Pod "pod-projected-configmaps-0f71f9d9-bfc8-4850-aa59-782f29aba361" satisfied condition "success or failure"
+Jun  3 21:05:48.838: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-projected-configmaps-0f71f9d9-bfc8-4850-aa59-782f29aba361 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun  3 21:05:48.866: INFO: Waiting for pod pod-projected-configmaps-0f71f9d9-bfc8-4850-aa59-782f29aba361 to disappear
+Jun  3 21:05:48.869: INFO: Pod pod-projected-configmaps-0f71f9d9-bfc8-4850-aa59-782f29aba361 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:05:48.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-8460" for this suite.
+Jun  3 21:05:54.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:05:54.975: INFO: namespace projected-8460 deletion completed in 6.103002026s
+
+• [SLOW TEST:8.195 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
+  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
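+
+(What "consumable in multiple volumes" means concretely: one ConfigMap
+projected into two mounts of the same pod. A minimal sketch; all names and
+the busybox image are illustrative.)
+
+```
+kubectl create configmap demo-cm --from-literal=key=value
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-twice
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: busybox:1.31
+    command: ["sh", "-c", "cat /etc/cm-one/key /etc/cm-two/key"]
+    volumeMounts:
+    - { name: one, mountPath: /etc/cm-one }
+    - { name: two, mountPath: /etc/cm-two }
+  volumes:
+  - name: one
+    projected:
+      sources:
+      - configMap: { name: demo-cm }
+  - name: two
+    projected:
+      sources:
+      - configMap: { name: demo-cm }
+EOF
+```
+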
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:05:54.976: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0644 on node default medium
+Jun  3 21:05:55.017: INFO: Waiting up to 5m0s for pod "pod-4febf65d-adc9-4dff-b3d0-046798382bad" in namespace "emptydir-8910" to be "success or failure"
+Jun  3 21:05:55.025: INFO: Pod "pod-4febf65d-adc9-4dff-b3d0-046798382bad": Phase="Pending", Reason="", readiness=false. Elapsed: 7.622593ms
+Jun  3 21:05:57.030: INFO: Pod "pod-4febf65d-adc9-4dff-b3d0-046798382bad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012517261s
+Jun  3 21:05:59.034: INFO: Pod "pod-4febf65d-adc9-4dff-b3d0-046798382bad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016963115s
+STEP: Saw pod success
+Jun  3 21:05:59.034: INFO: Pod "pod-4febf65d-adc9-4dff-b3d0-046798382bad" satisfied condition "success or failure"
+Jun  3 21:05:59.038: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-4febf65d-adc9-4dff-b3d0-046798382bad container test-container: 
+STEP: delete the pod
+Jun  3 21:05:59.064: INFO: Waiting for pod pod-4febf65d-adc9-4dff-b3d0-046798382bad to disappear
+Jun  3 21:05:59.068: INFO: Pod pod-4febf65d-adc9-4dff-b3d0-046798382bad no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:05:59.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-8910" for this suite.
+Jun  3 21:06:05.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:06:05.180: INFO: namespace emptydir-8910 deletion completed in 6.108942623s
+
+• [SLOW TEST:10.205 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
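+
+(The (non-root,0644,default) triple denotes: run as a non-root UID, expect
+0644 file mode, use the default emptyDir medium. A hand-runnable sketch; the
+real test uses a dedicated mounttest image, so busybox here is a stand-in.)
+
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-perms
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1001          # the "non-root" part of the triple
+  containers:
+  - name: test
+    image: busybox:1.31
+    command: ["sh", "-c", "ls -ld /mnt/ed && touch /mnt/ed/f && ls -l /mnt/ed"]
+    volumeMounts:
+    - name: ed
+      mountPath: /mnt/ed
+  volumes:
+  - name: ed
+    emptyDir: {}             # add `medium: Memory` for the tmpfs variants
+EOF
+```
+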
+SSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected combined 
+  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected combined
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:06:05.181: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name configmap-projected-all-test-volume-35f21436-71c2-4ef8-a071-09fd7472a7ed
+STEP: Creating secret with name secret-projected-all-test-volume-c8339fc7-e8ee-4e62-992c-7f573c52da1d
+STEP: Creating a pod to test Check all projections for projected volume plugin
+Jun  3 21:06:05.236: INFO: Waiting up to 5m0s for pod "projected-volume-4f70cbd0-1b7b-455c-8f63-747794469f41" in namespace "projected-6654" to be "success or failure"
+Jun  3 21:06:05.241: INFO: Pod "projected-volume-4f70cbd0-1b7b-455c-8f63-747794469f41": Phase="Pending", Reason="", readiness=false. Elapsed: 5.06442ms
+Jun  3 21:06:07.245: INFO: Pod "projected-volume-4f70cbd0-1b7b-455c-8f63-747794469f41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009553537s
+STEP: Saw pod success
+Jun  3 21:06:07.245: INFO: Pod "projected-volume-4f70cbd0-1b7b-455c-8f63-747794469f41" satisfied condition "success or failure"
+Jun  3 21:06:07.249: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod projected-volume-4f70cbd0-1b7b-455c-8f63-747794469f41 container projected-all-volume-test: 
+STEP: delete the pod
+Jun  3 21:06:07.271: INFO: Waiting for pod projected-volume-4f70cbd0-1b7b-455c-8f63-747794469f41 to disappear
+Jun  3 21:06:07.274: INFO: Pod projected-volume-4f70cbd0-1b7b-455c-8f63-747794469f41 no longer exists
+[AfterEach] [sig-storage] Projected combined
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:06:07.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-6654" for this suite.
+Jun  3 21:06:13.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:06:13.382: INFO: namespace projected-6654 deletion completed in 6.10299267s
+
+• [SLOW TEST:8.201 seconds]
+[sig-storage] Projected combined
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
+  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
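+
+("All components of the projection API" means one volume fed by a ConfigMap,
+a Secret and the downward API at once. A minimal sketch; names and image are
+illustrative.)
+
+```
+kubectl create configmap all-cm --from-literal=cm-key=cm-value
+kubectl create secret generic all-secret --from-literal=secret-key=secret-value
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-all
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: busybox:1.31
+    command: ["sh", "-c", "cat /all/cm-key /all/secret-key /all/podname"]
+    volumeMounts:
+    - name: all
+      mountPath: /all
+  volumes:
+  - name: all
+    projected:
+      sources:
+      - configMap: { name: all-cm }
+      - secret: { name: all-secret }
+      - downwardAPI:
+          items:
+          - path: podname
+            fieldRef: { fieldPath: metadata.name }
+EOF
+```
+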
+SS
+------------------------------
+[sig-api-machinery] Watchers 
+  should be able to start watching from a specific resource version [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:06:13.382: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to start watching from a specific resource version [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: modifying the configmap a second time
+STEP: deleting the configmap
+STEP: creating a watch on configmaps from the resource version returned by the first update
+STEP: Expecting to observe notifications for all changes to the configmap after the first update
+Jun  3 21:06:13.443: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-9178 /api/v1/namespaces/watch-9178/configmaps/e2e-watch-test-resource-version 29c2d41d-ea46-4e5e-811d-15699954dcb9 157274 0 2020-06-03 21:06:13 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Jun  3 21:06:13.443: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-9178 /api/v1/namespaces/watch-9178/configmaps/e2e-watch-test-resource-version 29c2d41d-ea46-4e5e-811d-15699954dcb9 157275 0 2020-06-03 21:06:13 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:06:13.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-9178" for this suite.
+Jun  3 21:06:19.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:06:19.564: INFO: namespace watch-9178 deletion completed in 6.11737795s
+
+• [SLOW TEST:6.182 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to start watching from a specific resource version [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
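+
+(Watching "from a specific resource version" maps to the resourceVersion
+query parameter on a watch request; events after that version are replayed,
+as in the MODIFIED/DELETED pair above. A sketch via the raw API; <RV> is a
+placeholder for a version taken from a prior list or update.)
+
+```
+kubectl get --raw \
+  '/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=<RV>'
+```
+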
+SSSSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:06:19.564: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Performing setup for networking test in namespace pod-network-test-8427
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun  3 21:06:19.602: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun  3 21:06:41.734: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.2.102:8080/dial?request=hostName&protocol=udp&host=172.20.2.101&port=8081&tries=1'] Namespace:pod-network-test-8427 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:06:41.734: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:06:41.862: INFO: Waiting for endpoints: map[]
+Jun  3 21:06:41.867: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.2.102:8080/dial?request=hostName&protocol=udp&host=172.20.3.116&port=8081&tries=1'] Namespace:pod-network-test-8427 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:06:41.867: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:06:41.994: INFO: Waiting for endpoints: map[]
+Jun  3 21:06:41.998: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.2.102:8080/dial?request=hostName&protocol=udp&host=172.20.1.18&port=8081&tries=1'] Namespace:pod-network-test-8427 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:06:41.998: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:06:42.137: INFO: Waiting for endpoints: map[]
+Jun  3 21:06:42.142: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.2.102:8080/dial?request=hostName&protocol=udp&host=172.20.4.27&port=8081&tries=1'] Namespace:pod-network-test-8427 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:06:42.142: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:06:42.278: INFO: Waiting for endpoints: map[]
+Jun  3 21:06:42.282: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.2.102:8080/dial?request=hostName&protocol=udp&host=172.20.0.21&port=8081&tries=1'] Namespace:pod-network-test-8427 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:06:42.282: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:06:42.405: INFO: Waiting for endpoints: map[]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:06:42.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-8427" for this suite.
+Jun  3 21:06:54.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:06:54.510: INFO: namespace pod-network-test-8427 deletion completed in 12.098816716s
+
+• [SLOW TEST:34.945 seconds]
+[sig-network] Networking
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl run deployment 
+  should create a deployment from an image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:06:54.510: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Kubectl run deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1540
+[It] should create a deployment from an image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: running the image docker.io/library/httpd:2.4.38-alpine
+Jun  3 21:06:54.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-6534'
+Jun  3 21:06:54.658: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun  3 21:06:54.658: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
+STEP: verifying the deployment e2e-test-httpd-deployment was created
+STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
+[AfterEach] Kubectl run deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
+Jun  3 21:06:56.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete deployment e2e-test-httpd-deployment --namespace=kubectl-6534'
+Jun  3 21:06:56.772: INFO: stderr: ""
+Jun  3 21:06:56.772: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:06:56.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-6534" for this suite.
+Jun  3 21:07:08.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:07:08.882: INFO: namespace kubectl-6534 deletion completed in 12.105940395s
+
+• [SLOW TEST:14.372 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl run deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1536
+    should create a deployment from an image  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
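+
+(As the stderr line notes, the deployment generator is deprecated; the
+non-deprecated equivalent of the command the test runs would be:)
+
+```
+kubectl create deployment e2e-test-httpd-deployment \
+  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6534
+kubectl delete deployment e2e-test-httpd-deployment --namespace=kubectl-6534
+```
+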
+SSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Proxy server 
+  should support --unix-socket=/path  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:07:08.882: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[It] should support --unix-socket=/path  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Starting the proxy
+Jun  3 21:07:08.916: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-005848369 proxy --unix-socket=/tmp/kubectl-proxy-unix785780680/test'
+(Note: the doubled "kubectl kubectl" above is how the framework echoes the
+binary path followed by argv; the effective command is a single kubectl
+invocation.)
+STEP: retrieving proxy /api/ output
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:07:08.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-2241" for this suite.
+Jun  3 21:07:15.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:07:15.103: INFO: namespace kubectl-2241 deletion completed in 6.113848701s
+
+• [SLOW TEST:6.220 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Proxy server
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1782
+    should support --unix-socket=/path  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
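+
+(To reproduce the Unix-socket proxy check by hand; the socket path is
+illustrative, and curl needs --unix-socket support, i.e. 7.40+.)
+
+```
+kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
+curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
+```
+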
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:07:15.103: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
+[It] should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 21:07:15.201: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc55ca96-2664-45f5-a231-86a15aaca5fb" in namespace "downward-api-4898" to be "success or failure"
+Jun  3 21:07:15.205: INFO: Pod "downwardapi-volume-dc55ca96-2664-45f5-a231-86a15aaca5fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.670515ms
+Jun  3 21:07:17.211: INFO: Pod "downwardapi-volume-dc55ca96-2664-45f5-a231-86a15aaca5fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009836633s
+STEP: Saw pod success
+Jun  3 21:07:17.211: INFO: Pod "downwardapi-volume-dc55ca96-2664-45f5-a231-86a15aaca5fb" satisfied condition "success or failure"
+Jun  3 21:07:17.214: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-dc55ca96-2664-45f5-a231-86a15aaca5fb container client-container: 
+STEP: delete the pod
+Jun  3 21:07:17.235: INFO: Waiting for pod downwardapi-volume-dc55ca96-2664-45f5-a231-86a15aaca5fb to disappear
+Jun  3 21:07:17.238: INFO: Pod downwardapi-volume-dc55ca96-2664-45f5-a231-86a15aaca5fb no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:07:17.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-4898" for this suite.
+Jun  3 21:07:23.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:07:23.368: INFO: namespace downward-api-4898 deletion completed in 6.121328062s
+
+• [SLOW TEST:8.265 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
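+
+(The memory limit reaches the container through a downwardAPI volume with a
+resourceFieldRef. A minimal sketch; names, image and the 64Mi limit are
+illustrative.)
+
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-limits
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox:1.31
+    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
+    resources:
+      limits:
+        memory: "64Mi"
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: mem_limit
+        resourceFieldRef:
+          containerName: client-container
+          resource: limits.memory
+EOF
+```
+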
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  removes definition from spec when one version gets changed to not be served [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:07:23.368: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] removes definition from spec when one version gets changed to not be served [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: set up a multi version CRD
+Jun  3 21:07:23.406: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: mark a version not served

+STEP: check the unserved version gets removed
+STEP: check the other version is not changed
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:07:40.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-9604" for this suite.
+Jun  3 21:07:46.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:07:46.279: INFO: namespace crd-publish-openapi-9604 deletion completed in 6.103128519s
+
+• [SLOW TEST:22.911 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  removes definition from spec when one version gets changed to not be served [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
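+
+(The "mark a version not served" step corresponds to flipping served: false
+on one version of a multi-version CRD, after which that version's definitions
+drop out of /openapi/v2. A sketch using the v1beta1 CRD API current in v1.16;
+the group and names are illustrative.)
+
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: widgets.example.com
+spec:
+  group: example.com
+  scope: Namespaced
+  names: { plural: widgets, singular: widget, kind: Widget }
+  versions:
+  - name: v1
+    served: true        # set to false to remove its definition from the spec
+    storage: true
+  - name: v2
+    served: true
+    storage: false
+EOF
+# Rough check that the CRD's definitions are published:
+kubectl get --raw /openapi/v2 | grep -c example.com
+```
+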
+SSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:07:46.279: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name cm-test-opt-del-62ecb74c-282b-4d77-8e6a-70f297675010
+STEP: Creating configMap with name cm-test-opt-upd-ef9efd53-48ac-44e7-9b58-612920cff63e
+STEP: Creating the pod
+STEP: Deleting configmap cm-test-opt-del-62ecb74c-282b-4d77-8e6a-70f297675010
+STEP: Updating configmap cm-test-opt-upd-ef9efd53-48ac-44e7-9b58-612920cff63e
+STEP: Creating configMap with name cm-test-opt-create-c7b78c51-bf69-414e-b459-efb13f32da3b
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:08:58.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-7736" for this suite.
+Jun  3 21:09:10.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:09:10.897: INFO: namespace configmap-7736 deletion completed in 12.104897115s
+
+• [SLOW TEST:84.617 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
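+
+("Optional" here is the optional: true flag on the volume source: the pod
+starts even if the ConfigMap is absent, and the kubelet syncs the volume once
+the object appears or changes. A minimal sketch with illustrative names.)
+
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: optional-cm
+spec:
+  containers:
+  - name: test
+    image: busybox:1.31
+    command: ["sh", "-c", "sleep 3600"]
+    volumeMounts:
+    - name: maybe
+      mountPath: /etc/maybe
+  volumes:
+  - name: maybe
+    configMap:
+      name: cm-that-may-not-exist
+      optional: true
+EOF
+```
+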
+SSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:09:10.897: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0777 on tmpfs
+Jun  3 21:09:10.937: INFO: Waiting up to 5m0s for pod "pod-e57abd24-7038-4f65-8755-33ad9f88dc5b" in namespace "emptydir-9471" to be "success or failure"
+Jun  3 21:09:10.940: INFO: Pod "pod-e57abd24-7038-4f65-8755-33ad9f88dc5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896315ms
+Jun  3 21:09:12.944: INFO: Pod "pod-e57abd24-7038-4f65-8755-33ad9f88dc5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006847571s
+STEP: Saw pod success
+Jun  3 21:09:12.944: INFO: Pod "pod-e57abd24-7038-4f65-8755-33ad9f88dc5b" satisfied condition "success or failure"
+Jun  3 21:09:12.946: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-e57abd24-7038-4f65-8755-33ad9f88dc5b container test-container: 
+STEP: delete the pod
+Jun  3 21:09:12.968: INFO: Waiting for pod pod-e57abd24-7038-4f65-8755-33ad9f88dc5b to disappear
+Jun  3 21:09:12.971: INFO: Pod pod-e57abd24-7038-4f65-8755-33ad9f88dc5b no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:09:12.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-9471" for this suite.
+Jun  3 21:09:18.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:09:19.078: INFO: namespace emptydir-9471 deletion completed in 6.104170506s
+
+• [SLOW TEST:8.182 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  works for CRD preserving unknown fields at the schema root [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:09:19.079: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] works for CRD preserving unknown fields at the schema root [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:09:19.111: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
+Jun  3 21:09:22.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-9965 create -f -'
+Jun  3 21:09:23.234: INFO: stderr: ""
+Jun  3 21:09:23.234: INFO: stdout: "e2e-test-crd-publish-openapi-9941-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
+Jun  3 21:09:23.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-9965 delete e2e-test-crd-publish-openapi-9941-crds test-cr'
+Jun  3 21:09:23.367: INFO: stderr: ""
+Jun  3 21:09:23.367: INFO: stdout: "e2e-test-crd-publish-openapi-9941-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
+Jun  3 21:09:23.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-9965 apply -f -'
+Jun  3 21:09:23.638: INFO: stderr: ""
+Jun  3 21:09:23.638: INFO: stdout: "e2e-test-crd-publish-openapi-9941-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
+Jun  3 21:09:23.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-9965 delete e2e-test-crd-publish-openapi-9941-crds test-cr'
+Jun  3 21:09:23.742: INFO: stderr: ""
+Jun  3 21:09:23.742: INFO: stdout: "e2e-test-crd-publish-openapi-9941-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
+STEP: kubectl explain works to explain CR
+Jun  3 21:09:23.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 explain e2e-test-crd-publish-openapi-9941-crds'
+Jun  3 21:09:23.948: INFO: stderr: ""
+Jun  3 21:09:23.948: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9941-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:09:27.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-9965" for this suite.
+Jun  3 21:09:33.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:09:33.641: INFO: namespace crd-publish-openapi-9965 deletion completed in 6.120015836s
+
+• [SLOW TEST:14.562 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  works for CRD preserving unknown fields at the schema root [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:09:33.641: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Performing setup for networking test in namespace pod-network-test-6032
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun  3 21:09:33.680: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun  3 21:09:57.801: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.2.108:8080/dial?request=hostName&protocol=http&host=172.20.3.117&port=8080&tries=1'] Namespace:pod-network-test-6032 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:09:57.802: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:09:57.944: INFO: Waiting for endpoints: map[]
+Jun  3 21:09:57.948: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.2.108:8080/dial?request=hostName&protocol=http&host=172.20.0.22&port=8080&tries=1'] Namespace:pod-network-test-6032 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:09:57.948: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:09:58.120: INFO: Waiting for endpoints: map[]
+Jun  3 21:09:58.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.2.108:8080/dial?request=hostName&protocol=http&host=172.20.1.19&port=8080&tries=1'] Namespace:pod-network-test-6032 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:09:58.124: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:09:58.265: INFO: Waiting for endpoints: map[]
+Jun  3 21:09:58.269: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.2.108:8080/dial?request=hostName&protocol=http&host=172.20.4.28&port=8080&tries=1'] Namespace:pod-network-test-6032 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:09:58.269: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:09:58.405: INFO: Waiting for endpoints: map[]
+Jun  3 21:09:58.408: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://172.20.2.108:8080/dial?request=hostName&protocol=http&host=172.20.2.107&port=8080&tries=1'] Namespace:pod-network-test-6032 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:09:58.408: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:09:58.552: INFO: Waiting for endpoints: map[]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:09:58.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-6032" for this suite.
+Jun  3 21:10:10.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:10:10.673: INFO: namespace pod-network-test-6032 deletion completed in 12.116252117s
+
+• [SLOW TEST:37.032 seconds]
+[sig-network] Networking
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Variable Expansion 
+  should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:10:10.674: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename var-expansion
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test env composition
+Jun  3 21:10:10.722: INFO: Waiting up to 5m0s for pod "var-expansion-059f3f9e-0cd0-4c22-af91-749dd6ff971e" in namespace "var-expansion-8436" to be "success or failure"
+Jun  3 21:10:10.725: INFO: Pod "var-expansion-059f3f9e-0cd0-4c22-af91-749dd6ff971e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392735ms
+Jun  3 21:10:12.730: INFO: Pod "var-expansion-059f3f9e-0cd0-4c22-af91-749dd6ff971e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007244594s
+STEP: Saw pod success
+Jun  3 21:10:12.730: INFO: Pod "var-expansion-059f3f9e-0cd0-4c22-af91-749dd6ff971e" satisfied condition "success or failure"
+Jun  3 21:10:12.733: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod var-expansion-059f3f9e-0cd0-4c22-af91-749dd6ff971e container dapi-container: 
+STEP: delete the pod
+Jun  3 21:10:12.755: INFO: Waiting for pod var-expansion-059f3f9e-0cd0-4c22-af91-749dd6ff971e to disappear
+Jun  3 21:10:12.758: INFO: Pod var-expansion-059f3f9e-0cd0-4c22-af91-749dd6ff971e no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:10:12.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "var-expansion-8436" for this suite.
+Jun  3 21:10:18.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:10:18.868: INFO: namespace var-expansion-8436 deletion completed in 6.103865352s
+
+• [SLOW TEST:8.194 seconds]
+[k8s.io] Variable Expansion
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:10:18.868: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name secret-test-map-824a97f1-2168-4be0-9be1-4859682cf31e
+STEP: Creating a pod to test consume secrets
+Jun  3 21:10:18.917: INFO: Waiting up to 5m0s for pod "pod-secrets-504eda06-a2db-4dc4-a810-679e37804315" in namespace "secrets-4027" to be "success or failure"
+Jun  3 21:10:18.921: INFO: Pod "pod-secrets-504eda06-a2db-4dc4-a810-679e37804315": Phase="Pending", Reason="", readiness=false. Elapsed: 4.700261ms
+Jun  3 21:10:20.927: INFO: Pod "pod-secrets-504eda06-a2db-4dc4-a810-679e37804315": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009845417s
+STEP: Saw pod success
+Jun  3 21:10:20.927: INFO: Pod "pod-secrets-504eda06-a2db-4dc4-a810-679e37804315" satisfied condition "success or failure"
+Jun  3 21:10:20.930: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-secrets-504eda06-a2db-4dc4-a810-679e37804315 container secret-volume-test: 
+STEP: delete the pod
+Jun  3 21:10:20.953: INFO: Waiting for pod pod-secrets-504eda06-a2db-4dc4-a810-679e37804315 to disappear
+Jun  3 21:10:20.956: INFO: Pod pod-secrets-504eda06-a2db-4dc4-a810-679e37804315 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:10:20.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-4027" for this suite.
+Jun  3 21:10:26.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:10:27.065: INFO: namespace secrets-4027 deletion completed in 6.105440324s
+
+• [SLOW TEST:8.197 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-api-machinery] Namespaces [Serial] 
+  should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:10:27.066: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename namespaces
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a test namespace
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Creating a service in the namespace
+STEP: Deleting the namespace
+STEP: Waiting for the namespace to be removed.
+STEP: Recreating the namespace
+STEP: Verifying there is no service in the namespace
+[AfterEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:10:33.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "namespaces-4595" for this suite.
+Jun  3 21:10:39.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:10:39.304: INFO: namespace namespaces-4595 deletion completed in 6.109027172s
+STEP: Destroying namespace "nsdeletetest-8293" for this suite.
+Jun  3 21:10:39.306: INFO: Namespace nsdeletetest-8293 was already deleted
+STEP: Destroying namespace "nsdeletetest-4803" for this suite.
+Jun  3 21:10:45.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:10:45.404: INFO: namespace nsdeletetest-4803 deletion completed in 6.097043259s
+
+• [SLOW TEST:18.338 seconds]
+[sig-api-machinery] Namespaces [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSS
+------------------------------
+[k8s.io] Probing container 
+  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:10:45.404: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
+[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:11:11.459: INFO: Container started at 2020-06-03 21:10:46 +0000 UTC, pod became ready at 2020-06-03 21:11:10 +0000 UTC
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:11:11.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-3762" for this suite.
+Jun  3 21:11:23.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:11:23.569: INFO: namespace container-probe-3762 deletion completed in 12.106524516s
+
+• [SLOW TEST:38.166 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with configmap pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:11:23.570: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
+STEP: Setting up data
+[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating pod pod-subpath-test-configmap-fxgg
+STEP: Creating a pod to test atomic-volume-subpath
+Jun  3 21:11:23.619: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-fxgg" in namespace "subpath-6374" to be "success or failure"
+Jun  3 21:11:23.624: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Pending", Reason="", readiness=false. Elapsed: 5.696616ms
+Jun  3 21:11:25.629: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Running", Reason="", readiness=true. Elapsed: 2.010155796s
+Jun  3 21:11:27.632: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Running", Reason="", readiness=true. Elapsed: 4.013534424s
+Jun  3 21:11:29.637: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Running", Reason="", readiness=true. Elapsed: 6.017983958s
+Jun  3 21:11:31.641: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Running", Reason="", readiness=true. Elapsed: 8.022460958s
+Jun  3 21:11:33.645: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Running", Reason="", readiness=true. Elapsed: 10.026212921s
+Jun  3 21:11:35.649: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Running", Reason="", readiness=true. Elapsed: 12.030618056s
+Jun  3 21:11:37.654: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Running", Reason="", readiness=true. Elapsed: 14.03528068s
+Jun  3 21:11:39.658: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Running", Reason="", readiness=true. Elapsed: 16.039816294s
+Jun  3 21:11:41.663: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Running", Reason="", readiness=true. Elapsed: 18.044130386s
+Jun  3 21:11:43.666: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Running", Reason="", readiness=true. Elapsed: 20.047264766s
+Jun  3 21:11:45.670: INFO: Pod "pod-subpath-test-configmap-fxgg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.051540871s
+STEP: Saw pod success
+Jun  3 21:11:45.670: INFO: Pod "pod-subpath-test-configmap-fxgg" satisfied condition "success or failure"
+Jun  3 21:11:45.673: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-subpath-test-configmap-fxgg container test-container-subpath-configmap-fxgg: 
+STEP: delete the pod
+Jun  3 21:11:45.694: INFO: Waiting for pod pod-subpath-test-configmap-fxgg to disappear
+Jun  3 21:11:45.697: INFO: Pod pod-subpath-test-configmap-fxgg no longer exists
+STEP: Deleting pod pod-subpath-test-configmap-fxgg
+Jun  3 21:11:45.697: INFO: Deleting pod "pod-subpath-test-configmap-fxgg" in namespace "subpath-6374"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:11:45.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-6374" for this suite.
+Jun  3 21:11:51.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:11:51.803: INFO: namespace subpath-6374 deletion completed in 6.099017202s
+
+• [SLOW TEST:28.233 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
+  Atomic writer volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
+    should support subpaths with configmap pod [LinuxOnly] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSS
+------------------------------
+[sig-node] Downward API 
+  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:11:51.803: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward api env vars
+Jun  3 21:11:51.844: INFO: Waiting up to 5m0s for pod "downward-api-e62812a9-1a9e-4d77-b867-d03469e19250" in namespace "downward-api-8352" to be "success or failure"
+Jun  3 21:11:51.846: INFO: Pod "downward-api-e62812a9-1a9e-4d77-b867-d03469e19250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.421149ms
+Jun  3 21:11:53.851: INFO: Pod "downward-api-e62812a9-1a9e-4d77-b867-d03469e19250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007102672s
+STEP: Saw pod success
+Jun  3 21:11:53.851: INFO: Pod "downward-api-e62812a9-1a9e-4d77-b867-d03469e19250" satisfied condition "success or failure"
+Jun  3 21:11:53.853: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downward-api-e62812a9-1a9e-4d77-b867-d03469e19250 container dapi-container: 
+STEP: delete the pod
+Jun  3 21:11:53.876: INFO: Waiting for pod downward-api-e62812a9-1a9e-4d77-b867-d03469e19250 to disappear
+Jun  3 21:11:53.878: INFO: Pod downward-api-e62812a9-1a9e-4d77-b867-d03469e19250 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:11:53.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-8352" for this suite.
+Jun  3 21:11:59.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:11:59.986: INFO: namespace downward-api-8352 deletion completed in 6.103381935s
+
+• [SLOW TEST:8.184 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
+  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl rolling-update 
+  should support rolling-update to same image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:11:59.987: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Kubectl rolling-update
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1499
+[It] should support rolling-update to same image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: running the image docker.io/library/httpd:2.4.38-alpine
+Jun  3 21:12:00.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7561'
+Jun  3 21:12:00.128: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun  3 21:12:00.128: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
+STEP: verifying the rc e2e-test-httpd-rc was created
+Jun  3 21:12:00.135: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
+Jun  3 21:12:00.136: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
+STEP: rolling-update to same image controller
+Jun  3 21:12:00.149: INFO: scanned /root for discovery docs: 
+Jun  3 21:12:00.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7561'
+Jun  3 21:12:15.960: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
+Jun  3 21:12:15.960: INFO: stdout: "Created e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75\nScaling up e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
+Jun  3 21:12:15.960: INFO: stdout: "Created e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75\nScaling up e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
+STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
+Jun  3 21:12:15.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7561'
+Jun  3 21:12:16.058: INFO: stderr: ""
+Jun  3 21:12:16.058: INFO: stdout: "e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75-mpkvz "
+Jun  3 21:12:16.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75-mpkvz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7561'
+Jun  3 21:12:16.153: INFO: stderr: ""
+Jun  3 21:12:16.153: INFO: stdout: "true"
+Jun  3 21:12:16.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75-mpkvz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7561'
+Jun  3 21:12:16.241: INFO: stderr: ""
+Jun  3 21:12:16.241: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
+Jun  3 21:12:16.241: INFO: e2e-test-httpd-rc-dd4f3bb699983d910217bb623e3deb75-mpkvz is verified up and running
+[AfterEach] Kubectl rolling-update
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1505
+Jun  3 21:12:16.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete rc e2e-test-httpd-rc --namespace=kubectl-7561'
+Jun  3 21:12:16.347: INFO: stderr: ""
+Jun  3 21:12:16.347: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:12:16.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-7561" for this suite.
+Jun  3 21:12:28.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:12:28.460: INFO: namespace kubectl-7561 deletion completed in 12.107439297s
+
+• [SLOW TEST:28.474 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl rolling-update
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1494
+    should support rolling-update to same image  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:12:28.461: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name secret-test-map-a5d330c2-69fc-4f61-ae55-7a2defc13a51
+STEP: Creating a pod to test consume secrets
+Jun  3 21:12:28.508: INFO: Waiting up to 5m0s for pod "pod-secrets-4a59112e-3b18-4866-bf1a-5119e3439b26" in namespace "secrets-3201" to be "success or failure"
+Jun  3 21:12:28.511: INFO: Pod "pod-secrets-4a59112e-3b18-4866-bf1a-5119e3439b26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323011ms
+Jun  3 21:12:30.515: INFO: Pod "pod-secrets-4a59112e-3b18-4866-bf1a-5119e3439b26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006985573s
+STEP: Saw pod success
+Jun  3 21:12:30.515: INFO: Pod "pod-secrets-4a59112e-3b18-4866-bf1a-5119e3439b26" satisfied condition "success or failure"
+Jun  3 21:12:30.518: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-secrets-4a59112e-3b18-4866-bf1a-5119e3439b26 container secret-volume-test: 
+STEP: delete the pod
+Jun  3 21:12:30.540: INFO: Waiting for pod pod-secrets-4a59112e-3b18-4866-bf1a-5119e3439b26 to disappear
+Jun  3 21:12:30.542: INFO: Pod pod-secrets-4a59112e-3b18-4866-bf1a-5119e3439b26 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:12:30.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-3201" for this suite.
+Jun  3 21:12:36.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:12:36.651: INFO: namespace secrets-3201 deletion completed in 6.102978117s
+
+• [SLOW TEST:8.190 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-network] Services 
+  should provide secure master service  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:12:36.651: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
+[It] should provide secure master service  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:12:36.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-3374" for this suite.
+Jun  3 21:12:42.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:12:42.793: INFO: namespace services-3374 deletion completed in 6.101419313s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
+
+• [SLOW TEST:6.142 seconds]
+[sig-network] Services
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide secure master service  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:12:42.793: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for all pods to be garbage collected
+STEP: Gathering metrics
+Jun  3 21:12:52.852: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+W0603 21:12:52.852433      25 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:12:52.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-3825" for this suite.
+Jun  3 21:12:58.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:12:58.962: INFO: namespace gc-3825 deletion completed in 6.106473991s
+
+• [SLOW TEST:16.170 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should not be blocked by dependency circle [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:12:58.963: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not be blocked by dependency circle [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:12:59.058: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"61ada2f1-ac52-4ee4-b823-5a85a53cfbeb", Controller:(*bool)(0xc003973e3a), BlockOwnerDeletion:(*bool)(0xc003973e3b)}}
+Jun  3 21:12:59.065: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8ee24b95-d360-4a05-a267-5e6e2c51eda6", Controller:(*bool)(0xc0038030da), BlockOwnerDeletion:(*bool)(0xc0038030db)}}
+Jun  3 21:12:59.073: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"524c2f61-cb17-4828-a38e-b7aea0e9e0b8", Controller:(*bool)(0xc00623d73a), BlockOwnerDeletion:(*bool)(0xc00623d73b)}}
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:13:04.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-9338" for this suite.
+Jun  3 21:13:10.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:13:10.208: INFO: namespace gc-9338 deletion completed in 6.116171167s
+
+• [SLOW TEST:11.246 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should not be blocked by dependency circle [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:13:10.208: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0777 on node default medium
+Jun  3 21:13:10.249: INFO: Waiting up to 5m0s for pod "pod-cc930cac-cbb9-48f1-a704-6d08486ccc27" in namespace "emptydir-9374" to be "success or failure"
+Jun  3 21:13:10.254: INFO: Pod "pod-cc930cac-cbb9-48f1-a704-6d08486ccc27": Phase="Pending", Reason="", readiness=false. Elapsed: 5.085552ms
+Jun  3 21:13:12.261: INFO: Pod "pod-cc930cac-cbb9-48f1-a704-6d08486ccc27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012211774s
+STEP: Saw pod success
+Jun  3 21:13:12.261: INFO: Pod "pod-cc930cac-cbb9-48f1-a704-6d08486ccc27" satisfied condition "success or failure"
+Jun  3 21:13:12.266: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-cc930cac-cbb9-48f1-a704-6d08486ccc27 container test-container: 
+STEP: delete the pod
+Jun  3 21:13:12.289: INFO: Waiting for pod pod-cc930cac-cbb9-48f1-a704-6d08486ccc27 to disappear
+Jun  3 21:13:12.292: INFO: Pod pod-cc930cac-cbb9-48f1-a704-6d08486ccc27 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:13:12.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-9374" for this suite.
+Jun  3 21:13:18.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:13:18.397: INFO: namespace emptydir-9374 deletion completed in 6.101177358s
+
+• [SLOW TEST:8.189 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should be able to deny pod and configmap creation [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:13:18.397: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:13:18.668: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:13:21.693: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should be able to deny pod and configmap creation [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Registering the webhook via the AdmissionRegistration API
+STEP: create a pod that should be denied by the webhook
+STEP: create a pod that causes the webhook to hang
+STEP: create a configmap that should be denied by the webhook
+STEP: create a configmap that should be admitted by the webhook
+STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
+STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
+STEP: create a namespace that bypass the webhook
+STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:13:31.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-1438" for this suite.
+Jun  3 21:13:37.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:13:37.904: INFO: namespace webhook-1438 deletion completed in 6.099668004s
+STEP: Destroying namespace "webhook-1438-markers" for this suite.
+Jun  3 21:13:43.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:13:44.007: INFO: namespace webhook-1438-markers deletion completed in 6.102457917s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:25.627 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to deny pod and configmap creation [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should observe add, update, and delete watch notifications on configmaps [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:13:44.024: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating a watch on configmaps with label A
+STEP: creating a watch on configmaps with label B
+STEP: creating a watch on configmaps with label A or B
+STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
+Jun  3 21:13:44.069: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-a 6e76fac4-fd10-4c02-8b9b-48cd10fe1d91 159036 0 2020-06-03 21:13:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Jun  3 21:13:44.069: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-a 6e76fac4-fd10-4c02-8b9b-48cd10fe1d91 159036 0 2020-06-03 21:13:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
+STEP: modifying configmap A and ensuring the correct watchers observe the notification
+Jun  3 21:13:54.080: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-a 6e76fac4-fd10-4c02-8b9b-48cd10fe1d91 159054 0 2020-06-03 21:13:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+Jun  3 21:13:54.080: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-a 6e76fac4-fd10-4c02-8b9b-48cd10fe1d91 159054 0 2020-06-03 21:13:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+STEP: modifying configmap A again and ensuring the correct watchers observe the notification
+Jun  3 21:14:04.089: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-a 6e76fac4-fd10-4c02-8b9b-48cd10fe1d91 159073 0 2020-06-03 21:13:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Jun  3 21:14:04.089: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-a 6e76fac4-fd10-4c02-8b9b-48cd10fe1d91 159073 0 2020-06-03 21:13:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+STEP: deleting configmap A and ensuring the correct watchers observe the notification
+Jun  3 21:14:14.101: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-a 6e76fac4-fd10-4c02-8b9b-48cd10fe1d91 159092 0 2020-06-03 21:13:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Jun  3 21:14:14.101: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-a 6e76fac4-fd10-4c02-8b9b-48cd10fe1d91 159092 0 2020-06-03 21:13:44 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
+Jun  3 21:14:24.110: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-b 872d6edf-c1c2-4d47-a4fb-4319d3294cc3 159112 0 2020-06-03 21:14:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Jun  3 21:14:24.110: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-b 872d6edf-c1c2-4d47-a4fb-4319d3294cc3 159112 0 2020-06-03 21:14:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
+STEP: deleting configmap B and ensuring the correct watchers observe the notification
+Jun  3 21:14:34.120: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-b 872d6edf-c1c2-4d47-a4fb-4319d3294cc3 159131 0 2020-06-03 21:14:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Jun  3 21:14:34.120: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1921 /api/v1/namespaces/watch-1921/configmaps/e2e-watch-test-configmap-b 872d6edf-c1c2-4d47-a4fb-4319d3294cc3 159131 0 2020-06-03 21:14:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:14:44.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-1921" for this suite.
+Jun  3 21:14:50.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:14:50.224: INFO: namespace watch-1921 deletion completed in 6.098620448s
+
+• [SLOW TEST:66.200 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should observe add, update, and delete watch notifications on configmaps [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+[sig-apps] ReplicationController 
+  should surface a failure condition on a common issue like exceeded quota [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:14:50.224: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename replication-controller
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:14:50.259: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
+STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
+STEP: Checking rc "condition-test" has the desired failure condition set
+STEP: Scaling down rc "condition-test" to satisfy pod quota
+Jun  3 21:14:52.296: INFO: Updating replication controller "condition-test"
+STEP: Checking rc "condition-test" has no failure condition set
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:14:53.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replication-controller-9584" for this suite.
+Jun  3 21:14:59.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:14:59.404: INFO: namespace replication-controller-9584 deletion completed in 6.096721673s
+
+• [SLOW TEST:9.180 seconds]
+[sig-apps] ReplicationController
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should surface a failure condition on a common issue like exceeded quota [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates resource limits of pods that are allowed to run  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:14:59.404: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
+Jun  3 21:14:59.435: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jun  3 21:14:59.446: INFO: Waiting for terminating namespaces to be deleted...
+Jun  3 21:14:59.449: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-master-0 before test
+Jun  3 21:14:59.466: INFO: kube-flannel-ds-hznhg from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.466: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: kube-proxy-ds-qrgfl from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.466: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: fluent-bit-mb264 from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.466: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: node-exporter-hkj7p from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.466: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: csi-node-ntnx-plugin-pdc8c from ntnx-system started at 2020-06-03 01:26:50 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.466: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-58wws from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.466: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:14:59.466: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: kube-apiserver-karbon-certification-ff5a6a-k8s-master-0 from kube-system started at 2020-06-02 22:11:48 +0000 UTC (3 container statuses recorded)
+Jun  3 21:14:59.466: INFO: 	Container kube-apiserver ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: 	Container kube-controller-manager ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: 	Container kube-scheduler ready: true, restart count 0
+Jun  3 21:14:59.466: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-master-1 before test
+Jun  3 21:14:59.481: INFO: kube-flannel-ds-zdlj6 from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.481: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: csi-node-ntnx-plugin-6cg44 from ntnx-system started at 2020-06-03 01:27:02 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.481: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-sz7h8 from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.481: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:14:59.481: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: kube-apiserver-karbon-certification-ff5a6a-k8s-master-1 from kube-system started at 2020-06-02 22:13:08 +0000 UTC (3 container statuses recorded)
+Jun  3 21:14:59.481: INFO: 	Container kube-apiserver ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: 	Container kube-controller-manager ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: 	Container kube-scheduler ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: kube-proxy-ds-8hv5j from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.481: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: node-exporter-dwrsb from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.481: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: kube-dns-5c64dc6c6b-ls68z from kube-system started at 2020-06-02 22:16:18 +0000 UTC (3 container statuses recorded)
+Jun  3 21:14:59.481: INFO: 	Container dnsmasq ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: 	Container kubedns ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: 	Container sidecar ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: fluent-bit-zcqwz from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.481: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:14:59.481: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-0 before test
+Jun  3 21:14:59.496: INFO: fluent-bit-gb59k from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.496: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:14:59.496: INFO: elasticsearch-logging-0 from ntnx-system started at 2020-06-02 22:17:12 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.496: INFO: 	Container elasticsearch-logging ready: true, restart count 0
+Jun  3 21:14:59.496: INFO: node-exporter-5q9qc from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.496: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:14:59.496: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:14:59.496: INFO: kube-proxy-ds-qt528 from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.496: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:14:59.496: INFO: kube-flannel-ds-qnlzb from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.496: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:14:59.496: INFO: kubernetes-events-printer-5c6d46dfdb-zcvlt from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.496: INFO: 	Container kubernetes-events-printer ready: true, restart count 0
+Jun  3 21:14:59.496: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-7btt6 from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.496: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:14:59.496: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:14:59.496: INFO: csi-node-ntnx-plugin-zbw4j from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.496: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:14:59.496: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:14:59.496: INFO: alertmanager-main-1 from ntnx-system started at 2020-06-03 21:01:20 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.496: INFO: 	Container alertmanager ready: false, restart count 0
+Jun  3 21:14:59.496: INFO: 	Container config-reloader ready: false, restart count 0
+Jun  3 21:14:59.496: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-1 before test
+Jun  3 21:14:59.512: INFO: kube-flannel-ds-jhm9k from kube-system started at 2020-06-03 21:01:50 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.512: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: node-exporter-qwbtg from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.512: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-szp8f from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.512: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:14:59.512: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: kube-proxy-ds-fgf9r from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.512: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: sonobuoy from sonobuoy started at 2020-06-03 20:08:28 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.512: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: csi-node-ntnx-plugin-bh72v from ntnx-system started at 2020-06-03 21:01:44 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.512: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: fluent-bit-dn8fp from ntnx-system started at 2020-06-03 21:01:44 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.512: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: prometheus-k8s-1 from ntnx-system started at 2020-06-03 21:02:04 +0000 UTC (3 container statuses recorded)
+Jun  3 21:14:59.512: INFO: 	Container prometheus ready: false, restart count 0
+Jun  3 21:14:59.512: INFO: 	Container prometheus-config-reloader ready: false, restart count 0
+Jun  3 21:14:59.512: INFO: 	Container rules-configmap-reloader ready: false, restart count 0
+Jun  3 21:14:59.512: INFO: sonobuoy-e2e-job-5435c8b63156474a from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.512: INFO: 	Container e2e ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 21:14:59.512: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-2 before test
+Jun  3 21:14:59.529: INFO: fluent-bit-zgt4s from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: prometheus-operator-58f86dddd6-fkbmk from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container prometheus-operator ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: kube-flannel-ds-q4sbl from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: csi-provisioner-ntnx-plugin-0 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container csi-provisioner ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: 	Container ntnx-csi-plugin ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: kube-state-metrics-5d45657948-qkv6t from ntnx-system started at 2020-06-02 22:19:59 +0000 UTC (4 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container addon-resizer ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: 	Container kube-rbac-proxy-main ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: 	Container kube-rbac-proxy-self ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: 	Container kube-state-metrics ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-p8d7c from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:14:59.529: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: kube-proxy-ds-gn6cv from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: csi-node-ntnx-plugin-wnbs7 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: prometheus-k8s-0 from ntnx-system started at 2020-06-02 22:20:28 +0000 UTC (3 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container prometheus ready: true, restart count 1
+Jun  3 21:14:59.529: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: kibana-logging-54b7d845-c94kw from ntnx-system started at 2020-06-03 21:01:17 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container kibana-logging ready: false, restart count 0
+Jun  3 21:14:59.529: INFO: 	Container nginxhttp ready: false, restart count 0
+Jun  3 21:14:59.529: INFO: csi-attacher-ntnx-plugin-0 from ntnx-system started at 2020-06-03 21:01:22 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container csi-attacher ready: false, restart count 0
+Jun  3 21:14:59.529: INFO: 	Container ntnx-csi-plugin ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: node-exporter-hs75m from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: alertmanager-main-0 from ntnx-system started at 2020-06-02 22:20:10 +0000 UTC (2 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container alertmanager ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: 	Container config-reloader ready: true, restart count 0
+Jun  3 21:14:59.529: INFO: elasticsearch-curator-cron-1591142460-cj4wj from ntnx-system started at 2020-06-03 00:01:05 +0000 UTC (1 container statuses recorded)
+Jun  3 21:14:59.529: INFO: 	Container curator ready: false, restart count 0
+[It] validates resource limits of pods that are allowed to run  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: verifying the node has the label node karbon-certification-ff5a6a-k8s-master-0
+STEP: verifying the node has the label node karbon-certification-ff5a6a-k8s-master-1
+STEP: verifying the node has the label node karbon-certification-ff5a6a-k8s-worker-0
+STEP: verifying the node has the label node karbon-certification-ff5a6a-k8s-worker-1
+STEP: verifying the node has the label node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod kube-apiserver-karbon-certification-ff5a6a-k8s-master-0 requesting resource cpu=300m on Node karbon-certification-ff5a6a-k8s-master-0
+Jun  3 21:14:59.615: INFO: Pod kube-apiserver-karbon-certification-ff5a6a-k8s-master-1 requesting resource cpu=300m on Node karbon-certification-ff5a6a-k8s-master-1
+Jun  3 21:14:59.615: INFO: Pod kube-dns-5c64dc6c6b-ls68z requesting resource cpu=260m on Node karbon-certification-ff5a6a-k8s-master-1
+Jun  3 21:14:59.615: INFO: Pod kube-flannel-ds-hznhg requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-master-0
+Jun  3 21:14:59.615: INFO: Pod kube-flannel-ds-jhm9k requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-1
+Jun  3 21:14:59.615: INFO: Pod kube-flannel-ds-q4sbl requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod kube-flannel-ds-qnlzb requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-0
+Jun  3 21:14:59.615: INFO: Pod kube-flannel-ds-zdlj6 requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-master-1
+Jun  3 21:14:59.615: INFO: Pod kube-proxy-ds-8hv5j requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-master-1
+Jun  3 21:14:59.615: INFO: Pod kube-proxy-ds-fgf9r requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-1
+Jun  3 21:14:59.615: INFO: Pod kube-proxy-ds-gn6cv requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod kube-proxy-ds-qrgfl requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-master-0
+Jun  3 21:14:59.615: INFO: Pod kube-proxy-ds-qt528 requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-0
+Jun  3 21:14:59.615: INFO: Pod alertmanager-main-0 requesting resource cpu=200m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod alertmanager-main-1 requesting resource cpu=200m on Node karbon-certification-ff5a6a-k8s-worker-0
+Jun  3 21:14:59.615: INFO: Pod csi-attacher-ntnx-plugin-0 requesting resource cpu=200m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod csi-node-ntnx-plugin-6cg44 requesting resource cpu=200m on Node karbon-certification-ff5a6a-k8s-master-1
+Jun  3 21:14:59.615: INFO: Pod csi-node-ntnx-plugin-bh72v requesting resource cpu=200m on Node karbon-certification-ff5a6a-k8s-worker-1
+Jun  3 21:14:59.615: INFO: Pod csi-node-ntnx-plugin-pdc8c requesting resource cpu=200m on Node karbon-certification-ff5a6a-k8s-master-0
+Jun  3 21:14:59.615: INFO: Pod csi-node-ntnx-plugin-wnbs7 requesting resource cpu=200m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod csi-node-ntnx-plugin-zbw4j requesting resource cpu=200m on Node karbon-certification-ff5a6a-k8s-worker-0
+Jun  3 21:14:59.615: INFO: Pod csi-provisioner-ntnx-plugin-0 requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod elasticsearch-logging-0 requesting resource cpu=500m on Node karbon-certification-ff5a6a-k8s-worker-0
+Jun  3 21:14:59.615: INFO: Pod fluent-bit-dn8fp requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-1
+Jun  3 21:14:59.615: INFO: Pod fluent-bit-gb59k requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-0
+Jun  3 21:14:59.615: INFO: Pod fluent-bit-mb264 requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-master-0
+Jun  3 21:14:59.615: INFO: Pod fluent-bit-zcqwz requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-master-1
+Jun  3 21:14:59.615: INFO: Pod fluent-bit-zgt4s requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod kibana-logging-54b7d845-c94kw requesting resource cpu=200m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod kube-state-metrics-5d45657948-qkv6t requesting resource cpu=150m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod kubernetes-events-printer-5c6d46dfdb-zcvlt requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-0
+Jun  3 21:14:59.615: INFO: Pod node-exporter-5q9qc requesting resource cpu=112m on Node karbon-certification-ff5a6a-k8s-worker-0
+Jun  3 21:14:59.615: INFO: Pod node-exporter-dwrsb requesting resource cpu=112m on Node karbon-certification-ff5a6a-k8s-master-1
+Jun  3 21:14:59.615: INFO: Pod node-exporter-hkj7p requesting resource cpu=112m on Node karbon-certification-ff5a6a-k8s-master-0
+Jun  3 21:14:59.615: INFO: Pod node-exporter-hs75m requesting resource cpu=112m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod node-exporter-qwbtg requesting resource cpu=112m on Node karbon-certification-ff5a6a-k8s-worker-1
+Jun  3 21:14:59.615: INFO: Pod prometheus-k8s-0 requesting resource cpu=400m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod prometheus-k8s-1 requesting resource cpu=400m on Node karbon-certification-ff5a6a-k8s-worker-1
+Jun  3 21:14:59.615: INFO: Pod prometheus-operator-58f86dddd6-fkbmk requesting resource cpu=100m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod sonobuoy requesting resource cpu=0m on Node karbon-certification-ff5a6a-k8s-worker-1
+Jun  3 21:14:59.615: INFO: Pod sonobuoy-e2e-job-5435c8b63156474a requesting resource cpu=0m on Node karbon-certification-ff5a6a-k8s-worker-1
+Jun  3 21:14:59.615: INFO: Pod sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-58wws requesting resource cpu=0m on Node karbon-certification-ff5a6a-k8s-master-0
+Jun  3 21:14:59.615: INFO: Pod sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-7btt6 requesting resource cpu=0m on Node karbon-certification-ff5a6a-k8s-worker-0
+Jun  3 21:14:59.615: INFO: Pod sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-p8d7c requesting resource cpu=0m on Node karbon-certification-ff5a6a-k8s-worker-2
+Jun  3 21:14:59.615: INFO: Pod sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-sz7h8 requesting resource cpu=0m on Node karbon-certification-ff5a6a-k8s-master-1
+Jun  3 21:14:59.615: INFO: Pod sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-szp8f requesting resource cpu=0m on Node karbon-certification-ff5a6a-k8s-worker-1
+STEP: Starting Pods to consume most of the cluster CPU.
+Jun  3 21:14:59.615: INFO: Creating a pod which consumes cpu=2161m on Node karbon-certification-ff5a6a-k8s-master-0
+Jun  3 21:14:59.624: INFO: Creating a pod which consumes cpu=1979m on Node karbon-certification-ff5a6a-k8s-master-1
+Jun  3 21:14:59.636: INFO: Creating a pod which consumes cpu=4611m on Node karbon-certification-ff5a6a-k8s-worker-0
+Jun  3 21:14:59.643: INFO: Creating a pod which consumes cpu=4891m on Node karbon-certification-ff5a6a-k8s-worker-1
+Jun  3 21:14:59.654: INFO: Creating a pod which consumes cpu=4226m on Node karbon-certification-ff5a6a-k8s-worker-2
+STEP: Creating another pod that requires unavailable amount of CPU.
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-333658aa-e275-4ad4-83bc-b1a2c31c1352.161525289b06cd3b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2361/filler-pod-333658aa-e275-4ad4-83bc-b1a2c31c1352 to karbon-certification-ff5a6a-k8s-worker-2]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-333658aa-e275-4ad4-83bc-b1a2c31c1352.16152528cb8e0c83], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-333658aa-e275-4ad4-83bc-b1a2c31c1352.16152528d0c918b8], Reason = [Created], Message = [Created container filler-pod-333658aa-e275-4ad4-83bc-b1a2c31c1352]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-333658aa-e275-4ad4-83bc-b1a2c31c1352.16152528d8e86f50], Reason = [Started], Message = [Started container filler-pod-333658aa-e275-4ad4-83bc-b1a2c31c1352]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-6f7b4cb3-1137-4534-9156-beed5980e257.161525289a5bf50f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2361/filler-pod-6f7b4cb3-1137-4534-9156-beed5980e257 to karbon-certification-ff5a6a-k8s-worker-1]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-6f7b4cb3-1137-4534-9156-beed5980e257.16152528cbf7ce6f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-6f7b4cb3-1137-4534-9156-beed5980e257.16152528d37e720b], Reason = [Created], Message = [Created container filler-pod-6f7b4cb3-1137-4534-9156-beed5980e257]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-6f7b4cb3-1137-4534-9156-beed5980e257.16152528dbb2b094], Reason = [Started], Message = [Started container filler-pod-6f7b4cb3-1137-4534-9156-beed5980e257]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-9d486af7-b3e4-4a7d-b5cf-ce5b3a3f83a0.16152528997ad3a1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2361/filler-pod-9d486af7-b3e4-4a7d-b5cf-ce5b3a3f83a0 to karbon-certification-ff5a6a-k8s-worker-0]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-9d486af7-b3e4-4a7d-b5cf-ce5b3a3f83a0.16152528c93e4318], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.1"]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-9d486af7-b3e4-4a7d-b5cf-ce5b3a3f83a0.1615252935d533b5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.1"]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-9d486af7-b3e4-4a7d-b5cf-ce5b3a3f83a0.161525293af5fff3], Reason = [Created], Message = [Created container filler-pod-9d486af7-b3e4-4a7d-b5cf-ce5b3a3f83a0]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-9d486af7-b3e4-4a7d-b5cf-ce5b3a3f83a0.1615252943548702], Reason = [Started], Message = [Started container filler-pod-9d486af7-b3e4-4a7d-b5cf-ce5b3a3f83a0]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-b07809b1-e8ae-4fed-9dcf-3afeb7606121.1615252898e57250], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2361/filler-pod-b07809b1-e8ae-4fed-9dcf-3afeb7606121 to karbon-certification-ff5a6a-k8s-master-1]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-b07809b1-e8ae-4fed-9dcf-3afeb7606121.16152528c589c9f6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.1"]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-b07809b1-e8ae-4fed-9dcf-3afeb7606121.1615252934d0bc7a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.1"]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-b07809b1-e8ae-4fed-9dcf-3afeb7606121.1615252939af4ef0], Reason = [Created], Message = [Created container filler-pod-b07809b1-e8ae-4fed-9dcf-3afeb7606121]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-b07809b1-e8ae-4fed-9dcf-3afeb7606121.1615252942ba7eb1], Reason = [Started], Message = [Started container filler-pod-b07809b1-e8ae-4fed-9dcf-3afeb7606121]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-dd525a66-e841-4ee0-adb7-5bb4328817fa.16152528982c4cb0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2361/filler-pod-dd525a66-e841-4ee0-adb7-5bb4328817fa to karbon-certification-ff5a6a-k8s-master-0]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-dd525a66-e841-4ee0-adb7-5bb4328817fa.16152528c7addef3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.1"]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-dd525a66-e841-4ee0-adb7-5bb4328817fa.1615252935296472], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.1"]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-dd525a66-e841-4ee0-adb7-5bb4328817fa.161525293a5d7c47], Reason = [Created], Message = [Created container filler-pod-dd525a66-e841-4ee0-adb7-5bb4328817fa]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-dd525a66-e841-4ee0-adb7-5bb4328817fa.16152529436f9cdb], Reason = [Started], Message = [Started container filler-pod-dd525a66-e841-4ee0-adb7-5bb4328817fa]
+STEP: Considering event: 
+Type = [Warning], Name = [additional-pod.161525298b30b1c2], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient cpu.]
+STEP: removing the label node off the node karbon-certification-ff5a6a-k8s-master-0
+STEP: verifying the node doesn't have the label node
+STEP: removing the label node off the node karbon-certification-ff5a6a-k8s-master-1
+STEP: verifying the node doesn't have the label node
+STEP: removing the label node off the node karbon-certification-ff5a6a-k8s-worker-0
+STEP: verifying the node doesn't have the label node
+STEP: removing the label node off the node karbon-certification-ff5a6a-k8s-worker-1
+STEP: verifying the node doesn't have the label node
+STEP: removing the label node off the node karbon-certification-ff5a6a-k8s-worker-2
+STEP: verifying the node doesn't have the label node
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:15:04.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-2361" for this suite.
+Jun  3 21:15:10.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:15:10.887: INFO: namespace sched-pred-2361 deletion completed in 6.102861744s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
+
+• [SLOW TEST:11.483 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
+  validates resource limits of pods that are allowed to run  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for ExternalName services [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:15:10.888: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for ExternalName services [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a test externalName service
+STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8214.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8214.svc.cluster.local; sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8214.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8214.svc.cluster.local; sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jun  3 21:15:12.957: INFO: DNS probes using dns-test-0a3bcd24-5198-4f7b-8864-b7b5cc09305b succeeded
+
+STEP: deleting the pod
+STEP: changing the externalName to bar.example.com
+STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8214.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8214.svc.cluster.local; sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8214.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8214.svc.cluster.local; sleep 1; done
+
+STEP: creating a second pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jun  3 21:15:15.063: INFO: DNS probes using dns-test-6a56e342-24b4-4102-8f96-082f94142d66 succeeded
+
+STEP: deleting the pod
+STEP: changing the service to type=ClusterIP
+STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8214.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8214.svc.cluster.local; sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8214.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8214.svc.cluster.local; sleep 1; done
+
+STEP: creating a third pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jun  3 21:15:17.133: INFO: DNS probes using dns-test-c90be107-69b0-436f-b681-1fe57f41cd99 succeeded
+
+STEP: deleting the pod
+STEP: deleting the test externalName service
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:15:17.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-8214" for this suite.
+Jun  3 21:15:23.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:15:23.287: INFO: namespace dns-8214 deletion completed in 6.102951551s
+
+• [SLOW TEST:12.400 seconds]
+[sig-network] DNS
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide DNS for ExternalName services [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  patching/updating a validating webhook should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:15:23.288: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:15:23.829: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:15:26.855: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] patching/updating a validating webhook should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a validating webhook configuration
+STEP: Creating a configMap that does not comply to the validation webhook rules
+STEP: Updating a validating webhook configuration's rules to not include the create operation
+STEP: Creating a configMap that does not comply to the validation webhook rules
+STEP: Patching a validating webhook configuration's rules to include the create operation
+STEP: Creating a configMap that does not comply to the validation webhook rules
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:15:26.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-1401" for this suite.
+Jun  3 21:15:32.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:15:33.041: INFO: namespace webhook-1401 deletion completed in 6.106847956s
+STEP: Destroying namespace "webhook-1401-markers" for this suite.
+Jun  3 21:15:39.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:15:39.148: INFO: namespace webhook-1401-markers deletion completed in 6.10668852s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:15.879 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  patching/updating a validating webhook should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:15:39.167: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating projection with secret that has name projected-secret-test-e483caa4-b409-4b68-8cc4-1377f65f4a79
+STEP: Creating a pod to test consume secrets
+Jun  3 21:15:39.218: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eaa1d387-de59-4d81-8b4c-3371af112bb9" in namespace "projected-4556" to be "success or failure"
+Jun  3 21:15:39.227: INFO: Pod "pod-projected-secrets-eaa1d387-de59-4d81-8b4c-3371af112bb9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.714158ms
+Jun  3 21:15:41.231: INFO: Pod "pod-projected-secrets-eaa1d387-de59-4d81-8b4c-3371af112bb9": Phase="Running", Reason="", readiness=true. Elapsed: 2.012872502s
+Jun  3 21:15:43.235: INFO: Pod "pod-projected-secrets-eaa1d387-de59-4d81-8b4c-3371af112bb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017375763s
+STEP: Saw pod success
+Jun  3 21:15:43.235: INFO: Pod "pod-projected-secrets-eaa1d387-de59-4d81-8b4c-3371af112bb9" satisfied condition "success or failure"
+Jun  3 21:15:43.238: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-projected-secrets-eaa1d387-de59-4d81-8b4c-3371af112bb9 container projected-secret-volume-test: 
+STEP: delete the pod
+Jun  3 21:15:43.262: INFO: Waiting for pod pod-projected-secrets-eaa1d387-de59-4d81-8b4c-3371af112bb9 to disappear
+Jun  3 21:15:43.266: INFO: Pod pod-projected-secrets-eaa1d387-de59-4d81-8b4c-3371af112bb9 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:15:43.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-4556" for this suite.
+Jun  3 21:15:49.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:15:49.376: INFO: namespace projected-4556 deletion completed in 6.103744557s
+
+• [SLOW TEST:10.209 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:15:49.376: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name configmap-test-volume-map-f6347b28-56db-43d0-a0e2-aac1111cd4ec
+STEP: Creating a pod to test consume configMaps
+Jun  3 21:15:49.420: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b4d3321-cf31-4f28-a929-90ed8dd5d82e" in namespace "configmap-4660" to be "success or failure"
+Jun  3 21:15:49.425: INFO: Pod "pod-configmaps-9b4d3321-cf31-4f28-a929-90ed8dd5d82e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.46883ms
+Jun  3 21:15:51.430: INFO: Pod "pod-configmaps-9b4d3321-cf31-4f28-a929-90ed8dd5d82e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009149514s
+STEP: Saw pod success
+Jun  3 21:15:51.430: INFO: Pod "pod-configmaps-9b4d3321-cf31-4f28-a929-90ed8dd5d82e" satisfied condition "success or failure"
+Jun  3 21:15:51.433: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-configmaps-9b4d3321-cf31-4f28-a929-90ed8dd5d82e container configmap-volume-test: 
+STEP: delete the pod
+Jun  3 21:15:51.452: INFO: Waiting for pod pod-configmaps-9b4d3321-cf31-4f28-a929-90ed8dd5d82e to disappear
+Jun  3 21:15:51.454: INFO: Pod pod-configmaps-9b4d3321-cf31-4f28-a929-90ed8dd5d82e no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:15:51.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-4660" for this suite.
+Jun  3 21:15:57.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:15:57.556: INFO: namespace configmap-4660 deletion completed in 6.0976744s
+
+• [SLOW TEST:8.179 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a read only busybox container 
+  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:15:57.556: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:15:59.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-4535" for this suite.
+Jun  3 21:16:45.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:16:45.725: INFO: namespace kubelet-test-4535 deletion completed in 46.10295479s
+
+• [SLOW TEST:48.169 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  when scheduling a read only busybox container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
+    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:16:45.725: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0666 on node default medium
+Jun  3 21:16:45.767: INFO: Waiting up to 5m0s for pod "pod-a0335f52-3120-43d0-aaa3-edc641e5ae0d" in namespace "emptydir-9974" to be "success or failure"
+Jun  3 21:16:45.771: INFO: Pod "pod-a0335f52-3120-43d0-aaa3-edc641e5ae0d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.564068ms
+Jun  3 21:16:47.775: INFO: Pod "pod-a0335f52-3120-43d0-aaa3-edc641e5ae0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007577439s
+STEP: Saw pod success
+Jun  3 21:16:47.775: INFO: Pod "pod-a0335f52-3120-43d0-aaa3-edc641e5ae0d" satisfied condition "success or failure"
+Jun  3 21:16:47.777: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-a0335f52-3120-43d0-aaa3-edc641e5ae0d container test-container: 
+STEP: delete the pod
+Jun  3 21:16:47.798: INFO: Waiting for pod pod-a0335f52-3120-43d0-aaa3-edc641e5ae0d to disappear
+Jun  3 21:16:47.800: INFO: Pod pod-a0335f52-3120-43d0-aaa3-edc641e5ae0d no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:16:47.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-9974" for this suite.
+Jun  3 21:16:53.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:16:53.907: INFO: namespace emptydir-9974 deletion completed in 6.103033428s
+
+• [SLOW TEST:8.182 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should include webhook resources in discovery documents [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:16:53.907: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:16:54.788: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jun  3 21:16:56.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815814, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815814, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815814, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815814, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:16:59.815: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should include webhook resources in discovery documents [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: fetching the /apis discovery document
+STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
+STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
+STEP: fetching the /apis/admissionregistration.k8s.io discovery document
+STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
+STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
+STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:16:59.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-1717" for this suite.
+Jun  3 21:17:05.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:17:05.925: INFO: namespace webhook-1717 deletion completed in 6.100254608s
+STEP: Destroying namespace "webhook-1717-markers" for this suite.
+Jun  3 21:17:11.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:17:12.032: INFO: namespace webhook-1717-markers deletion completed in 6.106831476s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:18.140 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should include webhook resources in discovery documents [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+S
+------------------------------
+[sig-apps] Deployment 
+  RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:17:12.047: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
+[It] RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:17:12.081: INFO: Creating deployment "test-recreate-deployment"
+Jun  3 21:17:12.085: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
+Jun  3 21:17:12.092: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
+Jun  3 21:17:14.100: INFO: Waiting deployment "test-recreate-deployment" to complete
+Jun  3 21:17:14.102: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
+Jun  3 21:17:14.110: INFO: Updating deployment test-recreate-deployment
+Jun  3 21:17:14.110: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
+Jun  3 21:17:14.177: INFO: Deployment "test-recreate-deployment":
+&Deployment{ObjectMeta:{test-recreate-deployment  deployment-9918 /apis/apps/v1/namespaces/deployment-9918/deployments/test-recreate-deployment bd2c16f9-f0f1-403a-a230-8f4606f7227d 159993 2 2020-06-03 21:17:12 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001c000e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-06-03 21:17:14 +0000 UTC,LastTransitionTime:2020-06-03 21:17:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-06-03 21:17:14 +0000 UTC,LastTransitionTime:2020-06-03 21:17:12 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
+
+Jun  3 21:17:14.180: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
+&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-9918 /apis/apps/v1/namespaces/deployment-9918/replicasets/test-recreate-deployment-5f94c574ff eb55f059-8317-4255-8682-319ad5c6ea8f 159992 1 2020-06-03 21:17:14 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment bd2c16f9-f0f1-403a-a230-8f4606f7227d 0xc001c004b7 0xc001c004b8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001c00518  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jun  3 21:17:14.180: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
+Jun  3 21:17:14.181: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-68fc85c7bb  deployment-9918 /apis/apps/v1/namespaces/deployment-9918/replicasets/test-recreate-deployment-68fc85c7bb 1e6c755f-c816-4e41-9319-a1222c1f14c0 159981 2 2020-06-03 21:17:12 +0000 UTC   map[name:sample-pod-3 pod-template-hash:68fc85c7bb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment bd2c16f9-f0f1-403a-a230-8f4606f7227d 0xc001c00587 0xc001c00588}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 68fc85c7bb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:68fc85c7bb] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001c005e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jun  3 21:17:14.184: INFO: Pod "test-recreate-deployment-5f94c574ff-hqz99" is not available:
+&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-hqz99 test-recreate-deployment-5f94c574ff- deployment-9918 /api/v1/namespaces/deployment-9918/pods/test-recreate-deployment-5f94c574ff-hqz99 d6464981-e457-4578-be44-7c9115e95cf5 159991 0 2020-06-03 21:17:14 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff eb55f059-8317-4255-8682-319ad5c6ea8f 0xc001c00d57 0xc001c00d58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rtm7m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rtm7m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rtm7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:17:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:17:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:17:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:17:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:,StartTime:2020-06-03 21:17:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:17:14.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-9918" for this suite.
+Jun  3 21:17:20.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:17:20.303: INFO: namespace deployment-9918 deletion completed in 6.114930616s
+
+• [SLOW TEST:8.256 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
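+
+An illustrative sketch of what this test exercises: a Deployment whose strategy is Recreate, so a rollout scales the old ReplicaSet to zero before the new one creates pods. The namespace and object names below are assumptions, not taken from the run above; only the two images mirror the log.
+
+```
+# Create a Recreate-strategy Deployment, then trigger a rollout.
+kubectl create namespace recreate-demo
+cat <<'EOF' | kubectl apply -n recreate-demo -f -
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: test-recreate
+spec:
+  replicas: 1
+  strategy:
+    type: Recreate          # old pods are deleted before new ones start
+  selector:
+    matchLabels:
+      app: test-recreate
+  template:
+    metadata:
+      labels:
+        app: test-recreate
+    spec:
+      containers:
+      - name: main
+        image: docker.io/library/redis:5.0.5-alpine
+EOF
+# Change the image; the controller recreates rather than rolling over.
+kubectl -n recreate-demo set image deployment/test-recreate main=docker.io/library/httpd:2.4.38-alpine
+kubectl -n recreate-demo rollout status deployment/test-recreate
+```
+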
+SS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for services  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:17:20.303: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for services  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a test headless service
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5502.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5502.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5502.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5502.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5502.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5502.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5502.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5502.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5502.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5502.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5502.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 180.32.19.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.19.32.180_udp@PTR;check="$$(dig +tcp +noall +answer +search 180.32.19.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.19.32.180_tcp@PTR;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5502.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5502.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5502.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5502.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5502.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5502.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5502.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5502.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5502.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5502.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5502.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 180.32.19.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.19.32.180_udp@PTR;check="$$(dig +tcp +noall +answer +search 180.32.19.172.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/172.19.32.180_tcp@PTR;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jun  3 21:17:22.390: INFO: Unable to read wheezy_udp@dns-test-service.dns-5502.svc.cluster.local from pod dns-5502/dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137: the server could not find the requested resource (get pods dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137)
+Jun  3 21:17:22.394: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5502.svc.cluster.local from pod dns-5502/dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137: the server could not find the requested resource (get pods dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137)
+Jun  3 21:17:22.397: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local from pod dns-5502/dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137: the server could not find the requested resource (get pods dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137)
+Jun  3 21:17:22.399: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local from pod dns-5502/dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137: the server could not find the requested resource (get pods dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137)
+Jun  3 21:17:22.422: INFO: Unable to read jessie_udp@dns-test-service.dns-5502.svc.cluster.local from pod dns-5502/dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137: the server could not find the requested resource (get pods dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137)
+Jun  3 21:17:22.425: INFO: Unable to read jessie_tcp@dns-test-service.dns-5502.svc.cluster.local from pod dns-5502/dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137: the server could not find the requested resource (get pods dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137)
+Jun  3 21:17:22.428: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local from pod dns-5502/dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137: the server could not find the requested resource (get pods dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137)
+Jun  3 21:17:22.431: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local from pod dns-5502/dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137: the server could not find the requested resource (get pods dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137)
+Jun  3 21:17:22.448: INFO: Lookups using dns-5502/dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137 failed for: [wheezy_udp@dns-test-service.dns-5502.svc.cluster.local wheezy_tcp@dns-test-service.dns-5502.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local jessie_udp@dns-test-service.dns-5502.svc.cluster.local jessie_tcp@dns-test-service.dns-5502.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5502.svc.cluster.local]
+
+Jun  3 21:17:27.513: INFO: DNS probes using dns-5502/dns-test-6867bd8b-5f0c-44df-8391-f24d7a14a137 succeeded
+
+STEP: deleting the pod
+STEP: deleting the test service
+STEP: deleting the test headless service
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:17:27.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-5502" for this suite.
+Jun  3 21:17:33.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:17:33.710: INFO: namespace dns-5502 deletion completed in 6.105301131s
+
+• [SLOW TEST:13.407 seconds]
+[sig-network] DNS
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide DNS for services  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
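+
+An illustrative sketch of the lookups driven above: the probe pods loop over dig queries for the service's A and SRV records. The same check can be run by hand from any pod; the names below (dns-demo, dns-test-service, the default namespace) are assumptions, not from this run.
+
+```
+# A service to resolve, plus a throwaway probe pod.
+kubectl create deployment dns-demo --image=docker.io/library/httpd:2.4.38-alpine
+kubectl expose deployment dns-demo --port=80 --name=dns-test-service
+kubectl run dns-probe --image=docker.io/library/busybox:1.29 --restart=Never -- sleep 3600
+
+# Short name resolves via the search path; the FQDN is what the test digs for.
+kubectl exec dns-probe -- nslookup dns-test-service
+kubectl exec dns-probe -- nslookup dns-test-service.default.svc.cluster.local
+```
+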
+SSSSSSSSSS
+------------------------------
+[sig-node] ConfigMap 
+  should fail to create ConfigMap with empty key [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-node] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:17:33.710: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should fail to create ConfigMap with empty key [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap that has name configmap-test-emptyKey-5e2338c7-84fa-43e9-975c-3f6698d65cb1
+[AfterEach] [sig-node] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:17:33.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-9733" for this suite.
+Jun  3 21:17:39.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:17:39.852: INFO: namespace configmap-9733 deletion completed in 6.103294379s
+
+• [SLOW TEST:6.142 seconds]
+[sig-node] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
+  should fail to create ConfigMap with empty key [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
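+
+An illustrative sketch of the assertion: ConfigMap data keys must be non-empty (and match the usual key character rules), so the API server rejects the object below at validation time. The object name is an assumption.
+
+```
+# Expected to fail server-side validation; no ConfigMap is created.
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: configmap-empty-key
+data:
+  "": "value"
+EOF
+```
+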
+SSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should be able to deny attaching pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:17:39.853: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:17:40.778: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:17:43.802: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should be able to deny attaching pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Registering the webhook via the AdmissionRegistration API
+STEP: create a pod
+STEP: 'kubectl attach' the pod, should be denied by the webhook
+Jun  3 21:17:45.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 attach --namespace=webhook-533 to-be-attached-pod -i -c=container1'
+Jun  3 21:17:45.966: INFO: rc: 1
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:17:45.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-533" for this suite.
+Jun  3 21:17:58.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:17:58.138: INFO: namespace webhook-533 deletion completed in 12.141172063s
+STEP: Destroying namespace "webhook-533-markers" for this suite.
+Jun  3 21:18:04.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:18:04.257: INFO: namespace webhook-533-markers deletion completed in 6.119613362s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:24.429 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to deny attaching pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
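+
+An illustrative sketch of the registration step above: a ValidatingWebhookConfiguration intercepting CONNECT requests to the pods/attach subresource, which is what lets the webhook deny kubectl attach (the rc: 1 in the log). The service coordinates are assumptions, and caBundle is omitted; a working setup needs a running webhook server and a CA bundle the API server trusts.
+
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: admissionregistration.k8s.io/v1beta1
+kind: ValidatingWebhookConfiguration
+metadata:
+  name: deny-attaching-pod
+webhooks:
+- name: deny-attaching-pod.example.com
+  clientConfig:
+    service:
+      namespace: webhook-demo      # assumed; must host the webhook server
+      name: e2e-test-webhook
+      path: /pods/attach
+  rules:
+  - operations: ["CONNECT"]        # kubectl attach issues a CONNECT
+    apiGroups: [""]
+    apiVersions: ["v1"]
+    resources: ["pods/attach"]
+  failurePolicy: Fail
+EOF
+```
+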
+SS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:18:04.282: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
+[It] should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 21:18:04.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9e25653-07f9-4d17-98f2-32874293e2a7" in namespace "projected-6789" to be "success or failure"
+Jun  3 21:18:04.328: INFO: Pod "downwardapi-volume-f9e25653-07f9-4d17-98f2-32874293e2a7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.649663ms
+Jun  3 21:18:06.333: INFO: Pod "downwardapi-volume-f9e25653-07f9-4d17-98f2-32874293e2a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010586224s
+STEP: Saw pod success
+Jun  3 21:18:06.333: INFO: Pod "downwardapi-volume-f9e25653-07f9-4d17-98f2-32874293e2a7" satisfied condition "success or failure"
+Jun  3 21:18:06.337: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-f9e25653-07f9-4d17-98f2-32874293e2a7 container client-container: 
+STEP: delete the pod
+Jun  3 21:18:06.362: INFO: Waiting for pod downwardapi-volume-f9e25653-07f9-4d17-98f2-32874293e2a7 to disappear
+Jun  3 21:18:06.365: INFO: Pod downwardapi-volume-f9e25653-07f9-4d17-98f2-32874293e2a7 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:18:06.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-6789" for this suite.
+Jun  3 21:18:12.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:18:12.467: INFO: namespace projected-6789 deletion completed in 6.097618269s
+
+• [SLOW TEST:8.185 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
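+
+An illustrative sketch of the pod under test: a projected downwardAPI volume exposing only metadata.name, which the container prints and the framework reads back from the logs. Pod and path names are assumptions.
+
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: podname-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: docker.io/library/busybox:1.29
+    command: ["sh", "-c", "cat /etc/podinfo/podname"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: podname
+            fieldRef:
+              fieldPath: metadata.name
+EOF
+kubectl logs podname-demo    # prints "podname-demo" once the pod completes
+```
+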
+SSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl label 
+  should update the label on a resource  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:18:12.467: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Kubectl label
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1192
+STEP: creating the pod
+Jun  3 21:18:12.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-3039'
+Jun  3 21:18:12.749: INFO: stderr: ""
+Jun  3 21:18:12.749: INFO: stdout: "pod/pause created\n"
+Jun  3 21:18:12.749: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
+Jun  3 21:18:12.749: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3039" to be "running and ready"
+Jun  3 21:18:12.754: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.929872ms
+Jun  3 21:18:14.758: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.008764768s
+Jun  3 21:18:14.758: INFO: Pod "pause" satisfied condition "running and ready"
+Jun  3 21:18:14.758: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
+[It] should update the label on a resource  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: adding the label testing-label with value testing-label-value to a pod
+Jun  3 21:18:14.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 label pods pause testing-label=testing-label-value --namespace=kubectl-3039'
+Jun  3 21:18:14.855: INFO: stderr: ""
+Jun  3 21:18:14.855: INFO: stdout: "pod/pause labeled\n"
+STEP: verifying the pod has the label testing-label with the value testing-label-value
+Jun  3 21:18:14.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pod pause -L testing-label --namespace=kubectl-3039'
+Jun  3 21:18:14.946: INFO: stderr: ""
+Jun  3 21:18:14.946: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          2s    testing-label-value\n"
+STEP: removing the label testing-label of a pod
+Jun  3 21:18:14.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 label pods pause testing-label- --namespace=kubectl-3039'
+Jun  3 21:18:15.057: INFO: stderr: ""
+Jun  3 21:18:15.057: INFO: stdout: "pod/pause labeled\n"
+STEP: verifying the pod doesn't have the label testing-label
+Jun  3 21:18:15.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pod pause -L testing-label --namespace=kubectl-3039'
+Jun  3 21:18:15.149: INFO: stderr: ""
+Jun  3 21:18:15.149: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          3s    \n"
+[AfterEach] Kubectl label
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1199
+STEP: using delete to clean up resources
+Jun  3 21:18:15.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete --grace-period=0 --force -f - --namespace=kubectl-3039'
+Jun  3 21:18:15.252: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun  3 21:18:15.252: INFO: stdout: "pod \"pause\" force deleted\n"
+Jun  3 21:18:15.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get rc,svc -l name=pause --no-headers --namespace=kubectl-3039'
+Jun  3 21:18:15.352: INFO: stderr: "No resources found in kubectl-3039 namespace.\n"
+Jun  3 21:18:15.352: INFO: stdout: ""
+Jun  3 21:18:15.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -l name=pause --namespace=kubectl-3039 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jun  3 21:18:15.448: INFO: stderr: ""
+Jun  3 21:18:15.448: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:18:15.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-3039" for this suite.
+Jun  3 21:18:21.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:18:21.560: INFO: namespace kubectl-3039 deletion completed in 6.1076469s
+
+• [SLOW TEST:9.093 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl label
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
+    should update the label on a resource  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
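+
+The label round-trip above reduces to four kubectl invocations; this sketch is runnable against any cluster (the pause image tag is an assumption, the label key and value mirror the log):
+
+```
+kubectl run pause --image=k8s.gcr.io/pause:3.1 --restart=Never
+kubectl label pods pause testing-label=testing-label-value
+kubectl get pod pause -L testing-label    # TESTING-LABEL column shows the value
+kubectl label pods pause testing-label-   # trailing '-' removes the label
+kubectl get pod pause -L testing-label    # column is now empty
+```
+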
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Aggregator 
+  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Aggregator
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:18:21.561: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename aggregator
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] Aggregator
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77
+Jun  3 21:18:21.594: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Registering the sample API server.
+Jun  3 21:18:22.152: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
+Jun  3 21:18:24.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:18:26.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:18:28.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:18:30.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:18:32.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:18:34.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:18:36.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726815902, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:18:39.063: INFO: Waited 824.026561ms for the sample-apiserver to be ready to handle requests.
+[AfterEach] [sig-api-machinery] Aggregator
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68
+[AfterEach] [sig-api-machinery] Aggregator
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:18:39.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "aggregator-4943" for this suite.
+Jun  3 21:18:45.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:18:45.793: INFO: namespace aggregator-4943 deletion completed in 6.193069019s
+
+• [SLOW TEST:24.233 seconds]
+[sig-api-machinery] Aggregator
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
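+
+An illustrative sketch of the registration the test performs: an APIService object pointing the aggregation layer at an in-cluster extension API server. The group, version and service coordinates are assumptions modelled on the sample-apiserver; the deployment being proxied to must already exist.
+
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: apiregistration.k8s.io/v1
+kind: APIService
+metadata:
+  name: v1alpha1.wardle.example.com
+spec:
+  group: wardle.example.com
+  version: v1alpha1
+  service:
+    name: sample-api             # assumed Service fronting the extension server
+    namespace: aggregator-demo
+  insecureSkipTLSVerify: true    # a real setup should pin caBundle instead
+  groupPriorityMinimum: 2000
+  versionPriority: 200
+EOF
+kubectl get apiservice v1alpha1.wardle.example.com   # Available once it serves
+```
+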
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl run default 
+  should create an rc or deployment from an image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:18:45.794: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Kubectl run default
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1403
+[It] should create an rc or deployment from an image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: running the image docker.io/library/httpd:2.4.38-alpine
+Jun  3 21:18:45.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6141'
+Jun  3 21:18:45.953: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun  3 21:18:45.953: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
+STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
+[AfterEach] Kubectl run default
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1409
+Jun  3 21:18:45.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete deployment e2e-test-httpd-deployment --namespace=kubectl-6141'
+Jun  3 21:18:46.071: INFO: stderr: ""
+Jun  3 21:18:46.071: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:18:46.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-6141" for this suite.
+Jun  3 21:18:58.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:18:58.185: INFO: namespace kubectl-6141 deletion completed in 12.108983722s
+
+• [SLOW TEST:12.392 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl run default
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397
+    should create an rc or deployment from an image  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
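+
+The stderr above is the v1.16 deprecation of the default generator: a bare kubectl run still created a Deployment via deployment/apps.v1. The two non-deprecated forms it points at:
+
+```
+# Pod, via the run-pod/v1 generator (the default when --restart=Never):
+kubectl run e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine --restart=Never
+# Deployment, via kubectl create:
+kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
+```
+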
+SSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:18:58.186: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
+[It] should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 21:18:58.224: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af536273-f997-48ff-b468-cdd3fe765410" in namespace "projected-1226" to be "success or failure"
+Jun  3 21:18:58.230: INFO: Pod "downwardapi-volume-af536273-f997-48ff-b468-cdd3fe765410": Phase="Pending", Reason="", readiness=false. Elapsed: 5.453406ms
+Jun  3 21:19:00.235: INFO: Pod "downwardapi-volume-af536273-f997-48ff-b468-cdd3fe765410": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010371488s
+STEP: Saw pod success
+Jun  3 21:19:00.235: INFO: Pod "downwardapi-volume-af536273-f997-48ff-b468-cdd3fe765410" satisfied condition "success or failure"
+Jun  3 21:19:00.238: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-af536273-f997-48ff-b468-cdd3fe765410 container client-container: 
+STEP: delete the pod
+Jun  3 21:19:00.260: INFO: Waiting for pod downwardapi-volume-af536273-f997-48ff-b468-cdd3fe765410 to disappear
+Jun  3 21:19:00.264: INFO: Pod downwardapi-volume-af536273-f997-48ff-b468-cdd3fe765410 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:19:00.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-1226" for this suite.
+Jun  3 21:19:06.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:19:06.376: INFO: namespace projected-1226 deletion completed in 6.107672588s
+
+• [SLOW TEST:8.190 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
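+
+An illustrative sketch of the pod under test: a projected downwardAPI volume exposing the container's own cpu limit through resourceFieldRef. Names and the 500m limit are assumptions; with divisor 1m the mounted file contains "500".
+
+```
+cat <<'EOF' | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: cpu-limit-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: docker.io/library/busybox:1.29
+    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
+    resources:
+      limits:
+        cpu: 500m
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: cpu_limit
+            resourceFieldRef:
+              containerName: client-container
+              resource: limits.cpu
+              divisor: 1m
+EOF
+kubectl logs cpu-limit-demo    # prints "500" once the pod completes
+```
+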
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:19:06.377: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
+[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: updating the pod
+Jun  3 21:19:08.945: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0ffc7990-bf66-4741-a59d-7a824ec8088d"
+Jun  3 21:19:08.945: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0ffc7990-bf66-4741-a59d-7a824ec8088d" in namespace "pods-518" to be "terminated due to deadline exceeded"
+Jun  3 21:19:08.949: INFO: Pod "pod-update-activedeadlineseconds-0ffc7990-bf66-4741-a59d-7a824ec8088d": Phase="Running", Reason="", readiness=true. Elapsed: 3.310424ms
+Jun  3 21:19:10.953: INFO: Pod "pod-update-activedeadlineseconds-0ffc7990-bf66-4741-a59d-7a824ec8088d": Phase="Running", Reason="", readiness=true. Elapsed: 2.00763767s
+Jun  3 21:19:12.958: INFO: Pod "pod-update-activedeadlineseconds-0ffc7990-bf66-4741-a59d-7a824ec8088d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.012374979s
+Jun  3 21:19:12.958: INFO: Pod "pod-update-activedeadlineseconds-0ffc7990-bf66-4741-a59d-7a824ec8088d" satisfied condition "terminated due to deadline exceeded"
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:19:12.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-518" for this suite.
+Jun  3 21:19:18.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:19:19.066: INFO: namespace pods-518 deletion completed in 6.104100482s
+
+• [SLOW TEST:12.690 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
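+
+An illustrative sketch of the update: spec.activeDeadlineSeconds is one of the few pod fields that may be changed on a running pod (it can be set or lowered, never raised or cleared); once the deadline passes, the kubelet fails the pod with reason DeadlineExceeded, the "terminated due to deadline exceeded" condition above. Names are assumptions.
+
+```
+kubectl run deadline-demo --image=docker.io/library/busybox:1.29 --restart=Never -- sleep 3600
+kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
+sleep 10
+kubectl get pod deadline-demo    # STATUS shows DeadlineExceeded
+```
+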
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:19:19.067: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
+[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 21:19:19.110: INFO: Waiting up to 5m0s for pod "downwardapi-volume-956ee09b-31b9-4d0d-8bbb-573648cbd919" in namespace "projected-150" to be "success or failure"
+Jun  3 21:19:19.115: INFO: Pod "downwardapi-volume-956ee09b-31b9-4d0d-8bbb-573648cbd919": Phase="Pending", Reason="", readiness=false. Elapsed: 4.71598ms
+Jun  3 21:19:21.119: INFO: Pod "downwardapi-volume-956ee09b-31b9-4d0d-8bbb-573648cbd919": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009169016s
+STEP: Saw pod success
+Jun  3 21:19:21.119: INFO: Pod "downwardapi-volume-956ee09b-31b9-4d0d-8bbb-573648cbd919" satisfied condition "success or failure"
+Jun  3 21:19:21.122: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-956ee09b-31b9-4d0d-8bbb-573648cbd919 container client-container: 
+STEP: delete the pod
+Jun  3 21:19:21.143: INFO: Waiting for pod downwardapi-volume-956ee09b-31b9-4d0d-8bbb-573648cbd919 to disappear
+Jun  3 21:19:21.147: INFO: Pod downwardapi-volume-956ee09b-31b9-4d0d-8bbb-573648cbd919 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:19:21.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-150" for this suite.
+Jun  3 21:19:27.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:19:27.257: INFO: namespace projected-150 deletion completed in 6.104132899s
+
+• [SLOW TEST:8.191 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
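+
+This variant of the previous downwardAPI check sets no cpu limit at all; resourceFieldRef then falls back to the node's allocatable cpu. Dropping the resources block from the cpu-limit-demo sketch above reproduces it, and the expected value can be cross-checked directly:
+
+```
+kubectl get nodes -o jsonpath='{.items[0].status.allocatable.cpu}'
+```
+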
+SSSSS
+------------------------------
+[k8s.io] Container Runtime blackbox test on terminated container 
+  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:19:27.258: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-runtime
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the container
+STEP: wait for the container to reach Succeeded
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Jun  3 21:19:29.312: INFO: Expected: &{} to match Container's Termination Message:  --
+STEP: delete the container
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:19:29.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-8323" for this suite.
+Jun  3 21:19:35.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:19:35.455: INFO: namespace container-runtime-8323 deletion completed in 6.121003885s
+
+• [SLOW TEST:8.197 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  blackbox test
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
+    on terminated container
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132
+      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+      /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute poststart exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:19:35.455: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the pod with lifecycle hook
+STEP: check poststart hook
+STEP: delete the pod with lifecycle hook
+Jun  3 21:19:41.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun  3 21:19:41.548: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun  3 21:19:43.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun  3 21:19:43.551: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun  3 21:19:45.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun  3 21:19:45.553: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun  3 21:19:47.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun  3 21:19:47.553: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun  3 21:19:49.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun  3 21:19:49.553: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun  3 21:19:51.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun  3 21:19:51.553: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun  3 21:19:53.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun  3 21:19:53.552: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun  3 21:19:55.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun  3 21:19:55.552: INFO: Pod pod-with-poststart-exec-hook no longer exists
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:19:55.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-5195" for this suite.
+Jun  3 21:20:07.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:20:07.661: INFO: namespace container-lifecycle-hook-5195 deletion completed in 12.10467608s
+
+• [SLOW TEST:32.206 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
+    should execute poststart exec hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should support rollover [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:20:07.661: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
+[It] deployment should support rollover [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:20:07.701: INFO: Pod name rollover-pod: Found 0 pods out of 1
+Jun  3 21:20:12.706: INFO: Pod name rollover-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Jun  3 21:20:12.706: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
+Jun  3 21:20:14.710: INFO: Creating deployment "test-rollover-deployment"
+Jun  3 21:20:14.718: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
+Jun  3 21:20:16.729: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
+Jun  3 21:20:16.735: INFO: Ensure that both replica sets have 1 created replica
+Jun  3 21:20:16.741: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
+Jun  3 21:20:16.750: INFO: Updating deployment test-rollover-deployment
+Jun  3 21:20:16.750: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
+Jun  3 21:20:18.760: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
+Jun  3 21:20:18.766: INFO: Make sure deployment "test-rollover-deployment" is complete
+Jun  3 21:20:18.772: INFO: all replica sets need to contain the pod-template-hash label
+Jun  3 21:20:18.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816018, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:20:20.780: INFO: all replica sets need to contain the pod-template-hash label
+Jun  3 21:20:20.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816018, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:20:22.780: INFO: all replica sets need to contain the pod-template-hash label
+Jun  3 21:20:22.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816018, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:20:24.780: INFO: all replica sets need to contain the pod-template-hash label
+Jun  3 21:20:24.781: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816018, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:20:26.780: INFO: all replica sets need to contain the pod-template-hash label
+Jun  3 21:20:26.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816018, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816014, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun  3 21:20:28.781: INFO: 
+Jun  3 21:20:28.781: INFO: Ensure that both old replica sets have no replicas
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
+Jun  3 21:20:28.791: INFO: Deployment "test-rollover-deployment":
+&Deployment{ObjectMeta:{test-rollover-deployment  deployment-1826 /apis/apps/v1/namespaces/deployment-1826/deployments/test-rollover-deployment 084c5061-729f-4f07-b971-f207b3a58d45 160919 2 2020-06-03 21:20:14 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046ba738  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-03 21:20:14 +0000 UTC,LastTransitionTime:2020-06-03 21:20:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7d7dc6548c" has successfully progressed.,LastUpdateTime:2020-06-03 21:20:28 +0000 UTC,LastTransitionTime:2020-06-03 21:20:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
+
+Jun  3 21:20:28.795: INFO: New ReplicaSet "test-rollover-deployment-7d7dc6548c" of Deployment "test-rollover-deployment":
+&ReplicaSet{ObjectMeta:{test-rollover-deployment-7d7dc6548c  deployment-1826 /apis/apps/v1/namespaces/deployment-1826/replicasets/test-rollover-deployment-7d7dc6548c 84870132-1e87-4fd7-b21a-eb25457b7539 160908 2 2020-06-03 21:20:16 +0000 UTC   map[name:rollover-pod pod-template-hash:7d7dc6548c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 084c5061-729f-4f07-b971-f207b3a58d45 0xc0046babe7 0xc0046babe8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7d7dc6548c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:7d7dc6548c] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046bac48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
+Jun  3 21:20:28.795: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
+Jun  3 21:20:28.795: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-1826 /apis/apps/v1/namespaces/deployment-1826/replicasets/test-rollover-controller d2d3aeec-a76b-4240-a9fa-0b6f51473bba 160918 2 2020-06-03 21:20:07 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 084c5061-729f-4f07-b971-f207b3a58d45 0xc0046bab17 0xc0046bab18}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0046bab78  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jun  3 21:20:28.795: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-1826 /apis/apps/v1/namespaces/deployment-1826/replicasets/test-rollover-deployment-f6c94f66c edd3b451-5b7e-47a4-9de4-aead4cad7618 160877 2 2020-06-03 21:20:14 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 084c5061-729f-4f07-b971-f207b3a58d45 0xc0046bacb0 0xc0046bacb1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0046bad28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jun  3 21:20:28.799: INFO: Pod "test-rollover-deployment-7d7dc6548c-f2898" is available:
+&Pod{ObjectMeta:{test-rollover-deployment-7d7dc6548c-f2898 test-rollover-deployment-7d7dc6548c- deployment-1826 /api/v1/namespaces/deployment-1826/pods/test-rollover-deployment-7d7dc6548c-f2898 7f22b83a-17f6-477e-be04-c94f8931c32b 160887 0 2020-06-03 21:20:16 +0000 UTC   map[name:rollover-pod pod-template-hash:7d7dc6548c] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7d7dc6548c 84870132-1e87-4fd7-b21a-eb25457b7539 0xc0046bb277 0xc0046bb278}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9tkqz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9tkqz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9tkqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:20:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:20:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:20:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:20:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:172.20.2.150,StartTime:2020-06-03 21:20:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:20:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://25d159bbf2f88ae1609792afcdb511466fee8644f311e99313357f766a30e4ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.2.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:20:28.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-1826" for this suite.
+Jun  3 21:20:34.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:20:34.905: INFO: namespace deployment-1826 deletion completed in 6.101507502s
+
+• [SLOW TEST:27.243 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  deployment should support rollover [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:20:34.905: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
+[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 21:20:34.945: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcc713f9-53b5-46a5-992a-21921d853c9a" in namespace "downward-api-6927" to be "success or failure"
+Jun  3 21:20:34.951: INFO: Pod "downwardapi-volume-fcc713f9-53b5-46a5-992a-21921d853c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.710948ms
+Jun  3 21:20:36.956: INFO: Pod "downwardapi-volume-fcc713f9-53b5-46a5-992a-21921d853c9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010556006s
+STEP: Saw pod success
+Jun  3 21:20:36.956: INFO: Pod "downwardapi-volume-fcc713f9-53b5-46a5-992a-21921d853c9a" satisfied condition "success or failure"
+Jun  3 21:20:36.958: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-fcc713f9-53b5-46a5-992a-21921d853c9a container client-container: 
+STEP: delete the pod
+Jun  3 21:20:36.977: INFO: Waiting for pod downwardapi-volume-fcc713f9-53b5-46a5-992a-21921d853c9a to disappear
+Jun  3 21:20:36.981: INFO: Pod downwardapi-volume-fcc713f9-53b5-46a5-992a-21921d853c9a no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:20:36.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-6927" for this suite.
+Jun  3 21:20:43.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:20:43.104: INFO: namespace downward-api-6927 deletion completed in 6.118261601s
+
+• [SLOW TEST:8.199 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Docker Containers 
+  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:20:43.104: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test override command
+Jun  3 21:20:43.152: INFO: Waiting up to 5m0s for pod "client-containers-62d6857e-dd63-44dd-b689-076e10fc612c" in namespace "containers-3988" to be "success or failure"
+Jun  3 21:20:43.155: INFO: Pod "client-containers-62d6857e-dd63-44dd-b689-076e10fc612c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.400293ms
+Jun  3 21:20:45.160: INFO: Pod "client-containers-62d6857e-dd63-44dd-b689-076e10fc612c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008069169s
+STEP: Saw pod success
+Jun  3 21:20:45.160: INFO: Pod "client-containers-62d6857e-dd63-44dd-b689-076e10fc612c" satisfied condition "success or failure"
+Jun  3 21:20:45.163: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod client-containers-62d6857e-dd63-44dd-b689-076e10fc612c container test-container: 
+STEP: delete the pod
+Jun  3 21:20:45.186: INFO: Waiting for pod client-containers-62d6857e-dd63-44dd-b689-076e10fc612c to disappear
+Jun  3 21:20:45.189: INFO: Pod client-containers-62d6857e-dd63-44dd-b689-076e10fc612c no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:20:45.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "containers-3988" for this suite.
+Jun  3 21:20:51.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:20:51.297: INFO: namespace containers-3988 deletion completed in 6.103486933s
+
+• [SLOW TEST:8.193 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should run and stop simple daemon [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:20:51.297: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should run and stop simple daemon [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating simple DaemonSet "daemon-set"
+STEP: Check that daemon pods launch on every node of the cluster.
+Jun  3 21:20:51.374: INFO: Number of nodes with available pods: 0
+Jun  3 21:20:51.374: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:20:52.384: INFO: Number of nodes with available pods: 0
+Jun  3 21:20:52.384: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:20:53.383: INFO: Number of nodes with available pods: 5
+Jun  3 21:20:53.383: INFO: Number of running nodes: 5, number of available pods: 5
+STEP: Stop a daemon pod, check that the daemon pod is revived.
+Jun  3 21:20:53.405: INFO: Number of nodes with available pods: 4
+Jun  3 21:20:53.405: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:20:54.416: INFO: Number of nodes with available pods: 4
+Jun  3 21:20:54.416: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:20:55.416: INFO: Number of nodes with available pods: 4
+Jun  3 21:20:55.416: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:20:56.414: INFO: Number of nodes with available pods: 4
+Jun  3 21:20:56.414: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:20:57.416: INFO: Number of nodes with available pods: 4
+Jun  3 21:20:57.416: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:20:58.416: INFO: Number of nodes with available pods: 4
+Jun  3 21:20:58.416: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:20:59.414: INFO: Number of nodes with available pods: 4
+Jun  3 21:20:59.414: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:21:00.415: INFO: Number of nodes with available pods: 4
+Jun  3 21:21:00.415: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:21:01.415: INFO: Number of nodes with available pods: 4
+Jun  3 21:21:01.415: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:21:02.416: INFO: Number of nodes with available pods: 4
+Jun  3 21:21:02.416: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:21:03.415: INFO: Number of nodes with available pods: 4
+Jun  3 21:21:03.415: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:21:04.416: INFO: Number of nodes with available pods: 4
+Jun  3 21:21:04.416: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:21:05.417: INFO: Number of nodes with available pods: 4
+Jun  3 21:21:05.417: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:21:06.415: INFO: Number of nodes with available pods: 4
+Jun  3 21:21:06.415: INFO: Node karbon-certification-ff5a6a-k8s-worker-1 is running more than one daemon pod
+Jun  3 21:21:07.415: INFO: Number of nodes with available pods: 5
+Jun  3 21:21:07.415: INFO: Number of running nodes: 5, number of available pods: 5
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-225, will wait for the garbage collector to delete the pods
+Jun  3 21:21:07.485: INFO: Deleting DaemonSet.extensions daemon-set took: 13.65423ms
+Jun  3 21:21:07.886: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.345198ms
+Jun  3 21:21:18.390: INFO: Number of nodes with available pods: 0
+Jun  3 21:21:18.390: INFO: Number of running nodes: 0, number of available pods: 0
+Jun  3 21:21:18.392: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-225/daemonsets","resourceVersion":"161197"},"items":null}
+
+Jun  3 21:21:18.394: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-225/pods","resourceVersion":"161197"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:21:18.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-225" for this suite.
+Jun  3 21:21:24.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:21:24.527: INFO: namespace daemonsets-225 deletion completed in 6.111527833s
+
+• [SLOW TEST:33.230 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should run and stop simple daemon [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] EmptyDir wrapper volumes 
+  should not conflict [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:21:24.527: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir-wrapper
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not conflict [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Cleaning up the secret
+STEP: Cleaning up the configmap
+STEP: Cleaning up the pod
+[AfterEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:21:26.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-wrapper-8521" for this suite.
+Jun  3 21:21:32.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:21:32.721: INFO: namespace emptydir-wrapper-8521 deletion completed in 6.101841001s
+
+• [SLOW TEST:8.194 seconds]
+[sig-storage] EmptyDir wrapper volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
+  should not conflict [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+S
+------------------------------
+[sig-storage] Downward API volume 
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:21:32.721: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
+[It] should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating the pod
+Jun  3 21:21:37.295: INFO: Successfully updated pod "annotationupdatea987e0d0-8c0c-40af-a251-bb8892b77f58"
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:21:39.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-5739" for this suite.
+Jun  3 21:21:57.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:21:57.421: INFO: namespace downward-api-5739 deletion completed in 18.102303183s
+
+• [SLOW TEST:24.700 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:21:57.422: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name secret-test-42c122c2-053a-4bcc-992f-e13fb4b9f4ce
+STEP: Creating a pod to test consume secrets
+Jun  3 21:21:57.462: INFO: Waiting up to 5m0s for pod "pod-secrets-bb5a670a-fd6a-4671-a2c0-7713ac427d05" in namespace "secrets-6844" to be "success or failure"
+Jun  3 21:21:57.466: INFO: Pod "pod-secrets-bb5a670a-fd6a-4671-a2c0-7713ac427d05": Phase="Pending", Reason="", readiness=false. Elapsed: 3.542192ms
+Jun  3 21:21:59.480: INFO: Pod "pod-secrets-bb5a670a-fd6a-4671-a2c0-7713ac427d05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017826544s
+STEP: Saw pod success
+Jun  3 21:21:59.480: INFO: Pod "pod-secrets-bb5a670a-fd6a-4671-a2c0-7713ac427d05" satisfied condition "success or failure"
+Jun  3 21:21:59.483: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-secrets-bb5a670a-fd6a-4671-a2c0-7713ac427d05 container secret-volume-test: 
+STEP: delete the pod
+Jun  3 21:21:59.502: INFO: Waiting for pod pod-secrets-bb5a670a-fd6a-4671-a2c0-7713ac427d05 to disappear
+Jun  3 21:21:59.505: INFO: Pod pod-secrets-bb5a670a-fd6a-4671-a2c0-7713ac427d05 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:21:59.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-6844" for this suite.
+Jun  3 21:22:05.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:22:05.620: INFO: namespace secrets-6844 deletion completed in 6.111066392s
+
+• [SLOW TEST:8.198 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should deny crd creation [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:22:05.620: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:22:06.820: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:22:09.847: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should deny crd creation [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Registering the crd webhook via the AdmissionRegistration API
+STEP: Creating a custom resource definition that should be denied by the webhook
+Jun  3 21:22:09.870: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:22:09.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-8451" for this suite.
+Jun  3 21:22:15.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:22:15.993: INFO: namespace webhook-8451 deletion completed in 6.105099275s
+STEP: Destroying namespace "webhook-8451-markers" for this suite.
+Jun  3 21:22:22.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:22:22.098: INFO: namespace webhook-8451-markers deletion completed in 6.104189327s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:16.493 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should deny crd creation [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSS
+------------------------------
+[sig-apps] ReplicationController 
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:22:22.113: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename replication-controller
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating replication controller my-hostname-basic-ff8154cf-2e67-43ba-8705-3ea00af16f6b
+Jun  3 21:22:22.153: INFO: Pod name my-hostname-basic-ff8154cf-2e67-43ba-8705-3ea00af16f6b: Found 0 pods out of 1
+Jun  3 21:22:27.158: INFO: Pod name my-hostname-basic-ff8154cf-2e67-43ba-8705-3ea00af16f6b: Found 1 pods out of 1
+Jun  3 21:22:27.158: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ff8154cf-2e67-43ba-8705-3ea00af16f6b" are running
+Jun  3 21:22:27.161: INFO: Pod "my-hostname-basic-ff8154cf-2e67-43ba-8705-3ea00af16f6b-gqj6w" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 21:22:22 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 21:22:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 21:22:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-03 21:22:22 +0000 UTC Reason: Message:}])
+Jun  3 21:22:27.161: INFO: Trying to dial the pod
+Jun  3 21:22:32.173: INFO: Controller my-hostname-basic-ff8154cf-2e67-43ba-8705-3ea00af16f6b: Got expected result from replica 1 [my-hostname-basic-ff8154cf-2e67-43ba-8705-3ea00af16f6b-gqj6w]: "my-hostname-basic-ff8154cf-2e67-43ba-8705-3ea00af16f6b-gqj6w", 1 of 1 required successes so far
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:22:32.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replication-controller-3990" for this suite.
+Jun  3 21:22:38.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:22:38.277: INFO: namespace replication-controller-3990 deletion completed in 6.100451822s
+
+• [SLOW TEST:16.164 seconds]
+[sig-apps] ReplicationController
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:22:38.278: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
+[It] should be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: updating the pod
+Jun  3 21:22:40.848: INFO: Successfully updated pod "pod-update-cf6da561-5297-40c1-9142-10f42b227dbe"
+STEP: verifying the updated pod is in kubernetes
+Jun  3 21:22:40.854: INFO: Pod update OK
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:22:40.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-249" for this suite.
+Jun  3 21:23:08.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:23:08.996: INFO: namespace pods-249 deletion completed in 28.137171512s
+
+• [SLOW TEST:30.718 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl patch 
+  should add annotations for pods in rc  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:23:08.996: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[It] should add annotations for pods in rc  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating Redis RC
+Jun  3 21:23:09.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-6473'
+Jun  3 21:23:09.572: INFO: stderr: ""
+Jun  3 21:23:09.572: INFO: stdout: "replicationcontroller/redis-master created\n"
+STEP: Waiting for Redis master to start.
+Jun  3 21:23:10.577: INFO: Selector matched 1 pods for map[app:redis]
+Jun  3 21:23:10.577: INFO: Found 0 / 1
+Jun  3 21:23:11.576: INFO: Selector matched 1 pods for map[app:redis]
+Jun  3 21:23:11.577: INFO: Found 1 / 1
+Jun  3 21:23:11.577: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+STEP: patching all pods
+Jun  3 21:23:11.580: INFO: Selector matched 1 pods for map[app:redis]
+Jun  3 21:23:11.580: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Jun  3 21:23:11.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 patch pod redis-master-72tcg --namespace=kubectl-6473 -p {"metadata":{"annotations":{"x":"y"}}}'
+Jun  3 21:23:11.756: INFO: stderr: ""
+Jun  3 21:23:11.756: INFO: stdout: "pod/redis-master-72tcg patched\n"
+STEP: checking annotations
+Jun  3 21:23:11.760: INFO: Selector matched 1 pods for map[app:redis]
+Jun  3 21:23:11.761: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:23:11.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-6473" for this suite.
+Jun  3 21:23:39.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:23:39.878: INFO: namespace kubectl-6473 deletion completed in 28.112909253s
+
+• [SLOW TEST:30.882 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl patch
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1346
+    should add annotations for pods in rc  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
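+
+(For reference: the patch command logged above is printed without shell quoting. Run by hand it needs the JSON payload quoted; the pod and namespace names below are the ones from this particular run and will differ elsewhere:)
+
+    kubectl patch pod redis-master-72tcg --namespace=kubectl-6473 \
+      -p '{"metadata":{"annotations":{"x":"y"}}}'
+    # confirm the annotation was applied
+    kubectl get pod redis-master-72tcg -n kubectl-6473 \
+      -o jsonpath='{.metadata.annotations.x}'   # prints: y
+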
+SSSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
+  creating/deleting custom resource definition objects works  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:23:39.878: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename custom-resource-definition
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] creating/deleting custom resource definition objects works  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:23:39.909: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:23:40.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "custom-resource-definition-9140" for this suite.
+Jun  3 21:23:46.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:23:47.056: INFO: namespace custom-resource-definition-9140 deletion completed in 6.11672487s
+
+• [SLOW TEST:7.178 seconds]
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  Simple CustomResourceDefinition
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42
+    creating/deleting custom resource definition objects works  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
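+
+(For reference: the CRD test logs nothing beyond the kubeconfig, so as context, a minimal apiextensions.k8s.io/v1 definition — the API that went GA in v1.16 — that can be created and deleted the same way; the group and kind are illustrative:)
+
+    cat <<'EOF' | kubectl apply -f -
+    apiVersion: apiextensions.k8s.io/v1
+    kind: CustomResourceDefinition
+    metadata:
+      name: widgets.example.com
+    spec:
+      group: example.com
+      scope: Namespaced
+      names:
+        plural: widgets
+        singular: widget
+        kind: Widget
+      versions:
+      - name: v1
+        served: true
+        storage: true
+        schema:
+          openAPIV3Schema:
+            type: object
+    EOF
+    kubectl delete crd widgets.example.com
+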
+SSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:23:47.056: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name projected-configmap-test-volume-map-905dea11-d43b-458d-8d01-5f08800859de
+STEP: Creating a pod to test consume configMaps
+Jun  3 21:23:47.105: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-18257ddf-2613-422b-8757-a837d1eb49ea" in namespace "projected-5346" to be "success or failure"
+Jun  3 21:23:47.120: INFO: Pod "pod-projected-configmaps-18257ddf-2613-422b-8757-a837d1eb49ea": Phase="Pending", Reason="", readiness=false. Elapsed: 14.978589ms
+Jun  3 21:23:49.125: INFO: Pod "pod-projected-configmaps-18257ddf-2613-422b-8757-a837d1eb49ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020239987s
+STEP: Saw pod success
+Jun  3 21:23:49.125: INFO: Pod "pod-projected-configmaps-18257ddf-2613-422b-8757-a837d1eb49ea" satisfied condition "success or failure"
+Jun  3 21:23:49.128: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-projected-configmaps-18257ddf-2613-422b-8757-a837d1eb49ea container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun  3 21:23:49.165: INFO: Waiting for pod pod-projected-configmaps-18257ddf-2613-422b-8757-a837d1eb49ea to disappear
+Jun  3 21:23:49.167: INFO: Pod pod-projected-configmaps-18257ddf-2613-422b-8757-a837d1eb49ea no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:23:49.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-5346" for this suite.
+Jun  3 21:23:55.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:23:55.272: INFO: namespace projected-5346 deletion completed in 6.101121146s
+
+• [SLOW TEST:8.216 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
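+
+(For reference: "with mappings" refers to the items list that renames a configMap key inside the projected volume. A sketch of the pattern, all names illustrative:)
+
+    kubectl create configmap demo-config --from-literal=original-key=hello
+    cat <<'EOF' | kubectl apply -f -
+    apiVersion: v1
+    kind: Pod
+    metadata:
+      name: projected-cm-demo
+    spec:
+      restartPolicy: Never
+      containers:
+      - name: projected-configmap-volume-test
+        image: busybox
+        command: ["cat", "/etc/projected/renamed-key"]
+        volumeMounts:
+        - name: cfg
+          mountPath: /etc/projected
+      volumes:
+      - name: cfg
+        projected:
+          sources:
+          - configMap:
+              name: demo-config
+              items:
+              - key: original-key
+                path: renamed-key
+    EOF
+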
+SSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:23:55.273: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
+[It] should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 21:23:55.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2d83d8d-4526-411a-aad3-8e7d87724975" in namespace "downward-api-2190" to be "success or failure"
+Jun  3 21:23:55.317: INFO: Pod "downwardapi-volume-d2d83d8d-4526-411a-aad3-8e7d87724975": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541092ms
+Jun  3 21:23:57.321: INFO: Pod "downwardapi-volume-d2d83d8d-4526-411a-aad3-8e7d87724975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007961028s
+STEP: Saw pod success
+Jun  3 21:23:57.321: INFO: Pod "downwardapi-volume-d2d83d8d-4526-411a-aad3-8e7d87724975" satisfied condition "success or failure"
+Jun  3 21:23:57.324: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-d2d83d8d-4526-411a-aad3-8e7d87724975 container client-container: 
+STEP: delete the pod
+Jun  3 21:23:57.363: INFO: Waiting for pod downwardapi-volume-d2d83d8d-4526-411a-aad3-8e7d87724975 to disappear
+Jun  3 21:23:57.366: INFO: Pod downwardapi-volume-d2d83d8d-4526-411a-aad3-8e7d87724975 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:23:57.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-2190" for this suite.
+Jun  3 21:24:03.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:24:03.475: INFO: namespace downward-api-2190 deletion completed in 6.103531679s
+
+• [SLOW TEST:8.202 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
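+
+(For reference: the downward API volume pattern exercised here exposes pod metadata as files inside the container; a minimal sketch, names illustrative:)
+
+    cat <<'EOF' | kubectl apply -f -
+    apiVersion: v1
+    kind: Pod
+    metadata:
+      name: downward-demo
+    spec:
+      restartPolicy: Never
+      containers:
+      - name: client-container
+        image: busybox
+        command: ["cat", "/etc/podinfo/podname"]
+        volumeMounts:
+        - name: podinfo
+          mountPath: /etc/podinfo
+      volumes:
+      - name: podinfo
+        downwardAPI:
+          items:
+          - path: podname
+            fieldRef:
+              fieldPath: metadata.name
+    EOF
+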
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  listing validating webhooks should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:24:03.475: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:24:04.417: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jun  3 21:24:06.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816244, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816244, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816244, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816244, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:24:09.451: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
+[It] listing validating webhooks should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Listing all of the created validation webhooks
+STEP: Creating a configMap that does not comply to the validation webhook rules
+STEP: Deleting the collection of validation webhooks
+STEP: Creating a configMap that does not comply to the validation webhook rules
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:24:09.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-9172" for this suite.
+Jun  3 21:24:15.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:24:15.761: INFO: namespace webhook-9172 deletion completed in 6.097108229s
+STEP: Destroying namespace "webhook-9172-markers" for this suite.
+Jun  3 21:24:21.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:24:21.863: INFO: namespace webhook-9172-markers deletion completed in 6.102152662s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:18.402 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  listing validating webhooks should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
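+
+(For reference: listing and collection-deleting validating webhook configurations, which the test above drives through the API, has a direct kubectl equivalent; the label selector below is illustrative, as the test's actual labels are not printed in this log:)
+
+    kubectl get validatingwebhookconfigurations
+    kubectl delete validatingwebhookconfigurations -l example-selector=e2e-demo
+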
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for pods for Subdomain [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:24:21.878: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for pods for Subdomain [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a test headless service
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3434.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3434.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3434.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3434.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3434.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3434.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3434.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3434.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3434.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3434.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3434.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jun  3 21:24:26.047: INFO: DNS probes using dns-3434/dns-test-e615681f-6893-49b3-a8b7-26eac9bacec7 succeeded
+
+STEP: deleting the pod
+STEP: deleting the test headless service
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:24:26.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-3434" for this suite.
+Jun  3 21:24:32.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:24:32.206: INFO: namespace dns-3434 deletion completed in 6.118080423s
+
+• [SLOW TEST:10.328 seconds]
+[sig-network] DNS
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide DNS for pods for Subdomain [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
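+
+(For reference: the subdomain records probed by the dig loops above come from pairing a headless service with a pod that sets hostname and subdomain; a sketch with illustrative names:)
+
+    cat <<'EOF' | kubectl apply -f -
+    apiVersion: v1
+    kind: Service
+    metadata:
+      name: sub-demo
+    spec:
+      clusterIP: None            # headless: enables per-pod DNS records
+      selector:
+        app: sub-demo
+      ports:
+      - port: 80
+    ---
+    apiVersion: v1
+    kind: Pod
+    metadata:
+      name: querier
+      labels:
+        app: sub-demo
+    spec:
+      hostname: querier-1
+      subdomain: sub-demo        # resolvable as querier-1.sub-demo.<ns>.svc.cluster.local
+      containers:
+      - name: main
+        image: busybox
+        command: ["sleep", "3600"]
+    EOF
+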
+SSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should contain environment variables for services [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:24:32.206: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
+[It] should contain environment variables for services [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:24:34.281: INFO: Waiting up to 5m0s for pod "client-envvars-edfb1572-3f5e-4ce2-9b60-6844aba49521" in namespace "pods-3190" to be "success or failure"
+Jun  3 21:24:34.284: INFO: Pod "client-envvars-edfb1572-3f5e-4ce2-9b60-6844aba49521": Phase="Pending", Reason="", readiness=false. Elapsed: 3.006266ms
+Jun  3 21:24:36.289: INFO: Pod "client-envvars-edfb1572-3f5e-4ce2-9b60-6844aba49521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007232479s
+STEP: Saw pod success
+Jun  3 21:24:36.289: INFO: Pod "client-envvars-edfb1572-3f5e-4ce2-9b60-6844aba49521" satisfied condition "success or failure"
+Jun  3 21:24:36.292: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod client-envvars-edfb1572-3f5e-4ce2-9b60-6844aba49521 container env3cont: 
+STEP: delete the pod
+Jun  3 21:24:36.314: INFO: Waiting for pod client-envvars-edfb1572-3f5e-4ce2-9b60-6844aba49521 to disappear
+Jun  3 21:24:36.318: INFO: Pod client-envvars-edfb1572-3f5e-4ce2-9b60-6844aba49521 no longer exists
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:24:36.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-3190" for this suite.
+Jun  3 21:25:04.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:25:04.445: INFO: namespace pods-3190 deletion completed in 28.119717957s
+
+• [SLOW TEST:32.239 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should contain environment variables for services [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
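+
+(For reference: the kubelet injects <SERVICE>_SERVICE_HOST / <SERVICE>_SERVICE_PORT variables for services that already exist when a pod starts, which is what this test asserts. Service and pod names below are illustrative:)
+
+    kubectl create service clusterip fooservice --tcp=8765:8080
+    kubectl run env-demo --image=busybox --restart=Never -- sleep 3600
+    kubectl exec env-demo -- printenv | grep FOOSERVICE
+    # FOOSERVICE_SERVICE_HOST=<cluster IP>
+    # FOOSERVICE_SERVICE_PORT=8765
+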
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
+  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:25:04.445: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:25:06.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-6003" for this suite.
+Jun  3 21:25:56.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:25:56.627: INFO: namespace kubelet-test-6003 deletion completed in 50.107087972s
+
+• [SLOW TEST:52.182 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  when scheduling a busybox Pod with hostAliases
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
+    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
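+
+(For reference: the hostAliases field exercised here appends fixed entries to the container's /etc/hosts; a minimal sketch, IP and hostnames illustrative:)
+
+    cat <<'EOF' | kubectl apply -f -
+    apiVersion: v1
+    kind: Pod
+    metadata:
+      name: hostaliases-demo
+    spec:
+      restartPolicy: Never
+      hostAliases:
+      - ip: "123.45.67.89"
+        hostnames: ["foo.local", "bar.local"]
+      containers:
+      - name: main
+        image: busybox
+        command: ["cat", "/etc/hosts"]
+    EOF
+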
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir wrapper volumes 
+  should not cause race condition when used for configmaps [Serial] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:25:56.627: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir-wrapper
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not cause race condition when used for configmaps [Serial] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating 50 configmaps
+STEP: Creating RC which spawns configmap-volume pods
+Jun  3 21:25:56.882: INFO: Pod name wrapped-volume-race-942d6cb2-61e0-49ad-8a6d-65bcc87d61aa: Found 0 pods out of 5
+Jun  3 21:26:01.894: INFO: Pod name wrapped-volume-race-942d6cb2-61e0-49ad-8a6d-65bcc87d61aa: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-942d6cb2-61e0-49ad-8a6d-65bcc87d61aa in namespace emptydir-wrapper-8452, will wait for the garbage collector to delete the pods
+Jun  3 21:26:17.979: INFO: Deleting ReplicationController wrapped-volume-race-942d6cb2-61e0-49ad-8a6d-65bcc87d61aa took: 9.378261ms
+Jun  3 21:26:18.380: INFO: Terminating ReplicationController wrapped-volume-race-942d6cb2-61e0-49ad-8a6d-65bcc87d61aa pods took: 400.391661ms
+STEP: Creating RC which spawns configmap-volume pods
+Jun  3 21:26:58.502: INFO: Pod name wrapped-volume-race-942d9e7a-6d68-46d7-88ce-5e63cec96b44: Found 0 pods out of 5
+Jun  3 21:27:03.509: INFO: Pod name wrapped-volume-race-942d9e7a-6d68-46d7-88ce-5e63cec96b44: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-942d9e7a-6d68-46d7-88ce-5e63cec96b44 in namespace emptydir-wrapper-8452, will wait for the garbage collector to delete the pods
+Jun  3 21:27:13.594: INFO: Deleting ReplicationController wrapped-volume-race-942d9e7a-6d68-46d7-88ce-5e63cec96b44 took: 9.151425ms
+Jun  3 21:27:13.994: INFO: Terminating ReplicationController wrapped-volume-race-942d9e7a-6d68-46d7-88ce-5e63cec96b44 pods took: 400.295695ms
+STEP: Creating RC which spawns configmap-volume pods
+Jun  3 21:27:49.313: INFO: Pod name wrapped-volume-race-7665beee-2947-4e80-b073-6c507e7bf3c4: Found 0 pods out of 5
+Jun  3 21:27:54.320: INFO: Pod name wrapped-volume-race-7665beee-2947-4e80-b073-6c507e7bf3c4: Found 5 pods out of 5
+STEP: Ensuring each pod is running
+STEP: deleting ReplicationController wrapped-volume-race-7665beee-2947-4e80-b073-6c507e7bf3c4 in namespace emptydir-wrapper-8452, will wait for the garbage collector to delete the pods
+Jun  3 21:28:04.406: INFO: Deleting ReplicationController wrapped-volume-race-7665beee-2947-4e80-b073-6c507e7bf3c4 took: 11.451593ms
+Jun  3 21:28:04.806: INFO: Terminating ReplicationController wrapped-volume-race-7665beee-2947-4e80-b073-6c507e7bf3c4 pods took: 400.357086ms
+STEP: Cleaning up the configMaps
+[AfterEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:28:48.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-wrapper-8452" for this suite.
+Jun  3 21:28:56.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:28:56.999: INFO: namespace emptydir-wrapper-8452 deletion completed in 8.108502371s
+
+• [SLOW TEST:180.372 seconds]
+[sig-storage] EmptyDir wrapper volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
+  should not cause race condition when used for configmaps [Serial] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should run and stop complex daemon [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:28:56.999: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should run and stop complex daemon [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:28:57.064: INFO: Creating daemon "daemon-set" with a node selector
+STEP: Initially, daemon pods should not be running on any nodes.
+Jun  3 21:28:57.073: INFO: Number of nodes with available pods: 0
+Jun  3 21:28:57.073: INFO: Number of running nodes: 0, number of available pods: 0
+STEP: Change node label to blue, check that daemon pod is launched.
+Jun  3 21:28:57.090: INFO: Number of nodes with available pods: 0
+Jun  3 21:28:57.090: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:28:58.095: INFO: Number of nodes with available pods: 0
+Jun  3 21:28:58.095: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:28:59.096: INFO: Number of nodes with available pods: 1
+Jun  3 21:28:59.096: INFO: Number of running nodes: 1, number of available pods: 1
+STEP: Update the node label to green, and wait for daemons to be unscheduled
+Jun  3 21:28:59.119: INFO: Number of nodes with available pods: 1
+Jun  3 21:28:59.119: INFO: Number of running nodes: 0, number of available pods: 1
+Jun  3 21:29:00.123: INFO: Number of nodes with available pods: 0
+Jun  3 21:29:00.123: INFO: Number of running nodes: 0, number of available pods: 0
+STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
+Jun  3 21:29:00.137: INFO: Number of nodes with available pods: 0
+Jun  3 21:29:00.137: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:29:01.142: INFO: Number of nodes with available pods: 0
+Jun  3 21:29:01.142: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:29:02.141: INFO: Number of nodes with available pods: 0
+Jun  3 21:29:02.141: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:29:03.142: INFO: Number of nodes with available pods: 0
+Jun  3 21:29:03.142: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:29:04.141: INFO: Number of nodes with available pods: 1
+Jun  3 21:29:04.141: INFO: Number of running nodes: 1, number of available pods: 1
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9521, will wait for the garbage collector to delete the pods
+Jun  3 21:29:04.211: INFO: Deleting DaemonSet.extensions daemon-set took: 10.148534ms
+Jun  3 21:29:04.612: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.338691ms
+Jun  3 21:29:18.118: INFO: Number of nodes with available pods: 0
+Jun  3 21:29:18.118: INFO: Number of running nodes: 0, number of available pods: 0
+Jun  3 21:29:18.120: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9521/daemonsets","resourceVersion":"163554"},"items":null}
+
+Jun  3 21:29:18.124: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9521/pods","resourceVersion":"163554"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:29:18.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-9521" for this suite.
+Jun  3 21:29:24.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:29:24.252: INFO: namespace daemonsets-9521 deletion completed in 6.096707825s
+
+• [SLOW TEST:27.253 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should run and stop complex daemon [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
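+
+(For reference: the blue/green dance above is driven by a DaemonSet nodeSelector plus node relabeling; the label key and node name below are illustrative, since the test's actual key is not printed in this log:)
+
+    kubectl label node <node-name> color=blue --overwrite    # daemon pod gets scheduled
+    kubectl label node <node-name> color=green --overwrite   # pod is removed if the selector requires color=blue
+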
+SSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:29:24.252: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir volume type on node default medium
+Jun  3 21:29:24.306: INFO: Waiting up to 5m0s for pod "pod-ccaba014-9bb8-40e2-a818-f2c6b4a9288c" in namespace "emptydir-2913" to be "success or failure"
+Jun  3 21:29:24.314: INFO: Pod "pod-ccaba014-9bb8-40e2-a818-f2c6b4a9288c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.448168ms
+Jun  3 21:29:26.319: INFO: Pod "pod-ccaba014-9bb8-40e2-a818-f2c6b4a9288c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012155983s
+STEP: Saw pod success
+Jun  3 21:29:26.319: INFO: Pod "pod-ccaba014-9bb8-40e2-a818-f2c6b4a9288c" satisfied condition "success or failure"
+Jun  3 21:29:26.322: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-ccaba014-9bb8-40e2-a818-f2c6b4a9288c container test-container: 
+STEP: delete the pod
+Jun  3 21:29:26.351: INFO: Waiting for pod pod-ccaba014-9bb8-40e2-a818-f2c6b4a9288c to disappear
+Jun  3 21:29:26.354: INFO: Pod pod-ccaba014-9bb8-40e2-a818-f2c6b4a9288c no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:29:26.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-2913" for this suite.
+Jun  3 21:29:32.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:29:32.469: INFO: namespace emptydir-2913 deletion completed in 6.110710358s
+
+• [SLOW TEST:8.217 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
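+
+(For reference: "default medium" means the emptyDir is backed by the node's disk rather than memory; a sketch that prints the volume's mode the way the test inspects it, names illustrative:)
+
+    cat <<'EOF' | kubectl apply -f -
+    apiVersion: v1
+    kind: Pod
+    metadata:
+      name: emptydir-demo
+    spec:
+      restartPolicy: Never
+      containers:
+      - name: test-container
+        image: busybox
+        command: ["ls", "-ld", "/cache"]
+        volumeMounts:
+        - name: cache
+          mountPath: /cache
+      volumes:
+      - name: cache
+        emptyDir: {}          # no medium set: node-local storage
+    EOF
+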
+SSSSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:29:32.469: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating projection with secret that has name projected-secret-test-map-2f128936-7c60-4ec2-a74f-36dad6bd0ecf
+STEP: Creating a pod to test consume secrets
+Jun  3 21:29:32.513: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59b804d5-dde9-45c6-85eb-3d431e69d13f" in namespace "projected-3136" to be "success or failure"
+Jun  3 21:29:32.516: INFO: Pod "pod-projected-secrets-59b804d5-dde9-45c6-85eb-3d431e69d13f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.77944ms
+Jun  3 21:29:34.521: INFO: Pod "pod-projected-secrets-59b804d5-dde9-45c6-85eb-3d431e69d13f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008061504s
+STEP: Saw pod success
+Jun  3 21:29:34.521: INFO: Pod "pod-projected-secrets-59b804d5-dde9-45c6-85eb-3d431e69d13f" satisfied condition "success or failure"
+Jun  3 21:29:34.525: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-projected-secrets-59b804d5-dde9-45c6-85eb-3d431e69d13f container projected-secret-volume-test: 
+STEP: delete the pod
+Jun  3 21:29:34.550: INFO: Waiting for pod pod-projected-secrets-59b804d5-dde9-45c6-85eb-3d431e69d13f to disappear
+Jun  3 21:29:34.553: INFO: Pod pod-projected-secrets-59b804d5-dde9-45c6-85eb-3d431e69d13f no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:29:34.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-3136" for this suite.
+Jun  3 21:29:40.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:29:40.662: INFO: namespace projected-3136 deletion completed in 6.104519436s
+
+• [SLOW TEST:8.192 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSS
+------------------------------
+[k8s.io] Docker Containers 
+  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:29:40.662: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:29:42.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "containers-8407" for this suite.
+Jun  3 21:29:56.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:29:56.842: INFO: namespace containers-8407 deletion completed in 14.119209207s
+
+• [SLOW TEST:16.180 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
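+
+(For reference: leaving command and args unset means the image's own ENTRYPOINT and CMD run unchanged, which is what this test verifies; names illustrative:)
+
+    kubectl run image-default-demo --image=nginx --restart=Never
+    kubectl get pod image-default-demo -o jsonpath='{.spec.containers[0].command}'
+    # empty output: no override, so the image defaults apply
+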
+SSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:29:56.843: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
+STEP: Creating service test in namespace statefulset-7533
+[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Initializing watcher for selector baz=blah,foo=bar
+STEP: Creating stateful set ss in namespace statefulset-7533
+STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7533
+Jun  3 21:29:56.912: INFO: Found 0 stateful pods, waiting for 1
+Jun  3 21:30:06.916: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
+Jun  3 21:30:06.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-7533 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jun  3 21:30:07.174: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jun  3 21:30:07.174: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jun  3 21:30:07.174: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jun  3 21:30:07.177: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
+Jun  3 21:30:17.182: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Jun  3 21:30:17.182: INFO: Waiting for statefulset status.replicas updated to 0
+Jun  3 21:30:17.197: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999718s
+Jun  3 21:30:18.204: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995667621s
+Jun  3 21:30:19.210: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988776707s
+Jun  3 21:30:20.215: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982992147s
+Jun  3 21:30:21.220: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978068979s
+Jun  3 21:30:22.224: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.973180255s
+Jun  3 21:30:23.229: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.968624868s
+Jun  3 21:30:24.234: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.963960032s
+Jun  3 21:30:25.240: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.958921177s
+Jun  3 21:30:26.245: INFO: Verifying statefulset ss doesn't scale past 1 for another 953.557059ms
+STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7533
+Jun  3 21:30:27.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-7533 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jun  3 21:30:27.493: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jun  3 21:30:27.493: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jun  3 21:30:27.493: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jun  3 21:30:27.497: INFO: Found 1 stateful pods, waiting for 3
+Jun  3 21:30:37.503: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun  3 21:30:37.503: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun  3 21:30:37.503: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Verifying that stateful set ss was scaled up in order
+STEP: Scale down will halt with unhealthy stateful pod
+Jun  3 21:30:37.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-7533 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jun  3 21:30:37.730: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jun  3 21:30:37.730: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jun  3 21:30:37.730: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jun  3 21:30:37.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-7533 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jun  3 21:30:38.028: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jun  3 21:30:38.028: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jun  3 21:30:38.028: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jun  3 21:30:38.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-7533 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
+Jun  3 21:30:38.277: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
+Jun  3 21:30:38.277: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
+Jun  3 21:30:38.277: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
+
+Jun  3 21:30:38.277: INFO: Waiting for statefulset status.replicas updated to 0
+Jun  3 21:30:38.281: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
+Jun  3 21:30:48.289: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Jun  3 21:30:48.289: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
+Jun  3 21:30:48.289: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
+Jun  3 21:30:48.303: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999634s
+Jun  3 21:30:49.307: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994596941s
+Jun  3 21:30:50.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98995043s
+Jun  3 21:30:51.317: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985090945s
+Jun  3 21:30:52.322: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979974531s
+Jun  3 21:30:53.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.975461627s
+Jun  3 21:30:54.331: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970564396s
+Jun  3 21:30:55.336: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.966189266s
+Jun  3 21:30:56.341: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.961284732s
+Jun  3 21:30:57.347: INFO: Verifying statefulset ss doesn't scale past 3 for another 955.980239ms
+STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7533
+Jun  3 21:30:58.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-7533 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jun  3 21:30:58.584: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jun  3 21:30:58.584: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jun  3 21:30:58.584: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jun  3 21:30:58.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-7533 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jun  3 21:30:58.833: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jun  3 21:30:58.833: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jun  3 21:30:58.833: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jun  3 21:30:58.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=statefulset-7533 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
+Jun  3 21:30:59.074: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
+Jun  3 21:30:59.074: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
+Jun  3 21:30:59.074: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
+
+Jun  3 21:30:59.074: INFO: Scaling statefulset ss to 0
+STEP: Verifying that stateful set ss was scaled down in reverse order
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
+Jun  3 21:31:09.092: INFO: Deleting all statefulset in ns statefulset-7533
+Jun  3 21:31:09.095: INFO: Scaling statefulset ss to 0
+Jun  3 21:31:09.103: INFO: Waiting for statefulset status.replicas updated to 0
+Jun  3 21:31:09.105: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:31:09.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-7533" for this suite.
+Jun  3 21:31:15.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:31:15.225: INFO: namespace statefulset-7533 deletion completed in 6.097567676s
+
+• [SLOW TEST:78.383 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
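+
+Side note, not part of the e2e output: a minimal sketch of the ordered scale-down this test asserts, assuming kubectl access to any conformant cluster; the names web and the httpd:2.4 image are hypothetical.
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Service
+metadata:
+  name: web
+spec:
+  clusterIP: None            # headless Service the StatefulSet requires
+  selector:
+    app: web
+  ports:
+  - port: 80
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: web
+spec:
+  serviceName: web
+  replicas: 3
+  podManagementPolicy: OrderedReady   # the default: pods start 0,1,2 and stop 2,1,0
+  selector:
+    matchLabels:
+      app: web
+  template:
+    metadata:
+      labels:
+        app: web
+    spec:
+      containers:
+      - name: httpd
+        image: httpd:2.4
+EOF
+kubectl scale statefulset web --replicas=0   # terminates web-2, then web-1, then web-0
+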
+SSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  patching/updating a mutating webhook should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:31:15.225: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:31:15.973: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jun  3 21:31:17.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816675, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816675, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816675, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726816675, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:31:21.004: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] patching/updating a mutating webhook should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a mutating webhook configuration
+STEP: Updating a mutating webhook configuration's rules to not include the create operation
+STEP: Creating a configMap that should not be mutated
+STEP: Patching a mutating webhook configuration's rules to include the create operation
+STEP: Creating a configMap that should be mutated
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:31:21.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-1469" for this suite.
+Jun  3 21:31:27.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:31:27.195: INFO: namespace webhook-1469 deletion completed in 6.105919367s
+STEP: Destroying namespace "webhook-1469-markers" for this suite.
+Jun  3 21:31:33.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:31:33.290: INFO: namespace webhook-1469-markers deletion completed in 6.09492036s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:18.079 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  patching/updating a mutating webhook should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
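+
+For reference, not from the log: a sketch of the kind of rules patch this test exercises; the configuration name example-webhook-config is hypothetical.
+
+# Drop CREATE from the first rule of the first webhook...
+kubectl patch mutatingwebhookconfiguration example-webhook-config --type=json \
+  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
+# ...then patch it back in, after which newly created objects are mutated again:
+kubectl patch mutatingwebhookconfiguration example-webhook-config --type=json \
+  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'
+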
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  listing mutating webhooks should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:31:33.305: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:31:34.475: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:31:37.498: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] listing mutating webhooks should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Listing all of the created validation webhooks
+STEP: Creating a configMap that should be mutated
+STEP: Deleting the collection of validation webhooks
+STEP: Creating a configMap that should not be mutated
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:31:37.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-5838" for this suite.
+Jun  3 21:31:43.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:31:43.795: INFO: namespace webhook-5838 deletion completed in 6.108730089s
+STEP: Destroying namespace "webhook-5838-markers" for this suite.
+Jun  3 21:31:49.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:31:49.904: INFO: namespace webhook-5838-markers deletion completed in 6.108860119s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:16.617 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  listing mutating webhooks should work [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+S
+------------------------------
+[sig-apps] Job 
+  should delete a job [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Job
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:31:49.922: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename job
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete a job [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a job
+STEP: Ensuring active pods == parallelism
+STEP: delete a job
+STEP: deleting Job.batch foo in namespace job-7091, will wait for the garbage collector to delete the pods
+Jun  3 21:31:52.030: INFO: Deleting Job.batch foo took: 9.945028ms
+Jun  3 21:31:52.431: INFO: Terminating Job.batch foo pods took: 400.308542ms
+STEP: Ensuring job was deleted
+[AfterEach] [sig-apps] Job
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:32:34.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "job-7091" for this suite.
+Jun  3 21:32:40.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:32:40.439: INFO: namespace job-7091 deletion completed in 6.099145787s
+
+• [SLOW TEST:50.517 seconds]
+[sig-apps] Job
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should delete a job [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
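+
+Illustrative sketch, not part of the log: the delete-then-garbage-collect flow verified above, with a hypothetical Job named demo.
+
+kubectl create job demo --image=busybox -- sh -c 'sleep 3600'
+kubectl delete job demo            # default (background) propagation: the Job goes first
+kubectl get pods -l job-name=demo  # empties once the garbage collector removes its pods
+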
+SSSS
+------------------------------
+[sig-apps] Job 
+  should adopt matching orphans and release non-matching pods [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Job
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:32:40.439: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename job
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should adopt matching orphans and release non-matching pods [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a job
+STEP: Ensuring active pods == parallelism
+STEP: Orphaning one of the Job's Pods
+Jun  3 21:32:42.999: INFO: Successfully updated pod "adopt-release-lrk69"
+STEP: Checking that the Job readopts the Pod
+Jun  3 21:32:42.999: INFO: Waiting up to 15m0s for pod "adopt-release-lrk69" in namespace "job-8426" to be "adopted"
+Jun  3 21:32:43.003: INFO: Pod "adopt-release-lrk69": Phase="Running", Reason="", readiness=true. Elapsed: 3.104398ms
+Jun  3 21:32:45.008: INFO: Pod "adopt-release-lrk69": Phase="Running", Reason="", readiness=true. Elapsed: 2.008367588s
+Jun  3 21:32:45.008: INFO: Pod "adopt-release-lrk69" satisfied condition "adopted"
+STEP: Removing the labels from the Job's Pod
+Jun  3 21:32:45.518: INFO: Successfully updated pod "adopt-release-lrk69"
+STEP: Checking that the Job releases the Pod
+Jun  3 21:32:45.519: INFO: Waiting up to 15m0s for pod "adopt-release-lrk69" in namespace "job-8426" to be "released"
+Jun  3 21:32:45.521: INFO: Pod "adopt-release-lrk69": Phase="Running", Reason="", readiness=true. Elapsed: 2.64983ms
+Jun  3 21:32:47.526: INFO: Pod "adopt-release-lrk69": Phase="Running", Reason="", readiness=true. Elapsed: 2.007512717s
+Jun  3 21:32:47.526: INFO: Pod "adopt-release-lrk69" satisfied condition "released"
+[AfterEach] [sig-apps] Job
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:32:47.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "job-8426" for this suite.
+Jun  3 21:33:35.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:33:35.634: INFO: namespace job-8426 deletion completed in 48.102971828s
+
+• [SLOW TEST:55.195 seconds]
+[sig-apps] Job
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should adopt matching orphans and release non-matching pods [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
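+
+Sketch for context, not from the log: adoption and release are driven by selector labels plus ownerReferences; the pod name is taken from the run above, and the controller-uid label key is the Job's default selector.
+
+# While the pod matches the Job's selector it carries an ownerReference to the Job:
+kubectl get pod adopt-release-lrk69 -o jsonpath='{.metadata.ownerReferences[0].kind}'
+# Stripping the selector label makes the Job controller release the pod,
+# i.e. clear its controllerRef, which is what the test waits for:
+kubectl label pod adopt-release-lrk69 controller-uid-
+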
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should mutate configmap [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:33:35.634: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:33:36.032: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:33:39.058: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should mutate configmap [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
+STEP: create a configmap that should be updated by the webhook
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:33:39.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-8316" for this suite.
+Jun  3 21:33:45.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:33:45.213: INFO: namespace webhook-8316 deletion completed in 6.112951423s
+STEP: Destroying namespace "webhook-8316-markers" for this suite.
+Jun  3 21:33:51.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:33:51.319: INFO: namespace webhook-8316-markers deletion completed in 6.106059149s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:15.708 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should mutate configmap [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
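+
+For reference, not part of the e2e output: a minimal sketch of the registration step; the service name, path, and webhook names are hypothetical, and caBundle is a placeholder that must be filled with a real CA.
+
+kubectl apply -f - <<'EOF'
+apiVersion: admissionregistration.k8s.io/v1
+kind: MutatingWebhookConfiguration
+metadata:
+  name: demo-mutating-webhook
+webhooks:
+- name: mutate-configmaps.example.com
+  admissionReviewVersions: ["v1", "v1beta1"]
+  sideEffects: None
+  clientConfig:
+    service:
+      name: e2e-test-webhook          # Service fronting the webhook pod
+      namespace: default
+      path: /mutating-configmaps
+    caBundle: "<base64-encoded CA>"   # placeholder
+  rules:
+  - apiGroups: [""]
+    apiVersions: ["v1"]
+    operations: ["CREATE"]
+    resources: ["configmaps"]
+EOF
+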
+SSSS
+------------------------------
+[sig-api-machinery] Servers with support for Table transformation 
+  should return a 406 for a backend which does not implement metadata [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:33:51.342: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename tables
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
+[It] should return a 406 for a backend which does not implement metadata [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[AfterEach] [sig-api-machinery] Servers with support for Table transformation
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:33:51.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "tables-7065" for this suite.
+Jun  3 21:33:57.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:33:57.491: INFO: namespace tables-7065 deletion completed in 6.106151445s
+
+• [SLOW TEST:6.149 seconds]
+[sig-api-machinery] Servers with support for Table transformation
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should return a 406 for a backend which does not implement metadata [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
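+
+Sketch of the mechanism under test, not from the log: Table transformation is requested via the Accept header; assumes a local kubectl proxy.
+
+kubectl proxy --port=8001 &
+curl -s -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
+  http://127.0.0.1:8001/api/v1/namespaces/default/pods | head -c 300
+# Built-in resources answer with columnDefinitions/rows; a backend that cannot
+# produce Table output must answer 406 Not Acceptable, which this test asserts.
+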
+SSSSSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and capture the life of a configMap. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:33:57.491: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+STEP: Creating a ConfigMap
+STEP: Ensuring resource quota status captures configMap creation
+STEP: Deleting a ConfigMap
+STEP: Ensuring resource quota status released usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:34:13.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-8870" for this suite.
+Jun  3 21:34:19.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:34:19.680: INFO: namespace resourcequota-8870 deletion completed in 6.103064991s
+
+• [SLOW TEST:22.189 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and capture the life of a configMap. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
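+
+Illustrative sketch, not part of the log: the quota lifecycle verified above, with hypothetical names.
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: cm-quota
+spec:
+  hard:
+    configmaps: "2"
+EOF
+kubectl create configmap tracked --from-literal=k=v
+kubectl describe resourcequota cm-quota   # Used: configmaps rises after the create
+kubectl delete configmap tracked          # ...and falls back once usage is released
+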
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with secret pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:34:19.681: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
+STEP: Setting up data
+[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating pod pod-subpath-test-secret-lz7s
+STEP: Creating a pod to test atomic-volume-subpath
+Jun  3 21:34:19.727: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lz7s" in namespace "subpath-8964" to be "success or failure"
+Jun  3 21:34:19.729: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.601829ms
+Jun  3 21:34:21.733: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 2.006567156s
+Jun  3 21:34:23.738: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 4.011553056s
+Jun  3 21:34:25.742: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 6.015380816s
+Jun  3 21:34:27.747: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 8.019754315s
+Jun  3 21:34:29.751: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 10.024587024s
+Jun  3 21:34:31.756: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 12.028769111s
+Jun  3 21:34:33.761: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 14.034602189s
+Jun  3 21:34:35.766: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 16.039291007s
+Jun  3 21:34:37.770: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 18.043029599s
+Jun  3 21:34:39.775: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 20.047842044s
+Jun  3 21:34:41.780: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Running", Reason="", readiness=true. Elapsed: 22.052877788s
+Jun  3 21:34:43.785: INFO: Pod "pod-subpath-test-secret-lz7s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.058309233s
+STEP: Saw pod success
+Jun  3 21:34:43.785: INFO: Pod "pod-subpath-test-secret-lz7s" satisfied condition "success or failure"
+Jun  3 21:34:43.789: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-subpath-test-secret-lz7s container test-container-subpath-secret-lz7s: 
+STEP: delete the pod
+Jun  3 21:34:43.837: INFO: Waiting for pod pod-subpath-test-secret-lz7s to disappear
+Jun  3 21:34:43.840: INFO: Pod pod-subpath-test-secret-lz7s no longer exists
+STEP: Deleting pod pod-subpath-test-secret-lz7s
+Jun  3 21:34:43.840: INFO: Deleting pod "pod-subpath-test-secret-lz7s" in namespace "subpath-8964"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:34:43.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-8964" for this suite.
+Jun  3 21:34:49.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:34:49.969: INFO: namespace subpath-8964 deletion completed in 6.122416499s
+
+• [SLOW TEST:30.288 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
+  Atomic writer volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
+    should support subpaths with secret pod [LinuxOnly] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
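+
+Sketch for context, not from the e2e output: mounting a single secret key via subPath, as this test does; all names are hypothetical.
+
+kubectl create secret generic app-creds --from-literal=password=s3cr3t
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: subpath-demo
+spec:
+  restartPolicy: Never
+  volumes:
+  - name: creds
+    secret:
+      secretName: app-creds
+  containers:
+  - name: app
+    image: busybox
+    command: ["cat", "/etc/app/password"]
+    volumeMounts:
+    - name: creds
+      mountPath: /etc/app/password
+      subPath: password              # mounts one key rather than the whole secret dir
+EOF
+kubectl logs subpath-demo            # -> s3cr3t, once the pod has completed
+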
+SSSSSS
+------------------------------
+[sig-network] Services 
+  should be able to create a functioning NodePort service [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:34:49.969: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
+[It] should be able to create a functioning NodePort service [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating service nodeport-test with type=NodePort in namespace services-1186
+STEP: creating replication controller nodeport-test in namespace services-1186
+I0603 21:34:50.046464      25 runners.go:184] Created replication controller with name: nodeport-test, namespace: services-1186, replica count: 2
+Jun  3 21:34:53.097: INFO: Creating new exec pod
+I0603 21:34:53.097015      25 runners.go:184] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+Jun  3 21:34:56.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-1186 execpod6lgqv -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
+Jun  3 21:34:56.595: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
+Jun  3 21:34:56.595: INFO: stdout: ""
+Jun  3 21:34:56.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-1186 execpod6lgqv -- /bin/sh -x -c nc -zv -t -w 2 172.19.146.202 80'
+Jun  3 21:34:56.819: INFO: stderr: "+ nc -zv -t -w 2 172.19.146.202 80\nConnection to 172.19.146.202 80 port [tcp/http] succeeded!\n"
+Jun  3 21:34:56.819: INFO: stdout: ""
+Jun  3 21:34:56.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-1186 execpod6lgqv -- /bin/sh -x -c nc -zv -t -w 2 10.45.43.24 31663'
+Jun  3 21:34:57.050: INFO: stderr: "+ nc -zv -t -w 2 10.45.43.24 31663\nConnection to 10.45.43.24 31663 port [tcp/31663] succeeded!\n"
+Jun  3 21:34:57.050: INFO: stdout: ""
+Jun  3 21:34:57.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-1186 execpod6lgqv -- /bin/sh -x -c nc -zv -t -w 2 10.45.43.10 31663'
+Jun  3 21:34:57.272: INFO: stderr: "+ nc -zv -t -w 2 10.45.43.10 31663\nConnection to 10.45.43.10 31663 port [tcp/31663] succeeded!\n"
+Jun  3 21:34:57.272: INFO: stdout: ""
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:34:57.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-1186" for this suite.
+Jun  3 21:35:03.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:35:03.404: INFO: namespace services-1186 deletion completed in 6.127111845s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
+
+• [SLOW TEST:13.435 seconds]
+[sig-network] Services
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should be able to create a functioning NodePort service [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
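+
+Illustrative sketch, not from the log: a NodePort Service plus the same nc reachability probe the test runs; app=web and <node-ip> are placeholders.
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Service
+metadata:
+  name: nodeport-demo
+spec:
+  type: NodePort
+  selector:
+    app: web                 # assumes pods labeled app=web exist
+  ports:
+  - port: 80
+    targetPort: 80           # nodePort auto-allocated from 30000-32767 if omitted
+EOF
+NODE_PORT=$(kubectl get svc nodeport-demo -o jsonpath='{.spec.ports[0].nodePort}')
+nc -zv -t -w 2 <node-ip> "$NODE_PORT"
+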
+SSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:35:03.405: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for the rc to be deleted
+STEP: Gathering metrics
+W0603 21:35:09.480726      25 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+Jun  3 21:35:09.480: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:35:09.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-5048" for this suite.
+Jun  3 21:35:15.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:35:15.588: INFO: namespace gc-5048 deletion completed in 6.102456011s
+
+• [SLOW TEST:12.183 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
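+
+Sketch of the mechanism, not part of the log: foreground propagation keeps the owner around until all dependents are gone; the RC name my-rc is hypothetical, and the raw call assumes a local kubectl proxy.
+
+# With a recent kubectl:
+kubectl delete rc my-rc --cascade=foreground
+# Equivalent raw request, closer to the DeleteOptions the test sends:
+kubectl proxy --port=8001 &
+curl -s -X DELETE -H 'Content-Type: application/json' \
+  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
+  http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc
+# The RC lingers with a deletionTimestamp and a foregroundDeletion finalizer
+# until the garbage collector has removed every pod it owns.
+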
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:35:15.588: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating projection with configMap that has name projected-configmap-test-upd-c74fb67a-8ca1-4dcb-9574-929432549352
+STEP: Creating the pod
+STEP: Updating configmap projected-configmap-test-upd-c74fb67a-8ca1-4dcb-9574-929432549352
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:36:32.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-8474" for this suite.
+Jun  3 21:37:00.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:37:00.194: INFO: namespace projected-8474 deletion completed in 28.104079519s
+
+• [SLOW TEST:104.606 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
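+
+Illustrative sketch, not from the e2e output: updating a configMap behind a projected volume and watching the file change; names are hypothetical.
+
+kubectl create configmap live --from-literal=msg=before
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-demo
+spec:
+  volumes:
+  - name: cfg
+    projected:
+      sources:
+      - configMap:
+          name: live
+  containers:
+  - name: app
+    image: busybox
+    command: ["sh", "-c", "sleep 3600"]
+    volumeMounts:
+    - name: cfg
+      mountPath: /etc/cfg
+EOF
+kubectl patch configmap live -p '{"data":{"msg":"after"}}'
+# the kubelet refreshes projected volumes on its sync period (typically under a minute)
+kubectl exec projected-demo -- cat /etc/cfg/msg   # eventually prints "after"
+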
+[k8s.io] Container Runtime blackbox test on terminated container 
+  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:37:00.194: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-runtime
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the container
+STEP: wait for the container to reach Succeeded
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Jun  3 21:37:02.253: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
+STEP: delete the container
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:37:02.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-4360" for this suite.
+Jun  3 21:37:08.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:37:08.398: INFO: namespace container-runtime-4360 deletion completed in 6.120090038s
+
+• [SLOW TEST:8.204 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  blackbox test
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
+    on terminated container
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132
+      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
+      /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
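+
+Sketch for context, not part of the log: a non-root container writing to a custom terminationMessagePath, as in this test; names and the uid are hypothetical.
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: termmsg-demo
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1000               # non-root, mirroring the test
+  containers:
+  - name: main
+    image: busybox
+    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
+    terminationMessagePath: /dev/termination-custom
+EOF
+kubectl get pod termmsg-demo \
+  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # -> DONE
+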
+[sig-storage] Projected downwardAPI 
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:37:08.398: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
+[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 21:37:08.441: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d926f231-ce46-42bb-b805-a892988ac841" in namespace "projected-798" to be "success or failure"
+Jun  3 21:37:08.444: INFO: Pod "downwardapi-volume-d926f231-ce46-42bb-b805-a892988ac841": Phase="Pending", Reason="", readiness=false. Elapsed: 3.617787ms
+Jun  3 21:37:10.450: INFO: Pod "downwardapi-volume-d926f231-ce46-42bb-b805-a892988ac841": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009107226s
+STEP: Saw pod success
+Jun  3 21:37:10.450: INFO: Pod "downwardapi-volume-d926f231-ce46-42bb-b805-a892988ac841" satisfied condition "success or failure"
+Jun  3 21:37:10.453: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-d926f231-ce46-42bb-b805-a892988ac841 container client-container: 
+STEP: delete the pod
+Jun  3 21:37:10.477: INFO: Waiting for pod downwardapi-volume-d926f231-ce46-42bb-b805-a892988ac841 to disappear
+Jun  3 21:37:10.480: INFO: Pod downwardapi-volume-d926f231-ce46-42bb-b805-a892988ac841 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:37:10.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-798" for this suite.
+Jun  3 21:37:16.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:37:16.600: INFO: namespace projected-798 deletion completed in 6.114708363s
+
+• [SLOW TEST:8.202 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
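+
+Illustrative sketch, not from the log: a downward API volume exposing limits.memory on a container with no limit set, which resolves to node allocatable; names are hypothetical.
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: main
+    image: busybox
+    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: mem_limit
+        resourceFieldRef:
+          containerName: main
+          resource: limits.memory   # no limit set, so node allocatable is reported
+EOF
+kubectl logs downward-demo
+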
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  updates the published spec when one version gets renamed [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:37:16.600: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates the published spec when one version gets renamed [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: set up a multi version CRD
+Jun  3 21:37:16.641: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: rename a version
+STEP: check the new version name is served
+STEP: check the old version name is removed
+STEP: check the other version is not changed
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:37:36.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-6396" for this suite.
+Jun  3 21:37:42.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:37:42.698: INFO: namespace crd-publish-openapi-6396 deletion completed in 6.101611971s
+
+• [SLOW TEST:26.098 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  updates the published spec when one version gets renamed [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
+  watch on custom resource definition objects [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:37:42.698: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename crd-watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] watch on custom resource definition objects [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:37:42.735: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Creating first CR 
+Jun  3 21:37:43.439: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T21:37:43Z generation:1 name:name1 resourceVersion:165562 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:81ce6b17-991d-4b74-ad29-e69426136f2e] num:map[num1:9223372036854775807 num2:1000000]]}
+STEP: Creating second CR
+Jun  3 21:37:53.448: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T21:37:53Z generation:1 name:name2 resourceVersion:165580 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:11bf4ce0-b51d-4eb0-8444-bebf4cf24770] num:map[num1:9223372036854775807 num2:1000000]]}
+STEP: Modifying first CR
+Jun  3 21:38:03.456: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T21:37:43Z generation:2 name:name1 resourceVersion:165598 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:81ce6b17-991d-4b74-ad29-e69426136f2e] num:map[num1:9223372036854775807 num2:1000000]]}
+STEP: Modifying second CR
+Jun  3 21:38:13.463: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T21:37:53Z generation:2 name:name2 resourceVersion:165617 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:11bf4ce0-b51d-4eb0-8444-bebf4cf24770] num:map[num1:9223372036854775807 num2:1000000]]}
+STEP: Deleting first CR
+Jun  3 21:38:23.474: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T21:37:43Z generation:2 name:name1 resourceVersion:165635 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:81ce6b17-991d-4b74-ad29-e69426136f2e] num:map[num1:9223372036854775807 num2:1000000]]}
+STEP: Deleting second CR
+Jun  3 21:38:33.485: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-06-03T21:37:53Z generation:2 name:name2 resourceVersion:165655 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:11bf4ce0-b51d-4eb0-8444-bebf4cf24770] num:map[num1:9223372036854775807 num2:1000000]]}
+[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:38:43.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-watch-5731" for this suite.
+Jun  3 21:38:50.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:38:50.118: INFO: namespace crd-watch-5731 deletion completed in 6.114979122s
+
+• [SLOW TEST:67.420 seconds]
+[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  CustomResourceDefinition Watch
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
+    watch on custom resource definition objects [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
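+
+Sketch of the mechanism, not part of the e2e output: instances of an established CRD support watch semantics like any built-in kind; the crontabs CRD and its group are hypothetical.
+
+kubectl get crontabs --watch -o name
+# or directly against the API:
+kubectl get --raw '/apis/stable.example.com/v1/namespaces/default/crontabs?watch=true'
+# Each create/update/delete arrives as an ADDED/MODIFIED/DELETED event, as in the log above.
+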
+SSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command in a pod 
+  should print the output to logs [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:38:50.119: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[It] should print the output to logs [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:38:52.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubelet-test-9058" for this suite.
+Jun  3 21:39:36.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:39:36.339: INFO: namespace kubelet-test-9058 deletion completed in 44.123060163s
+
+• [SLOW TEST:46.220 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  when scheduling a busybox command in a pod
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
+    should print the output to logs [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
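+
+Illustrative sketch, not from the log: a busybox command whose stdout lands in the container logs, as this test checks; the pod name is hypothetical.
+
+kubectl run logs-demo --image=busybox --restart=Never -- sh -c 'echo hello from busybox'
+sleep 5                  # crude wait for the container to run to completion
+kubectl logs logs-demo   # -> hello from busybox
+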
+SS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:39:36.339: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for all rs to be garbage collected
+STEP: expected 0 rs, got 1 rs
+STEP: expected 0 pods, got 2 pods
+STEP: Gathering metrics
+Jun  3 21:39:37.420: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+W0603 21:39:37.420341      25 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
+Jun  3 21:39:37.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-4470" for this suite.
+Jun  3 21:39:43.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:39:43.526: INFO: namespace gc-4470 deletion completed in 6.100331181s
+
+• [SLOW TEST:7.186 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+S
+------------------------------
+[sig-node] ConfigMap 
+  should be consumable via environment variable [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-node] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:39:43.526: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable via environment variable [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap configmap-8858/configmap-test-b3335870-a744-4892-a2d7-44aa517acd2e
+STEP: Creating a pod to test consume configMaps
+Jun  3 21:39:43.572: INFO: Waiting up to 5m0s for pod "pod-configmaps-2dc5117f-8952-40dc-9878-dcf5141bee53" in namespace "configmap-8858" to be "success or failure"
+Jun  3 21:39:43.576: INFO: Pod "pod-configmaps-2dc5117f-8952-40dc-9878-dcf5141bee53": Phase="Pending", Reason="", readiness=false. Elapsed: 3.576072ms
+Jun  3 21:39:45.581: INFO: Pod "pod-configmaps-2dc5117f-8952-40dc-9878-dcf5141bee53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008359249s
+STEP: Saw pod success
+Jun  3 21:39:45.581: INFO: Pod "pod-configmaps-2dc5117f-8952-40dc-9878-dcf5141bee53" satisfied condition "success or failure"
+Jun  3 21:39:45.583: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-configmaps-2dc5117f-8952-40dc-9878-dcf5141bee53 container env-test: 
+STEP: delete the pod
+Jun  3 21:39:45.604: INFO: Waiting for pod pod-configmaps-2dc5117f-8952-40dc-9878-dcf5141bee53 to disappear
+Jun  3 21:39:45.607: INFO: Pod pod-configmaps-2dc5117f-8952-40dc-9878-dcf5141bee53 no longer exists
+[AfterEach] [sig-node] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:39:45.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-8858" for this suite.
+Jun  3 21:39:51.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:39:51.713: INFO: namespace configmap-8858 deletion completed in 6.102611837s
+
+• [SLOW TEST:8.187 seconds]
+[sig-node] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
+  should be consumable via environment variable [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should create a ResourceQuota and capture the life of a secret. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:39:51.713: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Discovering how many secrets are in namespace by default
+STEP: Counting existing ResourceQuota
+STEP: Creating a ResourceQuota
+STEP: Ensuring resource quota status is calculated
+STEP: Creating a Secret
+STEP: Ensuring resource quota status captures secret creation
+STEP: Deleting a secret
+STEP: Ensuring resource quota status released usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:40:08.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-8291" for this suite.
+Jun  3 21:40:14.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:40:14.918: INFO: namespace resourcequota-8291 deletion completed in 6.106677466s
+
+• [SLOW TEST:23.205 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should create a ResourceQuota and capture the life of a secret. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Proxy server 
+  should support proxy with --port 0  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:40:14.918: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[It] should support proxy with --port 0  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: starting the proxy server
+Jun  3 21:40:14.951: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-005848369 proxy -p 0 --disable-filter'
+STEP: curling proxy /api/ output
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:40:15.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-395" for this suite.
+Jun  3 21:40:21.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:40:21.154: INFO: namespace kubectl-395 deletion completed in 6.116475832s
+
+• [SLOW TEST:6.235 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Proxy server
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1782
+    should support proxy with --port 0  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:40:21.154: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
+Jun  3 21:40:21.187: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jun  3 21:40:21.199: INFO: Waiting for terminating namespaces to be deleted...
+Jun  3 21:40:21.202: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-master-0 before test
+Jun  3 21:40:21.219: INFO: kube-proxy-ds-qrgfl from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.219: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: kube-flannel-ds-hznhg from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.219: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: csi-node-ntnx-plugin-pdc8c from ntnx-system started at 2020-06-03 01:26:50 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.219: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-58wws from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.219: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:40:21.219: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: kube-apiserver-karbon-certification-ff5a6a-k8s-master-0 from kube-system started at 2020-06-02 22:11:48 +0000 UTC (3 container statuses recorded)
+Jun  3 21:40:21.219: INFO: 	Container kube-apiserver ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: 	Container kube-controller-manager ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: 	Container kube-scheduler ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: fluent-bit-mb264 from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.219: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: node-exporter-hkj7p from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.219: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:40:21.219: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-master-1 before test
+Jun  3 21:40:21.236: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-sz7h8 from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.236: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:40:21.236: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: kube-apiserver-karbon-certification-ff5a6a-k8s-master-1 from kube-system started at 2020-06-02 22:13:08 +0000 UTC (3 container statuses recorded)
+Jun  3 21:40:21.236: INFO: 	Container kube-apiserver ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: 	Container kube-controller-manager ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: 	Container kube-scheduler ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: kube-proxy-ds-8hv5j from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.236: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: kube-flannel-ds-zdlj6 from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.236: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: csi-node-ntnx-plugin-6cg44 from ntnx-system started at 2020-06-03 01:27:02 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.236: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: kube-dns-5c64dc6c6b-ls68z from kube-system started at 2020-06-02 22:16:18 +0000 UTC (3 container statuses recorded)
+Jun  3 21:40:21.236: INFO: 	Container dnsmasq ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: 	Container kubedns ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: 	Container sidecar ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: fluent-bit-zcqwz from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.236: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: node-exporter-dwrsb from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.236: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:40:21.236: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-0 before test
+Jun  3 21:40:21.252: INFO: kube-flannel-ds-qnlzb from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.252: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: kubernetes-events-printer-5c6d46dfdb-zcvlt from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.252: INFO: 	Container kubernetes-events-printer ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-7btt6 from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.252: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:40:21.252: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: csi-node-ntnx-plugin-zbw4j from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.252: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: alertmanager-main-1 from ntnx-system started at 2020-06-03 21:01:20 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.252: INFO: 	Container alertmanager ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: 	Container config-reloader ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: fluent-bit-gb59k from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.252: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: elasticsearch-logging-0 from ntnx-system started at 2020-06-02 22:17:12 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.252: INFO: 	Container elasticsearch-logging ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: node-exporter-5q9qc from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.252: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: kube-proxy-ds-qt528 from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.252: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:40:21.252: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-1 before test
+Jun  3 21:40:21.260: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-szp8f from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.260: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:40:21.260: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: kube-flannel-ds-jhm9k from kube-system started at 2020-06-03 21:01:50 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.260: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: node-exporter-qwbtg from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.260: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: csi-node-ntnx-plugin-bh72v from ntnx-system started at 2020-06-03 21:01:44 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.260: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: fluent-bit-dn8fp from ntnx-system started at 2020-06-03 21:01:44 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.260: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: kube-proxy-ds-fgf9r from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.260: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: sonobuoy from sonobuoy started at 2020-06-03 20:08:28 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.260: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: sonobuoy-e2e-job-5435c8b63156474a from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.260: INFO: 	Container e2e ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: prometheus-k8s-1 from ntnx-system started at 2020-06-03 21:02:04 +0000 UTC (3 container statuses recorded)
+Jun  3 21:40:21.260: INFO: 	Container prometheus ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
+Jun  3 21:40:21.260: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-2 before test
+Jun  3 21:40:21.279: INFO: kube-proxy-ds-gn6cv from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: csi-node-ntnx-plugin-wnbs7 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: prometheus-k8s-0 from ntnx-system started at 2020-06-02 22:20:28 +0000 UTC (3 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container prometheus ready: true, restart count 1
+Jun  3 21:40:21.279: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: kibana-logging-54b7d845-c94kw from ntnx-system started at 2020-06-03 21:01:17 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container kibana-logging ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: 	Container nginxhttp ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: csi-attacher-ntnx-plugin-0 from ntnx-system started at 2020-06-03 21:01:22 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container csi-attacher ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: 	Container ntnx-csi-plugin ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: node-exporter-hs75m from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: alertmanager-main-0 from ntnx-system started at 2020-06-02 22:20:10 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container alertmanager ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: 	Container config-reloader ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: elasticsearch-curator-cron-1591142460-cj4wj from ntnx-system started at 2020-06-03 00:01:05 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container curator ready: false, restart count 0
+Jun  3 21:40:21.279: INFO: fluent-bit-zgt4s from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: prometheus-operator-58f86dddd6-fkbmk from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container prometheus-operator ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: kube-flannel-ds-q4sbl from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: csi-provisioner-ntnx-plugin-0 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container csi-provisioner ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: 	Container ntnx-csi-plugin ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: kube-state-metrics-5d45657948-qkv6t from ntnx-system started at 2020-06-02 22:19:59 +0000 UTC (4 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container addon-resizer ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: 	Container kube-rbac-proxy-main ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: 	Container kube-rbac-proxy-self ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: 	Container kube-state-metrics ready: true, restart count 0
+Jun  3 21:40:21.279: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-p8d7c from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:40:21.279: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:40:21.279: INFO: 	Container systemd-logs ready: true, restart count 0
+[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Trying to launch a pod without a label to get a node which can launch it.
+STEP: Explicitly delete pod here to free the resource it takes.
+STEP: Trying to apply a random label on the found node.
+STEP: verifying the node has the label kubernetes.io/e2e-28004ea8-835c-41be-9f95-56fbc6e11e49 90
+STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
+STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
+STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
+STEP: removing the label kubernetes.io/e2e-28004ea8-835c-41be-9f95-56fbc6e11e49 off the node karbon-certification-ff5a6a-k8s-worker-1
+STEP: verifying the node doesn't have the label kubernetes.io/e2e-28004ea8-835c-41be-9f95-56fbc6e11e49
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:40:31.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-9970" for this suite.
+Jun  3 21:40:41.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:40:41.491: INFO: namespace sched-pred-9970 deletion completed in 10.097097326s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
+
+• [SLOW TEST:20.337 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
+  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:40:41.491: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name secret-test-8021622d-6654-4ad0-93f1-22d8ad5f173e
+STEP: Creating a pod to test consume secrets
+Jun  3 21:40:41.537: INFO: Waiting up to 5m0s for pod "pod-secrets-238c6ce9-64e9-4776-be99-e2dcaf43339b" in namespace "secrets-5651" to be "success or failure"
+Jun  3 21:40:41.540: INFO: Pod "pod-secrets-238c6ce9-64e9-4776-be99-e2dcaf43339b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.965997ms
+Jun  3 21:40:43.544: INFO: Pod "pod-secrets-238c6ce9-64e9-4776-be99-e2dcaf43339b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006961863s
+STEP: Saw pod success
+Jun  3 21:40:43.544: INFO: Pod "pod-secrets-238c6ce9-64e9-4776-be99-e2dcaf43339b" satisfied condition "success or failure"
+Jun  3 21:40:43.547: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-secrets-238c6ce9-64e9-4776-be99-e2dcaf43339b container secret-volume-test: 
+STEP: delete the pod
+Jun  3 21:40:43.568: INFO: Waiting for pod pod-secrets-238c6ce9-64e9-4776-be99-e2dcaf43339b to disappear
+Jun  3 21:40:43.571: INFO: Pod pod-secrets-238c6ce9-64e9-4776-be99-e2dcaf43339b no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:40:43.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-5651" for this suite.
+Jun  3 21:40:49.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:40:49.684: INFO: namespace secrets-5651 deletion completed in 6.109042309s
+
+• [SLOW TEST:8.193 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:40:49.684: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name secret-test-580945a9-62d7-47a0-a78d-a9ddf4a62230
+STEP: Creating a pod to test consume secrets
+Jun  3 21:40:49.726: INFO: Waiting up to 5m0s for pod "pod-secrets-9e7d384e-a853-405d-856f-c48b105c6d64" in namespace "secrets-1887" to be "success or failure"
+Jun  3 21:40:49.729: INFO: Pod "pod-secrets-9e7d384e-a853-405d-856f-c48b105c6d64": Phase="Pending", Reason="", readiness=false. Elapsed: 3.245029ms
+Jun  3 21:40:51.734: INFO: Pod "pod-secrets-9e7d384e-a853-405d-856f-c48b105c6d64": Phase="Running", Reason="", readiness=true. Elapsed: 2.007928223s
+Jun  3 21:40:53.738: INFO: Pod "pod-secrets-9e7d384e-a853-405d-856f-c48b105c6d64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012335295s
+STEP: Saw pod success
+Jun  3 21:40:53.738: INFO: Pod "pod-secrets-9e7d384e-a853-405d-856f-c48b105c6d64" satisfied condition "success or failure"
+Jun  3 21:40:53.741: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-secrets-9e7d384e-a853-405d-856f-c48b105c6d64 container secret-volume-test: 
+STEP: delete the pod
+Jun  3 21:40:53.763: INFO: Waiting for pod pod-secrets-9e7d384e-a853-405d-856f-c48b105c6d64 to disappear
+Jun  3 21:40:53.765: INFO: Pod pod-secrets-9e7d384e-a853-405d-856f-c48b105c6d64 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:40:53.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-1887" for this suite.
+Jun  3 21:40:59.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:40:59.870: INFO: namespace secrets-1887 deletion completed in 6.101118581s
+
+• [SLOW TEST:10.186 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-storage] Downward API volume 
+  should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:40:59.870: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
+[It] should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating the pod
+Jun  3 21:41:02.445: INFO: Successfully updated pod "labelsupdate50fdee0d-86e9-4ae3-bffd-24097a2e7868"
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:41:06.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-5650" for this suite.
+Jun  3 21:41:30.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:41:30.585: INFO: namespace downward-api-5650 deletion completed in 24.104184216s
+
+• [SLOW TEST:30.715 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
+  should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Should recreate evicted statefulset [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:41:30.585: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
+STEP: Creating service test in namespace statefulset-9130
+[It] Should recreate evicted statefulset [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Looking for a node to schedule stateful set and pod
+STEP: Creating pod with conflicting port in namespace statefulset-9130
+STEP: Creating statefulset with conflicting port in namespace statefulset-9130
+STEP: Waiting until pod test-pod will start running in namespace statefulset-9130
+STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9130
+Jun  3 21:41:34.659: INFO: Observed stateful pod in namespace: statefulset-9130, name: ss-0, uid: 3e339809-78c6-4533-8a61-197d9833aed4, status phase: Failed. Waiting for statefulset controller to delete.
+Jun  3 21:41:34.660: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9130
+STEP: Removing pod with conflicting port in namespace statefulset-9130
+STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9130 and will be in running state
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
+Jun  3 21:41:38.693: INFO: Deleting all statefulset in ns statefulset-9130
+Jun  3 21:41:38.697: INFO: Scaling statefulset ss to 0
+Jun  3 21:41:48.718: INFO: Waiting for statefulset status.replicas updated to 0
+Jun  3 21:41:48.722: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:41:48.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-9130" for this suite.
+Jun  3 21:41:54.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:41:54.844: INFO: namespace statefulset-9130 deletion completed in 6.10349045s
+
+• [SLOW TEST:24.259 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+    Should recreate evicted statefulset [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SS
+------------------------------
+[sig-network] Services 
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:41:54.845: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
+[It] should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating service multi-endpoint-test in namespace services-8237
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8237 to expose endpoints map[]
+Jun  3 21:41:54.892: INFO: Get endpoints failed (5.915721ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
+Jun  3 21:41:55.896: INFO: successfully validated that service multi-endpoint-test in namespace services-8237 exposes endpoints map[] (1.009710309s elapsed)
+STEP: Creating pod pod1 in namespace services-8237
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8237 to expose endpoints map[pod1:[100]]
+Jun  3 21:41:57.927: INFO: successfully validated that service multi-endpoint-test in namespace services-8237 exposes endpoints map[pod1:[100]] (2.022346559s elapsed)
+STEP: Creating pod pod2 in namespace services-8237
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8237 to expose endpoints map[pod1:[100] pod2:[101]]
+Jun  3 21:41:59.965: INFO: successfully validated that service multi-endpoint-test in namespace services-8237 exposes endpoints map[pod1:[100] pod2:[101]] (2.030978336s elapsed)
+STEP: Deleting pod pod1 in namespace services-8237
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8237 to expose endpoints map[pod2:[101]]
+Jun  3 21:42:00.989: INFO: successfully validated that service multi-endpoint-test in namespace services-8237 exposes endpoints map[pod2:[101]] (1.017349467s elapsed)
+STEP: Deleting pod pod2 in namespace services-8237
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8237 to expose endpoints map[]
+Jun  3 21:42:01.000: INFO: successfully validated that service multi-endpoint-test in namespace services-8237 exposes endpoints map[] (3.219565ms elapsed)
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:42:01.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-8237" for this suite.
+Jun  3 21:42:29.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:42:29.155: INFO: namespace services-8237 deletion completed in 28.115981028s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
+
+• [SLOW TEST:34.311 seconds]
+[sig-network] Services
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:42:29.156: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a test headless service
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8158.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8158.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8158.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8158.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8158.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8158.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jun  3 21:42:31.260: INFO: DNS probes using dns-8158/dns-test-ac47e49e-ea86-4f6c-adf3-bbd5873e9f5c succeeded
+
+STEP: deleting the pod
+STEP: deleting the test headless service
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:42:31.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "dns-8158" for this suite.
+Jun  3 21:42:37.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:42:37.402: INFO: namespace dns-8158 deletion completed in 6.104284405s
+
+• [SLOW TEST:8.246 seconds]
+[sig-network] DNS
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:42:37.402: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
+[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating the pod
+Jun  3 21:42:37.439: INFO: PodSpec: initContainers in spec.initContainers
+Jun  3 21:43:21.016: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c1a334fc-60ad-428b-b530-4b58c25379dd", GenerateName:"", Namespace:"init-container-8153", SelfLink:"/api/v1/namespaces/init-container-8153/pods/pod-init-c1a334fc-60ad-428b-b530-4b58c25379dd", UID:"ec4fdd68-a533-4693-924b-27ba46132061", ResourceVersion:"166730", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726817357, loc:(*time.Location)(0x789e8e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"439262477"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pxl8z", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002ce6f80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pxl8z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pxl8z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pxl8z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0062d8648), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"karbon-certification-ff5a6a-k8s-worker-1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0019b2300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0062d86c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0062d86e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0062d86e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0062d86ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817357, loc:(*time.Location)(0x789e8e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817357, loc:(*time.Location)(0x789e8e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817357, loc:(*time.Location)(0x789e8e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817357, loc:(*time.Location)(0x789e8e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.45.43.12", PodIP:"172.20.2.202", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.20.2.202"}}, StartTime:(*v1.Time)(0xc003170e00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00287a850)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00287a8c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://d84f0fd4476bbcb3d721c414042d86a35e70b7e03e127ee93bebe78abbcf9bb0", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003170e40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003170e20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0062d876f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:43:21.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-8153" for this suite.
+Jun  3 21:43:49.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:43:49.128: INFO: namespace init-container-8153 deletion completed in 28.106269143s
+
+• [SLOW TEST:71.726 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
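Editor's note: the InitContainer test above verifies that, on a `RestartPolicy: Always` pod, app containers never start while an init container keeps failing. A minimal sketch of such a pod, reconstructed from the dump above (the `init1`/`init2`/`run1` names, images, and the 100m/52428800 resource values appear in the log; the failing command is an assumption):

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-failure-example   # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]        # assumed: keeps failing, so run1 never starts
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]         # never reached while init1 is failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: "52428800"
      limits:
        cpu: 100m
        memory: "52428800"
```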
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] ConfigMap 
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-node] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:43:49.129: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap configmap-5521/configmap-test-d67db9ad-7817-4804-937b-3e5f321db30d
+STEP: Creating a pod to test consume configMaps
+Jun  3 21:43:49.171: INFO: Waiting up to 5m0s for pod "pod-configmaps-3f59f8d5-cebf-4455-85aa-5ed15c364d29" in namespace "configmap-5521" to be "success or failure"
+Jun  3 21:43:49.178: INFO: Pod "pod-configmaps-3f59f8d5-cebf-4455-85aa-5ed15c364d29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.994657ms
+Jun  3 21:43:51.184: INFO: Pod "pod-configmaps-3f59f8d5-cebf-4455-85aa-5ed15c364d29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012141844s
+STEP: Saw pod success
+Jun  3 21:43:51.184: INFO: Pod "pod-configmaps-3f59f8d5-cebf-4455-85aa-5ed15c364d29" satisfied condition "success or failure"
+Jun  3 21:43:51.187: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-configmaps-3f59f8d5-cebf-4455-85aa-5ed15c364d29 container env-test: 
+STEP: delete the pod
+Jun  3 21:43:51.220: INFO: Waiting for pod pod-configmaps-3f59f8d5-cebf-4455-85aa-5ed15c364d29 to disappear
+Jun  3 21:43:51.224: INFO: Pod pod-configmaps-3f59f8d5-cebf-4455-85aa-5ed15c364d29 no longer exists
+[AfterEach] [sig-node] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:43:51.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-5521" for this suite.
+Jun  3 21:43:57.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:43:57.338: INFO: namespace configmap-5521 deletion completed in 6.108697355s
+
+• [SLOW TEST:8.209 seconds]
+[sig-node] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
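The ConfigMap test above creates a ConfigMap and a short-lived pod whose container reads it through environment variables, then asserts the pod reaches `Succeeded`. A minimal sketch of the pattern (the `env-test` container name comes from the log; all other names and keys are hypothetical):

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test             # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]   # prints DATA_1=value-1 among the env vars
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```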
+S
+------------------------------
+[sig-node] Downward API 
+  should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:43:57.338: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward api env vars
+Jun  3 21:43:57.380: INFO: Waiting up to 5m0s for pod "downward-api-f3330468-ee37-41cf-b91c-adb4130d1797" in namespace "downward-api-1619" to be "success or failure"
+Jun  3 21:43:57.387: INFO: Pod "downward-api-f3330468-ee37-41cf-b91c-adb4130d1797": Phase="Pending", Reason="", readiness=false. Elapsed: 6.942059ms
+Jun  3 21:43:59.392: INFO: Pod "downward-api-f3330468-ee37-41cf-b91c-adb4130d1797": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01144592s
+STEP: Saw pod success
+Jun  3 21:43:59.392: INFO: Pod "downward-api-f3330468-ee37-41cf-b91c-adb4130d1797" satisfied condition "success or failure"
+Jun  3 21:43:59.395: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downward-api-f3330468-ee37-41cf-b91c-adb4130d1797 container dapi-container: 
+STEP: delete the pod
+Jun  3 21:43:59.424: INFO: Waiting for pod downward-api-f3330468-ee37-41cf-b91c-adb4130d1797 to disappear
+Jun  3 21:43:59.427: INFO: Pod downward-api-f3330468-ee37-41cf-b91c-adb4130d1797 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:43:59.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-1619" for this suite.
+Jun  3 21:44:05.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:44:05.541: INFO: namespace downward-api-1619 deletion completed in 6.109701358s
+
+• [SLOW TEST:8.203 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
+  should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
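The Downward API test above injects the node's IP into a container environment variable and checks the pod completes. The wiring is a `fieldRef` on `status.hostIP` (the `dapi-container` name comes from the log; the rest is a hypothetical sketch):

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved by the kubelet at pod start
```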
+[k8s.io] Container Runtime blackbox test on terminated container 
+  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:44:05.541: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-runtime
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the container
+STEP: wait for the container to reach Failed
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Jun  3 21:44:08.610: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
+STEP: delete the container
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:44:08.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-284" for this suite.
+Jun  3 21:44:14.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:44:14.737: INFO: namespace container-runtime-284 deletion completed in 6.107158387s
+
+• [SLOW TEST:9.196 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  blackbox test
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
+    on terminated container
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132
+      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+      /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
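The Container Runtime test above checks that when `terminationMessagePolicy: FallbackToLogsOnError` is set and a container exits non-zero without writing `/dev/termination-log`, the kubelet lifts the termination message from the tail of the container log (the expected `DONE` string appears in the log above). A minimal sketch:

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: term-demo
    image: busybox:1.29
    # Exits non-zero without writing /dev/termination-log, so the
    # termination message falls back to the last log output: "DONE".
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```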
+SSSS
+------------------------------
+[k8s.io] Pods 
+  should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:44:14.737: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
+[It] should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:44:14.771: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:44:18.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-1024" for this suite.
+Jun  3 21:45:02.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:45:03.056: INFO: namespace pods-1024 deletion completed in 44.111026922s
+
+• [SLOW TEST:48.319 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Variable Expansion 
+  should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:45:03.057: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename var-expansion
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test substitution in container's args
+Jun  3 21:45:03.099: INFO: Waiting up to 5m0s for pod "var-expansion-4899544e-9d99-432a-9811-c3ae6118c931" in namespace "var-expansion-4165" to be "success or failure"
+Jun  3 21:45:03.104: INFO: Pod "var-expansion-4899544e-9d99-432a-9811-c3ae6118c931": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495639ms
+Jun  3 21:45:05.108: INFO: Pod "var-expansion-4899544e-9d99-432a-9811-c3ae6118c931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009184506s
+STEP: Saw pod success
+Jun  3 21:45:05.108: INFO: Pod "var-expansion-4899544e-9d99-432a-9811-c3ae6118c931" satisfied condition "success or failure"
+Jun  3 21:45:05.111: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod var-expansion-4899544e-9d99-432a-9811-c3ae6118c931 container dapi-container: 
+STEP: delete the pod
+Jun  3 21:45:05.132: INFO: Waiting for pod var-expansion-4899544e-9d99-432a-9811-c3ae6118c931 to disappear
+Jun  3 21:45:05.136: INFO: Pod var-expansion-4899544e-9d99-432a-9811-c3ae6118c931 no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:45:05.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "var-expansion-4165" for this suite.
+Jun  3 21:45:11.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:45:11.249: INFO: namespace var-expansion-4165 deletion completed in 6.108575336s
+
+• [SLOW TEST:8.193 seconds]
+[k8s.io] Variable Expansion
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  works for multiple CRDs of same group but different versions [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:45:11.250: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] works for multiple CRDs of same group but different versions [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
+Jun  3 21:45:11.282: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
+Jun  3 21:45:25.054: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:45:28.655: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:45:42.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-2253" for this suite.
+Jun  3 21:45:48.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:45:48.927: INFO: namespace crd-publish-openapi-2253 deletion completed in 6.112258742s
+
+• [SLOW TEST:37.678 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  works for multiple CRDs of same group but different versions [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
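The CustomResourcePublishOpenAPI test above registers CRDs sharing one group across two versions and asserts both schemas appear in the aggregated OpenAPI document. A hand-written equivalent of the one-multiversion-CRD case (group, kind, and schema are all hypothetical; the test itself drives this through the e2e framework):

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com          # hypothetical group/kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true                 # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
```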
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl replace 
+  should update a single-container pod's image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:45:48.928: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Kubectl replace
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1704
+[It] should update a single-container pod's image  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: running the image docker.io/library/httpd:2.4.38-alpine
+Jun  3 21:45:48.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9886'
+Jun  3 21:45:49.343: INFO: stderr: ""
+Jun  3 21:45:49.343: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
+STEP: verifying the pod e2e-test-httpd-pod is running
+STEP: verifying the pod e2e-test-httpd-pod was created
+Jun  3 21:45:54.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pod e2e-test-httpd-pod --namespace=kubectl-9886 -o json'
+Jun  3 21:45:54.487: INFO: stderr: ""
+Jun  3 21:45:54.487: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-06-03T21:45:49Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-9886\",\n        \"resourceVersion\": \"167183\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9886/pods/e2e-test-httpd-pod\",\n        \"uid\": \"f776d66c-ab19-4833-bacc-01cc0e17ea2c\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-jpgdm\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"karbon-certification-ff5a6a-k8s-worker-1\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-jpgdm\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-jpgdm\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-03T21:45:49Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-03T21:45:51Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-03T21:45:51Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-06-03T21:45:49Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://3a7784fcb36216b6e611abdc7bcd1370f5a5106c1da584d3c9c5595f11526565\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-06-03T21:45:50Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.45.43.12\",\n        \"phase\": \"Running\",\n        \"podIP\": \"172.20.2.208\",\n        \"podIPs\": [\n            {\n                \"ip\": \"172.20.2.208\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-06-03T21:45:49Z\"\n    }\n}\n"
+STEP: replace the image in the pod
+Jun  3 21:45:54.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 replace -f - --namespace=kubectl-9886'
+Jun  3 21:45:54.775: INFO: stderr: ""
+Jun  3 21:45:54.775: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
+STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
+[AfterEach] Kubectl replace
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1709
+Jun  3 21:45:54.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete pods e2e-test-httpd-pod --namespace=kubectl-9886'
+Jun  3 21:46:04.282: INFO: stderr: ""
+Jun  3 21:46:04.282: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:46:04.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-9886" for this suite.
+Jun  3 21:46:10.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:46:10.393: INFO: namespace kubectl-9886 deletion completed in 6.103083011s
+
+• [SLOW TEST:21.465 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl replace
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700
+    should update a single-container pod's image  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] Secrets 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:46:10.393: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating secret with name s-test-opt-del-64a0ba72-6e0b-44dc-80fc-04331dc24b7d
+STEP: Creating secret with name s-test-opt-upd-b4e4ede4-2956-49b7-bf6c-c3b18a62d14d
+STEP: Creating the pod
+STEP: Deleting secret s-test-opt-del-64a0ba72-6e0b-44dc-80fc-04331dc24b7d
+STEP: Updating secret s-test-opt-upd-b4e4ede4-2956-49b7-bf6c-c3b18a62d14d
+STEP: Creating secret with name s-test-opt-create-35bea0b7-1e4d-47f9-8b71-ca350e44b0f3
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:47:34.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-8948" for this suite.
+Jun  3 21:47:47.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:47:47.119: INFO: namespace secrets-8948 deletion completed in 12.125049643s
+
+• [SLOW TEST:96.726 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
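The Secrets volume test above mounts optional secrets, then deletes one, updates another, and creates a third, waiting for each change to be reflected in the mounted files. The key detail is `optional: true`, which lets the pod start and keep running even when the referenced secret is absent. A minimal sketch with hypothetical names:

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-example   # hypothetical name
spec:
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/secret-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-del  # may be deleted later; the pod keeps running
      optional: true
```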
+SSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:47:47.120: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
+[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 21:47:47.167: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1425350-5ad5-49e1-b0b4-c180544053f6" in namespace "downward-api-4611" to be "success or failure"
+Jun  3 21:47:47.173: INFO: Pod "downwardapi-volume-c1425350-5ad5-49e1-b0b4-c180544053f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.308078ms
+Jun  3 21:47:49.177: INFO: Pod "downwardapi-volume-c1425350-5ad5-49e1-b0b4-c180544053f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009622557s
+STEP: Saw pod success
+Jun  3 21:47:49.177: INFO: Pod "downwardapi-volume-c1425350-5ad5-49e1-b0b4-c180544053f6" satisfied condition "success or failure"
+Jun  3 21:47:49.180: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-c1425350-5ad5-49e1-b0b4-c180544053f6 container client-container: 
+STEP: delete the pod
+Jun  3 21:47:49.202: INFO: Waiting for pod downwardapi-volume-c1425350-5ad5-49e1-b0b4-c180544053f6 to disappear
+Jun  3 21:47:49.205: INFO: Pod downwardapi-volume-c1425350-5ad5-49e1-b0b4-c180544053f6 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:47:49.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-4611" for this suite.
+Jun  3 21:47:55.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:47:55.308: INFO: namespace downward-api-4611 deletion completed in 6.098958156s
+
+• [SLOW TEST:8.189 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
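The Downward API volume test above leaves the container's CPU limit unset and asserts that the projected `limits.cpu` file reports the node's allocatable CPU instead. The projection is a `resourceFieldRef` item in a `downwardAPI` volume (the `client-container` name comes from the log; everything else is a hypothetical sketch):

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No cpu limit is set, so limits.cpu defaults to node allocatable CPU.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m              # report the value in millicores
```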
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should get a host IP [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:47:55.309: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
+[It] should get a host IP [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating pod
+Jun  3 21:47:57.368: INFO: Pod pod-hostip-76b25143-3445-480b-9d87-08478478b860 has hostIP: 10.45.43.12
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:47:57.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-3909" for this suite.
+Jun  3 21:48:09.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:48:09.472: INFO: namespace pods-3909 deletion completed in 12.100397255s
+
+• [SLOW TEST:14.164 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should get a host IP [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:48:09.473: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name configmap-test-upd-485a47df-6d60-4a13-8120-4fea2cf33ae7
+STEP: Creating the pod
+STEP: Updating configmap configmap-test-upd-485a47df-6d60-4a13-8120-4fea2cf33ae7
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:48:13.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-7264" for this suite.
+Jun  3 21:48:25.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:48:25.690: INFO: namespace configmap-7264 deletion completed in 12.115347741s
+
+• [SLOW TEST:16.217 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota 
+  should verify ResourceQuota with terminating scopes. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:48:25.691: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename resourcequota
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should verify ResourceQuota with terminating scopes. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a ResourceQuota with terminating scope
+STEP: Ensuring ResourceQuota status is calculated
+STEP: Creating a ResourceQuota with not terminating scope
+STEP: Ensuring ResourceQuota status is calculated
+STEP: Creating a long running pod
+STEP: Ensuring resource quota with not terminating scope captures the pod usage
+STEP: Ensuring resource quota with terminating scope ignored the pod usage
+STEP: Deleting the pod
+STEP: Ensuring resource quota status released the pod usage
+STEP: Creating a terminating pod
+STEP: Ensuring resource quota with terminating scope captures the pod usage
+STEP: Ensuring resource quota with not terminating scope ignored the pod usage
+STEP: Deleting the pod
+STEP: Ensuring resource quota status released the pod usage
+[AfterEach] [sig-api-machinery] ResourceQuota
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:48:41.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "resourcequota-8425" for this suite.
+Jun  3 21:48:47.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:48:47.940: INFO: namespace resourcequota-8425 deletion completed in 6.107193962s
+
+• [SLOW TEST:22.250 seconds]
+[sig-api-machinery] ResourceQuota
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should verify ResourceQuota with terminating scopes. [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
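The ResourceQuota test above creates one quota scoped to `Terminating` pods and one scoped to `NotTerminating`, then verifies a long-running pod is charged only against the latter and a pod with `activeDeadlineSeconds` only against the former. A sketch of the two quotas (names and hard limits are hypothetical):

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating          # hypothetical name
spec:
  hard:
    pods: "1"
  scopes:
  - Terminating                    # matches pods with activeDeadlineSeconds set
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating      # hypothetical name
spec:
  hard:
    pods: "1"
  scopes:
  - NotTerminating                 # matches long-running pods
```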
+SSSS
+------------------------------
+[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
+  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:48:47.940: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods Set QOS Class
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
+[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying QOS class is set on the pod
+[AfterEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:48:47.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-2248" for this suite.
+Jun  3 21:49:16.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:49:16.122: INFO: namespace pods-2248 deletion completed in 28.121220359s
+
+• [SLOW TEST:28.182 seconds]
+[k8s.io] [sig-node] Pods Extended
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  [k8s.io] Pods Set QOS Class
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+    should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
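The QOS Class test above submits a pod whose resource requests equal its limits for both CPU and memory, which is precisely the condition for the `Guaranteed` QoS class. A minimal sketch (name and quantities are hypothetical):

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example     # hypothetical name
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                      # equal to requests => status.qosClass: Guaranteed
        cpu: 100m
        memory: 100Mi
```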
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should honor timeout [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:49:16.123: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:49:16.768: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jun  3 21:49:18.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817756, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817756, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817756, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817756, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:49:21.798: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should honor timeout [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Setting timeout (1s) shorter than webhook latency (5s)
+STEP: Registering slow webhook via the AdmissionRegistration API
+STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
+STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
+STEP: Registering slow webhook via the AdmissionRegistration API
+STEP: Having no error when timeout is longer than webhook latency
+STEP: Registering slow webhook via the AdmissionRegistration API
+STEP: Having no error when timeout is empty (defaulted to 10s in v1)
+STEP: Registering slow webhook via the AdmissionRegistration API
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:49:33.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-747" for this suite.
+Jun  3 21:49:39.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:49:40.068: INFO: namespace webhook-747 deletion completed in 6.119853808s
+STEP: Destroying namespace "webhook-747-markers" for this suite.
+Jun  3 21:49:46.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:49:46.172: INFO: namespace webhook-747-markers deletion completed in 6.104093338s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:30.065 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should honor timeout [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
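The AdmissionWebhook test above registers a deliberately slow (5s) webhook and verifies the apiserver honors `timeoutSeconds`: a 1s timeout fails the request, while the same timeout with `failurePolicy: Ignore` lets it through. A sketch of such a registration (the `e2e-test-webhook` service name and `webhook-747` namespace come from the log; the webhook name, path, and rules are hypothetical):

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-example          # hypothetical name
webhooks:
- name: slow.example.com              # hypothetical name
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-747
      path: /always-allow-with-delay  # hypothetical path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  timeoutSeconds: 1                   # shorter than the webhook's 5s latency
  failurePolicy: Ignore               # so the timed-out call is tolerated
  sideEffects: None
  admissionReviewVersions: ["v1"]
```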
+S
+------------------------------
+[k8s.io] Security Context When creating a container with runAsUser 
+  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Security Context
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:49:46.187: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename security-context-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Security Context
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
+[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:49:46.234: INFO: Waiting up to 5m0s for pod "busybox-user-65534-69671be6-b92d-48f0-8ed7-38121867c8ce" in namespace "security-context-test-6878" to be "success or failure"
+Jun  3 21:49:46.238: INFO: Pod "busybox-user-65534-69671be6-b92d-48f0-8ed7-38121867c8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.806009ms
+Jun  3 21:49:48.245: INFO: Pod "busybox-user-65534-69671be6-b92d-48f0-8ed7-38121867c8ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010858769s
+Jun  3 21:49:50.250: INFO: Pod "busybox-user-65534-69671be6-b92d-48f0-8ed7-38121867c8ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016047811s
+Jun  3 21:49:50.250: INFO: Pod "busybox-user-65534-69671be6-b92d-48f0-8ed7-38121867c8ce" satisfied condition "success or failure"
+[AfterEach] [k8s.io] Security Context
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:49:50.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "security-context-test-6878" for this suite.
+Jun  3 21:49:56.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:49:56.371: INFO: namespace security-context-test-6878 deletion completed in 6.11590849s
+
+• [SLOW TEST:10.184 seconds]
+[k8s.io] Security Context
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  When creating a container with runAsUser
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:44
+    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
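The Security Context test above runs a container as UID 65534 (conventionally `nobody`) via `runAsUser` and waits for the pod to succeed. A minimal sketch (pod name and command are hypothetical):

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "id -u"]   # prints 65534
    securityContext:
      runAsUser: 65534
```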
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] NoExecuteTaintManager Single Pod [Serial] 
+  removing taint cancels eviction [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:49:56.372: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename taint-single-pod
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:164
+Jun  3 21:49:56.404: INFO: Waiting up to 1m0s for all nodes to be ready
+Jun  3 21:50:56.437: INFO: Waiting for terminating namespaces to be deleted...
+[It] removing taint cancels eviction [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:50:56.440: INFO: Starting informer...
+STEP: Starting pod...
+Jun  3 21:50:56.654: INFO: Pod is running on karbon-certification-ff5a6a-k8s-worker-1. Tainting Node
+STEP: Trying to apply a taint on the Node
+STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
+STEP: Waiting short time to make sure Pod is queued for deletion
+Jun  3 21:50:56.670: INFO: Pod wasn't evicted. Proceeding
+Jun  3 21:50:56.670: INFO: Removing taint from Node
+STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
+STEP: Waiting some time to make sure that toleration time passed.
+Jun  3 21:52:11.712: INFO: Pod wasn't evicted. Test successful
+[AfterEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:52:11.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "taint-single-pod-541" for this suite.
+Jun  3 21:52:23.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:52:23.815: INFO: namespace taint-single-pod-541 deletion completed in 12.096310921s
+
+• [SLOW TEST:147.443 seconds]
+[sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
+  removing taint cancels eviction [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
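The taint-manager test above applies the `kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute` taint (key and value as shown in the log), then removes it before the pod's toleration window elapses and verifies the queued eviction is cancelled. A sketch of a pod with such a time-bounded toleration (the 75s window is an assumption):

```yaml
# Illustrative sketch only, not part of the conformance log.
apiVersion: v1
kind: Pod
metadata:
  name: taint-eviction-example   # hypothetical name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  tolerations:
  - key: kubernetes.io/e2e-evict-taint-key
    operator: Equal
    value: evictTaintVal
    effect: NoExecute
    tolerationSeconds: 75        # assumed grace window before eviction
```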
+[sig-storage] Projected downwardAPI 
+  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:52:23.815: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
+[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 21:52:23.857: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c387eb98-05c4-4abe-b142-2886a9c0dfa3" in namespace "projected-4911" to be "success or failure"
+Jun  3 21:52:23.860: INFO: Pod "downwardapi-volume-c387eb98-05c4-4abe-b142-2886a9c0dfa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294768ms
+Jun  3 21:52:25.865: INFO: Pod "downwardapi-volume-c387eb98-05c4-4abe-b142-2886a9c0dfa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007356551s
+STEP: Saw pod success
+Jun  3 21:52:25.865: INFO: Pod "downwardapi-volume-c387eb98-05c4-4abe-b142-2886a9c0dfa3" satisfied condition "success or failure"
+Jun  3 21:52:25.868: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-c387eb98-05c4-4abe-b142-2886a9c0dfa3 container client-container: 
+STEP: delete the pod
+Jun  3 21:52:25.898: INFO: Waiting for pod downwardapi-volume-c387eb98-05c4-4abe-b142-2886a9c0dfa3 to disappear
+Jun  3 21:52:25.902: INFO: Pod downwardapi-volume-c387eb98-05c4-4abe-b142-2886a9c0dfa3 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:52:25.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-4911" for this suite.
+Jun  3 21:52:31.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:52:32.026: INFO: namespace projected-4911 deletion completed in 6.119880409s
+
+• [SLOW TEST:8.211 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
+  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
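+
+The downward API entry above creates a pod whose projected volume sets defaultMode and
+verifies the permissions of the resulting file. A minimal sketch, assuming busybox and a
+0400 mode (the concrete mode and names used by the test are not shown in the log):
+
+```yaml
+# Hypothetical pod: prints the octal mode of a projected downward API file;
+# with defaultMode 0400 the expected output is "400".
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downwardapi-defaultmode-demo
+spec:
+  containers:
+  - name: client-container
+    image: busybox:1.29
+    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      defaultMode: 0400
+      sources:
+      - downwardAPI:
+          items:
+          - path: podname
+            fieldRef:
+              fieldPath: metadata.name
+  restartPolicy: Never
+```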
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
+  should be able to convert from CR v1 to CR v2 [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:52:32.027: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename crd-webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
+STEP: Setting up server cert
+STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
+STEP: Deploying the custom resource conversion webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:52:32.744: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
+Jun  3 21:52:34.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817952, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817952, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817952, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726817952, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:52:37.807: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
+[It] should be able to convert from CR v1 to CR v2 [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:52:37.811: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Creating a v1 custom resource
+STEP: v2 custom resource should be converted
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:52:38.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-webhook-1546" for this suite.
+Jun  3 21:52:44.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:52:45.079: INFO: namespace crd-webhook-1546 deletion completed in 6.113649386s
+[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
+
+• [SLOW TEST:13.068 seconds]
+[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should be able to convert from CR v1 to CR v2 [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
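+
+The conversion entry above deploys a webhook backend, creates a v1 custom resource, and
+reads it back as v2. An abridged sketch of the v1beta1 CRD conversion stanza that wires
+two served versions to the webhook service named in the log (group, kind, and path are
+hypothetical; caBundle elided; not a complete working CRD):
+
+```yaml
+# Hypothetical CRD fragment: v1 is the storage version, v2 is served, and
+# conversion between them is delegated to the test's webhook service.
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: testcrds.stable.example.com
+spec:
+  group: stable.example.com
+  names:
+    kind: TestCrd
+    plural: testcrds
+  scope: Namespaced
+  preserveUnknownFields: false   # required for webhook conversion in v1beta1
+  versions:
+  - name: v1
+    served: true
+    storage: true
+  - name: v2
+    served: true
+    storage: false
+  conversion:
+    strategy: Webhook
+    webhookClientConfig:
+      service:
+        namespace: crd-webhook-1546
+        name: e2e-test-crd-conversion-webhook
+        path: /convert              # hypothetical path
+      caBundle: "<base64-encoded CA>"   # elided
+```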
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Guestbook application 
+  should create and stop a working application  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:52:45.095: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[It] should create and stop a working application  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating all guestbook components
+Jun  3 21:52:45.128: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: redis-slave
+  labels:
+    app: redis
+    role: slave
+    tier: backend
+spec:
+  ports:
+  - port: 6379
+  selector:
+    app: redis
+    role: slave
+    tier: backend
+
+Jun  3 21:52:45.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-4180'
+Jun  3 21:52:45.389: INFO: stderr: ""
+Jun  3 21:52:45.389: INFO: stdout: "service/redis-slave created\n"
+Jun  3 21:52:45.389: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: redis-master
+  labels:
+    app: redis
+    role: master
+    tier: backend
+spec:
+  ports:
+  - port: 6379
+    targetPort: 6379
+  selector:
+    app: redis
+    role: master
+    tier: backend
+
+Jun  3 21:52:45.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-4180'
+Jun  3 21:52:45.630: INFO: stderr: ""
+Jun  3 21:52:45.630: INFO: stdout: "service/redis-master created\n"
+Jun  3 21:52:45.630: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: frontend
+  labels:
+    app: guestbook
+    tier: frontend
+spec:
+  # if your cluster supports it, uncomment the following to automatically create
+  # an external load-balanced IP for the frontend service.
+  # type: LoadBalancer
+  ports:
+  - port: 80
+  selector:
+    app: guestbook
+    tier: frontend
+
+Jun  3 21:52:45.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-4180'
+Jun  3 21:52:45.850: INFO: stderr: ""
+Jun  3 21:52:45.850: INFO: stdout: "service/frontend created\n"
+Jun  3 21:52:45.850: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: guestbook
+      tier: frontend
+  template:
+    metadata:
+      labels:
+        app: guestbook
+        tier: frontend
+    spec:
+      containers:
+      - name: php-redis
+        image: gcr.io/google-samples/gb-frontend:v6
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        env:
+        - name: GET_HOSTS_FROM
+          value: dns
+          # If your cluster config does not include a dns service, then to
+          # instead access environment variables to find service host
+          # info, comment out the 'value: dns' line above, and uncomment the
+          # line below:
+          # value: env
+        ports:
+        - containerPort: 80
+
+Jun  3 21:52:45.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-4180'
+Jun  3 21:52:46.053: INFO: stderr: ""
+Jun  3 21:52:46.053: INFO: stdout: "deployment.apps/frontend created\n"
+Jun  3 21:52:46.053: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: redis-master
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: redis
+      role: master
+      tier: backend
+  template:
+    metadata:
+      labels:
+        app: redis
+        role: master
+        tier: backend
+    spec:
+      containers:
+      - name: master
+        image: docker.io/library/redis:5.0.5-alpine
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 6379
+
+Jun  3 21:52:46.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-4180'
+Jun  3 21:52:46.287: INFO: stderr: ""
+Jun  3 21:52:46.287: INFO: stdout: "deployment.apps/redis-master created\n"
+Jun  3 21:52:46.287: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: redis-slave
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: redis
+      role: slave
+      tier: backend
+  template:
+    metadata:
+      labels:
+        app: redis
+        role: slave
+        tier: backend
+    spec:
+      containers:
+      - name: slave
+        image: docker.io/library/redis:5.0.5-alpine
+        # We are only implementing the dns option of:
+        # https://github.com/kubernetes/examples/blob/97c7ed0eb6555a4b667d2877f965d392e00abc45/guestbook/redis-slave/run.sh
+        command: [ "redis-server", "--slaveof", "redis-master", "6379" ]
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        env:
+        - name: GET_HOSTS_FROM
+          value: dns
+          # If your cluster config does not include a dns service, then to
+          # instead access an environment variable to find the master
+          # service's host, comment out the 'value: dns' line above, and
+          # uncomment the line below:
+          # value: env
+        ports:
+        - containerPort: 6379
+
+Jun  3 21:52:46.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-4180'
+Jun  3 21:52:46.596: INFO: stderr: ""
+Jun  3 21:52:46.596: INFO: stdout: "deployment.apps/redis-slave created\n"
+STEP: validating guestbook app
+Jun  3 21:52:46.596: INFO: Waiting for all frontend pods to be Running.
+Jun  3 21:53:11.647: INFO: Waiting for frontend to serve content.
+Jun  3 21:53:11.668: INFO: Trying to add a new entry to the guestbook.
+Jun  3 21:53:11.682: INFO: Verifying that added entry can be retrieved.
+STEP: using delete to clean up resources
+Jun  3 21:53:11.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete --grace-period=0 --force -f - --namespace=kubectl-4180'
+Jun  3 21:53:11.824: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun  3 21:53:11.824: INFO: stdout: "service \"redis-slave\" force deleted\n"
+STEP: using delete to clean up resources
+Jun  3 21:53:11.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete --grace-period=0 --force -f - --namespace=kubectl-4180'
+Jun  3 21:53:11.939: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun  3 21:53:11.939: INFO: stdout: "service \"redis-master\" force deleted\n"
+STEP: using delete to clean up resources
+Jun  3 21:53:11.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete --grace-period=0 --force -f - --namespace=kubectl-4180'
+Jun  3 21:53:12.067: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun  3 21:53:12.067: INFO: stdout: "service \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Jun  3 21:53:12.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete --grace-period=0 --force -f - --namespace=kubectl-4180'
+Jun  3 21:53:12.170: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun  3 21:53:12.170: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Jun  3 21:53:12.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete --grace-period=0 --force -f - --namespace=kubectl-4180'
+Jun  3 21:53:12.288: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun  3 21:53:12.288: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
+STEP: using delete to clean up resources
+Jun  3 21:53:12.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 delete --grace-period=0 --force -f - --namespace=kubectl-4180'
+Jun  3 21:53:12.396: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun  3 21:53:12.396: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:53:12.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-4180" for this suite.
+Jun  3 21:53:40.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:53:40.513: INFO: namespace kubectl-4180 deletion completed in 28.112427023s
+
+• [SLOW TEST:55.419 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Guestbook application
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:333
+    should create and stop a working application  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:53:40.514: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
+Jun  3 21:53:40.553: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jun  3 21:53:40.568: INFO: Waiting for terminating namespaces to be deleted...
+Jun  3 21:53:40.571: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-master-0 before test
+Jun  3 21:53:40.586: INFO: kube-proxy-ds-qrgfl from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.586: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: kube-flannel-ds-hznhg from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.586: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: kube-apiserver-karbon-certification-ff5a6a-k8s-master-0 from kube-system started at 2020-06-02 22:11:48 +0000 UTC (3 container statuses recorded)
+Jun  3 21:53:40.586: INFO: 	Container kube-apiserver ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: 	Container kube-controller-manager ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: 	Container kube-scheduler ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: fluent-bit-mb264 from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.586: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: node-exporter-hkj7p from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.586: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: csi-node-ntnx-plugin-pdc8c from ntnx-system started at 2020-06-03 01:26:50 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.586: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-58wws from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.586: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:53:40.586: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:53:40.586: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-master-1 before test
+Jun  3 21:53:40.601: INFO: kube-dns-5c64dc6c6b-ls68z from kube-system started at 2020-06-02 22:16:18 +0000 UTC (3 container statuses recorded)
+Jun  3 21:53:40.601: INFO: 	Container dnsmasq ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: 	Container kubedns ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: 	Container sidecar ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: fluent-bit-zcqwz from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.601: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: node-exporter-dwrsb from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.601: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-sz7h8 from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.601: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:53:40.601: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: kube-apiserver-karbon-certification-ff5a6a-k8s-master-1 from kube-system started at 2020-06-02 22:13:08 +0000 UTC (3 container statuses recorded)
+Jun  3 21:53:40.601: INFO: 	Container kube-apiserver ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: 	Container kube-controller-manager ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: 	Container kube-scheduler ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: kube-proxy-ds-8hv5j from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.601: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: kube-flannel-ds-zdlj6 from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.601: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: csi-node-ntnx-plugin-6cg44 from ntnx-system started at 2020-06-03 01:27:02 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.601: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:53:40.601: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-0 before test
+Jun  3 21:53:40.617: INFO: csi-node-ntnx-plugin-zbw4j from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.617: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: alertmanager-main-1 from ntnx-system started at 2020-06-03 21:01:20 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.617: INFO: 	Container alertmanager ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: 	Container config-reloader ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: fluent-bit-gb59k from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.617: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: elasticsearch-logging-0 from ntnx-system started at 2020-06-02 22:17:12 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.617: INFO: 	Container elasticsearch-logging ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: node-exporter-5q9qc from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.617: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: kube-proxy-ds-qt528 from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.617: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: kube-flannel-ds-qnlzb from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.617: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: kubernetes-events-printer-5c6d46dfdb-zcvlt from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.617: INFO: 	Container kubernetes-events-printer ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-7btt6 from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.617: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:53:40.617: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:53:40.617: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-1 before test
+Jun  3 21:53:40.626: INFO: sonobuoy-e2e-job-5435c8b63156474a from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.626: INFO: 	Container e2e ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-szp8f from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.626: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:53:40.626: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: prometheus-k8s-1 from ntnx-system started at 2020-06-03 21:51:14 +0000 UTC (3 container statuses recorded)
+Jun  3 21:53:40.626: INFO: 	Container prometheus ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: node-exporter-qwbtg from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.626: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: kube-flannel-ds-lfq9z from kube-system started at 2020-06-03 21:51:28 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.626: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: kube-proxy-ds-fgf9r from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.626: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: csi-node-ntnx-plugin-b6lrd from ntnx-system started at 2020-06-03 21:50:58 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.626: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: sonobuoy from sonobuoy started at 2020-06-03 20:08:28 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.626: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: fluent-bit-f8th5 from ntnx-system started at 2020-06-03 21:51:04 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.626: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:53:40.626: INFO: 
+Logging pods the kubelet thinks is on node karbon-certification-ff5a6a-k8s-worker-2 before test
+Jun  3 21:53:40.644: INFO: prometheus-k8s-0 from ntnx-system started at 2020-06-02 22:20:28 +0000 UTC (3 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container prometheus ready: true, restart count 1
+Jun  3 21:53:40.644: INFO: 	Container prometheus-config-reloader ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: 	Container rules-configmap-reloader ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: kibana-logging-54b7d845-c94kw from ntnx-system started at 2020-06-03 21:01:17 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container kibana-logging ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: 	Container nginxhttp ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: csi-attacher-ntnx-plugin-0 from ntnx-system started at 2020-06-03 21:01:22 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container csi-attacher ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: 	Container ntnx-csi-plugin ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: kube-proxy-ds-gn6cv from kube-system started at 2020-06-02 22:15:45 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: csi-node-ntnx-plugin-wnbs7 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container csi-node-ntnx-plugin ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: 	Container driver-registrar ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: elasticsearch-curator-cron-1591142460-cj4wj from ntnx-system started at 2020-06-03 00:01:05 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container curator ready: false, restart count 0
+Jun  3 21:53:40.644: INFO: node-exporter-hs75m from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: 	Container node-exporter ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: alertmanager-main-0 from ntnx-system started at 2020-06-02 22:20:10 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container alertmanager ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: 	Container config-reloader ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: fluent-bit-zgt4s from ntnx-system started at 2020-06-02 22:17:03 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container fluent-bit ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: prometheus-operator-58f86dddd6-fkbmk from ntnx-system started at 2020-06-02 22:20:00 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container prometheus-operator ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: kube-state-metrics-5d45657948-qkv6t from ntnx-system started at 2020-06-02 22:19:59 +0000 UTC (4 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container addon-resizer ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: 	Container kube-rbac-proxy-main ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: 	Container kube-rbac-proxy-self ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: 	Container kube-state-metrics ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: sonobuoy-systemd-logs-daemon-set-65ffaae02d3a49ed-p8d7c from sonobuoy started at 2020-06-03 20:08:29 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun  3 21:53:40.644: INFO: 	Container systemd-logs ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: kube-flannel-ds-q4sbl from kube-system started at 2020-06-02 22:16:10 +0000 UTC (1 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container kube-flannel ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: csi-provisioner-ntnx-plugin-0 from ntnx-system started at 2020-06-02 22:16:38 +0000 UTC (2 container statuses recorded)
+Jun  3 21:53:40.644: INFO: 	Container csi-provisioner ready: true, restart count 0
+Jun  3 21:53:40.644: INFO: 	Container ntnx-csi-plugin ready: true, restart count 0
+[It] validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Trying to launch a pod without a label to get a node which can launch it.
+STEP: Explicitly delete pod here to free the resource it takes.
+STEP: Trying to apply a random label on the found node.
+STEP: verifying the node has the label kubernetes.io/e2e-85181225-e7c1-439a-a37c-28c6e6b53350 42
+STEP: Trying to relaunch the pod, now with labels.
+STEP: removing the label kubernetes.io/e2e-85181225-e7c1-439a-a37c-28c6e6b53350 off the node karbon-certification-ff5a6a-k8s-worker-1
+STEP: verifying the node doesn't have the label kubernetes.io/e2e-85181225-e7c1-439a-a37c-28c6e6b53350
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:53:44.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-8876" for this suite.
+Jun  3 21:53:58.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:53:58.820: INFO: namespace sched-pred-8876 deletion completed in 14.097178129s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
+
+• [SLOW TEST:18.306 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
+  validates that NodeSelector is respected if matching  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
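+
+The label key and value applied to karbon-certification-ff5a6a-k8s-worker-1 appear
+verbatim in the log above; a pod that can only schedule onto that node would carry a
+matching nodeSelector, roughly as follows (pod name and image are illustrative):
+
+```yaml
+# Hypothetical relaunched pod: nodeSelector matches the random label the
+# test applied to the chosen node, so scheduling must succeed there.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: with-labels-demo
+spec:
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:3.1
+  nodeSelector:
+    kubernetes.io/e2e-85181225-e7c1-439a-a37c-28c6e6b53350: "42"
+```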
+SSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with projected pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:53:58.820: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
+STEP: Setting up data
+[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating pod pod-subpath-test-projected-5rqk
+STEP: Creating a pod to test atomic-volume-subpath
+Jun  3 21:53:58.869: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5rqk" in namespace "subpath-590" to be "success or failure"
+Jun  3 21:53:58.873: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.820712ms
+Jun  3 21:54:00.878: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Running", Reason="", readiness=true. Elapsed: 2.008468181s
+Jun  3 21:54:02.882: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Running", Reason="", readiness=true. Elapsed: 4.012840235s
+Jun  3 21:54:04.887: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Running", Reason="", readiness=true. Elapsed: 6.017417611s
+Jun  3 21:54:06.891: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Running", Reason="", readiness=true. Elapsed: 8.021367247s
+Jun  3 21:54:08.895: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Running", Reason="", readiness=true. Elapsed: 10.025524985s
+Jun  3 21:54:10.899: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Running", Reason="", readiness=true. Elapsed: 12.02993024s
+Jun  3 21:54:12.903: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Running", Reason="", readiness=true. Elapsed: 14.033638601s
+Jun  3 21:54:14.908: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Running", Reason="", readiness=true. Elapsed: 16.03845616s
+Jun  3 21:54:16.912: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Running", Reason="", readiness=true. Elapsed: 18.042792606s
+Jun  3 21:54:18.916: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Running", Reason="", readiness=true. Elapsed: 20.046744653s
+Jun  3 21:54:20.920: INFO: Pod "pod-subpath-test-projected-5rqk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.050997818s
+STEP: Saw pod success
+Jun  3 21:54:20.920: INFO: Pod "pod-subpath-test-projected-5rqk" satisfied condition "success or failure"
+Jun  3 21:54:20.924: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-subpath-test-projected-5rqk container test-container-subpath-projected-5rqk: 
+STEP: delete the pod
+Jun  3 21:54:20.953: INFO: Waiting for pod pod-subpath-test-projected-5rqk to disappear
+Jun  3 21:54:20.956: INFO: Pod pod-subpath-test-projected-5rqk no longer exists
+STEP: Deleting pod pod-subpath-test-projected-5rqk
+Jun  3 21:54:20.956: INFO: Deleting pod "pod-subpath-test-projected-5rqk" in namespace "subpath-590"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:54:20.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-590" for this suite.
+Jun  3 21:54:26.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:54:27.077: INFO: namespace subpath-590 deletion completed in 6.113533401s
+
+• [SLOW TEST:28.256 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
+  Atomic writer volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
+    should support subpaths with projected pod [LinuxOnly] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
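+
+The subpath entry above keeps the pod Running for roughly 20 seconds while it repeatedly
+verifies the content of a file mounted from a projected volume via subPath, then exits
+successfully. A minimal sketch of that volume/mount shape (names and the backing
+ConfigMap are hypothetical):
+
+```yaml
+# Hypothetical pod: mounts a single key of a projected volume as a file
+# via subPath and reads it back in a loop before exiting.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-subpath-projected-demo
+spec:
+  containers:
+  - name: test-container
+    image: busybox:1.29
+    command: ["sh", "-c", "for i in $(seq 1 10); do cat /mnt/file.txt; sleep 2; done"]
+    volumeMounts:
+    - name: projected-vol
+      mountPath: /mnt/file.txt
+      subPath: file.txt
+  volumes:
+  - name: projected-vol
+    projected:
+      sources:
+      - configMap:
+          name: demo-config   # hypothetical ConfigMap holding key file.txt
+  restartPolicy: Never
+```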
+SSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  should have a working scale subresource [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:54:27.077: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
+STEP: Creating service test in namespace statefulset-4716
+[It] should have a working scale subresource [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating statefulset ss in namespace statefulset-4716
+Jun  3 21:54:27.135: INFO: Found 0 stateful pods, waiting for 1
+Jun  3 21:54:37.140: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: getting scale subresource
+STEP: updating a scale subresource
+STEP: verifying the statefulset Spec.Replicas was modified
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
+Jun  3 21:54:37.158: INFO: Deleting all statefulset in ns statefulset-4716
+Jun  3 21:54:37.162: INFO: Scaling statefulset ss to 0
+Jun  3 21:54:47.190: INFO: Waiting for statefulset status.replicas updated to 0
+Jun  3 21:54:47.193: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:54:47.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-4716" for this suite.
+Jun  3 21:54:53.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:54:53.324: INFO: namespace statefulset-4716 deletion completed in 6.113575224s
+
+• [SLOW TEST:26.247 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+    should have a working scale subresource [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
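+
+The scale subresource reads and writes the replica count through a dedicated Scale
+object rather than the full StatefulSet: the test GETs it from statefulsets/ss/scale,
+bumps spec.replicas, PUTs it back, and then checks Spec.Replicas on the StatefulSet
+itself. Roughly what that object looks like (the exact replica target is not shown in
+the log; status fields are server-managed and omitted):
+
+```yaml
+# Sketch of the Scale object exchanged via the /scale subresource.
+apiVersion: autoscaling/v1
+kind: Scale
+metadata:
+  name: ss
+  namespace: statefulset-4716
+spec:
+  replicas: 2   # hypothetical updated target
+```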
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:54:53.325: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:54:53.396: INFO: Create a RollingUpdate DaemonSet
+Jun  3 21:54:53.402: INFO: Check that daemon pods launch on every node of the cluster
+Jun  3 21:54:53.408: INFO: Number of nodes with available pods: 0
+Jun  3 21:54:53.408: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:54:54.420: INFO: Number of nodes with available pods: 0
+Jun  3 21:54:54.420: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:54:55.416: INFO: Number of nodes with available pods: 4
+Jun  3 21:54:55.416: INFO: Node karbon-certification-ff5a6a-k8s-master-0 is running more than one daemon pod
+Jun  3 21:54:56.417: INFO: Number of nodes with available pods: 5
+Jun  3 21:54:56.417: INFO: Number of running nodes: 5, number of available pods: 5
+Jun  3 21:54:56.417: INFO: Update the DaemonSet to trigger a rollout
+Jun  3 21:54:56.425: INFO: Updating DaemonSet daemon-set
+Jun  3 21:54:59.444: INFO: Roll back the DaemonSet before rollout is complete
+Jun  3 21:54:59.454: INFO: Updating DaemonSet daemon-set
+Jun  3 21:54:59.454: INFO: Make sure DaemonSet rollback is complete
+Jun  3 21:54:59.458: INFO: Wrong image for pod: daemon-set-lrk72. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
+Jun  3 21:54:59.458: INFO: Pod daemon-set-lrk72 is not available
+Jun  3 21:55:00.468: INFO: Wrong image for pod: daemon-set-lrk72. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
+Jun  3 21:55:00.468: INFO: Pod daemon-set-lrk72 is not available
+Jun  3 21:55:01.468: INFO: Pod daemon-set-7wprl is not available
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5016, will wait for the garbage collector to delete the pods
+Jun  3 21:55:01.540: INFO: Deleting DaemonSet.extensions daemon-set took: 9.498104ms
+Jun  3 21:55:01.641: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.309708ms
+Jun  3 21:55:14.345: INFO: Number of nodes with available pods: 0
+Jun  3 21:55:14.345: INFO: Number of running nodes: 0, number of available pods: 0
+Jun  3 21:55:14.347: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5016/daemonsets","resourceVersion":"169208"},"items":null}
+
+Jun  3 21:55:14.350: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5016/pods","resourceVersion":"169208"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:55:14.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-5016" for this suite.
+Jun  3 21:55:20.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:55:20.477: INFO: namespace daemonsets-5016 deletion completed in 6.105245078s
+
+• [SLOW TEST:27.152 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
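+
+The expected image docker.io/library/httpd:2.4.38-alpine and the bad image
+foo:non-existent both appear in the log above; the test creates a RollingUpdate
+DaemonSet, updates it to the bad image, and rolls it back before the rollout completes,
+so healthy pods are never restarted. A sketch of roughly that DaemonSet shape (labels
+and container name are hypothetical):
+
+```yaml
+# Hypothetical DaemonSet: the rollback restores this image after the
+# template is briefly updated to foo:non-existent.
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: daemon-set
+spec:
+  selector:
+    matchLabels:
+      daemonset-name: daemon-set
+  updateStrategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        daemonset-name: daemon-set
+    spec:
+      containers:
+      - name: app
+        image: docker.io/library/httpd:2.4.38-alpine
+```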
+SSSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:55:20.477: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating projection with secret that has name projected-secret-test-ab6cb18f-6455-45fb-ba6a-35cbaae61e8e
+STEP: Creating a pod to test consume secrets
+Jun  3 21:55:20.525: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9e03081-508a-4fe1-854b-840f858245ee" in namespace "projected-728" to be "success or failure"
+Jun  3 21:55:20.529: INFO: Pod "pod-projected-secrets-f9e03081-508a-4fe1-854b-840f858245ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.221844ms
+Jun  3 21:55:22.533: INFO: Pod "pod-projected-secrets-f9e03081-508a-4fe1-854b-840f858245ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007927755s
+STEP: Saw pod success
+Jun  3 21:55:22.533: INFO: Pod "pod-projected-secrets-f9e03081-508a-4fe1-854b-840f858245ee" satisfied condition "success or failure"
+Jun  3 21:55:22.537: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-projected-secrets-f9e03081-508a-4fe1-854b-840f858245ee container projected-secret-volume-test: 
+STEP: delete the pod
+Jun  3 21:55:22.561: INFO: Waiting for pod pod-projected-secrets-f9e03081-508a-4fe1-854b-840f858245ee to disappear
+Jun  3 21:55:22.564: INFO: Pod pod-projected-secrets-f9e03081-508a-4fe1-854b-840f858245ee no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:55:22.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-728" for this suite.
+Jun  3 21:55:28.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:55:28.678: INFO: namespace projected-728 deletion completed in 6.105914765s
+
+• [SLOW TEST:8.202 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
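+
+The projected-secret entry above mounts a Secret through a projected volume and checks
+that the container sees the expected file content. A minimal sketch of that shape
+(Secret name, key, and image are hypothetical):
+
+```yaml
+# Hypothetical pod: prints a secret key exposed through a projected volume.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-projected-secret-demo
+spec:
+  containers:
+  - name: projected-secret-volume-test
+    image: busybox:1.29
+    command: ["sh", "-c", "cat /etc/projected-secret/data-1"]
+    volumeMounts:
+    - name: secret-vol
+      mountPath: /etc/projected-secret
+  volumes:
+  - name: secret-vol
+    projected:
+      sources:
+      - secret:
+          name: demo-secret   # hypothetical Secret holding key data-1
+  restartPolicy: Never
+```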
+S
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:55:28.679: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name projected-configmap-test-volume-map-75a0e589-697b-4c6b-ba6e-fd2d34e492dd
+STEP: Creating a pod to test consume configMaps
+Jun  3 21:55:28.736: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-259d5852-3514-4b50-89f9-929d66255185" in namespace "projected-1714" to be "success or failure"
+Jun  3 21:55:28.742: INFO: Pod "pod-projected-configmaps-259d5852-3514-4b50-89f9-929d66255185": Phase="Pending", Reason="", readiness=false. Elapsed: 5.688917ms
+Jun  3 21:55:30.747: INFO: Pod "pod-projected-configmaps-259d5852-3514-4b50-89f9-929d66255185": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010819682s
+STEP: Saw pod success
+Jun  3 21:55:30.747: INFO: Pod "pod-projected-configmaps-259d5852-3514-4b50-89f9-929d66255185" satisfied condition "success or failure"
+Jun  3 21:55:30.750: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-projected-configmaps-259d5852-3514-4b50-89f9-929d66255185 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun  3 21:55:30.771: INFO: Waiting for pod pod-projected-configmaps-259d5852-3514-4b50-89f9-929d66255185 to disappear
+Jun  3 21:55:30.774: INFO: Pod pod-projected-configmaps-259d5852-3514-4b50-89f9-929d66255185 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:55:30.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-1714" for this suite.
+Jun  3 21:55:36.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:55:36.879: INFO: namespace projected-1714 deletion completed in 6.101441849s
+
+• [SLOW TEST:8.200 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
+  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
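+
+The "with mappings as non-root" variant above remaps a ConfigMap key to a custom path
+inside the volume and reads it from a container running as a non-root UID. A minimal
+sketch under those assumptions (ConfigMap name, key, path, and UID are hypothetical):
+
+```yaml
+# Hypothetical pod: reads a remapped configMap key as UID 1000.
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-projected-configmap-demo
+spec:
+  securityContext:
+    runAsUser: 1000
+  containers:
+  - name: projected-configmap-volume-test
+    image: busybox:1.29
+    command: ["sh", "-c", "cat /etc/projected-configmap/path/to/data-2"]
+    volumeMounts:
+    - name: configmap-vol
+      mountPath: /etc/projected-configmap
+  volumes:
+  - name: configmap-vol
+    projected:
+      sources:
+      - configMap:
+          name: demo-configmap   # hypothetical ConfigMap holding key data-2
+          items:
+          - key: data-2
+            path: path/to/data-2
+  restartPolicy: Never
+```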
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl cluster-info 
+  should check if Kubernetes master services is included in cluster-info  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:55:36.879: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: validating cluster-info
+Jun  3 21:55:36.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 cluster-info'
+Jun  3 21:55:37.025: INFO: stderr: ""
+Jun  3 21:55:37.025: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.19.0.1:443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.19.0.1:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:55:37.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-8235" for this suite.
+Jun  3 21:55:43.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:55:43.165: INFO: namespace kubectl-8235 deletion completed in 6.1336087s
+
+• [SLOW TEST:6.286 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl cluster-info
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:974
+    should check if Kubernetes master services is included in cluster-info  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:55:43.165: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
+[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 21:55:43.209: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
+Jun  3 21:55:43.216: INFO: Pod name sample-pod: Found 0 pods out of 1
+Jun  3 21:55:48.220: INFO: Pod name sample-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Jun  3 21:55:48.220: INFO: Creating deployment "test-rolling-update-deployment"
+Jun  3 21:55:48.225: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
+Jun  3 21:55:48.234: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
+Jun  3 21:55:50.242: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
+Jun  3 21:55:50.245: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
+Jun  3 21:55:50.259: INFO: Deployment "test-rolling-update-deployment":
+&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-3298 /apis/apps/v1/namespaces/deployment-3298/deployments/test-rolling-update-deployment 29323f00-0656-43d0-801c-24043eb9a697 169431 1 2020-06-03 21:55:48 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001da3c18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-06-03 21:55:48 +0000 UTC,LastTransitionTime:2020-06-03 21:55:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-55d946486" has successfully progressed.,LastUpdateTime:2020-06-03 21:55:49 +0000 UTC,LastTransitionTime:2020-06-03 21:55:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
+
+Jun  3 21:55:50.264: INFO: New ReplicaSet "test-rolling-update-deployment-55d946486" of Deployment "test-rolling-update-deployment":
+&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-55d946486  deployment-3298 /apis/apps/v1/namespaces/deployment-3298/replicasets/test-rolling-update-deployment-55d946486 0082e977-9243-4ede-b60c-b102634860f6 169420 1 2020-06-03 21:55:48 +0000 UTC   map[name:sample-pod pod-template-hash:55d946486] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 29323f00-0656-43d0-801c-24043eb9a697 0xc003df00f0 0xc003df00f1}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 55d946486,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:55d946486] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003df0158  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
+Jun  3 21:55:50.264: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
+Jun  3 21:55:50.264: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-3298 /apis/apps/v1/namespaces/deployment-3298/replicasets/test-rolling-update-controller 2c04b5b9-f263-49f5-a7b3-49f94e02cc6c 169430 2 2020-06-03 21:55:43 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 29323f00-0656-43d0-801c-24043eb9a697 0xc003df0027 0xc003df0028}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003df0088  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
+Jun  3 21:55:50.268: INFO: Pod "test-rolling-update-deployment-55d946486-x8qrg" is available:
+&Pod{ObjectMeta:{test-rolling-update-deployment-55d946486-x8qrg test-rolling-update-deployment-55d946486- deployment-3298 /api/v1/namespaces/deployment-3298/pods/test-rolling-update-deployment-55d946486-x8qrg c4f2f092-2a4e-4dcf-af4e-bc4122a2983b 169419 0 2020-06-03 21:55:48 +0000 UTC   map[name:sample-pod pod-template-hash:55d946486] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-55d946486 0082e977-9243-4ede-b60c-b102634860f6 0xc003df05d0 0xc003df05d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9jg9k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9jg9k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9jg9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:karbon-certification-ff5a6a-k8s-worker-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:55:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:55:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:55:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-06-03 21:55:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.45.43.12,PodIP:172.20.2.236,StartTime:2020-06-03 21:55:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-06-03 21:55:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://8b6f243834597971df2e5ca6041fd6e0d52a310b956aa611dcddaadd4f6d26e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:172.20.2.236,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:55:50.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-3298" for this suite.
+Jun  3 21:55:56.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:55:56.379: INFO: namespace deployment-3298 deletion completed in 6.106788015s
+
+• [SLOW TEST:13.214 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
+  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+[sig-network] Services 
+  should be able to change the type from ExternalName to ClusterIP [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:55:56.379: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
+[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating a service externalname-service with the type=ExternalName in namespace services-3166
+STEP: changing the ExternalName service to type=ClusterIP
+STEP: creating replication controller externalname-service in namespace services-3166
+I0603 21:55:56.448450      25 runners.go:184] Created replication controller with name: externalname-service, namespace: services-3166, replica count: 2
+Jun  3 21:55:59.499: INFO: Creating new exec pod
+I0603 21:55:59.499108      25 runners.go:184] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+Jun  3 21:56:02.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-3166 execpod5b4bk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
+Jun  3 21:56:02.983: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
+Jun  3 21:56:02.983: INFO: stdout: ""
+Jun  3 21:56:02.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 exec --namespace=services-3166 execpod5b4bk -- /bin/sh -x -c nc -zv -t -w 2 172.19.143.12 80'
+Jun  3 21:56:03.223: INFO: stderr: "+ nc -zv -t -w 2 172.19.143.12 80\nConnection to 172.19.143.12 80 port [tcp/http] succeeded!\n"
+Jun  3 21:56:03.223: INFO: stdout: ""
+Jun  3 21:56:03.223: INFO: Cleaning up the ExternalName to ClusterIP test service
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:56:03.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "services-3166" for this suite.
+Jun  3 21:56:09.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:56:09.370: INFO: namespace services-3166 deletion completed in 6.116097234s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
+
+• [SLOW TEST:12.991 seconds]
+[sig-network] Services
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
+  should be able to change the type from ExternalName to ClusterIP [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:56:09.371: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0777 on tmpfs
+Jun  3 21:56:09.414: INFO: Waiting up to 5m0s for pod "pod-b6241049-eee0-43d6-b40e-9be24aa7369c" in namespace "emptydir-2629" to be "success or failure"
+Jun  3 21:56:09.416: INFO: Pod "pod-b6241049-eee0-43d6-b40e-9be24aa7369c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264855ms
+Jun  3 21:56:11.422: INFO: Pod "pod-b6241049-eee0-43d6-b40e-9be24aa7369c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00733094s
+Jun  3 21:56:13.426: INFO: Pod "pod-b6241049-eee0-43d6-b40e-9be24aa7369c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012281596s
+STEP: Saw pod success
+Jun  3 21:56:13.427: INFO: Pod "pod-b6241049-eee0-43d6-b40e-9be24aa7369c" satisfied condition "success or failure"
+Jun  3 21:56:13.429: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-b6241049-eee0-43d6-b40e-9be24aa7369c container test-container: 
+STEP: delete the pod
+Jun  3 21:56:13.452: INFO: Waiting for pod pod-b6241049-eee0-43d6-b40e-9be24aa7369c to disappear
+Jun  3 21:56:13.455: INFO: Pod pod-b6241049-eee0-43d6-b40e-9be24aa7369c no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:56:13.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-2629" for this suite.
+Jun  3 21:56:19.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:56:19.569: INFO: namespace emptydir-2629 deletion completed in 6.110268269s
+
+• [SLOW TEST:10.198 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSS
+------------------------------
+[k8s.io] Container Runtime blackbox test on terminated container 
+  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:56:19.569: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-runtime
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: create the container
+STEP: wait for the container to reach Succeeded
+STEP: get the container status
+STEP: the container should be terminated
+STEP: the termination message should be set
+Jun  3 21:56:22.642: INFO: Expected: &{OK} to match Container's Termination Message: OK --
+STEP: delete the container
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:56:22.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-8582" for this suite.
+Jun  3 21:56:28.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:56:28.773: INFO: namespace container-runtime-8582 deletion completed in 6.111230479s
+
+• [SLOW TEST:9.204 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  blackbox test
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
+    on terminated container
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132
+      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+      /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSS
+------------------------------
+[k8s.io] Probing container 
+  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:56:28.773: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
+[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:57:28.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-5338" for this suite.
+Jun  3 21:57:56.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:57:56.941: INFO: namespace container-probe-5338 deletion completed in 28.108518052s
+
+• [SLOW TEST:88.168 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:57:56.941: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test emptydir 0777 on node default medium
+Jun  3 21:57:56.984: INFO: Waiting up to 5m0s for pod "pod-ae4f5d16-0c3d-4161-8cf1-2c918d83048d" in namespace "emptydir-9813" to be "success or failure"
+Jun  3 21:57:56.988: INFO: Pod "pod-ae4f5d16-0c3d-4161-8cf1-2c918d83048d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.56081ms
+Jun  3 21:57:58.992: INFO: Pod "pod-ae4f5d16-0c3d-4161-8cf1-2c918d83048d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007897752s
+STEP: Saw pod success
+Jun  3 21:57:58.992: INFO: Pod "pod-ae4f5d16-0c3d-4161-8cf1-2c918d83048d" satisfied condition "success or failure"
+Jun  3 21:57:58.995: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-ae4f5d16-0c3d-4161-8cf1-2c918d83048d container test-container: 
+STEP: delete the pod
+Jun  3 21:57:59.036: INFO: Waiting for pod pod-ae4f5d16-0c3d-4161-8cf1-2c918d83048d to disappear
+Jun  3 21:57:59.040: INFO: Pod pod-ae4f5d16-0c3d-4161-8cf1-2c918d83048d no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:57:59.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-9813" for this suite.
+Jun  3 21:58:05.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:58:05.151: INFO: namespace emptydir-9813 deletion completed in 6.105209249s
+
+• [SLOW TEST:8.209 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Downward API 
+  should provide pod UID as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:58:05.151: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide pod UID as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward api env vars
+Jun  3 21:58:05.188: INFO: Waiting up to 5m0s for pod "downward-api-f4bfbdf4-ffe2-4917-8b63-6dc47d887466" in namespace "downward-api-6095" to be "success or failure"
+Jun  3 21:58:05.191: INFO: Pod "downward-api-f4bfbdf4-ffe2-4917-8b63-6dc47d887466": Phase="Pending", Reason="", readiness=false. Elapsed: 3.027694ms
+Jun  3 21:58:07.196: INFO: Pod "downward-api-f4bfbdf4-ffe2-4917-8b63-6dc47d887466": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008339268s
+STEP: Saw pod success
+Jun  3 21:58:07.196: INFO: Pod "downward-api-f4bfbdf4-ffe2-4917-8b63-6dc47d887466" satisfied condition "success or failure"
+Jun  3 21:58:07.199: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downward-api-f4bfbdf4-ffe2-4917-8b63-6dc47d887466 container dapi-container: 
+STEP: delete the pod
+Jun  3 21:58:07.224: INFO: Waiting for pod downward-api-f4bfbdf4-ffe2-4917-8b63-6dc47d887466 to disappear
+Jun  3 21:58:07.226: INFO: Pod downward-api-f4bfbdf4-ffe2-4917-8b63-6dc47d887466 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:58:07.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-6095" for this suite.
+Jun  3 21:58:13.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:58:13.339: INFO: namespace downward-api-6095 deletion completed in 6.108393576s
+
+• [SLOW TEST:8.188 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
+  should provide pod UID as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:58:13.339: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Performing setup for networking test in namespace pod-network-test-5310
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun  3 21:58:13.373: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun  3 21:58:37.503: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.20.3.132 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5310 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:58:37.503: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:58:38.627: INFO: Found all expected endpoints: [netserver-0]
+Jun  3 21:58:38.632: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.20.1.25 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5310 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:58:38.632: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:58:39.759: INFO: Found all expected endpoints: [netserver-1]
+Jun  3 21:58:39.763: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.20.0.47 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5310 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:58:39.764: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:58:40.891: INFO: Found all expected endpoints: [netserver-2]
+Jun  3 21:58:40.894: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.20.2.244 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5310 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:58:40.894: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:58:42.024: INFO: Found all expected endpoints: [netserver-3]
+Jun  3 21:58:42.028: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 172.20.4.36 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5310 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun  3 21:58:42.028: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+Jun  3 21:58:43.160: INFO: Found all expected endpoints: [netserver-4]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:58:43.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-5310" for this suite.
+Jun  3 21:58:55.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:58:55.268: INFO: namespace pod-network-test-5310 deletion completed in 12.102730771s
+
+• [SLOW TEST:41.929 seconds]
+[sig-network] Networking
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:58:55.268: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating configMap with name configmap-test-volume-34153b2c-7dad-4144-9a0e-cc2609e0e12f
+STEP: Creating a pod to test consume configMaps
+Jun  3 21:58:55.333: INFO: Waiting up to 5m0s for pod "pod-configmaps-86050042-fdf2-48f3-98dc-d35326bfd679" in namespace "configmap-801" to be "success or failure"
+Jun  3 21:58:55.339: INFO: Pod "pod-configmaps-86050042-fdf2-48f3-98dc-d35326bfd679": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105018ms
+Jun  3 21:58:57.345: INFO: Pod "pod-configmaps-86050042-fdf2-48f3-98dc-d35326bfd679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011334342s
+STEP: Saw pod success
+Jun  3 21:58:57.345: INFO: Pod "pod-configmaps-86050042-fdf2-48f3-98dc-d35326bfd679" satisfied condition "success or failure"
+Jun  3 21:58:57.348: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod pod-configmaps-86050042-fdf2-48f3-98dc-d35326bfd679 container configmap-volume-test: 
+STEP: delete the pod
+Jun  3 21:58:57.368: INFO: Waiting for pod pod-configmaps-86050042-fdf2-48f3-98dc-d35326bfd679 to disappear
+Jun  3 21:58:57.371: INFO: Pod pod-configmaps-86050042-fdf2-48f3-98dc-d35326bfd679 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:58:57.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-801" for this suite.
+Jun  3 21:59:03.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:59:03.482: INFO: namespace configmap-801 deletion completed in 6.106839168s
+
+• [SLOW TEST:8.213 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+S
+------------------------------
+[sig-cli] Kubectl client Kubectl api-versions 
+  should check if v1 is in available api versions  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:59:03.482: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[It] should check if v1 is in available api versions  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: validating api versions
+Jun  3 21:59:03.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 api-versions'
+Jun  3 21:59:03.605: INFO: stderr: ""
+Jun  3 21:59:03.606: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nmonitoring.coreos.com/v1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1alpha1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:59:03.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-9388" for this suite.
+Jun  3 21:59:09.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:59:09.723: INFO: namespace kubectl-9388 deletion completed in 6.112488109s
+
+• [SLOW TEST:6.242 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Kubectl api-versions
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:738
+    should check if v1 is in available api versions  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
+  should mutate pod and apply defaults after mutation [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:59:09.724: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename webhook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
+STEP: Setting up server cert
+STEP: Create role binding to let webhook read extension-apiserver-authentication
+STEP: Deploying the webhook pod
+STEP: Wait for the deployment to be ready
+Jun  3 21:59:10.358: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+Jun  3 21:59:12.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726818350, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726818350, loc:(*time.Location)(0x789e8e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726818350, loc:(*time.Location)(0x789e8e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726818350, loc:(*time.Location)(0x789e8e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
+STEP: Deploying the webhook service
+STEP: Verifying the service has paired with the endpoint
+Jun  3 21:59:15.392: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should mutate pod and apply defaults after mutation [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Registering the mutating pod webhook via the AdmissionRegistration API
+STEP: create a pod that should be updated by the webhook
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:59:15.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "webhook-4440" for this suite.
+Jun  3 21:59:27.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:59:27.586: INFO: namespace webhook-4440 deletion completed in 12.106320732s
+STEP: Destroying namespace "webhook-4440-markers" for this suite.
+Jun  3 21:59:33.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 21:59:33.682: INFO: namespace webhook-4440-markers deletion completed in 6.096376077s
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103
+
+• [SLOW TEST:23.972 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  should mutate pod and apply defaults after mutation [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 21:59:33.696: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
+[It] should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating the pod
+Jun  3 21:59:36.265: INFO: Successfully updated pod "labelsupdate5a32404e-ae90-4cb4-97cc-cdf3a55e4e60"
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 21:59:38.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-9106" for this suite.
+Jun  3 22:00:06.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 22:00:06.400: INFO: namespace projected-9106 deletion completed in 28.108638239s
+
+• [SLOW TEST:32.704 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
+  should update labels on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 22:00:06.401: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
+[It] should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating a pod to test downward API volume plugin
+Jun  3 22:00:06.446: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94aa0779-8a53-494b-a12c-c8d668978158" in namespace "downward-api-1090" to be "success or failure"
+Jun  3 22:00:06.453: INFO: Pod "downwardapi-volume-94aa0779-8a53-494b-a12c-c8d668978158": Phase="Pending", Reason="", readiness=false. Elapsed: 6.893916ms
+Jun  3 22:00:08.459: INFO: Pod "downwardapi-volume-94aa0779-8a53-494b-a12c-c8d668978158": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013136317s
+STEP: Saw pod success
+Jun  3 22:00:08.459: INFO: Pod "downwardapi-volume-94aa0779-8a53-494b-a12c-c8d668978158" satisfied condition "success or failure"
+Jun  3 22:00:08.463: INFO: Trying to get logs from node karbon-certification-ff5a6a-k8s-worker-1 pod downwardapi-volume-94aa0779-8a53-494b-a12c-c8d668978158 container client-container: 
+STEP: delete the pod
+Jun  3 22:00:08.494: INFO: Waiting for pod downwardapi-volume-94aa0779-8a53-494b-a12c-c8d668978158 to disappear
+Jun  3 22:00:08.497: INFO: Pod downwardapi-volume-94aa0779-8a53-494b-a12c-c8d668978158 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 22:00:08.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-1090" for this suite.
+Jun  3 22:00:14.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 22:00:14.604: INFO: namespace downward-api-1090 deletion completed in 6.102096484s
+
+• [SLOW TEST:8.204 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Update Demo 
+  should do a rolling update of a replication controller  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 22:00:14.605: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
+[BeforeEach] Update Demo
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
+[It] should do a rolling update of a replication controller  [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: creating the initial replication controller
+Jun  3 22:00:14.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 create -f - --namespace=kubectl-279'
+Jun  3 22:00:14.897: INFO: stderr: ""
+Jun  3 22:00:14.897: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun  3 22:00:14.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-279'
+Jun  3 22:00:14.997: INFO: stderr: ""
+Jun  3 22:00:14.997: INFO: stdout: "update-demo-nautilus-gz9w2 update-demo-nautilus-m8ncd "
+Jun  3 22:00:14.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-gz9w2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-279'
+Jun  3 22:00:15.102: INFO: stderr: ""
+Jun  3 22:00:15.102: INFO: stdout: ""
+Jun  3 22:00:15.102: INFO: update-demo-nautilus-gz9w2 is created but not running
+Jun  3 22:00:20.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-279'
+Jun  3 22:00:20.195: INFO: stderr: ""
+Jun  3 22:00:20.195: INFO: stdout: "update-demo-nautilus-gz9w2 update-demo-nautilus-m8ncd "
+Jun  3 22:00:20.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-gz9w2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-279'
+Jun  3 22:00:20.286: INFO: stderr: ""
+Jun  3 22:00:20.286: INFO: stdout: "true"
+Jun  3 22:00:20.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-gz9w2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-279'
+Jun  3 22:00:20.391: INFO: stderr: ""
+Jun  3 22:00:20.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun  3 22:00:20.391: INFO: validating pod update-demo-nautilus-gz9w2
+Jun  3 22:00:20.396: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun  3 22:00:20.396: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun  3 22:00:20.396: INFO: update-demo-nautilus-gz9w2 is verified up and running
+Jun  3 22:00:20.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-m8ncd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-279'
+Jun  3 22:00:20.485: INFO: stderr: ""
+Jun  3 22:00:20.485: INFO: stdout: "true"
+Jun  3 22:00:20.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-nautilus-m8ncd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-279'
+Jun  3 22:00:20.586: INFO: stderr: ""
+Jun  3 22:00:20.586: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun  3 22:00:20.586: INFO: validating pod update-demo-nautilus-m8ncd
+Jun  3 22:00:20.592: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun  3 22:00:20.593: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun  3 22:00:20.593: INFO: update-demo-nautilus-m8ncd is verified up and running
+STEP: rolling-update to new replication controller
+Jun  3 22:00:20.596: INFO: scanned /root for discovery docs: 
+Jun  3 22:00:20.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-279'
+Jun  3 22:00:43.117: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
+Jun  3 22:00:43.118: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun  3 22:00:43.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-279'
+Jun  3 22:00:43.220: INFO: stderr: ""
+Jun  3 22:00:43.220: INFO: stdout: "update-demo-kitten-9djn5 update-demo-kitten-khkr9 update-demo-nautilus-gz9w2 "
+STEP: Replicas for name=update-demo: expected=2 actual=3
+Jun  3 22:00:48.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-279'
+Jun  3 22:00:48.320: INFO: stderr: ""
+Jun  3 22:00:48.320: INFO: stdout: "update-demo-kitten-9djn5 update-demo-kitten-khkr9 "
+Jun  3 22:00:48.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-kitten-9djn5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-279'
+Jun  3 22:00:48.418: INFO: stderr: ""
+Jun  3 22:00:48.418: INFO: stdout: "true"
+Jun  3 22:00:48.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-kitten-9djn5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-279'
+Jun  3 22:00:48.513: INFO: stderr: ""
+Jun  3 22:00:48.513: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
+Jun  3 22:00:48.513: INFO: validating pod update-demo-kitten-9djn5
+Jun  3 22:00:48.520: INFO: got data: {
+  "image": "kitten.jpg"
+}
+
+Jun  3 22:00:48.520: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
+Jun  3 22:00:48.520: INFO: update-demo-kitten-9djn5 is verified up and running
+Jun  3 22:00:48.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-kitten-khkr9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-279'
+Jun  3 22:00:48.609: INFO: stderr: ""
+Jun  3 22:00:48.609: INFO: stdout: "true"
+Jun  3 22:00:48.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 get pods update-demo-kitten-khkr9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-279'
+Jun  3 22:00:48.706: INFO: stderr: ""
+Jun  3 22:00:48.706: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
+Jun  3 22:00:48.706: INFO: validating pod update-demo-kitten-khkr9
+Jun  3 22:00:48.712: INFO: got data: {
+  "image": "kitten.jpg"
+}
+
+Jun  3 22:00:48.712: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
+Jun  3 22:00:48.712: INFO: update-demo-kitten-khkr9 is verified up and running
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 22:00:48.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-279" for this suite.
+Jun  3 22:01:16.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 22:01:16.822: INFO: namespace kubectl-279 deletion completed in 28.105997751s
+
+• [SLOW TEST:62.217 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  Update Demo
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275
+    should do a rolling update of a replication controller  [Conformance]
+    /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
+  works for CRD without validation schema [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 22:01:16.822: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename crd-publish-openapi
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] works for CRD without validation schema [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+Jun  3 22:01:16.853: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
+Jun  3 22:01:20.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-9962 create -f -'
+Jun  3 22:01:21.054: INFO: stderr: ""
+Jun  3 22:01:21.054: INFO: stdout: "e2e-test-crd-publish-openapi-7475-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
+Jun  3 22:01:21.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-9962 delete e2e-test-crd-publish-openapi-7475-crds test-cr'
+Jun  3 22:01:21.162: INFO: stderr: ""
+Jun  3 22:01:21.162: INFO: stdout: "e2e-test-crd-publish-openapi-7475-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
+Jun  3 22:01:21.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-9962 apply -f -'
+Jun  3 22:01:21.367: INFO: stderr: ""
+Jun  3 22:01:21.367: INFO: stdout: "e2e-test-crd-publish-openapi-7475-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
+Jun  3 22:01:21.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 --namespace=crd-publish-openapi-9962 delete e2e-test-crd-publish-openapi-7475-crds test-cr'
+Jun  3 22:01:21.519: INFO: stderr: ""
+Jun  3 22:01:21.519: INFO: stdout: "e2e-test-crd-publish-openapi-7475-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
+STEP: kubectl explain works to explain CR without validation schema
+Jun  3 22:01:21.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-005848369 explain e2e-test-crd-publish-openapi-7475-crds'
+Jun  3 22:01:21.703: INFO: stderr: ""
+Jun  3 22:01:21.703: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7475-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 22:01:25.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "crd-publish-openapi-9962" for this suite.
+Jun  3 22:01:31.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 22:01:31.455: INFO: namespace crd-publish-openapi-9962 deletion completed in 6.105017494s
+
+• [SLOW TEST:14.632 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
+  works for CRD without validation schema [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
+STEP: Creating a kubernetes client
+Jun  3 22:01:31.456: INFO: >>> kubeConfig: /tmp/kubeconfig-005848369
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
+[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+STEP: Creating pod liveness-782267b3-40ee-4db2-8ae2-da04bcf9d995 in namespace container-probe-8614
+Jun  3 22:01:33.503: INFO: Started pod liveness-782267b3-40ee-4db2-8ae2-da04bcf9d995 in namespace container-probe-8614
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun  3 22:01:33.506: INFO: Initial restart count of pod liveness-782267b3-40ee-4db2-8ae2-da04bcf9d995 is 0
+Jun  3 22:01:51.550: INFO: Restart count of pod container-probe-8614/liveness-782267b3-40ee-4db2-8ae2-da04bcf9d995 is now 1 (18.044014448s elapsed)
+Jun  3 22:02:11.595: INFO: Restart count of pod container-probe-8614/liveness-782267b3-40ee-4db2-8ae2-da04bcf9d995 is now 2 (38.088267127s elapsed)
+Jun  3 22:02:31.639: INFO: Restart count of pod container-probe-8614/liveness-782267b3-40ee-4db2-8ae2-da04bcf9d995 is now 3 (58.132150481s elapsed)
+Jun  3 22:02:51.686: INFO: Restart count of pod container-probe-8614/liveness-782267b3-40ee-4db2-8ae2-da04bcf9d995 is now 4 (1m18.179937472s elapsed)
+Jun  3 22:03:55.834: INFO: Restart count of pod container-probe-8614/liveness-782267b3-40ee-4db2-8ae2-da04bcf9d995 is now 5 (2m22.327581144s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
+Jun  3 22:03:55.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-8614" for this suite.
+Jun  3 22:04:01.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun  3 22:04:01.964: INFO: namespace container-probe-8614 deletion completed in 6.11129238s
+
+• [SLOW TEST:150.508 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
+  should have monotonically increasing restart count [NodeConformance] [Conformance]
+  /workspace/anago-v1.16.8-beta.0.65+ef1ba35b1a4560/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
+------------------------------
+SSJun  3 22:04:01.964: INFO: Running AfterSuite actions on all nodes
+Jun  3 22:04:01.964: INFO: Running AfterSuite actions on node 1
+Jun  3 22:04:01.964: INFO: Skipping dumping logs from cluster
+
+Ran 276 of 4731 Specs in 6927.468 seconds
+SUCCESS! -- 276 Passed | 0 Failed | 0 Pending | 4455 Skipped
+PASS
+
+Ginkgo ran 1 suite in 1h55m28.988910565s
+Test Suite Passed
diff --git a/v1.16/ntnx-karbon/junit_01.xml b/v1.16/ntnx-karbon/junit_01.xml
new file mode 100644
index 0000000000..a116094b53
--- /dev/null
+++ b/v1.16/ntnx-karbon/junit_01.xml
@@ -0,0 +1,13644 @@
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      