
Prevent successful containers from restarting with OnFailure restart policy #46

Closed
wants to merge 6,638 commits

Conversation

joelsmith

What this PR does / why we need it:
This is a follow-on to kubernetes#54597, which ensures that its validation
also applies to pods with a restart policy of OnFailure. This
deficiency was pointed out by @smarterclayton here:
kubernetes#54530 (comment)
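
For context on the behavior being enforced, here is a minimal, hypothetical Go sketch (assumed type and helper names, not this PR's actual diff): with an OnFailure restart policy, a container that exits successfully must stay terminated, while a failed one may be restarted.

```go
package main

import "fmt"

// RestartPolicy mirrors the pod-level restart policies by name only;
// it is a stand-in for the real Kubernetes API type.
type RestartPolicy string

const (
	RestartPolicyAlways    RestartPolicy = "Always"
	RestartPolicyOnFailure RestartPolicy = "OnFailure"
	RestartPolicyNever     RestartPolicy = "Never"
)

// shouldRestart illustrates the rule: with OnFailure, only containers
// that exited with a non-zero code are restarted.
func shouldRestart(policy RestartPolicy, exitCode int) bool {
	switch policy {
	case RestartPolicyAlways:
		return true
	case RestartPolicyOnFailure:
		return exitCode != 0 // a successful container stays terminated
	default: // RestartPolicyNever
		return false
	}
}

func main() {
	fmt.Println(shouldRestart(RestartPolicyOnFailure, 0)) // false: do not restart
	fmt.Println(shouldRestart(RestartPolicyOnFailure, 1)) // true: restart on failure
}
```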

Which issue this PR fixes: This is another fix to address kubernetes#54499

Release note:

NONE

Kubernetes Submit Queue and others added 30 commits October 23, 2017 10:07
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Use CIDR-aware proxy resolver for SPDY RoundTripper

**What this PR does / why we need it**: `kubectl attach`, for example, doesn't work if NO_PROXY specifies the API endpoint IP in CIDR notation.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes kubernetes#54407

**Special notes for your reviewer**: It would potentially be good to get this change into 1.8.x as well.

**Release note**:
```release-note
- API machinery's httpstream/spdy calls now support CIDR notation for NO_PROXY
```
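
To make the intended NO_PROXY semantics concrete, here is a small, self-contained Go sketch (hypothetical helper, not the actual httpstream/spdy code): a NO_PROXY entry written in CIDR notation should bypass the proxy for any host IP inside that range, not only for an exact string match.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// shouldBypassProxy is a hypothetical sketch of CIDR-aware NO_PROXY matching:
// a CIDR entry bypasses the proxy for any host IP inside that range, and other
// entries are compared as exact host matches.
func shouldBypassProxy(host, noProxy string) bool {
	ip := net.ParseIP(host)
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if _, cidr, err := net.ParseCIDR(entry); err == nil {
			if ip != nil && cidr.Contains(ip) {
				return true
			}
			continue
		}
		if entry == host {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldBypassProxy("10.192.57.171", "10.0.0.0/8,example.com")) // true
	fmt.Println(shouldBypassProxy("8.8.8.8", "10.0.0.0/8"))                   // false
}
```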
`kubectl alpha diff` lets you diff your resources against the live
resources, or against the last-applied configuration, or even preview what changes would be
applied to the cluster.

This is still quite premature, and mostly untested.
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Automatic merge from submit-queue (batch tested with PRs 54363, 54333). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

[images/hyperkube] add kube-aggregator link

**What this PR does / why we need it**:
Add a kube-aggregator link.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
…lver

Automatic merge from submit-queue (batch tested with PRs 54363, 54333). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Ensure port on resolved service host

The resolved host should include a port so it can be used by dialers directly. It's also unnecessary to reparse the URL when constructing it directly.

```release-note
NONE
```
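As a rough illustration of the idea (hypothetical helper name, not the code in this commit), ensuring a port on a resolved host might look like:

```go
package main

import (
	"fmt"
	"net"
)

// ensureHostPort returns the host unchanged if it already carries a port,
// and otherwise appends the given default port so dialers can use it directly.
func ensureHostPort(host, defaultPort string) string {
	if _, _, err := net.SplitHostPort(host); err == nil {
		return host // already host:port
	}
	return net.JoinHostPort(host, defaultPort)
}

func main() {
	fmt.Println(ensureHostPort("10.0.0.1", "443"))      // 10.0.0.1:443
	fmt.Println(ensureHostPort("10.0.0.1:8443", "443")) // 10.0.0.1:8443
}
```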
addressed gnufied's review comments

addressed Michelle Au's review comments
…ress-merge

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

E2E stress test for vSphere Cloud Provider Volume lifecycle operations

**What this PR does / why we need it**:
This PR adds an E2E stress test for the vSphere Cloud Provider that attaches/detaches/deletes volumes in parallel across multiple threads, based on user-configurable values for the number of threads and the number of iterations per thread.

The test performs the following tasks (a structural sketch of the loop follows the list):

- Create storage classes of 4 categories (default, SC with a non-default datastore, SC with an SPBM policy, SC with VSAN storage capabilities).
- Read VCP_STRESS_INSTANCES and VCP_STRESS_ITERATIONS from the system environment.
- Launch a goroutine per instance for volume lifecycle operations.
- Each goroutine instance iterates n times, where n is read from the VCP_STRESS_ITERATIONS environment variable.
- Each iteration creates 1 PVC and 1 pod using the provisioned PV, verifies the disk is attached to the node, verifies the pod can access the volume, deletes the pod, and finally deletes the PVC.
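
A minimal Go sketch of that structure, assuming a placeholder `performVolumeLifecycle` step in place of the real framework calls:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"sync"
)

// performVolumeLifecycle is a placeholder for the real create/verify/delete
// steps (PVC, pod, attach check, access check, cleanup).
func performVolumeLifecycle(instance, iteration int) {
	fmt.Printf("[Thread:%d], Iteration: [%d] : create PVC, create pod, verify, delete pod, delete PVC\n", instance, iteration)
}

func main() {
	// Read the stress parameters from the environment, as the test does.
	instances, _ := strconv.Atoi(os.Getenv("VCP_STRESS_INSTANCES"))
	iterations, _ := strconv.Atoi(os.Getenv("VCP_STRESS_ITERATIONS"))

	var wg sync.WaitGroup
	for i := 1; i <= instances; i++ {
		wg.Add(1)
		go func(instance int) {
			defer wg.Done()
			for j := 1; j <= iterations; j++ {
				performVolumeLifecycle(instance, j)
			}
		}(i)
	}
	wg.Wait()
}
```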

**Which issue this PR fixes**:
fixes https://github.com/vmware/kubernetes/issues/297

**Special notes for your reviewer**:
Test Logs
```
# export VSPHERE_SPBM_POLICY_NAME=gold
# export VSPHERE_DATASTORE=vsanDatastore
# export VCP_STRESS_INSTANCES=5
# export VCP_STRESS_ITERATIONS=2
# go run hack/e2e.go --check-version-skew=false -v -test '--test_args=--ginkgo.focus=vsphere\sstress\stests'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build278564968/command-line-arguments/_obj/exe/e2e:
  -get
    	go get -u kubetest if old or not installed (default true)
  -old duration
    	Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/09 17:50:58 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/09 17:50:58 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/09 17:50:58 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/09 17:50:58 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/09 17:50:58 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vsphere\sstress\stests...
2017/10/09 17:50:58 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/09 17:50:59 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 368.788119ms
2017/10/09 17:50:59 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.16775+93408b3d08957e", GitCommit:"93408b3d08957ea52f587dadbe06850af860ab71", GitTreeState:"clean", BuildDate:"2017-10-10T00:41:24Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.838+9782a5a0a9c351", GitCommit:"9782a5a0a9c3517c5dc35e7826dfcab963cf3d9c", GitTreeState:"clean", BuildDate:"2017-10-09T07:15:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/09 17:50:59 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 304.191318ms
2017/10/09 17:50:59 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vsphere\sstress\stests
Conformance test: not doing test setup.
Oct  9 17:51:01.086: INFO: Overriding default scale value of zero to 1
Oct  9 17:51:01.086: INFO: Overriding default milliseconds value of zero to 5000
I1009 17:51:01.327180   15282 e2e.go:369] Starting e2e run "15d29041-ad55-11e7-9400-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1507596660 - Will randomize all specs
Will run 1 of 701 specs

Oct  9 17:51:01.370: INFO: >>> kubeConfig: /root/.kube/config
Oct  9 17:51:01.377: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct  9 17:51:01.413: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct  9 17:51:01.543: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct  9 17:51:01.543: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct  9 17:51:01.547: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct  9 17:51:01.548: INFO: Dumping network health container logs from all nodes...
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vsphere cloud provider stress [Feature:vsphere] 
  vsphere stress tests
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_stress.go:125
[BeforeEach] [sig-storage] vsphere cloud provider stress [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct  9 17:51:01.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vsphere cloud provider stress [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_stress.go:72
[It] vsphere stress tests
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_stress.go:125
STEP: Creating Storage Class : sc-default
STEP: Creating Storage Class : sc-vsan
STEP: Creating Storage Class : sc-spbm
STEP: Creating Storage Class : sc-user-specified-ds
STEP: Instance: [Thread:1], Iteration: [1] : Creating PVC using the Storage Class: sc-default
STEP: Instance: [Thread:1], Iteration: [1] : Waiting for claim: pvc-rhvzf to be in bound phase
Oct  9 17:51:01.895: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-rhvzf to have phase Bound
STEP: Instance: [Thread:2], Iteration: [1] : Creating PVC using the Storage Class: sc-vsan
Oct  9 17:51:01.899: INFO: PersistentVolumeClaim pvc-rhvzf found but phase is Pending instead of Bound.
STEP: Instance: [Thread:3], Iteration: [1] : Creating PVC using the Storage Class: sc-spbm
STEP: Instance: [Thread:2], Iteration: [1] : Waiting for claim: pvc-zvs8j to be in bound phase
Oct  9 17:51:01.915: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zvs8j to have phase Bound
STEP: Instance: [Thread:3], Iteration: [1] : Waiting for claim: pvc-2xqld to be in bound phase
Oct  9 17:51:01.916: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-2xqld to have phase Bound
Oct  9 17:51:01.921: INFO: PersistentVolumeClaim pvc-2xqld found but phase is Pending instead of Bound.
Oct  9 17:51:01.921: INFO: PersistentVolumeClaim pvc-zvs8j found but phase is Pending instead of Bound.
STEP: Instance: [Thread:5], Iteration: [1] : Creating PVC using the Storage Class: sc-default
STEP: Instance: [Thread:4], Iteration: [1] : Creating PVC using the Storage Class: sc-user-specified-ds
STEP: Instance: [Thread:5], Iteration: [1] : Waiting for claim: pvc-5m5qz to be in bound phase
Oct  9 17:51:01.940: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-5m5qz to have phase Bound
STEP: Instance: [Thread:4], Iteration: [1] : Waiting for claim: pvc-jwhb7 to be in bound phase
Oct  9 17:51:01.949: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jwhb7 to have phase Bound
Oct  9 17:51:01.949: INFO: PersistentVolumeClaim pvc-5m5qz found but phase is Pending instead of Bound.
Oct  9 17:51:01.958: INFO: PersistentVolumeClaim pvc-jwhb7 found but phase is Pending instead of Bound.
Oct  9 17:51:03.905: INFO: PersistentVolumeClaim pvc-rhvzf found but phase is Pending instead of Bound.
Oct  9 17:51:03.925: INFO: PersistentVolumeClaim pvc-2xqld found but phase is Pending instead of Bound.
Oct  9 17:51:03.926: INFO: PersistentVolumeClaim pvc-zvs8j found but phase is Pending instead of Bound.
Oct  9 17:51:03.955: INFO: PersistentVolumeClaim pvc-5m5qz found but phase is Pending instead of Bound.
Oct  9 17:51:03.964: INFO: PersistentVolumeClaim pvc-jwhb7 found and phase=Bound (2.014321484s)
STEP: Instance: [Thread:4], Iteration: [1] : Creating Pod using the claim: pvc-jwhb7
Oct  9 17:51:05.910: INFO: PersistentVolumeClaim pvc-rhvzf found but phase is Pending instead of Bound.
Oct  9 17:51:05.930: INFO: PersistentVolumeClaim pvc-2xqld found but phase is Pending instead of Bound.
Oct  9 17:51:05.930: INFO: PersistentVolumeClaim pvc-zvs8j found but phase is Pending instead of Bound.
Oct  9 17:51:05.960: INFO: PersistentVolumeClaim pvc-5m5qz found but phase is Pending instead of Bound.
Oct  9 17:51:07.915: INFO: PersistentVolumeClaim pvc-rhvzf found and phase=Bound (6.020575276s)
STEP: Instance: [Thread:1], Iteration: [1] : Creating Pod using the claim: pvc-rhvzf
Oct  9 17:51:07.934: INFO: PersistentVolumeClaim pvc-2xqld found but phase is Pending instead of Bound.
Oct  9 17:51:07.936: INFO: PersistentVolumeClaim pvc-zvs8j found but phase is Pending instead of Bound.
Oct  9 17:51:07.968: INFO: PersistentVolumeClaim pvc-5m5qz found and phase=Bound (6.027676262s)
STEP: Instance: [Thread:5], Iteration: [1] : Creating Pod using the claim: pvc-5m5qz
Oct  9 17:51:09.940: INFO: PersistentVolumeClaim pvc-zvs8j found but phase is Pending instead of Bound.
Oct  9 17:51:09.940: INFO: PersistentVolumeClaim pvc-2xqld found but phase is Pending instead of Bound.
Oct  9 17:51:11.945: INFO: PersistentVolumeClaim pvc-2xqld found but phase is Pending instead of Bound.
Oct  9 17:51:11.946: INFO: PersistentVolumeClaim pvc-zvs8j found but phase is Pending instead of Bound.
Oct  9 17:51:13.950: INFO: PersistentVolumeClaim pvc-2xqld found but phase is Pending instead of Bound.
Oct  9 17:51:13.951: INFO: PersistentVolumeClaim pvc-zvs8j found but phase is Pending instead of Bound.
Oct  9 17:51:15.954: INFO: PersistentVolumeClaim pvc-zvs8j found but phase is Pending instead of Bound.
Oct  9 17:51:15.954: INFO: PersistentVolumeClaim pvc-2xqld found but phase is Pending instead of Bound.
STEP: Instance: [Thread:4], Iteration: [1] : Waiting for the Pod: pvc-tester-rptwh to be in the running state
STEP: Instance: [Thread:4], Iteration: [1] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-16804ea2-ad55-11e7-a775-0050569cce2c.vmdk is attached to the node VM: kubernetes-node1
STEP: Instance: [Thread:4], Iteration: [1] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-16804ea2-ad55-11e7-a775-0050569cce2c.vmdk is accessible in the pod: pvc-tester-rptwh
STEP: Verify the volume is accessible and available in the pod
Oct  9 17:51:16.262: INFO: Running '/root/divyenp/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.192.57.171 --kubeconfig=/root/.kube/config exec pvc-tester-rptwh --namespace=e2e-tests-vcp-stress-dxrkx -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct  9 17:51:16.752: INFO: stderr: ""
Oct  9 17:51:16.752: INFO: stdout: ""
STEP: Instance: [Thread:4], Iteration: [1] : Deleting pod: pvc-tester-rptwh
Oct  9 17:51:16.752: INFO: Deleting pod pvc-tester-rptwh
Oct  9 17:51:16.761: INFO: Waiting up to 5m0s for pod "pvc-tester-rptwh" in namespace "e2e-tests-vcp-stress-dxrkx" to be "terminated due to deadline exceeded"
Oct  9 17:51:16.768: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 6.93674ms
STEP: Instance: [Thread:1], Iteration: [1] : Waiting for the Pod: pvc-tester-4mj7m to be in the running state
STEP: Instance: [Thread:1], Iteration: [1] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-1678c1b4-ad55-11e7-a775-0050569cce2c.vmdk is attached to the node VM: kubernetes-node3
Oct  9 17:51:17.961: INFO: PersistentVolumeClaim pvc-2xqld found but phase is Pending instead of Bound.
Oct  9 17:51:17.961: INFO: PersistentVolumeClaim pvc-zvs8j found and phase=Bound (16.046192588s)
STEP: Instance: [Thread:2], Iteration: [1] : Creating Pod using the claim: pvc-zvs8j
STEP: Instance: [Thread:1], Iteration: [1] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-1678c1b4-ad55-11e7-a775-0050569cce2c.vmdk is accessible in the pod: pvc-tester-4mj7m
STEP: Verify the volume is accessible and available in the pod
Oct  9 17:51:18.208: INFO: Running '/root/divyenp/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.192.57.171 --kubeconfig=/root/.kube/config exec pvc-tester-4mj7m --namespace=e2e-tests-vcp-stress-dxrkx -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct  9 17:51:18.794: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 2.033071649s
Oct  9 17:51:18.999: INFO: stderr: ""
Oct  9 17:51:18.999: INFO: stdout: ""
STEP: Instance: [Thread:1], Iteration: [1] : Deleting pod: pvc-tester-4mj7m
Oct  9 17:51:18.999: INFO: Deleting pod pvc-tester-4mj7m
Oct  9 17:51:19.047: INFO: Waiting up to 5m0s for pod "pvc-tester-4mj7m" in namespace "e2e-tests-vcp-stress-dxrkx" to be "terminated due to deadline exceeded"
Oct  9 17:51:19.060: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 13.3838ms
Oct  9 17:51:19.968: INFO: PersistentVolumeClaim pvc-2xqld found and phase=Bound (18.051615752s)
STEP: Instance: [Thread:3], Iteration: [1] : Creating Pod using the claim: pvc-2xqld
Oct  9 17:51:20.801: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 4.03982234s
Oct  9 17:51:21.065: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 2.018156624s
Oct  9 17:51:22.807: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 6.045445351s
Oct  9 17:51:23.069: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 4.022426785s
Oct  9 17:51:24.812: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 8.051188597s
Oct  9 17:51:25.073: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 6.026287211s
STEP: Instance: [Thread:5], Iteration: [1] : Waiting for the Pod: pvc-tester-chzs7 to be in the running state
STEP: Instance: [Thread:5], Iteration: [1] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-168008a1-ad55-11e7-a775-0050569cce2c.vmdk is attached to the node VM: kubernetes-node2
STEP: Instance: [Thread:5], Iteration: [1] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-168008a1-ad55-11e7-a775-0050569cce2c.vmdk is accessible in the pod: pvc-tester-chzs7
STEP: Verify the volume is accessible and available in the pod
Oct  9 17:51:26.238: INFO: Running '/root/divyenp/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.192.57.171 --kubeconfig=/root/.kube/config exec pvc-tester-chzs7 --namespace=e2e-tests-vcp-stress-dxrkx -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct  9 17:51:26.734: INFO: stderr: ""
Oct  9 17:51:26.734: INFO: stdout: ""
STEP: Instance: [Thread:5], Iteration: [1] : Deleting pod: pvc-tester-chzs7
Oct  9 17:51:26.734: INFO: Deleting pod pvc-tester-chzs7
Oct  9 17:51:26.746: INFO: Waiting up to 5m0s for pod "pvc-tester-chzs7" in namespace "e2e-tests-vcp-stress-dxrkx" to be "terminated due to deadline exceeded"
Oct  9 17:51:26.756: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 9.035165ms
Oct  9 17:51:26.819: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 10.058217873s
Oct  9 17:51:27.080: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 8.033449133s
Oct  9 17:51:28.761: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 2.014968719s
Oct  9 17:51:28.825: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 12.063793421s
Oct  9 17:51:29.088: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 10.041667217s
Oct  9 17:51:30.767: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 4.020329766s
Oct  9 17:51:30.832: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 14.070674886s
Oct  9 17:51:31.096: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 12.049489254s
Oct  9 17:51:32.772: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 6.025647318s
Oct  9 17:51:32.838: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 16.076326935s
Oct  9 17:51:33.103: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 14.056365138s
Oct  9 17:51:34.779: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 8.032771596s
Oct  9 17:51:34.844: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 18.08248384s
Oct  9 17:51:35.113: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 16.066547959s
STEP: Instance: [Thread:2], Iteration: [1] : Waiting for the Pod: pvc-tester-dc5wn to be in the running state
STEP: Instance: [Thread:2], Iteration: [1] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-167a93ac-ad55-11e7-a775-0050569cce2c.vmdk is attached to the node VM: kubernetes-node4
STEP: Instance: [Thread:2], Iteration: [1] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-167a93ac-ad55-11e7-a775-0050569cce2c.vmdk is accessible in the pod: pvc-tester-dc5wn
STEP: Verify the volume is accessible and available in the pod
Oct  9 17:51:36.251: INFO: Running '/root/divyenp/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.192.57.171 --kubeconfig=/root/.kube/config exec pvc-tester-dc5wn --namespace=e2e-tests-vcp-stress-dxrkx -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct  9 17:51:36.748: INFO: stderr: ""
Oct  9 17:51:36.748: INFO: stdout: ""
STEP: Instance: [Thread:2], Iteration: [1] : Deleting pod: pvc-tester-dc5wn
Oct  9 17:51:36.748: INFO: Deleting pod pvc-tester-dc5wn
Oct  9 17:51:36.763: INFO: Waiting up to 5m0s for pod "pvc-tester-dc5wn" in namespace "e2e-tests-vcp-stress-dxrkx" to be "terminated due to deadline exceeded"
Oct  9 17:51:36.768: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 5.263893ms
Oct  9 17:51:36.784: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 10.03756344s
Oct  9 17:51:36.850: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 20.088344736s
Oct  9 17:51:37.124: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 18.077434044s
Oct  9 17:51:38.782: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 2.019746606s
Oct  9 17:51:38.801: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 12.054521268s
Oct  9 17:51:38.865: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 22.103792805s
Oct  9 17:51:39.140: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 20.093429568s
Oct  9 17:51:40.790: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 4.027097108s
Oct  9 17:51:40.806: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 14.059417058s
Oct  9 17:51:40.872: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 24.110446949s
Oct  9 17:51:41.146: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 22.099044249s
Oct  9 17:51:42.795: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 6.03264245s
Oct  9 17:51:42.811: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 16.064140339s
Oct  9 17:51:42.877: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 26.116020085s
Oct  9 17:51:43.150: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 24.103779148s
Oct  9 17:51:44.801: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 8.03865792s
Oct  9 17:51:44.819: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 18.072869647s
Oct  9 17:51:44.886: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 28.124741033s
Oct  9 17:51:45.158: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 26.111815212s
STEP: Instance: [Thread:3], Iteration: [1] : Waiting for the Pod: pvc-tester-jxm6s to be in the running state
STEP: Instance: [Thread:3], Iteration: [1] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-167ab992-ad55-11e7-a775-0050569cce2c.vmdk is attached to the node VM: kubernetes-node3
STEP: Instance: [Thread:3], Iteration: [1] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-167ab992-ad55-11e7-a775-0050569cce2c.vmdk is accessible in the pod: pvc-tester-jxm6s
STEP: Verify the volume is accessible and available in the pod
Oct  9 17:51:46.256: INFO: Running '/root/divyenp/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.192.57.171 --kubeconfig=/root/.kube/config exec pvc-tester-jxm6s --namespace=e2e-tests-vcp-stress-dxrkx -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct  9 17:51:46.731: INFO: stderr: ""
Oct  9 17:51:46.731: INFO: stdout: ""
STEP: Instance: [Thread:3], Iteration: [1] : Deleting pod: pvc-tester-jxm6s
Oct  9 17:51:46.731: INFO: Deleting pod pvc-tester-jxm6s
Oct  9 17:51:46.740: INFO: Waiting up to 5m0s for pod "pvc-tester-jxm6s" in namespace "e2e-tests-vcp-stress-dxrkx" to be "terminated due to deadline exceeded"
Oct  9 17:51:46.746: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 5.407006ms
Oct  9 17:51:46.807: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 10.044451733s
Oct  9 17:51:46.825: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 20.078324904s
Oct  9 17:51:46.893: INFO: Pod "pvc-tester-rptwh": Phase="Running", Reason="", readiness=true. Elapsed: 30.131884236s
Oct  9 17:51:47.164: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 28.117674803s
Oct  9 17:51:48.750: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 2.009882128s
Oct  9 17:51:48.814: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 12.051617489s
Oct  9 17:51:48.830: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 22.083337193s
Oct  9 17:51:48.898: INFO: Pod "pvc-tester-rptwh": Phase="Pending", Reason="", readiness=false. Elapsed: 32.136342568s
Oct  9 17:51:49.169: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=true. Elapsed: 30.122615329s
Oct  9 17:51:50.755: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 4.014480477s
Oct  9 17:51:50.820: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 14.057147982s
Oct  9 17:51:50.834: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 24.087982363s
Oct  9 17:51:50.904: INFO: Pod "pvc-tester-rptwh": Phase="Pending", Reason="", readiness=false. Elapsed: 34.14249379s
Oct  9 17:51:51.174: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=false. Elapsed: 32.127626877s
Oct  9 17:51:52.761: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 6.020322527s
Oct  9 17:51:52.824: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 16.061607637s
Oct  9 17:51:52.838: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 26.091831456s
Oct  9 17:51:52.909: INFO: Pod "pvc-tester-rptwh": Phase="Pending", Reason="", readiness=false. Elapsed: 36.147366617s
Oct  9 17:51:53.180: INFO: Pod "pvc-tester-4mj7m": Phase="Running", Reason="", readiness=false. Elapsed: 34.133796454s
Oct  9 17:51:54.767: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 8.027058818s
Oct  9 17:51:54.829: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 18.066035535s
Oct  9 17:51:54.843: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 28.096173077s
Oct  9 17:51:54.914: INFO: Pod "pvc-tester-rptwh": Phase="Pending", Reason="", readiness=false. Elapsed: 38.152737429s
Oct  9 17:51:55.185: INFO: Pod "pvc-tester-4mj7m" in namespace "e2e-tests-vcp-stress-dxrkx" not found. Error: pods "pvc-tester-4mj7m" not found
Oct  9 17:51:55.185: INFO: Ignore "not found" error above. Pod "pvc-tester-4mj7m" successfully deleted
STEP: Instance: [Thread:1], Iteration: [1] : Waiting for volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-1678c1b4-ad55-11e7-a775-0050569cce2c.vmdk to be detached from the node: kubernetes-node3
Oct  9 17:51:56.773: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 10.033072025s
Oct  9 17:51:56.833: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 20.070428813s
Oct  9 17:51:56.848: INFO: Pod "pvc-tester-chzs7": Phase="Running", Reason="", readiness=true. Elapsed: 30.101189782s
Oct  9 17:51:56.918: INFO: Pod "pvc-tester-rptwh" in namespace "e2e-tests-vcp-stress-dxrkx" not found. Error: pods "pvc-tester-rptwh" not found
Oct  9 17:51:56.918: INFO: Ignore "not found" error above. Pod "pvc-tester-rptwh" successfully deleted
STEP: Instance: [Thread:4], Iteration: [1] : Waiting for volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-16804ea2-ad55-11e7-a775-0050569cce2c.vmdk to be detached from the node: kubernetes-node1
Oct  9 17:51:58.779: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 12.038949555s
Oct  9 17:51:58.837: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 22.074708478s
Oct  9 17:51:58.851: INFO: Pod "pvc-tester-chzs7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.105057156s
Oct  9 17:52:00.784: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 14.04412356s
Oct  9 17:52:00.843: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 24.080562885s
Oct  9 17:52:00.856: INFO: Pod "pvc-tester-chzs7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.109805682s
Oct  9 17:52:02.789: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 16.049148061s
Oct  9 17:52:02.849: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 26.086231649s
Oct  9 17:52:02.861: INFO: Pod "pvc-tester-chzs7" in namespace "e2e-tests-vcp-stress-dxrkx" not found. Error: pods "pvc-tester-chzs7" not found
Oct  9 17:52:02.861: INFO: Ignore "not found" error above. Pod "pvc-tester-chzs7" successfully deleted
STEP: Instance: [Thread:5], Iteration: [1] : Waiting for volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-168008a1-ad55-11e7-a775-0050569cce2c.vmdk to be detached from the node: kubernetes-node2
Oct  9 17:52:04.796: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 18.055356249s
Oct  9 17:52:04.855: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 28.09190524s
Oct  9 17:52:05.311: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-1678c1b4-ad55-11e7-a775-0050569cce2c.vmdk" appears to have successfully detached from "kubernetes-node3".
STEP: Instance: [Thread:1], Iteration: [1] : Deleting the Claim: pvc-rhvzf
Oct  9 17:52:05.311: INFO: Deleting PersistentVolumeClaim "pvc-rhvzf"
STEP: Instance: [Thread:1], Iteration: [2] : Creating PVC using the Storage Class: sc-default
STEP: Instance: [Thread:1], Iteration: [2] : Waiting for claim: pvc-62bqt to be in bound phase
Oct  9 17:52:05.349: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-62bqt to have phase Bound
Oct  9 17:52:05.354: INFO: PersistentVolumeClaim pvc-62bqt found but phase is Pending instead of Bound.
Oct  9 17:52:06.801: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 20.06083511s
Oct  9 17:52:06.861: INFO: Pod "pvc-tester-dc5wn": Phase="Running", Reason="", readiness=true. Elapsed: 30.098390861s
Oct  9 17:52:07.034: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-16804ea2-ad55-11e7-a775-0050569cce2c.vmdk" appears to have successfully detached from "kubernetes-node1".
STEP: Instance: [Thread:4], Iteration: [1] : Deleting the Claim: pvc-jwhb7
Oct  9 17:52:07.034: INFO: Deleting PersistentVolumeClaim "pvc-jwhb7"
STEP: Instance: [Thread:4], Iteration: [2] : Creating PVC using the Storage Class: sc-user-specified-ds
STEP: Instance: [Thread:4], Iteration: [2] : Waiting for claim: pvc-dq5fv to be in bound phase
Oct  9 17:52:07.062: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-dq5fv to have phase Bound
Oct  9 17:52:07.072: INFO: PersistentVolumeClaim pvc-dq5fv found but phase is Pending instead of Bound.
Oct  9 17:52:07.358: INFO: PersistentVolumeClaim pvc-62bqt found and phase=Bound (2.008995823s)
STEP: Instance: [Thread:1], Iteration: [2] : Creating Pod using the claim: pvc-62bqt
Oct  9 17:52:08.807: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 22.066668316s
Oct  9 17:52:08.866: INFO: Pod "pvc-tester-dc5wn" in namespace "e2e-tests-vcp-stress-dxrkx" not found. Error: pods "pvc-tester-dc5wn" not found
Oct  9 17:52:08.866: INFO: Ignore "not found" error above. Pod "pvc-tester-dc5wn" successfully deleted
STEP: Instance: [Thread:2], Iteration: [1] : Waiting for volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-167a93ac-ad55-11e7-a775-0050569cce2c.vmdk to be detached from the node: kubernetes-node4
Oct  9 17:52:09.078: INFO: PersistentVolumeClaim pvc-dq5fv found and phase=Bound (2.015958443s)
STEP: Instance: [Thread:4], Iteration: [2] : Creating Pod using the claim: pvc-dq5fv
Oct  9 17:52:10.813: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 24.073206532s
Oct  9 17:52:12.819: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 26.078696186s
Oct  9 17:52:12.988: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-168008a1-ad55-11e7-a775-0050569cce2c.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Instance: [Thread:5], Iteration: [1] : Deleting the Claim: pvc-5m5qz
Oct  9 17:52:12.988: INFO: Deleting PersistentVolumeClaim "pvc-5m5qz"
STEP: Instance: [Thread:5], Iteration: [2] : Creating PVC using the Storage Class: sc-default
STEP: Instance: [Thread:5], Iteration: [2] : Waiting for claim: pvc-xt9wf to be in bound phase
Oct  9 17:52:13.005: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-xt9wf to have phase Bound
Oct  9 17:52:13.017: INFO: PersistentVolumeClaim pvc-xt9wf found but phase is Pending instead of Bound.
Oct  9 17:52:14.824: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 28.084246449s
Oct  9 17:52:15.022: INFO: PersistentVolumeClaim pvc-xt9wf found and phase=Bound (2.01689098s)
STEP: Instance: [Thread:5], Iteration: [2] : Creating Pod using the claim: pvc-xt9wf
STEP: Instance: [Thread:1], Iteration: [2] : Waiting for the Pod: pvc-tester-sp495 to be in the running state
STEP: Instance: [Thread:1], Iteration: [2] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3c4b0b87-ad55-11e7-a775-0050569cce2c.vmdk is attached to the node VM: kubernetes-node1
STEP: Instance: [Thread:1], Iteration: [2] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3c4b0b87-ad55-11e7-a775-0050569cce2c.vmdk is accessible in the pod: pvc-tester-sp495
STEP: Verify the volume is accessible and available in the pod
Oct  9 17:52:15.624: INFO: Running '/root/divyenp/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.192.57.171 --kubeconfig=/root/.kube/config exec pvc-tester-sp495 --namespace=e2e-tests-vcp-stress-dxrkx -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct  9 17:52:16.836: INFO: Pod "pvc-tester-jxm6s": Phase="Running", Reason="", readiness=true. Elapsed: 30.096124474s
Oct  9 17:52:17.005: INFO: stderr: ""
Oct  9 17:52:17.005: INFO: stdout: ""
STEP: Instance: [Thread:1], Iteration: [2] : Deleting pod: pvc-tester-sp495
Oct  9 17:52:17.006: INFO: Deleting pod pvc-tester-sp495
Oct  9 17:52:17.014: INFO: Waiting up to 5m0s for pod "pvc-tester-sp495" in namespace "e2e-tests-vcp-stress-dxrkx" to be "terminated due to deadline exceeded"
Oct  9 17:52:17.019: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 4.899879ms
Oct  9 17:52:18.843: INFO: Pod "pvc-tester-jxm6s": Phase="Pending", Reason="", readiness=false. Elapsed: 32.102881535s
Oct  9 17:52:18.987: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-167a93ac-ad55-11e7-a775-0050569cce2c.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Instance: [Thread:2], Iteration: [1] : Deleting the Claim: pvc-zvs8j
Oct  9 17:52:18.988: INFO: Deleting PersistentVolumeClaim "pvc-zvs8j"
STEP: Instance: [Thread:2], Iteration: [2] : Creating PVC using the Storage Class: sc-vsan
STEP: Instance: [Thread:2], Iteration: [2] : Waiting for claim: pvc-k7g5b to be in bound phase
Oct  9 17:52:19.013: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-k7g5b to have phase Bound
Oct  9 17:52:19.020: INFO: PersistentVolumeClaim pvc-k7g5b found but phase is Pending instead of Bound.
Oct  9 17:52:19.024: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 2.009607414s
Oct  9 17:52:20.850: INFO: Pod "pvc-tester-jxm6s": Phase="Pending", Reason="", readiness=false. Elapsed: 34.109500453s
Oct  9 17:52:21.026: INFO: PersistentVolumeClaim pvc-k7g5b found but phase is Pending instead of Bound.
Oct  9 17:52:21.028: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 4.014247594s
Oct  9 17:52:22.857: INFO: Pod "pvc-tester-jxm6s": Phase="Pending", Reason="", readiness=false. Elapsed: 36.116908532s
Oct  9 17:52:23.035: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 6.020815878s
Oct  9 17:52:23.035: INFO: PersistentVolumeClaim pvc-k7g5b found but phase is Pending instead of Bound.
STEP: Instance: [Thread:4], Iteration: [2] : Waiting for the Pod: pvc-tester-ln72g to be in the running state
STEP: Instance: [Thread:4], Iteration: [2] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3d4f03c3-ad55-11e7-a775-0050569cce2c.vmdk is attached to the node VM: kubernetes-node2
STEP: Instance: [Thread:4], Iteration: [2] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3d4f03c3-ad55-11e7-a775-0050569cce2c.vmdk is accessible in the pod: pvc-tester-ln72g
STEP: Verify the volume is accessible and available in the pod
Oct  9 17:52:23.366: INFO: Running '/root/divyenp/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.192.57.171 --kubeconfig=/root/.kube/config exec pvc-tester-ln72g --namespace=e2e-tests-vcp-stress-dxrkx -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct  9 17:52:23.841: INFO: stderr: ""
Oct  9 17:52:23.841: INFO: stdout: ""
STEP: Instance: [Thread:4], Iteration: [2] : Deleting pod: pvc-tester-ln72g
Oct  9 17:52:23.841: INFO: Deleting pod pvc-tester-ln72g
Oct  9 17:52:23.854: INFO: Waiting up to 5m0s for pod "pvc-tester-ln72g" in namespace "e2e-tests-vcp-stress-dxrkx" to be "terminated due to deadline exceeded"
Oct  9 17:52:23.859: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 4.950418ms
Oct  9 17:52:24.862: INFO: Pod "pvc-tester-jxm6s" in namespace "e2e-tests-vcp-stress-dxrkx" not found. Error: pods "pvc-tester-jxm6s" not found
Oct  9 17:52:24.862: INFO: Ignore "not found" error above. Pod "pvc-tester-jxm6s" successfully deleted
STEP: Instance: [Thread:3], Iteration: [1] : Waiting for volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-167ab992-ad55-11e7-a775-0050569cce2c.vmdk to be detached from the node: kubernetes-node3
Oct  9 17:52:25.040: INFO: PersistentVolumeClaim pvc-k7g5b found but phase is Pending instead of Bound.
Oct  9 17:52:25.041: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 8.026880939s
STEP: Instance: [Thread:5], Iteration: [2] : Waiting for the Pod: pvc-tester-9ccbr to be in the running state
STEP: Instance: [Thread:5], Iteration: [2] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-40dac6d6-ad55-11e7-a775-0050569cce2c.vmdk is attached to the node VM: kubernetes-node4
STEP: Instance: [Thread:5], Iteration: [2] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-40dac6d6-ad55-11e7-a775-0050569cce2c.vmdk is accessible in the pod: pvc-tester-9ccbr
STEP: Verify the volume is accessible and available in the pod
Oct  9 17:52:25.289: INFO: Running '/root/divyenp/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.192.57.171 --kubeconfig=/root/.kube/config exec pvc-tester-9ccbr --namespace=e2e-tests-vcp-stress-dxrkx -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct  9 17:52:25.776: INFO: stderr: ""
Oct  9 17:52:25.776: INFO: stdout: ""
STEP: Instance: [Thread:5], Iteration: [2] : Deleting pod: pvc-tester-9ccbr
Oct  9 17:52:25.776: INFO: Deleting pod pvc-tester-9ccbr
Oct  9 17:52:25.784: INFO: Waiting up to 5m0s for pod "pvc-tester-9ccbr" in namespace "e2e-tests-vcp-stress-dxrkx" to be "terminated due to deadline exceeded"
Oct  9 17:52:25.789: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 5.077074ms
Oct  9 17:52:25.865: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 2.010455778s
Oct  9 17:52:27.049: INFO: PersistentVolumeClaim pvc-k7g5b found but phase is Pending instead of Bound.
Oct  9 17:52:27.050: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 10.036188038s
Oct  9 17:52:27.795: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 2.010755611s
Oct  9 17:52:27.870: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 4.015435683s
Oct  9 17:52:29.057: INFO: PersistentVolumeClaim pvc-k7g5b found but phase is Pending instead of Bound.
Oct  9 17:52:29.057: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 12.042647665s
Oct  9 17:52:29.799: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 4.015473785s
Oct  9 17:52:29.874: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 6.019526541s
Oct  9 17:52:31.063: INFO: PersistentVolumeClaim pvc-k7g5b found and phase=Bound (12.050203168s)
Oct  9 17:52:31.070: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 14.055968911s
STEP: Instance: [Thread:2], Iteration: [2] : Creating Pod using the claim: pvc-k7g5b
Oct  9 17:52:31.804: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 6.020069131s
Oct  9 17:52:31.878: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 8.023939374s
Oct  9 17:52:33.076: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 16.062024505s
Oct  9 17:52:33.809: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 8.024668351s
Oct  9 17:52:33.883: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 10.028299403s
Oct  9 17:52:34.981: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-167ab992-ad55-11e7-a775-0050569cce2c.vmdk" appears to have successfully detached from "kubernetes-node3".
STEP: Instance: [Thread:3], Iteration: [1] : Deleting the Claim: pvc-2xqld
Oct  9 17:52:34.981: INFO: Deleting PersistentVolumeClaim "pvc-2xqld"
STEP: Instance: [Thread:3], Iteration: [2] : Creating PVC using the Storage Class: sc-spbm
STEP: Instance: [Thread:3], Iteration: [2] : Waiting for claim: pvc-6fzlg to be in bound phase
Oct  9 17:52:35.003: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-6fzlg to have phase Bound
Oct  9 17:52:35.007: INFO: PersistentVolumeClaim pvc-6fzlg found but phase is Pending instead of Bound.
Oct  9 17:52:35.081: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 18.066651271s
Oct  9 17:52:35.813: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 10.029090054s
Oct  9 17:52:35.888: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 12.033339409s
Oct  9 17:52:37.012: INFO: PersistentVolumeClaim pvc-6fzlg found but phase is Pending instead of Bound.
Oct  9 17:52:37.086: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 20.071714001s
Oct  9 17:52:37.818: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 12.03425865s
Oct  9 17:52:37.892: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 14.038121797s
Oct  9 17:52:39.017: INFO: PersistentVolumeClaim pvc-6fzlg found but phase is Pending instead of Bound.
Oct  9 17:52:39.090: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 22.076384364s
Oct  9 17:52:39.823: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 14.03919873s
Oct  9 17:52:39.898: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 16.043326459s
Oct  9 17:52:41.023: INFO: PersistentVolumeClaim pvc-6fzlg found but phase is Pending instead of Bound.
Oct  9 17:52:41.096: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 24.08157391s
Oct  9 17:52:41.828: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 16.043772601s
Oct  9 17:52:41.902: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 18.047447776s
Oct  9 17:52:43.027: INFO: PersistentVolumeClaim pvc-6fzlg found but phase is Pending instead of Bound.
Oct  9 17:52:43.100: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 26.085733296s
Oct  9 17:52:43.834: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 18.049781572s
Oct  9 17:52:43.908: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 20.053282297s
Oct  9 17:52:45.032: INFO: PersistentVolumeClaim pvc-6fzlg found but phase is Pending instead of Bound.
Oct  9 17:52:45.103: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 28.089513313s
Oct  9 17:52:45.839: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 20.054711207s
Oct  9 17:52:45.913: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 22.058430309s
Oct  9 17:52:47.046: INFO: PersistentVolumeClaim pvc-6fzlg found and phase=Bound (12.043259324s)
STEP: Instance: [Thread:3], Iteration: [2] : Creating Pod using the claim: pvc-6fzlg
Oct  9 17:52:47.108: INFO: Pod "pvc-tester-sp495": Phase="Running", Reason="", readiness=true. Elapsed: 30.093660084s
Oct  9 17:52:47.845: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 22.060724894s
Oct  9 17:52:47.918: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 24.063376246s
Oct  9 17:52:49.112: INFO: Pod "pvc-tester-sp495" in namespace "e2e-tests-vcp-stress-dxrkx" not found. Error: pods "pvc-tester-sp495" not found
Oct  9 17:52:49.112: INFO: Ignore "not found" error above. Pod "pvc-tester-sp495" successfully deleted
STEP: Instance: [Thread:1], Iteration: [2] : Waiting for volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3c4b0b87-ad55-11e7-a775-0050569cce2c.vmdk to be detached from the node: kubernetes-node1
Oct  9 17:52:49.852: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 24.067751559s
Oct  9 17:52:49.923: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 26.069102215s
Oct  9 17:52:51.866: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 26.081690003s
Oct  9 17:52:51.931: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 28.07627797s
Oct  9 17:52:53.870: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 28.086324102s
Oct  9 17:52:53.936: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=true. Elapsed: 30.081664905s
STEP: Instance: [Thread:2], Iteration: [2] : Waiting for the Pod: pvc-tester-s78sw to be in the running state
STEP: Instance: [Thread:2], Iteration: [2] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-446eb6d3-ad55-11e7-a775-0050569cce2c.vmdk is attached to the node VM: kubernetes-node3
STEP: Instance: [Thread:2], Iteration: [2] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-446eb6d3-ad55-11e7-a775-0050569cce2c.vmdk is accessible in the pod: pvc-tester-s78sw
STEP: Verify the volume is accessible and available in the pod
Oct  9 17:52:55.331: INFO: Running '/root/divyenp/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.192.57.171 --kubeconfig=/root/.kube/config exec pvc-tester-s78sw --namespace=e2e-tests-vcp-stress-dxrkx -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct  9 17:52:55.889: INFO: Pod "pvc-tester-9ccbr": Phase="Running", Reason="", readiness=true. Elapsed: 30.105318047s
Oct  9 17:52:55.892: INFO: stderr: ""
Oct  9 17:52:55.892: INFO: stdout: ""
STEP: Instance: [Thread:2], Iteration: [2] : Deleting pod: pvc-tester-s78sw
Oct  9 17:52:55.892: INFO: Deleting pod pvc-tester-s78sw
Oct  9 17:52:55.899: INFO: Waiting up to 5m0s for pod "pvc-tester-s78sw" in namespace "e2e-tests-vcp-stress-dxrkx" to be "terminated due to deadline exceeded"
Oct  9 17:52:55.905: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 6.090844ms
Oct  9 17:52:55.942: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=false. Elapsed: 32.087404597s
Oct  9 17:52:57.896: INFO: Pod "pvc-tester-9ccbr" in namespace "e2e-tests-vcp-stress-dxrkx" not found. Error: pods "pvc-tester-9ccbr" not found
Oct  9 17:52:57.896: INFO: Ignore "not found" error above. Pod "pvc-tester-9ccbr" successfully deleted
STEP: Instance: [Thread:5], Iteration: [2] : Waiting for volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-40dac6d6-ad55-11e7-a775-0050569cce2c.vmdk to be detached from the node: kubernetes-node4
Oct  9 17:52:57.910: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 2.011411806s
Oct  9 17:52:57.948: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=false. Elapsed: 34.094015959s
Oct  9 17:52:59.230: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3c4b0b87-ad55-11e7-a775-0050569cce2c.vmdk" appears to have successfully detached from "kubernetes-node1".
STEP: Instance: [Thread:1], Iteration: [2] : Deleting the Claim: pvc-62bqt
Oct  9 17:52:59.230: INFO: Deleting PersistentVolumeClaim "pvc-62bqt"
Oct  9 17:52:59.239: INFO: Deleting PersistentVolumeClaim "pvc-62bqt"
Oct  9 17:52:59.243: INFO: Deleting PersistentVolumeClaim "pvc-rhvzf"
Oct  9 17:52:59.915: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 4.015945507s
Oct  9 17:52:59.953: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=false. Elapsed: 36.09898893s
Oct  9 17:53:01.920: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 6.021629725s
Oct  9 17:53:01.958: INFO: Pod "pvc-tester-ln72g": Phase="Running", Reason="", readiness=false. Elapsed: 38.103461259s
STEP: Instance: [Thread:3], Iteration: [2] : Waiting for the Pod: pvc-tester-qb2tk to be in the running state
STEP: Instance: [Thread:3], Iteration: [2] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-4df73922-ad55-11e7-a775-0050569cce2c.vmdk is attached to the node VM: kubernetes-node2
STEP: Instance: [Thread:3], Iteration: [2] : Verifing the volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-4df73922-ad55-11e7-a775-0050569cce2c.vmdk is accessible in the pod: pvc-tester-qb2tk
STEP: Verify the volume is accessible and available in the pod
Oct  9 17:53:03.337: INFO: Running '/root/divyenp/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.192.57.171 --kubeconfig=/root/.kube/config exec pvc-tester-qb2tk --namespace=e2e-tests-vcp-stress-dxrkx -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct  9 17:53:03.810: INFO: stderr: ""
Oct  9 17:53:03.811: INFO: stdout: ""
STEP: Instance: [Thread:3], Iteration: [2] : Deleting pod: pvc-tester-qb2tk
Oct  9 17:53:03.811: INFO: Deleting pod pvc-tester-qb2tk
Oct  9 17:53:03.825: INFO: Waiting up to 5m0s for pod "pvc-tester-qb2tk" in namespace "e2e-tests-vcp-stress-dxrkx" to be "terminated due to deadline exceeded"
Oct  9 17:53:03.832: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 6.26982ms
Oct  9 17:53:03.926: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 8.027348625s
Oct  9 17:53:03.962: INFO: Pod "pvc-tester-ln72g" in namespace "e2e-tests-vcp-stress-dxrkx" not found. Error: pods "pvc-tester-ln72g" not found
Oct  9 17:53:03.963: INFO: Ignore "not found" error above. Pod "pvc-tester-ln72g" successfully deleted
STEP: Instance: [Thread:4], Iteration: [2] : Waiting for volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3d4f03c3-ad55-11e7-a775-0050569cce2c.vmdk to be detached from the node: kubernetes-node2
Oct  9 17:53:05.839: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 2.01364314s
Oct  9 17:53:05.934: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 10.03492506s
Oct  9 17:53:07.845: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 4.019234355s
Oct  9 17:53:07.940: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 12.041369494s
Oct  9 17:53:08.011: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-40dac6d6-ad55-11e7-a775-0050569cce2c.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Instance: [Thread:5], Iteration: [2] : Deleting the Claim: pvc-xt9wf
Oct  9 17:53:08.011: INFO: Deleting PersistentVolumeClaim "pvc-xt9wf"
Oct  9 17:53:08.020: INFO: Deleting PersistentVolumeClaim "pvc-xt9wf"
Oct  9 17:53:08.024: INFO: Deleting PersistentVolumeClaim "pvc-5m5qz"
Oct  9 17:53:09.850: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 6.024746047s
Oct  9 17:53:09.946: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 14.047387123s
Oct  9 17:53:11.855: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 8.030120224s
Oct  9 17:53:11.951: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 16.052132244s
Oct  9 17:53:13.861: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 10.035671133s
Oct  9 17:53:13.955: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 18.056536216s
Oct  9 17:53:14.092: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3d4f03c3-ad55-11e7-a775-0050569cce2c.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Instance: [Thread:4], Iteration: [2] : Deleting the Claim: pvc-dq5fv
Oct  9 17:53:14.092: INFO: Deleting PersistentVolumeClaim "pvc-dq5fv"
Oct  9 17:53:14.099: INFO: Deleting PersistentVolumeClaim "pvc-dq5fv"
Oct  9 17:53:14.104: INFO: Deleting PersistentVolumeClaim "pvc-jwhb7"
Oct  9 17:53:15.867: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 12.042114923s
Oct  9 17:53:15.960: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 20.061433791s
Oct  9 17:53:17.872: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 14.046870231s
Oct  9 17:53:17.965: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 22.065753282s
Oct  9 17:53:19.879: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 16.053246581s
Oct  9 17:53:19.970: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 24.070874243s
Oct  9 17:53:21.885: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 18.059560605s
Oct  9 17:53:21.975: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 26.075734362s
Oct  9 17:53:23.890: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 20.064951947s
Oct  9 17:53:23.980: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 28.080766952s
Oct  9 17:53:25.897: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 22.071477008s
Oct  9 17:53:25.985: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=true. Elapsed: 30.085685437s
Oct  9 17:53:27.902: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 24.077026267s
Oct  9 17:53:27.989: INFO: Pod "pvc-tester-s78sw": Phase="Running", Reason="", readiness=false. Elapsed: 32.089820823s
Oct  9 17:53:29.909: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 26.083376709s
Oct  9 17:53:30.001: INFO: Pod "pvc-tester-s78sw": Phase="Pending", Reason="", readiness=false. Elapsed: 34.101689567s
Oct  9 17:53:31.916: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 28.090324224s
Oct  9 17:53:32.005: INFO: Pod "pvc-tester-s78sw": Phase="Pending", Reason="", readiness=false. Elapsed: 36.105854171s
Oct  9 17:53:33.921: INFO: Pod "pvc-tester-qb2tk": Phase="Running", Reason="", readiness=true. Elapsed: 30.095752079s
Oct  9 17:53:34.009: INFO: Pod "pvc-tester-s78sw" in namespace "e2e-tests-vcp-stress-dxrkx" not found. Error: pods "pvc-tester-s78sw" not found
Oct  9 17:53:34.009: INFO: Ignore "not found" error above. Pod "pvc-tester-s78sw" successfully deleted
STEP: Instance: [Thread:2], Iteration: [2] : Waiting for volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-446eb6d3-ad55-11e7-a775-0050569cce2c.vmdk to be detached from the node: kubernetes-node3
Oct  9 17:53:35.926: INFO: Pod "pvc-tester-qb2tk" in namespace "e2e-tests-vcp-stress-dxrkx" not found. Error: pods "pvc-tester-qb2tk" not found
Oct  9 17:53:35.926: INFO: Ignore "not found" error above. Pod "pvc-tester-qb2tk" successfully deleted
STEP: Instance: [Thread:3], Iteration: [2] : Waiting for volume: [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-4df73922-ad55-11e7-a775-0050569cce2c.vmdk to be detached from the node: kubernetes-node2
Oct  9 17:53:44.119: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-446eb6d3-ad55-11e7-a775-0050569cce2c.vmdk" appears to have successfully detached from "kubernetes-node3".
STEP: Instance: [Thread:2], Iteration: [2] : Deleting the Claim: pvc-k7g5b
Oct  9 17:53:44.119: INFO: Deleting PersistentVolumeClaim "pvc-k7g5b"
Oct  9 17:53:44.126: INFO: Deleting PersistentVolumeClaim "pvc-k7g5b"
Oct  9 17:53:44.131: INFO: Deleting PersistentVolumeClaim "pvc-zvs8j"
Oct  9 17:53:46.043: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-4df73922-ad55-11e7-a775-0050569cce2c.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Instance: [Thread:3], Iteration: [2] : Deleting the Claim: pvc-6fzlg
Oct  9 17:53:46.043: INFO: Deleting PersistentVolumeClaim "pvc-6fzlg"
Oct  9 17:53:46.051: INFO: Deleting PersistentVolumeClaim "pvc-6fzlg"
Oct  9 17:53:46.055: INFO: Deleting PersistentVolumeClaim "pvc-2xqld"
[AfterEach] [sig-storage] vsphere cloud provider stress [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct  9 17:53:46.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-vcp-stress-dxrkx" for this suite.
Oct  9 17:53:54.214: INFO: namespace: e2e-tests-vcp-stress-dxrkx, resource: bindings, ignored listing per whitelist
Oct  9 17:53:54.286: INFO: namespace e2e-tests-vcp-stress-dxrkx deletion completed in 8.175682274s

• [SLOW TEST:172.729 seconds]
[sig-storage] vsphere cloud provider stress [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  vsphere stress tests
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_stress.go:125
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct  9 17:53:54.292: INFO: Running AfterSuite actions on all node
Oct  9 17:53:54.292: INFO: Running AfterSuite actions on node 1

Ran 1 of 701 Specs in 172.923 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 700 Skipped PASS

Ginkgo ran 1 suite in 2m53.787863724s
Test Suite Passed
2017/10/09 17:53:54 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vsphere\sstress\stests' finished in 2m54.698907311s
2017/10/09 17:53:54 e2e.go:81: Done
```

VMware Reviewers: @rohitjogvmw @BaluDontu @tusharnt

**Release note**:

```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53903, 53914, 54374). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add PodDisruptionBudget to scheduler cache.

**What this PR does / why we need it**:
This is the first step to add support for PodDisruptionBudget during preemption. This PR adds PDB to scheduler cache.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**: None

**Release note**:

```release-note
Add PodDisruptionBudget to scheduler cache.
```

ref/ kubernetes#53913
…ration-1020

Automatic merge from submit-queue (batch tested with PRs 53903, 53914, 54374). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Migrate resource relevant e2e test files to sig scheduling

**What this PR does / why we need it**:

Migrate resource relevant e2e test files to sig scheduling. Not fully sure whether these e2e files belong to sig-node or sig-scheduling, feel free to contact me if you have better solution.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue kubernetes#49161

**Special notes for your reviewer**:

**Release note**:
```release-note
none
```
…phic-scale-client

Automatic merge from submit-queue (batch tested with PRs 53743, 53564). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Polymorphic Scale Client

This PR introduces a polymorphic scale client based on discovery information that's able to scale scalable resources in arbitrary group-versions, as long as they present the scale subresource in their discovery information.

Currently, it supports `extensions/v1beta1.Scale` and `autoscaling/v1.Scale`, but supporting other versions of scale if/when we produce them should be fairly trivial.

It also updates the HPA to use this client, meaning the HPA will now work on any scalable resource, not just things in the `extensions/v1beta1` API group.
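As a minimal sketch of what this enables (resource and object names here are illustrative), an HPA can now target a scalable resource outside `extensions/v1beta1`, for example an `apps/v1` Deployment resolved through its discovered scale subresource:

```
kubectl apply -f - <<EOF
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
EOF
```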

**Release note**:
```release-note
Introduces a polymorphic scale client, allowing HorizontalPodAutoscalers to properly function on scalable resources in any API group.
```

Unblocks kubernetes#29698
Unblocks kubernetes#38756
Unblocks kubernetes#49504 
Fixes kubernetes#38810
…ugin-dir-flag

Automatic merge from submit-queue (batch tested with PRs 53743, 53564). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

kubelet: remove the --network-plugin-dir flag

**What this PR does / why we need it**:
This flag has been replaced with `--cni-bin-dir`,  and has been deprecated in Kubernetes 1.7.
It is safe to remove in Kubernetes 1.9 according to the deprecation policy.
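For anyone still passing the removed flag, the migration is a straight rename (the path below is illustrative):

```
# Before (flag removed in 1.9):
#   kubelet --network-plugin=cni --network-plugin-dir=/opt/cni/bin ...
# After:
kubelet --network-plugin=cni --cni-bin-dir=/opt/cni/bin ...
```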

**Which issue this PR fixes**: fixes kubernetes#46410

**Special notes for your reviewer**:
/assign @mtaufen @freehan @dchen1107

**Release note**:
```release-note
Remove the --network-plugin-dir flag.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

certs: remove always nil error from New signature

```release-note-none
```
```release-notes
* Logging cleanups
* Updates kube-dns to use client-go 3
* Updates containers to use alpine as the base image on all platforms
* Adds support for IPv6
```
…Pod-diff

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Updating E2E test for deleting PVC when PVC is in use

**What this PR does / why we need it**:
This test updates an existing e2e test and adds extra verification.
The updated test workflow is as follows (a kubectl sketch of the same flow appears right after the list):
1. Create the PVC and wait until a PV is provisioned. Create a Pod using the PVC.
2. Verify the Pod is running and the PV is attached to the node.
3. Delete the PVC.
4. Verify the volume remains attached to the pod after deleting the claim.
5. Verify the volume is accessible in the pod after deleting the claim.
6. Verify the associated PV is still present and its status is Failed.
7. Delete the Pod and wait until the PV is unmounted and detached from the node.
8. Wait and verify the PV is deleted after the Pod is deleted.
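A rough kubectl sketch of this flow (object and file names here are hypothetical, not the ones the test uses):

```
kubectl create -f pvc.yaml                               # PVC created, PV dynamically provisioned
kubectl create -f pod-using-pvc.yaml                     # pod mounts the PVC
kubectl delete pvc test-pvc                              # delete the claim while the pod still uses it
kubectl get pv                                           # PV still present; phase eventually shows Failed
kubectl exec pvc-tester -- touch /mnt/volume1/file.txt   # volume still accessible in the pod
kubectl delete pod pvc-tester                            # volume is unmounted and detached afterwards
kubectl get pv                                           # PV is then deleted automatically
```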



**Which issue this PR fixes**
fixes # vmware-archive#279

**Special notes for your reviewer**:
Test logs
```
# go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build371606839/command-line-arguments/_obj/exe/e2e:
  -get
    	go get -u kubetest if old or not installed (default true)
  -old duration
    	Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:42:40 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:42:40 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:42:40 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/16 15:42:40 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/16 15:42:40 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod...
2017/10/16 15:42:40 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:42:40 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 293.775296ms
2017/10/16 15:42:40 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.913+297ab03890a6a7-dirty", GitCommit:"297ab03890a6a76f268eb5415e0fb16f20b2309e", GitTreeState:"dirty", BuildDate:"2017-10-16T20:50:38Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:42:40 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 317.940582ms
2017/10/16 15:42:40 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod
Conformance test: not doing test setup.
Oct 16 15:42:42.327: INFO: Overriding default scale value of zero to 1
Oct 16 15:42:42.327: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:42:42.577720    8325 e2e.go:369] Starting e2e run "51f11717-b2c3-11e7-bd54-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508193761 - Will randomize all specs
Will run 1 of 706 specs

Oct 16 15:42:42.678: INFO: >>> kubeConfig: /root/.kube/config
Oct 16 15:42:42.686: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:42:42.724: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:42:42.883: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:42:42.883: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:42:42.891: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:42:42.891: INFO: Dumping network health container logs from all nodes...
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes [Feature:ReclaimPolicy] [sig-storage] persistentvolumereclaim:vsphere 
  should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:136
[BeforeEach] [sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:42:42.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:48
Oct 16 15:42:42.994: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[BeforeEach] [sig-storage] persistentvolumereclaim:vsphere
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:56
[It] should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:136
STEP: running testSetupVSpherePersistentVolumeReclaim
STEP: creating vmdk
STEP: creating the pv
STEP: creating the pvc
Oct 16 15:42:44.595: INFO: Waiting for PV vspherepv-ksccp to bind to PVC pvc-n4rq7
Oct 16 15:42:44.595: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-n4rq7 to have phase Bound
Oct 16 15:42:44.606: INFO: PersistentVolumeClaim pvc-n4rq7 found but phase is Pending instead of Bound.
Oct 16 15:42:47.625: INFO: PersistentVolumeClaim pvc-n4rq7 found and phase=Bound (3.029926391s)
Oct 16 15:42:47.625: INFO: Waiting up to 5m0s for PersistentVolume vspherepv-ksccp to have phase Bound
Oct 16 15:42:47.632: INFO: PersistentVolume vspherepv-ksccp found and phase=Bound (6.598243ms)
STEP: Creating the Pod
STEP: Deleting the Claim
Oct 16 15:42:59.709: INFO: Deleting PersistentVolumeClaim "pvc-n4rq7"
STEP: Verify the volume is attached to the node
STEP: Verify the volume is accessible and available in the pod
Oct 16 15:43:00.076: INFO: Running '/root/divyenp/kubernetes/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/root/.kube/config exec pvc-tester-r9ww9 --namespace=e2e-tests-persistentvolumereclaim-6pfpf -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:43:00.604: INFO: stderr: ""
Oct 16 15:43:00.604: INFO: stdout: ""
Oct 16 15:43:00.604: INFO: Verified that Volume is accessible in the POD after deleting PV claim
Oct 16 15:43:00.610: INFO: Waiting up to 1m0s for PersistentVolume vspherepv-ksccp to have phase Failed
Oct 16 15:43:00.619: INFO: PersistentVolume vspherepv-ksccp found and phase=Failed (9.016306ms)
STEP: Deleting the Pod
Oct 16 15:43:00.619: INFO: Deleting pod pvc-tester-r9ww9
Oct 16 15:43:00.650: INFO: Waiting up to 5m0s for pod "pvc-tester-r9ww9" in namespace "e2e-tests-persistentvolumereclaim-6pfpf" to be "terminated due to deadline exceeded"
Oct 16 15:43:00.668: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 18.507993ms
Oct 16 15:43:02.675: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 2.024854663s
Oct 16 15:43:04.682: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 4.03197856s
Oct 16 15:43:06.688: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 6.037718623s
Oct 16 15:43:08.697: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 8.047192574s
Oct 16 15:43:10.703: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 10.052754761s
Oct 16 15:43:12.708: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 12.057876018s
Oct 16 15:43:14.714: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 14.063962712s
Oct 16 15:43:16.719: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 16.068826626s
Oct 16 15:43:18.725: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 18.074735397s
Oct 16 15:43:20.730: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 20.080498293s
Oct 16 15:43:22.736: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 22.086586123s
Oct 16 15:43:24.742: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 24.092219324s
Oct 16 15:43:26.747: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 26.097385301s
Oct 16 15:43:28.753: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 28.103127591s
Oct 16 15:43:30.758: INFO: Pod "pvc-tester-r9ww9": Phase="Running", Reason="", readiness=true. Elapsed: 30.108014823s
Oct 16 15:43:32.764: INFO: Pod "pvc-tester-r9ww9": Phase="Pending", Reason="", readiness=false. Elapsed: 32.113847674s
Oct 16 15:43:34.772: INFO: Pod "pvc-tester-r9ww9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.122010171s
Oct 16 15:43:36.787: INFO: Pod "pvc-tester-r9ww9" in namespace "e2e-tests-persistentvolumereclaim-6pfpf" not found. Error: pods "pvc-tester-r9ww9" not found
Oct 16 15:43:36.787: INFO: Ignore "not found" error above. Pod "pvc-tester-r9ww9" successfully deleted
STEP: Verify PV is detached from the node after Pod is deleted
Oct 16 15:43:46.913: INFO: Waiting for Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/e2e-vmdk-1508193763110460154.vmdk" to detach from "kubernetes-node2".
Oct 16 15:43:56.918: INFO: Waiting for Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/e2e-vmdk-1508193763110460154.vmdk" to detach from "kubernetes-node2".
Oct 16 15:44:06.905: INFO: Volume "[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/e2e-vmdk-1508193763110460154.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Verify PV should be deleted automatically
Oct 16 15:44:06.905: INFO: Waiting up to 30s for PersistentVolume vspherepv-ksccp to get deleted
Oct 16 15:44:06.909: INFO: PersistentVolume vspherepv-ksccp was removed
[AfterEach] [sig-storage] persistentvolumereclaim:vsphere
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:62
STEP: running testCleanupVSpherePersistentVolumeReclaim
Oct 16 15:44:06.962: INFO: Deleting PersistentVolume "vspherepv-ksccp"
[AfterEach] [sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 16 15:44:06.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-persistentvolumereclaim-6pfpf" for this suite.
Oct 16 15:44:15.325: INFO: namespace: e2e-tests-persistentvolumereclaim-6pfpf, resource: bindings, ignored listing per whitelist
Oct 16 15:44:15.638: INFO: namespace e2e-tests-persistentvolumereclaim-6pfpf deletion completed in 8.651759385s

• [SLOW TEST:92.734 seconds]
[sig-storage] PersistentVolumes [Feature:ReclaimPolicy]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  [sig-storage] persistentvolumereclaim:vsphere
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
    should not detach and unmount PV when associated pvc with delete as reclaimPolicy is deleted when it is in use by the pod
    /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/pv_reclaimpolicy.go:136
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 16 15:44:15.651: INFO: Running AfterSuite actions on all node
Oct 16 15:44:15.651: INFO: Running AfterSuite actions on node 1

Ran 1 of 706 Specs in 92.974 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 705 Skipped PASS

Ginkgo ran 1 suite in 1m33.830856163s
Test Suite Passed
2017/10/16 15:44:15 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=should\snot\sdetach\sand\sunmount\sPV\swhen\sassociated\spvc\swith\sdelete\sas\sreclaimPolicy\sis\sdeleted\swhen\sit\sis\sin\suse\sby\sthe\spod' finished in 1m34.75838192s
2017/10/16 15:44:15 e2e.go:81: Done
```
VMware Reviewers: @rohitjogvmw @BaluDontu @tusharnt
**Release note**:

```release-note
None
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add a notice for node e2e config files

ref kubernetes#53542 and patched up with kubernetes/test-infra#5107

So while migrating the jobs to prow, I haven't kill the `*.properties` files yet because some lingering jobs, and possibly local tests are still using them. We have a copy of image-config.yaml in test-infra, and all *.properties file is merged into job configs.

Add a notice to remind people also update the job configs in test-infra. Also add myself as a reviewer here so I can subscribe some notice. I'll remove them once I cleaned up all legacy files here.

/assign @yguo0905 @dashpole @yujuhong
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

add kubectl create --raw -f

Adds `--raw` to `kubectl create` to match `kubectl get --raw`. It re-uses the transport, reads the input stream (stdin or a single file for now), and POSTs directly to the specified endpoint. This lets you aim data directly at a particular endpoint (a subresource, for instance). I'd like to see this extended to `kubectl replace` too, so that we have full access to subresources via scripting without having to reproduce the transports.
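A hedged usage sketch (the URI and file name are illustrative):

```
# POST a serialized object directly to an API path, reusing kubectl's transport
kubectl create --raw /api/v1/namespaces/default/pods -f ./pod.json
# stdin works as the input stream too
cat pod.json | kubectl create --raw /api/v1/namespaces/default/pods -f -
```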

@kubernetes/sig-cli-pr-reviews 

```release-note
add `--raw` to `kubectl create` to POST using the normal transport
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Volunteer to be reviewer of DaemonSet

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # N/A


**Release note**:

```release-note
None
```
Kubernetes Submit Queue and others added 28 commits October 26, 2017 17:07
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

correct the kubeDeps.Cloud instead of kcfg.Cloud

**What this PR does / why we need it**:
Default to the hostname when `kubeDeps.Cloud == nil` (the check should use `kubeDeps.Cloud`, not `kcfg.Cloud`).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Enable metadata concealment for tests

**What this PR does / why we need it**: Metadata concealment is going to beta for v1.9; enable it by default in tests.  Also, just use `ENABLE_METADATA_CONCEALMENT` instead of two different vars.  Work toward kubernetes#8867.
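A minimal sketch of flipping it on for a GCE test cluster, assuming the usual cluster/kube-up.sh flow:

```
# Enable metadata concealment for a GCE cluster bring-up
export ENABLE_METADATA_CONCEALMENT=true   # see cluster/gce/config-default.sh
./cluster/kube-up.sh
```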

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: none

**Special notes for your reviewer**:

**Release note**:

```release-note
Metadata concealment on GCE is now controlled by the `ENABLE_METADATA_CONCEALMENT` env var.  See cluster/gce/config-default.sh for more info.
```
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove federation

This PR removes the federation codebase and associated tooling from the tree.

The first commit just removes the `federation` path and should be uncontroversial.  The second commit removes references and associated tooling and suggests careful review.

Requirements for merge:

- [x] Bazel jobs no longer hard-code federation as a target ([test infra kubernetes#4983](kubernetes/test-infra#4983))
- [x] `federation-e2e` jobs are not run by default for k/k

**Release note**:

```release-note
Development of Kubernetes Federation has moved to github.com/kubernetes/federation.  This move out of tree also means that Federation will begin releasing separately from Kubernetes.  The impact of this is Federation-specific behavior will no longer be included in kubectl, kubefed will no longer be released as part of Kubernetes, and the Federation servers will no longer be included in the hyperkube binary and image.
```

cc: @kubernetes/sig-multicluster-pr-reviews @kubernetes/sig-testing-pr-reviews
…econform

Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add conformance annotations for expansion and service tests

Signed-off-by: Brad Topol <btopol@us.ibm.com>

/sig testing
/area conformance
@sig-testing-pr-reviews

This PR adds expansion and service test conformance annotations to the e2e test suite.

The PR fixes a portion of kubernetes#53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.



```release-note
NONE
```
…nform

Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add service latency and secret related conformance annotations

Signed-off-by: Brad Topol <btopol@us.ibm.com>

/sig testing
/area conformance
@sig-testing-pr-reviews

This PR adds service latency and secret related conformance annotations to the e2e test suite.

The PR fixes a portion of kubernetes#53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:

Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.

**Release note**:

```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix bad format anchor in CHANGELOG

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:
We should update the relnotes associated scripts.

**Release note**:

```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

allow windows mount path

**What this PR does / why we need it**:
Currently the mount path only allows Linux absolute paths; this PR allows Windows mount paths as well.
This code snippet in the kubelet runs on both Linux and Windows, so using the IsAbs func to tell whether a path is absolute is not sufficient: in a k8s Windows cluster, the master is Linux and the agents are Windows nodes.

**Special notes for your reviewer**:
The example pod with mount path is like below:
```
---
kind: Pod
apiVersion: v1
metadata:
  name: pod-uses-shared-hdd-5g
  labels:
    name: storage
spec:
  containers:
  - image: microsoft/iis
    name: az-c-01
    volumeMounts:
    - name: blobdisk01
      mountPath: 'F:'
  nodeSelector:
    beta.kubernetes.io/os: windows
  volumes:
  - name: blobdisk01
    persistentVolumeClaim:
      claimName: pv-dd-shared-hdd-5
```


**Release note**:

```release-note
```
…_node

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add Windows support to the system verification check

**What this PR does / why we need it**:  This PR (in conjunction with kubernetes#53553 ) adds initial support for adding a Windows worker node to a Kubernetes cluster using
 kubeadm.  It was suggested on that PR to open a separate PR for the changes in test/e2e_node for review by sig-node devs.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes kubernetes#364 in conjuction with kubernetes#53553 

**Special notes for your reviewer**:

**Release note**:

```release-note
Add Windows support to the system verification check
```
Automatic merge from submit-queue (batch tested with PRs 53730, 51608, 54459, 54534, 54585). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

RBD Plugin: Implement Attacher/Detacher interfaces.

**What this PR does / why we need it**:

This PR continues @rootfs's work in kubernetes#33660. It implements the volume.Attacher/volume.Detacher interfaces to resolve RBD image locking and makes the RBD plugin more robust.

Summary of the interfaces and what they do for the RBD plugin (a rough node-level sketch follows the list):

- Attacher.Attach(): does nothing
- Attacher.VolumesAreAttached(): method to query volume attach status
- Attacher.GetDeviceMountPath(): method to get device mount path 
- Attacher.WaitForAttach(): kubelet maps the image on the node (and lock the image if needed)
- Attacher.MountDevice(): kubelet mounts device at the device mount path
- Detacher.UnmountDevice: kubelet unmounts device from the device mount path (currently, we need to unmaps image from the node here) (and unlock the image if needed)
- Detacher.Detach(): does nothing
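A rough node-level sketch of what the WaitForAttach/MountDevice and UnmountDevice steps amount to for an RBD image (commands and paths are illustrative; the plugin drives the equivalent operations through its own utilities):

```
rbd map mypool/myimage --id admin --keyring /etc/ceph/keyring   # WaitForAttach: map image -> /dev/rbd0
mkfs.ext4 /dev/rbd0                                             # MountDevice: format only if unformatted
mount /dev/rbd0 /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/mypool-image-myimage
# UnmountDevice later unmounts that path and unmaps the image:
#   umount /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/mypool-image-myimage
#   rbd unmap /dev/rbd0
```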

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

fixes kubernetes#50142.

**Special notes for your reviewer**:

RBD changes:

  1) Modify rbdPlugin to implement volume.AttachableVolumePlugin interface.
  2) Add rbdAttacher/rbdDetacher structs to implement
  volume.Attacher/Detacher interfaces.
  3) Add mount.SafeFormatAndMount/mount.Exec fields to rbdPlugin, and setup them in
  rbdPlugin.Init for later uses. Attacher/Mounter/Unmounter/Detacher
  reference rbdPlugin to use mounter and exec. This simplifies
  code.
  4) Add testcase struct to abstract RBD Plugin test case, etc.
  5) Add newRBD constructor to unify rbd struct initialization.

Non-RBD changes:

  1) Fix FakeMounter.IsLikelyNotMountPoint to return ErrNotExist if the
  directory does not exist. Mounter.IsLikelyNotMountPoint interface
  requires this, and RBD plugin depends on it.
  2) ~~Extend Detacher.Detach method to pass `*volume.Spec`, RBD plugin
  needs it to detach device from the node.~~
  3) ~~Extend Volume.Spec struct to include namespace string, RBD Plugin needs
  it to locate objects (e.g. secrets) in Pod's namespace.~~
  4) ~~Update RABC bootstrap policy to allow
  `system:controller:attachdetach-controller` cluster role to get
  Secrets object. RBD attach/detach needs to access secrets object in
  Pod's namespace.~~

**Release note**:

```release-note
NONE
```
…ult-pv-kind

Automatic merge from submit-queue (batch tested with PRs 53730, 51608, 54459, 54534, 54585). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

fix azure storage account num exhausting issue

**What this PR does / why we need it**:
If a customer uses the default azure-disk storage class and creates lots of azure disk PVs with it, the storage account quota in the Azure subscription can be exhausted. This changes the default `kind` value of the azure disk storage class from `Dedicated` to `Shared`, which means only a few storage accounts are created even when there are hundreds of azure disk PVs.
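For comparison, an explicit StorageClass with the new default spelled out (names and values are illustrative):

```
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-disk-shared
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Shared                      # a few shared storage accounts instead of one per PV
  storageaccounttype: Standard_LRS
EOF
```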

**Which issue this PR fixes**:
fixes kubernetes#54669
fixes the storage account exhaustion issue when lots of azure disk PVs are created using the default azure-disk storage class

**Special notes for your reviewer**:
This fixes the Azure storage account exhaustion issue when lots of azure disk PVs are created using the default azure-disk storage class.
I would suggest also cherry-picking this fix to v1.7 and v1.8.

**Release note**:

```release-note
fix the azure storage account exhaustion issue
```

/sig azure
@karataliu @rootfs @brendanburns
…conform

Automatic merge from submit-queue (batch tested with PRs 53730, 51608, 54459, 54534, 54585). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add conformance annotations for projected volume tests

Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add projected volume related conformance annotations

/sig testing
/area conformance
@sig-testing-pr-reviews

This PR adds projected volume related related conformance annotations to the e2e test suite.

The PR fixes a portion of kubernetes#53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.

**Release note**:

```release-note
NONE
```
…workingconform

Automatic merge from submit-queue (batch tested with PRs 53730, 51608, 54459, 54534, 54585). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add probe, pre_stop, and networking related container annotations.

Signed-off-by: Brad Topol <btopol@us.ibm.com>

Add probe, pre_stop, and networking related container annotations.

/sig testing
/area conformance
@sig-testing-pr-reviews

This PR adds probe, pre_stop, and networking related conformance annotations to the e2e test suite.

The PR fixes a portion of kubernetes#53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the pod based e2e conformance tests.
Special notes for your reviewer:

Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.



**Release note**:

```release-note
NONE
```
…or-message

Automatic merge from submit-queue (batch tested with PRs 54656, 54552, 54389, 53634, 54408). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Better error messages and logging while registering device plugins.

Related to: kubernetes#51993

/sig scheduling

**Release note**:
```release-note
NONE
```
…nit-tests

Automatic merge from submit-queue (batch tested with PRs 54656, 54552, 54389, 53634, 54408). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Adding unit tests to methods of netsh

What this PR does / why we need it:

Add unit tests, thank you!
…-file-test-cases

Automatic merge from submit-queue (batch tested with PRs 54656, 54552, 54389, 53634, 54408). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Adding unit tests to methods of file's util

What this PR does / why we need it:

Add unit tests, thank you!
Automatic merge from submit-queue (batch tested with PRs 54656, 54552, 54389, 53634, 54408). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove hard-coded session affinity timeout in Windows kernel proxy

**What this PR does / why we need it**:

Remove the hard-coded session affinity timeout in the Windows kernel proxy - we have already done this in the userspace, iptables, and ipvs proxies.

**Which issue this PR fixes**: 

fixes kubernetes#53636 

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```

/sig network
/area kube-proxy
Automatic merge from submit-queue (batch tested with PRs 54656, 54552, 54389, 53634, 54408). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add file backed state to cpu manager

**What this PR does / why we need it**:
Adds a file-backed `State` implementation to the cpu manager, with tests.
Reads from `State` are done from memory, while each write triggers a state save to a file.

Any failure in reading the state file results in an empty state.

Next PR: kubernetes#54409
…int_reconciler_type

Automatic merge from submit-queue (batch tested with PRs 54419, 53545). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

change alpha-endpoint-reconciler-type argument to endpoint-reconciler-type

**What this PR does / why we need it**: Tweaks the endpoint reconciler argument to remove 'alpha', because according to this [comment](kubernetes#50984 (comment)) we prefer to document the flags.
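The rename in practice (the reconciler value shown is just one of the supported types):

```
# Before:
#   kube-apiserver --alpha-endpoint-reconciler-type=lease ...
# After:
kube-apiserver --endpoint-reconciler-type=lease ...
```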

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54419, 53545). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Updating Calico to v2.6.1

**What this PR does / why we need it**:

Updating Calico to the most recent release v2.6.1.

[Release page](https://docs.projectcalico.org/v2.6/releases/) and [blog post](https://www.projectcalico.org/project-calico-2-6-released/)

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

delete the hostport from usedmap

**What this PR does / why we need it**:
Delete the hostport record when the pod is not on the host.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

This facilitates the follow-up PR kubernetes#52421, because the code that detects conflicts between wantports and existingports is not very clean right now.
Besides, removing unused ports from the map saves memory.

**Special notes for your reviewer**:

The original author @k82cn and I agreed to make this change.

**Release note**:

```release-note
NONE
```
…ev1-round2

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Use the core client with explicit version globally

**What this PR does / why we need it**:
As mentioned in kubernetes#49535 and kubernetes#50605, we want a global replacement that uses the core client with an explicit version.

**Which issue this PR fixes**: fixes kubernetes#49535 

**Special notes for your reviewer**:
The actual type of clientSet.Core() is already the same as clientSet.CoreV1(), so it should be a safe replacement.
The places where clientSet.Core() is still in use were identified via the IDE's "find usages", and the changes were made with a one-time global replace. Hopefully none will be left after this PR is merged.
Let me know if this PR is too big to review; I can split it into smaller ones.
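The mechanical part is roughly this kind of one-time replacement (a sketch only; the actual change was driven from the IDE as described above):

```
grep -rl --include='*.go' '\.Core()' . \
  | xargs sed -i 's/\.Core()/.CoreV1()/g'
```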

/cc @kubernetes/sig-api-machinery-pr-reviews 
/cc @k82cn @sttts 

**Release note**:

```release-note
none
```
…llback-host-networking

Automatic merge from submit-queue (batch tested with PRs 50776, 54395). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Move fluentd-gcp out of host network

Since the metadata proxy doesn't filter by service account after all, make the fluentd-gcp addon run in its own network.

This will mitigate the port collision problem.

```release-note
[fluentd-gcp addon] Fluentd now runs in its own network, not in the host one.
```
…eExistsByProviderID

Automatic merge from submit-queue (batch tested with PRs 51409, 54616). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Implement InstanceExistsByProviderID() for cloud providers

Fix kubernetes#51406
If cloud providers (like aws, gce, etc.) implement ExternalID()
and support getting an instance by ProviderID, they also implement
InstanceExistsByProviderID().

/assign wlan0
/assign @luxas

**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51409, 54616). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Generate kubeadm referencedoc and man pages

**What this PR does / why we need it**:
Improve the kubeadm reference docs and start generating kubeadm man pages.
With this PR, kubeadm also starts following the same approach used by other tools.

**Which issue this PR fixes** 
initial work for [kubernetes#265](kubernetes/kubeadm#265)

**Special notes for your reviewer**:
This [document](https://docs.google.com/document/d/1w22y-C1YD1mmqqETxrQrCLnJpzwttscanddgvfYceYY/edit?usp=sharing) contains the design proposal for how to implement this goal, which will be implemented partially in https://github.com/kubernetes/kubernetes (this PR) and partially in https://github.com/kubernetes/website

In order to keep the PR as small and clean as possible, I didn't generate new placeholder files under `/docs/man` and `/docs/admin` at this stage. If this is necessary, I will do it later in this PR or eventually in another PR; however, if it is not strictly necessary, IMO we should avoid polluting this repo with placeholders for files that are maintained in another repo.

cc @kubernetes/sig-docs-maintainers @Bradamant3 @heckj
@joelsmith
Author

Opened by mistake. Sorry!

@joelsmith joelsmith closed this Oct 27, 2017