Rebase 1.16 #120
Conversation
Run VPA e2e tests for both apis even if one fails
add test case for controller_fetcher; correct comment typos
add test case for controller_fetcher and cluster_feeder
Currently we read certificate files into a 5000-byte buffer. This imposes a maximum size on our certificates; any certificate above that limit is silently truncated, most likely leading to runtime bad-certificate errors.
Use ioutil for certificate file reading.
- SubscriptionID placeholder does not match documentation or surrounding placeholders
Add support for cronjobs in VPA
Update c5,i3en,m5,r5 instance types
xrange() was removed in Python 3 in favor of range()
Fix typo in autoscaler azure example
…rash check nodeGroup IsNil to avoid a crash
…erver-over-heapster Use Metrics Server instead of Heapster in the FAQ
Update VPA dependencies to k8s 1.14.3
Switch VPA examples to use apps/v1 API for Deployment
…milarity_rules fix: ignore agentpool label when looking for similar node groups with Azure provider
…onfig_readme Add multinode config in readme.
"delete node" has a specific meaning in the context of the Kubernetes API: deleting the node object. However, the cluster-autoscaler never does that; it terminates the underlying instance and expects the [cloud-node-controller](https://kubernetes.io/docs/concepts/architecture/cloud-controller/#node-controller) to remove the corresponding node from Kubernetes. Replace all mentions of "deleting" a node with "terminating" to disambiguate this. Signed-off-by: Matthias Rampke <mr@soundcloud.com>
This had me stumped for the better part of a day. While the cluster-autoscaler "deletes" nodes, it does not actually delete the Node object from the Kubernetes API. In normal operations, with a well-configured cluster, this is a minor point; however, when debugging why nodes do not get deleted, the inconsistent terminology can be a major headache. This FAQ entry should clarify the difference for anyone who needs to know. Signed-off-by: Matthias Rampke <mr@soundcloud.com>
…examples Add required selector to VPA deployment examples for apps/v1
Fix bug in balancing processor.
…up when there was a multizonal pool with the number of nodes exceeding the limit for one zone.
…nodes cluster-autoscaler FAQ: clarify what "deleting nodes" means in this context
/test unit
Unit tests need openshift/release#5626
/test e2e-aws-operator
Needs openshift/cluster-autoscaler-operator#120 for the e2e to succeed
/test e2e-aws-operator
/verify-owners
Update cluster-autoscaler OWNERS and remove owners who do not belong to the openshift org, so that CI is happy too.
/verify-owners
/test e2e-aws-operator
This reverts commit 6601bf0. See kubernetes#2495
Once this gets refreshed through CI (openshift/release#5626), unit tests should go green
/test unit
This is not possible to review in a meaningful way. Process sounds right. LGTM if the tests pass.
/lgtm
/approve
Rebase:
The PR was created by first taking upstream/cluster-autoscaler-release-1.16 as the base, then applying the UPSTREAM: patches on top. The set of picks applied was derived from:
git log --oneline --no-merges 8a999884de8916f7fe99abb9fa63caed4997719b..openshift/master
where 8a99988 reflects the changes since our last rebase (which was upstream/cluster-autoscaler-release-1.14 #107).
And in that set of picks,
UPSTREAM: <carry>: openshift: Bump deps for cloudprovider/openshiftmachineapi
was used to revendor using the new upstream flow, which relies on cluster-autoscaler/hack/update-vendor.sh. For it to succeed and give a buildable artifact, we had to relax the script to allow the go.mod-extra to set github.com/prometheus/client_golang v0.9.2 and github.com/matttproud/golang_protobuf_extensions v1.0.1.

Extra commits
Additionally, an extra commit was added on top to ensure the machine API provider satisfies the new cloud provider interface methods:
UPSTREAM: <carry>: openshift: satisfy cloud provider interface
The interface was changed by kubernetes@9066688#diff-34ecd32e36ab8898fff1637bb1b39c2c
Additionally, an extra commit was added on top to revert kubernetes@6601bf0 and overcome kubernetes#2495.
PR dependencies
This PR needs openshift/cluster-autoscaler-operator#120 for the e2e to succeed because of new RBAC requirements.
This PR needs openshift/release#5626 for unit tests to succeed.
They are currently failing because Go 1.10 is missing the Go 1.12 standard-library additions (https://golang.org/doc/go1.12#library).
Rebase process
To create the merge commit, I used the following steps:
Picks were as follows, with conflicts signaled: