Investigate the use of CRD sub-resources. #643
I also think the Scale sub-resource would be a useful addition to revisions as a standard way for autoscalers to effect scaling. This gives the revision controller the option of modifying or ignoring the request, unlike the alternative of autoscaling the deployment directly. |
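The Scale subresource discussed above is declared in the CRD manifest by pointing the API server at the JSONPaths that back the standard Scale API. A minimal sketch (the group and kind here are hypothetical placeholders, not Knative's actual Revision CRD, whose spec does not use `replicas`):

```yaml
# Illustrative only: a hypothetical CRD wired up for the Scale API.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.dev
spec:
  group: example.dev
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      subresources:
        scale:
          specReplicasPath: .spec.replicas     # written by scale clients (e.g. an autoscaler)
          statusReplicasPath: .status.replicas # reported back by the controller
          labelSelectorPath: .status.selector  # enables HPA-style label selection
```

With this in place, autoscalers can `GET`/`PUT` the `/scale` endpoint instead of patching the Deployment directly, which is what gives the owning controller the chance to modify or ignore the request.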
Yes, perhaps, although I'm not sure the form it takes is appropriate for us? I think the payload is a single integer (foggy recollection of things people told me), which feels like the kind of thing we have been avoiding in the spec, since that feels like "how many servers" and we're after "serverless". That said, I could see this being useful to force; cc @josephburnett for this discussion, but let's track that with a separate issue since it involves a bit of design and specification. |
It's less about users and more about implementors of autoscaling strategies. But you're right, off-topic, plus @josephburnett is out until Tuesday and I don't want to start any official discussions without him. 😃 This document in the team drive has more details if anyone is interested. |
/assign @grantr |
Moving to M5, since the GKE 1.10 alpha clusters have been showing some issues. I don't think the importance of this has diminished, but I'm less optimistic that we'll be able to make the switch to 1.10 smoothly in M4. |
If I understood correctly, CRD subresources are in v1beta1 as of k8s 1.11: https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/ |
Here's a backwards-compatible strategy for adopting the CRD status subresource, with the aim of eventually dropping the generation bumping that's occurring in the webhook.

**Phase 1 - Serving 0.3**
Changes:
- Enable status subresource on all CRDs
- Drop our functional dependency on

Implications

**Phase 2 - Serving 0.4**
Changes:
- Fully drop our dependency on
- Remove
- Change the label applied to revisions from

Implications

**Phase 3 - Serving 0.5**
Changes:
- Drop functional dependency on

**Phase 4 - Serving 0.6**
Changes:
- No longer apply the

**Phase X - when conversion webhooks are beta**
Changes

Implications
|
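Phase 1's "enable status subresource on all CRDs" amounts to a one-stanza change per CRD manifest. A minimal sketch (the CRD name here is hypothetical, not Knative's actual manifest; the stanza itself is the standard apiextensions.k8s.io/v1 form):

```yaml
# Hypothetical CRD used only to illustrate the status-subresource stanza.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: configurations.example.dev
spec:
  group: example.dev
  names:
    kind: Configuration
    plural: configurations
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      subresources:
        # With this enabled, writes to /status cannot touch spec, and
        # metadata.generation increments only on spec changes -- which is
        # what lets the webhook's generation-bumping logic be dropped.
        status: {}
```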
With Kubernetes 1.11+, `metadata.generation` now increments properly when the status subresource is enabled on CRDs. For more details see: knative/serving#643
* Drop webhook logic to increment `spec.generation`. With Kubernetes 1.11+, `metadata.generation` now increments properly when the status subresource is enabled on CRDs. For more details see: knative/serving#643
* Drop the generational duck type
0.4 pieces have landed, moving out. |
Addresses knative#643. In 0.4, a Revision's `/configurationMetadataGeneration` and `/configurationGeneration` labels both have a value equal to the Configuration's `metadata.generation`. This commit has the Configuration's reconciler use `/configurationGeneration` when looking up the latest created Revision. We should be able to drop the use of `/configurationMetadataGeneration` in 0.6.
Addresses knative#643. In 0.4, Revisions migrated toward using the `/configurationGeneration` label to determine a Configuration's latest created Revision. This commit leaves the deprecated `/configurationMetadataGeneration` label intact to allow rollback to 0.4, but removes the migration of the deprecated label, along with the deletion of an annotation previously used for the same purpose. The functional dependency on `/configurationMetadataGeneration` is dropped in knative#3325.
/milestone Serving 0.6 |
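During the 0.4/0.5 migration window described in these commits, a Revision carries both labels with the same value. A sketch of the relevant metadata (the `serving.knative.dev/` label prefix and the Revision name are assumptions based on Knative's naming conventions, not copied from the actual objects):

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Revision
metadata:
  name: my-config-00002   # hypothetical Revision name
  labels:
    # Used by the Configuration reconciler from 0.4 onward to find the
    # latest created Revision for a given metadata.generation.
    serving.knative.dev/configurationGeneration: "2"
    # Deprecated duplicate, kept through 0.5 so a rollback to 0.4 still
    # works; the label (and its use) is dropped entirely in 0.6.
    serving.knative.dev/configurationMetadataGeneration: "2"
```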
@dprotaso Do you plan to do the 0.6 portion of this? |
Oh yeah |
/close |
@dprotaso: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
it is done. |
This reverts commit 2f61813.
This is not mandatory, but it's been a while since the Go version in CI was bumped, so this bumps golang to 1.15.
IIRC, K8s 1.10 added support for "status" sub-resources in CRDs. There are a variety of places this is useful to us, including `updateStatus` in each of the controllers, and (likely) the validation logic in the webhook. Let's scout this functionality in M4, and adopt it if useful. See also this issue, which tracks a 1.10 update.