reconcile ClusterIngress into VirtualService #2189
Conversation
@lichuqiang: 10 warnings.
In response to this:
Proposed Changes
- update ingress defaulting and validation
- update test dependency
- add ingress reconciler to reconcile ClusterIngress into VirtualService
Release Note
reconcile ClusterIngress into VirtualService
/area networking
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
```go
)

var (
	DefaultTimeout = 60 * time.Second
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Golint comments: exported var DefaultTimeout should have comment or be unexported. More info.
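For reference, a minimal sketch of the fix golint is asking for here: the exported identifier just needs a doc comment that starts with its name (the wording and package name below are assumptions, not this PR's final code):

```go
package v1alpha1

import "time"

// DefaultTimeout is the timeout applied when none is specified.
var DefaultTimeout = 60 * time.Second
```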
Force-pushed from a814fc8 to 6fd1711.
@lichuqiang: 3 unresolved warnings and 2 new warnings.
In response to this:
/lint
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
```go
)

const (
	// Default to 60 seconds if timeout not specified.
```
Golint comments: comment on exported const DefaultTimeout should be of the form "DefaultTimeout ...". More info.
```go
const (
	// Default to 60 seconds if timeout not specified.
	DefaultTimeout = 60 * time.Second
	// Default to retry for 3 times if not specified.
```
Golint comments: comment on exported const DefaultRetryCount should be of the form "DefaultRetryCount ...". More info.
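For illustration, a sketch of the comment form golint wants for both constants — each comment begins with the identifier it documents (exact wording and the retry value are assumptions):

```go
package v1alpha1

import "time"

const (
	// DefaultTimeout is the timeout to use when none is specified,
	// defaulting to 60 seconds.
	DefaultTimeout = 60 * time.Second

	// DefaultRetryCount is the number of retries to use when none is
	// specified, defaulting to 3.
	DefaultRetryCount = 3
)
```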
@lichuqiang: 3 unresolved warnings and 0 new warnings.
In response to this:
/lint
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 1dc61dc to e7611c1.
/test pull-knative-serving-integration-tests
Force-pushed from 3c4dec1 to d95741e.
Mostly nits throughout.
I'd like @tcnghia or @ZhiminXiang to do a pass over this before I approve, but I think chunks of this could sail through if you have any interest in breaking it up.
Also, I'd be fine with adding the reconciler to `./cmd/controller/main.go` now, since it makes trying these resources manually practical.
pkg/apis/networking/register.go (outdated)
```go
// RouteLabelKey is the label key attached to ClusterIngress resources
// to indicate which Route triggered their creation.
RouteLabelKey = GroupName + "/route"
```
This feels like a layering violation. I think I'd expect:
serving.knative.dev/route: foo
I would like to avoid the Ingress resources knowing anything about Routes or our other resources.
Aha, I overlooked the fact that the ingress and route resources are not in the same group.
So should we move this definition to the register file of the route?
Yeah that SGTM
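A hedged sketch of what that move might look like, assuming the serving group's register file follows the same `GroupName + "/..."` pattern as the networking one quoted above (the exact file layout is an assumption):

```go
// pkg/apis/serving/register.go (sketch)
package serving

const (
	// GroupName is the group name used for Knative Serving API objects.
	GroupName = "serving.knative.dev"

	// RouteLabelKey is the label key attached to a ClusterIngress
	// to indicate which Route triggered its creation, e.g.
	//   serving.knative.dev/route: foo
	RouteLabelKey = GroupName + "/route"
)
```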
pkg/apis/networking/register.go (outdated)
```go
// IngressLabelKey is the label key attached to underlying network programming
// resources to indicate which ClusterIngress triggered their creation.
IngressLabelKey = GroupName + "/clusteringress"
```
By contract, this is perfectly fine :)
*contrast. I blame the bouncy shuttle this morning
```go
}
if p.Retries.PerTryTimeout == nil {
	p.Retries.PerTryTimeout = &metav1.Duration{Duration: DefaultTimeout}
}
```
Do you want to split these (`./pkg/apis/networking/...`) changes into a separate PR? They LGTM, thanks for the cleanup.
Sure, will do
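For context on the defaulting being split out, a self-contained sketch of how the per-try timeout and retry count defaults typically fit together (the type and field names here are stand-ins, not necessarily this PR's exact API):

```go
package v1alpha1

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const (
	// DefaultTimeout is the per-try timeout used when none is specified.
	DefaultTimeout = 60 * time.Second
	// DefaultRetryCount is the retry count used when none is specified.
	DefaultRetryCount = 3
)

// HTTPRetry is a stand-in for the retry policy type being defaulted.
type HTTPRetry struct {
	Attempts      int
	PerTryTimeout *metav1.Duration
}

// SetDefaults fills in any retry fields the user left unspecified.
func (r *HTTPRetry) SetDefaults() {
	if r.Attempts == 0 {
		r.Attempts = DefaultRetryCount
	}
	if r.PerTryTimeout == nil {
		r.PerTryTimeout = &metav1.Duration{Duration: DefaultTimeout}
	}
}
```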
```go
FilterFunc: controller.Filter(v1alpha1.SchemeGroupVersion.WithKind("ClusterIngress")),
Handler: cache.ResourceEventHandlerFuncs{
	AddFunc:    impl.EnqueueControllerOf,
	UpdateFunc: controller.PassNew(impl.EnqueueControllerOf),
```
There isn't enough information in OwnerReference for this to work properly.
See also: #1259
Thanks for the reminder.
I'll construct a selection func instead, via a label I think.
```go
var (
	ingressRules = []v1alpha1.ClusterIngressRule{
		{
```
nit: if you move this to the previous line (`{{`), it will reduce indentation.
ACK
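A tiny before/after illustration of the nit, using a stand-in type so the snippet stands alone:

```go
package example

// Rule is a stand-in for v1alpha1.ClusterIngressRule, just to show the style.
type Rule struct {
	Hosts []string
}

// Before: the first element's brace on its own line adds an indentation level.
var verbose = []Rule{
	{
		Hosts: []string{"example.com"},
	},
}

// After: opening the literal with `{{` keeps the body one level flatter.
var compact = []Rule{{
	Hosts: []string{"example.com"},
}}
```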
```go
},
HTTP: &v1alpha1.HTTPClusterIngressRuleValue{
	Paths: []v1alpha1.HTTPClusterIngressPath{
		{
```
same nit as above.
```go
Paths: []v1alpha1.HTTPClusterIngressPath{
	{
		Splits: []v1alpha1.ClusterIngressBackendSplit{
			{
```
same nit
```go
	},
},
Conditions: duckv1alpha1.Conditions{
	{
```
same nit
I'm gonna stop now :)
```diff
-AddFunc:    impl.EnqueueControllerOf,
-UpdateFunc: controller.PassNew(impl.EnqueueControllerOf),
+AddFunc:    c.enqueueOwnerIngress(impl),
+UpdateFunc: controller.PassNew(c.enqueueOwnerIngress(impl)),
```
I think I mistook this for the `Route` -> `ClusterIngress` relationship, which can't use `OwnerReference`. Since `ClusterIngress` isn't namespaced, I believe my prior comment was false here (but accurate for when we move to `ClusterIngress` in the `Route` controller).
I apologize for the randomization, but you should be able to revert this change.
Yup, it can work from a functional standpoint, as we don't need namespace info for cluster-scoped resources.
But from a design perspective, `OwnerReference` is meant for resources in the same namespace, not for cluster-scoped resources.
So I'm wondering whether we should just get rid of this, to avoid possible semantic changes in the future.
That way, in `EnqueueControllerOf`, the key enqueued is indeed "virtualservice namespace + ingress name": https://github.com/knative/pkg/blob/master/controller/controller.go#L129.
And in the reconciler we just throw away the namespace to make it work; I think that behavior is a little "opportunistic", WDYT?
I'm happy to leave this, if you are. We could probably use a generalized `impl.EnqueueLabelOf(key)` method in `knative/pkg/controller` to generalize this pattern a bit (I've now seen it in a handful of places). Not for this PR.
> We could probably use a generalized `impl.EnqueueLabelOf(key)` method in `knative/pkg/controller` to generalize this pattern a bit (I've now seen it in a handful of places)

That makes sense; we'll also need that kind of func for route -> ingress.
I'll keep this in mind and make it a cleanup after everything else is done.
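A sketch of what that generalized helper could look like. It takes the enqueue function as a parameter rather than depending on a particular `controller.Impl` API, since the eventual name and signature in `knative/pkg` are assumptions here:

```go
package controller

import (
	"k8s.io/apimachinery/pkg/api/meta"
)

// EnqueueLabelOf returns a handler that reads the given label off the
// observed object and enqueues the (cluster-scoped) resource it names.
// This generalizes the "look up the owner via a label" pattern used for
// VirtualService -> ClusterIngress in this PR.
func EnqueueLabelOf(labelKey string, enqueue func(key string)) func(interface{}) {
	return func(obj interface{}) {
		object, err := meta.Accessor(obj)
		if err != nil {
			return
		}
		if name, ok := object.GetLabels()[labelKey]; ok {
			enqueue(name)
		}
	}
}
```

Assuming `controller.Impl` exposes a plain string-key enqueue method, the informer wiring above could then pass something like `EnqueueLabelOf(networking.IngressLabelKey, impl.EnqueueKey)` to both AddFunc and, via `PassNew`, UpdateFunc.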
Force-pushed from a7d630f to a845852.
I'm curious how we see alternative ClusterIngress implementations being slotted in here. The ClusterIngress reconciler and the Serving Controller have explicit knowledge of VirtualService. Do we expect these resources to also have direct knowledge of other ClusterIngress implementations? Or is there some future work to separate out this knowledge so that ClusterIngress isn't directly tied to VirtualService?
@bbrowning The basic idea is to start decoupling by creating this abstraction to go through. Even with this implementation "bundled" (for now), I am hoping that we can quickly reach a point where alternative implementations may reconcile these resources by running a parallel controller that handles:

```yaml
annotations:
  networking.knative.dev/ingress.class: their-class
```

Annotations should be passed through from the Route (and the Service to the Route), so annotating those should be able to override default Ingress selection. I also think that we will want a config setting for the default implementation we use. It'll take a few more steps to reach that point. Ultimately, I could see us splitting off
/assign @ZhiminXiang for LGTM, and then I'll approve
@mattmoor Thanks for the explanation. This is definitely a step in the right direction to allow that pluggability 👍
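To make the pluggability idea above concrete, here is a hedged sketch of how an alternative implementation's controller might select only the ClusterIngresses addressed to it via the `networking.knative.dev/ingress.class` annotation (the filter name and wiring are assumptions for illustration):

```go
package clusteringress

import (
	"k8s.io/apimachinery/pkg/api/meta"
)

const ingressClassAnnotation = "networking.knative.dev/ingress.class"

// ingressClassFilter returns an informer filter that keeps only the
// ClusterIngress objects annotated with this implementation's class,
// and optionally the un-annotated ones if this implementation is
// configured as the cluster default.
func ingressClassFilter(ownClass string, isDefault bool) func(obj interface{}) bool {
	return func(obj interface{}) bool {
		object, err := meta.Accessor(obj)
		if err != nil {
			return false
		}
		class, ok := object.GetAnnotations()[ingressClassAnnotation]
		if !ok {
			return isDefault
		}
		return class == ownClass
	}
}
```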
Reviewed some of the files. Will try to finish the rest of the files within this week.
```go
original, err := c.clusterIngressLister.Get(name)
if apierrs.IsNotFound(err) {
	// The resource may no longer exist, in which case we stop processing.
	c.Logger.Errorf("clusteringress %q in work queue no longer exists", key)
```
Is there any particular reason not to use the logger from the context, i.e. `logger := logging.FromContext(ctx)`?
Ah, not really.
Will update this for consistency
```go
	return err
}

// Update the Status of the route. Caller is responsible for checking
```
nit: s/route/clusteringress
Still reviewing...
```go
}

// Reconcile compares the actual state with the desired, and attempts to
// converge the two. It then updates the Status block of the Service resource
```
nit: s/Service/ClusterIngress
```go
})

virtualServiceInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc: c.enqueueOwnerIngress(impl),
```
Could we use `EnqueueControllerOf` and follow the pattern? If that is feasible, then we don't need the `enqueueOwnerIngress` function here.
I had a discussion with @mattmoor about this above: #2189 (comment)
Would like to hear your opinion on the point as well :)
Ah, sorry, I didn't see your discussion when I reviewed this part.
I get your point now. Agreed that we probably need a generalized `impl.EnqueueLabelOf(key)` method; we can do it later.
Force-pushed from 6f169da to da9e3f4.
Thanks for working on this @lichuqiang ! Generally this PR looks good to me. Do you have any yaml example for ClusterIngress and its corresponding VirtualService? It would be great if we can paste an example here.
```go
}

func timeoutToString(timeout time.Duration) string {
	return fmt.Sprintf("%ds", int(timeout.Truncate(time.Second).Seconds()))
```
I am a bit worried about using `Truncate` with `Second` here. For example, if the original timeout is 10.5 seconds, it will be truncated to 10 seconds, which doesn't seem accurate enough.
I am wondering whether Istio accepts milliseconds as a timeout unit. If it does, maybe we can consider truncating the timeout to milliseconds instead.
Hmm, can we just serialize the `time.Duration`? `timeout.String()` should do it.
The type of the field in Istio is indeed `protobuf.Duration`, which is made up of `Seconds` and `Nanos`:

```go
type Duration struct {
	// Signed seconds of the span of time. Must be from -315,576,000,000
	// to +315,576,000,000 inclusive. Note: these bounds are computed from:
	// 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years
	Seconds int64 `protobuf:"varint,1,opt,name=seconds,proto3" json:"seconds,omitempty"`
	// Signed fractions of a second at nanosecond resolution of the span
	// of time. Durations less than one second are represented with a 0
	// `seconds` field and a positive or negative `nanos` field. For durations
	// of one second or more, a non-zero value for the `nanos` field must be
	// of the same sign as the `seconds` field. Must be from -999,999,999
	// to +999,999,999 inclusive.
	Nanos int32 `protobuf:"varint,2,opt,name=nanos,proto3" json:"nanos,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}
```
I have not looked into it that deeply, but I'm afraid it cannot understand a duration-formatted string that "smartly", and to support milliseconds in the timeout we might need to make the seconds a float, e.g. "1.00123s".
Aha, I underestimated the compatibility of protobuf; it makes use of the built-in time functions when unmarshaling: https://github.com/gogo/protobuf/blob/master/jsonpb/jsonpb.go#L870, so it can recognize a duration-formatted string. Let's keep the passed-in duration as is and remove the extra truncation.
done
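The change that was settled on, as a sketch: drop the whole-second truncation and serialize the `time.Duration` directly, since (per the jsonpb link above) the protobuf JSON unmarshaling can parse a Go-style duration string. The helper name follows the excerpt above; treat this as illustrative rather than the PR's final code:

```go
package resources

import "time"

// timeoutToString serializes the timeout without truncating it, so a
// value like 10.5s survives the round trip instead of becoming "10s".
func timeoutToString(timeout time.Duration) string {
	return timeout.String() // e.g. "10.5s"
}
```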
Force-pushed from 7d872e5 to de1b00f.
Since the previous comments have been addressed, I've squashed the commits that were generated while addressing them, for reading/reviewing convenience :)
/lgtm
@lichuqiang: GitHub didn't allow me to request PR reviews from the following users: for, approve. Note that only knative members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
```go
vs.Labels[networking.IngressLabelKey] = ci.Name

ingressLabels := ci.Labels
vs.Labels[serving.RouteLabelKey] = ingressLabels[serving.RouteLabelKey]
```
I meant to make the label value "namespace/name", but when modifying the route reconciler, I realized "/" is not allowed in a label value. So I think we'll need another label here for the route namespace.
done
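Putting the label discussion together, a sketch of how the VirtualService's labels end up populated: the owning ClusterIngress by name, plus the Route's name and namespace copied through as two separate labels, because "/" cannot appear in a label value. The key constants follow the register.go excerpts in this review; the helper itself, the stand-in key values, and the `routeNamespace` suffix are assumptions:

```go
package resources

const (
	// Stand-ins for the constants quoted from the register.go files above.
	IngressLabelKey        = "networking.knative.dev/clusteringress"
	RouteLabelKey          = "serving.knative.dev/route"
	RouteNamespaceLabelKey = "serving.knative.dev/routeNamespace" // suffix assumed
)

// makeVirtualServiceLabels copies the relevant labels from a ClusterIngress
// (given here simply as its name and label map) onto the VirtualService.
func makeVirtualServiceLabels(ciName string, ciLabels map[string]string) map[string]string {
	return map[string]string{
		IngressLabelKey:        ciName,
		RouteLabelKey:          ciLabels[RouteLabelKey],
		RouteNamespaceLabelKey: ciLabels[RouteNamespaceLabelKey],
	}
}
```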
/approve
Key: "foo/not-found", | ||
SkipNamespaceValidation: true, | ||
}, { | ||
Name: "create VirtualService matching ClisterIngress", |
typo: ClusterIngress
done
/assign @evankanderson who can approve changes in /pkg/apis/...
The following is the coverage report on pkg/.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: lichuqiang, tcnghia, vaikas-google, ZhiminXiang

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
```go
RouteLabelKey = GroupName + "/route"

// RouteNamespaceLabelKey is the label key attached to a ClusterIngress indicating by
```
This reads a little funny. Maybe:

> RouteNamespaceLabelKey is the label key attached to a ClusterIngress by a Route to indicate which namespace the Route was created in.
I'll address this in a follow-up PR
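For that follow-up, the register entry would then read roughly as below; the `/routeNamespace` suffix is an assumption, and only the comment wording is the point here:

```go
package serving

const (
	// GroupName is included only so this fragment stands alone.
	GroupName = "serving.knative.dev"

	// RouteNamespaceLabelKey is the label key attached to a ClusterIngress
	// by a Route to indicate which namespace the Route was created in.
	RouteNamespaceLabelKey = GroupName + "/routeNamespace"
)
```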
```go
	return existing, nil
}
existing.Status = ci.Status
// TODO: for CRD there's no updatestatus, so use normal update.
```
We should (unless we already have) make a tracking bug since CRDs now have /status:
https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/
Not blocking this PR, just jotting it down here.
I've also noticed this before, but haven't had time to look deeper.
I'm not sure whether it's just cruft left over for historical reasons, or whether there's still a bug in `UpdateStatus` for CRDs that I'm not aware of.
Will open an issue for further discussion anyway.
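For that issue, a sketch of the shape the TODO above could take once the ClusterIngress CRD declares the /status subresource and the generated client grows an UpdateStatus method. The types and client here are stand-ins so the sketch compiles on its own:

```go
package reconciler

import "reflect"

// ClusterIngress and clusterIngressStatusClient are stand-ins for the PR's
// generated types and client, just to sketch the shape of the change.
type ClusterIngress struct {
	Name   string
	Status interface{}
}

type clusterIngressStatusClient interface {
	Get(name string) (*ClusterIngress, error)
	// UpdateStatus is only available once the CRD declares /status.
	UpdateStatus(*ClusterIngress) (*ClusterIngress, error)
}

// updateStatus writes only the status block, instead of the full Update
// that the TODO above currently falls back to.
func updateStatus(client clusterIngressStatusClient, desired *ClusterIngress) (*ClusterIngress, error) {
	existing, err := client.Get(desired.Name)
	if err != nil {
		return nil, err
	}
	if reflect.DeepEqual(existing.Status, desired.Status) {
		return existing, nil // nothing to do
	}
	existing.Status = desired.Status
	return client.UpdateStatus(existing)
}
```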
Oops, it was fine to merge, but I had a couple of comments that I had failed to submit. My bad.
* update test dependency
* helper funcs to construct virtualservice
* reconcile ClusterIngress into VirtualService
* enable ingress reconciler in controller
* add route namespace label
Part of #1963
Follow-up #2151
Proposed Changes
Release Note
/area networking