Conversation
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.oam.dev/aggregate-to-controller: "true"
I elected to use cluster role aggregation here so that it would be easy for users to extend the privileges of the OAM controller. I was thinking this could help with supporting new non-core kinds of workloads and traits; e.g. if those kinds were created by the core AppConfig controller, but reconciled by controllers running as distinct deployments in the cluster.
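For context, here's a minimal sketch of how that aggregated role is consumed; the resource and ServiceAccount names below are placeholders, not necessarily what the chart generates. The controller's ServiceAccount is bound to a single ClusterRole carrying the aggregationRule above, and Kubernetes automatically folds in the rules of any ClusterRole labelled `rbac.oam.dev/aggregate-to-controller: "true"`.

```yaml
# Illustrative sketch only - names are placeholders, not the chart's actual output.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oam-kubernetes-runtime
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: oam-kubernetes-runtime  # the ClusterRole with the aggregationRule shown above
subjects:
- kind: ServiceAccount
  name: oam-kubernetes-runtime
  namespace: oam-system
```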
This commit updates the Helm chart to avoid running as cluster-admin. Instead, the controller runs only with the privileges it needs 'out of the box'; i.e. to manage all core OAM types, as well as deployments and services. The commit also includes a few small chart hygiene fixes; i.e. ensuring that names will not collide when multiple releases exist in the same cluster, and that all resources include the standard labels. Signed-off-by: Nic Cope <negz@rk0n.org>
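As a rough illustration of the hygiene fixes (not the chart's exact templates; the helper names below are assumed), release-scoped names and the standard labels typically look something like:

```yaml
# Hypothetical template snippet; the real chart's helper names may differ.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "oam-kubernetes-runtime.fullname" . }}  # unique per release
  labels:
    helm.sh/chart: {{ include "oam-kubernetes-runtime.chart" . }}
    app.kubernetes.io/name: {{ include "oam-kubernetes-runtime.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
```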
This ensures the Helm chart grants the required permissions for the e2e tests to pass (except for the custom example.com types used by the test). Signed-off-by: Nic Cope <negz@rk0n.org>
I believe this is good to go - I've tested it by updating the e2e tests to run with only the privileges the Helm chart grants them (plus an additional aggregate role that grants access to the custom example.com types used by the test).
Thanks @negz! This PR is good, but I wonder whether it would become harder for a user to register a new CRD to work as an OAM trait/workload. Currently, only a WorkloadDefinition/TraitDefinition is needed. Can you give an example workflow if we add this change?
The process is identical, except that the person who authors the CRD and the WorkloadDefinition (or ScopeDefinition) must also author a ClusterRole that grants the OAM controller access to that type, so that the AppConfig controller can create, update, and delete it as necessary. This seems like a critical best practice to me; we should not run our controllers as cluster-admin.

The example in the e2e tests may be a good illustration of this. We create the CRD and WorkloadDefinition for the example.com types used by the test, along with a ClusterRole like:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: foo.example.com
  labels:
    rbac.oam.dev/aggregate-to-controller: "true"
rules:
- apiGroups:
  - example.com
  resources:
  - foo
  verbs:
  - "*"

https://crossplane.slack.com/archives/CTHADJCEN/p1600977951000500

This approach was discussed in the above Slack conversation. There may be further automation we can do in future, like Crossplane's RBAC manager, but for now I feel it's a good compromise to allow us to feel comfortable including oam-kubernetes-runtime with Crossplane v0.13 later this week.
Thank you for working on this
Thanks @negz
Fixes #218
This commit updates the Helm chart to avoid running as cluster-admin. Instead, the controller runs only with the privileges it needs 'out of the box'; i.e. to manage all core OAM types, as well as deployments and services.
The commit also includes a few small chart hygiene fixes; i.e. ensuring that names will not collide when multiple releases exist in the same cluster, and that all resources include the standard labels.