
Should the Operator be NameSpaced or Cluster #54

Closed
ArangoGutierrez opened this issue Apr 8, 2021 · 10 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@ArangoGutierrez
Contributor

Per definition:

When you create a new CustomResourceDefinition (CRD), the Kubernetes API Server creates a new RESTful resource path for each version you specify. The CRD can be either namespaced or cluster-scoped, as specified in the CRD's scope field. As with existing built-in objects, deleting a namespace deletes all custom objects in that namespace. CustomResourceDefinitions themselves are non-namespaced and are available to all namespaces.

Currently the operator runs namespaced; what do we think about it being cluster-scoped?
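
For reference, a minimal sketch of where that scope choice lives, written against the apiextensions/v1 Go types; the group, kind, and version names here are only illustrative of the NFD CRD, not copied from the operator's actual manifests:

```go
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch of a CRD definition; the Scope field is the single switch between
	// namespaced and cluster-scoped custom objects.
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "nodefeaturediscoveries.nfd.kubernetes.io"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "nfd.kubernetes.io",
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Kind:   "NodeFeatureDiscovery",
				Plural: "nodefeaturediscoveries",
			},
			// NamespaceScoped: CRs live in a namespace and are deleted with it.
			// ClusterScoped: CRs are global, like Node objects.
			Scope: apiextensionsv1.NamespaceScoped,
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true},
			},
		},
	}
	fmt.Println(crd.Spec.Scope)
}
```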

@ArangoGutierrez ArangoGutierrez added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 8, 2021
@ArangoGutierrez
Contributor Author

/cc @zvonkok @marquiz @mythi

@mythi

mythi commented Apr 12, 2021

what do we think about it being Cluster-scoped?

Node objects are non-namespaced too, so I think a cluster-scoped NFD CRD would be better aligned with that.

@marquiz
Contributor

marquiz commented Apr 12, 2021

what do we think about it being Cluster-scoped?

Node objects are non-namespaced too so I think a cluster scoped NFD CRD would be better aligned with that.

Hmm, now thinking about this, I'm not really sure 🙃 Node objects sure are non-namespaced, but OTOH the operator always runs in some namespace. I'd probably stay with namespaced: it makes cleaning up easier, since deleting the namespace should make sure that no old configs haunt you in the future. Thoughts? @zvonkok?

@mythi

mythi commented Apr 12, 2021

With namespaced CRs the operator needs to watch all those namespaces to CRUD an NFD instance in each of them. What I've understood, and it was also mentioned by @zvonkok in kubernetes-sigs/node-feature-discovery#508 (comment), is that one instance of NFD per cluster is preferred. AFAIU with that it would be simpler to watch a cluster-scoped CRD. Alternatively, the operator could watch only one namespace (WATCH_NAMESPACE?), but it'd still be possible to create orphaned CRs...
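
For reference, a minimal sketch of the WATCH_NAMESPACE pattern, assuming a controller-runtime release from that era where manager.Options still had a Namespace field (later replaced by cache-level options); an empty value means the operator watches all namespaces:

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// WATCH_NAMESPACE is the conventional operator-sdk variable; empty means
	// the manager caches and watches objects cluster-wide.
	watchNamespace := os.Getenv("WATCH_NAMESPACE")

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		// Restricts the manager's cache (and hence every controller's watches)
		// to a single namespace.
		Namespace: watchNamespace,
	})
	if err != nil {
		os.Exit(1)
	}

	// Controllers reconciling the NodeFeatureDiscovery CR would be registered
	// on mgr here before calling mgr.Start(...).
	_ = mgr
}
```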

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 11, 2021
@zwpaper
Member

zwpaper commented Jul 22, 2021

As NFD supports the instance flag, we may run multiple instances of NFD, so it seems reasonable for it to be namespaced.

For example, we may let different people manage their own features, isolated by both label namespace and Kubernetes namespace, e.g. gpu-nfd for nvidia.com under the gpu-ops namespace and network-nfd for network.example.com under the network-ops namespace (roughly as in the sketch below).
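
A rough illustration of that layout using unstructured objects; the apiVersion and the spec.instance field name are assumptions made for the example, not taken from the operator's actual API:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// newNFD builds a namespaced NodeFeatureDiscovery object for one team.
// The apiVersion and spec.instance field are illustrative assumptions.
func newNFD(namespace, name, instance string) *unstructured.Unstructured {
	obj := &unstructured.Unstructured{Object: map[string]interface{}{}}
	obj.SetAPIVersion("nfd.kubernetes.io/v1")
	obj.SetKind("NodeFeatureDiscovery")
	obj.SetNamespace(namespace)
	obj.SetName(name)
	// Each NFD deployment would be kept separate via its own instance name;
	// the exact field name is an assumption here.
	_ = unstructured.SetNestedField(obj.Object, instance, "spec", "instance")
	return obj
}

func main() {
	gpu := newNFD("gpu-ops", "gpu-nfd", "gpu")
	net := newNFD("network-ops", "network-nfd", "network")
	fmt.Println(gpu.GetNamespace(), net.GetNamespace())
}
```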

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 22, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 20, 2021
@vaibhav2107
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 18, 2021
@ArangoGutierrez
Contributor Author

As per #114 this is now fully documented
/close

@k8s-ci-robot
Contributor

@ArangoGutierrez: Closing this issue.

In response to this:

As per #114 this is now fully documented
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
