
Extract firewall management into separate controller #403

Merged
merged 1 commit into from
Jul 24, 2018

Conversation

rramkumar1
Contributor

@rramkumar1 rramkumar1 commented Jul 17, 2018

This PR attempts to move the management of the L7 firewall rule into a separate controller.

Summary of changes:

  1. Add controller.go to pkg/firewalls. This new controller also creates and interacts with the firewall pool, rather than the ClusterManager. Currently, the controller watches ingresses and services and updates the firewall rule as necessary. In the future, we can extend this to support the firewall rules for expose NEG.

  2. Remove all firewall related stuff from ClusterManager and pkg/controller/controller.go

  3. Move IsHealthy from ClusterManager to the controller the health check is meant for. It really has no place in ClusterManager.

This leads well into a followup PR which can remove the ClusterManager entirely. Open to discussion on some of the design decisions in the PR. /assign @nicksardo

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Jul 17, 2018
@rramkumar1
Contributor Author

/assign @nicksardo

@rramkumar1
Contributor Author

Note: This does not compile because of an import cycle. I'm fixing that now, but it should not affect review.

@rramkumar1 rramkumar1 force-pushed the global-state-manager branch from af1b4d5 to bbbf955 Compare July 17, 2018 16:40
@@ -193,6 +200,9 @@ func runControllers(ctx *context.ControllerContext) {
ctx.Start(stopCh)
lbc.Run()
Contributor

I believe this blocks...

Contributor Author

@rramkumar1 rramkumar1 Jul 17, 2018

Oops..moved start of fwc above lbc.Run()

@rramkumar1 rramkumar1 force-pushed the global-state-manager branch from bbbf955 to b9d5a35 Compare July 17, 2018 17:05
// IngressesForObject gets Ingresses that are associated with the
// passed in object. It is a wrapper around functions which do the actual
// work of getting Ingresses based on a typed object.
func (j *Joiner) IngressesForObject(obj interface{}) []*extensions.Ingress {
Contributor

I'd suggest ditching the wrapper and exposing the ...ForService and ...ForBackendConfig funcs.

Contributor Author

Ditched the wrapper

// NewJoiner returns a Joiner.
func NewJoiner(
ingressInformer cache.SharedIndexInformer,
svcInformer cache.SharedIndexInformer,
Contributor

Suggest passing in the listers instead of the full informers.

Contributor Author

Done

svcInformer cache.SharedIndexInformer,
defaultBackendSvcPortID ServicePortID) *Joiner {
ingLister := StoreToIngressLister{Store: ingressInformer.GetStore()}
return &Joiner{ingLister, svcInformer, defaultBackendSvcPortID}
Contributor

Actually, I'm not seeing a strong reason why Joiner needs to exist as a struct. Seems more flexible to make the last two funcs static and pass in the appropriate lister. Then a consuming controller isn't forced to provide all listers even if it only uses one.

Contributor Author

After trying the alternative, I think it's much easier to read the code when Joiner exists as a struct. The reason is that initialization of the struct takes care of storing everything needed (listers, default backend service port ID), so the function calls stay nice and compact. The alternative is that each function call becomes bloated with all the necessary arguments.

Contributor Author

To alleviate the problem of sometimes passing in stuff you don't need, maybe the Joiner can just take a ControllerContext? That contains everything it needs.
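For illustration, the trade-off discussed above can be sketched like this. All names here are hypothetical, heavily trimmed stand-ins (not the PR's real signatures): the point is only that constructing a Joiner once captures the shared dependencies, so each call site passes just the object of interest.

```go
package main

import "fmt"

// ingressLister is a hypothetical stand-in for the real ingress lister.
type ingressLister struct{ items []string }

// joiner stores shared dependencies (lister, default backend service
// port ID) once at construction time.
type joiner struct {
	ingLister               *ingressLister
	defaultBackendSvcPortID string
}

// ingressesForService needs only the object of interest; everything
// else was captured when the joiner was built. Real matching logic is
// omitted; this sketch returns every known ingress.
func (j *joiner) ingressesForService(svc string) []string {
	return append([]string(nil), j.ingLister.items...)
}

func main() {
	j := &joiner{
		ingLister:               &ingressLister{items: []string{"default/ing-1"}},
		defaultBackendSvcPortID: "default/backend:80",
	}
	fmt.Println(j.ingressesForService("default/svc"))
}
```

The alternative (free functions) would thread the lister and port ID through every call, which is what the comment above calls bloated.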

@@ -29,6 +29,7 @@ import (
type TaskQueue interface {
Run(period time.Duration, stopCh <-chan struct{})
Enqueue(obj interface{})
EnqueueAll(objs []interface{})
Contributor

Variadic func?

Enqueue(obj ...interface{})

Contributor Author

Done
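The suggested variadic signature lets callers enqueue one object or an expanded slice through the same method. A minimal sketch, using a hypothetical taskQueue type rather than the real TaskQueue implementation:

```go
package main

import "fmt"

// taskQueue is a hypothetical minimal stand-in for the controller's
// work queue, shown only to illustrate the variadic Enqueue signature.
type taskQueue struct {
	keys []string
}

// Enqueue accepts any number of objects, so callers can pass a single
// object or expand a whole slice with the ... operator.
func (q *taskQueue) Enqueue(objs ...interface{}) {
	for _, obj := range objs {
		q.keys = append(q.keys, fmt.Sprintf("%v", obj))
	}
}

func main() {
	q := &taskQueue{}
	q.Enqueue("ns/ing-1")                               // single object
	q.Enqueue([]interface{}{"ns/ing-2", "ns/ing-3"}...) // expanded slice
	fmt.Println(q.keys)                                 // prints [ns/ing-1 ns/ing-2 ns/ing-3]
}
```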

ctx.ServiceInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
ings := joiner.IngressesForObject(obj)
lbc.ingQueue.EnqueueAll(ings)
Contributor

fwc.queue

Contributor Author

Done

@rramkumar1 rramkumar1 force-pushed the global-state-manager branch 3 times, most recently from 0b8359f to 63bd9a7 Compare July 18, 2018 21:48
@rramkumar1
Contributor Author

Realized there are some things I haven't thought about that I need to add to my implementation. Marking as "WIP" for now.

@rramkumar1 rramkumar1 changed the title Extract firewall management into separate controller Extract firewall management into separate controller [WIP Jul 18, 2018
@rramkumar1 rramkumar1 changed the title Extract firewall management into separate controller [WIP Extract firewall management into separate controller [WIP] Jul 18, 2018
@rramkumar1 rramkumar1 force-pushed the global-state-manager branch from 63bd9a7 to d52021e Compare July 19, 2018 15:09
@rramkumar1 rramkumar1 changed the title Extract firewall management into separate controller [WIP] Extract firewall management into separate controller Jul 19, 2018
@rramkumar1 rramkumar1 force-pushed the global-state-manager branch 7 times, most recently from 2dcca64 to dc8f7bd Compare July 19, 2018 16:58
func (fwc *FirewallController) Run(stopCh chan struct{}) {
defer fwc.shutdown()
go fwc.queue.Run(time.Second, stopCh)
<-stopCh
Contributor

If Run(...) is supposed to block, is there any need to start the fwc.queue.Run(...) in a goroutine? Also, fwc.shutdown() calls firewallPool.Shutdown() which deletes the firewall. That should only be called when we have no ingresses/NEGs.

func (fwc *FirewallController) Run(stopCh chan struct{}) {
	fwc.queue.Run(time.Second, stopCh)

Contributor Author

Removed firewallPool.Shutdown from the fwc.shutdown() call.

Contributor Author

Now I think the behavior for all these shutdowns is correct. We only call shutdown on the firewall pool if there are no ingresses, and we only shut down the queue when the process itself is terminated.

<-stopCh
}

func (fwc *FirewallController) shutdown() error {
Contributor

Add a comment that this should only be called when we no longer need a firewall rule.

Contributor Author

So actually, shutdown() on the controller should only consist of shutting down the queue. Also, we should only call shutdown() on the controller when the overall process is being terminated, right?
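The agreed behavior can be sketched as follows. This is a hedged, minimal sketch with hypothetical stand-in types (queue, firewallController), not the PR's actual code: Run blocks directly on the queue as suggested above, so no extra goroutine or trailing stop-channel read is needed, and stopping the controller never deletes the firewall rule.

```go
package main

import (
	"fmt"
	"time"
)

// queue is a hypothetical stand-in for the controller's work queue.
type queue struct{ done chan struct{} }

// Run blocks until stopCh is closed, then signals done.
func (q *queue) Run(period time.Duration, stopCh <-chan struct{}) {
	<-stopCh
	close(q.done)
}

type firewallController struct{ queue *queue }

// Run blocks on the queue directly, as the review comment suggested.
// Note: stopping here does NOT delete the firewall rule; that cleanup
// only happens elsewhere, when no ingresses remain.
func (fwc *firewallController) Run(stopCh chan struct{}) {
	fwc.queue.Run(time.Second, stopCh)
}

func main() {
	stopCh := make(chan struct{})
	fwc := &firewallController{queue: &queue{done: make(chan struct{})}}
	go func() { close(stopCh) }() // simulate process termination
	fwc.Run(stopCh)               // blocks until stopCh closes
	<-fwc.queue.done
	fmt.Println("controller stopped")
}
```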

}

// IngressesForBackendConfig gets all Ingresses that reference (indirectly) a BackendConfig.
func (j *Joiner) IngressesForBackendConfig(beConfig *backendconfigv1beta1.BackendConfig) (ingList []*extensions.Ingress) {
Contributor

Maybe add a low priority TODO note for optimizing the joining here.

Contributor Author

Done

@rramkumar1 rramkumar1 force-pushed the global-state-manager branch 2 times, most recently from 01eec3f to d17223f Compare July 20, 2018 22:00
@nicksardo
Contributor

/lgtm
Needs testing and then someone else can merge.

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 20, 2018
@rramkumar1 rramkumar1 force-pushed the global-state-manager branch from d17223f to ebaf600 Compare July 24, 2018 18:08
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 24, 2018
@rramkumar1
Contributor Author

Made some small changes:

  1. Added a util function to convert a list of Ingresses into a list of interface{}. This ensures we can use the now-variadic Enqueue function.
  2. Made queueKey a pointer.

I tested this end-to-end and it works, so it should be good to merge.
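For context, such a conversion util is needed because Go will not pass a slice of a concrete type directly where ...interface{} is expected; each element must be copied into a []interface{}. A sketch under assumed names (ingress and toInterfaces are hypothetical, not the PR's real util):

```go
package main

import "fmt"

// ingress is a hypothetical placeholder for *extensions.Ingress.
type ingress struct{ name string }

// toInterfaces copies each element into a []interface{} so the result
// can be expanded into a variadic ...interface{} parameter. A plain
// []*ingress cannot be passed there directly.
func toInterfaces(ings []*ingress) []interface{} {
	out := make([]interface{}, 0, len(ings))
	for _, ing := range ings {
		out = append(out, ing)
	}
	return out
}

func main() {
	ings := []*ingress{{name: "a"}, {name: "b"}}
	objs := toInterfaces(ings)
	fmt.Println(len(objs)) // prints 2
}
```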

@rramkumar1
Contributor Author

/assign @MrHohn

@MrHohn
Member

MrHohn commented Jul 24, 2018

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 24, 2018
@MrHohn MrHohn merged commit b70b7a2 into kubernetes:master Jul 24, 2018