Feature Description
When running in Kubernetes, Consul currently has very limited options for securing north-south traffic: you can either use the recently released Consul Ingress controller (which does not actually support Kubernetes Ingress objects) or deploy Ambassador and have Consul integrate with that.
This feature requests that Consul provide some other mechanism that allows it to "whitelist" north-south traffic originating from a third-party ingress controller.
The UX for this should be reasonably smooth: ideally it would require nothing more than additional annotations on the Ingress object in question, and it should not interfere with the base functionality of the Ingress controller class being used. That constraint is key, since that functionality is the whole reason we picked this Ingress controller over Ambassador/Consul in the first place.
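To make the requested UX concrete, here is a purely hypothetical sketch; the `consul.hashicorp.com/*` annotations below do not exist today and are only meant to illustrate the level of friction being asked for:

```yaml
# Hypothetical sketch only: the consul.hashicorp.com annotations below are
# invented for illustration and are not an existing Consul or consul-k8s API.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: "gce"              # existing controller behaviour stays untouched
    consul.hashicorp.com/connect-ingress: "true"    # hypothetical: trust north-south traffic from this ingress
    consul.hashicorp.com/connect-service: "my-app"  # hypothetical: the Connect service it fronts
spec:
  backend:
    serviceName: my-app
    servicePort: 80
```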
Use Case(s)
The primary use case here is built around users who have already deployed a working Ingress controller and want to use Consul as their service mesh to secure east-west traffic:
As a developer, I want to be able to annotate my existing Ingress objects so that they integrate natively with the Consul intentions concept, allowing me to encrypt my north-south traffic end-to-end with a strong cryptographic identity.
The key implementation detail is that this can't interfere with the way the existing Ingress object works, other than potentially adding relevant annotations to the Ingress object (similar to how you would annotate a Deployment).
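On the Consul side, such an annotation could plausibly translate into an intention that allows the ingress to reach the destination service. As a rough sketch only, assuming the ingress controller were registered in Consul under a name such as `gce-ingress` (hypothetical) and a consul-k8s version that supports the `ServiceIntentions` CRD:

```yaml
# Sketch, not a working integration: assumes the ingress is registered in
# Consul as "gce-ingress" and that the ServiceIntentions CRD is available.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: ingress-to-my-app
spec:
  destination:
    name: my-app            # the Connect-enabled service behind the Ingress
  sources:
    - name: gce-ingress     # hypothetical identity for the third-party ingress
      action: allow
```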
Example Problem Spec
The specific version of this problem that I'm facing right now is getting Consul to work with the GCE Ingress controller and the tight integration it has with Google Cloud itself; see this issue.
Making use of Google Cloud NEGs (Network Endpoint Groups) means we get a much more scalable and reliable backend network architecture, with the load balancer routing directly to the pods rather than traversing the Kubernetes networking layer.
Google can do this by taking advantage of the VPC-native architecture of our clusters and building its own understanding of the services in question by looking at the Service manifests and the readiness gates on the pods. Consul Connect, however, changes those pods in a way Google is not aware of: it injects sidecars, adding a new entry point into the pod (the high port number exposed by Envoy). Google can be told about this change, albeit manually, and reconfigured to use that port instead. It can even be configured with the understanding that the port is HTTPS-enabled and that it should use that scheme to connect. I actually got a hacked-together implementation of this working.
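For reference, the manual hack was roughly along these lines; the names are illustrative, and port 20000 is only an assumption for the injected Envoy sidecar's inbound listener, so check what your Connect injector actually allocates:

```yaml
# Approximation of the manual workaround described above: the load balancer is
# pointed at the Envoy sidecar port via a NEG-backed Service and told that the
# backend speaks HTTPS. Port 20000 is an assumption; verify your injector's allocation.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    cloud.google.com/neg: '{"ingress": true}'                     # route the load balancer directly to pod NEGs
    cloud.google.com/app-protocols: '{"envoy-inbound": "HTTPS"}'  # connect to the backend over TLS
spec:
  selector:
    app: my-app
  ports:
    - name: envoy-inbound
      port: 443
      targetPort: 20000   # assumed inbound listener of the injected Envoy sidecar
```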
The first issue is that Consul does not trust traffic originating from the Google load balancer, and thus rejects it. The second issue is that the Envoy sidecar ports are allocated from a range, so I had to work out which of those ports was actually open before manually hacking Google to use that one.