Breaking up an application monolith with NSM #1
Comments
Based on the description of the issue, I think we have the following steps:
@edwarnicke I think for DNS we can do the setup as we do for interdomain.
I feel it's not a chain element. It's an implementation detail of cmd-nse-simple-vl3-docker and should be located in the cmd-nse-simple-vl3-docker repository.
@glazychev-art This is about right. I'd suggest starting with step 3 (create a chain). @denis-tingaikin is right, we probably have all the chain elements we need already... it should mostly be a matter of composition. You are correct to call out that we will need to work out the spire bits. I suspect we can get something working from having the chain/cmd plus figuring out the spire bits. From there, adding DNS will be a layering-on issue, which I suspect @denis-tingaikin has some solution to for interdomain. But let's get basic vL3 going before we worry about the DNS parts.
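Since this is claimed to be mostly a matter of composition, here is a minimal sketch of composing the vL3 endpoint from existing sdk elements. The element set and placement are assumptions for illustration, not a confirmed design:

```go
// Hypothetical composition sketch for the vL3 NSE, built from existing
// sdk chain elements rather than new ones. The element set shown here
// is illustrative, not a confirmed design.
package main

import (
	"net"

	"github.com/networkservicemesh/api/pkg/api/networkservice"
	"github.com/networkservicemesh/sdk/pkg/networkservice/core/chain"
	"github.com/networkservicemesh/sdk/pkg/networkservice/ipam/point2pointipam"
)

func newVL3Endpoint(prefix *net.IPNet) networkservice.NetworkServiceServer {
	return chain.NewNetworkServiceServer(
		// IPAM over a single configured prefix.
		point2pointipam.NewServer(prefix),
		// ... mechanism and connection-context elements covering the
		// forwarder-like parts would be composed in here ...
	)
}
```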
@edwarnicke In this issue we are talking about kernel interfaces.
@glazychev-art We'd still use vpp in the container; we need it to run the vrf, handle vWires, etc.
@glazychev-art You'd just use the mechanisms for kernel and wireguard instead of memif.
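If the sdk's existing mechanisms element is reused for that, the server side might look roughly like this; kernelServer and wireguardServer are hypothetical stand-ins for whatever elements actually implement those mechanisms:

```go
// Sketch: select per-mechanism behavior on the server side.
// kernelServer and wireguardServer are hypothetical stand-ins.
package main

import (
	"github.com/networkservicemesh/api/pkg/api/networkservice"
	kernelmech "github.com/networkservicemesh/api/pkg/api/networkservice/mechanisms/kernel"
	wireguardmech "github.com/networkservicemesh/api/pkg/api/networkservice/mechanisms/wireguard"
	"github.com/networkservicemesh/sdk/pkg/networkservice/common/mechanisms"
)

func newMechanismsServer(kernelServer, wireguardServer networkservice.NetworkServiceServer) networkservice.NetworkServiceServer {
	// memif is deliberately absent: on a bare server we only expect
	// kernel (local) and wireguard (remote) mechanisms.
	return mechanisms.NewServer(map[string]networkservice.NetworkServiceServer{
		kernelmech.MECHANISM:    kernelServer,
		wireguardmech.MECHANISM: wireguardServer,
	})
}
```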
@edwarnicke Only the kernel side has an IP address. So we also need to create a vrf and a loopback on the kernel side, and we need new chain elements for that. Do I understand correctly?
@glazychev-art You are correct as far as the picture goes :) I would suggest your simplest way to do this is to build your NSE/Forwarder to accept on the server side either wg or kernel. Then have the cmd-nse-simple-vl3-docker incorporate into itself a simple client that requests its own service with MechanismPreference kernel. That way you get what you would expect: an IP address on the kernel interface on the monolith server, and an IP address on the interface injected into the K8s Pod (not shown in your picture, but understood to work normally). This makes the vL3 pretty much in line with what we have now: loopback for its 'gateway' IP address and unnumbered for others :)
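A sketch of the request such a built-in client might make, with a kernel mechanism preference; the service name here is made up:

```go
// Sketch: the NSE's built-in client requesting its own vL3 service
// with a kernel mechanism preference. The service name is illustrative.
package main

import (
	"github.com/networkservicemesh/api/pkg/api/networkservice"
	"github.com/networkservicemesh/api/pkg/api/networkservice/mechanisms/cls"
	kernelmech "github.com/networkservicemesh/api/pkg/api/networkservice/mechanisms/kernel"
)

func newKernelRequest() *networkservice.NetworkServiceRequest {
	return &networkservice.NetworkServiceRequest{
		Connection: &networkservice.Connection{
			NetworkService: "vl3", // hypothetical service name
		},
		MechanismPreferences: []*networkservice.Mechanism{{
			Cls:  cls.LOCAL,
			Type: kernelmech.MECHANISM,
		}},
	}
}
```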
@glazychev-art Looking at your diagram more... I'd actually put creating loop, vrf, un numbered in the 'server part'. |
Basically... unlike a forwarder, which 'cross connects'... you just need to have a server which takes 'connections' and can do its own kernel interface mechanics to connect to the VRF.
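For illustration, the VPP-side plumbing that 'server part' owns might amount to something like the following, shown as vppctl commands wrapped in Go for readability (a real chain element would use the govpp binary API; the table id and interface names are made up):

```go
// Sketch: loopback + VRF + unnumbered plumbing on the VPP side,
// expressed as vppctl commands for readability. The table id and
// interface names (loop0, tun0) are assumptions.
package main

import (
	"fmt"
	"os/exec"
)

func plumbVRF() error {
	cmds := [][]string{
		{"create", "loopback", "interface"},                      // 'gateway' loopback (loop0)
		{"ip", "table", "add", "1"},                              // the vL3 VRF
		{"set", "interface", "ip", "table", "loop0", "1"},        // loopback into the VRF
		{"set", "interface", "ip", "address", "loop0", "172.16.0.1/24"},
		{"set", "interface", "state", "loop0", "up"},
		// each accepted connection's interface goes into the VRF and
		// is made unnumbered against the loopback:
		{"set", "interface", "ip", "table", "tun0", "1"},
		{"set", "interface", "unnumbered", "tun0", "use", "loop0"},
	}
	for _, c := range cmds {
		if out, err := exec.Command("vppctl", c...).CombinedOutput(); err != nil {
			return fmt.Errorf("vppctl %v: %v: %s", c, err, out)
		}
	}
	return nil
}
```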
Closed with #4 and networkservicemesh/integration-k8s-kind#669 |
There are many existing applications in the world that consist of a collection of monolith servers, each running multiple services, with each user sharded to exactly one monolith server.
It is desirable to be able to break such monoliths up into cloud native apps progressively, pulling services out into Pods bit by bit.
The problem then becomes how to establish communications between these services that have been pulled into Pods and those services that remain on the monolith. NSM offers an ideal way to do so.
By creating a docker-based combination of vL3/forwarder/nsc that runs on the monolith server, and registering it with a floating network service domain, we allow Pods running services to connect to it using NSM and communicate with one another.
Because the cmd-nse-simple-vl3-docker runs on a bare server, it cannot rely on a forwarder and must provide that functionality itself. It's probably best to think of it as a hybrid of: a vL3 NSE that takes its IPAM from a simple env variable containing the prefix for IPAM (point2pointipam); the parts of a forwarder that would handle wireguard and kernel interfaces; and a simple built-in client to 'request' a local kernel interface on the server.
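A sketch of the env-driven IPAM piece; the variable name NSM_PREFIX is an assumption:

```go
// Sketch: take the IPAM prefix from an environment variable and feed
// it to point2pointipam. The variable name NSM_PREFIX is an assumption.
package main

import (
	"log"
	"net"
	"os"

	"github.com/networkservicemesh/api/pkg/api/networkservice"
	"github.com/networkservicemesh/sdk/pkg/networkservice/ipam/point2pointipam"
)

func ipamFromEnv() networkservice.NetworkServiceServer {
	_, prefix, err := net.ParseCIDR(os.Getenv("NSM_PREFIX")) // e.g. "172.16.0.0/16"
	if err != nil {
		log.Fatalf("invalid NSM_PREFIX: %v", err)
	}
	return point2pointipam.NewServer(prefix)
}
```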
The cmd-nse-simple-vl3-docker would need to run as a docker container (not a Pod) in the host netns, in privileged mode.
It would need to register itself with the floating network service domain. Once registered, it could then accept incoming wireguard vWires from Pods running in whatever K8s cluster they are running in.
Since there will be exactly one kernel interface local to the monolith server attached to the vL3, it will behave like a 'local to the monolith server' vL3.