
Breaking up an application monolith with NSM #1

Closed
edwarnicke opened this issue Apr 5, 2022 · 12 comments
@edwarnicke (Member)

There are many existing applications in the world that consist of a collection of monolith servers, each running multiple services, with each user sharded to exactly one monolith server:

[Diagram: Breaking up the Monolith with NSM - Page 1]

It is desirable to be able to break such monoliths up into cloud native apps progressively, pulling services out into Pods bit by bit:

[Diagram: Breaking up the Monolith with NSM - Page 2]

The problem then becomes how to establish communication between the services that have been pulled into Pods and the services that remain on the monolith. NSM offers an ideal way to do so.

By creating a Docker-based combination of vL3/forwarder/nsc to run on the monolith server and registering it with a floating network service domain, we make it possible for Pods running extracted services to connect to it using NSM and communicate with one another:

[Diagram: Breaking up the Monolith with NSM - Page 3]

Because cmd-nse-simple-vl3-docker runs on a bare server, it cannot rely on a forwarder and must provide that functionality itself. It's probably best to think of it as a hybrid of: a vL3 NSE that takes its IPAM prefix from a simple env variable (point2pointipam), the parts of a forwarder that would handle wireguard and kernel interfaces, and a simple built-in client to 'request' a local kernel interface on the server.

The cmd-nse-simple-vl3-docker would need to be run as a docker container (not a Pod) in the host netns in privileged mode.
It would need to:

  1. Opportunistically bind with AF_PACKET to the host interface, in a manner similar to cmd-forwarder-vpp.
  2. Support the local mechanism kernel and the remote mechanism wireguard. Connections for either one get plugged into the vL3 VRF.
  3. On startup, make its own 'local' request for a kernel interface on the host.
  4. Advertise a named network service that semantically provides a very simple vL3. That vL3 can use simple point-to-point IPAM (point2pointipam) with a prefix it receives from an environment variable (see the sketch after this list).
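
As a very rough illustration of item 4 (a sketch only, assuming the sdk's point2pointipam package; the NSM_IPAM_PREFIX variable name is made up for this example), the env-driven IPAM might look like:

```go
package main

import (
	"log"
	"net"
	"os"

	"github.com/networkservicemesh/sdk/pkg/networkservice/ipam/point2pointipam"
)

func main() {
	// Parse the vL3 prefix from the environment (illustrative variable name).
	_, prefix, err := net.ParseCIDR(os.Getenv("NSM_IPAM_PREFIX"))
	if err != nil {
		log.Fatalf("invalid NSM_IPAM_PREFIX: %v", err)
	}

	// point2pointipam hands out point-to-point address pairs from the prefix
	// for each incoming Request; this server would be composed into the
	// endpoint chain together with the kernel and wireguard mechanism elements.
	ipamServer := point2pointipam.NewServer(prefix)
	_ = ipamServer
}
```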

Once registered, it could then accept incoming wireguard vWires from Pods in whatever K8s cluster they happen to be running in.

Since there will be exactly one kernel interface local to the monolith server attached to the vL3, it will behave like a 'local to the monolith server' vL3.

@glazychev-art (Collaborator)

Based on the description of the issue, I think we have the following steps:

  1. Spire federation for k8s-cluster and docker container
  2. DNS - k8s and docker container should know about each other
  3. Create a chain - it would be a combination of nsmgr/forwarder/vl3/nsc
  4. Register docker container in the floating registry
  5. Prepare a docker compose yaml (see the sketch after this list)
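
For step 5, a minimal compose sketch might be the following; the service name, image tag, and environment variable are all illustrative, while network_mode: host and privileged: true match the requirements from the issue description:

```yaml
# Hypothetical docker-compose sketch; names and the prefix are illustrative.
services:
  cmd-nse-simple-vl3-docker:
    image: cmd-nse-simple-vl3-docker:latest
    network_mode: host   # run in the host netns, as the issue requires
    privileged: true     # needed for AF_PACKET and kernel interface mechanics
    environment:
      - NSM_IPAM_PREFIX=172.16.0.0/16   # vL3 IPAM prefix (illustrative)
```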

@edwarnicke
Is that right? Do you think there is anything else?

@denis-tingaikin (Member)

I think for

  2. DNS - k8s and docker container should know about each other

we can set up DNS the same way we do for interdomain.

@denis-tingaikin (Member), Apr 18, 2022

  3. Create a chain - it would be a combination of nsmgr/forwarder/vl3/nsc

I feel it's not a chain element. It's the implementation of cmd-nse-simple-vl3-docker and should be located in the cmd-nse-simple-vl3-docker repository.

@edwarnicke (Member, Author)

@glazychev-art This is about right. I'd suggest starting with step 3 (create a chain). @denis-tingaikin is right; we probably have all the chain elements we need already... it should mostly be a matter of composition.

You are correct to call out that we will need to work out the spire bits. I suspect we can get something working by building the chain/cmd and then figuring out the spire bits. From there, adding DNS will be a layering-on issue, for which I suspect @denis-tingaikin has some solution from interdomain. But let's get the basic vL3 going before we worry about the DNS parts.

@glazychev-art (Collaborator)

@edwarnicke
One additional thing: our current cmd-nse-vl3-vpp is based entirely on VPP and uses memif interfaces.
Therefore the loopback, unnumbered, and vrf chain elements are created only for VPP.

In this issue we are talking about kernel interfaces.
Question: do I understand correctly that we should create similar chain elements (loop, vrf, unnumbered), but for the sdk-kernel?

@edwarnicke (Member, Author)

@glazychev-art We'd still use vpp in the container; we need it to run the vrf, handle vWires, etc.

@edwarnicke (Member, Author)

@glazychev-art You'd just use the mechanisms for kernel and wireguard instead of memif

@glazychev-art (Collaborator)

@edwarnicke
Yes, we will use vpp in the container. I think it looks like this:
[Screenshot: proposed architecture diagram, 2022-04-19]

Only the kernel side has an IP address. So we also need to create the vrf and loop on the kernel side, and we need new chain elements for the sdk-kernel.

Do I understand correctly?

@edwarnicke (Member, Author)

@glazychev-art You are correct as far as the picture goes :)

I would suggest that the simplest way to do this is to build your NSE/forwarder to accept either wg or kernel on the server side.

Then have the cmd-nse-simple-vl3-docker incorporate into itself a simple client that requests its own service with MechanismPreference kernel. That way you get what you would expect: an IP address on the kernel interface on the monolith server, and an IP address on the interface injected into the K8s Pod (not shown in your picture, but understood to work normally). This makes the vL3 pretty much in line with what we have now: loopback for its 'gateway' IP address and unnumbered for others :)
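
A rough sketch of what that built-in client's request could look like, using the mechanism types from the networkservicemesh/api packages (the network service name here is illustrative, and sending the request through a real client chain is omitted):

```go
package main

import (
	"github.com/networkservicemesh/api/pkg/api/networkservice"
	"github.com/networkservicemesh/api/pkg/api/networkservice/mechanisms/cls"
	"github.com/networkservicemesh/api/pkg/api/networkservice/mechanisms/kernel"
)

// newLocalKernelRequest builds the request the built-in client would send to
// its own endpoint, preferring a local kernel interface so the monolith
// server ends up with an addressed kernel interface attached to the vL3 VRF.
func newLocalKernelRequest() *networkservice.NetworkServiceRequest {
	return &networkservice.NetworkServiceRequest{
		Connection: &networkservice.Connection{
			NetworkService: "simple-vl3", // illustrative service name
		},
		MechanismPreferences: []*networkservice.Mechanism{
			{Cls: cls.LOCAL, Type: kernel.MECHANISM},
		},
	}
}

func main() {
	_ = newLocalKernelRequest()
}
```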

@edwarnicke (Member, Author)

@glazychev-art Looking at your diagram more... I'd actually put the creation of loop, vrf, and unnumbered in the 'server part'.

@edwarnicke (Member, Author)

Basically... unlike a forwarder, which 'cross connects'... you just need to have a server which takes 'connections' and can do its own kernel interface mechanics to connect them to the VRF.

@glazychev-art (Collaborator)

Closed with #4 and networkservicemesh/integration-k8s-kind#669
