GPU Resource Naming #2255

Open
Tracked by #278
zvonkok opened this issue Jan 22, 2025 · 10 comments

zvonkok (Member) commented Jan 22, 2025

For the Kata bare-metal (BM) use-case, we have VFIO devices advertised as nvidia.com/pgpu: 1. We cannot use nvidia.com/gpu: 1 for the peer-pods use-case, since that name is reserved for GPUs used with traditional container runtimes and would clash in a cluster that mixes nodes running GPUs without Kata/peer-pods and nodes running with Kata/peer-pods.

We need to come up with a new naming scheme that we use for peer-pods.

In the bare-metal use-case we also expose, e.g., the SKU name in the cluster: nvidia.com/GH100_H800: 8.
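
For reference, these extended resources show up in the node status on such a bare-metal node. A small, illustrative fragment of kubectl get node -o yaml, reusing the SKU-name example above:

status:
  capacity:
    nvidia.com/GH100_H800: "8"
  allocatable:
    nvidia.com/GH100_H800: "8"
  # nvidia.com/pgpu would be advertised the same way on a VFIO/Kata BM node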

zvonkok (Member, Author) commented Jan 22, 2025

Since an admin has a curated list of instance types they want to expose, and peer-pods are heavily tied to the instance type, we could expose:

nvidia.com/<instance-type-a>-gpu: 1
nvidia.com/<instance-type-b>-gpu: 1

If we do not care about the GPU type and just need any instance type, we need a common name; peer-pods could then allocate any GPU instance.

CSP GPU == cgpu ?

nvidia.com/cgpu: 1

This way we would have distinct names for each case (a sketch of a pod spec using these names follows below):

Traditional container: nvidia.com/gpu: 1, or if we need a specific type: nvidia.com/mig-1g.10gb.count: 1
Baremetal Kata: nvidia.com/pgpu: 1, or if we need a specific type: nvidia.com/GH100_H800: 1
PeerPods Kata: nvidia.com/cgpu: 1, or if we need a specific type: nvidia.com/<instance-type>: 1
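
To make the proposal concrete, here is a minimal sketch of what a peer-pods pod could request under the proposed scheme; the nvidia.com/cgpu name and the kata-remote runtime class come from this thread, the rest of the manifest is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd-peer-pod          # hypothetical name
spec:
  runtimeClassName: kata-remote
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"
    resources:
      limits:
        # "any GPU instance" request under the proposed common name
        "nvidia.com/cgpu": 1
        # or, to pin a specific instance type (placeholder name):
        # "nvidia.com/<instance-type>-gpu": 1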

@stevenhorsman @jensfr @bpradipt

mythi (Contributor) commented Jan 24, 2025

We need to come up with a new naming scheme that we use for peer-pods.

A bit off-topic, but how are these resources advertised on a node and mapped to a podvm? A device plugin with CDI devices that are podvm-specific?

zvonkok (Member, Author) commented Jan 24, 2025

Since we're in peer-pod land, I will answer this question in this context.

I talked to @bpradipt, who told me that one (admin, operator) will usually have a curated list of VM instance types that can be used in a specific cluster.

We can create NFD rules or a device plugin (though a device plugin is unnecessary, since it can only add env variables or mounts to the container; we cannot add annotations depending on the request) to expose this list as an extended resource. Since we added CDI support in the Kata agent, what we can now do is the following:

The pod requests nvidia.com/cgpu: 1, which means we do not care which GPU instance is used; pick one from the list and use the mutating webhook to add the annotation:

"cdi.k8s.io/peer-pod": "nvidia.com/gpu=0"

The kata-agent will read this annotation, and the corresponding CDI device will be injected.

If we need multiple GPUs

"cdi.k8s.io/peer-pod": "nvidia.com/gpu=0"
"cdi.k8s.io/peer-pod": "nvidia.com/gpu=1"

For the instance type, we have another annotation that is not related to CDI, but it obviously needs to select a GPU instance. If we use a CPU instance type but have added the CDI annotations, the kata-agent will fail and time out, since we cannot create the CDI specs for GPUs.
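
For illustration, the device referenced by nvidia.com/gpu=0 in the annotation above would be described by a CDI spec file inside the podvm (in practice such specs are typically generated by NVIDIA tooling such as nvidia-ctk cdi generate); a minimal hand-written sketch with assumed device-node paths:

cdiVersion: "0.6.0"
kind: "nvidia.com/gpu"
devices:
- name: "0"                  # referenced as nvidia.com/gpu=0
  containerEdits:
    deviceNodes:
    - path: /dev/nvidia0     # assumed node for the first GPU
containerEdits:
  deviceNodes:
  - path: /dev/nvidiactl     # shared control nodes (assumed)
  - path: /dev/nvidia-uvm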

inatatsu commented:

The kata-agent will read this annotation, and the corresponding CDI device will be injected.

@zvonkok In my understanding, Cloud API Adaptor, which sits between the container runtime for the remote hypervisor and the kata-agent (and resides outside of a pod VM), currently handles the GPU resource request annotations to determine an appropriate instance type. Are you suggesting that the kata-agent can handle this annotation by using a CDI spec inside the pod VM?

bpradipt (Member) commented:

Currently we have the following mechanism for using GPUs with peer-pods:

The user provides the following pod manifest (the same as for regular Kata or runc, except the runtimeClass changes):

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd-kata   
spec:
  runtimeClassName: kata-remote
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"
    resources:
      limits:
        "nvidia.com/gpu": 1

The webhook mutates the pod manifest to something like this (note the removal of resources and the addition of annotations):

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd-kata   
  annotations:
    io.katacontainers.config.hypervisor.default_gpus: "1"
spec:
  runtimeClassName: kata-remote
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"    

Then CAA finds the suitable GPU instance type from the pre-configured instance type list, creates the VM, and runs the pod.
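
For context, a rough sketch of what that pre-configured instance type list can look like in the peer-pods ConfigMap, assuming the Azure provider; the ConfigMap and key names here are an assumption and vary per provider and release:

apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm                       # assumed name used by cloud-api-adaptor deployments
  namespace: confidential-containers-system
data:
  CLOUD_PROVIDER: "azure"
  AZURE_INSTANCE_SIZE: "Standard_D2as_v5"  # default (non-GPU) size
  # curated list CAA can pick a GPU instance type from
  AZURE_INSTANCE_SIZES: "Standard_D2as_v5,Standard_NC4as_T4_v3,Standard_NC64as_T4_v3"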

Another alternative mechanism is to simply use a pod manifest specific to peer-pods, like the following (note the machine_type annotation to select the specific GPU instance type):

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd-kata   
  annotations:
      io.katacontainers.config.hypervisor.machine_type: Standard_NC4as_T4_v3
spec:
  runtimeClassName: kata-remote
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"    

Now with CDI, we can start with the most basic implementation, like the manifest below:

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd-kata   
  annotations:
    io.katacontainers.config.hypervisor.default_gpus: "1"
    cdi.k8s.io/gpu: "nvidia.com/pgpu=1"
spec:
  runtimeClassName: kata-remote
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"    

There are two places we can add the CDI annotation: either in the webhook or in CAA.
If we do it in the webhook, it's simple, but we won't be able to automatically add a suitable annotation based on the number of GPUs available in a specific instance, as that info is not available to the webhook. IOW, if the original manifest is the following:

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd-kata   
  annotations:
      io.katacontainers.config.hypervisor.machine_type: Standard_NC64as_T4_v3
spec:
  runtimeClassName: kata-remote
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"    

I think we would want the actual manifest to end up with the proper CDI annotation added, indicating the number of pgpus. That's not possible with the webhook today. CAA already has this info, so it should be able to modify the OCI spec to add it.

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd-kata   
  annotations:
      io.katacontainers.config.hypervisor.machine_type: Standard_NC64as_T4_v3
      cdi.k8s.io/gpu: "nvidia.com/pgpu=4"
spec:
  runtimeClassName: kata-remote
  restartPolicy: OnFailure
  containers:
  - name: cuda-vectoradd
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"    

Does this make sense?

snir911 (Contributor) commented Jan 30, 2025

IIUC, eventually we'll need to have some sort of translation between the instance size and a matching CDI annotation (type), no? That cannot be done ATM in the webhook, AFAIU.

Having said that, starting with attaching a default CDI annotation in the webhook/CAA according to the GPU request looks like a good option to me (assuming I understand the workflow right).
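
To sketch what that translation could look like: a small lookup the webhook or CAA might consult, mapping instance types to the CDI annotation value to inject. This is purely illustrative and not an existing config format; the mappings reuse the instance types and annotation values from the examples in this thread:

apiVersion: v1
kind: ConfigMap
metadata:
  name: instance-type-gpu-map              # hypothetical
data:
  # instance type -> CDI annotation value (as used earlier in this thread)
  Standard_NC4as_T4_v3: "nvidia.com/pgpu=1"
  Standard_NC64as_T4_v3: "nvidia.com/pgpu=4"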

bpradipt (Member) commented:

IIUC, eventually we'll need to have some sort of translation between the instance size and a matching CDI annotation (type), no?

Yes, that's my understanding.

Having said that, starting with attaching a default CDI annotation in the webhook/CAA according to the GPU request looks like a good option to me (assuming I understand the workflow right).

Is there anything needed on the pod VM side, or is the CDI annotation in the spec enough?

snir911 (Contributor) commented Jan 30, 2025

Is there anything needed on the pod VM side, or is the CDI annotation in the spec enough?

AFAIU the agent's CDI-related bits are all in place; the podvm just needs to have the CDI specification in place and that's it (I've been experimenting with the injection in the CAA and it worked).

snir911 (Contributor) commented Jan 30, 2025

Actually, adding the CDI annotation in the webhook (or manually) will fail ATM, as the (Go) shim cannot add the specified CDI device (should it simply pass the annotation through and do nothing else when it's a remote hypervisor? IDK).

mythi (Contributor) commented Jan 31, 2025

Actually, adding the CDI annotation in the webhook (or manually) will fail ATM, as the (Go) shim cannot add the specified CDI device (should it simply pass the annotation through and do nothing else when it's a remote hypervisor? IDK).

I believe the idea is that the kata-agent knows about the CDI devices and writes the config.json edits inside the guest. I'm not sure that's necessary in the peer-pods case, where there are no node device resources to be mapped into guest device resources.

Would peer-pods simply work if the config.json is prepared on the host before sending it to the kata-agent?
