Hi Matt,

What are your thoughts on ways to integrate this layer with an external layer that provides CUDA support for the desktop? I have a hacked-up layer locally that (somewhat) emulates the CUDA packages available in this layer, but for x86_64 desktop systems, so I use the same naming convention as here to keep compatibility with the cuda bbclass.
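To make that concrete, here's the shape of a consumer recipe I have in mind. Everything below is illustrative (the app name, URL, and checksums are placeholders), but the DEPENDS names follow the meta-tegra convention so either layer can satisfy them:

```
# Hypothetical example recipe -- names and sources are placeholders.
SUMMARY = "Example CUDA application"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<fill-in>"

inherit cuda

# Same package names as the meta-tegra recipes, so the recipe builds
# unchanged whether meta-tegra or an x86_64 layer provides them.
DEPENDS = "cuda-cudart cuda-toolkit"

SRC_URI = "git://example.com/cuda-app.git;branch=master"
S = "${WORKDIR}/git"

do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${B}/cuda-app ${D}${bindir}/
}
```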
I had envisioned something like a meta-nvidia layer which provides x86 CUDA support, and maybe x86 driver support (my current use case is containers with the drivers passed through). However, I don't know how to share the cuda bbclass without making either layer depend on the other.
One idea would be to move the CUDA support out of meta-tegra into meta-nvidia, maybe with a tree like:

```
meta-nvidia -> meta-cuda
            -> meta-nvidia-driver
```

Then meta-cuda would be a dependency of meta-tegra if you wanted the CUDA support, and the bbclass and all recipe names would stay in sync.
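For the layer wiring, I'm picturing roughly the following in the hypothetical meta-cuda's conf/layer.conf, plus a soft dependency from meta-tegra so it still parses without meta-cuda present (I'm guessing at the collection names here):

```
# conf/layer.conf in the hypothetical meta-cuda
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "cuda"
BBFILE_PATTERN_cuda = "^${LAYERDIR}/"
BBFILE_PRIORITY_cuda = "6"
```

```
# ...and in meta-tegra's conf/layer.conf, a soft dependency so CUDA
# support is picked up only when meta-cuda is in bblayers.conf
# (assuming meta-tegra's collection name is "tegra"):
LAYERRECOMMENDS_tegra = "cuda"
```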
I understand that this would be a pain, so I'm open to suggestions on a good way to handle it.
Jack, I've thought about this off and on, but have held off responding because I couldn't come up with a good way to make this work without introducing additional layer dependencies, which I'd really like to avoid if at all possible.
That said, if you have something set up that's working across Tegra and non-Tegra builds for generalizing CUDA package builds, I'd like to see it.
I ended up hacking together a meta-nvidia layer for a client that just followed the naming conventions of the Tegra packages so that the dependencies would just "work". I'm afraid I'm not at liberty to share it, but it was basically just the desktop NVIDIA .run binary extracted and split into packages, much as we do for the Tegra one. It was just a test bed for CUDA-enabled x86_64 Docker images and didn't get a lot of use in the end, so there wasn't much to lose.
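For anyone trying the same thing, the mechanics were roughly the sketch below. The URL, version, and file list are placeholders, and the runfile's --extract option is from memory, so double-check it against your specific installer:

```
# Hypothetical sketch: repackage the desktop CUDA runfile for x86_64.
SUMMARY = "CUDA runtime repackaged from the desktop runfile"
LICENSE = "Proprietary"
LIC_FILES_CHKSUM = "file://EULA.txt;md5=<fill-in>"

COMPATIBLE_HOST = "x86_64.*-linux"

SRC_URI = "https://developer.download.nvidia.com/compute/cuda/cuda_${PV}_linux.run"
SRC_URI[sha256sum] = "<fill-in>"

S = "${WORKDIR}/extracted"

# The runfile is a self-extracting installer, not an archive the
# fetcher can unpack, so extract it in a task of our own.
do_extract_runfile() {
    sh ${WORKDIR}/cuda_${PV}_linux.run --extract=${S}
}
addtask extract_runfile after do_unpack before do_patch

do_install() {
    install -d ${D}${libdir}
    install -m 0644 ${S}/lib64/libcudart.so.* ${D}${libdir}/
}

# Prebuilt proprietary binaries won't pass the usual QA checks.
INSANE_SKIP_${PN} += "ldflags already-stripped"
```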