desktop cuda support and cuda bbclass #23

Closed
jackmitch opened this issue Dec 15, 2016 · 2 comments

Comments

jackmitch (Contributor) commented Dec 15, 2016

Hi Matt,

What are your thoughts on ways to integrate this layer with an external layer that provides CUDA support for the desktop? I have a hacked-up layer locally that (somewhat) emulates the CUDA packages available in this layer, but for x86_64 desktop systems, and I use the same naming convention as here to keep compatibility with the cuda bbclass.
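
To make the compatibility concrete, the pattern I'm preserving looks roughly like the sketch below: a recipe inherits the cuda class and pulls in CUDA packages by name, so any layer that provides packages under those names can satisfy it. This is only an illustration; the recipe name, source URI, and the specific DEPENDS entries are made up rather than copied from this layer or mine.

    # Hypothetical consumer recipe -- the name, SRC_URI, and package names
    # are illustrative, not copied from meta-tegra or my local layer.
    SUMMARY = "Example CUDA-enabled application"
    LICENSE = "CLOSED"

    SRC_URI = "git://example.com/cuda-app.git;protocol=https"
    SRCREV = "${AUTOREV}"
    S = "${WORKDIR}/git"

    # cuda.bbclass comes from meta-tegra; any layer that provides CUDA
    # packages under the same names can satisfy the same dependencies.
    inherit cuda

    DEPENDS += "cuda-cudart cuda-toolkit"

The point is just that nothing in the recipe itself says whether the provider is meta-tegra or a desktop layer.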

I had envisioned something like a meta-nvidia layer which provides x86 CUDA support, and maybe x86 driver support (my current use case is containers with the drivers passed through). However, I don't know how to share the cuda bbclass without either layer depending on the other.

One idea would be to move the CUDA support out of meta-tegra into meta-nvidia, maybe with a tree like

meta-nvidia -> meta-cuda
            -> meta-nvidia-driver

then meta-cuda would be a dependency of meta-tegra if you wanted the CUDA support, and the bbclass and all the recipe names would stay in sync.
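
For the dependency mechanics, I imagine it would be expressed in meta-tegra's conf/layer.conf, roughly as sketched below. The collection name "cuda" and the priority value are assumptions on my part rather than anything that exists today, and a soft LAYERRECOMMENDS instead of a hard LAYERDEPENDS might be one way to keep meta-cuda optional for builds that don't need CUDA.

    # conf/layer.conf in meta-tegra (sketch only; the "cuda" collection
    # name and the priority value are assumptions, not existing settings)
    BBPATH .= ":${LAYERDIR}"
    BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"

    BBFILE_COLLECTIONS += "tegra"
    BBFILE_PATTERN_tegra = "^${LAYERDIR}/"
    BBFILE_PRIORITY_tegra = "10"

    # A hard dependency would be LAYERDEPENDS_tegra = "cuda"; a soft
    # recommendation leaves meta-cuda optional for non-CUDA builds.
    LAYERRECOMMENDS_tegra = "cuda"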

I understand that this would be a pain, so I'm open to suggestions on a good way to handle it.

madisongh (Member) commented

Jack, I've thought about this off and on, but have held off responding because I couldn't come up with a good way to make this work without introducing the additional layer dependency (or dependencies), which I'd really like to avoid if at all possible.

That said, if you have something set up that's working across Tegra and non-Tegra builds for generalizing CUDA package builds, I'd like to see it.

jackmitch (Contributor, Author) commented

Hi Matt,

I ended up hacking together a meta-nvidia layer for a client which simply followed the naming conventions of the Tegra packages so that the dependencies would just "work". I'm afraid I'm not at liberty to share it, but it was basically the desktop NVIDIA .run binary extracted and separated into packages, much as we do for the Tegra one. It was only a test bed for CUDA-enabled x86_64 Docker images and didn't get much use in the end, so there wasn't much to lose.

Cheers.
