Native libraries for multiple Operating Systems #1422
Replies: 4 comments 7 replies
-
@maziyarpanahi Here is an example of how Gradle handles it: djl/engines/mxnet/jnarator/build.gradle, line 28 at commit 5ffc373
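For readers without the referenced Gradle file at hand, the idea of mapping the host OS to a native-artifact classifier can be sketched in plain Java. The classifier strings below follow DJL's `linux`/`osx`/`win` naming convention but are assumptions for illustration; the linked build.gradle is authoritative.

```java
// Sketch: derive a native-artifact classifier (e.g. "linux-x86_64") from the
// host JVM's system properties, similar in spirit to what a Gradle build can
// do when deciding which native dependency to pull in.
public class OsClassifier {

    static String detect() {
        String os = System.getProperty("os.name").toLowerCase();
        String prefix;
        if (os.contains("win")) {
            prefix = "win";
        } else if (os.contains("mac")) {
            prefix = "osx";
        } else {
            prefix = "linux"; // treat everything else as Linux for this sketch
        }
        return prefix + "-x86_64"; // assumes a 64-bit x86 host
    }

    public static void main(String[] args) {
        System.out.println(detect());
    }
}
```

The limitation discussed in this thread is exactly that such a selection happens once, for one OS, whereas a fat JAR needs all three variants side by side.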
-
Now I see the problem. We are scanning all of them; it looks like we won't be able to support a fat jar for Linux and Mac together.
-
TensorFlow for Java uses the same mechanism as the JavaCPP Presets, so it is possible to do this with the JavaCPP Presets for PyTorch. I have offered DJL my help to package their PyTorch support in the same way, but they are not interested, and I am not sure why. @frankfliu Could you please elaborate on why DJL isn't interested in reusing functionality from JavaCPP that is known to work well? I would be happy to fix whatever is wrong with JavaCPP to get cooperation going.
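For context, the JavaCPP-based approach mentioned here lets a single dependency transitively bring in the native binaries for every supported platform. In SBT that is roughly the following (the version number is illustrative only):

```scala
// build.sbt (sketch): one artifact whose transitive dependencies carry the
// native TensorFlow binaries for all supported OSes on CPU.
libraryDependencies += "org.tensorflow" % "tensorflow-core-platform" % "0.4.2"
```

At runtime, JavaCPP's loader extracts and loads the native library matching the current platform from whichever of those jars applies.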
-
Hi @frankfliu, is there any ETA on adding classifier folders for each native jar file based on OS, like:
-
Hi,

I would like to know the best practice for shipping the PyTorch engine's native libraries for the same device (CPU) but multiple operating systems in one project, offline. For instance, if I want to offer the PyTorch engine to all osx, linux, and windows users in a single SBT project (fat JAR; I am not worried about the size), what would be the best way to do this, given that DJL publishes no Maven package for this specific purpose?

As an example, tensorflow-java has an artifact for each specific operating system/device, but it also has options that automatically include all versions of the native library for a given configuration: `tensorflow-core-platform` for all supported OSes on CPU, `tensorflow-core-platform-mkl` for all supported OSes on CPU with MKL, `tensorflow-core-platform-gpu` for all supported OSes on GPU, and so on.

Since DJL has no such option, I have to include all supported `pytorch-native-cpu` artifacts with multiple classifiers in the SBT project. DJL PyTorch doesn't work properly in this situation: with all three osx, win, and linux CPU dependencies in the same project, it fails with `No deep learning engine found.`

What would be the best way to get a package that supports all three operating systems on CPU, without using `auto`, for offline usage where the libraries are included inside the fat JAR (and not in some default cache folder)?

PS: I have read the discussions/issues regarding offline usage of native libraries, but none of them asked about multiple targets; they all knew the actual target (like Windows on CPU).