Support for macos_arm64 #604
Comments
Unfortunately we are not able to give you concrete guidance here as we don't have access to Apple Silicon at the moment. From the PR you linked it should be possible to update the TensorFlow dependency to a newer commit here, although I am not sure whether this would require additional changes to our code. I would start with updating that dependency and seeing what breaks.
So for a start I changed the TensorFlow dependency as suggested.
Hey Simon, when building with Bazel the tar.gz file that's in the WORKSPACE file you linked to gets used, and the TF submodule is used for Makefile builds. We tend to use Bazel for almost everything, so the WORKSPACE file is what matters, but when we update TF versions we always bump the submodule as well to keep them in sync.
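As an illustration, a minimal sketch of what bumping both references could look like, assuming the submodule lives at third_party/tensorflow and the WORKSPACE pins a GitHub source archive of the same commit (the path, URL pattern and commit choice here are assumptions, not taken from the repo):

```sh
# Hypothetical sketch: pin both TensorFlow references to the same commit.
# The submodule path and archive URL pattern are assumptions.
NEW_TF_COMMIT=bab0d14036efd0adcd4e48303d045cee3c342cb0

# 1. Bump the git submodule that the Makefile build uses.
git -C third_party/tensorflow fetch origin
git -C third_party/tensorflow checkout "$NEW_TF_COMMIT"
git add third_party/tensorflow

# 2. Recompute the checksum for the matching archive, then update the
#    http_archive url / strip_prefix / sha256 in WORKSPACE by hand.
curl -L "https://github.com/tensorflow/tensorflow/archive/${NEW_TF_COMMIT}.tar.gz" \
  | shasum -a 256
```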
So working on this front also revealed some updates.
This works, and I evaluated the QuickNet models; the performance is quite stunning. The Larq docs would also be a nice place to show these numbers.
The LCE make script uses TF's makefile, but without the '-D' options described here. How could these options be passed through to the build?
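For what it's worth, a rough sketch of how extra defines could be forwarded, assuming GNU make; the define name is a placeholder, and whether this particular Makefile actually honors EXTRA_CXXFLAGS is an assumption on my side:

```sh
# Sketch only: forward extra preprocessor defines into the make-based build.
# -DSOME_PLATFORM_FLAG is a placeholder, and EXTRA_CXXFLAGS being picked up
# by tensorflow/lite/tools/make/Makefile is an assumption, not verified.
make -f tensorflow/lite/tools/make/Makefile -j8 \
  EXTRA_CXXFLAGS="-DSOME_PLATFORM_FLAG"
```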
@lgeiger @Tombana @AdamHillier As stated above, Bazel builds are not yet possible (due to TensorFlow Lite's dependency on XNNPACK), but I issued a PR there. Once this gets accepted, I can issue a PR in the official TensorFlow repo. This should allow Bazel builds of the compute engine on Apple Silicon.
Closing this issue as #664 fixes this and has been merged.
I tried to run lce_benchmark_model on an Apple Mac mini with an M1. It turns out that this is not possible directly, although the platform (aarch64) is generally the same. The next step was trying to build it manually, but that did not work since more adaptations to the bazel build setup are needed (I assume it would need a different toolchain). Could you give me a hint on how to implement this properly, so that we could build lce_benchmark_model natively, or even the whole pip package of larq_compute_engine?
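To make the question concrete, this is roughly what I would like to be able to run on the M1 machine; the target labels and the --cpu value are guesses on my part, not a working configuration:

```sh
# Wishful sketch of a native Apple Silicon build; target labels and the
# --cpu value are guesses, not something that works today.
bazel build -c opt --cpu=darwin_arm64 \
  //larq_compute_engine/tflite/benchmark:lce_benchmark_model

# ...and ideally the whole pip package the same way:
bazel build -c opt --cpu=darwin_arm64 :build_pip_pkg
```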
Since commit bab0d14036efd0adcd4e48303d045cee3c342cb0 it is possible to build TF 2.4 for Apple silicon, as explained here. I checked the TF subrepo within larq_compute_engine, which is currently at commit 85c8b2a817f95a3e979ecd1ed95bff1dc1335cff, so this fix is not yet included.
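For anyone who wants to double-check, this is roughly how the two commits can be compared, assuming the submodule sits at third_party/tensorflow (that path is an assumption):

```sh
# Check which commit the vendored TensorFlow is pinned to, and whether
# the Apple Silicon fix is already an ancestor of it.
# The submodule path third_party/tensorflow is an assumption.
git submodule status third_party/tensorflow
git -C third_party/tensorflow fetch origin
git -C third_party/tensorflow merge-base --is-ancestor \
  bab0d14036efd0adcd4e48303d045cee3c342cb0 HEAD \
  && echo "fix already included" || echo "fix not yet included"
```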