This fork contains custom TF2/TFLite code needed by the DL team for iOS model support. See the TF2 online documentation for bazel-related targets and build info. In particular, you'll need to set up a Python environment and run `./configure` from the command line to stage your build.
For Android, we use docker tools available from Google and just volume in our code (nice!). We run these on a machine we spin up for this purpose, dl-android.
So far, our principal additions have been a port of TF-Addons code to TFLite and some additional ops to support the b-line-counting auxiliary model. These latter TFLite ops are mirrored by TF2 ops in the `dl-ops` Python library.
Prior to building, set up a Python environment as outlined in the documentation. We've been building with Python 3.8.7 and NumPy 1.19.5, as these seem to be favored by the TF release process.
To configure the build, first run the following and select iOS support when prompted:

```shell
./configure
```
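The prompts can also be pre-answered by exporting the environment variables that `configure.py` reads, which is convenient for scripted builds. This is a sketch; the variable names below match recent TF 2.x releases, but verify them against the `configure.py` checked into this fork:

```shell
# Sketch: stage a non-interactive iOS configure (verify names in configure.py).
export PYTHON_BIN_PATH="$(command -v python3)"  # the 3.8.7 environment from above
export TF_CONFIGURE_IOS=1                       # answer "yes" to iOS support
export TF_SET_ANDROID_WORKSPACE=0               # skip Android workspace setup here
./configure
```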
The principal iOS target here would be (Note: not current; see the C++ discussion below):

```shell
bazel build -c opt --config=nogcp --config=nonccl --config=ios --ios_multi_cpus=arm64,x86_64 //tensorflow/lite/ios:TensorFlowLiteC_framework
```
This builds a C TFLite framework, with the full builtin set of ops, on top of an underlying set of C++ libraries.
A tighter runtime can be generated using the TensorFlow-provided script `tensorflow/lite/ios/build_frameworks.sh`. Given a set of models, this will generate a minimal "covering" framework and will also build, as necessary, an additional SelectOps library to cover any ops used that are not in the base builtin set. We have avoided these types of ops because inclusion of the SelectOps libraries results in a substantial footprint and, more seriously, a collision with our existing TF1 installation on the phone.
When TF1 support and this library are removed, we can safely deploy models that depend on this additional framework. Note that we earlier needed to add some additional dependencies to this script and to the related `tensorflow/lite/tools/gen_op_registration.cc` code. Take a look at the diff in git; perhaps these changes will not be needed in the future.
For signature support in TFLite 2.7, we use the underlying C++ API in preference to the officially supported C API, because the C++ API allows access to multiple signature methods. Multiple signatures will likely come to the C API in a future release, but for 2.7 we have to use C++. We do this simply by including the relevant headers needed to expose the underlying C++ API.
There are about 36 of these headers; the script `tensorflow_header_copy.sh` copies them from the `bazel-bin` output directly over to the software repo and drops them into the C framework. It assumes the software repo lives in `$HOME/src/software`. Adjust and run this script as needed to update the headers after changing the TensorFlow version, if the headers have changed.
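The core of what the script does can be sketched as follows. This is a hypothetical reimplementation, not the checked-in script: it mirrors every `.h` file under a source root into a destination directory, preserving relative paths. The example paths in the comment are placeholders; the real source and destination live in the script itself.

```shell
# Sketch of the header-copy step: mirror *.h files from the bazel-bin output
# into the app repo's framework Headers directory, preserving relative paths.
copy_headers() {
  src_root=$1   # e.g. a directory under bazel-bin/tensorflow/lite (placeholder)
  dst_root=$2   # e.g. "$HOME/src/software/.../TensorFlowLiteC.framework/Headers" (placeholder)
  ( cd "$src_root" && find . -name '*.h' ) | while IFS= read -r rel; do
    mkdir -p "$dst_root/$(dirname "$rel")"
    cp "$src_root/$rel" "$dst_root/$rel"
  done
}
```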
The normal iOS bazel target, `//tensorflow/lite/ios:TensorFlowLiteC_framework`, explicitly removes C++ symbols, though `tensorflow/lite/ios/build_frameworks.sh` does not. An additional target, `//tensorflow/lite/ios:TensorFlowLiteC_static_framework`, exposes these symbols. This is the target we are currently using.
To support GPU use on iOS, we force-link the GPU framework. This framework is built as follows:

```shell
bazel build -c opt --config=nogcp --config=nonccl --config=ios --ios_multi_cpus=arm64,x86_64 //tensorflow/lite/delegates/coreml:coreml_delegate
```
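Force-linking on iOS is typically done with the `-force_load` linker flag so the delegate's registration symbols are not dead-stripped. The fragment below is a sketch only: the library path and name are placeholders, and the real value depends on where the built delegate lands in the app repo.

```shell
# Xcode build setting (OTHER_LDFLAGS); the path is a placeholder:
-force_load $(PROJECT_DIR)/Frameworks/libcoreml_delegate.a
```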
We build out of docker by voluming in our code. Instructions are here. To launch:

```shell
docker run -it -v $PWD:/host_dir tflite-builder bash
```
If the container is still around, it's easier to just start it and attach (otherwise, read the doc on upgrading Android tools):

```shell
ubuntu@ip-10-0-6-111:~$ docker container ls -a
CONTAINER ID   IMAGE            COMMAND   CREATED       STATUS                      PORTS   NAMES
1305b9415342   tflite-builder   "bash"    6 weeks ago   Exited (0) 17 seconds ago           awesome_robinson
docker start 1305b9415342
docker attach 1305b9415342
```
History is your friend here (`history`): you can see and re-use prior commands in the container this way.
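The start/attach pair above can also be collapsed into one command so you don't have to copy the container id by hand. This is a convenience sketch; it assumes the container was created from the `tflite-builder` image as shown above:

```shell
# Restart and attach to the most recent tflite-builder container in one step.
docker start -ai "$(docker ps -aq --filter ancestor=tflite-builder | head -n 1)"
```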
Here we build two targets (Note: not current):

```shell
bazel build --config=android_arm64 --config=noaws --config=nogcp --config=nohdfs --config=nonccl //tensorflow/lite/c:tensorflowlite_c
cp bazel-bin/tensorflow/lite/c/libtensorflowlite_c.so libtensorflowlite_c_android_arm_64.so
bazel build --config=android_x86_64 --config=noaws --config=nogcp --config=nohdfs --config=nonccl //tensorflow/lite/c:tensorflowlite_c
cp bazel-bin/tensorflow/lite/c/libtensorflowlite_c.so libtensorflowlite_c_android_x86_64.so
```
For C++ we use the target `//tensorflow/lite:tensorflowlite` (Note: current).
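Presumably this target is invoked like the C builds above; the sketch below mirrors those flags for the arm64 case (the renamed output file is our convention, and the `libtensorflowlite.so` name is what that bazel target produces):

```shell
bazel build --config=android_arm64 --config=noaws --config=nogcp --config=nohdfs --config=nonccl //tensorflow/lite:tensorflowlite
cp bazel-bin/tensorflow/lite/libtensorflowlite.so libtensorflowlite_android_arm_64.so
```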
We also build and maintain builds for Darwin and Linux. This is needed to support direct C++ use by the imaging team.
These are not generally necessary at this point, as custom ops are implemented in the `dl-ops` repo.
We have the following branches:

- `tf1`: the historical 1.13.x branch used in the early days (and still, now) for development and the phone
- `master`: a release build (currently v2.7.0) plus our changes