The custom model in the local benchmark tool currently only supports `tf.GraphModel` or `tf.LayersModel`.
If you want to benchmark more complex TensorFlow.js models with customized input preprocessing logic, you need to implement `load` and `predictFunc` methods, following this example PR.
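As a rough sketch of what such an entry can look like (the object shape, URL, and input dimensions below are illustrative placeholders rather than the tool's exact registration format), a custom benchmark pairs a `load` function that returns the model with a `predictFunc` that wraps the input preprocessing and inference call:

```js
// Illustrative sketch only: a custom benchmark entry with `load` and `predictFunc`.
// The URL and the input shape are placeholders, not the tool's exact format.
const myCustomBenchmark = {
  load: async () => {
    // Return the model to benchmark.
    return tf.loadGraphModel('http://127.0.0.1:8080/model/model.json');
  },
  predictFunc: () => {
    // Customized input preprocessing goes here; reuse the prepared input for each run.
    const input = tf.randomNormal([1, 224, 224, 3]);
    return model => model.predict(input);
  },
};
```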
If you have a TensorFlow.js model in the local file system, you can benchmark it by hosting both the local benchmark tool and the model on an HTTP server. This approach also works if the online local benchmark tool is blocked by CORS problems when fetching custom models.
You can benchmark the MobileNet model in the local file system through the following steps:
- Download the tool.

      git clone https://github.com/tensorflow/tfjs.git
      cd tfjs/e2e/benchmarks/

- Download the model.

      wget -O model.tar.gz "https://tfhub.dev/google/tfjs-model/imagenet/mobilenet_v2_130_224/classification/3/default/1?tfjs-format=compressed"
      mkdir model
      tar -xf model.tar.gz -C model/

- Run an HTTP server to host the model and the local benchmark tool.

      npx http-server

- Open http://127.0.0.1:8080/local-benchmark/ in the browser.
- Select `custom` in the `models` field.
- Fill `http://127.0.0.1:8080/model/model.json` into the `modelUrl` field.
- Run the benchmark.
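If the model fails to load in the tool, a quick sanity check (run in the browser console, assuming the hosting setup above) is to fetch the hosted model.json directly and confirm it is reachable without CORS errors:

```js
// Quick check from the browser console: model.json should be reachable without CORS errors.
const resp = await fetch('http://127.0.0.1:8080/model/model.json');
console.log(resp.ok, resp.status);  // expect: true 200
```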
The benchmark tool supports three kinds of paths to custom models: a URL, LocalStorage, and IndexedDB.
URL examples:
- TF Hub: https://tfhub.dev/google/tfjs-model/imagenet/resnet_v2_50/feature_vector/1/default/1
- Storage: https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json
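For illustration only, if you were to load these two URL styles directly with the TensorFlow.js API rather than through the tool, a TF Hub link needs the `fromTFHub` option while a model.json link is loaded as a plain URL:

```js
// Illustration: loading the two URL styles directly with tf.loadGraphModel.
const hubModel = await tf.loadGraphModel(
    'https://tfhub.dev/google/tfjs-model/imagenet/resnet_v2_50/feature_vector/1/default/1',
    {fromTFHub: true});
const storageModel = await tf.loadGraphModel(
    'https://storage.googleapis.com/tfjs-models/savedmodel/mobilenet_v2_1.0_224/model.json');
```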
LocalStorage: first, store the model in LocalStorage by running the following code in the browser console:

    const localStorageModel = tf.sequential(
        {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
    const saveResults = await localStorageModel.save('localstorage://my-model-1');

Then use "localstorage://my-model-1" as the custom model URL.
IndexedDB: first, store the model in IndexedDB by running the following code in the browser console:

    const indexDBModel = tf.sequential(
        {layers: [tf.layers.dense({units: 1, inputShape: [3]})]});
    const saveResults = await indexDBModel.save('indexeddb://my-model-1');

Then use "indexeddb://my-model-1" as the custom model URL.
If the input shapes for your model contain a dynamic dimension (e.g., for MobileNet, shape = [-1, 224, 224, 3]), you are required to set it to a valid shape (e.g., [1, 224, 224, 3]) before you can run the benchmark.
In the Inputs section you will see an input box for updating the shape.
Once the shape is set, you can click the 'Run benchmark' button again to run the benchmark.
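Conceptually, fixing the shape just replaces every -1 with a concrete value (typically 1 for the batch dimension) so that a random input tensor can be generated; a minimal sketch:

```js
// Replace dynamic (-1) dimensions with 1 so a concrete benchmark input can be built.
const reportedShape = [-1, 224, 224, 3];
const fixedShape = reportedShape.map(d => d === -1 ? 1 : d);  // [1, 224, 224, 3]
const input = tf.randomNormal(fixedShape);
```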
It's easy to set up a web server to host benchmarks and run against them via e2e/benchmarks/local-benchmark/index.html. You can manually specify the optional URL parameters as needed. Here is the list of supported URL parameters:
- Model related parameters:
  - `architecture`: same as architecture (only certain models have it, such as MobileNetV3 and posenet)
  - `benchmark`: same as models
  - `inputSize`: same as inputSizes
  - `inputType`: same as inputTypes
  - `modelUrl`: same as modelUrl, for custom models only
  - `${InputName}Shape`: the input shape array, separated by commas, for custom models only. For example, bodypix's graph model has an input named sub_2, so users could add 'sub_2Shape=1,1,1,3' to the URL to populate its shape.
- Environment related parameters:
  - `backend`: same as backend
  - `localBuild`: local build name list, separated by commas. The name is in short form (in general, the name without the tfjs- and backend- prefixes, for example webgl for tfjs-backend-webgl, core for tfjs-core). Example: 'webgl,core'.
  - `run`: same as numRuns
  - `task`: `correctness` to "Test correctness" or `performance` to "Run benchmark"
  - `warmup`: same as numWarmups
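For example, a URL along the following lines preselects the task, backend, warm-up count, and run count (the parameter names come from the list above; the backend value `wasm` is an assumption and must match an option the tool actually offers):

```
http://127.0.0.1:8080/local-benchmark/index.html?task=performance&backend=wasm&warmup=10&run=50
```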