removing paddlepaddle and tflite reference (#1805)
Qing Lan authored Apr 23, 2024
1 parent ed0041a commit 346f1ce
Showing 11 changed files with 1 addition and 50 deletions.
2 changes: 0 additions & 2 deletions README.md
```diff
@@ -15,8 +15,6 @@ HTTP endpoint. It can serve the following model types out of the box:
 
 You can install extra extensions to enable the following models:
 
-- PaddlePaddle model
-- TFLite model
 - XGBoost model
 - LightGBM model
 - Sentencepiece model
```
5 changes: 0 additions & 5 deletions benchmark/README.md
```diff
@@ -19,13 +19,10 @@ djl-bench currently support benchmark the following type of models:
 - TensorFlow SavedModel bundle
 - Apache MXNet model
 - ONNX model
-- PaddlePaddle model
-- TFLite model
 - TensorRT model
 - XGBoost model
 - LightGBM model
 - Python script model
-- Neo DLR (TVM) model
 
 You can build djl-bench from source if you need to benchmark fastText/BlazingText/Sentencepiece models.
 
@@ -187,9 +184,7 @@ By default, the above script will use MXNet as the default Engine, but you can a
 -e TensorFlow   # TensorFlow
 -e PyTorch      # PyTorch
 -e MXNet        # Apache MXNet
--e PaddlePaddle # PaddlePaddle
 -e OnnxRuntime  # pytorch
--e TFLite       # TFLite
 -e TensorRT     # TensorRT
 -e XGBoost      # XGBoost
 -e LightGBM     # LightGBM
```
2 changes: 0 additions & 2 deletions benchmark/build.gradle
```diff
@@ -13,8 +13,6 @@ dependencies {
     runtimeOnly "ai.djl.pytorch:pytorch-model-zoo"
     runtimeOnly "ai.djl.tensorflow:tensorflow-model-zoo"
     runtimeOnly "ai.djl.mxnet:mxnet-model-zoo"
-    runtimeOnly "ai.djl.paddlepaddle:paddlepaddle-model-zoo"
-    runtimeOnly "ai.djl.tflite:tflite-engine"
     runtimeOnly "ai.djl.ml.xgboost:xgboost"
     runtimeOnly project(":engines:python")
     runtimeOnly "ai.djl.tensorrt:tensorrt"
```
2 changes: 0 additions & 2 deletions benchmark/snapcraft/snapcraft.yaml
```diff
@@ -12,10 +12,8 @@ description: |
   - PyTorch
   - TensorFlow
   - Apache MXNet
-  - PaddlePaddle
   - ONNXRuntime
   - TensorRT
-  - TensorFlow Lite
   - XGBoost
   - LightGBM
   - Python
```
6 changes: 0 additions & 6 deletions benchmark/src/main/java/ai/djl/benchmark/Benchmark.java
```diff
@@ -121,12 +121,6 @@ private static void configEngines(boolean multithreading) {
                 System.setProperty("ai.djl.onnxruntime.num_threads", "1");
             }
         }
-        if (System.getProperty("ai.djl.tflite.disable_alternative") == null) {
-            System.setProperty("ai.djl.tflite.disable_alternative", "true");
-        }
-        if (System.getProperty("ai.djl.paddlepaddle.disable_alternative") == null) {
-            System.setProperty("ai.djl.paddlepaddle.disable_alternative", "true");
-        }
         if (System.getProperty("ai.djl.onnx.disable_alternative") == null) {
             System.setProperty("ai.djl.onnx.disable_alternative", "true");
         }
```
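The deleted branches above follow a set-the-default-only-if-unset pattern, so a user-supplied `-D` flag always wins over the benchmark's defaults. A minimal standalone sketch of that pattern (not DJL code; class and property names here are illustrative, except `ai.djl.onnx.disable_alternative`, which appears in the diff):

```java
// Sketch of the guard pattern used in configEngines: a system property
// is only defaulted when the user has not already provided a value.
public class EngineDefaults {
    static void setIfAbsent(String key, String value) {
        if (System.getProperty(key) == null) {
            System.setProperty(key, value);
        }
    }

    public static void main(String[] args) {
        System.setProperty("user.choice", "false");             // simulates a -D flag
        setIfAbsent("user.choice", "true");                     // must NOT override
        setIfAbsent("ai.djl.onnx.disable_alternative", "true"); // defaulted, was unset
        System.out.println(System.getProperty("user.choice") + " "
                + System.getProperty("ai.djl.onnx.disable_alternative"));
        // prints "false true"
    }
}
```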
6 changes: 0 additions & 6 deletions serving/docs/configurations.md
```diff
@@ -57,12 +57,6 @@ DJLServing build on top of Deep Java Library (DJL). Here is a list of settings f
 | ai.djl.mxnet.static_shape         | system prop | CachedOp options, default: true |
 | ai.djl.use_local_parameter_server | system prop | Use java parameter server instead of MXNet native implemention, default: false |
 
-### PaddlePaddle
-
-| Key                                     | Type                | Description                                      |
-|-----------------------------------------|---------------------|--------------------------------------------------|
-| PADDLE_LIBRARY_PATH                     | env var/system prop | User provided custom PaddlePaddle native library |
-| ai.djl.paddlepaddle.disable_alternative | system prop         | Disable alternative engine                       |
 
 ### Huggingface tokenizers
 
```
2 changes: 1 addition & 1 deletion serving/docs/configurations_model.md
```diff
@@ -10,7 +10,7 @@ An example `serving.properties` can be found [here](https://github.com/deepjaval
 In `serving.properties`, you can set the following properties. Model properties are accessible to `Translator`
 and python handler functions.
 
-- `engine`: Which Engine to use, values include MXNet, PyTorch, TensorFlow, ONNX, PaddlePaddle, DeepSpeed, etc.
+- `engine`: Which Engine to use, values include MXNet, PyTorch, TensorFlow, ONNX, DeepSpeed, etc.
 - `load_on_devices`: A ; delimited devices list, which the model to be loaded on, default to load on all devices.
 - `translatorFactory`: Specify the TranslatorFactory.
 - `job_queue_size`: Specify the job queue size at model level, this will override global `job_queue_size`, default is `1000`.
```
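For context, the keys documented in that file combine into a model-level `serving.properties`. A hypothetical example using only the keys visible in the diff (the values are illustrative, not DJL Serving defaults):

```properties
# Hypothetical serving.properties sketch; values are examples only.
engine=PyTorch
# ";"-delimited device list; omit to load on all devices
load_on_devices=0;1
# per-model queue size, overrides the global job_queue_size
job_queue_size=1000
```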
2 changes: 0 additions & 2 deletions serving/docs/modes.md
```diff
@@ -175,7 +175,6 @@ Next, you need to include a model file. DJL Serving supports model artifacts for
 - PyTorch (torchscript only)
 - TensorFlow
 - ONNX
-- PaddlePaddle
 
 You can also include any required artifacts in the model directory. For example, `ImageClassificationTranslator` may need a `synset.txt` file, you can put it in the same directory with your model file to define the labels.
 
@@ -443,7 +442,6 @@ DJL Serving supports model artifacts for the following engines:
 - PyTorch (torchscript only)
 - TensorFlow
 - ONNX
-- PaddlePaddle
 
 ### Packaging
 
```
```diff
@@ -75,12 +75,6 @@ public void installEngine(String engineName) throws IOException {
                 installDependency("ai.djl.mxnet:mxnet-engine:" + djlVersion);
                 installDependency("ai.djl.mxnet:mxnet-model-zoo:" + djlVersion);
                 break;
-            case "PaddlePaddle":
-                installDependency("ai.djl.paddlepaddle:paddlepaddle-engine:" + djlVersion);
-                break;
-            case "TFLite":
-                installDependency("ai.djl.tflite:tflite-engine:" + djlVersion);
-                break;
             case "XGBoost":
                 installDependency("ai.djl.ml.xgboost:xgboost:" + djlVersion);
                 // TODO: Avoid hard code version
```
6 changes: 0 additions & 6 deletions wlm/src/main/java/ai/djl/serving/wlm/ModelInfo.java
```diff
@@ -704,12 +704,6 @@ private String inferEngine() throws ModelException {
         } else if (Files.isRegularFile(modelDir.resolve(prefix + ".trt"))
                 || Files.isRegularFile(modelDir.resolve(prefix + ".uff"))) {
             return "TensorRT";
-        } else if (Files.isRegularFile(modelDir.resolve(prefix + ".tflite"))) {
-            return "TFLite";
-        } else if (Files.isRegularFile(modelDir.resolve("model"))
-                || Files.isRegularFile(modelDir.resolve("__model__"))
-                || Files.isRegularFile(modelDir.resolve("inference.pdmodel"))) {
-            return "PaddlePaddle";
         } else if (Files.isRegularFile(modelDir.resolve(prefix + ".json"))
                || Files.isRegularFile(modelDir.resolve(prefix + ".xgb"))
                || Files.isRegularFile(modelDir.resolve("model.xgb"))) {
```
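The `inferEngine` change above shrinks a probe chain that guesses the engine from files present in the model directory. A self-contained sketch of that technique (not the actual `ModelInfo` code; the class name and the `XGBoost` mapping for the `.xgb`/`.json` probes are assumptions based on the context lines and the test file in this commit):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Simplified sketch of extension-based engine inference: probe the model
// directory for known artifact names and return the first matching engine,
// in the order that remains after this commit (TFLite/PaddlePaddle removed).
public class EngineSniffer {
    static String inferEngine(Path modelDir, String prefix) {
        if (Files.isRegularFile(modelDir.resolve(prefix + ".trt"))
                || Files.isRegularFile(modelDir.resolve(prefix + ".uff"))) {
            return "TensorRT";
        } else if (Files.isRegularFile(modelDir.resolve(prefix + ".json"))
                || Files.isRegularFile(modelDir.resolve(prefix + ".xgb"))
                || Files.isRegularFile(modelDir.resolve("model.xgb"))) {
            return "XGBoost";
        }
        return null; // unknown: the real code raises a ModelException instead
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("sniffer");
        Files.createFile(dir.resolve("test_model.uff"));
        System.out.println(inferEngine(dir, "test_model")); // prints "TensorRT"
    }
}
```

This is also why the deleted test blocks created `__model__` and `test_model.tflite` files: each probe had a matching filesystem fixture.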
12 changes: 0 additions & 12 deletions wlm/src/test/java/ai/djl/serving/wlm/ModelInfoTest.java
```diff
@@ -176,18 +176,6 @@ public void testInitModel() throws IOException, ModelException {
         model.initialize();
         assertEquals(model.getEngineName(), "XGBoost");
 
-        Path paddle = modelDir.resolve("__model__");
-        Files.createFile(paddle);
-        model = new ModelInfo<>("build/models/test_model");
-        model.initialize();
-        assertEquals(model.getEngineName(), "PaddlePaddle");
-
-        Path tflite = modelDir.resolve("test_model.tflite");
-        Files.createFile(tflite);
-        model = new ModelInfo<>("build/models/test_model");
-        model.initialize();
-        assertEquals(model.getEngineName(), "TFLite");
-
         Path tensorRt = modelDir.resolve("test_model.uff");
         Files.createFile(tensorRt);
         model = new ModelInfo<>("build/models/test_model");
```
