Merge pull request #6949 from lego0901/wssong-estimator-depr
Remove tf_estimator usages; the Estimator API is fully deprecated as of TF 2.16
lego0901 authored Nov 5, 2024
2 parents 381ccf6 + 5800e1a commit 50a8599
Showing 65 changed files with 1,128 additions and 4,447 deletions.
5 changes: 3 additions & 2 deletions RELEASE.md
@@ -9,6 +9,7 @@
most likely you discovered a bug and should not use an f-string in the first
place. If it is truly your intention to print the placeholder (not its
resolved value) for debugging purposes, use `repr()` or `!r` instead.
* Drop support for the Estimator API.

### For Pipeline Authors

@@ -224,7 +225,7 @@

## Bug Fixes and Other Changes

* Support to task type "workerpool1" of CLUSTER_SPEC in Vertex AI training's
* Support to task type "workerpool1" of CLUSTER_SPEC in Vertex AI training's
service according to the changes of task type in Tuner component.
* Propagates unexpected import failures in the public v1 module.

@@ -2887,4 +2888,4 @@ the 1.1.x release for TFX library.

### For component authors

* N/A
* N/A
4 changes: 1 addition & 3 deletions docs/guide/evaluator.md
@@ -66,9 +66,7 @@ import tensorflow_model_analysis as tfma
eval_config = tfma.EvalConfig(
    model_specs=[
        # This assumes a serving model with signature 'serving_default'. If
        # using estimator based EvalSavedModel, add signature_name='eval' and
        # remove the label_key. Note, if using a TFLite model, then you must set
        # model_type='tf_lite'.
        # using a TFLite model, then you must set model_type='tf_lite'.
        tfma.ModelSpec(label_key='<label_key>')
    ],
    metrics_specs=[
10 changes: 0 additions & 10 deletions docs/guide/fairness_indicators.md
@@ -43,16 +43,6 @@ an evaluation set that does, or considering proxy features within your feature
set that may highlight outcome disparities. For additional guidance, see
[here](https://tensorflow.org/responsible_ai/fairness_indicators/guide/guidance).

### Model

You can use the Tensorflow Estimator class to build your model. Support for
Keras models is coming soon to TFMA. If you would like to run TFMA on a Keras
model, please see the “Model-Agnostic TFMA” section below.

After your Estimator is trained, you will need to export a saved model for
evaluation purposes. To learn more, see the
[TFMA guide](https://www.tensorflow.org/tfx/model_analysis/get_started).

### Configuring Slices

Next, define the slices you would like to evaluate on:
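
For example, a slicing setup might look like the sketch below (hedged: the feature name `gender` is a hypothetical placeholder, and the snippet uses the `tfma.EvalConfig` style shown in the other guides rather than this page's exact code):

```python
import tensorflow_model_analysis as tfma

# Evaluate overall metrics plus metrics per value of a hypothetical 'gender' feature.
slicing_specs = [
    tfma.SlicingSpec(),                         # overall (unsliced) metrics
    tfma.SlicingSpec(feature_keys=['gender']),  # one slice per feature value
]

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='<label_key>')],
    slicing_specs=slicing_specs,
    metrics_specs=[
        tfma.MetricsSpec(metrics=[
            # Fairness Indicators reports rates such as FPR/FNR for each slice.
            tfma.MetricConfig(class_name='FairnessIndicators'),
        ]),
    ],
)
```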
17 changes: 0 additions & 17 deletions docs/guide/index.md
@@ -438,23 +438,6 @@ using the exact same code during both training and inference. Using the
modeling code, including the SavedModel from the Transform component, you can
consume your training and evaluation data and train your model.

When working with Estimator based models, the last section of your modeling
code should save your model as both a SavedModel and an EvalSavedModel. Saving
as an EvalSavedModel ensures the metrics used at training time are also
available during evaluation (note that this is not required for keras based
models). Saving an EvalSavedModel requires that you import the
[TensorFlow Model Analysis (TFMA)](tfma.md) library in your Trainer component.

```python
import tensorflow_model_analysis as tfma
...

tfma.export.export_eval_savedmodel(
    estimator=estimator,
    export_dir_base=eval_model_dir,
    eval_input_receiver_fn=receiver_fn)
```

An optional [Tuner](tuner.md) component can be added before Trainer to tune the
hyperparameters (e.g., number of layers) for the model. With the given model and
hyperparameters' search space, the tuning algorithm will find the best
62 changes: 4 additions & 58 deletions docs/guide/keras.md
@@ -38,54 +38,10 @@ they become available in TF 2.x, you can follow the

## Estimator

The Estimator API has been retained in TensorFlow 2.x, but is not the focus of
new features and development. Code written in TensorFlow 1.x or 2.x using
Estimators will continue to work as expected in TFX.
The Estimator API has been fully dropped since TensorFlow 2.16, so we have
discontinued support for it.

Here is an end-to-end TFX example using pure Estimator:
[Taxi example (Estimator)](https://github.com/tensorflow/tfx/blob/r0.21/tfx/examples/chicago_taxi_pipeline/taxi_utils.py)

## Keras with `model_to_estimator`

Keras models can be wrapped with the `tf.keras.estimator.model_to_estimator`
function, which allows them to work as if they were Estimators. To use this:

1. Build a Keras model.
2. Pass the compiled model into `model_to_estimator`.
3. Use the result of `model_to_estimator` in Trainer, the way you would
typically use an Estimator.

```py
# Build a Keras model.
def _keras_model_builder():
"""Creates a Keras model."""
...

model = tf.keras.Model(inputs=inputs, outputs=output)
model.compile()

return model


# Write a typical trainer function
def trainer_fn(trainer_fn_args, schema):
"""Build the estimator, using model_to_estimator."""
...

# Model to estimator
estimator = tf.keras.estimator.model_to_estimator(
keras_model=_keras_model_builder(), config=run_config)

return {
'estimator': estimator,
...
}
```

Other than the user module file of Trainer, the rest of the pipeline remains
unchanged.

## Native Keras (i.e. Keras without `model_to_estimator`)
## Native Keras (i.e. Keras without Estimator)

!!! Note
Full support for all features in Keras is in progress, in most cases,
@@ -101,7 +57,7 @@ Here are several examples with native Keras:
'Hello world' end-to-end example.
* [MNIST](https://github.com/tensorflow/tfx/blob/master/tfx/examples/mnist/mnist_pipeline_native_keras.py)
([module file](https://github.com/tensorflow/tfx/blob/master/tfx/examples/mnist/mnist_utils_native_keras.py)):
Image and TFLite end-to-end example.
Image end-to-end example.
* [Taxi](https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_native_keras.py)
([module file](https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi_pipeline/taxi_utils_native_keras.py)):
end-to-end example with advanced Transform usage.
@@ -132,11 +88,6 @@ will be discussed in the following Trainer and Evaluator sections.

#### Trainer

To configure native Keras, the `GenericExecutor` needs to be set for Trainer
component to replace the default Estimator based executor. For details, please
check
[here](trainer.md#configuring-the-trainer-component).

##### Keras Module file with Transform

The training module file must contain a `run_fn` which will be called by the
@@ -296,9 +247,4 @@ validate the current model compared with previous models. With this change, the
Pusher component now consumes a blessing result from Evaluator instead of
ModelValidator.

The new Evaluator supports Keras models as well as Estimator models. The
`_eval_input_receiver_fn` and eval saved model which were required previously
will no longer be needed with Keras, since Evaluator is now based on the same
`SavedModel` that is used for serving.

[See Evaluator for more information](evaluator.md).
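
As a rough illustration of that wiring — a hedged sketch, not this guide's own snippet; the upstream component handles (`example_gen`, `trainer`), the `eval_config`, and the push destination path are assumptions:

```python
from tfx import v1 as tfx

# Evaluator validates (and optionally blesses) the candidate model.
evaluator = tfx.components.Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],
    eval_config=eval_config)  # a tfma.EvalConfig, as described in evaluator.md

# Pusher consumes the blessing from Evaluator (formerly from ModelValidator)
# and only pushes models that passed validation.
pusher = tfx.components.Pusher(
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    push_destination=tfx.proto.PushDestination(
        filesystem=tfx.proto.PushDestination.Filesystem(
            base_directory='/path/to/serving_model_dir')))  # hypothetical path
```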
4 changes: 1 addition & 3 deletions docs/guide/modelval.md
@@ -33,9 +33,7 @@ import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(
    model_specs=[
        # This assumes a serving model with signature 'serving_default'. If
        # using estimator based EvalSavedModel, add signature_name: 'eval' and
        # remove the label_key.
        # This assumes a serving model with signature 'serving_default'.
        tfma.ModelSpec(label_key='<label_key>')
    ],
    metrics_specs=[
56 changes: 0 additions & 56 deletions docs/guide/train.md
@@ -22,59 +22,3 @@ a [Transform](transform.md) component, and the layers of the Transform model should
be included with your model so that when you export your SavedModel and
EvalSavedModel they will include the transformations that were created by the
[Transform](transform.md) component.
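
With Estimators removed, the Keras equivalent looks roughly like the sketch below (assumptions: a trained Keras `model`, a tf.Transform output directory, and a hypothetical `<label_key>`); attaching the Transform graph to the model keeps the exported SavedModel and the preprocessing consistent:

```python
import tensorflow as tf
import tensorflow_transform as tft


def _get_serve_tf_examples_fn(model, tf_transform_output):
  """Returns a serving function that parses raw tf.Examples and transforms them."""
  # Keep the Transform graph as an attribute of the model so it is tracked
  # and saved together with the SavedModel.
  model.tft_layer = tf_transform_output.transform_features_layer()

  @tf.function
  def serve_tf_examples_fn(serialized_tf_examples):
    raw_feature_spec = tf_transform_output.raw_feature_spec()
    raw_feature_spec.pop('<label_key>', None)  # hypothetical label key
    parsed_features = tf.io.parse_example(serialized_tf_examples, raw_feature_spec)
    transformed_features = model.tft_layer(parsed_features)
    return model(transformed_features)

  return serve_tf_examples_fn


# Usage sketch: attach the signature when exporting the SavedModel.
# tf_transform_output = tft.TFTransformOutput('<transform_output_dir>')
# signatures = {
#     'serving_default':
#         _get_serve_tf_examples_fn(model, tf_transform_output)
#             .get_concrete_function(
#                 tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')),
# }
# model.save('<serving_model_dir>', save_format='tf', signatures=signatures)
```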

A typical TensorFlow model design for TFX looks like this:

```python
def _build_estimator(tf_transform_dir,
                     config,
                     hidden_units=None,
                     warm_start_from=None):
  """Build an estimator for predicting the tipping behavior of taxi riders.

  Args:
    tf_transform_dir: directory in which the tf-transform model was written
      during the preprocessing step.
    config: tf.contrib.learn.RunConfig defining the runtime environment for the
      estimator (including model_dir).
    hidden_units: [int], the layer sizes of the DNN (input layer first)
    warm_start_from: Optional directory to warm start from.

  Returns:
    Resulting DNNLinearCombinedClassifier.
  """
  metadata_dir = os.path.join(tf_transform_dir,
                              transform_fn_io.TRANSFORMED_METADATA_DIR)
  transformed_metadata = metadata_io.read_metadata(metadata_dir)
  transformed_feature_spec = transformed_metadata.schema.as_feature_spec()

  transformed_feature_spec.pop(_transformed_name(_LABEL_KEY))

  real_valued_columns = [
      tf.feature_column.numeric_column(key, shape=())
      for key in _transformed_names(_DENSE_FLOAT_FEATURE_KEYS)
  ]
  categorical_columns = [
      tf.feature_column.categorical_column_with_identity(
          key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
      for key in _transformed_names(_VOCAB_FEATURE_KEYS)
  ]
  categorical_columns += [
      tf.feature_column.categorical_column_with_identity(
          key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
      for key in _transformed_names(_BUCKET_FEATURE_KEYS)
  ]
  categorical_columns += [
      tf.feature_column.categorical_column_with_identity(
          key, num_buckets=num_buckets, default_value=0)
      for key, num_buckets in zip(
          _transformed_names(_CATEGORICAL_FEATURE_KEYS),  #
          _MAX_CATEGORICAL_FEATURE_VALUES)
  ]
  return tf.estimator.DNNLinearCombinedClassifier(
      config=config,
      linear_feature_columns=categorical_columns,
      dnn_feature_columns=real_valued_columns,
      dnn_hidden_units=hidden_units or [100, 70, 50, 25],
      warm_start_from=warm_start_from)
```
12 changes: 5 additions & 7 deletions docs/guide/trainer.md
@@ -29,14 +29,14 @@ Trainer emits: At least one model for inference/serving (typically in SavedModel

We provide support for alternate model formats such as
[TFLite](https://www.tensorflow.org/lite) through the [Model Rewriting Library](https://github.com/tensorflow/tfx/blob/master/tfx/components/trainer/rewriting/README.md).
See the link to the Model Rewriting Library for examples of how to convert both Estimator and Keras
See the link to the Model Rewriting Library for examples of how to convert Keras
models.

## Generic Trainer

Generic trainer enables developers to use any TensorFlow model API with the
Trainer component. In addition to TensorFlow Estimators, developers can use
Keras models or custom training loops. For details, please see the
Trainer component. Developers can use Keras models or custom training loops.
For details, please see the
[RFC for generic trainer](https://github.com/tensorflow/community/blob/master/rfcs/20200117-tfx-generic-trainer.md).

### Configuring the Trainer Component
@@ -57,10 +57,8 @@ trainer = Trainer(
```

Trainer invokes a training module, which is specified in the `module_file`
parameter. Instead of `trainer_fn`, a `run_fn` is required in the module file if
the `GenericExecutor` is specified in the `custom_executor_spec`. The
`trainer_fn` was responsible for creating the model. In addition to that,
`run_fn` also needs to handle the training part and output the trained model to
parameter. A `run_fn` is required in the module file,
and it needs to handle the training part and output the trained model to
the desired location given by
[FnArgs](https://github.com/tensorflow/tfx/blob/master/tfx/components/trainer/fn_args_utils.py):
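
As a rough sketch of that contract — hedged: `_input_fn` and `_build_keras_model` are hypothetical helpers you would define in the same module file, and any Transform handling is omitted:

```python
import tensorflow as tf
from tfx.components.trainer.fn_args_utils import FnArgs


def run_fn(fn_args: FnArgs):
  """Trains the model based on the given args and exports it for serving."""
  # _input_fn and _build_keras_model are hypothetical helpers defined elsewhere
  # in the module file.
  train_dataset = _input_fn(fn_args.train_files, batch_size=64)
  eval_dataset = _input_fn(fn_args.eval_files, batch_size=64)

  model = _build_keras_model()
  model.fit(
      train_dataset,
      steps_per_epoch=fn_args.train_steps,
      validation_data=eval_dataset,
      validation_steps=fn_args.eval_steps)

  # Write the trained model to the location Trainer expects to export.
  model.save(fn_args.serving_model_dir, save_format='tf')
```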

3 changes: 0 additions & 3 deletions docs/tutorials/tfx/cloud-ai-platform-pipelines.md
@@ -333,9 +333,6 @@ Here is a brief description of the Python files.
- `features.py` / `features_test.py` — defines features for the model
- `preprocessing.py` / `preprocessing_test.py` — defines preprocessing
  jobs using `tf::Transform`
- `estimator` - This directory contains an Estimator based model.
    - `constants.py` — defines constants of the model
    - `model.py` / `model_test.py` — defines DNN model using TF estimator
- `keras` - This directory contains a Keras based model.
    - `constants.py` — defines constants of the model
    - `model.py` / `model_test.py` — defines DNN model using Keras
4 changes: 0 additions & 4 deletions docs/tutorials/tfx/tfx_for_mobile.md
@@ -30,10 +30,6 @@ The TFX Trainer expects a user-defined `run_fn` to be specified in
a module file. This `run_fn` defines the model to be trained,
trains it for the specified number of iterations, and exports the trained model.
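
As a rough sketch of the export step at the end of such a `run_fn` — hedged: the rewriting-module paths and names below are recalled from the TFX model rewriting library and should be treated as assumptions, and `model`, `signatures`, and `fn_args` come from the surrounding `run_fn`:

```python
import os

from tfx.components.trainer.rewriting import converters
from tfx.components.trainer.rewriting import rewriter
from tfx.components.trainer.rewriting import rewriter_factory

# Save a plain SavedModel to a temporary location first, then rewrite it to
# TFLite into the serving directory that Trainer exports.
temp_saving_model_dir = os.path.join(fn_args.serving_model_dir, 'temp')
model.save(temp_saving_model_dir, save_format='tf', signatures=signatures)

tfrw = rewriter_factory.create_rewriter(
    rewriter_factory.TFLITE_REWRITER, name='tflite_rewriter')
converters.rewrite_saved_model(temp_saving_model_dir,
                               fn_args.serving_model_dir,
                               tfrw,
                               rewriter.ModelType.TFLITE_MODEL)
```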

In the rest of this section, we provide code snippets which show the changes
required to invoke the TFLite rewriter and export a TFLite model. All of this
code is located in the `run_fn` of the [MNIST TFLite module](https://github.com/tensorflow/tfx/blob/master/tfx/examples/mnist/mnist_utils_native_keras_lite.py).

As shown in the code below,
we must first create a signature that takes a `Tensor` for every feature as
input. Note that this is a departure from most existing models in TFX, which take