diff --git a/doc/frameworks/tensorflow/deploying_tensorflow_serving.rst b/doc/frameworks/tensorflow/deploying_tensorflow_serving.rst
index 1d7344fbbb..a645cd5a62 100644
--- a/doc/frameworks/tensorflow/deploying_tensorflow_serving.rst
+++ b/doc/frameworks/tensorflow/deploying_tensorflow_serving.rst
@@ -64,7 +64,7 @@ If you already have existing model artifacts in S3, you can skip training and de
 
     from sagemaker.tensorflow import TensorFlowModel
 
-    model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')
+    model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole', framework_version='x.x.x')
 
     predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
 
@@ -74,7 +74,7 @@ Python-based TensorFlow serving on SageMaker has support for `Elastic Inference
 
     from sagemaker.tensorflow import TensorFlowModel
 
-    model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')
+    model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole', framework_version='x.x.x')
 
     predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge', accelerator_type='ml.eia1.medium')
 
diff --git a/doc/frameworks/tensorflow/using_tf.rst b/doc/frameworks/tensorflow/using_tf.rst
index 979e86d8b6..5b888f95be 100644
--- a/doc/frameworks/tensorflow/using_tf.rst
+++ b/doc/frameworks/tensorflow/using_tf.rst
@@ -468,7 +468,7 @@ If you already have existing model artifacts in S3, you can skip training and de
 
     from sagemaker.tensorflow import TensorFlowModel
 
-    model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')
+    model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole', framework_version='x.x.x')
 
     predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
 
@@ -478,7 +478,7 @@ Python-based TensorFlow serving on SageMaker has support for `Elastic Inference
 
     from sagemaker.tensorflow import TensorFlowModel
 
-    model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')
+    model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole', framework_version='x.x.x')
 
     predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge', accelerator_type='ml.eia1.medium')
 
@@ -767,7 +767,8 @@ This customized Python code must be named ``inference.py`` and is specified thro
 
     model = TensorFlowModel(entry_point='inference.py',
                             model_data='s3://mybucket/model.tar.gz',
-                            role='MySageMakerRole')
+                            role='MySageMakerRole',
+                            framework_version='x.x.x')
 
 In the example above, ``inference.py`` is assumed to be a file inside ``model.tar.gz``. If you want to use a local file instead, you must add the ``source_dir`` argument. See the documentation on `TensorFlowModel `_.
 
@@ -923,7 +924,8 @@ processing. There are 2 ways to do this:
     model = TensorFlowModel(entry_point='inference.py',
                             dependencies=['requirements.txt'],
                             model_data='s3://mybucket/model.tar.gz',
-                            role='MySageMakerRole')
+                            role='MySageMakerRole',
+                            framework_version='x.x.x')
 
 
 2. If you are working in a network-isolation situation or if you don't
@@ -941,7 +943,8 @@ processing. There are 2 ways to do this:
     model = TensorFlowModel(entry_point='inference.py',
                             dependencies=['/path/to/folder/named/lib'],
                             model_data='s3://mybucket/model.tar.gz',
-                            role='MySageMakerRole')
+                            role='MySageMakerRole',
+                            framework_version='x.x.x')
 
 For more information, see: https://github.com/aws/sagemaker-tensorflow-serving-container#prepost-processing
 
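For reference, the documented pattern end to end with the ``framework_version`` placeholder pinned to a concrete value. This is a minimal sketch; the bucket, IAM role, version number, and sample input below are illustrative assumptions, not values taken from the docs:

    from sagemaker.tensorflow import TensorFlowModel

    # Assumed placeholders: substitute your own bucket, IAM role, and the
    # TensorFlow version that the SavedModel in model.tar.gz was exported with.
    model = TensorFlowModel(
        model_data='s3://mybucket/model.tar.gz',
        role='MySageMakerRole',
        framework_version='2.3.0',  # pin a real version instead of 'x.x.x'
    )

    # Deploys a TensorFlow Serving endpoint backed by the existing artifacts.
    predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')

    # The predictor serializes inputs into TensorFlow Serving's 'instances' request format.
    print(predictor.predict([[1.0, 2.0, 3.0, 4.0]]))

    # Tear down the endpoint when finished to stop incurring charges.
    predictor.delete_endpoint()

Pinning ``framework_version`` selects the serving container image explicitly rather than relying on a library default, which keeps the examples reproducible across SDK releases.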