# Serving Dynamically Updated TensorFlow Model with Batching

This tutorial shows you how to use TensorFlow Serving components to build a
server that dynamically discovers and serves new versions of a trained
TensorFlow model. You'll also learn how to use the TensorFlow Serving
batcher to do batched inference. The code examples in this tutorial focus on the
discovery, batching, and serving logic. If you just want to use TensorFlow
Serving to serve a single-version model without batching, see the
[TensorFlow Serving basic tutorial](serving_basic.md).

This tutorial uses the simple Softmax Regression model introduced in the
TensorFlow tutorial for handwritten image (MNIST data) classification. If you
don't know what TensorFlow or MNIST is, see the MNIST For ML Beginners
tutorial.

This tutorial steps through the following tasks:

1. Train and export TensorFlow model.
2. Manage model versioning with TensorFlow Serving manager.
3. Handle request batching with TensorFlow Serving batcher.
4. Serve request with TensorFlow Serving manager.
5. Run and test the service.

Before getting started, please complete the [prerequisites](setup.md#prerequisites).

## Train And Export TensorFlow Model

`$>bazel-bin/tensorflow_serving/example/mnist_export --training_iteration=2000 --`

As you can see in `mnist_export.py`, the training and exporting are done the
same way as in the
[TensorFlow Serving basic tutorial](serving_basic.md). For
demonstration purposes, you're intentionally dialing down the training
iterations for the first run and exporting it as v1, while training it normally
for the second run and exporting it as v2 to the same parent directory -- as
we'd expect the latter to achieve better classification accuracy due to more
intensive training.

For example, you could build a source that
monitors cloud storage instead of local storage, or you could build a version
policy plugin that does version transition in a different way -- in fact, you
could even build a custom model plugin that serves non-TensorFlow models. These
topics are out of scope for this tutorial; however, you can refer to the
[custom source](custom_source.md) and
[custom servable](custom_servable.md) documents for more information.

## Batching

To put all these into the context of this tutorial:

`DoClassifyInBatch` is then just about requesting `SessionBundle` from the
manager and using it to run inference. Most of the logic and flow is very
similar to that described in the
[TensorFlow Serving basic tutorial](serving_basic.md), with just a few
key changes:

* The input tensor now has its first dimension set to a variable batch size.
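The batched-inference idea behind `DoClassifyInBatch` can be sketched as: stack the per-request inputs along a new first (batch) dimension, run the model once over the whole batch, and hand one result back per request. A minimal Python illustration, where `run_model` is a hypothetical stand-in for `Session::Run` on the batched tensor:

```python
def classify_in_batch(run_model, requests):
    # Stack per-request inputs so the first dimension is the batch size,
    # run the model once over the whole batch, then fan the outputs back
    # out, one per request.
    batched_input = list(requests)              # shape: [batch_size, ...]
    batched_output = run_model(batched_input)   # a single model run
    assert len(batched_output) == len(requests)
    return batched_output

def argmax_model(batch):
    # Toy "classifier": the predicted class is the index of the largest
    # score in each input vector.
    return [max(range(len(x)), key=lambda i: x[i]) for x in batch]
```

The point of batching is that the single `run_model` call amortizes per-run overhead (and, on GPUs, fills the device) across all queued requests, instead of paying it once per request.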