author     Khanh LeViet <khanhlvg@google.com>  2020-02-03 17:54:54 -0800
committer  TensorFlower Gardener <gardener@tensorflow.org>  2020-02-03 17:58:37 -0800
commit     14b79b98bf8759d94e852fdc3186e1cd63d2316c (patch)
tree       0719f920557129a665080ff0dbf9deaf407a1e91
parent     70e675ebcf44a86a1a7b0b0070bfc116e26e3a32 (diff)
download   tensorflow-14b79b98bf8759d94e852fdc3186e1cd63d2316c.tar.gz

Update TFL doc migration

PiperOrigin-RevId: 293048235
Change-Id: Ib5db4af490ffe223e470c2d81aa7954fa5c7cb96
-rw-r--r--  tensorflow/lite/g3doc/_book.yaml                      3
-rw-r--r--  tensorflow/lite/g3doc/convert/1x_compatibility.md   117
-rw-r--r--  tensorflow/lite/g3doc/convert/cmdline.md              2
-rw-r--r--  tensorflow/lite/g3doc/convert/python_api.md          88
4 files changed, 121 insertions, 89 deletions
diff --git a/tensorflow/lite/g3doc/_book.yaml b/tensorflow/lite/g3doc/_book.yaml
index a64e56d4bbd..a5206dc8123 100644
--- a/tensorflow/lite/g3doc/_book.yaml
+++ b/tensorflow/lite/g3doc/_book.yaml
@@ -42,6 +42,8 @@ upper_tabs:
path: /lite/convert/quantization
- title: "Convert RNN models"
path: /lite/convert/rnn
+ - title: "1.x compatibility"
+ path: /lite/convert/1x_compatibility
- heading: "Inference"
- title: "Overview"
@@ -54,6 +56,7 @@ upper_tabs:
path: /lite/guide/ops_compatibility
- title: "Select operators from TensorFlow"
path: /lite/guide/ops_select
+ status: experimental
- title: "List of hosted models"
path: /lite/guide/hosted_models
diff --git a/tensorflow/lite/g3doc/convert/1x_compatibility.md b/tensorflow/lite/g3doc/convert/1x_compatibility.md
new file mode 100644
index 00000000000..adb2af4d8ad
--- /dev/null
+++ b/tensorflow/lite/g3doc/convert/1x_compatibility.md
@@ -0,0 +1,117 @@
+# TensorFlow 1.x compatibility
+
+The `tf.lite.TFLiteConverter` was updated between TensorFlow 1.X and 2.0. This
+document describes the differences between the 1.X and 2.0 versions of the
+converter, and explains how to use the 1.X version if required.
+
+## Summary of changes in Python API between 1.X and 2.0 <a name="differences"></a>
+
+The following section summarizes the changes in the Python API from 1.X to 2.0.
+If any of the changes raise concerns, please file a
+[GitHub issue](https://github.com/tensorflow/tensorflow/issues).
+
+### Formats supported by `TFLiteConverter`
+
+The 2.0 version of the converter supports SavedModel and Keras model files
+generated in both 1.X and 2.0. However, the conversion process no longer
+supports "frozen graph" `GraphDef` files generated in 1.X.
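+
+As a minimal sketch of the 2.0 flow (the model below is just a placeholder; a
+SavedModel directory can likewise be passed to
+`tf.lite.TFLiteConverter.from_saved_model`), a Keras model is converted as
+follows:
+
+```python
+import tensorflow as tf
+
+# Placeholder model; any tf.keras model can be converted the same way.
+model = tf.keras.Sequential([
+    tf.keras.layers.Dense(units=1, input_shape=[4])
+])
+
+converter = tf.lite.TFLiteConverter.from_keras_model(model)
+tflite_model = converter.convert()
+```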
+
+#### Converting frozen graphs
+
+Users who want to convert frozen graph `GraphDef` files (`.pb` files) to
+TensorFlow Lite should use `tf.compat.v1.lite.TFLiteConverter`.
+
+The following snippet shows a frozen graph file being converted:
+
+```python
+import tensorflow as tf
+
+# Path to the frozen graph file
+graph_def_file = 'frozen_graph.pb'
+# A list of the names of the model's input tensors
+input_arrays = ['input_name']
+# A list of the names of the model's output tensors
+output_arrays = ['output_name']
+# Load and convert the frozen graph using the TensorFlow 1.X converter
+converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
+    graph_def_file, input_arrays, output_arrays)
+tflite_model = converter.convert()
+# Write the converted model to disk
+open("converted_model.tflite", "wb").write(tflite_model)
+```
+
+### Quantization-aware training
+
+The following attributes and methods associated with
+[quantization-aware training](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize)
+have been removed from `TFLiteConverter` in TensorFlow 2.0:
+
+* `inference_type`
+* `inference_input_type`
+* `quantized_input_stats`
+* `default_ranges_stats`
+* `reorder_across_fake_quant`
+* `change_concat_input_ranges`
+* `post_training_quantize` - Deprecated in the 1.X API
+* `get_input_arrays()`
+
+The rewriter function that supports quantization-aware training does not
+support models generated by TensorFlow 2.0. Additionally, TensorFlow Lite’s
+quantization API is being reworked and streamlined in a direction that supports
+quantization-aware training through the Keras API. These attributes have been
+removed from the 2.0 API until the new quantization API is launched. Users who
+want to convert models generated by the rewriter function can use
+`tf.compat.v1.lite.TFLiteConverter`, as in the sketch below.
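+
+The following minimal sketch converts a fake-quantized frozen graph produced by
+the rewriter; the file name, tensor names, and input statistics are
+placeholders:
+
+```python
+import tensorflow as tf
+
+# Placeholder graph trained with the tf.contrib.quantize rewriter.
+converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
+    'quant_aware_graph.pb', ['input'], ['output'])
+converter.inference_type = tf.uint8
+# (mean, standard deviation) used to map real-valued inputs to uint8.
+converter.quantized_input_stats = {'input': (127.5, 127.5)}
+tflite_model = converter.convert()
+```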
+
+### Changes to `TFLiteConverter` attributes
+
+The `target_ops` attribute has been moved to `TargetSpec` and renamed to
+`supported_ops`, in line with future additions to the optimization framework.
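+
+For example (a minimal sketch; the SavedModel directory is a placeholder), code
+that previously set `target_ops` now sets the attribute through `target_spec`:
+
+```python
+import tensorflow as tf
+
+converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
+
+# TensorFlow 1.X:
+#   converter.target_ops = set([tf.lite.OpsSet.TFLITE_BUILTINS])
+# TensorFlow 2.0:
+converter.target_spec.supported_ops = [
+    tf.lite.OpsSet.TFLITE_BUILTINS,  # built-in TensorFlow Lite ops
+    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to select TensorFlow ops
+]
+tflite_model = converter.convert()
+```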
+
+Additionally, the following attributes have been removed:
+
+* `drop_control_dependency` (default: `True`) - Control flow is currently not
+  supported by TFLite so it is always `True`.
+* _Graph visualization_ - The recommended approach for visualizing a
+  TensorFlow Lite graph in TensorFlow 2.0 is to use
+  [visualize.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/visualize.py)
+  (see the sketch after this list). Unlike GraphViz, it enables users to
+  visualize the graph after post-training quantization has occurred. The
+  following attributes related to graph visualization have been removed:
+    * `output_format`
+    * `dump_graphviz_dir`
+    * `dump_graphviz_video`
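+
+A rough sketch of invoking the script, assuming its two-argument form of
+`<input tflite> <output html>` and that its dependencies are available:
+
+```sh
+python tensorflow/lite/tools/visualize.py converted_model.tflite model_viz.html
+```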
+
+### General API changes
+
+The following sections describe several other significant API changes between
+TensorFlow 1.X and 2.0.
+
+#### Conversion methods
+
+The following methods, which were previously deprecated in 1.X, are no longer
+exported in 2.0 (see the sketch after this list):
+
+* `lite.toco_convert`
+* `lite.TocoConverter`
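+
+As a minimal sketch (the SavedModel directory is a placeholder), 1.X code that
+used `TocoConverter` maps directly onto `TFLiteConverter`:
+
+```python
+import tensorflow as tf
+
+# TensorFlow 1.X (deprecated; no longer exported in 2.0):
+#   converter = tf.lite.TocoConverter.from_saved_model('saved_model_dir')
+# TensorFlow 2.0:
+converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
+tflite_model = converter.convert()
+```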
+
+#### `lite.constants`
+
+The `lite.constants` API was removed in 2.0 in order to decrease duplication
+between TensorFlow and TensorFlow Lite. The following list maps `lite.constants`
+types to their TensorFlow equivalents:
+
+* `lite.constants.FLOAT`: `tf.float32`
+* `lite.constants.INT8`: `tf.int8`
+* `lite.constants.INT32`: `tf.int32`
+* `lite.constants.INT64`: `tf.int64`
+* `lite.constants.STRING`: `tf.string`
+* `lite.constants.QUANTIZED_UINT8`: `tf.uint8`
+
+Additionally, `lite.constants.TFLITE` and `lite.constants.GRAPHVIZ_DOT` were
+removed due to the deprecation of the `output_format` flag in `TFLiteConverter`.
+
+#### `lite.OpHint`
+
+The `OpHint` API is currently not available in 2.0 due to an incompatibility
+with the 2.0 APIs. This API enables conversion of LSTM-based models. Support
+for LSTMs in 2.0 is being investigated. All related `lite.experimental` APIs
+have been removed due to this issue.
diff --git a/tensorflow/lite/g3doc/convert/cmdline.md b/tensorflow/lite/g3doc/convert/cmdline.md
index 4d9e445637e..2d89c04e6f1 100644
--- a/tensorflow/lite/g3doc/convert/cmdline.md
+++ b/tensorflow/lite/g3doc/convert/cmdline.md
@@ -1,7 +1,7 @@
# Converter command line reference
This page describes how to use the [TensorFlow Lite converter](index.md) using
-the command line tool. However, The[Python API](python_api.md) is recommended
+the command line tool. However, the [Python API](python_api.md) is recommended
for the majority of cases.
Note: This only contains documentation on the command line tool in TensorFlow 2.
diff --git a/tensorflow/lite/g3doc/convert/python_api.md b/tensorflow/lite/g3doc/convert/python_api.md
index b8f7312c3fc..8fd32325705 100644
--- a/tensorflow/lite/g3doc/convert/python_api.md
+++ b/tensorflow/lite/g3doc/convert/python_api.md
@@ -164,94 +164,6 @@ for tf_result, tflite_result in zip(tf_results, tflite_results):
np.testing.assert_almost_equal(tf_result, tflite_result, decimal=5)
```
-## Summary of changes in Python API between 1.X and 2.0 <a name="differences"></a>
-
-The following section summarizes the changes in the Python API from 1.X to 2.0.
-If any of the changes raise concerns, please file a
-[GitHub issue](https://github.com/tensorflow/tensorflow/issues).
-
-### Formats supported by `TFLiteConverter`
-
-`TFLiteConverter` in 2.0 supports SavedModels and Keras model files generated in
-both 1.X and 2.0. However, the conversion process no longer supports frozen
-`GraphDefs` generated in 1.X. Users who want to convert frozen `GraphDefs` to
-TensorFlow Lite should use `tf.compat.v1.lite.TFLiteConverter`.
-
-### Quantization-aware training
-
-The following attributes and methods associated with
-[quantization-aware training](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize)
-have been removed from `TFLiteConverter` in TensorFlow 2.0:
-
-* `inference_type`
-* `inference_input_type`
-* `quantized_input_stats`
-* `default_ranges_stats`
-* `reorder_across_fake_quant`
-* `change_concat_input_ranges`
-* `post_training_quantize` - Deprecated in the 1.X API
-* `get_input_arrays()`
-
-The rewriter function that supports quantization-aware training does not support
-models generated by TensorFlow 2.0. Additionally, TensorFlow Lite’s quantization
-API is being reworked and streamlined in a direction that supports
-quantization-aware training through the Keras API. These attributes will be
-removed in the 2.0 API until the new quantization API is launched. Users who
-want to convert models generated by the rewriter function can use
-`tf.compat.v1.lite.TFLiteConverter`.
-
-### Changes to `TFLiteConverter` attributes
-
-The `target_ops` attribute has become an attribute of `TargetSpec` and renamed
-to `supported_ops` in line with future additions to the optimization framework.
-
-Additionally, the following attributes have been removed:
-
-* `drop_control_dependency` (default: `True`) - Control flow is currently not
- supported by TFLite so it is always `True`.
-* _Graph visualization_ - The recommended approach for visualizing a
- TensorFlow Lite graph in TensorFlow 2.0 will be to use
- [visualize.py](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tools/visualize.py).
- Unlike GraphViz, it enables users to visualize the graph after post training
- quantization has occurred. The following attributes related to graph
- visualization will be removed:
- * `output_format`
- * `dump_graphviz_dir`
- * `dump_graphviz_video`
-
-### General API changes
-
-#### Conversion methods
-
-The following methods that were previously deprecated in 1.X will no longer be
-exported in 2.0:
-
-* `lite.toco_convert`
-* `lite.TocoConverter`
-
-#### `lite.constants`
-
-The `lite.constants` API was removed in 2.0 in order to decrease duplication
-between TensorFlow and TensorFlow Lite. The following list maps the
-`lite.constant` type to the TensorFlow type:
-
-* `lite.constants.FLOAT`: `tf.float32`
-* `lite.constants.INT8`: `tf.int8`
-* `lite.constants.INT32`: `tf.int32`
-* `lite.constants.INT64`: `tf.int64`
-* `lite.constants.STRING`: `tf.string`
-* `lite.constants.QUANTIZED_UINT8`: `tf.uint8`
-
-Additionally, `lite.constants.TFLITE` and `lite.constants.GRAPHVIZ_DOT` were
-removed due to the deprecation of the `output_format` flag in `TFLiteConverter`.
-
-#### `lite.OpHint`
-
-The `OpHint` API is currently not available in 2.0 due to an incompatibility
-with the 2.0 APIs. This API enables conversion of LSTM based models. Support for
-LSTMs in 2.0 is being investigated. All related `lite.experimental` APIs have
-been removed due to this issue.
-
## Installing TensorFlow <a name="versioning"></a>
### Installing the TensorFlow nightly <a name="2.0-nightly"></a>