TensorFlow Lite Model Metadata is a standard format for model descriptions. It contains rich semantics for general model information, inputs/outputs, and associated files, which makes models self-describing and exchangeable.
Model metadata is currently used in two primary ways:
1. Enabling easy model inference through the TensorFlow Lite Task Library and codegen tools. The metadata contains the information required during inference, such as the label file in image classification, the sampling rate of the audio input in audio classification, and the tokenizer type for processing input strings in natural language models.
2. Enabling model creators to include documentation, such as descriptions of the model's inputs/outputs or notes on how to use the model. Model users can view this documentation through visualization tools such as Netron.
The TensorFlow Lite Metadata Writer API provides an easy-to-use API to create model metadata for the popular ML tasks supported by the TFLite Task Library. This notebook shows examples of how to populate the metadata for the tasks below.
Metadata writers for BERT natural language classifiers and BERT question answerers are coming soon.
If you want to add metadata for use cases that are not supported, use the Flatbuffers Python API. See the tutorial here.
Prerequisites
Install the TensorFlow Lite Support Pypi package.
pip install tflite-support-nightly
Create model metadata for the Task Library and Codegen
Image classifiers
See the image classifier model compatibility requirements for more details about the supported model formats.
Step 1: Import the required packages.
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils
Step 2: Download the example image classifier, mobilenet_v2_1.0_224.tflite, and the label file.
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/mobilenet_v2_1.0_224.tflite -o mobilenet_v2_1.0_224.tflite
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/labels.txt -o mobilenet_labels.txt
Step 3: Create the metadata writer and populate.
ImageClassifierWriter = image_classifier.MetadataWriter
_MODEL_PATH = "mobilenet_v2_1.0_224.tflite"
# Task Library expects label files that are in the same format as the one below.
_LABEL_FILE = "mobilenet_labels.txt"
_SAVE_TO_PATH = "mobilenet_v2_1.0_224_metadata.tflite"
# Normalization parameters are required when processing the image. They are
# optional if the image pixel values are in the range [0, 255] and the input
# tensor is quantized to uint8. See the introduction to normalization and
# quantization parameters below for more details:
# https://tensorflow.dev.org.tw/lite/models/convert/metadata#normalization_and_quantization_parameters
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5
# Create the metadata writer.
writer = ImageClassifierWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABEL_FILE])
# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())
# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
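The normalization parameters above are applied as normalized = (pixel - mean) / std. A minimal sketch (independent of the Metadata Writer API) showing why mean = std = 127.5 maps uint8 pixel values in [0, 255] to [-1, 1]:

```python
def normalize(pixel, mean=127.5, std=127.5):
    # Map a raw pixel value into the range the float model expects.
    return (pixel - mean) / std

print(normalize(0))    # darkest pixel  -> -1.0
print(normalize(255))  # brightest pixel -> 1.0
```

The metadata only records these parameters; the Task Library applies the normalization for you at inference time.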
Object detectors
See the object detector model compatibility requirements for more details about the supported model formats.
Step 1: Import the required packages.
from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils
Step 2: Download the example object detector, ssd_mobilenet_v1.tflite, and the label file.
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/object_detector/ssd_mobilenet_v1.tflite -o ssd_mobilenet_v1.tflite
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/object_detector/labelmap.txt -o ssd_mobilenet_labels.txt
Step 3: Create the metadata writer and populate.
ObjectDetectorWriter = object_detector.MetadataWriter
_MODEL_PATH = "ssd_mobilenet_v1.tflite"
# Task Library expects label files that are in the same format as the one below.
_LABEL_FILE = "ssd_mobilenet_labels.txt"
_SAVE_TO_PATH = "ssd_mobilenet_v1_metadata.tflite"
# Normalization parameters are required when processing the image. They are
# optional if the image pixel values are in the range [0, 255] and the input
# tensor is quantized to uint8. See the introduction to normalization and
# quantization parameters below for more details:
# https://tensorflow.dev.org.tw/lite/models/convert/metadata#normalization_and_quantization_parameters
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5
# Create the metadata writer.
writer = ObjectDetectorWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABEL_FILE])
# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())
# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
Image segmenters
See the image segmenter model compatibility requirements for more details about the supported model formats.
Step 1: Import the required packages.
from tflite_support.metadata_writers import image_segmenter
from tflite_support.metadata_writers import writer_utils
Step 2: Download the example image segmenter, deeplabv3.tflite, and the label file.
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_segmenter/deeplabv3.tflite -o deeplabv3.tflite
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_segmenter/labelmap.txt -o deeplabv3_labels.txt
Step 3: Create the metadata writer and populate.
ImageSegmenterWriter = image_segmenter.MetadataWriter
_MODEL_PATH = "deeplabv3.tflite"
# Task Library expects label files that are in the same format as the one below.
_LABEL_FILE = "deeplabv3_labels.txt"
_SAVE_TO_PATH = "deeplabv3_metadata.tflite"
# Normalization parameters are required when processing the image. They are
# optional if the image pixel values are in the range [0, 255] and the input
# tensor is quantized to uint8. See the introduction to normalization and
# quantization parameters below for more details:
# https://tensorflow.dev.org.tw/lite/models/convert/metadata#normalization_and_quantization_parameters
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5
# Create the metadata writer.
writer = ImageSegmenterWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABEL_FILE])
# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())
# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
Natural language classifiers
See the natural language classifier model compatibility requirements for more details about the supported model formats.
Step 1: Import the required packages.
from tflite_support.metadata_writers import nl_classifier
from tflite_support.metadata_writers import metadata_info
from tflite_support.metadata_writers import writer_utils
Step 2: Download the example natural language classifier, movie_review.tflite, the label file, and the vocab file.
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/nl_classifier/movie_review.tflite -o movie_review.tflite
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/nl_classifier/labels.txt -o movie_review_labels.txt
curl -L https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/nl_classifier/vocab.txt -o movie_review_vocab.txt
Step 3: Create the metadata writer and populate.
NLClassifierWriter = nl_classifier.MetadataWriter
_MODEL_PATH = "movie_review.tflite"
# Task Library expects label files and vocab files that are in the same formats
# as the ones below.
_LABEL_FILE = "movie_review_labels.txt"
_VOCAB_FILE = "movie_review_vocab.txt"
# NLClassifier supports tokenizing the input string with the regex tokenizer.
# See more details about how to set up RegexTokenizer below:
# https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/metadata_writers/metadata_info.py#L130
_DELIM_REGEX_PATTERN = r"[^\w\']+"
_SAVE_TO_PATH = "movie_review_metadata.tflite"
# Create the metadata writer.
writer = NLClassifierWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH),
    metadata_info.RegexTokenizerMd(_DELIM_REGEX_PATTERN, _VOCAB_FILE),
    [_LABEL_FILE])
# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())
# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
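A quick sketch of what the delimiter pattern in _DELIM_REGEX_PATTERN does: the input string is split on runs of characters that are neither word characters nor apostrophes (the Task Library additionally handles details such as vocab lookup internally):

```python
import re

_DELIM_REGEX_PATTERN = r"[^\w\']+"

def tokenize(text):
    # Split on runs of delimiter characters and drop empty tokens
    # produced by leading/trailing delimiters.
    return [t for t in re.split(_DELIM_REGEX_PATTERN, text) if t]

print(tokenize("It's a great movie!"))  # -> ["It's", 'a', 'great', 'movie']
```

Because the metadata records the pattern and the vocab file, NLClassifier can reproduce the same tokenization at inference time without any extra code from the model user.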
Audio classifiers
See the audio classifier model compatibility requirements for more details about the supported model formats.
Step 1: Import the required packages.
from tflite_support.metadata_writers import audio_classifier
from tflite_support.metadata_writers import metadata_info
from tflite_support.metadata_writers import writer_utils
Step 2: Download the example audio classifier, yamnet.tflite, and the label file.
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/audio_classifier/yamnet_wavin_quantized_mel_relu6.tflite -o yamnet.tflite
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/audio_classifier/yamnet_521_labels.txt -o yamnet_labels.txt
Step 3: Create the metadata writer and populate.
AudioClassifierWriter = audio_classifier.MetadataWriter
_MODEL_PATH = "yamnet.tflite"
# Task Library expects label files that are in the same format as the one below.
_LABEL_FILE = "yamnet_labels.txt"
# Expected sampling rate of the input audio buffer.
_SAMPLE_RATE = 16000
# Expected number of channels of the input audio buffer. Note that the Task
# Library only supports a single channel so far.
_CHANNELS = 1
_SAVE_TO_PATH = "yamnet_metadata.tflite"
# Create the metadata writer.
writer = AudioClassifierWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), _SAMPLE_RATE, _CHANNELS, [_LABEL_FILE])
# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())
# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
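The sample rate and channel count recorded in the metadata determine how a raw PCM buffer is interpreted at inference time. A small sketch (independent of the Task Library) of the relationship:

```python
_SAMPLE_RATE = 16000  # samples per second expected by the model
_CHANNELS = 1         # Task Library currently supports mono only

def buffer_duration_seconds(num_samples, sample_rate=_SAMPLE_RATE, channels=_CHANNELS):
    # For interleaved PCM: total samples / (rate * channels) = seconds of audio.
    return num_samples / (sample_rate * channels)

print(buffer_duration_seconds(16000))  # 1.0 second of mono 16 kHz audio
```

If the metadata were missing, a caller would have no way to know whether a given buffer represents one second or two seconds of audio; packing these values into the model removes that ambiguity.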
Create model metadata with semantic information
You can populate more descriptive information about the model and each tensor through the Metadata Writer API to help improve model understanding. This is done through the create_from_metadata_info method of each metadata writer. In general, you populate data through the parameters of create_from_metadata_info, i.e. general_md, input_md, and output_md. See the example below, which creates rich model metadata for an image classifier.
Step 1: Import the required packages.
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import metadata_info
from tflite_support.metadata_writers import writer_utils
from tflite_support import metadata_schema_py_generated as _metadata_fb
Step 2: Download the example image classifier, mobilenet_v2_1.0_224.tflite, and the label file.
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/mobilenet_v2_1.0_224.tflite -o mobilenet_v2_1.0_224.tflite
curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/labels.txt -o mobilenet_labels.txt
Step 3: Create the model and tensor information.
model_buffer = writer_utils.load_file("mobilenet_v2_1.0_224.tflite")
# Create general model information.
general_md = metadata_info.GeneralMd(
    name="ImageClassifier",
    version="v1",
    description=("Identify the most prominent object in the image from a "
                 "known set of categories."),
    author="TensorFlow Lite",
    licenses="Apache License. Version 2.0")
# Create input tensor information.
input_md = metadata_info.InputImageTensorMd(
    name="input image",
    description=("Input image to be classified. The expected image is "
                 "128 x 128, with three channels (red, blue, and green) per "
                 "pixel. Each element in the tensor is a value between min and "
                 "max, where (per-channel) min is [0] and max is [255]."),
    norm_mean=[127.5],
    norm_std=[127.5],
    color_space_type=_metadata_fb.ColorSpaceType.RGB,
    tensor_type=writer_utils.get_input_tensor_types(model_buffer)[0])
# Create output tensor information.
output_md = metadata_info.ClassificationTensorMd(
    name="probability",
    description="Probabilities of the 1001 labels respectively.",
    label_files=[
        metadata_info.LabelFileMd(file_path="mobilenet_labels.txt",
                                  locale="en")
    ],
    tensor_type=writer_utils.get_output_tensor_types(model_buffer)[0])
Step 4: Create the metadata writer and populate.
ImageClassifierWriter = image_classifier.MetadataWriter
_SAVE_TO_PATH = "mobilenet_v2_1.0_224_metadata.tflite"
# Create the metadata writer.
writer = ImageClassifierWriter.create_from_metadata_info(
    model_buffer, general_md, input_md, output_md)
# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())
# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
Read the metadata populated into the model
You can display the metadata and the associated files in a TFLite model with the following code:
from tflite_support import metadata

displayer = metadata.MetadataDisplayer.with_model_file("mobilenet_v2_1.0_224_metadata.tflite")
print("Metadata populated:")
print(displayer.get_metadata_json())
print("Associated file(s) populated:")
for file_name in displayer.get_packed_associated_file_list():
    print("file name: ", file_name)
    print("file content:")
    print(displayer.get_associated_file_buffer(file_name))
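Associated files are packed into the model as a zip archive appended to the FlatBuffer, so a model populated as above can also be inspected with ordinary zip tools, without tflite-support. A minimal sketch (the model filename is the one produced earlier):

```python
import zipfile

def list_associated_files(model_path):
    # A TFLite model with packed associated files is readable as a zip archive;
    # the entries are the files passed to the metadata writer (e.g. label files).
    with zipfile.ZipFile(model_path) as zf:
        return zf.namelist()

# e.g. list_associated_files("mobilenet_v2_1.0_224_metadata.tflite")
# would include the packed label file.
```

This is handy for quick sanity checks on a device or CI machine where installing the full support library is inconvenient.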