Welcome to the guide on Keras weight pruning for improving the latency of on-device inference via XNNPACK.
This guide presents the usage of the newly introduced tfmot.sparsity.keras.PruningPolicy API and demonstrates how it can be used to accelerate mostly convolutional models on modern CPUs using XNNPACK sparse inference.
The guide covers the following steps of the model creation process:
- Build and train the dense baseline
- Fine-tune the model with pruning
- Convert to TFLite
- Benchmark on-device
The guide doesn't cover best practices for fine-tuning with pruning. For more detailed information on this topic, please check out our comprehensive guide.
Setup
pip install -q tensorflow
pip install -q tensorflow-model-optimization
import tempfile
import tensorflow as tf
import numpy as np
import tensorflow_datasets as tfds
import tensorflow_model_optimization as tfmot
import tf_keras as keras
%load_ext tensorboard
Build and train the dense model
We build and train a simple baseline CNN for the classification task on the CIFAR10 dataset.
# Load CIFAR10 dataset.
(ds_train, ds_val, ds_test), ds_info = tfds.load(
    'cifar10',
    split=['train[:90%]', 'train[90%:]', 'test'],
    as_supervised=True,
    with_info=True,
)
# Normalize the input image so that each pixel value is between 0 and 1.
def normalize_img(image, label):
  """Normalizes images: `uint8` -> `float32`."""
  return tf.image.convert_image_dtype(image, tf.float32), label
# Load the data in batches of 128 images.
batch_size = 128
def prepare_dataset(ds, buffer_size=None):
  ds = ds.map(normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
  ds = ds.cache()
  if buffer_size:
    ds = ds.shuffle(buffer_size)
  ds = ds.batch(batch_size)
  ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
  return ds
ds_train = prepare_dataset(ds_train,
                           buffer_size=ds_info.splits['train'].num_examples)
ds_val = prepare_dataset(ds_val)
ds_test = prepare_dataset(ds_test)
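As a quick, illustrative sanity check (not part of the original notebook), we can pull one batch from the pipeline and confirm its shapes and dtypes:
# Illustrative: inspect a single batch produced by the input pipeline.
for images, labels in ds_train.take(1):
  print(images.shape, images.dtype)  # expected: (128, 32, 32, 3) float32
  print(labels.shape, labels.dtype)  # expected: (128,) int64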
# Build the dense baseline model.
dense_model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(32, 32, 3)),
    keras.layers.ZeroPadding2D(padding=1),
    keras.layers.Conv2D(
        filters=8,
        kernel_size=(3, 3),
        strides=(2, 2),
        padding='valid'),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.DepthwiseConv2D(kernel_size=(3, 3), padding='same'),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.Conv2D(filters=16, kernel_size=(1, 1)),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.ZeroPadding2D(padding=1),
    keras.layers.DepthwiseConv2D(
        kernel_size=(3, 3), strides=(2, 2), padding='valid'),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.Conv2D(filters=32, kernel_size=(1, 1)),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10)
])
# Compile and train the dense model for 10 epochs.
dense_model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer='adam',
    metrics=['accuracy'])

dense_model.fit(
    ds_train,
    epochs=10,
    validation_data=ds_val)
# Evaluate the dense model.
_, dense_model_accuracy = dense_model.evaluate(ds_test, verbose=0)
2024-03-09 12:24:36.121481: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:282] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
Epoch 1/10
352/352 [==============================] - 30s 22ms/step - loss: 1.9770 - accuracy: 0.2809 - val_loss: 2.2183 - val_accuracy: 0.1870
Epoch 2/10
352/352 [==============================] - 5s 14ms/step - loss: 1.7223 - accuracy: 0.3653 - val_loss: 1.7735 - val_accuracy: 0.3536
Epoch 3/10
352/352 [==============================] - 5s 14ms/step - loss: 1.6209 - accuracy: 0.4032 - val_loss: 1.8308 - val_accuracy: 0.3450
Epoch 4/10
352/352 [==============================] - 5s 14ms/step - loss: 1.5506 - accuracy: 0.4355 - val_loss: 1.5608 - val_accuracy: 0.4204
Epoch 5/10
352/352 [==============================] - 5s 14ms/step - loss: 1.5062 - accuracy: 0.4489 - val_loss: 1.6044 - val_accuracy: 0.4158
Epoch 6/10
352/352 [==============================] - 5s 14ms/step - loss: 1.4679 - accuracy: 0.4653 - val_loss: 1.5631 - val_accuracy: 0.4178
Epoch 7/10
352/352 [==============================] - 5s 14ms/step - loss: 1.4425 - accuracy: 0.4773 - val_loss: 1.4628 - val_accuracy: 0.4752
Epoch 8/10
352/352 [==============================] - 5s 14ms/step - loss: 1.4227 - accuracy: 0.4844 - val_loss: 1.5183 - val_accuracy: 0.4478
Epoch 9/10
352/352 [==============================] - 5s 14ms/step - loss: 1.4066 - accuracy: 0.4886 - val_loss: 1.5305 - val_accuracy: 0.4382
Epoch 10/10
352/352 [==============================] - 5s 14ms/step - loss: 1.3929 - accuracy: 0.4952 - val_loss: 1.4030 - val_accuracy: 0.4894
Build the sparse model
Following the instructions of the comprehensive guide, we apply the tfmot.sparsity.keras.prune_low_magnitude function with parameters that target on-device acceleration via pruning, i.e. the tfmot.sparsity.keras.PruneForLatencyOnXNNPack policy.
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
# Compute end step to finish pruning after 5 epochs.
end_epoch = 5
num_iterations_per_epoch = len(ds_train)
end_step = num_iterations_per_epoch * end_epoch
# Define parameters for pruning.
pruning_params = {
    'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.25,
        final_sparsity=0.75,
        begin_step=0,
        end_step=end_step),
    'pruning_policy': tfmot.sparsity.keras.PruneForLatencyOnXNNPack()
}
# Try to apply pruning wrapper with pruning policy parameter.
try:
  model_for_pruning = prune_low_magnitude(dense_model, **pruning_params)
except ValueError as e:
  print(e)
The call to prune_low_magnitude results in a ValueError with the message Could not find a GlobalAveragePooling2D layer with keepdims = True in all output branches. The message indicates that the model isn't supported for pruning with the tfmot.sparsity.keras.PruneForLatencyOnXNNPack policy, and, specifically, that the GlobalAveragePooling2D layer requires the parameter keepdims = True. Let's fix that and reapply the prune_low_magnitude function.
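As context for the fix, here is a minimal sketch (illustrative, with assumed tensor shapes) of what keepdims changes: without it the pooling layer collapses the spatial dimensions into a rank-2 tensor, while keepdims=True preserves them as size-1 axes, which is the output shape the policy checks for (and why the Flatten layer that follows is still needed).
# Illustrative: compare output shapes with and without keepdims.
x = tf.random.normal((1, 8, 8, 32))
print(keras.layers.GlobalAveragePooling2D()(x).shape)               # (1, 32)
print(keras.layers.GlobalAveragePooling2D(keepdims=True)(x).shape)  # (1, 1, 1, 32)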
fixed_dense_model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(32, 32, 3)),
    keras.layers.ZeroPadding2D(padding=1),
    keras.layers.Conv2D(
        filters=8,
        kernel_size=(3, 3),
        strides=(2, 2),
        padding='valid'),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.DepthwiseConv2D(kernel_size=(3, 3), padding='same'),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.Conv2D(filters=16, kernel_size=(1, 1)),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.ZeroPadding2D(padding=1),
    keras.layers.DepthwiseConv2D(
        kernel_size=(3, 3), strides=(2, 2), padding='valid'),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.Conv2D(filters=32, kernel_size=(1, 1)),
    keras.layers.BatchNormalization(),
    keras.layers.ReLU(),
    keras.layers.GlobalAveragePooling2D(keepdims=True),
    keras.layers.Flatten(),
    keras.layers.Dense(10)
])
# Use the pretrained model for pruning instead of training from scratch.
fixed_dense_model.set_weights(dense_model.get_weights())
# Try to reapply pruning wrapper.
model_for_pruning = prune_low_magnitude(fixed_dense_model, **pruning_params)
The invocation of prune_low_magnitude completed without any errors, meaning that the model is fully supported by the tfmot.sparsity.keras.PruneForLatencyOnXNNPack policy and can be accelerated using XNNPACK sparse inference.
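As an optional check (illustrative, relying on the wrapper's naming convention), listing the layers shows that the prunable layers are now wrapped in PruneLowMagnitude wrappers:
# Illustrative: wrapped layers carry a prune_low_magnitude_ name prefix.
for layer in model_for_pruning.layers:
  print(type(layer).__name__, '->', layer.name)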
Fine-tune the sparse model
Following the pruning example, we fine-tune the sparse model using the weights of the dense model. We start fine-tuning at 25% sparsity (25% of the weights are set to zero) and end at 75% sparsity.
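As a minimal sketch (assuming the PruningSchedule call convention of returning a (should-prune, sparsity) pair), we can evaluate the PolynomialDecay schedule at a few steps to see the ramp from 25% to 75%:
# Illustrative: print the target sparsity at a few points of the schedule.
schedule = pruning_params['pruning_schedule']
for step in [0, end_step // 4, end_step // 2, end_step]:
  _, sparsity = schedule(step)
  print(f'step {step}: target sparsity {float(sparsity):.2f}')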
logdir = tempfile.mkdtemp()
callbacks = [
  tfmot.sparsity.keras.UpdatePruningStep(),
  tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
]
model_for_pruning.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer='adam',
    metrics=['accuracy'])

model_for_pruning.fit(
    ds_train,
    epochs=15,
    validation_data=ds_val,
    callbacks=callbacks)
# Evaluate the pruned model.
_, pruned_model_accuracy = model_for_pruning.evaluate(ds_test, verbose=0)
print('Dense model test accuracy:', dense_model_accuracy)
print('Pruned model test accuracy:', pruned_model_accuracy)
Epoch 1/15
352/352 [==============================] - 11s 17ms/step - loss: 1.3992 - accuracy: 0.4897 - val_loss: 1.9449 - val_accuracy: 0.3402
Epoch 2/15
352/352 [==============================] - 5s 15ms/step - loss: 1.4250 - accuracy: 0.4852 - val_loss: 1.7185 - val_accuracy: 0.3716
Epoch 3/15
352/352 [==============================] - 5s 15ms/step - loss: 1.4584 - accuracy: 0.4666 - val_loss: 1.8855 - val_accuracy: 0.3426
Epoch 4/15
352/352 [==============================] - 5s 15ms/step - loss: 1.4717 - accuracy: 0.4616 - val_loss: 1.8802 - val_accuracy: 0.3554
Epoch 5/15
352/352 [==============================] - 5s 15ms/step - loss: 1.4519 - accuracy: 0.4727 - val_loss: 1.6495 - val_accuracy: 0.3972
Epoch 6/15
352/352 [==============================] - 5s 15ms/step - loss: 1.4326 - accuracy: 0.4800 - val_loss: 1.4971 - val_accuracy: 0.4416
Epoch 7/15
352/352 [==============================] - 5s 15ms/step - loss: 1.4205 - accuracy: 0.4860 - val_loss: 1.7675 - val_accuracy: 0.4002
Epoch 8/15
352/352 [==============================] - 5s 15ms/step - loss: 1.4114 - accuracy: 0.4893 - val_loss: 1.5721 - val_accuracy: 0.4234
Epoch 9/15
352/352 [==============================] - 5s 15ms/step - loss: 1.4038 - accuracy: 0.4917 - val_loss: 1.6057 - val_accuracy: 0.4236
Epoch 10/15
352/352 [==============================] - 5s 15ms/step - loss: 1.3959 - accuracy: 0.4930 - val_loss: 1.5344 - val_accuracy: 0.4484
Epoch 11/15
352/352 [==============================] - 5s 14ms/step - loss: 1.3899 - accuracy: 0.4969 - val_loss: 1.4643 - val_accuracy: 0.4768
Epoch 12/15
352/352 [==============================] - 5s 14ms/step - loss: 1.3829 - accuracy: 0.4996 - val_loss: 1.5114 - val_accuracy: 0.4494
Epoch 13/15
352/352 [==============================] - 5s 14ms/step - loss: 1.3777 - accuracy: 0.5020 - val_loss: 1.5931 - val_accuracy: 0.4278
Epoch 14/15
352/352 [==============================] - 5s 14ms/step - loss: 1.3749 - accuracy: 0.5018 - val_loss: 1.4799 - val_accuracy: 0.4680
Epoch 15/15
352/352 [==============================] - 5s 15ms/step - loss: 1.3704 - accuracy: 0.5041 - val_loss: 1.5630 - val_accuracy: 0.4490
Dense model test accuracy: 0.49380001425743103
Pruned model test accuracy: 0.44940000772476196
The logs show the progression of sparsity on a per-layer basis.
#docs_infra: no_execute
%tensorboard --logdir={logdir}
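Alternatively, as an illustrative check not in the original notebook, the achieved sparsity can be measured directly from the weights (after fine-tuning, the pruned kernels already contain the zeros):
# Illustrative: fraction of zero-valued entries in each kernel.
for weight in model_for_pruning.trainable_weights:
  if 'kernel' in weight.name:
    values = weight.numpy()
    print(f'{weight.name}: {1.0 - np.count_nonzero(values) / values.size:.2%} zeros')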
After fine-tuning with pruning, the pruned model's test accuracy (44.9%) in this run is somewhat lower than the dense baseline's (49.4%), as the printed results above show. Let's compare the on-device latencies using the TFLite benchmark.
Model conversion and benchmarking
To convert the pruned model to TFLite, we need to replace the PruneLowMagnitude wrappers with the original layers via the strip_pruning function. Also, since the weights of the pruned model (model_for_pruning) are mostly zeros, we may apply the optimization tf.lite.Optimize.EXPERIMENTAL_SPARSITY to store the resulting TFLite model efficiently. This optimization flag is not required for the dense model.
converter = tf.lite.TFLiteConverter.from_keras_model(dense_model)
dense_tflite_model = converter.convert()
_, dense_tflite_file = tempfile.mkstemp('.tflite')
with open(dense_tflite_file, 'wb') as f:
  f.write(dense_tflite_model)
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
converter.optimizations = [tf.lite.Optimize.EXPERIMENTAL_SPARSITY]
pruned_tflite_model = converter.convert()
_, pruned_tflite_file = tempfile.mkstemp('.tflite')
with open(pruned_tflite_file, 'wb') as f:
  f.write(pruned_tflite_model)
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmprnl_sl6s/assets
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1709987241.973351 18472 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1709987241.973414 18472 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmpk0lumuch/assets
W0000 00:00:1709987245.660280 18472 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1709987245.660323 18472 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
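As an illustrative aside (not in the original notebook), comparing the two serialized files shows the storage effect of tf.lite.Optimize.EXPERIMENTAL_SPARSITY:
import os
# Illustrative: the sparse encoding should make the pruned file noticeably smaller.
print('Dense TFLite model size: ', os.path.getsize(dense_tflite_file), 'bytes')
print('Pruned TFLite model size:', os.path.getsize(pruned_tflite_file), 'bytes')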
Following the instructions of the TFLite Model Benchmarking Tool, we build the tool, upload it to the Android device together with the dense and pruned TFLite models (as sketched below), and benchmark both models on the device.
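For example, the models and the benchmark binary can be pushed to the device with adb; the following is an illustrative sketch that assumes benchmark_model was built per the linked instructions and sits in the working directory:
! adb push benchmark_model /data/local/tmp/
! adb shell chmod +x /data/local/tmp/benchmark_model
! adb push {dense_tflite_file} /data/local/tmp/dense_model.tflite
! adb push {pruned_tflite_file} /data/local/tmp/pruned_model.tflite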
! adb shell /data/local/tmp/benchmark_model \
    --graph=/data/local/tmp/dense_model.tflite \
    --use_xnnpack=true \
    --num_runs=100 \
    --num_threads=1
! adb shell /data/local/tmp/benchmark_model \
    --graph=/data/local/tmp/pruned_model.tflite \
    --use_xnnpack=true \
    --num_runs=100 \
    --num_threads=1
The benchmarks on a Pixel 4 resulted in an average inference time of 17us for the dense model and 12us for the pruned model. The on-device benchmarks demonstrate a clear 5us, or 30%, improvement in latency even for such a small model. In our experience, larger models based on MobileNetV3 or EfficientNet-lite show similar performance improvements. The speed-up varies based on the relative contribution of 1x1 convolutions to the overall model.
Conclusion
In this tutorial, we showed how to create sparse models for faster on-device performance using the new functionality introduced by the TF MOT API and XNNPack. These sparse models are smaller and faster than their dense counterparts while retaining or even surpassing their quality.
We encourage you to try this new capability, which can be particularly important for deploying your models on device.