This guide demonstrates how to migrate embedding training on TPUs from TensorFlow 1's embedding_column API with TPUEstimator to TensorFlow 2's TPUEmbedding layer API with TPUStrategy.
Embeddings are (large) matrices. They are lookup tables that map from a sparse feature space to dense vectors. Embeddings provide efficient and dense representations, capturing complex similarities and relationships between features.
TensorFlow includes specialized support for training embeddings on TPUs. This TPU-specific embedding support allows you to train embeddings that are larger than the memory of a single TPU device, and to use sparse and ragged inputs on TPUs.
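For intuition, the short stand-alone sketch below (not part of the original guide) shows what an embedding lookup is: a table of dense vectors indexed by integer feature IDs. The vocabulary size (10) and embedding width (5) are chosen here to mirror the tables configured later in this guide.
import tensorflow as tf

# A toy embedding table: 10 vocabulary entries, each a 5-dimensional dense vector.
embedding_table = tf.random.normal(shape=(10, 5))

# Sparse integer feature IDs are mapped to dense rows of the table.
feature_ids = tf.constant([0, 5, 3])
dense_vectors = tf.nn.embedding_lookup(embedding_table, feature_ids)
print(dense_vectors.shape)  # (3, 5)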
- In TensorFlow 1, tf.compat.v1.estimator.tpu.TPUEstimator is a high-level API that encapsulates training, evaluation, prediction, and exporting for serving on TPUs. It has special support for tf.compat.v1.tpu.experimental.embedding_column.
- To implement this in TensorFlow 2, use the TensorFlow Recommenders' tfrs.layers.embedding.TPUEmbedding layer. For training and evaluation, use a TPU distribution strategy (tf.distribute.TPUStrategy), which is compatible with the Keras APIs, such as model building (tf.keras.Model), optimizers (tf.keras.optimizers.Optimizer), and training with Model.fit or a custom training loop with tf.function and tf.GradientTape.
For additional information, refer to the tfrs.layers.embedding.TPUEmbedding layer's API documentation, as well as the tf.tpu.experimental.embedding.TableConfig and tf.tpu.experimental.embedding.FeatureConfig docs. For an overview of tf.distribute.TPUStrategy, check out the Distributed training guide and the Use TPUs guide. If you are migrating from TPUEstimator to TPUStrategy, check out the TPU migration guide.
Setup
Start by installing TensorFlow Recommenders and importing some necessary packages:
pip install tensorflow-recommenders
import tensorflow as tf
import tensorflow.compat.v1 as tf1
# TPUEmbedding layer is not part of TensorFlow.
import tensorflow_recommenders as tfrs
And prepare a simple dataset for demonstration purposes:
features = [[1., 1.5]]
embedding_features_indices = [[0, 0], [0, 1]]
embedding_features_values = [0, 5]
labels = [[0.3]]
eval_features = [[4., 4.5]]
eval_embedding_features_indices = [[0, 0], [0, 1]]
eval_embedding_features_values = [4, 3]
eval_labels = [[0.8]]
TensorFlow 1: Train embeddings on TPUs with TPUEstimator
In TensorFlow 1, you set up TPU embeddings using the tf.compat.v1.tpu.experimental.embedding_column API and train/evaluate the model on TPUs with tf.compat.v1.estimator.tpu.TPUEstimator.
The inputs are integers ranging from zero to the vocabulary size of the TPU embedding table. Start by encoding the inputs into categorical IDs with tf.feature_column.categorical_column_with_identity. Use "sparse_feature" for the key parameter, since the input features are integer-valued, while num_buckets is the vocabulary size of the embedding table (10).
embedding_id_column = (
    tf1.feature_column.categorical_column_with_identity(
        key="sparse_feature", num_buckets=10))
Next, convert the sparse categorical inputs into a dense representation with tpu.experimental.embedding_column, where dimension is the width of the embedding table. The table stores one embedding vector for each of the num_buckets.
embedding_column = tf1.tpu.experimental.embedding_column(
    embedding_id_column, dimension=5)
Now, define the TPU-specific embedding configuration via tf.estimator.tpu.experimental.EmbeddingConfigSpec. You will pass it to tf.estimator.tpu.TPUEstimator later as the embedding_config_spec parameter.
embedding_config_spec = tf1.estimator.tpu.experimental.EmbeddingConfigSpec(
    feature_columns=(embedding_column,),
    optimization_parameters=(
        tf1.tpu.experimental.AdagradParameters(0.05)))
Next, to use a TPUEstimator, define:
- An input function for the training data
- An evaluation input function for the evaluation data
- A model function that instructs the TPUEstimator how to define the training op with the features and labels
def _input_fn(params):
  dataset = tf1.data.Dataset.from_tensor_slices((
      {"dense_feature": features,
       "sparse_feature": tf1.SparseTensor(
           embedding_features_indices,
           embedding_features_values, [1, 2])},
      labels))
  dataset = dataset.repeat()
  return dataset.batch(params['batch_size'], drop_remainder=True)
def _eval_input_fn(params):
  dataset = tf1.data.Dataset.from_tensor_slices((
      {"dense_feature": eval_features,
       "sparse_feature": tf1.SparseTensor(
           eval_embedding_features_indices,
           eval_embedding_features_values, [1, 2])},
      eval_labels))
  dataset = dataset.repeat()
  return dataset.batch(params['batch_size'], drop_remainder=True)
def _model_fn(features, labels, mode, params):
  embedding_features = tf1.keras.layers.DenseFeatures(embedding_column)(features)
  concatenated_features = tf1.keras.layers.Concatenate(axis=1)(
      [embedding_features, features["dense_feature"]])
  logits = tf1.layers.Dense(1)(concatenated_features)
  loss = tf1.losses.mean_squared_error(labels=labels, predictions=logits)
  optimizer = tf1.train.AdagradOptimizer(0.05)
  optimizer = tf1.tpu.CrossShardOptimizer(optimizer)
  train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
  return tf1.estimator.tpu.TPUEstimatorSpec(mode, loss=loss, train_op=train_op)
With those functions defined, create a tf.distribute.cluster_resolver.TPUClusterResolver that provides the cluster information, and a tf.compat.v1.estimator.tpu.RunConfig object.
Along with the model function you have defined, you can now create a TPUEstimator. Here, you will simplify the flow by skipping checkpoint saving. Then, you will specify the batch size for both training and evaluation for the TPUEstimator.
cluster_resolver = tf1.distribute.cluster_resolver.TPUClusterResolver(tpu='')
print("All devices: ", tf1.config.list_logical_devices('TPU'))
All devices: []
tpu_config = tf1.estimator.tpu.TPUConfig(
    iterations_per_loop=10,
    per_host_input_for_training=tf1.estimator.tpu.InputPipelineConfig
        .PER_HOST_V2)
config = tf1.estimator.tpu.RunConfig(
    cluster=cluster_resolver,
    save_checkpoints_steps=None,
    tpu_config=tpu_config)
estimator = tf1.estimator.tpu.TPUEstimator(
    model_fn=_model_fn, config=config, train_batch_size=8, eval_batch_size=8,
    embedding_config_spec=embedding_config_spec)
WARNING:tensorflow:Estimator's model_fn (<function _model_fn at 0x7f168033fc80>) includes params argument, but params are not passed to Estimator. WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpxc9fm1_q INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpxc9fm1_q', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true cluster_def { job { name: "worker" tasks { key: 0 value: "10.240.1.10:8470" } } } isolate_session_state: true , '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({'worker': ['10.240.1.10:8470']}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': 'grpc://10.240.1.10:8470', '_evaluation_master': 'grpc://10.240.1.10:8470', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=10, num_shards=None, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None, eval_training_input_configuration=2, experimental_host_call_every_n_steps=1, experimental_allow_per_host_v2_parallel_get_next=False, experimental_feed_hook=None), '_cluster': <tensorflow.python.distribute.cluster_resolver.tpu.tpu_cluster_resolver.TPUClusterResolver object at 0x7f16803b4a20>} INFO:tensorflow:_TPUContext: eval_on_tpu True
Call TPUEstimator.train to begin training the model:
estimator.train(_input_fn, steps=1)
INFO:tensorflow:Querying Tensorflow master (grpc://10.240.1.10:8470) for TPU system metadata. INFO:tensorflow:Found TPU system: INFO:tensorflow:*** Num TPU Cores: 8 INFO:tensorflow:*** Num TPU Workers: 1 INFO:tensorflow:*** Num TPU Cores Per Worker: 8 INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 6349538157198932596) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 9059152445598865227) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 3922455949451878923) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, -6084187114011162725) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 8400191476321474241) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 5484621084550964852) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, -8416772895681308377) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 2490716523526845408) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 7973779273400954871) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 6478654019570154047) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 7299189593611257732) WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version. Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts. INFO:tensorflow:Calling model_fn. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/tpu/feature_column_v2.py:479: IdentityCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version. Instructions for updating: The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead. INFO:tensorflow:Querying Tensorflow master (grpc://10.240.1.10:8470) for TPU system metadata. 
INFO:tensorflow:Found TPU system: INFO:tensorflow:*** Num TPU Cores: 8 INFO:tensorflow:*** Num TPU Workers: 1 INFO:tensorflow:*** Num TPU Cores Per Worker: 8 INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 6349538157198932596) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 9059152445598865227) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 3922455949451878923) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, -6084187114011162725) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 8400191476321474241) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 5484621084550964852) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, -8416772895681308377) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 2490716523526845408) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 7973779273400954871) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 6478654019570154047) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 7299189593611257732) WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/training/adagrad.py:77: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor INFO:tensorflow:Bypassing TPUEstimator hook INFO:tensorflow:Done calling model_fn. INFO:tensorflow:TPU job name worker INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py:758: Variable.load (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version. Instructions for updating: Prefer Variable.assign which has equivalent behavior in 2.X. INFO:tensorflow:Initialized dataset iterators in 0 seconds INFO:tensorflow:Installing graceful shutdown hook. INFO:tensorflow:Creating heartbeat manager for ['/job:worker/replica:0/task:0/device:CPU:0'] INFO:tensorflow:Configuring worker heartbeat: shutdown_mode: WAIT_FOR_COORDINATOR INFO:tensorflow:Init TPU system INFO:tensorflow:Initialized TPU in 8 seconds INFO:tensorflow:Starting infeed thread controller. INFO:tensorflow:Starting outfeed thread controller. INFO:tensorflow:Enqueue next (1) batch(es) of data to infeed. INFO:tensorflow:Dequeue next (1) batch(es) of data from outfeed. INFO:tensorflow:Outfeed finished for iteration (0, 0) INFO:tensorflow:loss = 0.6467617, step = 1 INFO:tensorflow:Stop infeed thread controller INFO:tensorflow:Shutting down InfeedController thread. INFO:tensorflow:InfeedController received shutdown signal, stopping. INFO:tensorflow:Infeed thread finished, shutting down. 
INFO:tensorflow:infeed marked as finished INFO:tensorflow:Stop output thread controller INFO:tensorflow:Shutting down OutfeedController thread. INFO:tensorflow:OutfeedController received shutdown signal, stopping. INFO:tensorflow:Outfeed thread finished, shutting down. INFO:tensorflow:outfeed marked as finished INFO:tensorflow:Shutdown TPU system. INFO:tensorflow:Loss for final step: 0.6467617. INFO:tensorflow:training_loop marked as finished <tensorflow_estimator.python.estimator.tpu.tpu_estimator.TPUEstimator at 0x7f168035b128>
Then, call TPUEstimator.evaluate to evaluate the model using the evaluation data:
estimator.evaluate(_eval_input_fn, steps=1)
INFO:tensorflow:Could not find trained model in model_dir: /tmp/tmpxc9fm1_q, running initialization to evaluate. INFO:tensorflow:Calling model_fn. INFO:tensorflow:Querying Tensorflow master (grpc://10.240.1.10:8470) for TPU system metadata. INFO:tensorflow:Found TPU system: INFO:tensorflow:*** Num TPU Cores: 8 INFO:tensorflow:*** Num TPU Workers: 1 INFO:tensorflow:*** Num TPU Cores Per Worker: 8 INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 6349538157198932596) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 9059152445598865227) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 3922455949451878923) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, -6084187114011162725) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 8400191476321474241) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 5484621084550964852) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, -8416772895681308377) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 2490716523526845408) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 7973779273400954871) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 6478654019570154047) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 7299189593611257732) WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py:3406: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Deprecated in favor of operator or tf.math.divide. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Starting evaluation at 2022-03-02T13:22:48 INFO:tensorflow:TPU job name worker INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Init TPU system INFO:tensorflow:Initialized TPU in 11 seconds INFO:tensorflow:Starting infeed thread controller. INFO:tensorflow:Starting outfeed thread controller. INFO:tensorflow:Initialized dataset iterators in 0 seconds INFO:tensorflow:Enqueue next (1) batch(es) of data to infeed. INFO:tensorflow:Dequeue next (1) batch(es) of data from outfeed. INFO:tensorflow:Outfeed finished for iteration (0, 0) INFO:tensorflow:Evaluation [1/1] INFO:tensorflow:Stop infeed thread controller INFO:tensorflow:Shutting down InfeedController thread. INFO:tensorflow:InfeedController received shutdown signal, stopping. INFO:tensorflow:Infeed thread finished, shutting down. INFO:tensorflow:infeed marked as finished INFO:tensorflow:Stop output thread controller INFO:tensorflow:Shutting down OutfeedController thread. INFO:tensorflow:OutfeedController received shutdown signal, stopping. INFO:tensorflow:Outfeed thread finished, shutting down. INFO:tensorflow:outfeed marked as finished INFO:tensorflow:Shutdown TPU system. 
INFO:tensorflow:Inference Time : 12.06464s INFO:tensorflow:Finished evaluation at 2022-03-02-13:23:00 INFO:tensorflow:Saving dict for global step 1: global_step = 1, loss = 0.16138805 INFO:tensorflow:evaluation_loop marked as finished {'loss': 0.16138805, 'global_step': 1}
TensorFlow 2: Train embeddings on TPUs with TPUStrategy
In TensorFlow 2, to train on the TPU workers, use tf.distribute.TPUStrategy together with the Keras APIs for model definition and training/evaluation. (Refer to the Use TPUs guide for more examples of training with Keras Model.fit and a custom training loop with tf.function and tf.GradientTape.)
Since you need to perform some initialization work to connect to the remote cluster and initialize the TPU workers, start by creating a TPUClusterResolver to provide the cluster information and connect to the cluster. (Learn more in the TPU initialization section of the Use TPUs guide.)
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
INFO:tensorflow:Clearing out eager caches INFO:tensorflow:Clearing out eager caches INFO:tensorflow:Initializing the TPU system: grpc://10.240.1.10:8470 INFO:tensorflow:Initializing the TPU system: grpc://10.240.1.10:8470 INFO:tensorflow:Finished initializing TPU system. INFO:tensorflow:Finished initializing TPU system. All devices: [LogicalDevice(name='/job:worker/replica:0/task:0/device:TPU:0', device_type='TPU'), LogicalDevice(name='/job:worker/replica:0/task:0/device:TPU:1', device_type='TPU'), LogicalDevice(name='/job:worker/replica:0/task:0/device:TPU:2', device_type='TPU'), LogicalDevice(name='/job:worker/replica:0/task:0/device:TPU:3', device_type='TPU'), LogicalDevice(name='/job:worker/replica:0/task:0/device:TPU:4', device_type='TPU'), LogicalDevice(name='/job:worker/replica:0/task:0/device:TPU:5', device_type='TPU'), LogicalDevice(name='/job:worker/replica:0/task:0/device:TPU:6', device_type='TPU'), LogicalDevice(name='/job:worker/replica:0/task:0/device:TPU:7', device_type='TPU')]
Next, prepare your data. This is similar to how you created a dataset in the TensorFlow 1 example, except that the dataset function is now passed a tf.distribute.InputContext object rather than a params dict. You can use this object to determine the local batch size (and which host this pipeline is for, so you can properly partition your data).
- When using the tfrs.layers.embedding.TPUEmbedding API, it is important to include the drop_remainder=True option when batching the dataset with Dataset.batch, since TPUEmbedding requires a fixed batch size.
- Additionally, the same batch size must be used for evaluation and training if they take place on the same set of devices.
- Finally, you should use tf.keras.utils.experimental.DatasetCreator along with the special input option experimental_fetch_to_device=False in tf.distribute.InputOptions (which holds strategy-specific configurations). This is demonstrated below:
global_batch_size = 8
def _input_dataset(context: tf.distribute.InputContext):
  dataset = tf.data.Dataset.from_tensor_slices((
      {"dense_feature": features,
       "sparse_feature": tf.SparseTensor(
           embedding_features_indices,
           embedding_features_values, [1, 2])},
      labels))
  dataset = dataset.shuffle(10).repeat()
  dataset = dataset.batch(
      context.get_per_replica_batch_size(global_batch_size),
      drop_remainder=True)
  return dataset.prefetch(2)
def _eval_dataset(context: tf.distribute.InputContext):
  dataset = tf.data.Dataset.from_tensor_slices((
      {"dense_feature": eval_features,
       "sparse_feature": tf.SparseTensor(
           eval_embedding_features_indices,
           eval_embedding_features_values, [1, 2])},
      eval_labels))
  dataset = dataset.repeat()
  dataset = dataset.batch(
      context.get_per_replica_batch_size(global_batch_size),
      drop_remainder=True)
  return dataset.prefetch(2)
input_options = tf.distribute.InputOptions(
    experimental_fetch_to_device=False)
input_dataset = tf.keras.utils.experimental.DatasetCreator(
    _input_dataset, input_options=input_options)
eval_dataset = tf.keras.utils.experimental.DatasetCreator(
    _eval_dataset, input_options=input_options)
Next, once the data is prepared, you will create a TPUStrategy, and define a model, metrics, and an optimizer under the scope of this strategy (Strategy.scope).
You should pick a number for steps_per_execution in Model.compile, since it specifies the number of batches to run during each tf.function call and is critical for performance. This argument is similar to iterations_per_loop used in a TPUEstimator.
The feature and table configurations that were specified in TensorFlow 1 via tf.tpu.experimental.embedding_column (and tf.tpu.experimental.shared_embedding_column) can be specified directly in TensorFlow 2 via a pair of configuration objects: tf.tpu.experimental.embedding.FeatureConfig and tf.tpu.experimental.embedding.TableConfig.
(Refer to the associated API documentation for more details.)
strategy = tf.distribute.TPUStrategy(cluster_resolver)
with strategy.scope():
  optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
  dense_input = tf.keras.Input(shape=(2,), dtype=tf.float32, batch_size=global_batch_size)
  sparse_input = tf.keras.Input(shape=(), dtype=tf.int32, batch_size=global_batch_size)
  embedded_input = tfrs.layers.embedding.TPUEmbedding(
      feature_config=tf.tpu.experimental.embedding.FeatureConfig(
          table=tf.tpu.experimental.embedding.TableConfig(
              vocabulary_size=10,
              dim=5,
              initializer=tf.initializers.TruncatedNormal(mean=0.0, stddev=1)),
          name="sparse_input"),
      optimizer=optimizer)(sparse_input)
  input = tf.keras.layers.Concatenate(axis=1)([dense_input, embedded_input])
  result = tf.keras.layers.Dense(1)(input)
  model = tf.keras.Model(inputs={"dense_feature": dense_input, "sparse_feature": sparse_input}, outputs=result)
  model.compile(optimizer, "mse", steps_per_execution=10)
INFO:tensorflow:Found TPU system: INFO:tensorflow:Found TPU system: INFO:tensorflow:*** Num TPU Cores: 8 INFO:tensorflow:*** Num TPU Cores: 8 INFO:tensorflow:*** Num TPU Workers: 1 INFO:tensorflow:*** Num TPU Workers: 1 INFO:tensorflow:*** Num TPU Cores Per Worker: 8 INFO:tensorflow:*** Num TPU Cores Per Worker: 8 INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:localhost/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0) INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 0, 0)
With that, you can train the model with the training dataset:
model.fit(input_dataset, epochs=5, steps_per_epoch=10)
Epoch 1/5 10/10 [==============================] - 2s 175ms/step - loss: 0.0057 10/10 [==============================] - 0s 3ms/step - loss: 0.0000e+00 10/10 [==============================] - 0s 3ms/step - loss: 0.0000e+00 10/10 [==============================] - 0s 3ms/step - loss: 0.0000e+00 10/10 [==============================] - 0s 3ms/step - loss: 0.0000e+00 <keras.callbacks.History at 0x7f16803b4a90>
Finally, evaluate the model using the evaluation dataset:
model.evaluate(eval_dataset, steps=1, return_dict=True)
1/1 [==============================] - 1s 1s/step - loss: 12.2297 {'loss': 12.229663848876953}
Next steps
Learn more about setting up TPU-specific embeddings in the API docs:
- tfrs.layers.embedding.TPUEmbedding: particularly about feature and table configuration, setting the optimizer, creating a model (using the Keras functional API or via subclassing tf.keras.Model), training/evaluation, and model serving with tf.saved_model (a minimal export sketch follows this list)
- tf.tpu.experimental.embedding.TableConfig
- tf.tpu.experimental.embedding.FeatureConfig
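Purely as an illustration (not part of the original notebook), the sketch below exports the Keras model trained in the TensorFlow 2 example above as a SavedModel. The export path is hypothetical, and any TPUEmbedding-specific serving considerations (for example, how the embedding tables are restored for CPU/GPU inference) are covered in the tfrs.layers.embedding.TPUEmbedding API docs.
import tensorflow as tf

# Hypothetical export path; any writable directory works.
export_dir = "/tmp/tpu_embedding_model"

# Export the trained Keras model (from the TensorFlow 2 example above) as a
# SavedModel so it can be served outside the TPU training job.
tf.saved_model.save(model, export_dir)

# Reload the SavedModel and inspect its serving signatures.
reloaded = tf.saved_model.load(export_dir)
print(list(reloaded.signatures.keys()))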
For more information about TPUStrategy in TensorFlow 2, consider the following resources:
- Guide: Use TPUs (covering training with Keras Model.fit / a custom training loop with tf.distribute.TPUStrategy, as well as tips on improving performance with tf.function)
- Guide: Distributed training with TensorFlow
- Guide: Migrate from TPUEstimator to TPUStrategy.
To learn more about customizing your training, refer to the following guides; a rough custom-training-loop sketch for this example follows the list:
- Guide: Customize what happens in Model.fit
- Guide: Writing a training loop from scratch
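As an illustration only (not part of the original notebook), here is a rough sketch of a custom training step under tf.distribute.TPUStrategy with tf.function and tf.GradientTape. It reuses the strategy, model, optimizer, _input_dataset, input_options, and global_batch_size defined earlier; TPUEmbedding-specific details (its tables are updated by the optimizer passed to the layer) are covered in the tfrs.layers.embedding.TPUEmbedding API docs.
# Build a distributed dataset from the same dataset function used above.
dist_dataset = strategy.distribute_datasets_from_function(
    _input_dataset, options=input_options)

# Per-example loss; averaging across the global batch is done explicitly below.
loss_fn = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)

@tf.function
def train_step(features, labels):
  def step_fn(features, labels):
    with tf.GradientTape() as tape:
      predictions = model(features, training=True)
      # Scale the per-example loss by the global batch size so gradients
      # aggregate correctly across replicas.
      loss = tf.nn.compute_average_loss(
          loss_fn(labels, predictions), global_batch_size=global_batch_size)
    grads = tape.gradient(loss, model.trainable_variables)
    # Skip variables without gradients (for example, embedding tables that the
    # TPUEmbedding layer updates internally).
    optimizer.apply_gradients(
        (g, v) for g, v in zip(grads, model.trainable_variables) if g is not None)
    return loss
  per_replica_loss = strategy.run(step_fn, args=(features, labels))
  return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)

# Run a few steps of the infinite, repeated training dataset.
for step, (features, labels) in enumerate(dist_dataset):
  if step >= 10:
    break
  print("step", step, "loss", float(train_step(features, labels)))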
TPUs (Google's specialized ASICs for machine learning) are available through Google Colab, the TPU Research Cloud, and Cloud TPU.