The tf.distribute APIs provide an easy way for users to scale their training from a single machine to multiple machines. When scaling their model, users also have to distribute their input across multiple devices. tf.distribute provides APIs with which you can automatically distribute your input across devices.

This guide will show you the different ways in which you can create distributed datasets and iterators using the tf.distribute APIs. Additionally, the following topics will be covered:
- Usage, sharding and batching options when using tf.distribute.Strategy.experimental_distribute_dataset and tf.distribute.Strategy.distribute_datasets_from_function.
- Different ways in which you can iterate over the distributed dataset.
- Differences between the tf.distribute.Strategy.experimental_distribute_dataset / tf.distribute.Strategy.distribute_datasets_from_function APIs and the tf.data API, as well as any limitations that users may come across in their usage.

This guide does not cover usage of distributed input with Keras APIs.
Distributed datasets
To use the tf.distribute APIs to scale, represent the input with tf.data.Dataset. tf.distribute works efficiently with tf.data.Dataset, for example through automatic prefetching onto each accelerator device and regular performance updates. If you have a use case for something other than tf.data.Dataset, refer to the tensor inputs section in this guide. In a non-distributed training loop, first create a tf.data.Dataset instance and then iterate over its elements. For example:
import tensorflow as tf
# Helper libraries
import numpy as np
import os
print(tf.__version__)
2023-12-07 02:57:52.101761: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-12-07 02:57:52.101810: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-12-07 02:57:52.103319: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2.15.0
# Simulate multiple CPUs with virtual devices
N_VIRTUAL_DEVICES = 2
physical_devices = tf.config.list_physical_devices("CPU")
tf.config.set_logical_device_configuration(
physical_devices[0], [tf.config.LogicalDeviceConfiguration() for _ in range(N_VIRTUAL_DEVICES)])
print("Available devices:")
for i, device in enumerate(tf.config.list_logical_devices()):
print("%d) %s" % (i, device))
Available devices:
0) LogicalDevice(name='/device:CPU:0', device_type='CPU')
1) LogicalDevice(name='/device:CPU:1', device_type='CPU')
2) LogicalDevice(name='/device:GPU:0', device_type='GPU')
3) LogicalDevice(name='/device:GPU:1', device_type='GPU')
4) LogicalDevice(name='/device:GPU:2', device_type='GPU')
5) LogicalDevice(name='/device:GPU:3', device_type='GPU')
global_batch_size = 16
# Create a tf.data.Dataset object.
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(global_batch_size)
@tf.function
def train_step(inputs):
features, labels = inputs
return labels - 0.3 * features
# Iterate over the dataset using the for..in construct.
for inputs in dataset:
print(train_step(inputs))
tf.Tensor(
[[0.7]
 [0.7]
 ...
 [0.7]], shape=(16, 1), dtype=float32)
... (the same (16, 1) tensor of 0.7s is printed for each of the first six full batches) ...
tf.Tensor(
[[0.7]
 [0.7]
 [0.7]
 [0.7]], shape=(4, 1), dtype=float32)
To allow users to adopt a tf.distribute strategy with minimal changes to their existing code, two APIs were introduced that distribute a tf.data.Dataset instance and return a distributed dataset object. A user can then iterate over this distributed dataset instance and train their model as before. Let us now look at the two APIs, tf.distribute.Strategy.experimental_distribute_dataset and tf.distribute.Strategy.distribute_datasets_from_function, in more detail:
tf.distribute.Strategy.experimental_distribute_dataset
Usage
This API takes a tf.data.Dataset instance as input and returns a tf.distribute.DistributedDataset instance. You should batch the input dataset with a value that is equal to the global batch size. This global batch size is the number of samples that you want to process across all devices in one step. You can iterate over this distributed dataset in a Pythonic fashion or create an iterator using iter. The returned object is not a tf.data.Dataset instance and does not support any other APIs that transform or inspect the dataset in any way. This is the recommended API if you do not have specific ways in which you want to shard your input over different replicas.
global_batch_size = 16
mirrored_strategy = tf.distribute.MirroredStrategy()
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(global_batch_size)
# Distribute input using the `experimental_distribute_dataset`.
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
# 1 global batch of data fed to the model in 1 step.
print(next(iter(dist_dataset)))
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3') (PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> })
Properties
Batching
tf.distribute rebatches the input tf.data.Dataset instance with a new batch size that is equal to the global batch size divided by the number of replicas in sync. The number of replicas in sync is equal to the number of devices that take part in the gradient allreduce during training. When a user calls next on the distributed iterator, a per-replica batch of data is returned on each replica. The cardinality of the rebatched dataset will always be a multiple of the number of replicas. Here are a couple of examples:
- tf.data.Dataset.range(6).batch(4, drop_remainder=False)
  - Without distribution:
    - Batch 1: [0, 1, 2, 3]
    - Batch 2: [4, 5]
  - With distribution over 2 replicas. The last batch ([4, 5]) is split between the 2 replicas.
    - Batch 1:
      - Replica 1: [0, 1]
      - Replica 2: [2, 3]
    - Batch 2:
      - Replica 1: [4]
      - Replica 2: [5]
- tf.data.Dataset.range(4).batch(4)
  - Without distribution:
    - Batch 1: [0, 1, 2, 3]
  - With distribution over 5 replicas:
    - Batch 1:
      - Replica 1: [0]
      - Replica 2: [1]
      - Replica 3: [2]
      - Replica 4: [3]
      - Replica 5: []
- tf.data.Dataset.range(8).batch(4)
  - Without distribution:
    - Batch 1: [0, 1, 2, 3]
    - Batch 2: [4, 5, 6, 7]
  - With distribution over 3 replicas:
    - Batch 1:
      - Replica 1: [0, 1]
      - Replica 2: [2, 3]
      - Replica 3: []
    - Batch 2:
      - Replica 1: [4, 5]
      - Replica 2: [6, 7]
      - Replica 3: []
Rebatching the dataset has a space complexity that increases linearly with the number of replicas, which means that for the multi-worker training use case the input pipeline can run into OOM errors.
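The short sketch below is not part of the original notebook; it simply makes the rebatching behavior described above visible by checking the per-replica shapes of the first distributed batch. It reuses tf.distribute.MirroredStrategy and assumes the multi-device setup configured at the top of this guide:

# Illustrative sketch: observe the per-replica batch size after rebatching.
sketch_strategy = tf.distribute.MirroredStrategy()
sketch_global_batch_size = 8

sketch_dataset = tf.data.Dataset.range(16).batch(sketch_global_batch_size)
sketch_dist_dataset = sketch_strategy.experimental_distribute_dataset(sketch_dataset)

# The first global batch, unpacked into its per-replica components.
first_batch = next(iter(sketch_dist_dataset))
for replica_value in sketch_strategy.experimental_local_results(first_batch):
    # Each replica receives global_batch_size / num_replicas_in_sync elements.
    print(replica_value.shape)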
Sharding
tf.distribute also autoshards the input dataset in multi-worker training with MultiWorkerMirroredStrategy and TPUStrategy. Each dataset is created on the CPU device of the worker. Autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (provided the right tf.data.experimental.AutoShardPolicy is set). This ensures that at each step, a global batch size of non-overlapping dataset elements is processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Note that there is no autosharding in multi-worker training with ParameterServerStrategy; for more information on dataset creation with that strategy, refer to the ParameterServerStrategy tutorial.
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(64).batch(16)
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.DATA
dataset = dataset.with_options(options)
There are three different options that you can set for tf.data.experimental.AutoShardPolicy:
- AUTO: This is the default option, which means an attempt will be made to shard by FILE. The attempt to shard by FILE fails if a file-based dataset is not detected. tf.distribute will then fall back to sharding by DATA. Note that if the input dataset is file-based but the number of files is less than the number of workers, an InvalidArgumentError will be raised. If this happens, explicitly set the policy to AutoShardPolicy.DATA, or split your input source into smaller files so that the number of files is greater than the number of workers.
- FILE: This is the option if you want to shard the input files over all the workers. You should use this option if the number of input files is much larger than the number of workers and the data in the files is evenly distributed. The downside of this option is having idle workers if the data in the files is not evenly distributed. If the number of files is less than the number of workers, an InvalidArgumentError will be raised. If this happens, explicitly set the policy to AutoShardPolicy.DATA. For example, let us distribute 2 files over 2 workers with 1 replica each. File 1 contains [0, 1, 2, 3, 4, 5] and File 2 contains [6, 7, 8, 9, 10, 11]. Let the total number of replicas in sync be 2 and the global batch size be 4.
  - Worker 0:
    - Batch 1 = Replica 1: [0, 1]
    - Batch 2 = Replica 1: [2, 3]
    - Batch 3 = Replica 1: [4]
    - Batch 4 = Replica 1: [5]
  - Worker 1:
    - Batch 1 = Replica 2: [6, 7]
    - Batch 2 = Replica 2: [8, 9]
    - Batch 3 = Replica 2: [10]
    - Batch 4 = Replica 2: [11]
- DATA: This autoshards the elements across all the workers. Each worker reads the entire dataset and only processes the shard assigned to it; all other shards are discarded. This is generally used when the number of input files is less than the number of workers and you want better sharding of data across all workers. The downside is that the entire dataset is read on each worker. For example, let us distribute 1 file over 2 workers. File 1 contains [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Let the total number of replicas in sync be 2.
  - Worker 0:
    - Batch 1 = Replica 1: [0, 1]
    - Batch 2 = Replica 1: [4, 5]
    - Batch 3 = Replica 1: [8, 9]
  - Worker 1:
    - Batch 1 = Replica 2: [2, 3]
    - Batch 2 = Replica 2: [6, 7]
    - Batch 3 = Replica 2: [10, 11]
- OFF: If you turn off autosharding, each worker processes all the data (a snippet showing how to set this policy explicitly follows this list). For example, let us distribute 1 file over 2 workers. File 1 contains [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Let the total number of replicas in sync be 2. Then each worker sees the following distribution:
  - Worker 0:
    - Batch 1 = Replica 1: [0, 1]
    - Batch 2 = Replica 1: [2, 3]
    - Batch 3 = Replica 1: [4, 5]
    - Batch 4 = Replica 1: [6, 7]
    - Batch 5 = Replica 1: [8, 9]
    - Batch 6 = Replica 1: [10, 11]
  - Worker 1:
    - Batch 1 = Replica 2: [0, 1]
    - Batch 2 = Replica 2: [2, 3]
    - Batch 3 = Replica 2: [4, 5]
    - Batch 4 = Replica 2: [6, 7]
    - Batch 5 = Replica 2: [8, 9]
    - Batch 6 = Replica 2: [10, 11]
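As a hedged illustration (not from the original notebook), turning autosharding off uses the same tf.data.Options mechanism shown earlier in this section, only with a different policy value:

# Explicitly disable autosharding so every worker sees the full dataset.
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
dataset = dataset.with_options(options)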
Prefetching
By default, tf.distribute adds a prefetch transformation at the end of the user-provided tf.data.Dataset instance. The argument to the prefetch transformation, buffer_size, is equal to the number of replicas in sync.
tf.distribute.Strategy.distribute_datasets_from_function
Usage
This API takes an input function and returns a tf.distribute.DistributedDataset instance. The input function that users pass in has a tf.distribute.InputContext argument and should return a tf.data.Dataset instance. With this API, tf.distribute does not make any further changes to the user's tf.data.Dataset instance returned from the input function. It is the responsibility of the user to batch and shard the dataset. tf.distribute calls the input function on the CPU device of each of the workers. Apart from allowing users to specify their own batching and sharding logic, this API also exhibits better scalability and performance than tf.distribute.Strategy.experimental_distribute_dataset when used for multi-worker training.
mirrored_strategy = tf.distribute.MirroredStrategy()
def dataset_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(global_batch_size)
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(64).batch(16)
dataset = dataset.shard(
input_context.num_input_pipelines, input_context.input_pipeline_id)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(2) # This prefetches 2 batches per device.
return dataset
dist_dataset = mirrored_strategy.distribute_datasets_from_function(dataset_fn)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
Properties
Batching
The tf.data.Dataset instance that is the return value of the input function should be batched using the per-replica batch size. The per-replica batch size is the global batch size divided by the number of replicas that take part in sync training. This is because tf.distribute calls the input function on the CPU device of each of the workers. The dataset that is created on a given worker should be ready to use by all the replicas on that worker.
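As a small, hedged sketch (not from the original notebook, and using an illustrative function name), the per-replica batch size inside the input function can be obtained from the tf.distribute.InputContext, which is equivalent to dividing the global batch size by the number of replicas in sync:

def per_replica_batched_dataset_fn(input_context):
    # get_per_replica_batch_size divides the global batch size by
    # input_context.num_replicas_in_sync.
    per_replica_batch_size = input_context.get_per_replica_batch_size(global_batch_size)
    assert per_replica_batch_size == global_batch_size // input_context.num_replicas_in_sync
    return tf.data.Dataset.from_tensors(([1.], [1.])).repeat(64).batch(per_replica_batch_size)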
Sharding
The tf.distribute.InputContext object that is created by tf.distribute under the hood is implicitly passed as an argument to the user's input function. It has information about the number of workers, the current worker ID, and so on. This input function can handle sharding according to policies set by the user, using these properties that are part of the tf.distribute.InputContext object.
Prefetching
tf.distribute does not add a prefetch transformation at the end of the tf.data.Dataset returned by the user-provided input function, so you call Dataset.prefetch explicitly in the example above.
Distributed iterators
Similar to non-distributed tf.data.Dataset instances, you need to create an iterator on a tf.distribute.DistributedDataset instance to iterate over it and access the elements in the tf.distribute.DistributedDataset. The following are the ways in which you can create a tf.distribute.DistributedIterator and use it to train your model:
Usage
Use a Pythonic for loop construct
You can use a user-friendly Pythonic loop to iterate over the tf.distribute.DistributedDataset. The elements returned from the tf.distribute.DistributedIterator can be a single tf.Tensor or a tf.distribute.DistributedValues that contains a value per replica. Placing the loop inside a tf.function will give a performance boost. However, break and return are currently not supported for a loop over a tf.distribute.DistributedDataset that is placed inside of a tf.function.
global_batch_size = 16
mirrored_strategy = tf.distribute.MirroredStrategy()
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
@tf.function
def train_step(inputs):
features, labels = inputs
return labels - 0.3 * features
for x in dist_dataset:
# train_step trains the model using the dataset elements
loss = mirrored_strategy.run(train_step, args=(x,))
print("Loss is ", loss)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
Loss is  PerReplica:{
  0: tf.Tensor(
[[0.7]
 [0.7]
 [0.7]
 [0.7]], shape=(4, 1), dtype=float32),
  1: tf.Tensor(..., shape=(4, 1), dtype=float32),
  2: tf.Tensor(..., shape=(4, 1), dtype=float32),
  3: tf.Tensor(..., shape=(4, 1), dtype=float32)
}
... (the same per-replica loss of 0.7 repeats for each full global batch) ...
Loss is  PerReplica:{
  0: tf.Tensor([[0.7]], shape=(1, 1), dtype=float32),
  1: tf.Tensor([[0.7]], shape=(1, 1), dtype=float32),
  2: tf.Tensor([[0.7]], shape=(1, 1), dtype=float32),
  3: tf.Tensor([[0.7]], shape=(1, 1), dtype=float32)
}
Use iter to create an explicit iterator
To iterate over the elements in a tf.distribute.DistributedDataset instance, you can create a tf.distribute.DistributedIterator by calling the iter API on it. With an explicit iterator, you can iterate for a fixed number of steps. To get the next element from a tf.distribute.DistributedIterator instance dist_iterator, you can call next(dist_iterator), dist_iterator.get_next(), or dist_iterator.get_next_as_optional(). The former two are essentially the same:
num_epochs = 10
steps_per_epoch = 5
for epoch in range(num_epochs):
dist_iterator = iter(dist_dataset)
for step in range(steps_per_epoch):
# train_step trains the model using the dataset elements
loss = mirrored_strategy.run(train_step, args=(next(dist_iterator),))
# which is the same as
# loss = mirrored_strategy.run(train_step, args=(dist_iterator.get_next(),))
print("Loss is ", loss)
Loss is  PerReplica:{
  0: tf.Tensor(
[[0.7]
 [0.7]
 [0.7]
 [0.7]], shape=(4, 1), dtype=float32),
  1: tf.Tensor(..., shape=(4, 1), dtype=float32),
  2: tf.Tensor(..., shape=(4, 1), dtype=float32),
  3: tf.Tensor(..., shape=(4, 1), dtype=float32)
}
... (the same per-replica loss of 0.7 repeats for every step of every epoch) ...
With next or tf.distribute.DistributedIterator.get_next, if the tf.distribute.DistributedIterator has reached its end, an OutOfRange error is thrown. The client can catch the error on the Python side and continue doing other work such as checkpointing and evaluation. However, this does not work if you are using a host training loop (that is, running multiple steps per tf.function), which looks like this:
@tf.function
def train_fn(iterator):
for _ in tf.range(steps_per_loop):
strategy.run(step_fn, args=(next(iterator),))
This train_fn contains multiple steps by wrapping the step body inside a tf.range. In this case, different iterations in the loop with no dependency could start in parallel, so an OutOfRange error can be triggered in later iterations before the computation of earlier iterations finishes. Once an OutOfRange error is thrown, all the ops in the function are terminated right away. If this is a case that you would like to avoid, an alternative that does not throw an OutOfRange error is tf.distribute.DistributedIterator.get_next_as_optional. get_next_as_optional returns a tf.experimental.Optional that contains the next element, or no value if the tf.distribute.DistributedIterator has reached its end.
# You can break the loop with `get_next_as_optional` by checking if the `Optional` contains a value
global_batch_size = 4
steps_per_loop = 5
strategy = tf.distribute.MirroredStrategy()
dataset = tf.data.Dataset.range(9).batch(global_batch_size)
distributed_iterator = iter(strategy.experimental_distribute_dataset(dataset))
@tf.function
def train_fn(distributed_iterator):
for _ in tf.range(steps_per_loop):
optional_data = distributed_iterator.get_next_as_optional()
if not optional_data.has_value():
break
per_replica_results = strategy.run(lambda x: x, args=(optional_data.get_value(),))
tf.print(strategy.experimental_local_results(per_replica_results))
train_fn(distributed_iterator)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3') ([0], [1], [2], [3]) ([4], [5], [6], [7]) ([8], [], [], [])
Use the element_spec property
If you pass the elements of a distributed dataset to a tf.function and want a tf.TypeSpec guarantee, you can specify the input_signature argument of the tf.function. The output of a distributed dataset is tf.distribute.DistributedValues, which can represent the input to a single device or to multiple devices. To get the tf.TypeSpec corresponding to this distributed value, you can use tf.distribute.DistributedDataset.element_spec or tf.distribute.DistributedIterator.element_spec.
global_batch_size = 16
epochs = 5
steps_per_epoch = 5
mirrored_strategy = tf.distribute.MirroredStrategy()
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(global_batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
@tf.function(input_signature=[dist_dataset.element_spec])
def train_step(per_replica_inputs):
def step_fn(inputs):
return 2 * inputs
return mirrored_strategy.run(step_fn, args=(per_replica_inputs,))
for _ in range(epochs):
iterator = iter(dist_dataset)
for _ in range(steps_per_epoch):
output = train_step(next(iterator))
tf.print(output)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
(PerReplica:{
  0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy=array([[1.], [1.], [1.], [1.]], dtype=float32)>,
  1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy=array([[1.], [1.], [1.], [1.]], dtype=float32)>,
  2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy=array([[1.], [1.], [1.], [1.]], dtype=float32)>,
  3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy=array([[1.], [1.], [1.], [1.]], dtype=float32)>
}, PerReplica:{
  0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy=array([[1.], [1.], [1.], [1.]], dtype=float32)>,
  1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy=array([[1.], [1.], [1.], [1.]], dtype=float32)>,
  2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy=array([[1.], [1.], [1.], [1.]], dtype=float32)>,
  3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy=array([[1.], [1.], [1.], [1.]], dtype=float32)>
})
... (the same pair of PerReplica values repeats for every step of every epoch) ...
numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }) (PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }) (PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], 
[1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }) (PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }) (PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], 
dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }) (PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> }, PerReplica:{ 0: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 1: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 2: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)>, 3: <tf.Tensor: shape=(4, 1), dtype=float32, numpy= array([[1.], [1.], [1.], [1.]], dtype=float32)> })
Data preprocessing
So far, you have learned how to distribute a `tf.data.Dataset`. Before the data is ready for the model, however, it needs to be preprocessed, for example by cleansing, transforming, and augmenting it. Two sets of handy tools for that are:
- Keras preprocessing layers: a set of Keras layers that let developers build Keras-native input processing pipelines. Some Keras preprocessing layers contain non-trainable state, which can be set on initialization or `adapt`ed (refer to the `adapt` section of the Keras preprocessing layers guide); a short sketch follows this list. When distributing stateful preprocessing layers, the state should be replicated to all workers. To use these layers, you can either make them part of the model or apply them to the datasets.
- TensorFlow Transform (tf.Transform): a TensorFlow library that lets you define both instance-level and full-pass data transformations through data preprocessing pipelines. TensorFlow Transform has two phases. The first is the Analyze phase, where the raw training data is analyzed in a full-pass process to compute the statistics needed for the transformations, and the transformation logic is generated as instance-level operations. The second is the Transform phase, where the raw training data is transformed in an instance-level process.
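As a minimal sketch of the first option (the sample data below is hypothetical), a stateful Keras preprocessing layer computes its non-trainable state with `adapt` and can then be applied either to the dataset or inside the model:

```python
import tensorflow as tf

# Hypothetical sample data; in practice this would be (a subset of) your training data.
sample_data = tf.constant([[1.0], [2.0], [3.0], [4.0]])

# `adapt` runs a full pass over the data to compute the layer's mean/variance state.
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(sample_data)

# Option 1: apply the layer to the dataset.
dataset = tf.data.Dataset.from_tensor_slices(sample_data).batch(2)
dataset = dataset.map(normalizer)

# Option 2: make the layer part of the model.
model = tf.keras.Sequential([normalizer, tf.keras.layers.Dense(1)])
```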
Keras preprocessing layers versus TensorFlow Transform
Both TensorFlow Transform and Keras preprocessing layers provide a way to split preprocessing out during training and bundle it with the model during inference, reducing training/serving skew.
TensorFlow Transform, being deeply integrated with TFX, offers a scalable map-reduce solution for analyzing and transforming datasets of any size in a job that runs separately from the training pipeline. If you need to run an analysis on a dataset that cannot fit on a single machine, TensorFlow Transform should be your first choice.
Keras preprocessing layers are geared more towards preprocessing applied during training, after data has been read from disk. They fit seamlessly with model development in the Keras library. They support analysis of a smaller dataset via `adapt`, and they support use cases such as image data augmentation, where each pass over the input dataset yields different examples for training.
The two libraries can also be mixed, with TensorFlow Transform handling the analysis and static transformations of the input data, and Keras preprocessing layers handling train-time transformations (for example, one-hot encoding or data augmentation).
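A rough, hedged sketch of this mix (`working_dir`, the `"image"` feature name, and the surrounding `dataset` are assumptions for illustration): the `tf.Transform` output supplies the static transformations, while a Keras layer adds train-time augmentation.

```python
import tensorflow as tf
import tensorflow_transform as tft

# Layer built from a previously computed tf.Transform output (static transformations).
tf_transform_output = tft.TFTransformOutput(working_dir)
tft_layer = tf_transform_output.transform_features_layer()

# Keras preprocessing layer for train-time augmentation.
augment = tf.keras.layers.RandomFlip("horizontal")

def preprocess(features, label):
  features = tft_layer(features)                   # static, analyzed transformations
  features["image"] = augment(features["image"])   # different result on every pass
  return features, label

dataset = dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
```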
Best practice with tf.distribute
Working with either tool involves initializing the transformation logic that will be applied to the data, which may create TensorFlow resources. These resources or states should be replicated to all workers to save inter-worker or worker-coordinator communication. To do so, it is recommended that you create Keras preprocessing layers, `tft.TFTransformOutput.transform_features_layer`, or `tft.TransformFeaturesLayer` under `tf.distribute.Strategy.scope()`, just as you would for any other Keras layers.
The examples below demonstrate usage of the `tf.distribute.Strategy` API with the high-level Keras `Model.fit` API and with a custom training loop, separately.
Extra notes for Keras preprocessing layer users
Preprocessing layers and large vocabularies
When dealing with large vocabularies (over one gigabyte) in a multi-worker setting (for example, with `tf.distribute.MultiWorkerMirroredStrategy`, `tf.distribute.experimental.ParameterServerStrategy`, or `tf.distribute.TPUStrategy`), it is recommended to save the vocabulary to a static file accessible from all workers (for example, with Cloud Storage). This reduces the time spent replicating the vocabulary to all workers during training.
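A minimal sketch of this, where the Cloud Storage path is a placeholder and `strategy` is assumed to be one of the multi-worker strategies above:

```python
# Hypothetical vocabulary file on shared storage, readable by every worker.
VOCAB_FILE = "gs://my-bucket/vocabularies/tokens.txt"

with strategy.scope():
  # The layer reads the vocabulary from the file instead of holding it in memory
  # on one machine and copying it to each worker.
  lookup_layer = tf.keras.layers.StringLookup(vocabulary=VOCAB_FILE)
```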
Preprocessing in the `tf.data` pipeline versus preprocessing in the model
While Keras preprocessing layers can either be applied as part of the model or applied directly to a `tf.data.Dataset`, each option comes with its own edge (a short sketch of both follows this list):

- Applying the preprocessing layers within the model makes your model portable and helps reduce the training/serving skew. (For details, refer to the "Benefits of doing preprocessing inside the model at inference time" section of the Working with preprocessing layers guide.)
- Applying them within the `tf.data` pipeline allows prefetching or offloading to the CPU, which generally gives better performance when using accelerators.
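Here is a rough sketch of the two placements; the model architecture and the `dataset` of (image, label) pairs are assumptions for illustration:

```python
rescale = tf.keras.layers.Rescaling(1.0 / 255)

# Option 1: preprocessing inside the model (portable; less training/serving skew).
model = tf.keras.Sequential([
    rescale,
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Option 2: preprocessing in the tf.data pipeline (can run on the CPU and be prefetched).
dataset = dataset.map(lambda x, y: (rescale(x), y)).prefetch(tf.data.AUTOTUNE)
```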
When running on one or more TPUs, you should almost always place Keras preprocessing layers in the `tf.data` pipeline, because not all layers support TPUs and string ops do not execute on TPUs. (The two exceptions are `tf.keras.layers.Normalization` and `tf.keras.layers.Rescaling`, which run fine on TPUs and are commonly used as the first layer in an image model.)
Preprocessing with `Model.fit`
When using Keras `Model.fit`, you do not need to distribute data with `tf.distribute.Strategy.experimental_distribute_dataset` or `tf.distribute.Strategy.distribute_datasets_from_function` yourself. Check out the Working with preprocessing layers guide and the Distributed training with Keras guide for details. A shortened example may look as follows:
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  # Create the layer(s) under scope.
  integer_preprocessing_layer = tf.keras.layers.IntegerLookup(vocabulary=FILE_PATH)
  model = ...
  model.compile(...)
dataset = dataset.map(lambda x, y: (integer_preprocessing_layer(x), y))
model.fit(dataset)
Users of `tf.distribute.experimental.ParameterServerStrategy` with the `Model.fit` API need to use a `tf.keras.utils.experimental.DatasetCreator` as the input. (Refer to the Parameter server training guide for more information.)
strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver,
    variable_partitioner=variable_partitioner)

with strategy.scope():
  preprocessing_layer = tf.keras.layers.StringLookup(vocabulary=FILE_PATH)
  model = ...
  model.compile(...)

def dataset_fn(input_context):
  ...
  dataset = dataset.map(preprocessing_layer)
  ...
  return dataset

dataset_creator = tf.keras.utils.experimental.DatasetCreator(dataset_fn)
model.fit(dataset_creator, epochs=5, steps_per_epoch=20, callbacks=callbacks)
Preprocessing with a custom training loop
When writing a custom training loop, you will distribute your data with either the `tf.distribute.Strategy.experimental_distribute_dataset` API or the `tf.distribute.Strategy.distribute_datasets_from_function` API. If you distribute your dataset through `tf.distribute.Strategy.experimental_distribute_dataset`, applying these preprocessing APIs in your data pipeline will lead the resources to be automatically co-located with the data pipeline to avoid remote resource access. Thus the examples here all use `tf.distribute.Strategy.distribute_datasets_from_function`, in which case it is crucial to place the initialization of these APIs under `strategy.scope()` for efficiency:
strategy = tf.distribute.MirroredStrategy()
vocab = ["a", "b", "c", "d", "f"]

with strategy.scope():
  # Create the layer(s) under scope.
  layer = tf.keras.layers.StringLookup(vocabulary=vocab)

def dataset_fn(input_context):
  # a tf.data.Dataset
  dataset = tf.data.Dataset.from_tensor_slices(["a", "c", "e"]).repeat()

  # Customize your batching, sharding, prefetching, etc.
  global_batch_size = 4
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  dataset = dataset.batch(batch_size)
  dataset = dataset.shard(
      input_context.num_input_pipelines,
      input_context.input_pipeline_id)

  # Apply the preprocessing layer(s) to the tf.data.Dataset
  def preprocess_with_kpl(input):
    return layer(input)

  processed_ds = dataset.map(preprocess_with_kpl)
  return processed_ds

distributed_dataset = strategy.distribute_datasets_from_function(dataset_fn)

# Print out a few example batches.
distributed_dataset_iterator = iter(distributed_dataset)
for _ in range(3):
  print(next(distributed_dataset_iterator))
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3') PerReplica:{ 0: tf.Tensor([1], shape=(1,), dtype=int64), 1: tf.Tensor([3], shape=(1,), dtype=int64), 2: tf.Tensor([0], shape=(1,), dtype=int64), 3: tf.Tensor([1], shape=(1,), dtype=int64) } PerReplica:{ 0: tf.Tensor([3], shape=(1,), dtype=int64), 1: tf.Tensor([0], shape=(1,), dtype=int64), 2: tf.Tensor([1], shape=(1,), dtype=int64), 3: tf.Tensor([3], shape=(1,), dtype=int64) } PerReplica:{ 0: tf.Tensor([0], shape=(1,), dtype=int64), 1: tf.Tensor([1], shape=(1,), dtype=int64), 2: tf.Tensor([3], shape=(1,), dtype=int64), 3: tf.Tensor([0], shape=(1,), dtype=int64) }
Note that if you are training with `tf.distribute.experimental.ParameterServerStrategy`, you will also call `tf.distribute.experimental.coordinator.ClusterCoordinator.create_per_worker_dataset`:
@tf.function
def per_worker_dataset_fn():
  return strategy.distribute_datasets_from_function(dataset_fn)

per_worker_dataset = coordinator.create_per_worker_dataset(per_worker_dataset_fn)
per_worker_iterator = iter(per_worker_dataset)
For TensorFlow Transform, as mentioned above, the Analyze phase is done separately from training and is thus omitted here; refer to the tutorial for a detailed how-to. Usually, this phase includes creating a `tf.Transform` preprocessing function and transforming the data in an Apache Beam pipeline with this preprocessing function. At the end of the Analyze phase, the output can be exported as a TensorFlow graph that you can use for both training and serving. The example here covers only the training pipeline portion:
with strategy.scope():
  # working_dir contains the tf.Transform output.
  tf_transform_output = tft.TFTransformOutput(working_dir)
  # Loading from working_dir to create a Keras layer for applying the tf.Transform output to data
  tft_layer = tf_transform_output.transform_features_layer()
  ...

def dataset_fn(input_context):
  ...
  dataset.map(tft_layer, num_parallel_calls=tf.data.AUTOTUNE)
  ...
  return dataset

distributed_dataset = strategy.distribute_datasets_from_function(dataset_fn)
Partial batches
Partial batches are encountered when 1) `tf.data.Dataset` instances that users create contain batch sizes that are not evenly divisible by the number of replicas, or 2) the cardinality of the dataset instance is not divisible by the batch size. This means that when the dataset is distributed over multiple replicas, the `next` call on some iterators will result in a `tf.errors.OutOfRangeError`. To handle this use case, `tf.distribute` returns dummy batches of batch size `0` on replicas that have no more data to process.
For the single-worker case, if data is not returned by the `next` call on the iterator, dummy batches of batch size 0 are created and used alongside the real data in the dataset. In the case of partial batches, the last global batch of data will contain real data alongside dummy batches. The stopping condition for processing data now checks whether any of the replicas have data: if there is no data on any replica, a `tf.errors.OutOfRangeError` is raised.
For the multi-worker case, a boolean representing the presence of data on each worker is aggregated using cross-replica communication and used to identify whether all workers have finished processing the distributed dataset. Since this involves cross-worker communication, it comes with some performance penalty.
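A small sketch of the single-worker behavior (assuming a `MirroredStrategy` over several devices): 10 elements batched by 4 leave a final global batch of 2, so on the last step some replicas receive empty (batch size 0) batches.

```python
strategy = tf.distribute.MirroredStrategy()

dataset = tf.data.Dataset.range(10).batch(4)  # global batches of 4, 4, and then 2
dist_dataset = strategy.experimental_distribute_dataset(dataset)

for batch in dist_dataset:
  # On the last step, some per-replica components may have shape (0,).
  print([t.shape for t in strategy.experimental_local_results(batch)])
```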
Caveats
- When using the `tf.distribute.Strategy.experimental_distribute_dataset` API in a multi-worker setup, you pass a `tf.data.Dataset` that reads from files. If `tf.data.experimental.AutoShardPolicy` is set to `AUTO` or `FILE`, the actual per-step batch size may be smaller than the one you defined as the global batch size. This can happen when the remaining elements in a file are fewer than the global batch size. You can either exhaust the dataset without depending on the number of steps to run, or set `tf.data.experimental.AutoShardPolicy` to `DATA` to work around it (a short sketch appears at the end of this section).
- Stateful dataset transformations are currently not supported with `tf.distribute`, and any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a `map_fn` that uses `tf.random.uniform` to rotate an image, then you have a dataset graph that depends on state (namely the random seed) on the local machine where the Python process is being executed.
- Experimental `tf.data.experimental.OptimizationOptions` that are disabled by default can, in certain contexts (such as when used together with `tf.distribute`), cause a performance degradation. You should only enable them after you have validated that they benefit the performance of your workload in a distributed setting.
- Refer to this guide for how to optimize your input pipeline with `tf.data` in general. A few additional tips:
  - If you have multiple workers and are using `tf.data.Dataset.list_files` to create a dataset from all files matching one or more glob patterns, remember to set the `seed` argument or set `shuffle=False` so that each worker shards the files consistently.
  - If your input pipeline includes both shuffling the data at the record level and parsing it, then unless the unparsed data is significantly larger than the parsed data (which is usually not the case), shuffle first and then parse, as shown in the following example. This may benefit memory usage and performance.
d = tf.data.Dataset.list_files(pattern, shuffle=False)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
                 cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
  - `tf.data.Dataset.shuffle(buffer_size, seed=None, reshuffle_each_iteration=None)` maintains an internal buffer of `buffer_size` elements, so reducing `buffer_size` can alleviate OOM issues.
- The order in which data is processed by the workers when using `tf.distribute.Strategy.experimental_distribute_dataset` or `tf.distribute.Strategy.distribute_datasets_from_function` is not guaranteed. Ordering typically matters if you are using `tf.distribute` to scale prediction. You can, however, insert an index for each element in the batch and order the outputs accordingly. The following snippet is an example of how to order outputs.
mirrored_strategy = tf.distribute.MirroredStrategy()
dataset_size = 24
batch_size = 6
dataset = tf.data.Dataset.range(dataset_size).enumerate().batch(batch_size)
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)

def predict(index, inputs):
  outputs = 2 * inputs
  return index, outputs

result = {}
for index, inputs in dist_dataset:
  output_index, outputs = mirrored_strategy.run(predict, args=(index, inputs))
  indices = list(mirrored_strategy.experimental_local_results(output_index))
  rindices = []
  for a in indices:
    rindices.extend(a.numpy())
  outputs = list(mirrored_strategy.experimental_local_results(outputs))
  routputs = []
  for a in outputs:
    routputs.extend(a.numpy())
  for i, value in zip(rindices, routputs):
    result[i] = value

print(result)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3') WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. {0: 0, 1: 2, 2: 4, 3: 6, 4: 8, 5: 10, 6: 12, 7: 14, 8: 16, 9: 18, 10: 20, 11: 22, 12: 24, 13: 26, 14: 28, 15: 30, 16: 32, 17: 34, 18: 36, 19: 38, 20: 40, 21: 42, 22: 44, 23: 46}
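As referenced in the first caveat above, here is a minimal sketch (assuming `dataset` and `strategy` already exist) of switching the auto-shard policy to `DATA`:

```python
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.DATA)

# Shard by data instead of by file, so the per-step batch size is not reduced
# when a file has fewer remaining elements than the global batch size.
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
```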
Tensor inputs instead of tf.data
Sometimes users cannot use a `tf.data.Dataset` to represent their input, and subsequently cannot use the above-mentioned APIs to distribute the dataset to multiple devices. In such cases, you can use raw tensors or inputs from a generator.
Use `experimental_distribute_values_from_function` for arbitrary tensor inputs
`strategy.run` accepts `tf.distribute.DistributedValues`, which is the output of `next(iterator)`. To pass tensor values, use `tf.distribute.Strategy.experimental_distribute_values_from_function` to construct `tf.distribute.DistributedValues` from raw tensors. With this option, the user has to specify their own batching and sharding logic in the input function, which can be done using the `tf.distribute.experimental.ValueContext` input object.
mirrored_strategy = tf.distribute.MirroredStrategy()

def value_fn(ctx):
  return tf.constant(ctx.replica_id_in_sync_group)

distributed_values = mirrored_strategy.experimental_distribute_values_from_function(value_fn)
for _ in range(4):
  result = mirrored_strategy.run(lambda x: x, args=(distributed_values,))
  print(result)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3') WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance. PerReplica:{ 0: tf.Tensor(0, shape=(), dtype=int32), 1: tf.Tensor(1, shape=(), dtype=int32), 2: tf.Tensor(2, shape=(), dtype=int32), 3: tf.Tensor(3, shape=(), dtype=int32) } PerReplica:{ 0: tf.Tensor(0, shape=(), dtype=int32), 1: tf.Tensor(1, shape=(), dtype=int32), 2: tf.Tensor(2, shape=(), dtype=int32), 3: tf.Tensor(3, shape=(), dtype=int32) } PerReplica:{ 0: tf.Tensor(0, shape=(), dtype=int32), 1: tf.Tensor(1, shape=(), dtype=int32), 2: tf.Tensor(2, shape=(), dtype=int32), 3: tf.Tensor(3, shape=(), dtype=int32) } PerReplica:{ 0: tf.Tensor(0, shape=(), dtype=int32), 1: tf.Tensor(1, shape=(), dtype=int32), 2: tf.Tensor(2, shape=(), dtype=int32), 3: tf.Tensor(3, shape=(), dtype=int32) }
Use `tf.data.Dataset.from_generator` if your input is from a generator
If you have a generator function that you want to use, you can create a `tf.data.Dataset` instance using the `from_generator` API.
mirrored_strategy = tf.distribute.MirroredStrategy()

def input_gen():
  while True:
    yield np.random.rand(4)

# use Dataset.from_generator
dataset = tf.data.Dataset.from_generator(
    input_gen, output_types=(tf.float32), output_shapes=tf.TensorShape([4]))
dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
iterator = iter(dist_dataset)
for _ in range(4):
  result = mirrored_strategy.run(lambda x: x, args=(next(iterator),))
  print(result)
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3') PerReplica:{ 0: tf.Tensor([0.795073], shape=(1,), dtype=float32), 1: tf.Tensor([0.4941732], shape=(1,), dtype=float32), 2: tf.Tensor([0.51117146], shape=(1,), dtype=float32), 3: tf.Tensor([0.791901], shape=(1,), dtype=float32) } PerReplica:{ 0: tf.Tensor([0.10990978], shape=(1,), dtype=float32), 1: tf.Tensor([0.61591166], shape=(1,), dtype=float32), 2: tf.Tensor([0.17349982], shape=(1,), dtype=float32), 3: tf.Tensor([0.8937937], shape=(1,), dtype=float32) } PerReplica:{ 0: tf.Tensor([0.97211426], shape=(1,), dtype=float32), 1: tf.Tensor([0.30425492], shape=(1,), dtype=float32), 2: tf.Tensor([0.80144566], shape=(1,), dtype=float32), 3: tf.Tensor([0.25493157], shape=(1,), dtype=float32) } PerReplica:{ 0: tf.Tensor([0.07450782], shape=(1,), dtype=float32), 1: tf.Tensor([0.23319475], shape=(1,), dtype=float32), 2: tf.Tensor([0.22552523], shape=(1,), dtype=float32), 3: tf.Tensor([0.7449827], shape=(1,), dtype=float32) }