Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches (in this case, 3000 batches). You may need to use the repeat() function when building your dataset.
history = model.fit(
    train_generator,
    steps_per_epoch=100,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=50)
The code above is what raised the error. My setup: 2000 training samples, 1000 validation samples, batch_size=32.

Cause of the error:
Each epoch should cover the entire training dataset once; epochs is the number of times the whole dataset is used to train the model.
Because of memory limits, the data is fed in batches instead of being loaded all at once.
So the number of batches per epoch times the batch size must cover the total number of training samples, i.e. steps_per_epoch * batch_size >= num_training_examples.
Therefore steps_per_epoch = num_training_examples / batch_size, rounded up to the next integer (take the ceiling).
In my case, steps_per_epoch = ceil(2000 / 32) = 63, and likewise validation_steps = ceil(1000 / 32) = 32.
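The calculation above can be sketched in plain Python (the sample counts and batch size are the ones from this setup):

```python
import math

num_train, num_val, batch_size = 2000, 1000, 32

# one epoch must cover every sample, so round the batch count up
steps_per_epoch = math.ceil(num_train / batch_size)
validation_steps = math.ceil(num_val / batch_size)

print(steps_per_epoch, validation_steps)  # 63 32
```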
The code should therefore be modified to:
history = model.fit(
    train_generator,
    steps_per_epoch=63,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=32)
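Alternatively, as the error message itself suggests, the generator can be made to loop forever so it never runs out of batches. Here is a minimal pure-Python sketch of what repeat() does; repeat and batches below are illustrative stand-ins, not the TensorFlow API:

```python
import itertools

def batches():
    # stand-in for a generator that yields 3 batches, then stops
    yield from [0, 1, 2]

def repeat(gen_fn):
    # restart the finite generator indefinitely, like tf.data's Dataset.repeat()
    while True:
        yield from gen_fn()

# asking for more items than one pass provides now succeeds
first_seven = list(itertools.islice(repeat(batches), 7))
print(first_seven)  # [0, 1, 2, 0, 1, 2, 0]
```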