Saving checkpoints with tf.train.MonitoredTrainingSession

import tensorflow as tf

tf.reset_default_graph()
# Create (or fetch) the global_step variable that the checkpoint saver tracks.
global_step = tf.train.get_or_create_global_step()
step = tf.assign_add(global_step, 1)

# Point checkpoint_dir at the checkpoint directory; with save_checkpoint_secs=2
# a checkpoint is written every 2 seconds.
with tf.train.MonitoredTrainingSession(checkpoint_dir='D:/data_savelog/checkpoints',
                                       save_checkpoint_secs=2) as sess:
    print(sess.run(global_step))
    # No stop hook is registered, so this loops until interrupted.
    while not sess.should_stop():
        i = sess.run(step)
        print(i)

