7. TensorBoard Visualization: TensorBoard's Data Types, and Inspecting Data While the Network Runs

1. TensorBoard can record and display the following kinds of data (a minimal sketch of the corresponding summary ops follows the list):

  1. Scalars
  2. Images
  3. Audio
  4. Graph
  5. Distributions
  6. Histograms
  7. Embeddings
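
As a rough sketch, each of these tabs is fed by a corresponding op in the tf.summary module; the tensors below (value, img_batch, audio_batch) are hypothetical placeholders, not part of the program later in this post:

import tensorflow as tf

value = tf.random_normal([100])        # hypothetical tensor to monitor
img_batch = tf.zeros([4, 28, 28, 1])   # NHWC; channels must be 1, 3, or 4
audio_batch = tf.zeros([2, 44100, 1])  # [batch, frames, channels]

tf.summary.scalar('my_scalar', tf.reduce_mean(value))         # Scalars tab
tf.summary.image('my_images', img_batch)                      # Images tab
tf.summary.audio('my_audio', audio_batch, sample_rate=44100)  # Audio tab
tf.summary.histogram('my_hist', value)  # Histograms AND Distributions tabs
# The Graphs tab is populated by passing sess.graph to tf.summary.FileWriter,
# and Embeddings are configured separately via the projector plugin.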

2. The TensorBoard visualization workflow

  1. First, build a graph: this is the graph whose data you want to inspect.
  2. Decide which nodes in the graph should have summary operations attached to record information.

Use tf.summary.scalar to record scalars
Use tf.summary.histogram to record a histogram of a tensor's values (TensorBoard's Distributions tab is derived from this same histogram data; there is no separate tf.summary.distribution op)
Use tf.summary.image to record image data
….
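
For example, here is a minimal sketch of attaching summaries at a chosen node (the names are illustrative): summary ops created inside a tf.name_scope are grouped under that scope in the TensorBoard UI.

import tensorflow as tf

with tf.name_scope('layer'):
    W = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1), name='W')
    tf.summary.scalar('W_mean', tf.reduce_mean(W))  # shows up as layer/W_mean
    tf.summary.histogram('W_values', W)             # shows up as layer/W_values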

3. Operations do not actually execute any computation unless you run them, or unless they are depended on by some other operation that gets run. The summary operations created in the previous step are not depended on by any other node, so we must run all of the summary nodes explicitly. A program can easily contain a great many such summary nodes, and triggering them one by one by hand would be extremely tedious, so we use tf.summary.merge_all to merge every summary node into a single node; running that one node produces all the summary data we set up earlier.
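
A minimal, self-contained sketch of this merging step (the variable here is arbitrary):

import tensorflow as tf

v = tf.Variable(3.0, name='v')
tf.summary.scalar('v_value', v)    # summary op 1
tf.summary.histogram('v_hist', v)  # summary op 2

merged = tf.summary.merge_all()    # a single node that evaluates every summary

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    summary_data = sess.run(merged)  # one run call produces all summary data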

4. Use tf.summary.FileWriter to save the data produced by each run to local disk.
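
A minimal sketch of the write-to-disk step ('logs/demo' is an arbitrary path; TensorBoard will later read the event files from that directory):

import tensorflow as tf

v = tf.Variable(0.0, name='v')
inc = tf.assign_add(v, 1.0)
tf.summary.scalar('v', v)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Passing sess.graph also saves the graph for TensorBoard's Graphs tab.
    writer = tf.summary.FileWriter('logs/demo', sess.graph)
    for step in range(10):
        summary, _ = sess.run([merged, inc])
        writer.add_summary(summary, step)  # `step` becomes the x-axis value
    writer.close()  # flush the event file to disk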

5. Run the whole program, launch TensorBoard from the command line (the command is shown after the program below), and then open the web UI (http://localhost:6006 by default) to view the visualized results.


import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # suppress TensorFlow's C++ log output

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
batch_size = 100
n_batch = mnist.train.num_examples // batch_size  # number of batches per epoch

# Summary statistics for a variable
def variable_summaries(var):
    with tf.name_scope('summary'):
        mean = tf.reduce_mean(var)
        # Record scalars here; they are merged later by tf.summary.merge_all
        tf.summary.scalar('mean', mean)  # mean; args: tag name, value
        with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar('stddev', stddev)  # standard deviation
        tf.summary.scalar('max', tf.reduce_max(var))  # maximum
        tf.summary.scalar('min', tf.reduce_min(var))  # minimum
        tf.summary.histogram('histogram', var)  # histogram of the values

with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 784], name='x-input') 
    y = tf.placeholder(tf.float32, [None, 10],  name='y-input') 
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    lr = tf.Variable(0.001, dtype=tf.float32, name='lr')

with tf.name_scope('layer1'):
    with tf.name_scope('weights1'):
        W1 = tf.Variable(tf.truncated_normal([784,500],stddev=0.1), name='W1')  # name belongs on the Variable, not on truncated_normal
    with tf.name_scope('biases1'):
        b1 = tf.Variable(tf.zeros([500])+0.1, name='b1')
    with tf.name_scope('L1'):
        L1 = tf.nn.tanh(tf.matmul(x,W1)+b1, name="L1")
    with tf.name_scope('dropout'):
        L1_drop = tf.nn.dropout(L1, keep_prob, name='dropout1') 
        
with tf.name_scope('layer2'):
    with tf.name_scope("weight2"):
        W2 = tf.Variable(tf.truncated_normal([500,300],stddev=0.1), name='W2') 
    with tf.name_scope('biases2'):
        b2 = tf.Variable(tf.zeros([300])+0.1, name='b2')
    with tf.name_scope('L2'):
        L2 = tf.nn.tanh(tf.matmul(L1_drop,W2)+b2, name='L2')
    with tf.name_scope('dropout'):
        L2_drop = tf.nn.dropout(L2, keep_prob, name='dropout2')

with tf.name_scope('layer3'):
    with tf.name_scope("weight3"):
        W3 = tf.Variable(tf.truncated_normal([300,10],stddev=0.1), name='W3') 
        variable_summaries(W3)  # record W3's statistics
    with tf.name_scope('biases3'):
        b3 = tf.Variable(tf.zeros([10])+0.1, name='b3')
        variable_summaries(b3)  # record b3's statistics
    with tf.name_scope('L3'):
        logits = tf.matmul(L2_drop, W3) + b3
        prediction = tf.nn.softmax(logits, name='L3')

with tf.name_scope('loss'):
    # Pass the raw logits, not the softmax output: softmax_cross_entropy_with_logits_v2
    # applies softmax internally, so feeding `prediction` would apply it twice.
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits), name='loss')
    tf.summary.scalar('loss', loss)  # record the loss
with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer(lr).minimize(loss)

with tf.name_scope('accuracy'):
    with tf.name_scope("corrent_prediction"):
        corrent_prediction =tf.equal(tf.argmax(y,1), tf.argmax(prediction, 1))
    with tf.name_scope("accuracy"):
        accuracy = tf.reduce_mean(tf.cast(corrent_prediction, tf.float32))
        tf.summary.scalar('acc', accuracy)  # 记录acc的数据

# Merge all summaries into a single node
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('logs/', sess.graph)  # args: log directory, the graph to save
    for epoch in range(31):
        sess.run(tf.assign(lr, 0.001 * (0.95 ** epoch)))  # decay the learning rate each epoch
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Train and collect the summaries in the same run call.
            # Note: keep_prob=1.0 disables dropout here; lower it to actually drop units.
            summary, _ = sess.run([merged, train_step], feed_dict={x:batch_xs, y:batch_ys, keep_prob:1.0})
        writer.add_summary(summary, epoch)  # write the last batch's summary once per epoch
        test_acc = sess.run(accuracy, feed_dict={x:mnist.test.images, y:mnist.test.labels, keep_prob:1.0})
        train_acc = sess.run(accuracy, feed_dict={x:mnist.train.images, y:mnist.train.labels, keep_prob:1.0})

        print('Iter ' + str(epoch) + ', Testing Accuracy: ' + str(test_acc) + ', Training Accuracy: ' + str(train_acc))

E:\>tensorboard --logdir=E:\logs\
