#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Mon Jul 10 09:35:04 2017
@author: myhaspl@myhaspl.com, http://blog.csdn.net/myhaspl
"""
# Logical OR
import tensorflow as tf

w1 = tf.Variable(tf.random_normal([2, 6], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([6, 1], stddev=1, seed=1))
b = tf.Variable(tf.zeros([6]))  # hidden-layer bias (zeros are float32 by default)

x = tf.placeholder(tf.float32, shape=(None, 2), name="x")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
h = tf.matmul(x, w1) + b
yo = tf.matmul(h, w2)

# Loss: mean of the absolute differences between target and output
cross_entropy = tf.reduce_mean(tf.abs(y - yo))
# Backpropagation
train_step = tf.train.AdamOptimizer(0.05).minimize(cross_entropy)

# Training samples: the truth table of logical OR
x_ = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
y_ = [[0.], [1.], [1.], [1.]]

with tf.Session() as sess:
    # Initialize variables
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print(sess.run(w1))
    print(sess.run(w2))
    # Number of training rounds
    TRAINCOUNT = 500
    for i in range(TRAINCOUNT):
        # Train
        sess.run(train_step, feed_dict={x: x_, y: y_})
        if i % 10 == 0:
            total_cross_entropy = sess.run(cross_entropy, feed_dict={x: x_, y: y_})
            print("After %d training steps, loss: %g" % (i + 1, total_cross_entropy))
    print(sess.run(w1))
    print(sess.run(w2))
    # Test samples: forward pass only, to verify the trained network
    testyo = sess.run(yo, feed_dict={x: [[0., 1.], [1., 1.]]})
    myout = [int(testout > 0.5) for testout in testyo]
    print(myout)
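Despite its name, the cross_entropy node above is not a cross entropy at all: tf.reduce_mean(tf.abs(y - yo)) is the mean absolute error (MAE), the average of |target - prediction| over the batch. A minimal NumPy check (the predictions here are made-up numbers, purely for illustration):

import numpy as np

y_true = np.array([[0.], [1.], [1.], [1.]])      # OR targets from the script
y_pred = np.array([[0.2], [0.7], [0.9], [1.1]])  # hypothetical network outputs
mae = np.mean(np.abs(y_true - y_pred))
print(mae)  # 0.175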
The script above optimizes this loss with tf.train.AdamOptimizer. The Adam update it performs is:

Initialization:

m_0 <- 0 (Initialize initial 1st moment vector)
v_0 <- 0 (Initialize initial 2nd moment vector)
t <- 0 (Initialize timestep)

The update rule for variable with gradient g uses an optimization described at the end of section 2 of the paper:

t <- t + 1
lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)

m_t <- beta1 * m_{t-1} + (1 - beta1) * g
v_t <- beta2 * v_{t-1} + (1 - beta2) * g * g
variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)
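To make the update concrete, here is a direct NumPy transcription of the rule above; this is a sketch for illustration, not how tf.train.AdamOptimizer is implemented internally:

import numpy as np

def adam_update(variable, g, m, v, t,
                learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
    # One Adam step, following the update rule above.
    t += 1
    lr_t = learning_rate * np.sqrt(1 - beta2**t) / (1 - beta1**t)
    m = beta1 * m + (1 - beta1) * g        # 1st moment estimate
    v = beta2 * v + (1 - beta2) * g * g    # 2nd moment estimate
    variable = variable - lr_t * m / (np.sqrt(v) + epsilon)
    return variable, m, v, t

# One step on a toy 2-element parameter vector:
w, grad = np.array([0.5, -0.3]), np.array([0.1, -0.2])
m0, v0, t0 = np.zeros(2), np.zeros(2), 0
w, m0, v0, t0 = adam_update(w, grad, m0, v0, t0)
print(w)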
The default value of 1e-8 for epsilon might not be a good default in general. For example, when training an Inception network on ImageNet, a current good choice is 1.0 or 0.1. Note that since AdamOptimizer uses the formulation just before Section 2.1 of the Kingma and Ba paper rather than the formulation in Algorithm 1, the "epsilon" referred to here is "epsilon hat" in the paper.
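For example, epsilon can be overridden when constructing the optimizer; this is a one-line variant of the train_step definition in the script above, using 0.1 as one of the values suggested:

train_step = tf.train.AdamOptimizer(learning_rate=0.05,
                                    epsilon=0.1).minimize(cross_entropy)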
The sparse implementation of this algorithm (used when the gradient is an IndexedSlices object, typically because of tf.gather or an embedding lookup in the forward pass) does apply momentum to variable slices even if they were not used in the forward pass (meaning they have a gradient equal to zero). Momentum decay (beta1) is also applied to the entire momentum accumulator. This means that the sparse behavior is equivalent to the dense behavior (in contrast to some momentum implementations which ignore momentum unless a variable slice was actually used).
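A minimal sketch of how such a sparse gradient arises (the embedding-table shape and ids are arbitrary, for illustration only):

import tensorflow as tf

emb = tf.Variable(tf.random_normal([100, 8], seed=1))  # toy embedding table
ids = tf.constant([3, 7, 3])
loss = tf.reduce_sum(tf.gather(emb, ids))  # forward pass touches rows 3 and 7 only
grad = tf.gradients(loss, [emb])[0]
print(type(grad))  # an IndexedSlices covering only the gathered rows
# Adam's sparse path still decays the momentum of every row, so the result
# matches the dense behavior described above:
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)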