import tensorflow as tf
weights = tf.constant([[1.0, -2.0], [-3.0, 4.0]])

>>> sess.run(tf.contrib.layers.l1_regularizer(0.5)(weights))
5.0
>>> sess.run(tf.keras.regularizers.l1(0.5)(weights))
5.0
>>> sess.run(tf.keras.regularizers.l1()(weights))
0.099999994
>>> sess.run(tf.keras.regularizers.l1(1)(weights))
10.0
>>> sess.run(tf.nn.l2_loss(weights))
15.0
>>> sess.run(tf.keras.regularizers.l2(1)(weights))
30.0
>>> sess.run(tf.keras.regularizers.l2(0.5)(weights))
15.0
>>> sess.run(tf.contrib.layers.l1_regularizer(0.5)(weights))
5.0
>>> sess.run(tf.contrib.layers.l2_regularizer(0.5)(weights))
7.5
>>> sess.run(tf.contrib.layers.l2_regularizer(1.0)(weights))
15.0
In TensorFlow, tf.nn provides tf.nn.l2_loss but no l1_loss. Searching online, I learned that tf.contrib.layers has tf.contrib.layers.l1_regularizer(), but tf.contrib has been deprecated in newer versions. I then found that tf.keras.regularizers provides l1 and l2 regularizers, but its l2 behaves a little differently: as the results above show, with the same scale of 1 it returns twice the value. Reading the source code, both tf.nn.l2_loss and tf.contrib.layers.l2_regularizer divide the sum of squares by 2, which is why their values are half of the Keras one.
>>> sess.run(tf.nn.l2_loss(weights))
15.0
>>> sess.run(tf.keras.regularizers.l2(1)(weights))
30.0
>>> sess.run(tf.contrib.layers.l2_regularizer(1.0)(weights))
15.0
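To pin down where the factor of 2 comes from, here is a minimal sketch (assuming TensorFlow 2.x in eager mode, so no Session is needed) that recomputes the three values by hand from the formulas in the source code:

import tensorflow as tf

weights = tf.constant([[1.0, -2.0], [-3.0, 4.0]])

# tf.nn.l2_loss: sum(w**2) / 2, no scale parameter
l2_nn = tf.reduce_sum(tf.square(weights)) / 2               # 30 / 2 = 15.0
# tf.keras.regularizers.l2(scale): scale * sum(w**2), no division by 2
l2_keras = 1.0 * tf.reduce_sum(tf.square(weights))          # 1.0 * 30 = 30.0
# tf.contrib.layers.l2_regularizer(scale): scale * sum(w**2) / 2
l2_contrib = 1.0 * tf.reduce_sum(tf.square(weights)) / 2    # 1.0 * 15 = 15.0

print(l2_nn.numpy(), l2_keras.numpy(), l2_contrib.numpy())  # 15.0 30.0 15.0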
Setting scale to 0.5 yields the same value, so going forward we can compute the l2 loss (and the l1 loss) in a loss function this way.
>>> sess.run(tf.keras.regularizers.l2(0.5)(weights))
15.0
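Since tf.contrib is deprecated, the practical takeaway is to attach the Keras regularizer directly to a layer so the penalty flows into the training loss automatically. A minimal sketch (assuming TensorFlow 2.x; the Dense layer and input shape are just illustrative):

import tensorflow as tf

# l2(0.5) reproduces the sum(w**2) / 2 convention of tf.nn.l2_loss
layer = tf.keras.layers.Dense(
    2, kernel_regularizer=tf.keras.regularizers.l2(0.5))
_ = layer(tf.ones([1, 2]))   # build the layer so the kernel exists

print(layer.losses)          # list containing the l2 penalty on the kernel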
Reference: "day-17 L1和L2正则化的tensorflow示例" (day-17 TensorFlow examples of L1 and L2 regularization) by 派森蛙, on 博客园 (cnblogs):
https://www.cnblogs.com/python-frog/p/9416970.html
'''
Input:
x = [[1.0, 2.0]]
w = [[1.0, 2.0], [3.0, 4.0]]
Output:
y = x * w = [[7.0, 10.0]]
l1 = (1.0 + 2.0 + 3.0 + 4.0) * 0.5 = 5.0
l2 = ((1.0**2 + 2.0**2 + 3.0**2 + 4.0**2) / 2) * 0.5 = 7.5
'''
import tensorflow as tf
from tensorflow.contrib.layers import l1_regularizer, l2_regularizer

w = tf.constant([[1.0, 2.0], [3.0, 4.0]])
x = tf.placeholder(dtype=tf.float32, shape=[None, 2])
y = tf.matmul(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0]]}))    # [[ 7. 10.]]
    print("=========================")
    print(sess.run(l1_regularizer(scale=0.5)(w)))      # (1+2+3+4) * 0.5 = 5.0
    print("=========================")
    print(sess.run(l2_regularizer(scale=0.5)(w)))      # ((1+4+9+16) / 2) * 0.5 = 7.5
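For completeness, here is a sketch of the same computation without tf.contrib (assuming TensorFlow 2.x in eager mode). Note that replicating contrib's l2_regularizer(scale) with Keras requires passing scale / 2, because the Keras l2 does not divide by 2:

import tensorflow as tf

w = tf.constant([[1.0, 2.0], [3.0, 4.0]])
x = tf.constant([[1.0, 2.0]])

print(tf.matmul(x, w).numpy())                    # [[ 7. 10.]]
print(tf.keras.regularizers.l1(0.5)(w).numpy())   # 5.0, same as l1_regularizer(0.5)
print(tf.keras.regularizers.l2(0.25)(w).numpy())  # 7.5, same as l2_regularizer(0.5)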