3.5 An Intuitive Guide to the LSTM and GRU Modules in TensorFlow

Table of Contents

Text sentiment classification with recurrent neural networks

3.1 A mathematical understanding of recurrent neural networks

3.2 LSTM and GRU recurrent neural networks

3.3 RNN extensions: stacked RNNs, recursive neural networks, graph networks

3.4 Using RNN layers

Goals

  1. Know how to use LSTM and GRU layers and the format of their inputs and outputs
  2. Be able to apply LSTM and GRU to text sentiment classification

1. Using the LSTM and GRU Modules in TensorFlow

1.1 LSTMCell

The usage of LSTMCell is essentially the same as that of SimpleRNNCell. The difference is that the LSTM keeps two state variables in a List $[\boldsymbol{h}_t, \boldsymbol{c}_t]$, which must be initialized separately: the first element of the List is $\boldsymbol{h}_t$ and the second is $\boldsymbol{c}_t$. When the cell is called to perform a forward pass, it returns two elements: the first is the cell output, i.e. $\boldsymbol{h}_t$, and the second is the updated state List $[\boldsymbol{h}_t, \boldsymbol{c}_t]$. First create an LSTM Cell with state-vector length $h=64$, so that both the cell state $\boldsymbol{c}_t$ and the output $\boldsymbol{h}_t$ have length $h$. The code is as follows:

import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal([2, 80, 100])
xt = x[:, 0, :]  # take the input of a single time step

cell = layers.LSTMCell(64)  # create an LSTM Cell
# initialize the state List, [h, c]
state = [tf.zeros([2, 64]), tf.zeros([2, 64])]
out, state = cell(xt, state)  # forward computation, returns the output and the new state
# inspect the ids of the returned elements
id(out), id(state[0]), id(state[1])  # state[0] is h_t, state[1] is c_t
(1936930386344, 1936930386344, 1936930387048)

As you can see, the returned output out and the first element of the state List, $\boldsymbol{h}_t$, share the same id. This is consistent with the design of SimpleRNNCell and keeps the interface format uniform.

By unrolling the loop over the time-step dimension, one forward pass of the layer is completed, written in the same way as for a plain RNN. For example:

# unroll along the sequence-length dimension and feed each step into the LSTM Cell
for xt in tf.unstack(x, axis=1):
    # forward computation
    out, state = cell(xt, state)

The output can be taken from the last time step only, or the output vectors of all time steps can be aggregated.
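If all time-step outputs are needed, a minimal sketch (reusing the x, cell, and state defined above; the stacking step is an illustration of my own, not part of the original example) could look like this:

# collect the output h_t of every time step and stack them along the time axis
outputs = []
for xt in tf.unstack(x, axis=1):
    out, state = cell(xt, state)
    outputs.append(out)                 # out is h_t, shape [2, 64]
all_out = tf.stack(outputs, axis=1)     # [2, 80, 64]: outputs of all time steps
last_out = outputs[-1]                  # [2, 64]: output of the last time step only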

1.2 LSTM

Both the LSTM layer and the GRU layer are provided by tf.keras.layers. The LSTM signature is:

keras.layers.LSTM(units, activation='tanh', recurrent_activation='sigmoid', 
                  use_bias=True, return_sequences=False, return_state=False, 
                  go_backwards=False, stateful=False, unroll=False)

Arguments

  • units: Positive integer, dimensionality of the output space.
  • input_dim: Dimensionality of the input (integer). This argument (or alternatively the keyword argument input_shape) is required when using this layer as the first layer in a model.
  • input_length: Length of the input sequences, to be specified when it is constant. This argument is required if you are going to connect Flatten and then Dense layers upstream (without it, the shape of the dense outputs cannot be computed).
    Note that if the recurrent layer is not the first layer in your model, you need to specify the input length at the level of the first layer (e.g. via the input_shape argument).
  • activation: Activation function to use. If you pass None, no activation is applied (i.e. linear activation: a(x) = x).
  • recurrent_activation: Activation function to use for the recurrent step. Default: piecewise-linear approximation of the sigmoid (hard_sigmoid).
    If you pass None, no activation is applied (i.e. linear activation: a(x) = x).
  • dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
  • recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
  • use_bias: Boolean, whether the layer uses a bias vector.
  • unit_forget_bias: Boolean. If True, add 1 to the bias of the forget gate at initialization. Setting it to True also forces bias_initializer="zeros". This is recommended in Jozefowicz et al. (2015).
  • return_sequences: Boolean. Whether to return only the last output of the output sequence, or the hidden states of the full sequence.
  • return_state: Boolean. Whether to return the last state in addition to the output. The returned elements of the state list are the hidden state and the cell state, respectively.
  • go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
  • stateful: Boolean (default False). If True, the last state of each sample at index i in a batch will be used as the initial state of the sample at index i in the following batch.
  • unroll: Boolean (default False). If True, the network will be unrolled, otherwise a symbolic loop is used. Unrolling can speed up an RNN but tends to use more memory; it is only suitable for short sequences.

Input shape

3D tensor of shape (batch_size, timesteps, input_dim)

Output shape

  • If return_state: a list of tensors. The first tensor is the output. The remaining tensors are the last states, each of shape (batch_size, units). For example, the number of state tensors is 1 for RNN/GRU and 2 for LSTM, namely $h_t$ and $c_t$.
  • If return_sequences: a 3D tensor of shape (batch_size, timesteps, units).
  • Otherwise, a 2D tensor of shape (batch_size, units).

Masking

This layer supports masking for input data with a variable number of time steps. To introduce masks to your data, use an Embedding layer with the mask_zero argument set to True.
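As a minimal, self-contained sketch of this (the vocabulary size, embedding size, and padded batch below are made-up illustration values, not part of the original example):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# hypothetical padded batch where 0 is the padding id
padded = np.array([[5, 8, 9, 0, 0],
                   [3, 1, 0, 0, 0]])                  # [2, 5]
emb = layers.Embedding(input_dim=100, output_dim=16, mask_zero=True)
x = emb(padded)                                       # [2, 5, 16], carries a mask
print(emb.compute_mask(padded))                       # False where the input is 0
out = layers.LSTM(8)(x)                               # masked steps are skipped
print(out.shape)                                      # (2, 8)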

Note on using statefulness in RNNs

You can set RNN layers to be stateful, which means that the states computed for the samples in one batch will be reused as the initial states of the samples in the next batch. This assumes a one-to-one mapping between samples in successive batches.

To enable statefulness:

  • Specify stateful=True in the layer constructor.

  • Specify a fixed batch size for your model:
    if it is a Sequential model, pass a batch_input_shape=(...) argument to the first layer of your model;
    if it is a functional model with one or more Input layers, pass batch_shape=(...) to all the first layers of your model.
    This is the expected shape of your inputs, including the batch dimension.

    It should be a tuple of integers, e.g. (32, 10, 100).

  • Specify shuffle=False when calling fit().

To reset the states of your model, call .reset_states() on a specific layer or on the entire model. A minimal sketch of these rules is given below.
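The batch size, sequence length, and feature dimension in the following sketch are arbitrary illustration values, not taken from the original examples:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# stateful LSTM: the batch size must be fixed via batch_input_shape
model = tf.keras.Sequential([
    layers.LSTM(8, stateful=True, batch_input_shape=(4, 10, 3)),
    layers.Dense(1)
])
model.compile(loss='mse', optimizer='adam')

x = np.random.rand(64, 10, 3).astype('float32')
y = np.random.rand(64, 1).astype('float32')
# shuffle=False keeps sample i of one batch aligned with sample i of the next batch
model.fit(x, y, batch_size=4, epochs=2, shuffle=False, verbose=0)
# clear the carried state, e.g. between independent sequences
model.reset_states()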

Note on specifying the initial state of RNNs

You can specify the initial state of RNN layers symbolically by calling them with the keyword argument initial_state. The value of initial_state should be a tensor or a list of tensors representing the initial state of the RNN layer.

You can specify the initial state of RNN layers numerically by calling the reset_states method with the keyword argument states. The value of states should be a Numpy array or a list of Numpy arrays representing the initial state of the RNN layer.
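A minimal sketch of both options, with made-up sizes and assuming TF 2.x behavior (the stateful variant is needed for the numeric reset_states(states=...) route):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

units, batch, steps, feat = 8, 2, 5, 3           # illustration sizes only
x = tf.random.normal([batch, steps, feat])

# symbolic: pass tensors through the initial_state keyword of the call
lstm = layers.LSTM(units, return_state=True)
h0 = tf.zeros([batch, units])
c0 = tf.zeros([batch, units])
out, h_t, c_t = lstm(x, initial_state=[h0, c0])

# numeric: a stateful layer accepts numpy arrays via reset_states(states=...)
stateful = layers.LSTM(units, stateful=True,
                       batch_input_shape=(batch, steps, feat))
stateful(x)                                      # build the layer and its state variables
stateful.reset_states(states=[np.zeros((batch, units)),
                              np.zeros((batch, units))])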

Note on passing external constants to RNNs

You can pass "external" constants to the cell using the constants keyword argument of RNN.__call__ (as well as RNN.call). This requires that the cell.call method accepts the same keyword argument constants.

Such constants can be used to condition the cell transformation on additional static inputs (that do not change over time), and are also used in attention mechanisms.

1.3 LSTM usage example

Assume the input data input has shape [10, 20] and the embedding matrix has shape [100, 30].

The LSTM is then used as follows:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

batch_size = 10     # number of sentences
seq_len = 20        # length of each sentence
embedding_dim = 30  # length of the vector representing each word
word_vocab = 100    # total number of words in the vocabulary
hidden_size = 12    # number of LSTM units in the hidden layer

# prepare the input data
input = np.random.randint(low=0, high=100, size=(batch_size, seq_len))
# prepare the embedding
embedding = layers.Embedding(word_vocab, embedding_dim)

# perform the embedding lookup
embed = embedding(input)  # [10, 20, 30]

lstm = layers.LSTM(hidden_size)
output = lstm(embed)
# (batch_size, units)
print(output.shape)  # (10, 12)

In this case only a single hidden state is returned. If the input contains multiple time steps, this hidden state is the result of the last time step.

1.3.1 Distinguishing cell state and hidden state

Figure 1: LSTM network structure

Figure 1 shows the LSTM cell, where:

  • $h_t$ denotes the hidden state at step $t$; it is the encoded vector of the data at time $t$ and also the external output of the LSTM.
  • $C_t$ denotes the cell state at step $t$; it is the memory vector of the data at time $t$ and represents the LSTM's memory. It normally only flows internally and is not output. The cell state is the key to implementing an LSTM.

Usually it is enough to take the hidden state as the LSTM's output without accessing the cell state. But when designing a more complex network, such as an encoder-decoder model or an attention mechanism, the cell state is needed. Set return_state=True to also return the final states, and return_sequences=True to return the hidden states of all time steps.

lstm = layers.LSTM(hidden_size,return_state=True)
whole_seq_output, h_t,c_t = lstm(embed)
#(batch_size, units),(batch_size, units),(batch_size, units)
print(whole_seq_output.shape)#(10, 12)
print(h_t.shape)#(10, 12)
print(c_t.shape)#(10, 12)

Output the hidden states of all time steps:

lstm = layers.LSTM(hidden_size,return_sequences=True)
whole_seq_output = lstm(embed)
#(batch_size, timesteps, units)
print(whole_seq_output.shape)#(10, 20, 12)

lstm = layers.LSTM(hidden_size,return_sequences=True,return_state=True)
whole_seq_output, h_t,c_t = lstm(embed)
#(batch_size, timesteps, units),(batch_size, units),(batch_size, units)
print(whole_seq_output.shape)#(10, 20, 12)
print(h_t.shape)#(10, 12)
print(c_t.shape)#(10, 12)

whole_seq_output contains the hidden states of all time steps, h_t is the hidden state of the last time step, and c_t is the cell state of the last time step.

The output is as follows:

(10, 12)
(10, 12)
(10, 12)
(10, 12)
(10, 20, 12)
(10, 20, 12)
(10, 12)
(10, 12)

From what we learned earlier, the final $h_t$ should be identical to the output of whole_seq_output at the last time step.

Let us verify this with the following code:

a = whole_seq_output[:,-1,:]

print(h_t.shape)#(10, 12)
print(a.shape)#(10, 12)

a == h_t


(10, 12)
(10, 12)
<tf.Tensor: shape=(10, 12), dtype=bool, numpy=
array([[ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True],
       [ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True],
       [ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True],
       [ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True],
       [ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True],
       [ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True],
       [ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True],
       [ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True],
       [ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True],
       [ True,  True,  True,  True,  True,  True,  True,  True,  True,
         True,  True,  True]])>
# define model
inputs = layers.Input(shape=(seq_len, embedding_dim))  # no batch_size
lstm, state_h, state_c = layers.LSTM(1, return_state=True)(inputs)
model = keras.Model(inputs=inputs, outputs=[lstm, state_h, state_c])

# define data and predict
test = np.random.randint(0, 100, size=(seq_len, embedding_dim)).reshape([1, seq_len, embedding_dim])
print(model.predict(test))
[array([[0.7615942]], dtype=float32),  # lstm:    output of the LSTM
array([[0.7615942]], dtype=float32),   # state_h: hidden state of the last time step
array([[1.]], dtype=float32)]          # state_c: cell state of the last time step

The hidden state of the last time step == the output of the LSTM.

1.3.2 Summary

  1. Input and output types

Compared with earlier tensors, there is an extra parameter, timesteps. What does it mean? For example, suppose the input consists of 10 sentences, each sentence has 20 words, and each word is represented by a 30-dimensional word vector. Then batch_size=10, timesteps=20, and input_dim=30. You can simply think of timesteps as the sequence length seq_len (as in the example above).

  2. units

Suppose units=12. For a single word, you can think of the LSTM internally as $Y = X_{1\times 30} W_{30\times 12}$, where $X$ is the word vector mentioned above (e.g. 30-dimensional) and the 12 in $W$ is units. In other words, the LSTM maps the word representation from 30 dimensions to 12 dimensions.

1.3.3 Multi-layer LSTM

For a multi-layer network, multiple LSTM layers can be wrapped in a Sequential container, with return_sequences=True set on all layers except the last one, because every non-final LSTM layer needs the outputs of the previous layer at all time steps as its input. For example:

net = keras.Sequential([
    layers.LSTM(64, return_sequences=True),  # non-final layers must return all time steps
    layers.LSTM(64)
])
# one pass through the model gives the last layer's output at the final time step
out = net(x)

Alternatively, all hidden states can be fed through fully connected layers to obtain the final representation.

def build_model(lstm_layers, dense_layers, hidden_size):
    """
    lstm_layers: number of LSTM layers
    dense_layers: number of fully connected layers
    hidden_size: number of units in the first LSTM layer
    """
    model = keras.Sequential()

    model.add(layers.LSTM(units=hidden_size,  # output_dim is also hidden_size
                          input_shape=(2, 3),
                          return_sequences=True))
    for i in range(lstm_layers - 1):
        model.add(layers.LSTM(units=hidden_size * (i + 1),
                              return_sequences=True))

    for i in range(dense_layers - 1):
        model.add(layers.Dense(256,
                               activation='selu'))
        model.add(layers.Dropout(0.5))
    model.compile(loss='mae', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model

build_model(lstm_layers=5, dense_layers=3, hidden_size=64)
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_10 (LSTM)               (None, 2, 64)             17408     
_________________________________________________________________
lstm_11 (LSTM)               (None, 2, 64)             33024     
_________________________________________________________________
lstm_12 (LSTM)               (None, 2, 128)            98816     
_________________________________________________________________
lstm_13 (LSTM)               (None, 2, 192)            246528    
_________________________________________________________________
lstm_14 (LSTM)               (None, 2, 256)            459776    
_________________________________________________________________
dense_2 (Dense)              (None, 2, 256)            65792     
_________________________________________________________________
dropout_2 (Dropout)          (None, 2, 256)            0         
_________________________________________________________________
dense_3 (Dense)              (None, 2, 256)            65792     
_________________________________________________________________
dropout_3 (Dropout)          (None, 2, 256)            0         
=================================================================
Total params: 987,136
Trainable params: 987,136
Non-trainable params: 0
_________________________________________________________________
<tensorflow.python.keras.engine.sequential.Sequential at 0x177836f6cc8>

1.4 GRU usage example

The GRU module is tf.keras.layers.GRU. Its arguments are the same as the LSTM's and have the same meanings; see the documentation for details.

There are two variants. The default one is based on 1406.1078v3 and applies the reset gate to the hidden state before the matrix multiplication. The other is based on the original 1406.1078v1 and has the order reversed.

The second variant is compatible with CuDNNGRU (GPU-only) and allows inference on CPU. It therefore has separate biases for kernel and recurrent_kernel. Use reset_after=True and recurrent_activation='sigmoid'.

  • reset_after: GRU convention (whether to apply the reset gate before or after the matrix multiplication).

You can adapt the code above to observe the output format of the GRU; a minimal sketch is given below.
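The sketch reuses the embed tensor and hidden_size from the LSTM example above and shows that a GRU returns only one state tensor (there is no separate cell state):

gru = layers.GRU(hidden_size, return_sequences=True, return_state=True)
whole_seq_output, h_t = gru(embed)
print(whole_seq_output.shape)  # (10, 20, 12): hidden states of all time steps
print(h_t.shape)               # (10, 12): hidden state of the last time step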

1.5 Bidirectional LSTM

To use a bidirectional LSTM, wrap the LSTM layer with the tf.keras.layers.Bidirectional module when instantiating it.

tf.keras.layers.Bidirectional(
    layer,
    merge_mode='concat',
    weights=None,
    backward_layer=None
)
  • layer: the recurrent layer to wrap, e.g. RNN, LSTM, GRU.
  • merge_mode: mode by which the outputs of the forward and backward RNN are combined. One of {'sum', 'mul', 'concat', 'ave', None}. If None, the outputs are not combined and are returned as a list. Default is 'concat'.
  • weights: initial weights to load in the bidirectional model.
  • backward_layer: the layer instance that handles the backward input processing. If not provided, the layer instance passed as the layer argument is used to automatically generate the backward layer.

Note

The call arguments of this layer are the same as those of the wrapped RNN layer. Note that when the initial_state argument is passed in a call to this layer, the first half of the elements in the initial_state list are passed to the forward RNN call and the last half to the backward RNN call.
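A minimal sketch of this convention, using made-up zero states whose sizes simply match the example below:

# the first two tensors go to the forward LSTM, the last two to the backward LSTM
bilstm = layers.Bidirectional(layers.LSTM(12, return_state=True))
f_h0, f_c0 = tf.zeros([10, 12]), tf.zeros([10, 12])   # forward  [h, c]
b_h0, b_c0 = tf.zeros([10, 12]), tf.zeros([10, 12])   # backward [h, c]
x = tf.random.normal([10, 20, 30])
out, f_h, f_c, b_h, b_c = bilstm(x, initial_state=[f_h0, f_c0, b_h0, b_c0])
print(out.shape)  # (10, 24)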

Let us observe the behavior with the following code; the outputs are shown in the comments:

batch_size = 10     # number of sentences
seq_len = 20        # length of each sentence
embedding_dim = 30  # length of the vector representing each word
word_vocab = 100    # total number of words in the vocabulary
hidden_size = 12    # number of LSTM units in the hidden layer

# prepare the input data
input = np.random.randint(low=0, high=100, size=(batch_size, seq_len))
# prepare the embedding
embedding = layers.Embedding(word_vocab, embedding_dim)

# perform the embedding lookup
embed = embedding(input)  # [10, 20, 30]

bilstm = layers.Bidirectional(layers.LSTM(hidden_size))
output = bilstm(embed)
#(batch_size, 2*units)
print(output.shape)#(10, 24)

bilstm = layers.Bidirectional(layers.LSTM(hidden_size,return_sequences=True))
whole_seq_output = bilstm(embed)
#(batch_size, timesteps,2* units)
print(whole_seq_output.shape)#(10, 20, 24)



bilstm = layers.Bidirectional(layers.LSTM(hidden_size,return_sequences=True,return_state=True))
whole_seq_output, f_ht, f_ct, b_ht, b_ct  = bilstm(embed)
#(batch_size, timesteps,2*  units),(batch_size,  units),(batch_size, units)
print(whole_seq_output.shape)#(10, 20, 24)
print(f_ht.shape)#(10, 12)
print(f_ct.shape)#(10, 12)
print(b_ht.shape)#(10, 12)
print(b_ct.shape)#(10, 12)

Here (f_ht, f_ct) come from the forward LSTM and (b_ht, b_ct) from the backward LSTM.

In a unidirectional LSTM, the output of whole_seq_output at the last time step equals the final hidden state $h_t$. What about a bidirectional LSTM?

In a bidirectional LSTM:

output: the forward and backward results are concatenated along the feature dimension, forward first; at the first time step, the forward output is paired with the last output computed by the backward pass.

hidden state: the final states are returned in order, forward states first, followed by the backward states.

  1. In the forward LSTM, the first hidden_size entries of the output at the last time step equal the forward final hidden state f_ht.

    • Example:

    • # -1 is the last time step of the forward LSTM; the first 12 entries are the first hidden_size entries
      a = whole_seq_output[:, -1, :12]  # output of the forward LSTM at its last time step
      print(a.shape)
      a == f_ht
      
      (10, 12)
      <tf.Tensor: shape=(10, 12), dtype=bool, numpy=
      array([[ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
           [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True]])>
      
  2. In the backward LSTM, the last hidden_size entries of the output at its final time step (which is index 0 of the sequence) equal the backward final hidden state b_ht.

    • Example:

    • # index 0 is the last step computed by the backward LSTM; the last 12 entries are the last hidden_size entries
      c = whole_seq_output[:, 0, 12:]  # final output of the backward LSTM
      print(c.shape)
      c == b_ht
      
      (10, 12)
      <tf.Tensor: shape=(10, 12), dtype=bool, numpy=
      array([[ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True],
             [ True,  True,  True,  True,  True,  True,  True,  True,  True,
               True,  True,  True]])>
      

1.6 Counting LSTM parameters

model = keras.Sequential()
model.add(layers.Bidirectional(layers.LSTM(hidden_size),
                               input_shape=(seq_len, embedding_dim),
                               merge_mode='concat'))
model.summary()

Model: "sequential_6"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
bidirectional_8 (Bidirection (None, 24)                4128      
=================================================================
Total params: 4,128
Trainable params: 4,128
Non-trainable params: 0
_________________________________________________________________

The bidirectional LSTM built here has a hidden-state dimension of 12 and an input dimension of 30; at every time step it returns a result of shape [1, 24], and the final outputs of the two directions are simply concatenated. Where do the 4,128 parameters come from? Let us first look at the LSTM equations.

Figure 2: LSTM network structure

Forget gate:
$$f_{t}=\sigma\left(W_{fh} \cdot h_{t-1}+W_{fx} \cdot x_{t}+b_{f}\right)\tag{1}$$
Input gate:
$$i_{t}=\sigma\left(W_{ih} \cdot h_{t-1}+W_{ix} \cdot x_{t}+b_{i}\right)\tag{2}$$
Output gate:
$$o_{t}=\sigma\left(W_{oh} \cdot h_{t-1}+W_{ox} \cdot x_{t}+b_{o}\right)\tag{3}$$
Candidate input at the current step:
$$\widetilde{C}_{t}=\tanh \left(W_{ch} \cdot h_{t-1}+W_{cx} \cdot x_{t}+b_{c}\right)\tag{4}$$
Cell (memory) state at the current step:
$$c_{t}=f_{t} * c_{t-1}+i_{t} * \widetilde{C}_{t}$$
Final LSTM output:
$$h_{t}=o_{t} * \tanh \left(c_{t}\right)$$
In the formulas above, $\cdot$ denotes matrix multiplication and $*$ denotes the Hadamard (element-wise) product.

In equations (1)-(4), let $W_h$ denote the combined parameter matrix acting on the hidden state, $W_x$ the combined parameter matrix acting on $x_t$, and $W_b$ the combined bias. These three matrices stack the weights of the LSTM's three gates plus the candidate cell input. Let us now work out where the 4,128 parameters come from.

In the forward pass, the weight matrix between the input and the hidden layer, $W_x$, has shape $(W_x)_{30 \times 48}$. Because of weight sharing, the input and recurrent weights are the same at every time step (but the forward and backward passes do not share weights). The recurrent weight matrix $W_h$ has shape $(W_h)_{12 \times 48}$, and finally there is the bias $(W_b)_{48}$, so the forward pass has $30 \times 48 + 12 \times 48 + 48 = 2064$ parameters.

The backward network has the same structure as the forward one, but its weights are not shared with it, so the total number of parameters is $2064 + 2064 = 4128$.

Summary: counting LSTM parameters (verified by the sketch below):

  • (dimension of $h_t$ + dimension of $x_t$) × output dimension × 4 + biases of the four gates
  • (hidden_size + input_dim) × units × 4 + bias × 4
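As a quick sanity check, a small sketch (the variable names are local to this snippet) that plugs the numbers of this example into the formula:

input_dim, units = 30, 12
# 4 gates, each with (input_dim + units) weights per unit plus one bias per unit
per_direction = 4 * ((input_dim + units) * units + units)
print(per_direction)      # 2064
print(2 * per_direction)  # 4128, matching the bidirectional LSTM summary above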

1.7 Counting GRU parameters

model = keras.Sequential()
model.add(layers.Bidirectional(layers.GRU(hidden_size),
                               input_shape=(seq_len, embedding_dim),
                               merge_mode='concat'))
model.summary()

Model: "sequential_7"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
bidirectional_9 (Bidirection (None, 24)                3168      
=================================================================
Total params: 3,168
Trainable params: 3,168
Non-trainable params: 0
_________________________________________________________________

Let us first look at the GRU equations.

Figure 3: GRU network structure

$$
\begin{aligned}
r_t &= \sigma(W_r x_t + U_r h_{t-1} + b_r) \\
z_t &= \sigma(W_z x_t + U_z h_{t-1} + b_z) \\
\tilde{h}_t &= \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\big) \\
h_t &= z_t \odot \tilde{h}_t + (1 - z_t) \odot h_{t-1}
\end{aligned}
$$

  • Reset gate $r_t$: controls how much of the previous hidden state enters the current candidate hidden state.
    Its parameter matrix has (units × input_dim + units × units + bias) entries.
  • Update gate $z_t \in [0, 1]$: controls how much of the previous hidden state is carried over into the current hidden state.
    Its parameter matrix has (units × input_dim + units × units + bias) entries.
  • Candidate hidden state $\tilde{h}_t$: the candidate hidden state at the current step. Its parameter matrix also has (units × input_dim + units × units + bias) entries.

So the total number of parameters of a GRU layer is (units × input_dim + units × units + bias) × 3.

Note:

In TensorFlow 2.0 the default is reset_after=True, so the biases of the input and recurrent kernels are counted separately. The total number of parameters is therefore (units × features + units × units + bias + bias) × 3,

i.e. (12 × 30 + 12 × 12 + 12 + 12) × 3 = 1584.

The backward network has the same structure as the forward one, but the weights are not shared, so the total number of parameters is 1584 + 1584 = 3168.
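Similarly, a small sketch verifying the GRU count under TensorFlow 2.x's default reset_after=True (names again local to the snippet):

input_dim, units = 30, 12
# 3 gates/candidate, each with an input kernel, a recurrent kernel and two bias vectors
per_direction = 3 * (input_dim * units + units * units + units + units)
print(per_direction)      # 1584
print(2 * per_direction)  # 3168, matching the bidirectional GRU summary above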

1.8 Notes on using LSTM and GRU

  1. Before the first call, the hidden state needs to be initialized; if it is not, an all-zero hidden state is created by default.
  2. The output of the LSTM or GRU at the last time step is usually used to represent the result of processing the text; its shape is [batch_size, num_directions*units].
    1. Not every model uses only the last time step's result.
  3. With a bidirectional LSTM, the final output of each direction is usually used as the representation of the data after passing through the bidirectional LSTM (see the sketch below).
    • The final representation has size [batch_size, hidden_size*2].
  4. The same applies to GRU.
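For point 3, a minimal sketch (reusing f_ht and b_ht from the bidirectional example above) of concatenating the final output of each direction:

# [batch_size, hidden_size * 2]: final forward and backward hidden states, concatenated
sentence_repr = tf.concat([f_ht, b_ht], axis=-1)
print(sentence_repr.shape)  # (10, 24)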

2. Text Sentiment Classification with LSTM and GRU

Previous section: 3.4 Using RNN layers

Earlier we introduced the sentiment-classification problem and solved it with a SimpleRNN model. Having now covered the more powerful LSTM and GRU networks, we upgrade the model. Thanks to TensorFlow's uniform interface for recurrent layers, only a few small changes to the previous code are needed to upgrade to an LSTM or GRU model.

LSTM model

First, the Cell approach. The state List of an LSTM network has two elements, so the $h$ and $c$ vectors of each layer need to be initialized separately. For example:

# initialize the state List, [h, c] : [b, 64]
self.state0 = [tf.zeros([batchsz, units]), tf.zeros([batchsz, units])]
self.state1 = [tf.zeros([batchsz, units]), tf.zeros([batchsz, units])]

Then switch the cells to LSTMCell. The code is as follows:

# build 2 Cells
self.rnn_cell0 = layers.LSTMCell(units, dropout=0.5)
self.rnn_cell1 = layers.LSTMCell(units, dropout=0.5)

The rest of the code runs without modification.

LSTMCell

class MyRNN(keras.Model):
    # multi-layer network built with LSTMCell
    def __init__(self, units):
        super(MyRNN, self).__init__()
        # initialize the state List, [h, c] : [b, 64]
        self.state0 = [tf.zeros([batchsz, units]), tf.zeros([batchsz, units])]
        self.state1 = [tf.zeros([batchsz, units]), tf.zeros([batchsz, units])]
        # word embedding, [b, 80] => [b, 80, 100]
        self.embedding = layers.Embedding(
            vocab_size, embedding_dim, input_length=max_length)
        # build 2 Cells
        self.rnn_cell0 = layers.LSTMCell(units, dropout=0.5)
        self.rnn_cell1 = layers.LSTMCell(units, dropout=0.5)
        # build the classification head that classifies the cell output features, 2 classes
        # [b, 80, 100] => [b, 64] => [b, 1]
        self.outlayer = Sequential([
            layers.Dense(units),
            layers.Dropout(rate=0.5),
            layers.ReLU(),
            layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 80]
        # embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # rnn cell compute, [b, 80, 100] => [b, 64]
        state0 = self.state0
        state1 = self.state1
        for word in tf.unstack(x, axis=1):  # word: [b, 100]
            out0, state0 = self.rnn_cell0(word, state0, training)
            out1, state1 = self.rnn_cell1(out0, state1, training)
        # the last output of the last layer is the input to the classifier: [b, 64] => [b, 1]
        x = self.outlayer(out1, training)
        # p(y is pos|x)
        prob = tf.sigmoid(x)

        return prob


def main():
    units = 64  # length of the RNN state vector
    epochs = 20  # number of training epochs

    model = MyRNN(units)
    # loss, optimizer and metrics
    model.compile(optimizer=optimizers.RMSprop(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])

    # train and validate
    history3 = model.fit(db_train, epochs=epochs, validation_data=db_test)
    # test
    model.evaluate(db_test)
    plot_graphs(history3, 'accuracy', title="LSTMCell")
    plot_graphs(history3, 'loss', title="LSTMCell")


if __name__ == '__main__':
    main()
Epoch 1/20
195/195 [==============================] - 64s 331ms/step - loss: 0.5681 - accuracy: 0.6938 - val_loss: 0.4031 - val_accuracy: 0.8169
Epoch 2/20
195/195 [==============================] - 66s 336ms/step - loss: 0.3971 - accuracy: 0.8333 - val_loss: 0.3926 - val_accuracy: 0.8308
Epoch 3/20
195/195 [==============================] - 65s 332ms/step - loss: 0.3439 - accuracy: 0.8635 - val_loss: 0.3988 - val_accuracy: 0.8228
Epoch 4/20
195/195 [==============================] - 65s 331ms/step - loss: 0.3135 - accuracy: 0.8778 - val_loss: 0.4547 - val_accuracy: 0.8091
Epoch 5/20
195/195 [==============================] - 67s 341ms/step - loss: 0.2912 - accuracy: 0.8885 - val_loss: 0.3802 - val_accuracy: 0.8300
Epoch 6/20
195/195 [==============================] - 64s 328ms/step - loss: 0.2723 - accuracy: 0.8991 - val_loss: 0.3806 - val_accuracy: 0.8310
Epoch 7/20
195/195 [==============================] - 63s 323ms/step - loss: 0.2541 - accuracy: 0.9050 - val_loss: 0.4248 - val_accuracy: 0.8241
Epoch 8/20
195/195 [==============================] - 65s 335ms/step - loss: 0.2412 - accuracy: 0.9117 - val_loss: 0.4854 - val_accuracy: 0.8199
Epoch 9/20
195/195 [==============================] - 66s 339ms/step - loss: 0.2256 - accuracy: 0.9185 - val_loss: 0.4157 - val_accuracy: 0.8212
Epoch 10/20
195/195 [==============================] - 66s 340ms/step - loss: 0.2116 - accuracy: 0.9230 - val_loss: 0.4506 - val_accuracy: 0.8242
Epoch 11/20
195/195 [==============================] - 68s 348ms/step - loss: 0.1989 - accuracy: 0.9285 - val_loss: 0.4652 - val_accuracy: 0.8058
Epoch 12/20
195/195 [==============================] - 69s 355ms/step - loss: 0.1880 - accuracy: 0.9304 - val_loss: 0.5808 - val_accuracy: 0.8149
Epoch 13/20
195/195 [==============================] - 67s 344ms/step - loss: 0.1752 - accuracy: 0.9361 - val_loss: 0.5856 - val_accuracy: 0.8157
Epoch 14/20
195/195 [==============================] - 66s 341ms/step - loss: 0.1683 - accuracy: 0.9407 - val_loss: 0.5117 - val_accuracy: 0.8101
Epoch 15/20
195/195 [==============================] - 66s 341ms/step - loss: 0.1575 - accuracy: 0.9444 - val_loss: 0.5542 - val_accuracy: 0.8075
Epoch 16/20
195/195 [==============================] - 66s 339ms/step - loss: 0.1449 - accuracy: 0.9510 - val_loss: 0.5544 - val_accuracy: 0.8082
Epoch 17/20
195/195 [==============================] - 66s 340ms/step - loss: 0.1339 - accuracy: 0.9532 - val_loss: 0.5779 - val_accuracy: 0.8105
Epoch 18/20
195/195 [==============================] - 66s 341ms/step - loss: 0.1243 - accuracy: 0.9571 - val_loss: 0.5831 - val_accuracy: 0.8030
Epoch 19/20
195/195 [==============================] - 65s 335ms/step - loss: 0.1146 - accuracy: 0.9613 - val_loss: 0.7025 - val_accuracy: 0.7888
Epoch 20/20
195/195 [==============================] - 65s 335ms/step - loss: 0.1094 - accuracy: 0.9621 - val_loss: 0.6452 - val_accuracy: 0.8026
195/195 [==============================] - 14s 73ms/step - loss: 0.6452 - accuracy: 0.8026

Accuracy and loss curves of the LSTMCell model.

The validation accuracy is 80.26%.

For the layer approach, only one place in the network model needs to change:

# build the RNN, simply swapping in the LSTM class
self.rnn = keras.Sequential([
    layers.LSTM(units, dropout=0.5, return_sequences=True),
    layers.LSTM(units, dropout=0.5)
])

GRU model

First, the Cell approach. The state List of a GRU has only one element, the same as a plain RNN, so only the cell type needs to change:

# build 2 Cells
self.rnn_cell0 = layers.GRUCell(units, dropout=0.5)
self.rnn_cell1 = layers.GRUCell(units, dropout=0.5)

For the layer approach, just change the layer type:

# build the RNN
self.rnn = keras.Sequential([
    layers.GRU(units, dropout=0.5, return_sequences=True),
    layers.GRU(units, dropout=0.5)
])

GRUCell

class MyRNN(keras.Model):
    # multi-layer network built with GRUCell
    def __init__(self, units):
        super(MyRNN, self).__init__()
        # [b, 64], build the Cell initial state vectors, reused across batches
        self.state0 = [tf.zeros([batchsz, units])]
        self.state1 = [tf.zeros([batchsz, units])]
        # word embedding, [b, 80] => [b, 80, 100]
        self.embedding = layers.Embedding(
            vocab_size, embedding_dim, input_length=max_length)
        # build 2 Cells
        self.rnn_cell0 = layers.GRUCell(units, dropout=0.5)
        self.rnn_cell1 = layers.GRUCell(units, dropout=0.5)
        # build the classification head that classifies the cell output features, 2 classes
        # [b, 80, 100] => [b, 64] => [b, 1]
        self.outlayer = Sequential([
            layers.Dense(units),
            layers.Dropout(rate=0.5),
            layers.ReLU(),
            layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 80]
        # embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # rnn cell compute, [b, 80, 100] => [b, 64]
        state0 = self.state0
        state1 = self.state1
        for word in tf.unstack(x, axis=1):  # word: [b, 100]
            out0, state0 = self.rnn_cell0(word, state0, training)
            out1, state1 = self.rnn_cell1(out0, state1, training)
        # the last output of the last layer is the input to the classifier: [b, 64] => [b, 1]
        x = self.outlayer(out1, training)
        # p(y is pos|x)
        prob = tf.sigmoid(x)

        return prob


def main():
    units = 64  # length of the RNN state vector
    epochs = 20  # number of training epochs

    model = MyRNN(units)
    # loss, optimizer and metrics
    model.compile(optimizer=optimizers.RMSprop(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])

    # train and validate
    history2 = model.fit(db_train, epochs=epochs, validation_data=db_test)
    # test
    model.evaluate(db_test)
    plot_graphs(history2, 'accuracy', title="GRUcell")
    plot_graphs(history2, 'loss', title="GRUcell")


if __name__ == '__main__':
    main()
Epoch 1/20
195/195 [==============================] - 40s 205ms/step - loss: 0.6548 - accuracy: 0.5858 - val_loss: 0.5477 - val_accuracy: 0.7868
Epoch 2/20
195/195 [==============================] - 55s 280ms/step - loss: 0.4448 - accuracy: 0.8065 - val_loss: 0.4007 - val_accuracy: 0.8239
Epoch 3/20
195/195 [==============================] - 56s 286ms/step - loss: 0.3696 - accuracy: 0.8502 - val_loss: 0.3770 - val_accuracy: 0.8385
Epoch 4/20
195/195 [==============================] - 56s 285ms/step - loss: 0.3282 - accuracy: 0.8704 - val_loss: 0.3633 - val_accuracy: 0.8412
Epoch 5/20
195/195 [==============================] - 52s 267ms/step - loss: 0.3004 - accuracy: 0.8828 - val_loss: 0.3692 - val_accuracy: 0.8349
Epoch 6/20
195/195 [==============================] - 55s 283ms/step - loss: 0.2731 - accuracy: 0.8963 - val_loss: 0.4217 - val_accuracy: 0.8113
Epoch 7/20
195/195 [==============================] - 56s 287ms/step - loss: 0.2548 - accuracy: 0.9048 - val_loss: 0.3685 - val_accuracy: 0.8372
Epoch 8/20
195/195 [==============================] - 55s 281ms/step - loss: 0.2339 - accuracy: 0.9135 - val_loss: 0.3811 - val_accuracy: 0.8395
Epoch 9/20
195/195 [==============================] - 54s 275ms/step - loss: 0.2153 - accuracy: 0.9217 - val_loss: 0.3828 - val_accuracy: 0.8350
Epoch 10/20
195/195 [==============================] - 54s 277ms/step - loss: 0.1968 - accuracy: 0.9294 - val_loss: 0.4069 - val_accuracy: 0.8345
Epoch 11/20
195/195 [==============================] - 55s 283ms/step - loss: 0.1818 - accuracy: 0.9361 - val_loss: 0.4736 - val_accuracy: 0.8238
Epoch 12/20
195/195 [==============================] - 56s 286ms/step - loss: 0.1682 - accuracy: 0.9409 - val_loss: 0.4649 - val_accuracy: 0.8264
Epoch 13/20
195/195 [==============================] - 55s 283ms/step - loss: 0.1560 - accuracy: 0.9446 - val_loss: 0.4608 - val_accuracy: 0.8244
Epoch 14/20
195/195 [==============================] - 55s 281ms/step - loss: 0.1405 - accuracy: 0.9514 - val_loss: 0.5357 - val_accuracy: 0.8171
Epoch 15/20
195/195 [==============================] - 55s 283ms/step - loss: 0.1335 - accuracy: 0.9538 - val_loss: 0.5598 - val_accuracy: 0.8096
Epoch 16/20
195/195 [==============================] - 55s 284ms/step - loss: 0.1232 - accuracy: 0.9581 - val_loss: 0.5288 - val_accuracy: 0.8156
Epoch 17/20
195/195 [==============================] - 54s 279ms/step - loss: 0.1173 - accuracy: 0.9605 - val_loss: 0.5532 - val_accuracy: 0.8062
Epoch 18/20
195/195 [==============================] - 56s 288ms/step - loss: 0.1089 - accuracy: 0.9627 - val_loss: 0.5832 - val_accuracy: 0.8084
Epoch 19/20
195/195 [==============================] - 57s 292ms/step - loss: 0.0990 - accuracy: 0.9669 - val_loss: 0.6589 - val_accuracy: 0.8025
Epoch 20/20
195/195 [==============================] - 56s 287ms/step - loss: 0.0911 - accuracy: 0.9698 - val_loss: 0.7305 - val_accuracy: 0.7993
195/195 [==============================] - 13s 68ms/step - loss: 0.7305 - accuracy: 0.7993

GRUlayers

class GRUmodels(keras.Model):
    # multi-layer network built with GRU layers
    def __init__(self, units):
        super(GRUmodels, self).__init__()
        # word embedding, [b, 100] => [b, 100, 100]
        self.embedding = layers.Embedding(
            vocab_size, embedding_dim, input_length=max_length)
        # build the RNN
        self.rnn = keras.Sequential([
            layers.GRU(units, dropout=0.5, return_sequences=True),
            layers.GRU(units, dropout=0.5)
        ])
        # build the classification head that classifies the output features, 2 classes
        # [b, 100, 100] => [b, 64] => [b, 1]
        self.outlayer = Sequential([
            layers.Dense(units),
            layers.Dropout(rate=0.5),
            layers.ReLU(),
            layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 80]
        # embedding: [b, 80] => [b, 80, 100]
        x = self.embedding(x)
        # rnn layers compute, [b, 80, 100] => [b, 64]
        x = self.rnn(x)
        # the last output of the last layer is the input to the classifier: [b, 64] => [b, 1]
        x = self.outlayer(x, training)
        # p(y is pos|x)
        prob = tf.sigmoid(x)

        return prob


def main():
    units = 64  # length of the RNN state vector
    epochs = 20  # number of training epochs

    model = GRUmodels(units)
    # compile: loss, optimizer and metrics
    model.compile(optimizer=optimizers.RMSprop(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])

    # train and validate
    history5 = model.fit(db_train, epochs=epochs, validation_data=db_test)
    # test
    model.evaluate(db_test)
    plot_graphs(history5, 'accuracy', title="GRU")
    plot_graphs(history5, 'loss', title="GRU")


if __name__ == '__main__':
    main()
Epoch 1/20
195/195 [==============================] - 57s 280ms/step - loss: 0.6804 - accuracy: 0.5408 - val_loss: 0.5008 - val_accuracy: 0.7498
Epoch 2/20
195/195 [==============================] - 56s 289ms/step - loss: 0.4641 - accuracy: 0.7956 - val_loss: 0.3919 - val_accuracy: 0.8277
Epoch 3/20
195/195 [==============================] - 58s 295ms/step - loss: 0.3703 - accuracy: 0.8495 - val_loss: 0.3610 - val_accuracy: 0.8406
Epoch 4/20
195/195 [==============================] - 53s 273ms/step - loss: 0.3205 - accuracy: 0.8732 - val_loss: 0.4193 - val_accuracy: 0.8131
Epoch 5/20
195/195 [==============================] - 54s 275ms/step - loss: 0.3037 - accuracy: 0.8846 - val_loss: 0.4280 - val_accuracy: 0.8312
Epoch 6/20
195/195 [==============================] - 59s 302ms/step - loss: 0.2730 - accuracy: 0.8923 - val_loss: 0.3697 - val_accuracy: 0.8343
Epoch 7/20
195/195 [==============================] - 59s 301ms/step - loss: 0.2501 - accuracy: 0.9045 - val_loss: 0.3785 - val_accuracy: 0.8320
Epoch 8/20
195/195 [==============================] - 58s 300ms/step - loss: 0.2291 - accuracy: 0.9128 - val_loss: 0.3914 - val_accuracy: 0.8322
Epoch 9/20
195/195 [==============================] - 59s 304ms/step - loss: 0.2078 - accuracy: 0.9244 - val_loss: 0.4358 - val_accuracy: 0.8321
Epoch 10/20
195/195 [==============================] - 61s 311ms/step - loss: 0.1911 - accuracy: 0.9297 - val_loss: 0.3934 - val_accuracy: 0.8213
Epoch 11/20
195/195 [==============================] - 59s 301ms/step - loss: 0.1739 - accuracy: 0.9381 - val_loss: 0.4340 - val_accuracy: 0.8241
Epoch 12/20
195/195 [==============================] - 54s 275ms/step - loss: 0.1581 - accuracy: 0.9426 - val_loss: 0.4637 - val_accuracy: 0.8231
Epoch 13/20
195/195 [==============================] - 59s 304ms/step - loss: 0.1462 - accuracy: 0.9474 - val_loss: 0.5100 - val_accuracy: 0.8083
Epoch 14/20
195/195 [==============================] - 58s 298ms/step - loss: 0.1332 - accuracy: 0.9512 - val_loss: 0.4991 - val_accuracy: 0.8184
Epoch 15/20
195/195 [==============================] - 57s 293ms/step - loss: 0.1223 - accuracy: 0.9546 - val_loss: 0.4989 - val_accuracy: 0.8128
Epoch 16/20
195/195 [==============================] - 54s 278ms/step - loss: 0.1165 - accuracy: 0.9603 - val_loss: 0.5703 - val_accuracy: 0.8063
Epoch 17/20
195/195 [==============================] - 54s 279ms/step - loss: 0.1054 - accuracy: 0.9634 - val_loss: 0.5819 - val_accuracy: 0.8054
Epoch 18/20
195/195 [==============================] - 59s 304ms/step - loss: 0.0952 - accuracy: 0.9666 - val_loss: 0.6438 - val_accuracy: 0.8004
Epoch 19/20
195/195 [==============================] - 56s 289ms/step - loss: 0.0884 - accuracy: 0.9710 - val_loss: 0.6770 - val_accuracy: 0.8022
Epoch 20/20
195/195 [==============================] - 55s 281ms/step - loss: 0.0747 - accuracy: 0.9752 - val_loss: 0.7058 - val_accuracy: 0.7942
195/195 [==============================] - 11s 55ms/step - loss: 0.7058 - accuracy: 0.7942

Accuracy and loss curves of the GRU-layer model.

Pre-trained word vectors

In the sentiment-classification task so far, the Embedding layer was trained from scratch. In practice, much of the domain knowledge in text processing is shared across tasks, so we can initialize the Embedding layer with word vectors trained on other tasks and thereby transfer that knowledge. Starting from a pre-trained Embedding layer gives good results even with few samples.

Using the pre-trained GloVe word vectors as an example, we demonstrate how a pre-trained word-vector model can improve task performance. First, download the pre-trained GloVe word-vector table.

Downloading the GloVe word-embedding files from a mirror in China

mxnet hosts the Stanford NLP GloVe word vectors; its mirror in China can be used to speed up the download. Server address: https://apache-mxnet.s3.cn-north-1.amazonaws.com.cn

Download links:

1. glove.6B.zip: https://apache-mxnet.s3.cn-north-1.amazonaws.com.cn/gluon/embeddings/glove/glove.6B.zip

2. glove.42B.300d.zip: https://apache-mxnet.s3.cn-north-1.amazonaws.com.cn/gluon/embeddings/glove/glove.42B.300d.zip

To download other files, simply change the file name at the end of the link.

We use the file glove.6B.100d.txt with feature length 100, in which each word is represented by a vector of length 100. Download it and unzip.

Figure 4: GloVe word-vector model files
Use Python file I/O to read the word-vector table and store the vectors in Numpy arrays. The code is as follows:

import os

print('Indexing word vectors.')
embeddings_index = {}  # extract each word and its vector into a dictionary
GLOVE_DIR = r'D:\学习·\自然语言处理\数据集\glove.6B'  # path where the word-vector files are stored
with open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt'), encoding='utf-8') as f:
    for line in f:
        values = line.split()
        word = values[0]
        embeddings = np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = embeddings

print('Found %s word vectors.' % len(embeddings_index))
Indexing word vectors.
Found 400000 word vectors.
print(len(embeddings_index.keys()))
print(len(word_index.keys()))
400000
86539

The GloVe.6B release stores vectors for 400,000 words. In the earlier experiments we only keep at most the 10,000 most common words, so we look up each word's vector in the GloVe table according to the word index and write it into the corresponding row, creating embedding_matrix. The code is as follows:

MAX_NUM_WORDS = vocab_size
# prepare embedding matrix
num_words = min(vocab_size, len(word_index))
embedding_matrix = np.zeros((num_words, embedding_dim))  # word-vector table
applied_vec_count = 0
for word, i in word_index.items():
    if i >= MAX_NUM_WORDS:
        continue  # skip words outside the vocabulary
    embedding_vector = embeddings_index.get(word)  # look up the GloVe vector
    if embedding_vector is not None:
        # words not found in the GloVe index will remain all zeros
        embedding_matrix[i] = embedding_vector  # write into the corresponding row
        applied_vec_count += 1
print(applied_vec_count, embedding_matrix.shape)
9838 (10000, 100)

Once the word-vector table is obtained, use it to initialize the Embedding layer and exclude the Embedding layer from gradient updates. The code is as follows:

LSTMlayers

class MyLSTM(keras.Model):
    # multi-layer network built with LSTM layers
    def __init__(self, units):
        super(MyLSTM, self).__init__()

        # word embedding, [b, 100] => [b, 100, 100]

        self.embedding = layers.Embedding(vocab_size, embedding_dim,
                                          input_length=max_length,
                                          trainable=False)  # frozen, no gradient updates
        self.embedding.build(input_shape=(None, max_length))
        # initialize the Embedding layer with the GloVe matrix
        self.embedding.set_weights([embedding_matrix])
        # build the LSTM
        self.LSTM = keras.Sequential([
            layers.LSTM(units, dropout=0.5, return_sequences=True),
            layers.LSTM(units, dropout=0.5)
        ])

        # build the classification head that classifies the output features, 2 classes
        # [b, 100, 100] => [b, 64] => [b, 1]
        self.outlayer = Sequential([
            layers.Dense(units),
            layers.Dropout(rate=0.5),
            layers.ReLU(),
            layers.Dense(1)])

    def call(self, inputs, training=None):
        x = inputs  # [b, 100]
        # embedding: [b, 100] => [b, 100, 100]
        x = self.embedding(x)
        # rnn layers compute, [b, 100, 100] => [b, 64]

        x = self.LSTM(x)
        # the last output of the last layer is the input to the classifier: [b, 64] => [b, 1]
        x = self.outlayer(x, training)
        # p(y is pos|x)
        prob = tf.sigmoid(x)

        return prob


def main():
    units = 64  # length of the RNN state vector
    epochs = 50  # number of training epochs

    model = MyLSTM(units)
    # loss, optimizer and metrics
    model.compile(optimizer=optimizers.RMSprop(0.001),
                  loss=losses.BinaryCrossentropy(),
                  metrics=['accuracy'])

    # train and validate
    history4 = model.fit(db_train, epochs=epochs, validation_data=db_test)
    # test
    model.evaluate(db_test)
    plot_graphs(history4, 'accuracy', title="LSTM")
    plot_graphs(history4, 'loss', title="LSTM")


if __name__ == '__main__':
    main()
Epoch 1/50
195/195 [==============================] - 9s 32ms/step - loss: 0.6836 - accuracy: 0.5520 - val_loss: 0.5855 - val_accuracy: 0.7033
Epoch 2/50
195/195 [==============================] - 6s 30ms/step - loss: 0.6173 - accuracy: 0.6746 - val_loss: 0.5575 - val_accuracy: 0.7146
Epoch 3/50
195/195 [==============================] - 6s 30ms/step - loss: 0.5794 - accuracy: 0.7037 - val_loss: 0.5562 - val_accuracy: 0.6993
Epoch 4/50
195/195 [==============================] - 6s 29ms/step - loss: 0.5574 - accuracy: 0.7216 - val_loss: 0.5319 - val_accuracy: 0.7397
Epoch 5/50
195/195 [==============================] - 6s 31ms/step - loss: 0.5393 - accuracy: 0.7384 - val_loss: 0.5704 - val_accuracy: 0.7183
Epoch 6/50
195/195 [==============================] - 6s 32ms/step - loss: 0.5259 - accuracy: 0.7449 - val_loss: 0.4457 - val_accuracy: 0.7944
Epoch 7/50
195/195 [==============================] - 6s 31ms/step - loss: 0.5210 - accuracy: 0.7476 - val_loss: 0.4381 - val_accuracy: 0.7942
Epoch 8/50
195/195 [==============================] - 6s 32ms/step - loss: 0.5087 - accuracy: 0.7566 - val_loss: 0.4222 - val_accuracy: 0.8084
Epoch 9/50
195/195 [==============================] - 6s 33ms/step - loss: 0.4955 - accuracy: 0.7614 - val_loss: 0.5443 - val_accuracy: 0.7620
Epoch 10/50
195/195 [==============================] - 6s 32ms/step - loss: 0.4901 - accuracy: 0.7637 - val_loss: 0.4277 - val_accuracy: 0.7968
Epoch 11/50
195/195 [==============================] - 6s 32ms/step - loss: 0.4845 - accuracy: 0.7707 - val_loss: 0.4353 - val_accuracy: 0.8030
Epoch 12/50
195/195 [==============================] - 6s 32ms/step - loss: 0.4752 - accuracy: 0.7717 - val_loss: 0.4125 - val_accuracy: 0.8113
Epoch 13/50
195/195 [==============================] - 7s 35ms/step - loss: 0.4681 - accuracy: 0.7778 - val_loss: 0.4019 - val_accuracy: 0.8183
Epoch 14/50
195/195 [==============================] - 7s 36ms/step - loss: 0.4638 - accuracy: 0.7796 - val_loss: 0.3966 - val_accuracy: 0.8265
Epoch 15/50
195/195 [==============================] - 7s 34ms/step - loss: 0.4582 - accuracy: 0.7873 - val_loss: 0.4003 - val_accuracy: 0.8234
Epoch 16/50
195/195 [==============================] - 7s 34ms/step - loss: 0.4535 - accuracy: 0.7873 - val_loss: 0.3820 - val_accuracy: 0.8297
Epoch 17/50
195/195 [==============================] - 7s 35ms/step - loss: 0.4518 - accuracy: 0.7900 - val_loss: 0.4199 - val_accuracy: 0.8167
Epoch 18/50
195/195 [==============================] - 7s 35ms/step - loss: 0.4488 - accuracy: 0.7957 - val_loss: 0.3909 - val_accuracy: 0.8253
Epoch 19/50
195/195 [==============================] - 6s 33ms/step - loss: 0.4439 - accuracy: 0.7956 - val_loss: 0.3949 - val_accuracy: 0.8203
Epoch 20/50
195/195 [==============================] - 6s 33ms/step - loss: 0.4379 - accuracy: 0.7935 - val_loss: 0.3991 - val_accuracy: 0.8140
Epoch 21/50
195/195 [==============================] - 7s 35ms/step - loss: 0.4325 - accuracy: 0.7991 - val_loss: 0.3914 - val_accuracy: 0.8194
Epoch 22/50
195/195 [==============================] - 7s 36ms/step - loss: 0.4353 - accuracy: 0.8004 - val_loss: 0.3838 - val_accuracy: 0.8282
Epoch 23/50
195/195 [==============================] - 7s 34ms/step - loss: 0.4284 - accuracy: 0.8020 - val_loss: 0.3754 - val_accuracy: 0.8288
Epoch 24/50
195/195 [==============================] - 7s 38ms/step - loss: 0.4259 - accuracy: 0.8018 - val_loss: 0.3811 - val_accuracy: 0.8330
Epoch 25/50
195/195 [==============================] - 8s 41ms/step - loss: 0.4242 - accuracy: 0.8056 - val_loss: 0.3684 - val_accuracy: 0.8369
Epoch 26/50
195/195 [==============================] - 7s 37ms/step - loss: 0.4234 - accuracy: 0.8053 - val_loss: 0.3629 - val_accuracy: 0.8385
Epoch 27/50
195/195 [==============================] - 7s 34ms/step - loss: 0.4230 - accuracy: 0.8067 - val_loss: 0.3776 - val_accuracy: 0.8262
Epoch 28/50
195/195 [==============================] - 7s 36ms/step - loss: 0.4175 - accuracy: 0.8103 - val_loss: 0.3745 - val_accuracy: 0.8334
Epoch 29/50
195/195 [==============================] - 7s 34ms/step - loss: 0.4072 - accuracy: 0.8088 - val_loss: 0.3662 - val_accuracy: 0.8346
Epoch 30/50
195/195 [==============================] - 7s 34ms/step - loss: 0.4133 - accuracy: 0.8121 - val_loss: 0.3690 - val_accuracy: 0.8323
Epoch 31/50
195/195 [==============================] - 7s 36ms/step - loss: 0.4121 - accuracy: 0.8110 - val_loss: 0.3674 - val_accuracy: 0.8378
Epoch 32/50
195/195 [==============================] - 7s 35ms/step - loss: 0.4064 - accuracy: 0.8157 - val_loss: 0.3695 - val_accuracy: 0.8359
Epoch 33/50
195/195 [==============================] - 7s 36ms/step - loss: 0.4032 - accuracy: 0.8163 - val_loss: 0.3660 - val_accuracy: 0.8312
Epoch 34/50
195/195 [==============================] - 7s 37ms/step - loss: 0.4004 - accuracy: 0.8191 - val_loss: 0.3647 - val_accuracy: 0.8389
Epoch 35/50
195/195 [==============================] - 7s 36ms/step - loss: 0.4013 - accuracy: 0.8182 - val_loss: 0.3661 - val_accuracy: 0.8366
Epoch 36/50
195/195 [==============================] - 7s 35ms/step - loss: 0.3988 - accuracy: 0.8204 - val_loss: 0.4540 - val_accuracy: 0.7972
Epoch 37/50
195/195 [==============================] - 7s 37ms/step - loss: 0.3951 - accuracy: 0.8202 - val_loss: 0.3644 - val_accuracy: 0.8410
Epoch 38/50
195/195 [==============================] - 7s 36ms/step - loss: 0.3902 - accuracy: 0.8204 - val_loss: 0.3866 - val_accuracy: 0.8287
Epoch 39/50
195/195 [==============================] - 8s 39ms/step - loss: 0.3940 - accuracy: 0.8216 - val_loss: 0.3623 - val_accuracy: 0.8389
Epoch 40/50
195/195 [==============================] - 7s 35ms/step - loss: 0.3891 - accuracy: 0.8263 - val_loss: 0.3698 - val_accuracy: 0.8330
Epoch 41/50
195/195 [==============================] - 7s 35ms/step - loss: 0.3904 - accuracy: 0.8243 - val_loss: 0.3612 - val_accuracy: 0.8391
Epoch 42/50
195/195 [==============================] - 7s 34ms/step - loss: 0.3884 - accuracy: 0.8246 - val_loss: 0.3685 - val_accuracy: 0.8373
Epoch 43/50
195/195 [==============================] - 7s 34ms/step - loss: 0.3889 - accuracy: 0.8198 - val_loss: 0.3627 - val_accuracy: 0.8349
Epoch 44/50
195/195 [==============================] - 7s 34ms/step - loss: 0.3822 - accuracy: 0.8263 - val_loss: 0.3721 - val_accuracy: 0.8368
Epoch 45/50
195/195 [==============================] - 7s 34ms/step - loss: 0.3813 - accuracy: 0.8288 - val_loss: 0.3719 - val_accuracy: 0.8297
Epoch 46/50
195/195 [==============================] - 6s 33ms/step - loss: 0.3820 - accuracy: 0.8307 - val_loss: 0.3559 - val_accuracy: 0.8435
Epoch 47/50
195/195 [==============================] - 7s 34ms/step - loss: 0.3775 - accuracy: 0.8293 - val_loss: 0.3583 - val_accuracy: 0.8397
Epoch 48/50
195/195 [==============================] - 7s 33ms/step - loss: 0.3835 - accuracy: 0.8259 - val_loss: 0.3754 - val_accuracy: 0.8347
Epoch 49/50
195/195 [==============================] - 6s 33ms/step - loss: 0.3760 - accuracy: 0.8303 - val_loss: 0.3562 - val_accuracy: 0.8425
Epoch 50/50
195/195 [==============================] - 8s 42ms/step - loss: 0.3693 - accuracy: 0.8341 - val_loss: 0.4041 - val_accuracy: 0.8251
195/195 [==============================] - 3s 15ms/step - loss: 0.4041 - accuracy: 0.8251

Accuracy and loss curves of the LSTM model with the pre-trained GloVe embedding.

Everything else stays the same. Comparing the Embedding layer initialized with the pre-trained GloVe model against a randomly initialized Embedding layer, after 50 epochs the pre-trained model reaches an accuracy of 82.51%, an improvement of about 2%.

References

TensorFlow Deep Learning, by Longlong (龙龙老师)

https://machinelearningmastery.com/return-sequences-and-return-states-for-lstms-in-keras/

Official Keras/TensorFlow documentation

https://www.imooc.com/article/36743

https://stackoverflow.com/questions/57318930/calculating-the-number-of-parameters-of-a-gru-layer-keras

Verification of the computation flow of the LSTM layer in Keras
