Video source
https://www.bilibili.com/video/av40787141?from=search&seid=17003307842787199553
Notes
Dropout is used to reduce overfitting, i.e. to narrow the gap between training and test accuracy.
When the training-set and test-set results differ by a large margin, the model is overfitting.
With dropout, the accuracy reported during each training epoch may look low and then jump at the final evaluation: during training only part of the neurons are active, while during the final evaluation all neurons are active (see the sketch after these notes).
Softmax is usually used in the last layer of a neural network.
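To make that last point concrete, here is a minimal numpy sketch (not from the video; the function and variable names are illustrative) of inverted dropout, the variant Keras implements: during training each activation is kept with probability 1 - rate and rescaled by 1/(1 - rate), so the expected output is unchanged; at evaluation time the layer passes everything through. A stable softmax is sketched alongside it.

import numpy as np

def dropout(x, rate, training):
    """Illustrative inverted dropout: only active during training."""
    if not training:
        return x                                   # evaluation: all neurons work
    keep_prob = 1.0 - rate
    mask = np.random.binomial(1, keep_prob, size=x.shape)
    return x * mask / keep_prob                    # rescale so the expected value is unchanged

def softmax(z):
    """Softmax over the last axis, as used on a network's final layer."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract the max for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

h = np.random.randn(4, 10)
print(dropout(h, rate=0.4, training=True))         # roughly 40% of activations zeroed, rest scaled up
print(softmax(h).sum(axis=-1))                     # each row of probabilities sums to 1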
import numpy as np
from keras.datasets import mnist          # MNIST is downloaded from the network on first use
from keras.utils import np_utils
from keras.models import Sequential       # sequential model
from keras.layers import Dense, Dropout   # import Dropout here
from keras.optimizers import SGD
Using TensorFlow backend.
# Load the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Inspect the shapes
print('x_shape:', x_train.shape)   # (60000, 28, 28)
print('y_shape:', y_train.shape)   # (60000,)

# (60000, 28, 28) -> (60000, 784): keep the 60000 rows, -1 infers the column count
# Dividing by 255 normalizes the pixel values
x_train = x_train.reshape(x_train.shape[0], -1) / 255.0
x_test = x_test.reshape(x_test.shape[0], -1) / 255.0

# Convert the labels to one-hot form, 10 classes
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)

# Build the model: 784 input neurons, 10 output neurons
# Biases are initialized to zeros (the default)
model = Sequential([
    Dense(units=200, input_dim=784, bias_initializer='zeros', activation='tanh'),  # tanh activation
    # Dropout(0.4),  # 40% of the neurons are dropped
    Dense(units=100, bias_initializer='zeros', activation='tanh'),  # tanh activation
    # Dropout(0.4),  # 40% of the neurons are dropped
    Dense(units=10, bias_initializer='zeros', activation='softmax')
])

# Layers can also be added one at a time:
###
# model.add(Dense(...))
# model.add(Dense(...))
###

# Define the optimizer with a learning rate of 0.2
sgd = SGD(lr=0.2)

# Set the optimizer and loss, and report accuracy during training
model.compile(
    optimizer=sgd,                     # SGD optimizer
    loss='categorical_crossentropy',   # cross-entropy loss converges faster here
    metrics=['accuracy'],              # report accuracy
)

# Train (a new training style, different from the earlier notes):
# 60000 images, 32 per batch, 10 epochs (one epoch = one full pass over all 60000 images)
model.fit(x_train, y_train, batch_size=32, epochs=10)

# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test)
print('\ntest loss', loss)
print('\ntest accuracy', accuracy)

loss, accuracy = model.evaluate(x_train, y_train)
print('\ntrain loss', loss)
print('\ntrain accuracy', accuracy)
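The model.add(Dense(...)) lines in the comments above are left elided as in the original; as a sketch, the same network could be assembled layer by layer like this:

# Equivalent layer-by-layer construction (a sketch, not from the video)
model = Sequential()
model.add(Dense(units=200, input_dim=784, bias_initializer='zeros', activation='tanh'))
model.add(Dense(units=100, bias_initializer='zeros', activation='tanh'))
model.add(Dense(units=10, bias_initializer='zeros', activation='softmax'))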
x_shape: (60000, 28, 28)
y_shape: (60000,)
Epoch 1/10
60000/60000 [==============================] - 6s 100us/step - loss: 0.2539 - acc: 0.9235
Epoch 2/10
60000/60000 [==============================] - 6s 95us/step - loss: 0.1175 - acc: 0.9639
Epoch 3/10
60000/60000 [==============================] - 5s 90us/step - loss: 0.0815 - acc: 0.9745
Epoch 4/10
60000/60000 [==============================] - 5s 90us/step - loss: 0.0601 - acc: 0.9809
Epoch 5/10
60000/60000 [==============================] - 6s 92us/step - loss: 0.0451 - acc: 0.9860
Epoch 6/10
60000/60000 [==============================] - 5s 91us/step - loss: 0.0336 - acc: 0.9899
Epoch 7/10
60000/60000 [==============================] - 5s 92us/step - loss: 0.0248 - acc: 0.9926
Epoch 8/10
60000/60000 [==============================] - 6s 93us/step - loss: 0.0185 - acc: 0.9948
Epoch 9/10
60000/60000 [==============================] - 6s 93us/step - loss: 0.0128 - acc: 0.9970
Epoch 10/10
60000/60000 [==============================] - 6s 93us/step - loss: 0.0082 - acc: 0.9988
10000/10000 [==============================] - 0s 39us/step
test loss 0.07058678171953651
test accuracy 0.9786
60000/60000 [==============================] - 2s 34us/step
train loss 0.0052643890143993
train accuracy 0.9995
With dropout enabled
(uncomment the Dropout(0.4) lines)
model = Sequential([
    Dense(units=200, input_dim=784, bias_initializer='zeros', activation='tanh'),  # tanh activation
    Dropout(0.4),  # 40% of the neurons are dropped
    Dense(units=100, bias_initializer='zeros', activation='tanh'),  # tanh activation
    Dropout(0.4),  # 40% of the neurons are dropped
    Dense(units=10, bias_initializer='zeros', activation='softmax')  # softmax output
])
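As a side note (my addition, not from the video): passing validation_data to model.fit reports the test-set metrics after every epoch, which makes the train/test gap visible while training rather than only at the end.

# Variation on the training call above: also evaluate on the test set
# at the end of each epoch so the train/test gap can be watched per epoch
model.fit(x_train, y_train, batch_size=32, epochs=10,
          validation_data=(x_test, y_test))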
x_shape: (60000, 28, 28)
y_shape: (60000,)
Epoch 1/10
60000/60000 [==============================] - 11s 184us/step - loss: 0.4158 - acc: 0.8753
Epoch 2/10
60000/60000 [==============================] - 10s 166us/step - loss: 0.2799 - acc: 0.9177
Epoch 3/10
60000/60000 [==============================] - 11s 177us/step - loss: 0.2377 - acc: 0.9302
Epoch 4/10
60000/60000 [==============================] - 10s 164us/step - loss: 0.2169 - acc: 0.9356
Epoch 5/10
60000/60000 [==============================] - 10s 170us/step - loss: 0.1979 - acc: 0.9413
Epoch 6/10
60000/60000 [==============================] - 11s 183us/step - loss: 0.1873 - acc: 0.9439
Epoch 7/10
60000/60000 [==============================] - 11s 180us/step - loss: 0.1771 - acc: 0.9472
Epoch 8/10
60000/60000 [==============================] - 12s 204us/step - loss: 0.1676 - acc: 0.9501
Epoch 9/10
60000/60000 [==============================] - 11s 187us/step - loss: 0.1608 - acc: 0.9527
Epoch 10/10
60000/60000 [==============================] - 10s 170us/step - loss: 0.1534 - acc: 0.9542
10000/10000 [==============================] - 1s 68us/step
test loss 0.09667835112037138
test accuracy 0.9692
60000/60000 [==============================] - 4s 70us/step
train loss 0.07203661710163578
train accuracy 0.9774666666666667
PS: This run does not show dropout at its best (the train/test accuracy gap does shrink, but test accuracy drops slightly); it is included mainly as an example of how to use dropout.