This article is based on Mr. Jia's (贾老师) video on Bilibili; link: keras入门_哔哩哔哩_bilibili
Language: Python; tooling: Jupyter.
This article builds on the same handwritten-digit-recognition example and adds Dropout, which temporarily disables a fraction of the hidden-layer neurons during each training step to prevent overfitting.
For how Dropout works, see: 深度学习中Dropout原理解析_Microstrong-CSDN博客_dropout
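Before the full program, here is a minimal NumPy sketch of the idea behind inverted Dropout, the variant Keras implements; the toy array values are made up purely for illustration:
import numpy as np
rng = np.random.default_rng(0)
rate = 0.4                             # fraction of neurons to drop
x = np.ones((1, 5))                    # pretend activations of 5 neurons
mask = rng.random(x.shape) >= rate     # keep each neuron with probability 1 - rate
y = x * mask / (1.0 - rate)            # zero out dropped neurons, rescale the rest
print(y)                               # dropped neurons give 0, survivors 1/(1-rate) ≈ 1.67
At inference time no neurons are dropped, and thanks to the 1/(1-rate) rescaling during training the expected activations already match, so nothing needs to change.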
The full code is as follows:
# assumes TensorFlow 2.x, where Keras ships as tensorflow.keras
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import SGD
# load the data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# print the dataset shapes
print('x_shape:', x_train.shape)  # (60000, 28, 28)
print('y_shape:', y_train.shape)  # (60000,)
# flatten each 28x28 image into a 784-vector and normalize pixel values to [0, 1]
x_train = x_train.reshape(x_train.shape[0], -1) / 255.0
x_test = x_test.reshape(x_test.shape[0], -1) / 255.0
# convert the labels to one-hot format; num_classes is the number of output classes
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
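# for example, the label 3 becomes a length-10 vector with a 1 at index 3:
# to_categorical([3], num_classes=10)  ->  [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]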
# build the model: 784 input neurons, 10 output neurons
# this time with two hidden layers
model = Sequential([
    Dense(units=200, input_dim=784, bias_initializer='ones', activation='tanh'),
    # to reduce overfitting, randomly disable 40% of this layer's neurons during training
    Dropout(0.4),
    Dense(units=100, bias_initializer='ones', activation='tanh'),
    Dropout(0.4),
    Dense(units=10, bias_initializer='ones', activation='softmax')
])
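# optional: inspect the layer shapes and parameter counts
# model.summary()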
# define the optimizer (learning_rate replaces the older lr argument)
sgd = SGD(learning_rate=0.2)
# switch the loss to cross-entropy: for classification it converges faster and gives better results than mean squared error
model.compile(
    optimizer=sgd,
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)
# train the model
model.fit(x_train, y_train, batch_size=32, epochs=10)
# evaluate on the test set
loss, accuracy = model.evaluate(x_test, y_test)
print('\ntest loss', loss)
print('test accuracy', accuracy)
# also evaluate on the training set; a large train/test gap would indicate overfitting
loss, accuracy = model.evaluate(x_train, y_train)
print('\ntrain loss', loss)
print('train accuracy', accuracy)
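One point worth knowing: the Dropout layers are only active during training; Keras disables them automatically in model.evaluate and model.predict, so the evaluation above runs through the full network. A minimal sketch to check this behavior (the toy tensor is just for illustration):
import tensorflow as tf
drop = tf.keras.layers.Dropout(0.4)
x = tf.ones((1, 10))
print(drop(x, training=True))    # roughly 40% of entries zeroed, survivors scaled by 1/(1-0.4)
print(drop(x, training=False))   # unchanged: dropout is a no-op outside training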
Notice that when building the model we changed the output size of the first fully connected layer and added Dropout(0.4) after it, so 40% of that layer's neurons are temporarily disabled during training. Let's look at the results: