Training a CNN image classifier on your own dataset with TensorFlow, part 2 (saving the model & testing a single image)

  When training a neural network, we need to save the model so that we can resume training later or test with the trained weights. To do this, we create a Saver that writes checkpoints of the model.

import os
import numpy as np
import tensorflow as tf
import inputData   # data pipeline module from the previous post
import model       # model/loss/training/accuracy helpers from the previous post

def run_training():
    data_dir = 'C:/Users/wk/Desktop/bky/dataSet/'
    log_dir = 'C:/Users/wk/Desktop/bky/log/'
    image, label = inputData.get_files(data_dir)
    image_batches, label_batches = inputData.get_batches(image, label, 32, 32, 16, 20)
    print(image_batches.shape)
    p = model.mmodel(image_batches, 16)
    cost = model.loss(p, label_batches)
    train_op = model.training(cost, 0.001)
    acc = model.get_accuracy(p, label_batches)
    sess = tf.Session()
    init = tf.global_variables_initializer()
    sess.run(init)
    saver = tf.train.Saver()   # saver that checkpoints the model variables
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        for step in np.arange(1000):
            print(step)
            if coord.should_stop():
                break
            _, train_acc, train_loss = sess.run([train_op, acc, cost])
            print("loss:{} accuracy:{}".format(train_loss, train_acc))
            if step % 100 == 0:
                check = os.path.join(log_dir, "model.ckpt")
                saver.save(sess, check, global_step=step)   # checkpoint every 100 steps
    except tf.errors.OutOfRangeError:
        print("Done!!!")
    finally:
        coord.request_stop()
    coord.join(threads)
    sess.close()
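  Since one reason for saving checkpoints is to resume training later, it can be handy to restore the newest checkpoint before entering the training loop instead of always starting from the fresh initialization. The following is only a minimal sketch (the helper name restore_if_available is hypothetical and not part of the original code; it assumes the same log_dir and Saver as above):

def restore_if_available(sess, saver, log_dir):
    # look up the newest checkpoint recorded in the 'checkpoint' index file
    ckpt_path = tf.train.latest_checkpoint(log_dir)
    if ckpt_path:
        saver.restore(sess, ckpt_path)       # overwrite the fresh init with the saved weights
        # the step number is encoded in the filename suffix, e.g. model.ckpt-100
        return int(ckpt_path.split('-')[-1])
    return 0                                 # no checkpoint yet, start from step 0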

  Information about the saved model is recorded in the checkpoint file, which looks roughly like this:

model_checkpoint_path: "C:/Users/wk/Desktop/bky/log/model.ckpt-100"
all_model_checkpoint_paths: "C:/Users/wk/Desktop/bky/log/model.ckpt-0"
all_model_checkpoint_paths: "C:/Users/wk/Desktop/bky/log/model.ckpt-100"

  Several other files are also generated, which hold the actual model parameters and graph metadata. When testing later, the program reads the checkpoint file to locate and load these data files.
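  The checkpoint file itself is just a small text index; tf.train.get_checkpoint_state parses it, and tf.train.latest_checkpoint returns the newest path recorded in it, which is exactly what the test code further below relies on. A quick sketch for inspecting it (assuming the same log_dir as above):

import tensorflow as tf

log_dir = 'C:/Users/wk/Desktop/bky/log/'
ckpt = tf.train.get_checkpoint_state(log_dir)
if ckpt:
    print(ckpt.model_checkpoint_path)         # newest checkpoint, e.g. .../model.ckpt-100
    print(ckpt.all_model_checkpoint_paths)    # every checkpoint still listed in the index
print(tf.train.latest_checkpoint(log_dir))    # shortcut for the newest path (or None)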

  After the network has been built and trained, testing it directly with the code from the previous post produces a shape-mismatch error, roughly saying that the input of a convolution layer does not match the shape of the image. This happens because the previous post defined the weights and biases outside the model function, so calling the model again raises a ValueError.

  Therefore, we need to define the parameters inside the model function; only then can the trained parameters actually initialize the model when the checkpoint is loaded. The rewritten model function is as follows:

def mmodel(images, batch_size):
    with tf.variable_scope('conv1') as scope:
        weights = tf.get_variable('weights',
                                  shape=[3, 3, 3, 16],
                                  dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.1, dtype=tf.float32))
        biases = tf.get_variable('biases',
                                 shape=[16],
                                 dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        conv = tf.nn.conv2d(images, weights, strides=[1, 1, 1, 1], padding='SAME')
        pre_activation = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(pre_activation, name=scope.name)
    with tf.variable_scope('pooling1_lrn') as scope:
        pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                               padding='SAME', name='pooling1')
        norm1 = tf.nn.lrn(pool1, depth_radius=4, bias=1.0, alpha=0.001 / 9.0,
                          beta=0.75, name='norm1')
    with tf.variable_scope('conv2') as scope:
        weights = tf.get_variable('weights',
                                  shape=[3, 3, 16, 128],
                                  dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.1, dtype=tf.float32))
        biases = tf.get_variable('biases',
                                 shape=[128],
                                 dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        conv = tf.nn.conv2d(norm1, weights, strides=[1, 1, 1, 1], padding='SAME')
        pre_activation = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(pre_activation, name='conv2')
    with tf.variable_scope('pooling2_lrn') as scope:
        norm2 = tf.nn.lrn(conv2, depth_radius=4, bias=1.0, alpha=0.001 / 9.0,
                          beta=0.75, name='norm2')
        pool2 = tf.nn.max_pool(norm2, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1],
                               padding='SAME', name='pooling2')
    with tf.variable_scope('local3') as scope:
        reshape = tf.reshape(pool2, shape=[batch_size, -1])
        dim = reshape.get_shape()[1].value
        weights = tf.get_variable('weights',
                                  shape=[dim, 4096],
                                  dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.005, dtype=tf.float32))
        biases = tf.get_variable('biases',
                                 shape=[4096],
                                 dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases, name=scope.name)
    with tf.variable_scope('softmax_linear') as scope:
        weights = tf.get_variable('softmax_linear',
                                  shape=[4096, 2],
                                  dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.005, dtype=tf.float32))
        biases = tf.get_variable('biases',
                                 shape=[2],
                                 dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        softmax_linear = tf.add(tf.matmul(local3, weights), biases, name='softmax_linear')
    return softmax_linear
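  Because tf.train.Saver restores variables by name, the scoped names produced by tf.get_variable (conv1/weights, conv1/biases, and so on) must match what was stored at training time. If restoring ever fails, it can help to list what the checkpoint actually contains; a small sketch, assuming the checkpoint directory used above:

import tensorflow as tf

log_dir = 'C:/Users/wk/Desktop/bky/log/'
ckpt_path = tf.train.latest_checkpoint(log_dir)
# each entry is (variable_name, shape); names should look like 'conv1/weights', etc.
for name, shape in tf.train.list_variables(ckpt_path):
    print(name, shape)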

Testing the trained model

First, load a single test image:

from PIL import Image
import matplotlib.pyplot as plt
import numpy as np

def get_one_image(img_dir):
    image = Image.open(img_dir)
    plt.imshow(image)
    image = image.resize([32, 32])   # resize to the network input size
    image_arr = np.array(image)
    return image_arr

Then load the model and compute the test result:

def test(test_file):
    log_dir = 'C:/Users/wk/Desktop/bky/log/'
    image_arr = get_one_image(test_file)
    with tf.Graph().as_default():
        # the test image is built into the graph as a constant,
        # so no placeholder / feed_dict is needed here
        image = tf.cast(image_arr, tf.float32)
        image = tf.image.per_image_standardization(image)
        image = tf.reshape(image, [1, 32, 32, 3])
        print(image.shape)
        p = model.mmodel(image, 1)
        logits = tf.nn.softmax(p)
        saver = tf.train.Saver()
        with tf.Session() as sess:
            ckpt = tf.train.get_checkpoint_state(log_dir)
            if ckpt and ckpt.model_checkpoint_path:
                global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
                saver.restore(sess, ckpt.model_checkpoint_path)
                print('Loading success, global_step is %s' % global_step)
            else:
                print('No checkpoint')
            prediction = sess.run(logits)
            max_index = np.argmax(prediction)
            print(max_index)

The first part of test() standardizes the test image into the input format the network expects; the checkpoint-loading block (get_checkpoint_state plus saver.restore) restores the trained model, after which the image can simply be run through the model.
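A hedged usage example follows; the test image path and the class-name order are only illustrative (the actual mapping from index to class depends on how inputData.get_files labels the two classes):

if __name__ == '__main__':
    # hypothetical test image path -- replace with one of your own images
    class_names = ['class_0', 'class_1']
    test('C:/Users/wk/Desktop/bky/dataSet/test.jpg')
    # test() prints the argmax index; index i corresponds to class_names[i]
    # only if this order matches the label order produced by inputData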
