DL (DNN): Training a MultiLayerNet model [6*100 + ReLU + SGD, weight_decay] on the MNIST dataset to suppress overfitting

Output results


 

Design approach


 

Core code

# The original post shows only the training loop; the imports and setup below
# follow the standard "Deep Learning from Scratch" ch06 weight-decay experiment.
import numpy as np
from dataset.mnist import load_mnist
from common.multi_layer_net import MultiLayerNet
from common.optimizer import SGD

(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True)
x_train, t_train = x_train[:300], t_train[:300]  # small subset to provoke overfitting

# weight_decay_lambda = 0   # no weight decay (overfits)
weight_decay_lambda = 0.1   # strength of the L2 penalty

network = MultiLayerNet(input_size=784, hidden_size_list=[100] * 6,
                        output_size=10, weight_decay_lambda=weight_decay_lambda)
optimizer = SGD(lr=0.01)

max_epochs = 201
train_size = x_train.shape[0]
batch_size = 100
iter_per_epoch = max(train_size / batch_size, 1)
train_acc_list, test_acc_list = [], []
epoch_cnt = 0

for i in range(1000000):
    batch_mask = np.random.choice(train_size, batch_size)  # random mini-batch
    x_batch = x_train[batch_mask]
    t_batch = t_train[batch_mask]

    grads = network.gradient(x_batch, t_batch)  # gradients include the decay term
    optimizer.update(network.params, grads)

    if i % iter_per_epoch == 0:  # once per epoch, record accuracies
        train_acc = network.accuracy(x_train, t_train)
        test_acc = network.accuracy(x_test, t_test)
        train_acc_list.append(train_acc)
        test_acc_list.append(test_acc)
        print("epoch:%d, train_acc:%.4f, test_acc:%.4f"
              % (epoch_cnt, train_acc, test_acc))
        epoch_cnt += 1
        if epoch_cnt >= max_epochs:  # stop after max_epochs epochs
            break
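For reference, the mechanism behind weight_decay_lambda can be sketched in isolation: L2 weight decay adds 0.5 * lambda * sum(||W||^2) over all weight matrices to the loss, and a lambda * W term to each weight gradient. The sketch below is a minimal NumPy illustration of that idea, not the MultiLayerNet implementation itself; the function names are illustrative.

```python
import numpy as np

def loss_with_decay(base_loss, weights, weight_decay_lambda):
    # L2 penalty: 0.5 * lambda * sum of squared weights over all layers
    penalty = 0.5 * weight_decay_lambda * sum(np.sum(W ** 2) for W in weights)
    return base_loss + penalty

def grads_with_decay(base_grads, weights, weight_decay_lambda):
    # Each weight gradient gains a lambda * W term from the penalty
    return [g + weight_decay_lambda * W for g, W in zip(base_grads, weights)]

rng = np.random.default_rng(0)
W = [rng.standard_normal((100, 100)) for _ in range(2)]   # two dummy weight layers
g = [np.zeros((100, 100)) for _ in range(2)]              # dummy base gradients

print(loss_with_decay(0.0, W, 0.1))                       # penalty term alone
print(np.allclose(grads_with_decay(g, W, 0.1)[0], 0.1 * W[0]))  # → True
```

Because the gradient gains a lambda * W term, each SGD step shrinks large weights toward zero, which is why the train/test accuracy gap narrows when weight_decay_lambda = 0.1 compared with 0.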

