1. Solution 1
Randomly drop out a fraction of the neurons in each layer.
paddle.fluid.layers.dropout(x, dropout_prob, is_test=False, seed=None, name=None, dropout_implementation='downgrade_in_infer')

Drops or keeps each element of x independently. Dropout is a regularization technique that reduces overfitting by preventing co-adaptation between neuron nodes during training. Given a dropout probability, the dropout operator randomly sets some neuron outputs to 0 with that probability; the others remain unchanged.
```python
import paddle.fluid as fluid
import numpy as np

x = fluid.layers.data(name="x", shape=[32, 32], dtype="float32")
dropped = fluid.layers.dropout(x, dropout_prob=0.5)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

np_x = np.random.random(size=(32, 32)).astype('float32')
output = exe.run(feed={"x": np_x}, fetch_list=[dropped])
print(output)
```
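The dropout_implementation argument in the signature above controls how train and inference outputs are kept consistent. A minimal numpy sketch of the two modes (an illustration of the math only, not fluid internals; the array values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((4, 4)).astype("float32")   # toy activations
p = 0.5                                    # dropout_prob
mask = rng.random(x.shape) >= p            # True = keep the unit

# 'downgrade_in_infer' (the default): training multiplies by the mask only;
# inference drops nothing and instead scales the input by (1 - p).
train_out = x * mask
infer_out = x * (1 - p)

# 'upscale_in_train': training divides the kept units by (1 - p)
# so their expected value is preserved; inference is the identity.
train_out_up = x * mask / (1 - p)
infer_out_up = x
```

Both schemes make the expected activation seen at inference match training; they differ only in where the (1 - p) scaling is applied.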
2. Add a regularization term

In fluid, regularization is set for an individual parameter by setting the regularizer attribute of its ParamAttr.
```python
param_attrs = fluid.ParamAttr(
    name="fc_weight",
    regularizer=fluid.regularizer.L1DecayRegularizer(0.1))
y_predict = fluid.layers.fc(input=x, size=10, param_attr=param_attrs)
```
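What L1 decay does to a parameter's update can be sketched in plain numpy (this illustrates the formula only, not fluid's implementation; the weight values are made up):

```python
import numpy as np

coeff = 0.1                                 # matches L1DecayRegularizer(0.1)
w = np.array([[0.5, -2.0], [0.0, 1.5]])     # toy weight matrix
grad = np.zeros_like(w)                     # gradient from the data loss alone

# L1 decay adds coeff * sign(w) to the parameter's gradient, which pushes
# weights toward exactly zero and encourages a sparse model.
grad_with_l1 = grad + coeff * np.sign(w)
```

Because the penalty's subgradient is constant in magnitude, L1 decay tends to zero out small weights entirely, unlike L2 decay, which only shrinks them proportionally.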
3. Reduce the number of network layers

A smaller network has fewer trainable parameters and lower capacity, so it is less able to memorize the training set.
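The capacity reduction can be made concrete by counting parameters in a fully connected stack (the layer sizes below are hypothetical, chosen only for illustration):

```python
def mlp_param_count(layer_sizes):
    """Weights plus biases in a stack of fully connected layers."""
    return sum(m * n + n for m, n in zip(layer_sizes, layer_sizes[1:]))

deep    = mlp_param_count([1024, 128, 128, 10])   # three fc layers
shallow = mlp_param_count([1024, 64, 10])         # two fc layers, narrower

print(deep, shallow)   # the shallow net has far fewer parameters
```

Dropping one hidden layer (and narrowing the other) cuts the parameter count by more than half here, which directly shrinks the hypothesis space the model can fit.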