Experience with choosing the error penalty factor C for svmrank

C is a coefficient specified by the user that controls how heavily misclassified points are penalized. When C is large, fewer points are misclassified, but overfitting can become severe; when C is small, many points may be misclassified and the resulting model may be inaccurate. Choosing C well therefore takes some skill, and in most cases it is found by trial and error based on experience.
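This trade-off is easy to see in a small experiment. The sketch below (my own illustration, not from the original post) uses scikit-learn's linear SVC as a stand-in for svm_rank: on overlapping data, a small C yields a wider margin at the cost of more training errors, while a large C fights harder to classify every point.

```python
# Sketch: effect of the error penalty C on margin width and training errors.
# scikit-learn's SVC is an assumption here; the post itself uses svm_rank.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Two overlapping Gaussian blobs: no line separates them perfectly.
X = np.vstack([rng.randn(50, 2) - 1, rng.randn(50, 2) + 1])
y = np.array([0] * 50 + [1] * 50)

for C in (0.01, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    errors = int((clf.predict(X) != y).sum())
    margin = 2.0 / np.linalg.norm(clf.coef_)  # geometric margin = 2 / ||w||
    print(f"C={C}: training errors={errors}, margin={margin:.3f}")
```

Running this shows the smaller C producing a larger margin (smaller ||w||), which matches the thesis excerpt quoted below.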

Trade-off between Maximum Margin and Classification Errors

http://mi.eng.cam.ac.uk/~kkc21/thesis_main/node29.html

The trade-off between the maximum margin and the classification error (during training) is defined by the value C in Eqn. (…). The value C is called the error penalty. A high error penalty will force the SVM training to avoid classification errors (Section (…) gives a brief overview of the significance of the value of C).

A larger C results in a larger search space for the QP optimiser. This generally increases the duration of the QP search, as the results in Table (…) show. Other experiments with larger numbers of data points (1200) fail to converge when C is set higher than 1000. This is mainly due to numerical problems: the cost function of the QP does not decrease monotonically, and a larger search space contributes to these problems.

The number of SVs does not change significantly with different values of C. A smaller C does cause the average number of SVs to increase slightly. This could be due to more support vectors being needed to compensate for the bound on the other support vectors. The norm of w decreases with smaller C. This is as expected, because if errors are allowed, the training algorithm can find a separating plane with a much larger margin. Figures (…) show the decision boundaries for two very different error penalties on two classifiers (2-to-rest and 5-to-rest). It is clear that with a higher error penalty, the optimiser gives a boundary that classifies all the training points correctly. This can give very irregular boundaries.

One can easily conclude that the more regular boundaries (Figures (…)) will give better generalisation. This conclusion is also supported by the value of ||w||, which is lower for these two classifiers, i.e. they have a larger margin. One can also use the expected error bound to predict the best error penalty setting. First the expected error bound is computed using Eqns. (…). This is shown in Figure (…). It predicts that the best settings are C=10 and C=100. The accuracy obtained from the testing data (Figure (…)) agrees with this prediction.
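In practice, an alternative to the thesis's expected-error bound is to pick C by cross-validated accuracy over a small grid. The sketch below (my addition; scikit-learn's GridSearchCV is an assumption, not the thesis's method) shows the idea.

```python
# Sketch: choosing C by 5-fold cross-validation over a small grid,
# as a practical substitute for the expected-error-bound prediction.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.RandomState(1)
X = np.vstack([rng.randn(60, 2) - 1, rng.randn(60, 2) + 1])
y = np.array([0] * 60 + [1] * 60)

search = GridSearchCV(SVC(kernel="linear"),
                      param_grid={"C": [0.1, 1, 10, 100, 1000]},
                      cv=5).fit(X, y)
print("best C:", search.best_params_["C"],
      "cv accuracy:", round(search.best_score_, 3))
```

The grid values are illustrative; the range 10-100 singled out by the thesis is a reasonable place to center the search.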


So in practice, C is usually set to 10 or 100.

 

Empirical test:

When testing data with svm_rank:

With the empirical parameter c=1, the results were worse than with c=3, so c=1 was abandoned.

However, training with c=1 takes less time than training with c=3.

In general, the larger c is, the more iterations svm_rank_learn needs and the longer training takes.
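The iteration-count observation can be reproduced in miniature. The sketch below (my addition; LinearSVC stands in for svm_rank_learn, which is an assumption) prints the solver iteration count for a few values of C; larger C typically means the optimiser works harder.

```python
# Sketch: solver iterations tend to grow with C.
# LinearSVC is a stand-in for svm_rank_learn, whose -c flag plays the same role.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.RandomState(2)
X = np.vstack([rng.randn(100, 5) - 0.5, rng.randn(100, 5) + 0.5])
y = np.array([0] * 100 + [1] * 100)

for C in (1.0, 3.0, 100.0):
    clf = LinearSVC(C=C, max_iter=100000).fit(X, y)
    print(f"C={C}: solver iterations={clf.n_iter_}")
```

The exact counts depend on the data and solver, so treat this as a qualitative check rather than a benchmark.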
