- If a feature takes only a few distinct values and 99% of the samples share one of them, that feature can usually be dropped.
- Not every feature provides enough information; some can even hinder model training. We should therefore select features before training begins.
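The first rule above can be automated with scikit-learn's `VarianceThreshold`: for a binary feature where one value occurs with probability p, the variance is p * (1 - p), so a near-constant feature has near-zero variance. A minimal sketch (the toy matrix and the 80% cutoff are illustrative assumptions, not from the text):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# Toy binary data: the middle column is 0 in 9 of 10 rows,
# so it carries almost no information.
X = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 0, 0],
              [1, 0, 0],
              [0, 0, 1],
              [1, 0, 0],
              [0, 0, 1],
              [1, 0, 0],
              [0, 0, 0],
              [1, 1, 0]])

# Drop binary features dominated (>80% of rows) by a single value:
# such a feature has variance below 0.8 * (1 - 0.8) = 0.16.
selector = VarianceThreshold(threshold=0.8 * (1 - 0.8))
X_filtered = selector.fit_transform(X)
print(X_filtered.shape)  # the near-constant middle column is removed
```

The same idea extends to the 99% case in the bullet by tightening the threshold to `0.99 * (1 - 0.99)`.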
Next, we use SelectKBest together with the F-test to select features for a regression model.
```python
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.datasets import load_boston  # note: removed in scikit-learn 1.2

boston = load_boston()
print('Boston data shape: ', boston.data.shape)

# k defaults to 10: keep the 10 features with the highest F-scores
selector = SelectKBest(f_regression, k=10)
X_new = selector.fit_transform(boston.data, boston.target)
print('Filtered Boston data shape:', X_new.shape)
print('F-Scores:', selector.scores_)
```

```
Boston data shape:  (506, 13)
Filtered Boston data shape: (506, 10)
F-Scores: [ 88.15124178  75.2576423  153.95488314  15.97151242 112.59148028
 471.84673988  83.47745922  33.57957033  85.91427767 141.76135658
 175.10554288  63.05422911 601.61787111]
```
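The scores alone do not tell us which columns survived. `get_support()` returns a boolean mask over the original features that maps the scores back to names. Since `load_boston` has been removed from recent scikit-learn releases, this sketch uses `load_diabetes` as a stand-in regression dataset:

```python
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.datasets import load_diabetes  # stand-in regression dataset

diabetes = load_diabetes()
selector = SelectKBest(f_regression, k=3)
X_new = selector.fit_transform(diabetes.data, diabetes.target)

# get_support() gives a boolean mask over the original columns,
# so we can recover which features passed the filter.
mask = selector.get_support()
selected = [name for name, keep in zip(diabetes.feature_names, mask) if keep]
print('Selected features:', selected)
print('Their F-scores:', selector.scores_[mask])
```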
Then we use SelectPercentile with the chi-squared test to select features for a classification model.
```python
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.datasets import load_iris

iris = load_iris()
print('Iris data shape: ', iris.data.shape)

# keep the top 15% of features ranked by chi-squared score
selector = SelectPercentile(chi2, percentile=15)
X_new = selector.fit_transform(iris.data, iris.target)
print('Filtered Iris data shape:', X_new.shape)
print('Chi2 scores:', selector.scores_)
```

```
Iris data shape:  (150, 4)
Filtered Iris data shape: (150, 1)
Chi2 scores: [ 10.81782088   3.59449902 116.16984746  67.24482759]
```
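Alongside `scores_`, the fitted selector also exposes `pvalues_`: a small p-value means the feature's association with the class labels is unlikely to be chance. A short sketch on the same iris setup, again using `get_support()` to name the surviving feature:

```python
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.datasets import load_iris

iris = load_iris()
selector = SelectPercentile(chi2, percentile=15)
selector.fit(iris.data, iris.target)

# pvalues_ complements scores_: lower p-value, stronger evidence
# that the feature is associated with the target classes.
for name, score, p in zip(iris.feature_names,
                          selector.scores_, selector.pvalues_):
    print(f'{name}: chi2={score:.2f}, p={p:.3g}')

kept = [n for n, keep in zip(iris.feature_names, selector.get_support()) if keep]
print('Kept:', kept)
```

With percentile=15 only one of the four features survives, the one with the largest chi-squared score.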