Part6__Machine Learning in Action Study Notes__AdaBoost

Step By Step

1. Basic principles of the AdaBoost algorithm
2. Tests on the iris and MNIST datasets
3. Strengths and weaknesses of the algorithm


1. Basic Principles of the AdaBoost Algorithm
AdaBoost is an iterative algorithm. Its core idea is to train a series of different classifiers (weak classifiers) on the same training set, and then combine these weak classifiers into a stronger final classifier (the strong classifier).
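
To make the re-weighting idea concrete, here is a minimal NumPy sketch of the classic two-class AdaBoost loop with decision stumps as the weak classifiers; the helper names adaboost_fit and adaboost_predict are illustrative, not from any library.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=20):
    # Classic two-class AdaBoost; y must take values in {-1, +1}
    y = np.asarray(y)
    n = len(X)
    w = np.full(n, 1.0 / n)                       # start from uniform sample weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)          # weak learner sees the current weights
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)  # weighted error rate
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this weak classifier
        w *= np.exp(-alpha * y * pred)            # misclassified samples get heavier
        w /= w.sum()                              # renormalize to a distribution
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    # Strong classifier: sign of the alpha-weighted vote of the weak classifiers
    votes = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(votes)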

Differences between Bagging and Boosting

a. Sample selection:

  • Bagging: each round's training set is drawn from the original set with replacement, and the training sets drawn in different rounds are independent of one another.
  • Boosting: the training set stays the same in every round; only the weight of each sample changes, and the weights are adjusted according to the classification results of the previous round.

b. Sample weights:

  • Bagging: uniform sampling is used, so every sample carries equal weight.
  • Boosting: sample weights are continually adjusted according to the error rate; the more often a sample is misclassified, the larger its weight becomes.

c. Prediction functions:

  • Bagging: all prediction functions carry equal weight.
  • Boosting: every weak classifier has its own weight, and classifiers with smaller classification error receive larger weights.

d. Parallel computation:

  • Bagging: the individual prediction functions can be generated in parallel.
  • Boosting: the prediction functions can only be generated sequentially, because each model's parameters depend on the results of the previous round (see the sketch after this list).
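
This difference shows up directly in scikit-learn's APIs. A minimal sketch, using the older base_estimator parameter name that the code later in this post also uses:

from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Bagging: rounds are independent, so the estimators can be fitted in parallel (n_jobs)
bag = BaggingClassifier(base_estimator=DecisionTreeClassifier(),
                        n_estimators=100, n_jobs=-1)

# Boosting: round t needs the sample weights produced by round t-1,
# so the estimators are fitted strictly one after another (no n_jobs parameter)
ada = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=100)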

Relationships among ensemble learning algorithms (sketched in code below)

  • Bagging + decision trees = Random Forest
  • AdaBoost + decision trees = boosted trees
  • Gradient Boosting + decision trees = GBDT
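
A quick illustration of these three pairings in scikit-learn (a sketch; the hyperparameters are arbitrary):

from sklearn.ensemble import RandomForestClassifier      # Bagging + decision trees
from sklearn.ensemble import AdaBoostClassifier          # AdaBoost + decision trees
from sklearn.ensemble import GradientBoostingClassifier  # Gradient Boosting + decision trees
from sklearn.tree import DecisionTreeClassifier

rf = RandomForestClassifier(n_estimators=100)
ada = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=100)
gbdt = GradientBoostingClassifier(n_estimators=100)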
2. Tests on the iris and MNIST Datasets
  • 2.1 Test on the iris dataset
# Load libraries
from sklearn.ensemble import AdaBoostClassifier
from sklearn import datasets
from sklearn.model_selection import train_test_split

# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) # 70% training and 30% test

# Create an AdaBoost classifier object
abc = AdaBoostClassifier(n_estimators=200, learning_rate=1)
# Train the AdaBoost classifier
model = abc.fit(X_train, y_train)

# Model Accuracy, how often is the classifier correct?
print(model.score(X_test,y_test))

Result

0.9777777777777777
  • 2.2 Test on the MNIST dataset
# Load libraries
from sklearn.datasets import fetch_openml
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Fetch the MNIST dataset from OpenML (70,000 samples, 784 pixels each)
X, y = fetch_openml("mnist_784", version=1, return_X_y=True)

# Split dataset into training set and test set (shuffled by default)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) # 70% training and 30% test


# Create an AdaBoost classifier object
abc = AdaBoostClassifier(n_estimators=100, learning_rate=1)
# Train the AdaBoost classifier
model = abc.fit(X_train, y_train)

# Model Accuracy, how often is the classifier correct?
print(model.score(X_test,y_test))

Result

0.7374761904761905
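
Fitting AdaBoost on all 70,000 MNIST images is slow. A sketch of subsampling the data first to iterate faster (the sizes 5000 and 10000 are illustrative):

# Train on a 5,000-sample subset for quicker experiments
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=5000, test_size=10000, random_state=0)
abc = AdaBoostClassifier(n_estimators=100, learning_rate=1)
print(abc.fit(X_train, y_train).score(X_test, y_test))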
  • 2.3 A custom weak classifier
# Load libraries
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn import datasets
# Import Support Vector Classifier
from sklearn.svm import SVC
#Import scikit-learn metrics module for accuracy calculation
from sklearn import metrics

# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5) # 50% training and 50% test

# probability=True is required here: AdaBoost's default SAMME.R algorithm
# calls predict_proba on the base estimator
svc = SVC(probability=True, kernel='linear')

# Create an AdaBoost classifier object with the SVC as base estimator
abc = AdaBoostClassifier(n_estimators=50, base_estimator=svc, learning_rate=1)

# Train the AdaBoost classifier
model = abc.fit(X_train, y_train)

#Predict the response for test dataset
y_pred = model.predict(X_test)


# Model Accuracy, how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))

Result

Accuracy: 0.9466666666666667
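
Fitting an SVC with probability=True is expensive, because scikit-learn calibrates the probabilities with internal cross-validation. A sketch of an alternative, using the algorithm='SAMME' variant, which only calls the base estimator's predict():

# Discrete SAMME only needs predict(), so probability=True can be dropped
abc = AdaBoostClassifier(base_estimator=SVC(kernel='linear'),
                         algorithm='SAMME', n_estimators=50, learning_rate=1)
model = abc.fit(X_train, y_train)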
3. Strengths and Weaknesses of the Algorithm

Strengths

  • Makes good use of weak classifiers by cascading them;
  • Different classification algorithms can be used as the weak classifiers;
  • AdaBoost achieves high accuracy;
  • Compared with bagging and Random Forest, AdaBoost fully accounts for the weight of each individual classifier;

Weaknesses

  • The number of iterations, i.e. the number of weak classifiers, is hard to set in advance; cross-validation can be used to choose it (see the sketch below);
  • Imbalanced data can degrade classification accuracy;
  • Training is relatively time-consuming, because the best split point for the current classifier has to be re-selected in every round;
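
A minimal sketch of choosing n_estimators by cross-validation, reusing the iris split from section 2.1 (the grid values are arbitrary):

from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import AdaBoostClassifier

# Search over the number of weak classifiers with 5-fold cross-validation
grid = GridSearchCV(AdaBoostClassifier(learning_rate=1),
                    param_grid={'n_estimators': [50, 100, 200, 400]},
                    cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)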

Further reading

提升分类器性能利器-AdaBoost
