Before starting, import the numpy and pandas packages and load the data.
import pandas as pd
import numpy as np
# Load the data from train.csv
df = pd.read_csv("train.csv")
df.head(3)
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
Chapter 2: Data Cleaning and Feature Processing
The data we get is usually not clean: it contains missing values, outliers, and so on, and needs some processing before we can move on to analysis or modeling. So the first step after obtaining data is to clean it. In this chapter we will work through missing values, duplicate values, string handling, and data conversion, cleaning the data into a form that is ready for analysis or modeling.
2.1 Observing and Handling Missing Values
The data we get often has many missing values. For example, we can see that the Cabin column contains NaN. Do the other columns have missing values as well, and how should they be handled?
2.1.1 Task 1: Observe the missing values
(1) Check the number of missing values in each feature
(2) Look at the data in the Age, Cabin, and Embarked columns
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
df.isnull().sum()
PassengerId 0
Survived 0
Pclass 0
Name 0
Sex 0
Age 177
SibSp 0
Parch 0
Ticket 0
Fare 0
Cabin 687
Embarked 2
dtype: int64
df[['Age','Cabin','Embarked']].head(3)
| | Age | Cabin | Embarked
---|---|---|---|
0 | 22.0 | NaN | S |
1 | 38.0 | C85 | C |
2 | 26.0 | NaN | S |
2.1.2 Task 2: Handle the missing values
(1) What are the general approaches to handling missing values?
Answer: Common ways to impute missing values include mean imputation, within-class mean imputation, model-based prediction, high-dimensional mapping, multiple imputation, maximum likelihood estimation, compressed sensing, and matrix completion. See the link below for details.
9 methods for handling missing values in machine learning: https://zhuanlan.zhihu.com/p/270551105
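As a concrete illustration of the simplest of these, here is a minimal sketch of mean imputation on the Age column (an assumption for illustration only; median imputation works the same way, and nothing here modifies df):
# Hedged sketch: fill missing Age values with the column mean
age_mean_filled = df['Age'].fillna(df['Age'].mean())
age_mean_filled.isnull().sum()   # 0 -- no missing values remain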
(2) Try to handle the missing values in the Age column
df[df['Age'] == None] = 0    # comparing with == None matches nothing, so the DataFrame is unchanged
df.head(8)
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
5 | 6 | 0 | 3 | Moran, Mr. James | male | NaN | 0 | 0 | 330877 | 8.4583 | NaN | Q |
6 | 7 | 0 | 1 | McCarthy, Mr. Timothy J | male | 54.0 | 0 | 0 | 17463 | 51.8625 | E46 | S |
7 | 8 | 0 | 3 | Palsson, Master. Gosta Leonard | male | 2.0 | 3 | 1 | 349909 | 21.0750 | NaN | S |
df[df['Age'].isnull()] = 0   # note: this sets every column of the matched rows to 0, not just Age
df.head(8)
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
5 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0000 | 0 | 0 |
6 | 7 | 0 | 1 | McCarthy, Mr. Timothy J | male | 54.0 | 0 | 0 | 17463 | 51.8625 | E46 | S |
7 | 8 | 0 | 3 | Palsson, Master. Gosta Leonard | male | 2.0 | 3 | 1 | 349909 | 21.0750 | NaN | S |
df[df['Age'] == np.nan] = 0  # NaN compares unequal to everything, including itself, so this also matches nothing
df.head(8)
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
5 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0000 | 0 | 0 |
6 | 7 | 0 | 1 | McCarthy, Mr. Timothy J | male | 54.0 | 0 | 0 | 17463 | 51.8625 | E46 | S |
7 | 8 | 0 | 3 | Palsson, Master. Gosta Leonard | male | 2.0 | 3 | 1 | 349909 | 21.0750 | NaN | S |
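Note that both boolean-mask assignments above write 0 into every column of the matched rows (see row 5 in the table), not just Age. If the goal is to fill only the Age column, a minimal sketch on a freshly loaded DataFrame would be:
# Hedged alternative: overwrite only the missing Age values, leaving the other columns intact
df.loc[df['Age'].isnull(), 'Age'] = 0
# or equivalently: df['Age'] = df['Age'].fillna(0)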
【Thinking】For detecting missing values, which is better: np.nan, None, or .isnull()? Why? And if one of these approaches fails to find the missing values, what is the reason?
【Answer】Once the data is read in, missing entries in numeric columns are stored as float64 NaN rather than None, so comparing with == None matches nothing; and because NaN compares unequal to everything, including itself, == np.nan matches nothing either. .isnull() (or .isna()) is the reliable way to detect missing values, as the check below confirms.
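A quick check on a freshly loaded copy of the data (the rows of df above have already been overwritten) makes the difference concrete; a sketch, assuming train.csv is still in the working directory:
tmp = pd.read_csv("train.csv")
(tmp['Age'] == None).sum()     # 0   -- comparing with None matches nothing
(tmp['Age'] == np.nan).sum()   # 0   -- comparing with np.nan matches nothing either
tmp['Age'].isnull().sum()      # 177 -- isnull() finds all the missing values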
(3) Try different methods to handle the missing values of the whole table at once
df.dropna().head(8)
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
5 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0000 | 0 | 0 |
6 | 7 | 0 | 1 | McCarthy, Mr. Timothy J | male | 54.0 | 0 | 0 | 17463 | 51.8625 | E46 | S |
10 | 11 | 1 | 3 | Sandstrom, Miss. Marguerite Rut | female | 4.0 | 1 | 1 | PP 9549 | 16.7000 | G6 | S |
11 | 12 | 1 | 1 | Bonnell, Miss. Elizabeth | female | 58.0 | 0 | 0 | 113783 | 26.5500 | C103 | S |
17 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0000 | 0 | 0 |
19 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0000 | 0 | 0 |
df.fillna(0).head(8)
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | 0 | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | 0 | S |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | 0 | S |
5 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0000 | 0 | 0 |
6 | 7 | 0 | 1 | McCarthy, Mr. Timothy J | male | 54.0 | 0 | 0 | 17463 | 51.8625 | E46 | S |
7 | 8 | 0 | 3 | Palsson, Master. Gosta Leonard | male | 2.0 | 3 | 1 | 349909 | 21.0750 | 0 | S |
【Thinking】What parameters do dropna and fillna take, and how are they used?
(1) df.dropna() removes missing data from a DataFrame, i.e. it drops the NaN entries (see the usage examples after the table below).
DataFrame.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)
Parameter | Description |
---|---|
axis | 0 for rows, 1 for columns; default 0; the axis along which to drop |
how | {'any', 'all'}, default 'any'; 'any' drops a row that contains any NaN, 'all' drops a row only if it is entirely NaN |
thresh | int; keep only rows (or columns) with at least this many non-NaN values |
subset | list; only consider missing values in these columns |
inplace | bool; whether to modify the DataFrame in place |
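A few hedged usage examples of these parameters (none of them modify df, since inplace is left at its default):
df.dropna(subset=['Age', 'Embarked']).shape   # drop a row only if Age or Embarked is missing
df.dropna(thresh=10).shape                    # keep rows with at least 10 non-NaN values
df.dropna(axis=1, how='all').shape            # drop a column only if it is entirely NaN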
(2) df.fillna() fills in missing data (see the examples after the table below).
DataFrame.fillna(value=None, method=None, axis=None, inplace=False, limit=None, downcast=None)
Parameter | Description |
---|---|
value | the value used to fill the holes (e.g. 0) |
method | {'backfill', 'bfill', 'pad', 'ffill', None}, default None |
axis | 0 for rows, 1 for columns; the axis along which to fill |
inplace | bool; whether to modify the DataFrame in place |
limit | int, default None; if a method is specified, the maximum number of consecutive NaN values to fill forward/backward |
downcast | dict, default None |
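Likewise for fillna, a couple of hedged examples (method='ffill' follows the signature documented above; newer pandas versions prefer df.ffill()):
df.fillna({'Age': df['Age'].median(), 'Embarked': 'S'}).isnull().sum()   # different fill value per column
df.fillna(method='ffill').head()   # propagate the previous valid value forward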
2.2 Observing and Handling Duplicate Values
For one reason or another, the data may contain duplicate values. Are there any here, and if so, how should they be handled?
2.2.1 Task 1: Check the data for duplicate values
df[df.duplicated()]
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
17 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
19 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
26 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
28 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
29 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
31 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
32 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
36 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
42 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
45 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
46 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
47 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
48 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
55 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
64 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
65 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
76 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
77 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
82 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
87 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
95 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
101 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
107 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
109 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
121 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
126 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
128 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
140 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
154 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
158 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
718 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
727 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
732 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
738 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
739 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
740 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
760 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
766 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
768 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
773 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
776 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
778 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
783 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
790 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
792 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
793 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
815 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
825 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
826 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
828 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
832 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
837 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
839 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
846 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
849 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
859 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
863 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
868 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
878 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
888 | 0 | 0 | 0 | 0 | 0 | 0.0 | 0 | 0 | 0 | 0.0 | 0 | 0 |
176 rows × 12 columns
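Before dropping anything, it can help to count the duplicates and, if needed, judge duplication on a subset of columns; a minimal sketch:
df.duplicated().sum()                  # 176 fully duplicated rows, matching the table above
df.duplicated(subset=['Name']).sum()   # duplicates judged on the Name column only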
2.2.2 Task 2: Handle the duplicate values
df = df.drop_duplicates()
df.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
2.2.3 Task 3: Save the cleaned data in CSV format
df.to_csv('test_clear.csv')
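One small caveat: to_csv also writes the DataFrame index as an unnamed first column by default; passing index=False avoids this (a hedged suggestion, not what the notebook above does):
df.to_csv('test_clear.csv', index=False)   # save without the index column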
2.3 Observing and Processing Features
Taking a look at the features, we can roughly divide them into two categories:
Numerical features: Survived, Pclass, Age, SibSp, Parch, Fare. Of these, Survived and Pclass are discrete numerical features, while Age, SibSp, Parch, and Fare are continuous numerical features.
Text features: Name, Sex, Cabin, Embarked, Ticket. Of these, Sex, Cabin, Embarked, and Ticket are categorical text features. Numerical features can usually be fed into a model directly, but continuous variables are sometimes discretized to improve model stability and robustness. Text features usually need to be converted into numerical features before they can be used for modeling and analysis.
2.3.1 Task 1: Bin (discretize) the Age column
(1) What is binning?
Answer: Binning means assigning values to discrete intervals, much like sorting apples of different sizes into a few prepared boxes, or grouping people of different ages into a few age brackets (see the small example below).
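A tiny illustration with made-up numbers (an assumption for illustration, not the Titanic data): pd.cut assigns each value to one of the given intervals.
pd.cut([3, 12, 25, 47, 63], bins=[0, 18, 40, 80])
# -> [(0, 18], (0, 18], (18, 40], (40, 80], (40, 80]]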
(2) Bin the continuous variable Age into 5 equal-width age groups, represented by the category labels 1, 2, 3, 4, 5
df['AgeBand'] = pd.cut(df['Age'], 5,labels = [1,2,3,4,5])
df.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 3 |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 2 |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 3 |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 3 |
(3) Divide the continuous variable Age into the five age groups [0,5), [5,15), [15,30), [30,50), [50,80), represented by the category labels 1, 2, 3, 4, 5
df['AgeBand'] = pd.cut(df['Age'],[0,5,15,30,50,80], labels = [1,2,3,4,5])
df.head(3)
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 3 |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 4 |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 |
(4) Divide the continuous variable Age into five groups at the 10%, 30%, 50%, 70%, and 90% quantiles, represented by the category labels 1, 2, 3, 4, 5
df['AgeBand'] = pd.qcut(df['Age'],[0,0.1,0.3,0.5,0.7,0.9],labels = [1,2,3,4,5])
df.head(3)
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 |
【Reference】https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html
【Reference】https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.qcut.html
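To inspect the cut points that cut and qcut actually choose, retbins=True returns the bin edges alongside the codes; a hedged check, not part of the original task:
_, width_edges = pd.cut(df['Age'], 5, retbins=True)   # 5 equal-width bins over the Age range
_, quantile_edges = pd.qcut(df['Age'], [0, 0.1, 0.3, 0.5, 0.7, 0.9], retbins=True)
width_edges, quantile_edges   # equal-width edges vs. edges at the 10/30/50/70/90% quantiles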
2.3.2 Task 2: Convert the text variables
(1) View the text variables and their categories
# Method 1: value_counts
df['Sex'].value_counts()
male 453
female 261
0 1
Name: Sex, dtype: int64
# Method 2: unique
df['Sex'].unique()
array(['male', 'female', 0], dtype=object)
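The same checks apply to the other text columns; for example (a hedged sketch):
df['Embarked'].value_counts(dropna=False)   # category counts for Embarked, including NaN if present
df['Cabin'].nunique()                       # number of distinct Cabin values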
(2) Represent the text variables Sex, Cabin, and Embarked with the numeric values 1, 2, 3, 4, 5
# Method 1: replace
df['Sex_num'] = df['Sex'].replace(['male','female'],[1,2])
df.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand | Sex_num
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1 |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2 |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2 |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2 |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1 |
# Method 2: map
df['Sex_num'] = df['Sex'].map({'male': 1, 'female': 2})
df.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand | Sex_num
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1.0 |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2.0 |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2.0 |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2.0 |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1.0 |
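The task also mentions Cabin and Embarked; Embarked can be converted the same way with map (a sketch, using a hypothetical Embarked_num column name):
# Hypothetical column name Embarked_num; map S/C/Q to 1/2/3
df['Embarked_num'] = df['Embarked'].map({'S': 1, 'C': 2, 'Q': 3})
df['Embarked_num'].value_counts(dropna=False)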
# Method 3: use LabelEncoder from sklearn.preprocessing
from sklearn.preprocessing import LabelEncoder
for feat in ['Cabin', 'Ticket']:
    lbl = LabelEncoder()
    # manual mapping: each unique value -> an integer code, in order of first appearance
    label_dict = dict(zip(df[feat].unique(), range(df[feat].nunique())))
    df[feat + "_labelEncode"] = df[feat].map(label_dict)
    # the same idea with LabelEncoder (overwrites the column just created; codes may differ since classes are sorted)
    df[feat + "_labelEncode"] = lbl.fit_transform(df[feat].astype(str))
df.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | AgeBand | Sex_num | Cabin_labelEncode | Ticket_labelEncode
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S | 2 | 1.0 | 135 | 409 |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 5 | 2.0 | 74 | 472 |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S | 3 | 2.0 | 135 | 533 |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S | 4 | 2.0 | 50 | 41 |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S | 4 | 1.0 | 135 | 374 |
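pd.factorize is a pandas-native alternative to LabelEncoder that assigns an integer code to each distinct value; a sketch (the _fact column names are hypothetical):
for feat in ['Cabin', 'Ticket']:
    # codes run 0..n-1 in order of first appearance; NaN is first cast to the string 'nan'
    df[feat + '_fact'] = pd.factorize(df[feat].astype(str))[0]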
(3) Represent the text variables Sex, Cabin, and Embarked with one-hot encoding
# Convert the categorical text columns to one-hot encoding
# Method 1: one-hot encoding with pd.get_dummies
for feat in ["Age", "Embarked"]:
    # x = pd.get_dummies(df["Age"] // 6)
    # x = pd.get_dummies(pd.cut(df['Age'],5))
    x = pd.get_dummies(df[feat], prefix=feat)
    df = pd.concat([df, x], axis=1)
    # df[feat] = pd.get_dummies(df[feat], prefix=feat)
df.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | ... | Age_66.0 | Age_70.0 | Age_70.5 | Age_71.0 | Age_74.0 | Age_80.0 | Embarked_0 | Embarked_C | Embarked_Q | Embarked_S
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
5 rows × 109 columns
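The loop above encodes Age and Embarked; the Sex and Cabin columns named in the task can be handled the same way with get_dummies (a sketch; note that Cabin has many distinct values, so this adds many columns):
for feat in ['Sex', 'Cabin']:
    x = pd.get_dummies(df[feat], prefix=feat)   # one indicator column per category
    df = pd.concat([df, x], axis=1)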
2.3.3 Task 3: Extract the Title feature from the plain-text Name feature (Titles being Mr, Miss, Mrs, and so on)
df['Title'] = df.Name.str.extract(r'([A-Za-z]+)\.', expand=False)
df.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | ... | Age_70.0 | Age_70.5 | Age_71.0 | Age_74.0 | Age_80.0 | Embarked_0 | Embarked_C | Embarked_Q | Embarked_S | Title
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Mr |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | Mrs |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Miss |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Mrs |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | ... | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | Mr |
5 rows × 110 columns
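A quick check of what the regular expression extracted (a hedged look at the result, not part of the original task):
df['Title'].value_counts()   # frequency of each extracted title, e.g. Mr, Miss, Mrs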
# Save the final cleaned data
df.to_csv('test_fin.csv')