How to remove consecutive bad data points in Pandas

I have a Pandas DataFrame that looks like this:

import pandas as pd
import numpy as np
df = pd.DataFrame({"Dummy_Var": [1]*12, 
                   "B": [6, 143.3, 143.3, 143.3, 3, 4, 93.9, 93.9, 93.9, 2, 2, 7],
                   "C": [4.1, 23.2, 23.2, 23.2, 4.3, 2.5, 7.8, 7.8, 2, 7, 7, 7]})


    B       C       Dummy_Var
0   6.0     4.1     1
1   143.3   23.2    1
2   143.3   23.2    1
3   143.3   23.2    1
4   3.0     4.3     1
5   4.0     2.5     1
6   93.9    7.8     1
7   93.9    7.8     1
8   93.9    2.0     1
9   2.0     7.0     1
10  2.0     7.0     1
11  7.0     7.0     1

Whenever the same number appears three or more times in a row, those values should be replaced with NaN. So the result should be:

    B       C       Dummy_Var
0   6.0     4.1     1
1   NaN     NaN     1
2   NaN     NaN     1
3   NaN     NaN     1
4   3.0     4.3     1
5   4.0     2.5     1
6   NaN     7.8     1
7   NaN     7.8     1
8   NaN     2.0     1
9   2.0     NaN     1
10  2.0     NaN     1
11  7.0     NaN     1

I wrote a function to do this:

def non_sense_remover(df, examined_columns, allowed_repeating):
    def count_each_group(grp, column):
        # Broadcast the group's size into a "Count" column on every row
        grp['Count'] = grp[column].count()
        return grp
    for col in examined_columns:
        # Label consecutive runs of equal values with a run id, then flag
        # rows whose run is longer than allowed_repeating
        sel = df.groupby((df[col] != df[col].shift(1)).cumsum()).apply(count_each_group, column=col)["Count"] > allowed_repeating
        df.loc[sel, col] = np.nan

    return df

df = non_sense_remover(df, ["B", "C"], 2)
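For context, the run-labelling trick the function relies on can be seen in isolation. A minimal sketch: `(col != col.shift(1)).cumsum()` assigns one id per run of equal values, and `transform('count')` computes the same per-run count as `count_each_group` does, but without calling a Python function for every group (which is the likely bottleneck of `groupby.apply`):

```python
import pandas as pd

# One column's worth of data: a run of three 143.3 values in the middle
col = pd.Series([6, 143.3, 143.3, 143.3, 3])

# Each time the value changes, the cumulative sum ticks up by one,
# so equal consecutive values share a run id
run_id = (col != col.shift(1)).cumsum()           # 1, 2, 2, 2, 3

# Length of the run each row belongs to, computed without a
# per-group Python callback
run_len = col.groupby(run_id).transform('count')  # 1, 3, 3, 3, 1

# Rows in runs longer than 2 become NaN
masked = col.mask(run_len > 2)
```

Here `run_id`, `run_len`, and `masked` are illustrative names, not part of the question's code.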

However, my real DataFrame has 2M rows and 18 columns! Running this function on 2M rows is very slow. Is there a more efficient way to do this? Am I missing something? Thanks in advance.

Solution:

We use groupby with mask:

m = df[['B','C']]
df[['B','C']] = m.mask(
    m.apply(lambda x: x.groupby(x.diff().ne(0).cumsum()).transform('count')) > 2
)
df
Out[1245]: 
      B    C  Dummy_Var
0   6.0  4.1          1
1   NaN  NaN          1
2   NaN  NaN          1
3   NaN  NaN          1
4   3.0  4.3          1
5   4.0  2.5          1
6   NaN  7.8          1
7   NaN  7.8          1
8   NaN  2.0          1
9   2.0  NaN          1
10  2.0  NaN          1
11  7.0  NaN          1