I know that I can apply a numpy method by doing the following:
dataList is a list of DataFrames (same columns/rows).
testDF = (concat(dataList, axis=1, keys=range(len(dataList)))
          .swaplevel(0, 1, axis=1)
          .sortlevel(axis=1)
          .groupby(level=0, axis=1))
testDF.aggregate(numpy.mean)
testDF.aggregate(numpy.var)
And so on. But what if I want to calculate the standard error of the mean (SEM)?
I tried:
testDF.aggregate(scipy.stats.sem)
But it gives a confusing error. Does anyone know how to do this? What is different about the scipy.stats methods?
Here is some code that reproduces the error for me:
from scipy import stats as st
import pandas
import numpy as np
df_list = []
for ii in range(30):
    df_list.append(pandas.DataFrame(np.random.rand(600, 10),
                                    columns=['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']))
testDF = (pandas.concat(df_list, axis=1, keys=range(len(df_list)))
          .swaplevel(0, 1, axis=1)
          .sortlevel(axis=1)
          .groupby(level=0, axis=1))
testDF.aggregate(st.sem)
Here is the error message:
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-1-184cee8fb2ce> in <module>()
12 .groupby(level=0, axis=1))
13
---> 14 testDF.aggregate(st.sem)
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/pandas/core/groupby.py in aggregate(self, arg, *args, **kwargs)
1177 return self._python_agg_general(arg, *args, **kwargs)
1178 else:
-> 1179 result = self._aggregate_generic(arg, *args, **kwargs)
1180
1181 if not self.as_index:
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/pandas/core/groupby.py in _aggregate_generic(self, func, *args, **kwargs)
1248 else:
1249 result = DataFrame(result, index=obj.index,
-> 1250 columns=result_index)
1251 else:
1252 result = DataFrame(result)
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
300 mgr = self._init_mgr(data, index, columns, dtype=dtype, copy=copy)
301 elif isinstance(data, dict):
--> 302 mgr = self._init_dict(data, index, columns, dtype=dtype)
303 elif isinstance(data, ma.MaskedArray):
304 mask = ma.getmaskarray(data)
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/pandas/core/frame.py in _init_dict(self, data, index, columns, dtype)
389
390 # consolidate for now
--> 391 mgr = BlockManager(blocks, axes)
392 return mgr.consolidate()
393
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/pandas/core/internals.py in __init__(self, blocks, axes, do_integrity_check)
329
330 if do_integrity_check:
--> 331 self._verify_integrity()
332
333 def __nonzero__(self):
/Library/Frameworks/EPD64.framework/Versions/7.3/lib/python2.7/site-packages/pandas/core/internals.py in _verify_integrity(self)
404 mgr_shape = self.shape
405 for block in self.blocks:
--> 406 assert(block.values.shape[1:] == mgr_shape[1:])
407 tot_items = sum(len(x.items) for x in self.blocks)
408 assert(len(self.items) == tot_items)
AssertionError:
Solution:
Updated Answer:
It looks like I can replicate this with the versions of the various libraries I have at work. Later I'll check my home versions to see whether the docs for these functions differ at all.
In the meantime, the following worked for me using your exact edited version:
In [35]: testDF.aggregate(lambda x: st.sem(x, axis=None))
Out[35]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 600 entries, 0 to 599
Data columns:
A 600 non-null values
B 600 non-null values
C 600 non-null values
D 600 non-null values
E 600 non-null values
F 600 non-null values
G 600 non-null values
H 600 non-null values
I 600 non-null values
J 600 non-null values
dtypes: float64(10)
That makes me suspect this has something to do with sem()'s axis convention. Its default is axis=0, and the Pandas objects it ultimately gets mapped onto may have a weird 0th axis or something. Using the option axis=None ravels (flattens) whatever the function is applied to, which is what makes it work.
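To make that axis behavior concrete, here is a minimal sketch (toy data, not part of the original setup) showing how the axis argument changes what scipy.stats.sem returns:

import numpy as np
from scipy import stats

a = np.arange(12.0).reshape(4, 3)   # 4 rows, 3 columns of toy data

# Default axis=0: one SEM per column, so the result has length 3
print(stats.sem(a))

# axis=1: one SEM per row, so the result has length 4
print(stats.sem(a, axis=1))

# axis=None: the array is raveled first and a single scalar comes back
print(stats.sem(a, axis=None))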
Just as a sanity check, I also did the following, and it works too:
In [37]: testDF.aggregate(lambda x: st.sem(x, axis=1))
Out[37]:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 600 entries, 0 to 599
Data columns:
A 600 non-null values
B 600 non-null values
C 600 non-null values
D 600 non-null values
E 600 non-null values
F 600 non-null values
G 600 non-null values
H 600 non-null values
I 600 non-null values
J 600 non-null values
dtypes: float64(10)
But you should check to make sure these are actually the SEM values you want, perhaps on some smaller example data.
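For such a check, comparing against the textbook formula std/sqrt(n) is probably the simplest route; a sketch with toy data (not taken from the post above):

import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# SEM = sample standard deviation (ddof=1) divided by sqrt(n)
manual_sem = np.std(x, ddof=1) / np.sqrt(len(x))

print(manual_sem)     # ~0.7071
print(stats.sem(x))   # scipy's default ddof is also 1, so this should match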
Older Answer:
Could this be related to a module issue with scipy.stats? When I use that module I have to import it as from scipy import stats as st or something similar. A plain import scipy.stats does not work for me, and calling import scipy followed by scipy.stats.sem gives an error saying that no module named 'stats' exists.
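For reference, the import style that worked for me looks like the following; whether a plain import scipy.stats also works may depend on your scipy version and installation:

# Import the stats subpackage explicitly and give it a short name
from scipy import stats as st

# st.sem is then a plain function that can be passed to aggregate()
print(st.sem([1.0, 2.0, 3.0]))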
Pandas simply does not seem to find the function. I think the error message ought to be improved, because the cause is not obvious.
>>> from scipy import stats as st
>>> import pandas
>>> import numpy as np
>>> df_list = []
>>> for ii in range(10):
...     df_list.append(pandas.DataFrame(np.random.rand(10, 3),
...                                     columns=['A', 'B', 'C']))
...
>>> df_list
# Suppressed the output cause it was big.
>>> testDF = (pandas.concat(df_list, axis=1, keys=range(len(df_list)))
...               .swaplevel(0, 1, axis=1)
...               .sortlevel(axis=1)
...               .groupby(level=0, axis=1))
>>> testDF
<pandas.core.groupby.DataFrameGroupBy object at 0x38524d0>
>>> testDF.aggregate(np.mean)
key_0 A B C
0 0.660324 0.408377 0.374681
1 0.459768 0.345093 0.432542
2 0.498985 0.443794 0.524327
3 0.605572 0.563768 0.558702
4 0.561849 0.488395 0.592399
5 0.466505 0.433560 0.408804
6 0.561591 0.630218 0.543970
7 0.423443 0.413819 0.486188
8 0.514279 0.479214 0.534309
9 0.479820 0.506666 0.449543
>>> testDF.aggregate(np.var)
key_0 A B C
0 0.093908 0.095746 0.055405
1 0.075834 0.077010 0.053406
2 0.094680 0.092272 0.095552
3 0.105740 0.126101 0.099316
4 0.087073 0.087461 0.111522
5 0.105696 0.110915 0.096959
6 0.082860 0.026521 0.075242
7 0.100512 0.051899 0.060778
8 0.105198 0.100027 0.097651
9 0.082184 0.060460 0.121344
>>> testDF.aggregate(st.sem)
A B C
0 0.089278 0.087590 0.095891
1 0.088552 0.081365 0.098071
2 0.087968 0.116361 0.076837
3 0.110369 0.087563 0.096460
4 0.101328 0.111676 0.046567
5 0.085044 0.099631 0.091284
6 0.113337 0.076880 0.097620
7 0.087243 0.087664 0.118925
8 0.080569 0.068447 0.106481
9 0.110658 0.071082 0.084928
Seems to work for me.
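As a side note (not tested against the versions used above): newer pandas releases have a built-in sem() on DataFrame and groupby objects, so something along these lines may work without scipy at all, assuming a pandas version that provides GroupBy.sem:

# Sketch only: relies on GroupBy.sem() being available in your pandas version.
# Note that the axis=1 groupby and sortlevel() used above are deprecated or
# removed in recent pandas, so the grouping step itself may need rewriting.
sem_df = testDF.sem()
print(sem_df)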