I'm putting together some basic Python code that takes a dictionary mapping labels to lists of matrices (the matrices represent categorized images). I simply want to subtract the mean image from everything and then center the data on a 0-1 scale. For some reason this code runs slowly: iterating over only 500 48x48 images takes about 10 seconds, which really doesn't scale to the number of images I'm working with. Looking at the cProfile results, most of the time is being spent in the _center function.
I feel like I may not be getting the most out of numpy here, and I was wondering whether someone with more experience has some tricks to speed this up, or can point out something silly I'm doing. The code is below:
import math
import operator
from functools import reduce

import numpy as np

def __init__(self, master_dict, normalization=lambda x: math.exp(x)):
    """
    master_dict should be a dictionary mapping classes to lists of matrices
    example = {
        "cats": [[[]...], [[]...]...],
        "dogs": [[[]...], [[]...]...]
    }
    have to be python lists, not numpy arrays
    normalization represents the 0-1 normalization scheme used. Defaults to simple linear
    """
    normalization = np.vectorize(normalization)
    full_tensor = np.array(reduce(operator.add, master_dict.values()))
    centering = np.sum(np.array(reduce(operator.add, master_dict.values())), axis=0) / len(full_tensor)
    self.data = {key: self._center(np.array(value), centering, normalization)
                 for key, value in master_dict.items()}
    self.normalization = normalization

def _center(self, list_of_arrays, centering_factor, normalization_scheme):
    """
    Centering scheme for arrays
    """
    arrays = list_of_arrays - centering_factor
    normalize = lambda a: (a - np.min(a)) / (np.max(a) - np.min(a))
    return normalization_scheme([normalize(array) for array in arrays])
Also, before you ask: I don't have much control over the input format, but I could probably change it if that really turns out to be the limiting factor.
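In case it helps, this is roughly how I'm profiling it (a minimal sketch; MyPreprocessor is just a placeholder name for the class holding __init__ and _center above):

import cProfile
import pstats

# 'example' is a dict of the shape described above; MyPreprocessor is a
# placeholder for the class containing __init__ and _center
cProfile.run("MyPreprocessor(example)", "init.prof")
pstats.Stats("init.prof").sort_stats("cumulative").print_stats(10)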
Solution:
Starting from @sethMMorton's changes, I was able to get an almost 2x speed improvement, mostly by vectorizing the normalize function (inside _center) so that you can call _center on the whole list_of_arrays instead of looping over it in a list comprehension. This also gets rid of an extra conversion from a numpy array to a list and back.
def normalize(a):
    a -= a.min(1, keepdims=True).min(2, keepdims=True)
    a /= a.max(1, keepdims=True).max(2, keepdims=True)
    return a
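To see why the chained keepdims reductions work: a.min(1, keepdims=True).min(2, keepdims=True) collapses an n x H x W stack to per-image minima of shape (n, 1, 1), which then broadcast back over the whole stack. A quick sanity check (the shapes here are just illustrative):

import numpy as np

stack = np.random.rand(5, 48, 48)             # 5 images, 48x48 each
per_image_min = stack.min(1, keepdims=True).min(2, keepdims=True)
print(per_image_min.shape)                    # (5, 1, 1): one minimum per image

scaled = normalize(stack.copy())              # copy, since normalize works in place
print(scaled.min(), scaled.max())             # 0.0 and 1.0; each image spans [0, 1]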
Note that I don't define normalize inside the _center call, but keep it separate, as shown in this answer. Then, inside _center, just call normalize on the whole list_of_arrays:
def _center(self, list_of_arrays, centering_factor, normalization_scheme):
    """
    Centering scheme for arrays
    """
    list_of_arrays -= centering_factor
    return normalization_scheme(normalize(list_of_arrays))
In fact, you could call normalize and _center on the whole full_tensor right at the start, without any looping, but the tricky part is splitting it back up into the lists of arrays afterwards. I'll get to that next :P
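For what it's worth, here is a rough sketch of that idea (my own illustration, not one of the timed versions below): center and normalize full_tensor in one vectorized pass, then use np.split with the per-class lengths to break the result back into one block per key:

import numpy as np

def center_all_at_once(master_dict, normalization=np.exp):
    blocks = [np.asarray(v) for v in master_dict.values()]
    lengths = [len(b) for b in blocks]                   # images per class
    full_tensor = np.concatenate(blocks)
    full_tensor -= full_tensor.mean(0)                   # subtract the mean image
    full_tensor = normalization(normalize(full_tensor))  # one pass over everything
    # split back into per-class blocks, in dict order
    splits = np.split(full_tensor, np.cumsum(lengths)[:-1])
    return dict(zip(master_dict.keys(), splits))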
As mentioned in my comment, you can replace:
full_tensor = np.array(reduce(operator.add, master_dict.values()))
with:
full_tensor = np.concatenate(list(master_dict.values()))
It's probably not any faster, but it's clearer and more standard.
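Both forms build the same array: reduce(operator.add, ...) concatenates the Python lists before numpy ever sees them, while np.concatenate stacks them directly. A quick check on a toy dict (my own illustration):

import operator
from functools import reduce

import numpy as np

toy = {"a": [[[0, 1], [2, 3]]], "b": [[[4, 5], [6, 7]]]}  # two classes, one 2x2 image each
old = np.array(reduce(operator.add, toy.values()))
new = np.concatenate(list(toy.values()))
print(np.array_equal(old, new))  # True: both have shape (2, 2, 2)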
Finally, here are some timings:
>>> timeit slater_init(example)
1 loops, best of 3: 1.42 s per loop
>>> timeit seth_init(example)
1 loops, best of 3: 489 ms per loop
>>> timeit my_init(example)
1 loops, best of 3: 281 ms per loop
Below is my full timing code. Note that I replaced the self.data = ... assignments with return ... so that I could save and compare the outputs to make sure all of our code really does return the same data :) Of course, you should test your version against mine too!
import operator
import math
from functools import reduce

import numpy as np

# example dict has N keys (integers); each value is a list of n random HxW 'arrays', in list form:
test_shape = 10, 2, 4, 4       # small example for testing
timing_shape = 100, 5, 48, 48  # bigger example for timing
N, n, H, W = timing_shape
example = dict(enumerate(np.random.rand(N, n, H, W).tolist()))

def my_init(master_dict, normalization=np.exp):
    full_tensor = np.concatenate(list(master_dict.values()))
    centering = np.mean(full_tensor, 0)
    return {key: my_center(np.array(value), centering, normalization)
            for key, value in master_dict.items()}  # items() returns a view in Python 3
    # self.normalization = normalization

def my_normalize(a):
    a -= a.min(1, keepdims=True).min(2, keepdims=True)
    a /= a.max(1, keepdims=True).max(2, keepdims=True)
    return a

def my_center(arrays, centering_factor, normalization_scheme):
    """
    Centering scheme for arrays
    """
    arrays -= centering_factor
    return normalization_scheme(my_normalize(arrays))

#### sethMMorton's original improvement ####
def seth_init(master_dict, normalization=np.exp):
    """
    master_dict should be a dictionary mapping classes to lists of matrices
    example = {
        "cats": [[[]...], [[]...]...],
        "dogs": [[[]...], [[]...]...]
    }
    have to be python lists, not numpy arrays
    normalization represents the 0-1 normalization scheme used. Defaults to simple linear
    """
    full_tensor = np.array(reduce(operator.add, master_dict.values()))
    centering = np.sum(full_tensor, axis=0) / len(full_tensor)
    return {key: seth_center(np.array(value), centering, normalization)
            for key, value in master_dict.items()}
    # self.normalization = normalization

def seth_center(list_of_arrays, centering_factor, normalization_scheme):
    """
    Centering scheme for arrays
    """
    def seth_normalize(a):
        a_min = np.min(a)
        return (a - a_min) / (np.max(a) - a_min)
    arrays = list_of_arrays - centering_factor
    return normalization_scheme([seth_normalize(array) for array in arrays])

#### Original code, by slater ####
def slater_init(master_dict, normalization=lambda x: math.exp(x)):
    """
    master_dict should be a dictionary mapping classes to lists of matrices
    example = {
        "cats": [[[]...], [[]...]...],
        "dogs": [[[]...], [[]...]...]
    }
    have to be python lists, not numpy arrays
    normalization represents the 0-1 normalization scheme used. Defaults to simple linear
    """
    normalization = np.vectorize(normalization)
    full_tensor = np.array(reduce(operator.add, master_dict.values()))
    centering = np.sum(np.array(reduce(operator.add, master_dict.values())), axis=0) / len(full_tensor)
    return {key: slater_center(np.array(value), centering, normalization)
            for key, value in master_dict.items()}
    # self.normalization = normalization

def slater_center(list_of_arrays, centering_factor, normalization_scheme):
    """
    Centering scheme for arrays
    """
    arrays = list_of_arrays - centering_factor
    slater_normalize = lambda a: (a - np.min(a)) / (np.max(a) - np.min(a))
    return normalization_scheme([slater_normalize(array) for array in arrays])
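And here is the comparison itself, a small harness along the lines described above (my own addition): run all three versions on the same input and check with np.allclose that they return the same data.

def same_results(master_dict):
    mine = my_init(master_dict)
    seth = seth_init(master_dict)
    slater = slater_init(master_dict)
    return all(np.allclose(mine[key], seth[key]) and np.allclose(mine[key], slater[key])
               for key in master_dict)

tN, tn, tH, tW = test_shape
small = dict(enumerate(np.random.rand(tN, tn, tH, tW).tolist()))
print(same_results(small))  # should print True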