When loading a dataset we need to normalize the images that are read in. In PyTorch this is done with torchvision's transforms. Rotation, cropping, and other operations are not covered here; we only look at normalization, which uses the following two functions:
- transforms.ToTensor()
- transforms.Normalize()
Image preprocessing generally involves two steps: first scale the pixels to the range [0, 1], then use Normalize to shift and scale them.
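For context, these two transforms are usually composed and handed to the dataset when it is loaded, so they run on every image as it is read. A minimal sketch, using CIFAR-10 purely as an illustrative dataset:
from torchvision import datasets, transforms

# Step 1: ToTensor scales pixels to [0, 1]; step 2: Normalize shifts and scales each channel.
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# The transform is applied to every image the dataset returns.
train_set = datasets.CIFAR10(root='./data', train=True, download=True,
                             transform=transform)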
ToTensor
This step is straightforward: ToTensor scales the image to [0, 1] by dividing each pixel value (max 255) by 255, and it also converts the layout from HWC to CHW. In the example below, 1 divided by 255 gives 0.0039.
import numpy as np
from torchvision import transforms

# A 2x2 image with 3 channels (HWC layout), pixel values 1-4.
data = np.array([[[1, 1, 1],
                  [2, 2, 2]],
                 [[3, 3, 3],
                  [4, 4, 4]]], dtype='uint8')
print(data)
print(data.shape)

# ToTensor divides by 255 and rearranges HWC -> CHW.
transform = transforms.Compose([transforms.ToTensor()])
data = transform(data)
print(data)
print(data.shape)
'''
[[[1 1 1]
  [2 2 2]]

 [[3 3 3]
  [4 4 4]]]
(2, 2, 3)
tensor([[[0.0039, 0.0078],
         [0.0118, 0.0157]],

        [[0.0039, 0.0078],
         [0.0118, 0.0157]],

        [[0.0039, 0.0078],
         [0.0118, 0.0157]]])
torch.Size([3, 2, 2])
'''
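What ToTensor does can also be reproduced directly in numpy; a minimal sketch that matches the output above:
import numpy as np

data = np.array([[[1, 1, 1],
                  [2, 2, 2]],
                 [[3, 3, 3],
                  [4, 4, 4]]], dtype='uint8')

# HWC -> CHW, then scale to [0, 1]: the same values ToTensor produced.
manual = data.transpose(2, 0, 1) / 255
print(manual)        # 0.0039..., 0.0078..., 0.0118..., 0.0157... in each channel
print(manual.shape)  # (3, 2, 2)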
Normalize
Normalize needs parameters, for example Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)): the first tuple holds the per-channel (RGB) means and the second holds the per-channel standard deviations. Let us first work out the mean and standard deviation of the three channels by hand. The example is simple enough to read them off directly: the means are (2.5, 2.5, 2.5) and the standard deviations are (1.118, 1.118, 1.118).
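As a quick check, the per-channel statistics of the raw pixel values can also be computed with numpy (a minimal sketch):
import numpy as np

data = np.array([[[1, 1, 1],
                  [2, 2, 2]],
                 [[3, 3, 3],
                  [4, 4, 4]]], dtype='uint8')

# Average over the H and W axes, leaving one value per channel.
print(data.mean(axis=(0, 1)))  # [2.5 2.5 2.5]
print(data.std(axis=(0, 1)))   # [1.11803399 1.11803399 1.11803399]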
import torch
import numpy as np
from torchvision import transforms

data = np.array([[[1, 1, 1],
                  [2, 2, 2]],
                 [[3, 3, 3],
                  [4, 4, 4]]], dtype='uint8')
print(data)
print(data.shape)

# ToTensor first, then Normalize with the hand-computed mean and std.
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((2.5, 2.5, 2.5), (1.118, 1.118, 1.118))])
data = transform(data)
print(data)
print(data.shape)
'''
[[[1 1 1]
  [2 2 2]]

 [[3 3 3]
  [4 4 4]]]
(2, 2, 3)
tensor([[[-2.2326, -2.2291],
         [-2.2256, -2.2221]],

        [[-2.2326, -2.2291],
         [-2.2256, -2.2221]],

        [[-2.2326, -2.2291],
         [-2.2256, -2.2221]]])
torch.Size([3, 2, 2])
'''
To verify this result, we redo the calculation by hand below; the values match the output above.
import numpy as np
from torchvision import transforms

data = np.array([[[1, 1, 1],
                  [2, 2, 2]],
                 [[3, 3, 3],
                  [4, 4, 4]]], dtype='uint8')
print(data)
print(data.shape)

# Repeat what the transform does: scale to [0, 1], then subtract the mean and divide by the std.
data = data / 255
data = (data - 2.5) / 1.118
print(data)
'''
[[[1 1 1]
  [2 2 2]]

 [[3 3 3]
  [4 4 4]]]
(2, 2, 3)
[[[-2.23262829 -2.23262829 -2.23262829]
  [-2.22912063 -2.22912063 -2.22912063]]

 [[-2.22561296 -2.22561296 -2.22561296]
  [-2.2221053  -2.2221053  -2.2221053 ]]]
'''
This raises a question: why is the final result not in [-1, 1]? The reason is that $\frac{x-\mu}{\sigma}$ standardizes the data around the given mean and standard deviation; it does not squeeze it into a fixed interval. (Note also that in the example above the mean 2.5 and standard deviation 1.118 were computed from the raw 0-255 pixel values but applied after ToTensor had already scaled the data to [0, 1], which is why every output sits near -2.23.) To land in [-1, 1], set both the mean and the standard deviation to 0.5: ToTensor first maps the data to [0, 1], and subtracting 0.5 then dividing by 0.5 maps [0, 1] onto [-1, 1]. We only need to change the Normalize parameters to transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)).
The experiment below, using the same data with only the Normalize parameters changed, shows that the values are indeed in [-1, 1] now.
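A sketch of the setup for this run (identical to the earlier example except for the Normalize parameters):
import numpy as np
from torchvision import transforms

data = np.array([[[1, 1, 1],
                  [2, 2, 2]],
                 [[3, 3, 3],
                  [4, 4, 4]]], dtype='uint8')
print(data)
print(data.shape)

# Scale to [0, 1], then map [0, 1] onto [-1, 1] via (x - 0.5) / 0.5.
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
data = transform(data)
print(data)
print(data.shape)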
'''
[[[1 1 1]
  [2 2 2]]

 [[3 3 3]
  [4 4 4]]]
(2, 2, 3)
tensor([[[-0.9922, -0.9843],
         [-0.9765, -0.9686]],

        [[-0.9922, -0.9843],
         [-0.9765, -0.9686]],

        [[-0.9922, -0.9843],
         [-0.9765, -0.9686]]])
torch.Size([3, 2, 2])
'''
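As with the (2.5, 1.118) case, this result can be reproduced by hand; a minimal check:
import numpy as np

data = np.array([[[1, 1, 1],
                  [2, 2, 2]],
                 [[3, 3, 3],
                  [4, 4, 4]]], dtype='uint8')

# ToTensor scaling followed by the (0.5, 0.5) normalization.
manual = (data / 255 - 0.5) / 0.5
print(manual)  # values between -0.9922 and -0.9686, all within [-1, 1]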