PyTorch Learning, Task 7: PyTorch Pooling Layers and Normalization Layers



Pooling layers provided in torch.nn


Pooling Layers

1. MaxPool: max pooling

torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)

Parameters:

  • kernel_size: size of the pooling window
  • stride: stride of the window; defaults to kernel_size
  • padding: amount of implicit zero padding added on both sides
  • dilation: spacing between elements within the window
  • return_indices: if True, also return the indices of the maxima (needed by MaxUnpool for upsampling)
  • ceil_mode: if True, use ceil instead of floor when computing the output size (default is floor)

Shape:
For MaxPool1d: L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)
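As a quick sanity check of the shape formula, here is a minimal MaxPool2d sketch (the 1x1x4x4 input is an arbitrary choice for illustration):

```python
import torch
from torch import nn

# 2x2 max pooling with stride 2 halves the spatial size of a 4x4 input
m = nn.MaxPool2d(kernel_size=2, stride=2)
x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
y = m(x)
print(y.shape)  # torch.Size([1, 1, 2, 2])
print(y)        # each output element is the max of one 2x2 window
```

With kernel_size=2, stride=2, padding=0, dilation=1, the formula gives floor((4 - 1*(2-1) - 1)/2 + 1) = 2 per spatial dimension, matching the printed shape.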

2. MaxUnpool: a partial inverse of MaxPool

torch.nn.MaxUnpool1d(kernel_size, stride=None, padding=0)
torch.nn.MaxUnpool2d(kernel_size, stride=None, padding=0)
torch.nn.MaxUnpool3d(kernel_size, stride=None, padding=0)

Shape:
For MaxUnpool1d: L_out = (L_in - 1)*stride - 2*padding + kernel_size
Example:
MaxUnpool consumes the indices returned by MaxPool(..., return_indices=True) and places each max value back at its original position, filling all other positions with zeros. Because several input sizes can pool down to the same output size, the intended size can be passed explicitly via the output_size argument.
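A minimal sketch of how MaxUnpool1d pairs with MaxPool1d, including the output_size argument (the input values are arbitrary):

```python
import torch
from torch import nn

pool = nn.MaxPool1d(2, stride=2, return_indices=True)  # indices are required for unpooling
unpool = nn.MaxUnpool1d(2, stride=2)

x = torch.tensor([[[1., 2., 3., 4., 5., 6., 7., 8.]]])
out, indices = pool(x)      # out: tensor([[[2., 4., 6., 8.]]])
y = unpool(out, indices)    # non-max positions are filled with zeros
print(y)                    # tensor([[[0., 2., 0., 4., 0., 6., 0., 8.]]])

# output_size disambiguates when several input lengths pool to the same length:
x9 = torch.tensor([[[1., 2., 3., 4., 5., 6., 7., 8., 9.]]])
out9, idx9 = pool(x9)       # pooled length 4; the trailing element was dropped
y9 = unpool(out9, idx9, output_size=x9.size())
print(y9.shape)             # torch.Size([1, 1, 9])
```

Without output_size, the second unpool call would reconstruct a length-8 tensor, since lengths 8 and 9 both pool to length 4 here.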

3. AvgPool

torch.nn.AvgPool1d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)
torch.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)
torch.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)

Parameters:

  • ceil_mode: if True, use ceil (round up) instead of floor when computing the output size
  • count_include_pad: if True, the zero padding is counted in the averaging

Computation:
For AvgPool1d: out(N_i, C_j, l) = (1/kernel_size) * sum_{m=0}^{kernel_size-1} input(N_i, C_j, stride*l + m)
Shape:
For AvgPool1d: L_out = floor((L_in + 2*padding - kernel_size) / stride + 1)
Example:

>>> import torch
>>> from torch import nn
>>> m = nn.AvgPool1d(3, stride=2)
>>> m(torch.tensor([[[1., 2, 3, 4, 5, 6, 7]]]))
tensor([[[2., 4., 6.]]])
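The effect of count_include_pad is easiest to see with padding at the edges; a small sketch (the all-ones input and kernel size are arbitrary):

```python
import torch
from torch import nn

x = torch.ones(1, 1, 4)

# By default the padded zeros are counted in the average...
avg_incl = nn.AvgPool1d(kernel_size=3, stride=1, padding=1, count_include_pad=True)
# ...but they can be excluded, so edge windows divide by the real element count
avg_excl = nn.AvgPool1d(kernel_size=3, stride=1, padding=1, count_include_pad=False)

print(avg_incl(x))  # edge values are (0 + 1 + 1) / 3 = 0.6667
print(avg_excl(x))  # edge values are (1 + 1) / 2 = 1.0
```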

Exercise

Implement mean pooling and max pooling with PyTorch

import torch

def mean_pooling(data, m, n):
    """Mean pooling with an m x n window and a matching stride."""
    a, b = data.shape
    res = []
    for i in range(0, a, m):
        line = []
        for j in range(0, b, n):
            x = data[i:i+m, j:j+n]
            # x.mean() divides by the actual window size, so edge windows
            # smaller than m x n are averaged correctly
            line.append(x.mean())
        res.append(line)
    return torch.tensor(res)

def max_pooling(data, m, n):
    """Max pooling with an m x n window and a matching stride."""
    a, b = data.shape
    res = []
    for i in range(0, a, m):
        line = []
        for j in range(0, b, n):
            x = data[i:i+m, j:j+n]
            line.append(x.max())  # the maximum itself, not scaled by the window size
        res.append(line)
    return torch.tensor(res)

a = torch.tensor([[1., 2., 3.], [4., 5., 6.]])
res = mean_pooling(a, 2, 2)
print(res)
res = max_pooling(a, 2, 2)
print(res)

Result:
tensor([[3.0000, 4.5000]])
tensor([[5., 6.]])

Normalization Layers

References:
BatchNorm: torch.nn.BatchNorm1d/2d/3d normalize each channel over the batch (and spatial) dimensions.
LayerNorm: torch.nn.LayerNorm normalizes each sample over its trailing feature dimensions.
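A minimal sketch of the two normalization layers side by side (the 2x3x4x4 input shape is an arbitrary choice for illustration):

```python
import torch
from torch import nn

x = torch.randn(2, 3, 4, 4)  # (N, C, H, W)

# BatchNorm2d normalizes each channel over the (N, H, W) dimensions
bn = nn.BatchNorm2d(num_features=3)
y_bn = bn(x)

# LayerNorm normalizes each sample over the trailing dimensions given here
ln = nn.LayerNorm(normalized_shape=[3, 4, 4])
y_ln = ln(x)

# After normalization, per-channel (BN) / per-sample (LN) statistics
# are approximately zero mean and unit variance
print(y_bn.mean(dim=(0, 2, 3)))       # close to zero for each of the 3 channels
print(y_ln[0].mean(), y_ln[0].std())  # close to 0 and 1 for the first sample
```

Note that BatchNorm's statistics depend on the whole batch (and differ between train and eval mode), while LayerNorm's are computed per sample and behave identically in both modes.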
