How to Use summary
summary
summary lets you inspect the input and output shapes of every layer in a neural network. Both PyTorch and PaddlePaddle provide this feature.
PyTorch's summary
First, import summary from the torchsummary package:
from torchsummary import summary
Then load any model from torchvision to try it out; here we use resnet34:
from torchvision.models import resnet34
model = resnet34().to('cuda')
Assuming the input has shape (3, 256, 256), run summary:
summary(model, (3, 256, 256), device='cuda')
This produces the following output:
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 128, 128] 9,408
BatchNorm2d-2 [-1, 64, 128, 128] 128
ReLU-3 [-1, 64, 128, 128] 0
MaxPool2d-4 [-1, 64, 64, 64] 0
Conv2d-5 [-1, 64, 64, 64] 36,864
BatchNorm2d-6 [-1, 64, 64, 64] 128
ReLU-7 [-1, 64, 64, 64] 0
Conv2d-8 [-1, 64, 64, 64] 36,864
BatchNorm2d-9 [-1, 64, 64, 64] 128
ReLU-10 [-1, 64, 64, 64] 0
BasicBlock-11 [-1, 64, 64, 64] 0
Conv2d-12 [-1, 64, 64, 64] 36,864
BatchNorm2d-13 [-1, 64, 64, 64] 128
ReLU-14 [-1, 64, 64, 64] 0
Conv2d-15 [-1, 64, 64, 64] 36,864
BatchNorm2d-16 [-1, 64, 64, 64] 128
ReLU-17 [-1, 64, 64, 64] 0
BasicBlock-18 [-1, 64, 64, 64] 0
Conv2d-19 [-1, 64, 64, 64] 36,864
BatchNorm2d-20 [-1, 64, 64, 64] 128
ReLU-21 [-1, 64, 64, 64] 0
Conv2d-22 [-1, 64, 64, 64] 36,864
BatchNorm2d-23 [-1, 64, 64, 64] 128
ReLU-24 [-1, 64, 64, 64] 0
BasicBlock-25 [-1, 64, 64, 64] 0
Conv2d-26 [-1, 128, 32, 32] 73,728
BatchNorm2d-27 [-1, 128, 32, 32] 256
ReLU-28 [-1, 128, 32, 32] 0
Conv2d-29 [-1, 128, 32, 32] 147,456
BatchNorm2d-30 [-1, 128, 32, 32] 256
Conv2d-31 [-1, 128, 32, 32] 8,192
BatchNorm2d-32 [-1, 128, 32, 32] 256
ReLU-33 [-1, 128, 32, 32] 0
BasicBlock-34 [-1, 128, 32, 32] 0
Conv2d-35 [-1, 128, 32, 32] 147,456
BatchNorm2d-36 [-1, 128, 32, 32] 256
ReLU-37 [-1, 128, 32, 32] 0
Conv2d-38 [-1, 128, 32, 32] 147,456
BatchNorm2d-39 [-1, 128, 32, 32] 256
ReLU-40 [-1, 128, 32, 32] 0
BasicBlock-41 [-1, 128, 32, 32] 0
Conv2d-42 [-1, 128, 32, 32] 147,456
BatchNorm2d-43 [-1, 128, 32, 32] 256
ReLU-44 [-1, 128, 32, 32] 0
Conv2d-45 [-1, 128, 32, 32] 147,456
BatchNorm2d-46 [-1, 128, 32, 32] 256
ReLU-47 [-1, 128, 32, 32] 0
BasicBlock-48 [-1, 128, 32, 32] 0
Conv2d-49 [-1, 128, 32, 32] 147,456
BatchNorm2d-50 [-1, 128, 32, 32] 256
ReLU-51 [-1, 128, 32, 32] 0
Conv2d-52 [-1, 128, 32, 32] 147,456
BatchNorm2d-53 [-1, 128, 32, 32] 256
ReLU-54 [-1, 128, 32, 32] 0
BasicBlock-55 [-1, 128, 32, 32] 0
Conv2d-56 [-1, 256, 16, 16] 294,912
BatchNorm2d-57 [-1, 256, 16, 16] 512
ReLU-58 [-1, 256, 16, 16] 0
Conv2d-59 [-1, 256, 16, 16] 589,824
BatchNorm2d-60 [-1, 256, 16, 16] 512
Conv2d-61 [-1, 256, 16, 16] 32,768
BatchNorm2d-62 [-1, 256, 16, 16] 512
ReLU-63 [-1, 256, 16, 16] 0
BasicBlock-64 [-1, 256, 16, 16] 0
Conv2d-65 [-1, 256, 16, 16] 589,824
BatchNorm2d-66 [-1, 256, 16, 16] 512
ReLU-67 [-1, 256, 16, 16] 0
Conv2d-68 [-1, 256, 16, 16] 589,824
BatchNorm2d-69 [-1, 256, 16, 16] 512
ReLU-70 [-1, 256, 16, 16] 0
BasicBlock-71 [-1, 256, 16, 16] 0
Conv2d-72 [-1, 256, 16, 16] 589,824
BatchNorm2d-73 [-1, 256, 16, 16] 512
ReLU-74 [-1, 256, 16, 16] 0
Conv2d-75 [-1, 256, 16, 16] 589,824
BatchNorm2d-76 [-1, 256, 16, 16] 512
ReLU-77 [-1, 256, 16, 16] 0
BasicBlock-78 [-1, 256, 16, 16] 0
Conv2d-79 [-1, 256, 16, 16] 589,824
BatchNorm2d-80 [-1, 256, 16, 16] 512
ReLU-81 [-1, 256, 16, 16] 0
Conv2d-82 [-1, 256, 16, 16] 589,824
BatchNorm2d-83 [-1, 256, 16, 16] 512
ReLU-84 [-1, 256, 16, 16] 0
BasicBlock-85 [-1, 256, 16, 16] 0
Conv2d-86 [-1, 256, 16, 16] 589,824
BatchNorm2d-87 [-1, 256, 16, 16] 512
ReLU-88 [-1, 256, 16, 16] 0
Conv2d-89 [-1, 256, 16, 16] 589,824
BatchNorm2d-90 [-1, 256, 16, 16] 512
ReLU-91 [-1, 256, 16, 16] 0
BasicBlock-92 [-1, 256, 16, 16] 0
Conv2d-93 [-1, 256, 16, 16] 589,824
BatchNorm2d-94 [-1, 256, 16, 16] 512
ReLU-95 [-1, 256, 16, 16] 0
Conv2d-96 [-1, 256, 16, 16] 589,824
BatchNorm2d-97 [-1, 256, 16, 16] 512
ReLU-98 [-1, 256, 16, 16] 0
BasicBlock-99 [-1, 256, 16, 16] 0
Conv2d-100 [-1, 512, 8, 8] 1,179,648
BatchNorm2d-101 [-1, 512, 8, 8] 1,024
ReLU-102 [-1, 512, 8, 8] 0
Conv2d-103 [-1, 512, 8, 8] 2,359,296
BatchNorm2d-104 [-1, 512, 8, 8] 1,024
Conv2d-105 [-1, 512, 8, 8] 131,072
BatchNorm2d-106 [-1, 512, 8, 8] 1,024
ReLU-107 [-1, 512, 8, 8] 0
BasicBlock-108 [-1, 512, 8, 8] 0
Conv2d-109 [-1, 512, 8, 8] 2,359,296
BatchNorm2d-110 [-1, 512, 8, 8] 1,024
ReLU-111 [-1, 512, 8, 8] 0
Conv2d-112 [-1, 512, 8, 8] 2,359,296
BatchNorm2d-113 [-1, 512, 8, 8] 1,024
ReLU-114 [-1, 512, 8, 8] 0
BasicBlock-115 [-1, 512, 8, 8] 0
Conv2d-116 [-1, 512, 8, 8] 2,359,296
BatchNorm2d-117 [-1, 512, 8, 8] 1,024
ReLU-118 [-1, 512, 8, 8] 0
Conv2d-119 [-1, 512, 8, 8] 2,359,296
BatchNorm2d-120 [-1, 512, 8, 8] 1,024
ReLU-121 [-1, 512, 8, 8] 0
BasicBlock-122 [-1, 512, 8, 8] 0
AdaptiveAvgPool2d-123 [-1, 512, 1, 1] 0
Linear-124 [-1, 1000] 513,000
================================================================
Total params: 21,797,672
Trainable params: 21,797,672
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.75
Forward/backward pass size (MB): 125.76
Params size (MB): 83.15
Estimated Total Size (MB): 209.66
----------------------------------------------------------------
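The Param # column can be checked by hand: a Conv2d layer contributes out_channels × in_channels × kH × kW parameters (ResNet's convolutions use bias=False), a BatchNorm2d contributes a weight and a bias per channel, and the final Linear contributes in_features × out_features + out_features. A quick sanity check of a few rows above:

```python
# Conv2d-1: 7x7 conv, 3 -> 64 channels, no bias
conv1 = 64 * 3 * 7 * 7
assert conv1 == 9408  # matches "Conv2d-1 ... 9,408"

# Conv2d-5: 3x3 conv, 64 -> 64 channels, no bias
conv5 = 64 * 64 * 3 * 3
assert conv5 == 36864  # matches "Conv2d-5 ... 36,864"

# BatchNorm2d-2: one weight and one bias per channel
bn2 = 64 * 2
assert bn2 == 128  # matches "BatchNorm2d-2 ... 128"

# Linear-124: 512 -> 1000 with bias
fc = 512 * 1000 + 1000
assert fc == 513000  # matches "Linear-124 ... 513,000"
```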
To feed the model an actual input, create a random tensor on the same device and run a forward pass:
import torch
x = torch.randn([1, 3, 256, 256], dtype=torch.float32).to('cuda')
y = model(x)  # y has shape [1, 1000], one logit per ImageNet class
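The size estimates at the bottom of the summary follow directly from float32 arithmetic (4 bytes per value); a quick check of the input and parameter sizes reported above:

```python
MB = 1024 ** 2  # bytes per mebibyte

# Input size: one float32 image of shape (3, 256, 256)
input_mb = 3 * 256 * 256 * 4 / MB
assert round(input_mb, 2) == 0.75  # matches "Input size (MB): 0.75"

# Params size: 21,797,672 float32 parameters
params_mb = 21797672 * 4 / MB
assert round(params_mb, 2) == 83.15  # matches "Params size (MB): 83.15"
```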