MNN Setup

1. GitHub link: https://github.com/alibaba/MNN/tree/master/tools/converter

2. Tutorials

(1) Usage guide: https://www.bookstack.cn/read/MNN-zh/tools-converter-README_CN.md

(2) Reference blog post: https://blog.csdn.net/qq_37643960/article/details/97028743

(3) The README in the GitHub project also covers the converter.

Installation:

Build and install the MNN dynamic library and the MNNConvert conversion tool with the following commands:

cd /MNN/
mkdir build
cd build
cmake .. -DMNN_BUILD_CONVERTER=true
make -j4

Afterwards, the build folder will contain the benchmark.out and MNNConvert executables.

Testing benchmark.out:

./benchmark.out ../benchmark/models/ 10 0

Here 10 means run the forward pass 10 times and average the results; 0 selects the CPU. (This argument is the compute device used for inference; valid values are 0 (float CPU), 1 (Metal), 3 (float OpenCL), 6 (OpenGL), 7 (Vulkan).)
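The device codes above can be captured in a small lookup table. A minimal sketch in Python; the dictionary and helper name are illustrative and not part of MNN's API:

```python
# Map benchmark.out's device argument to the backend it selects,
# per the usage notes above. Illustrative only, not part of MNN.
FORWARD_TYPES = {
    0: "CPU (float)",
    1: "Metal",
    3: "OpenCL (float)",
    6: "OpenGL",
    7: "Vulkan",
}

def describe_forward_type(code: int) -> str:
    """Return a human-readable name for a benchmark device code."""
    try:
        return FORWARD_TYPES[code]
    except KeyError:
        raise ValueError(f"unknown device code: {code}")

print(describe_forward_type(0))   # CPU (float)
print(describe_forward_type(7))   # Vulkan
```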

Testing MNNConvert:

./MNNConvert -h

Test run:

Step 1: convert the PyTorch model to an ONNX model

import torch
import torchvision

dummy_input = torch.randn(10, 3, 224, 224, device='cuda')
model = torchvision.models.alexnet(pretrained=True).cuda()

# Providing input and output names sets the display names for values
# within the model's graph. Setting these does not change the semantics
# of the graph; it is only for readability.
#
# The inputs to the network consist of the flat list of inputs (i.e.
# the values you would pass to the forward() method) followed by the
# flat list of parameters. You can partially specify names, i.e. provide
# a list here shorter than the number of inputs to the model, and we will
# only set that subset of names, starting from the beginning.
input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ]
output_names = [ "output1" ]

torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names)

Step 2: convert the ONNX model to an MNN model

./MNNConvert -f ONNX --modelFile alexnet.onnx --MNNModel alexnet.mnn --bizCode MNN

Step 3: measure the forward-pass time with benchmark.out

./benchmark.out ./models/ 10 0
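What benchmark.out reports is essentially a mean latency: run the forward pass N times and average the wall-clock time. A plain-Python sketch of the same idea, with a stand-in workload instead of a real MNN session (all names here are illustrative):

```python
import time

def average_latency_ms(fn, runs: int = 10) -> float:
    """Call fn `runs` times and return the mean wall-clock time in
    milliseconds, mirroring how benchmark.out averages forward passes."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        total += time.perf_counter() - start
    return total / runs * 1000.0

# Stand-in "forward pass": a bit of CPU work instead of a real model.
def dummy_forward():
    sum(i * i for i in range(10_000))

print(f"avg latency: {average_latency_ms(dummy_forward, runs=10):.3f} ms")
```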

PS: In /MNN/source/shape/ShapeSqueeze.cpp, line 80, comment out that NanAssert() call; in the newer version of the function it is already commented out. (Otherwise you will hit a reshape error.)
