Converting a PyTorch model to ONNX format:
import torch
from torch.autograd import Variable

def Torch2Onnx(model, input_size, output_name, istrained=True):
    '''
    :param model: the network to export
    :param input_size: input spatial size, e.g. (244, 244)
    :param output_name: output file name, e.g. "test_output.onnx"
    :param istrained: whether to export the trained weights. default: True
    '''
    # Dummy input used to trace the model graph.
    x = Variable(torch.randn(1, 3, input_size[0], input_size[1])).cuda()
    if istrained:
        torch.onnx.export(model, x, output_name, verbose=True)
    else:
        # export_params=False only exports an untrained graph (no weights).
        torch.onnx.export(model, x, output_name, export_params=False, verbose=True)
Usage example:
model = Model()  # Model is your network class
model.load_state_dict(torch.load(weight_path))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
model.eval()
input_size = (384, 288)
Torch2Onnx(model, input_size, "test.onnx")
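After exporting, it is easy to sanity-check the result. This is a minimal sketch, assuming the onnx and onnxruntime packages are installed and that "test.onnx" is the file produced above; the input shape just mirrors the example.
import onnx
import onnxruntime
import numpy as np

# Check that the exported graph is well-formed.
onnx_model = onnx.load("test.onnx")
onnx.checker.check_model(onnx_model)

# Run one forward pass with onnxruntime on a random input.
sess = onnxruntime.InferenceSession("test.onnx")
dummy = np.random.randn(1, 3, 384, 288).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])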
Getting the params of a model:
Note: different methods make different assumptions about whether the model lives on the CPU or on CUDA. If you hit an error like RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same, check whether the weights (or the input) should be on CUDA.
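A minimal sketch of the usual fix: make sure the model and the input tensor are on the same device (the input shape here is only an example).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)                      # move the weights
x = torch.randn(1, 3, 384, 288).to(device)    # move the input to the same device
out = model(x)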
Method 1: Using torchsummary
Install torchsummary with pip:
pip install torchsummary
Code snippet:
from torchsummary import summary

model = Model()  # Model is your network class
model.load_state_dict(torch.load(weight_path))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
summary(model, (3, 384, 288))
Method 2: Using torchstat
Install torchstat with pip:
pip install torchstat
Code snippet (similar to summary):
from torchstat import stat

model = Model()  # Model is your network class
model.load_state_dict(torch.load(weight_path))
# Keep the model on the CPU here: stat feeds it a CPU tensor internally.
stat(model, (3, 384, 288))
Method 3: Using thop (not particularly recommended)
Install thop with pip:
pip install thop
Code snippet:
from thop import profile, clever_format

model = Model()  # Model is your network class
model.load_state_dict(torch.load(weight_path))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
input = torch.randn(1, 3, 384, 288).to(device)  # profile needs a real input tensor
flops, params = profile(model, inputs=(input,))
flops, params = clever_format([flops, params], "%.3f")
print(flops, params)