one of the variables needed for gradient computation ... with torch.autograd.set_detect_anomaly(True)

A rather puzzling error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [16, 8, 1024]], which is output 0 of LeakyReluBackward1, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Everything looked fine when stepping through in debug mode, but as soon as the code was actually run, it threw the error above.
At first, Baidu search results suggested setting the activation function's inplace= argument to False, but after changing it the error still did not go away.
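For context, this is roughly what that first attempted fix looks like, together with the anomaly-detection switch the error message itself suggests. The negative slope and the layer here are placeholders, not taken from the actual model:

```python
import torch
import torch.nn as nn

# Turn on anomaly detection, as the error hint suggests: the backward pass
# will then also print the forward-pass stack trace of the operation whose
# saved tensor was later modified in place.
torch.autograd.set_detect_anomaly(True)

# The first thing the search results suggested: disable in-place mode
# on the activation function.
act = nn.LeakyReLU(0.2, inplace=False)
```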
After a careful check, it turned out the activation function had been written twice at the same place, which is what broke gradient back-propagation.
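A minimal sketch of the kind of pattern that triggers this exact error: an in-place LeakyReLU applied twice to the same tensor, so the output saved for the first backward node is bumped to version 2 before backward runs. The shapes and layer below are illustrative (they just mirror the [16, 8, 1024] tensor in the error message), not the real model:

```python
import torch
import torch.nn as nn

act = nn.LeakyReLU(0.2, inplace=True)   # in-place activation, as in the error
lin = nn.Linear(1024, 1024)

x = torch.randn(16, 8, 1024, requires_grad=True)

# Buggy: the same in-place activation is applied twice at the same spot.
# The first call saves its output (version 1) for backward; the second call
# rewrites that tensor in place (version 2), so backward through the first
# LeakyReLU node fails with the version-mismatch RuntimeError.
h = act(lin(x))
h = act(h)          # accidental duplicate; deleting this line fixes it

h.sum().backward()  # RuntimeError: ... modified by an inplace operation
```

Removing the duplicated activation call lets backward complete.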
