CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
The cause of this error is that the currently specified GPU has run out of memory. You can either kill the existing processes occupying it, or point your code at a different GPU index.
device = torch.device("cuda:0")  # change the index (e.g. "cuda:1") to target a different GPU
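Both fixes above can be applied from the launching script itself by setting environment variables before any CUDA context is created. A minimal sketch (the GPU index `"1"` is just an example; pick one with free memory, e.g. by checking `nvidia-smi`):

```python
import os

# Make kernel launches synchronous so the stack trace points at the
# real failing call instead of a later, unrelated API call.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# Restrict the process to a different, less busy GPU. The visible card
# then appears to the program as "cuda:0".
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "1")
```

Note that these variables must be set before the first CUDA call (in PyTorch, before the first `torch.cuda` operation); setting them afterwards has no effect on the already-initialized context.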