Fixing MemoryError when downloading large files with Python requests

import os
import requests

res = requests.get(url=url, headers=headers, stream=True)
total_size = int(res.headers['Content-Length'])
size_mb = total_size / 1048576
print("File size:", round(size_mb, 2), "MB")

size = 0  # MB written so far -- must be initialized before the loop
with open(os.path.join(path, dirname, filename), 'wb') as fw:
    for data in res.iter_content(chunk_size=1024 * 1024 * 10):  # 10 MB chunks
        size += len(data) / 1048576  # the final chunk may be smaller than 10 MB
        if size > 1024:
            print("Downloaded", round(size / 1024, 2), "GB,", round(100 * size / size_mb, 2), "% complete")
        else:
            print("Downloaded", round(size, 2), "MB,", round(100 * size / size_mb, 2), "% complete")

        fw.write(data)
        fw.flush()
        os.fsync(fw.fileno())

While downloading a batch of files, anything larger than about 600 MB would reliably trigger a MemoryError. After some research, the code was improved as shown above.
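The core of the fix is stream=True plus chunked reads: without streaming, requests buffers the entire response body in memory at once. The chunked-copy idea can be sketched without any network, using an in-memory buffer as a stand-in for the response stream (the 25 MB size and the copy_in_chunks helper are illustrative, not part of the original script):

```python
import io

def copy_in_chunks(src, dst, chunk_size=10 * 1024 * 1024):
    """Copy src to dst chunk by chunk, so at most one chunk lives in memory."""
    copied = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        copied += len(chunk)
    return copied

# Simulate a 25 MB response body; a real download would read from the
# streamed response instead of a BytesIO object.
src = io.BytesIO(b"x" * (25 * 1024 * 1024))
dst = io.BytesIO()
n = copy_in_chunks(src, dst)
print(n)  # 26214400 (25 MB, copied in 10 MB pieces)
```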

It forces every 10 MB of downloaded data to be written from memory to disk, which solves the out-of-memory problem for good, and it shows download progress as a bonus.
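The MB/GB progress display can be pulled out into a small helper for illustration (format_progress and the sample sizes are hypothetical, not from the original script):

```python
def format_progress(done_mb, total_mb):
    """Return a progress string, switching from MB to GB past 1024 MB."""
    pct = round(100 * done_mb / total_mb, 2)
    if done_mb > 1024:
        return f"Downloaded {round(done_mb / 1024, 2)} GB, {pct}% complete"
    return f"Downloaded {done_mb} MB, {pct}% complete"

print(format_progress(600, 2048))   # Downloaded 600 MB, 29.3% complete
print(format_progress(1536, 2048))  # Downloaded 1.5 GB, 75.0% complete
```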

With only f.flush(), the data can still linger in memory: flush() only drains Python's own user-space buffer into the OS cache. Adding os.fsync(f.fileno()) afterwards forces the OS to commit that cache to disk. Since the network connection is stable, there was no need to add resumable-download (range request) support.
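A minimal sketch of the two-step flush on a throwaway temp file: flush() empties Python's buffer into the OS page cache, and os.fsync() then asks the kernel to push that cache to the physical disk.

```python
import os
import tempfile

fd, tmp_path = tempfile.mkstemp()
os.close(fd)
with open(tmp_path, "wb") as f:
    f.write(b"chunk of data")
    f.flush()             # Python buffer -> OS cache
    os.fsync(f.fileno())  # OS cache -> physical disk
written = os.path.getsize(tmp_path)
print(written)  # 13 bytes on disk
os.remove(tmp_path)
```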

 

See also:

https://*.com/questions/16694907/download-large-file-in-python-with-requests

https://www.runoob.com/python/file-fileno.html

https://www.yiibai.com/python/os_fsync.html

 
