C# - GZipStream memory stream to file

I'm trying to compress JSON files with gzip so they can be sent to another location. The process needs to handle 5,000-10,000 files a day, and I don't need the compressed versions of the files on the local machine (they're actually being transferred to AWS S3 for long-term archiving).

Since I don't need them locally, I'm trying to compress each file to a memory stream and then use that stream to write to AWS, rather than compressing each one to disk first. Whenever I try this the files come out corrupted (for example, when I open one in 7-Zip and try to open the JSON file inside it, I get "Data error: file is broken").

The same thing happens when I try to write the memory stream to a local file, so that's the problem I'm trying to solve first. Here's the code:

string[] files = Directory.GetFiles(@"C:\JSON_Logs");

foreach(string file in files)
{
    FileInfo fileToCompress = new FileInfo(file);
    using (FileStream originalFileStream = fileToCompress.OpenRead())
    {
        using (MemoryStream compressedMemStream = new MemoryStream())
        {
            using (GZipStream compressionStream = new GZipStream(compressedMemStream, CompressionMode.Compress))
            {
                originalFileStream.CopyTo(compressionStream);
                compressedMemStream.Seek(0, SeekOrigin.Begin);
                FileStream compressedFileStream = File.Create(fileToCompress.FullName + ".gz");

                //Eventually this will be the AWS transfer, but that's not important here
                compressedMemStream.WriteTo(compressedFileStream); 
            }
        }
    }      
}

Answer:

Rearrange your using statements so that the GZipStream is definitely finished by the time you read the memory stream's contents:

foreach(string file in files)
{
    FileInfo fileToCompress = new FileInfo(file);
    using (MemoryStream compressedMemStream = new MemoryStream())
    {
        using (FileStream originalFileStream = fileToCompress.OpenRead())
        using (GZipStream compressionStream = new GZipStream(
            compressedMemStream, 
            CompressionMode.Compress,
            leaveOpen: true))
        {
            originalFileStream.CopyTo(compressionStream);
        }
        compressedMemStream.Seek(0, SeekOrigin.Begin);

        using (FileStream compressedFileStream = File.Create(fileToCompress.FullName + ".gz"))
        {
            //Eventually this will be the AWS transfer, but that's not important here
            compressedMemStream.WriteTo(compressedFileStream);
        }
    }
}

Disposing a stream flushes and closes it. Until the GZipStream is disposed, the final compressed block and the gzip footer haven't been written, which is why the output looked corrupted. The `leaveOpen: true` argument tells the GZipStream not to close the underlying MemoryStream when it is disposed, so you can still read from it afterwards.
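The same pattern can be wrapped into a small helper that returns the compressed bytes, ready to hand to an upload API. This is just a sketch; `CompressFile` is a hypothetical name, not part of the original code:

```csharp
using System.IO;
using System.IO.Compression;

static byte[] CompressFile(string path)
{
    using (var output = new MemoryStream())
    {
        using (var input = File.OpenRead(path))
        // leaveOpen: true keeps the MemoryStream usable after the
        // GZipStream is disposed; disposal flushes the gzip footer
        using (var gzip = new GZipStream(output, CompressionMode.Compress, leaveOpen: true))
        {
            input.CopyTo(gzip);
        }
        // The GZipStream is disposed here, so the buffer is complete
        return output.ToArray();
    }
}
```

The returned array (or a fresh `new MemoryStream(bytes)` over it) can then be passed to whatever transfer call you use for S3, avoiding the temporary `.gz` file entirely.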
