I. Rendering a video first requires obtaining its frames. Here AVAssetReader is used to read the frames; see https://www.cnblogs.com/czwlinux/p/15779598.html for the details of frame extraction. The frames are requested in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange pixel format.
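As a minimal sketch of that configuration (assuming an assetReader and videoTrack already set up; these names are illustrative, not from the linked post), the pixel format is requested through the track output's settings:

#import <AVFoundation/AVFoundation.h>

AVAssetReaderTrackOutput *trackOutput =
    [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack
                                     outputSettings:@{
        // Ask Core Video for biplanar full-range 4:2:0 YCbCr frames.
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
    }];
[assetReader addOutput:trackOutput];
// Each call to -copyNextSampleBuffer then yields a CMSampleBufferRef in this format.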
II. Converting kCVPixelFormatType_420YpCbCr8BiPlanarFullRange video frames to RGB
1. For background on the color-space conversion, see the notes at https://www.jianshu.com/p/5b437c1df48e.
// BT.601, the standard for SDTV (video range).
matrix_float3x3 kColorConversion601Default = (matrix_float3x3){
    (simd_float3){1.164,  1.164, 1.164},
    (simd_float3){0.0,   -0.392, 2.017},
    (simd_float3){1.596, -0.813, 0.0},
};

// BT.601 full range (ref: http://www.equasys.de/colorconversion.html)
matrix_float3x3 kColorConversion601FullRangeDefault = (matrix_float3x3){
    (simd_float3){1.0,  1.0,   1.0},
    (simd_float3){0.0, -0.343, 1.765},
    (simd_float3){1.4, -0.711, 0.0},
};

// BT.709, the standard for HDTV (video range).
matrix_float3x3 kColorConversion709Default = (matrix_float3x3){
    (simd_float3){1.164,  1.164, 1.164},
    (simd_float3){0.0,   -0.213, 2.112},
    (simd_float3){1.793, -0.533, 0.0},
};
The matrices above cover the common YUV-to-RGB variants. Following the explanation in the link above and the full-range conversion formula, the matrix initialization and the corresponding RGB conversion look like this:
- (void)matrixInit {
    matrix_float3x3 kColorConversion601FullRangeMatrix = (matrix_float3x3){
        (simd_float3){1.0,  1.0,   1.0},
        (simd_float3){0.0, -0.343, 1.765},
        (simd_float3){1.4, -0.711, 0.0},
    };
    // Full-range offsets: Y keeps its full [0, 1] range, so no -(16.0/255.0) bias is
    // needed (that bias applies to video-range data); Cb and Cr are re-centered
    // from [0, 1] to [-0.5, 0.5].
    vector_float3 kColorConversion601FullRangeOffset = (vector_float3){0.0, -0.5, -0.5};
    YCConvertMatrix matrixBuffer = {kColorConversion601FullRangeMatrix, kColorConversion601FullRangeOffset};
    self.convertMatrix = [self.mtkView.device newBufferWithBytes:&matrixBuffer
                                                          length:sizeof(matrixBuffer)
                                                         options:MTLResourceStorageModeShared];
}
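Both the host code above and the shader below refer to a YCConvertMatrix type and YCFragment... index enums whose definitions are not shown here. A minimal shared-header sketch (an assumption about the layout, not the author's actual header) would be:

// YCShaderTypes.h -- shared between the Objective-C code and the .metal file (assumed)
#include <simd/simd.h>

typedef struct {
    matrix_float3x3 matrix; // YUV -> RGB conversion matrix
    vector_float3   offset; // per-channel offset applied before the matrix
} YCConvertMatrix;

typedef enum : int {
    YCFragmentTextureIndexY  = 0,
    YCFragmentTextureIndexUV = 1,
} YCFragmentTextureIndex;

typedef enum : int {
    YCFragmentBufferIndexMatrix = 0,
} YCFragmentBufferIndex;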
#include <metal_stdlib>
using namespace metal;

fragment float4 fragmentShader(RasterizerData input [[stage_in]],
                               texture2d<float> textureY [[texture(YCFragmentTextureIndexY)]],
                               texture2d<float> textureUV [[texture(YCFragmentTextureIndexUV)]],
                               constant YCConvertMatrix *matrixBuffer [[buffer(YCFragmentBufferIndexMatrix)]]) {
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);
    float  colorY  = textureY.sample(textureSampler, input.textureCoordinate).r;   // luma plane
    float2 colorUV = textureUV.sample(textureSampler, input.textureCoordinate).rg; // interleaved CbCr plane
    float3 yuv = float3(colorY, colorUV);
    float3 rgb = matrixBuffer->matrix * (yuv + matrixBuffer->offset);
    return float4(rgb, 1.0);
}
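For the shader to see the matrix, the buffer created in matrixInit must also be bound on the render encoder during the draw pass; a one-line sketch (assuming the renderEncoder from the render loop, which is not shown here):

[renderEncoder setFragmentBuffer:self.convertMatrix offset:0 atIndex:YCFragmentBufferIndexMatrix];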
III. Obtaining the Y and UV textures
- (void)renderTextures:(id<MTLRenderCommandEncoder>)renderEncoder sampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Plane 0: luma (Y), one 8-bit channel per pixel.
    id<MTLTexture> textureY = nil;
    {
        size_t width  = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
        size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
        MTLPixelFormat pixelFormat = MTLPixelFormatR8Unorm;
        CVMetalTextureRef texture = NULL;
        CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, self.metalTextureCache, imageBuffer, NULL, pixelFormat, width, height, 0, &texture);
        if (status == kCVReturnSuccess) {
            textureY = CVMetalTextureGetTexture(texture);
            CFRelease(texture);
        }
    }

    // Plane 1: interleaved chroma (CbCr), two 8-bit channels per pixel.
    id<MTLTexture> textureUV = nil;
    {
        size_t width  = CVPixelBufferGetWidthOfPlane(imageBuffer, 1);
        size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 1);
        MTLPixelFormat pixelFormat = MTLPixelFormatRG8Unorm;
        CVMetalTextureRef texture = NULL;
        CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, self.metalTextureCache, imageBuffer, NULL, pixelFormat, width, height, 1, &texture);
        if (status == kCVReturnSuccess) {
            textureUV = CVMetalTextureGetTexture(texture);
            CFRelease(texture);
        }
    }

    if (textureY && textureUV) {
        [renderEncoder setFragmentTexture:textureY atIndex:YCFragmentTextureIndexY];
        [renderEncoder setFragmentTexture:textureUV atIndex:YCFragmentTextureIndexUV];
    }
    CFRelease(sampleBuffer); // the caller transferred ownership of the sample buffer
}
The code above has three key points:
1. pixelFormat: the Y plane is read as MTLPixelFormatR8Unorm and the UV plane as MTLPixelFormatRG8Unorm. These differ from the OpenGL ES formats: GL samples the chroma texture as RA (luminance-alpha), whereas Metal uses an RG texture.
2. The planeIndex argument of CVMetalTextureCacheCreateTextureFromImage is 0 for Y and 1 for UV; see the official documentation of the function:
// Mapping a BGRA buffer:
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL, MTLPixelFormatBGRA8Unorm, width, height, 0, &outTexture);

// Mapping the luma plane of a 420v buffer:
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL, MTLPixelFormatR8Unorm, width, height, 0, &outTexture);

// Mapping the chroma plane of a 420v buffer as a source texture:
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL, MTLPixelFormatRG8Unorm, width/2, height/2, 1, &outTexture);

// Mapping a yuvs buffer as a source texture
// (note: yuvs/f and 2vuy are unpacked and resampled -- not colorspace converted):
CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL, MTLPixelFormatGBGR422, width, height, 1, &outTexture);
Different pixel formats thus correspond to different planeIndex values.
3. Note the width and height. The documentation's MTLPixelFormatRG8Unorm example passes width/2 and height/2, yet the code above passes both values unchanged. That is because they come from CVPixelBufferGetWidthOfPlane and CVPixelBufferGetHeightOfPlane, which already return the chroma plane's subsampled dimensions; CVPixelBufferGetWidth and CVPixelBufferGetHeight return the full-resolution values, which would then need to be halved for the chroma plane. Using the ...OfPlane getters is the clearer approach, as the sketch below illustrates.
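A quick illustration of the relationship (a hypothetical check; pixelBuffer is assumed to be a 4:2:0 biplanar buffer like the one used here):

size_t lumaWidth    = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);  // == CVPixelBufferGetWidth(pixelBuffer)
size_t chromaWidth  = CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);  // == CVPixelBufferGetWidth(pixelBuffer) / 2
size_t chromaHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1); // == CVPixelBufferGetHeight(pixelBuffer) / 2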
IV. How Metal quickly creates an MTLTexture from a CMSampleBufferRef
1. The key to the fast path is CVMetalTextureCacheRef, the high-speed CPU/GPU texture cache that Core Video provides. With its help, a CVPixelBufferRef is wrapped directly into a CVMetalTextureRef, from which we obtain the MTLTexture we need for rendering; the underlying pixel data is shared rather than copied.
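The cache itself is created once from the MTLDevice and reused for every frame; a minimal sketch (assuming the same self.mtkView.device and self.metalTextureCache used in the code above):

CVMetalTextureCacheRef metalTextureCache = NULL;
CVReturn status = CVMetalTextureCacheCreate(kCFAllocatorDefault, NULL, self.mtkView.device, NULL, &metalTextureCache);
if (status == kCVReturnSuccess) {
    self.metalTextureCache = metalTextureCache;
}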
Concretely, the per-frame creation looks like this:
CVReturn status = CVMetalTextureCacheCreateTextureFromImage(NULL, self.metalTextureCache, imageBuffer, NULL, pixelFormat, width, height, 0, &texture);
if (status == kCVReturnSuccess) {
    textureY = CVMetalTextureGetTexture(texture);
    CFRelease(texture);
}
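When frames stop flowing or the textures are no longer needed, the cache can also be flushed explicitly; this call is not in the original code above but belongs to the same Core Video API:

CVMetalTextureCacheFlush(self.metalTextureCache, 0); // release cached textures that are no longer in use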
Summary: once you understand how YUV works and how YUV maps to RGB, and on top of that how Metal leverages Core Video to optimize rendering, Metal video rendering becomes quite clear.
References:
https://www.jianshu.com/p/7114536d705a
https://www.jianshu.com/p/5b437c1df48e