Swift: how to convert a UIImage to a 32BGRA CVPixelBuffer for Mediapipe?

xn1cxnb4  ·  posted 2022-12-21  ·  in Swift

I am building an iOS app with Mediapipe, and I need to feed image data into Mediapipe, but it only accepts a 32BGRA CVPixelBuffer.
How do I convert a UIImage to a 32BGRA CVPixelBuffer?
I am using this code:

let frameSize = CGSize(width: self.cgImage!.width, height: self.cgImage!.height)

var pixelBuffer: CVPixelBuffer? = nil
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(frameSize.width), Int(frameSize.height), kCVPixelFormatType_32BGRA, nil, &pixelBuffer)

if status != kCVReturnSuccess {
    return nil
}

CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
let data = CVPixelBufferGetBaseAddress(pixelBuffer!)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
let context = CGContext(data: data, width: Int(frameSize.width), height: Int(frameSize.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: bitmapInfo.rawValue)

context?.draw(self.cgImage!, in: CGRect(x: 0, y: 0, width: self.cgImage!.width, height: self.cgImage!.height))

CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

return pixelBuffer

But Mediapipe crashes with: mediapipe/0 (11): signal SIGABRT.

If I feed it frames from AVCaptureVideoDataOutput instead, everything works fine.
By the way, I am using Swift.

62lalag4  1#

Maybe you can try this. Also, I have a question for you: do you know how to run face detection on a static image in Mediapipe? If you know, please tell me, thanks.

func pixelBufferFromCGImage(image: CGImage) -> CVPixelBuffer? {
    // Mediapipe expects an IOSurface-backed buffer (which is what
    // AVCaptureVideoDataOutput produces), so request one via the attributes.
    let options = [
        kCVPixelBufferCGImageCompatibilityKey as String: NSNumber(value: true),
        kCVPixelBufferCGBitmapContextCompatibilityKey as String: NSNumber(value: true),
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ] as CFDictionary

    var pxbuffer: CVPixelBuffer? = nil
    let status = CVPixelBufferCreate(
        kCFAllocatorDefault,
        image.width,
        image.height,
        kCVPixelFormatType_32BGRA,
        options,
        &pxbuffer)
    guard status == kCVReturnSuccess, let buffer = pxbuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    // Always unlock, even on the early-return paths below.
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
    guard let pxdata = CVPixelBufferGetBaseAddress(buffer) else { return nil }

    // Little-endian BGRA with premultiplied alpha matches kCVPixelFormatType_32BGRA.
    let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)

    guard let context = CGContext(
        data: pxdata,
        width: image.width,
        height: image.height,
        bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: bitmapInfo.rawValue) else {
        return nil
    }
    context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    // Note: CGContextRelease is unavailable in Swift; Core Foundation objects
    // are memory-managed automatically, so no manual release is needed.
    return buffer
}
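Since the question starts from a UIImage, here is a minimal usage sketch for bridging from UIImage to the function above. The `pixelBuffer(from:)` wrapper is my own illustrative helper, not part of Mediapipe; it assumes UIKit, and it handles the case where the UIImage is backed by a CIImage (and thus has no `cgImage`) by rendering it first.

```swift
import UIKit

/// Hypothetical convenience wrapper: UIImage -> 32BGRA CVPixelBuffer.
func pixelBuffer(from uiImage: UIImage) -> CVPixelBuffer? {
    // Fast path: the UIImage is already backed by a CGImage.
    if let cg = uiImage.cgImage {
        return pixelBufferFromCGImage(image: cg)
    }
    // Fallback: render the image (e.g. CIImage-backed) into a CGImage first.
    let renderer = UIGraphicsImageRenderer(size: uiImage.size)
    let rendered = renderer.image { _ in uiImage.draw(at: .zero) }
    return rendered.cgImage.flatMap { pixelBufferFromCGImage(image: $0) }
}
```

This only runs on iOS (UIKit and CoreVideo), so it is a sketch rather than something testable off-device.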
