I'm developing an iOS app with MediaPipe, and I need to feed image data into it, but MediaPipe only accepts 32BGRA CVPixelBuffers.
How can I convert a UIImage into a 32BGRA CVPixelBuffer?
I'm currently using this code:
let frameSize = CGSize(width: self.cgImage!.width, height: self.cgImage!.height)
var pixelBuffer: CVPixelBuffer? = nil
// Create an empty 32BGRA pixel buffer (no attributes dictionary passed)
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(frameSize.width), Int(frameSize.height),
                                 kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
if status != kCVReturnSuccess {
    return nil
}
// Lock the buffer and draw the CGImage into its backing memory
CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
let data = CVPixelBufferGetBaseAddress(pixelBuffer!)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
let context = CGContext(data: data, width: Int(frameSize.width), height: Int(frameSize.height),
                        bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
                        space: rgbColorSpace, bitmapInfo: bitmapInfo.rawValue)
context?.draw(self.cgImage!, in: CGRect(x: 0, y: 0, width: self.cgImage!.width, height: self.cgImage!.height))
CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
return pixelBuffer
But MediaPipe then aborts with this error:
mediapipe/0 (11): signal SIGABRT
If I feed it frames from AVCaptureVideoDataOutput instead, everything works fine.
By the way: I'm using Swift.
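For reference, the capture path that works simply asks AVCaptureVideoDataOutput for 32BGRA frames, so MediaPipe receives buffers produced by the camera itself. A minimal sketch of that setup, assuming the surrounding session code; the delegate and queue label are placeholders, not from the original post:

import AVFoundation

// Sketch: request 32BGRA frames directly from the capture output.
let output = AVCaptureVideoDataOutput()
output.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
// In captureOutput(_:didOutput:from:), CMSampleBufferGetImageBuffer(sampleBuffer)
// returns a CVPixelBuffer that MediaPipe accepts as-is.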
1 Answer
Maybe you can try this. Also, I have a question for you: do you know how to run face detection on static images in MediaPipe? If you do, please let me know, thanks.
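(The snippet this answer originally pointed to did not survive. What follows is a sketch of the commonly suggested fix, not the answerer's code: pass an attributes dictionary when creating the buffer so it is CGBitmapContext-compatible and IOSurface-backed, which camera buffers are and which manually created buffers without attributes are not. The extension name toBGRAPixelBuffer is hypothetical.)

import UIKit
import CoreVideo

extension UIImage {
    // Sketch of a UIImage -> 32BGRA CVPixelBuffer conversion. The attributes
    // dictionary is the key difference from the code in the question.
    func toBGRAPixelBuffer() -> CVPixelBuffer? {
        guard let cgImage = self.cgImage else { return nil }
        let attrs: [CFString: Any] = [
            kCVPixelBufferCGImageCompatibilityKey: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey: true,
            kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary
        ]
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, cgImage.width, cgImage.height,
                                         kCVPixelFormatType_32BGRA, attrs as CFDictionary, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }
        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
        // BGRA little-endian with premultiplied alpha matches the 32BGRA memory layout.
        let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue
        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                      width: cgImage.width, height: cgImage.height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: bitmapInfo) else { return nil }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
        return buffer
    }
}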