iOS: Can I use AVFoundation to stream downloaded video frames into an OpenGL ES texture?

vawmfj5a · posted 2023-01-22 · iOS

I have been able to use AVFoundation's AVAssetReader class to upload video frames into an OpenGL ES texture. It has a caveat, however: it fails when used with an AVURLAsset that points to remote media. This failure isn't well documented, and I'm wondering whether there is any way around the shortcoming.


ss2ws0br1#

iOS 6 introduced APIs that I can use to simplify this process. It doesn't use AVAssetReader at all, and instead relies on a class called AVPlayerItemVideoOutput. An instance of this class can be added to any AVPlayerItem instance via the new -addOutput: method.
Unlike AVAssetReader, this class works well with AVPlayerItems backed by remote AVURLAssets, and also has the advantage of allowing a more sophisticated playback interface that supports non-linear playback via -copyPixelBufferForItemTime:itemTimeForDisplay: (instead of AVAssetReader's strictly sequential -copyNextSampleBuffer method).

Sample code

// Initialize the AVFoundation state
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:someUrl options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{

    NSError* error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"tracks" error:&error];
    if (status == AVKeyValueStatusLoaded)
    {
        NSDictionary* settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
        AVPlayerItemVideoOutput* output = [[[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings] autorelease];
        AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:asset];
        [playerItem addOutput:output];
        AVPlayer* player = [AVPlayer playerWithPlayerItem:playerItem];

        // Assume some instance variables exist here. You'll need them to control the
        // playback of the video (via the AVPlayer), and to copy sample buffers (via the AVPlayerItemVideoOutput).
        [self setPlayer:player];
        [self setPlayerItem:playerItem];
        [self setOutput:output];
    }
    else
    {
        NSLog(@"%@ Failed to load the tracks.", self);
    }
}];

// Now at any later point in time, you can get a pixel buffer
// that corresponds to the current AVPlayer state like this:
CVPixelBufferRef buffer = [[self output] copyPixelBufferForItemTime:[[self playerItem] currentTime] itemTimeForDisplay:nil];

Once you have the buffer, you can upload it to OpenGL however you want. I recommend the horribly-named CVOpenGLESTextureCacheCreateTextureFromImage() function, because you get hardware acceleration on all newer devices and it's much faster than glTexSubImage2D(). See Apple's GLCameraRipple and RosyWriter demos for examples.
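As a minimal sketch of the texture-cache route: the code below assumes an EAGLContext `_context` and a CVOpenGLESTextureCacheRef `_textureCache` as instance variables (both names are illustrative, not from the original answer), and a BGRA pixel buffer obtained as shown above.

// Create the cache once, tied to your GL context:
//   CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &_textureCache);

CVOpenGLESTextureRef texture = NULL;
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault,
    _textureCache,
    buffer,                 // the CVPixelBufferRef from -copyPixelBufferForItemTime:
    NULL,                   // texture attributes
    GL_TEXTURE_2D,
    GL_RGBA,                // internal format
    (GLsizei)CVPixelBufferGetWidth(buffer),
    (GLsizei)CVPixelBufferGetHeight(buffer),
    GL_BGRA,                // matches kCVPixelFormatType_32BGRA
    GL_UNSIGNED_BYTE,
    0,                      // plane index (0 for a non-planar BGRA buffer)
    &texture);

if (err == kCVReturnSuccess) {
    glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
    // ... draw with the texture ...
    CFRelease(texture);
}
CVPixelBufferRelease(buffer); // -copyPixelBufferForItemTime: returns a +1 reference

Note that the CVOpenGLESTextureRef only stays valid while the cache and pixel buffer are alive, so release it each frame and flush the cache periodically with CVOpenGLESTextureCacheFlush().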


k5ifujac2#

As of 2023, CGLTexImageIOSurface2D converts CVPixelBuffers into OpenGL textures much faster than CVOpenGLESTextureCacheCreateTextureFromImage().
Make sure the CVPixelBuffers are IOSurface-backed and in the correct format:

videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:@{
    (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferIOSurfacePropertiesKey: @{},
}];
videoOutput.suppressesPlayerRendering = YES;

GLuint texture;
glGenTextures(1, &texture);

To get each frame:

CVPixelBufferRef pixelBuffer = [output copyPixelBufferForItemTime:currentTime itemTimeForDisplay:NULL];
if (NULL == pixelBuffer) return;

IOSurfaceRef surface = CVPixelBufferGetIOSurface(pixelBuffer);
if (!surface) return;

glBindTexture(GL_TEXTURE_RECTANGLE, texture);
CGLError error = CGLTexImageIOSurface2D(CGLGetCurrentContext(),
                                        GL_TEXTURE_RECTANGLE,
                                        GL_RGBA,
                                        (int)IOSurfaceGetWidthOfPlane(surface, 0),
                                        (int)IOSurfaceGetHeightOfPlane(surface, 0),
                                        GL_BGRA,
                                        GL_UNSIGNED_INT_8_8_8_8_REV,
                                        surface,
                                        0);

// Don't forget to CVPixelBufferRelease(pixelBuffer) once the texture is uploaded.

more info
