Saving the OpenGL context as video output

sq1bmfud · asked on 2023-06-05

I am currently trying to save an animation made in OpenGL to a video file. I have tried using openCV's videowriter, but to no avail. I have successfully been able to generate a snapshot and save it as a bmp using the SDL library. But if I save all the snapshots and then generate the video with ffmpeg, that amounts to collecting 4 GB worth of images. Not practical. How can I write video frames directly during rendering? Here is the code I use to take a snapshot when I need one:

void snapshot() {
    SDL_Surface* snap = SDL_CreateRGBSurface(SDL_SWSURFACE, WIDTH, HEIGHT, 24,
                                             0x000000FF, 0x0000FF00, 0x00FF0000, 0);
    char* pixels = new char[3 * WIDTH * HEIGHT];
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    /* glReadPixels returns rows bottom-up; copy them flipped into the surface. */
    for (int i = 0; i < HEIGHT; i++)
        std::memcpy((char*)snap->pixels + snap->pitch * i,
                    pixels + 3 * WIDTH * (HEIGHT - i - 1), WIDTH * 3);

    delete[] pixels;
    SDL_SaveBMP(snap, "snapshot.bmp");
    SDL_FreeSurface(snap);
}

What I need is video output. I have found that ffmpeg can be used to create videos from C++ code, but have not been able to figure out the process. Please help!

EDIT: I have tried using the openCV CvVideoWriter class, but the program crashes ("segmentation fault") the moment it is declared; compilation shows no errors, of course. Any suggestions on that?
SOLUTION for Python users (requires Python 2.7, python-imaging, python-opengl, python-opencv, and codecs for the formats you want to write to; I am on Ubuntu 14.04 64-bit):

def snap():
    screenshot = glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE)
    snapshot = Image.frombuffer("RGBA", (W, H), screenshot, "raw", "RGBA", 0, 0)
    snapshot.save(os.path.dirname(videoPath) + "/temp.jpg")
    load = cv2.cv.LoadImage(os.path.dirname(videoPath) + "/temp.jpg")
    cv2.cv.WriteFrame(videoWriter, load)

Here W and H are the window dimensions (width, height). What is happening is that I am using PIL to convert the raw pixels read by the glReadPixels command into a JPEG image. I then load that JPEG into an openCV image and write it to the videowriter. I was having certain issues with using the PIL image in the videowriter directly (which would save millions of clock cycles of I/O), but right now I am not working on that. Image is the PIL module, cv2 is the python-opencv module.
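
The temp-JPEG round trip above can be avoided by wrapping the raw glReadPixels bytes in a numpy array and handing that to the writer directly. A minimal sketch, assuming numpy and the newer cv2.VideoWriter API rather than the legacy cv2.cv one (videoWriter, W and H as above):

import cv2
import numpy as np
from OpenGL.GL import glReadPixels, GL_RGBA, GL_UNSIGNED_BYTE

def snap_direct():
    raw = glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE)    # raw bytes, no file I/O
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(H, W, 4)
    frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGR)  # OpenCV expects BGR order
    frame = cv2.flip(frame, 0)                       # glReadPixels rows are bottom-up
    videoWriter.write(frame)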

2g32fytz1#

It sounds like you are using the command-line utility ffmpeg. Rather than using the command line to encode video from a collection of still images, you should use libavcodec and libavformat. These are the libraries that ffmpeg is actually built upon, and they allow you to encode video and store it in a standard stream/interchange format (e.g. RIFF/AVI) without using a separate program.
You probably will not find many tutorials on implementing this, because traditionally people have wanted to use ffmpeg to go the other way; that is, to decode various video formats for display in OpenGL. I think this will change very soon with the introduction of gameplay video encoding on the PS4 and Xbox One consoles; demand for this functionality will suddenly skyrocket.
The general process goes something like this, though (a minimal sketch in Python follows the list):

  • Pick a container format and codec
      • Often one dictates the other (e.g. MPEG-2 + MPEG Program Stream)
  • Start filling a buffer with your still frames
  • Periodically encode your buffer of still frames and write them to your output (packet writing in MPEG terms)
      • You do this either when the buffer becomes full or every n milliseconds; you might prefer one over the other depending on whether you want to stream your video live or not
  • When your program terminates, flush the buffer and close your stream
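
A minimal sketch of that flow in Python, using the PyAV bindings (the av package) over libavcodec/libavformat; the codec, rate, size, and frame source here are placeholders:

import av
import numpy as np

container = av.open('out.mp4', mode='w')         # container chosen from the extension
stream = container.add_stream('mpeg4', rate=25)  # codec + frame rate
stream.width, stream.height = 800, 600
stream.pix_fmt = 'yuv420p'

for i in range(100):
    rgb = np.zeros((600, 800, 3), dtype=np.uint8)            # stand-in for a rendered frame
    frame = av.VideoFrame.from_ndarray(rgb, format='rgb24')
    for packet in stream.encode(frame):                      # 0..n packets per frame
        container.mux(packet)

for packet in stream.encode():                   # flush the buffered packets at the end
    container.mux(packet)
container.close()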

One nice aspect of this is that you do not actually need to write to a file at all. Since you periodically encode packets of data from your buffer of still frames, you can stream the encoded video over a network if you want; this is why codec and container (interchange) format are separate.
Another nice thing is that you do not have to synchronize the CPU and GPU: you can set up a pixel buffer object and have OpenGL copy data into CPU memory a couple of frames behind the GPU. This makes real-time encoding of video much less demanding; you only have to encode and flush the video to disk or over the network periodically, as long as your video latency requirements are not unreasonable. This works very well in real-time rendering, since you have a large enough pool of data to keep a CPU thread busy encoding at all times.
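
A rough sketch of that double-buffered PBO readback with PyOpenGL (Python, to match the other answers here); the two-buffer rotation and sizes are assumptions, and the very first call maps a buffer that has not been filled yet:

import ctypes
from OpenGL.GL import *

W, H, NBYTES = 800, 600, 800 * 600 * 4

pbos = glGenBuffers(2)                # two PBOs used round-robin
for pbo in pbos:
    glBindBuffer(GL_PIXEL_PACK_BUFFER, int(pbo))
    glBufferData(GL_PIXEL_PACK_BUFFER, NBYTES, None, GL_STREAM_READ)

def read_frame_async(frame_index):
    # Kick off an asynchronous copy of the current frame into one buffer...
    glBindBuffer(GL_PIXEL_PACK_BUFFER, int(pbos[frame_index % 2]))
    glReadPixels(0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, ctypes.c_void_p(0))
    # ...and map the other one, filled last frame, which should be done by now.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, int(pbos[(frame_index + 1) % 2]))
    ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)
    data = ctypes.string_at(ptr, NBYTES) if ptr else None
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER)
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
    return data                       # hand this off to the encoder
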
Encoding frames can even be done in real time on the GPU, given enough storage for a large buffer of frames (since the encoded data ultimately has to be copied from the GPU to the CPU, and you want to do that as rarely as possible). Obviously this is not done with ffmpeg; there are specialized libraries that use CUDA / OpenCL / compute shaders for this purpose. I have never used them, but they do exist.
For portability's sake, you should stick with libavcodec and Pixel Buffer Objects for asynchronous GPU->CPU copies. CPUs these days have enough cores that you can probably get away without GPU-assisted encoding if you buffer enough frames and encode them in multiple simultaneous threads (which adds synchronization overhead and increases latency when outputting the encoded video), or just drop frames / lower the resolution (the poor man's solution).
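
A sketch of that buffered, threaded arrangement (Python's queue/threading stand in for whatever threading primitives you use; encode() is a placeholder for your actual encoder call):

import queue
import threading

frames = queue.Queue(maxsize=64)   # pool of buffered frames keeps the encoder busy

def encoder_worker():
    while True:
        frame = frames.get()
        if frame is None:          # sentinel: stop encoding
            break
        encode(frame)              # placeholder for a libavcodec / PyAV / VideoWriter call

worker = threading.Thread(target=encoder_worker, daemon=True)
worker.start()
# render loop:  frames.put(pixels)   (or drop the frame if the queue is full)
# shutdown:     frames.put(None); worker.join()
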
Many of the concepts covered here reach far beyond the scope of SDL, but you did ask how to do this with better performance than your current solution. In short: use OpenGL Pixel Buffer Objects to transfer the data and libavcodec for encoding. An example application that encodes video can be found on the ffmpeg libavcodec examples page.

ryhaxcpt2#

For some quick testing, something like the code below works (tested); resizable windows are left unhandled. (On systems that ship ffmpeg rather than avconv, the same command line works with ffmpeg substituted for avconv.)

#include <stdio.h>
FILE *avconv = NULL;
...
/* initialize */
avconv = popen("avconv -y -f rawvideo -s 800x600 -pix_fmt rgb24 -r 25 -i - -vf vflip -an -b:v 1000k test.mp4", "w");
...
/* save: pixels must point to an 800*600*3-byte RGB buffer */
glReadPixels(0, 0, 800, 600, GL_RGB, GL_UNSIGNED_BYTE, pixels);
if (avconv)
    fwrite(pixels, 800*600*3, 1, avconv);
...
/* term */
if (avconv)
    pclose(avconv);
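
The same pipe trick from Python, for comparison (this assumes an ffmpeg binary on the PATH; avconv takes the same arguments):

import subprocess
from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE

W, H = 800, 600

# initialize
ffmpeg = subprocess.Popen(
    ['ffmpeg', '-y', '-f', 'rawvideo', '-s', '%dx%d' % (W, H), '-pix_fmt', 'rgb24',
     '-r', '25', '-i', '-', '-vf', 'vflip', '-an', '-b:v', '1000k', 'test.mp4'],
    stdin=subprocess.PIPE)

# save (once per frame, after rendering)
pixels = glReadPixels(0, 0, W, H, GL_RGB, GL_UNSIGNED_BYTE)
ffmpeg.stdin.write(pixels)

# term
ffmpeg.stdin.close()
ffmpeg.wait()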

j0pj023g3#

Runnable mpg example with FFmpeg 2.7

Explanation and a superset example at: How to use GLUT/OpenGL to render to a file?
Consider using https://github.com/FFmpeg/FFmpeg/blob/n3.0/doc/examples/muxing.c to mux the output into a container format.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <GL/glext.h>

#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>

enum Constants { SCREENSHOT_MAX_FILENAME = 256 };
static GLubyte *pixels = NULL;
static GLuint fbo;
static GLuint rbo_color;
static GLuint rbo_depth;
static const unsigned int HEIGHT = 100;
static const unsigned int WIDTH = 100;
static int offscreen = 1;
static unsigned int max_nframes = 100;
static unsigned int nframes = 0;
static unsigned int time0;

/* Model. */
static double angle;
static double delta_angle;

/* Adapted from: https://github.com/cirosantilli/cpp-cheat/blob/19044698f91fefa9cb75328c44f7a487d336b541/ffmpeg/encode.c */
static AVCodecContext *c = NULL;
static AVFrame *frame;
static AVPacket pkt;
static FILE *file;
static struct SwsContext *sws_context = NULL;
static uint8_t *rgb = NULL;

static void ffmpeg_encoder_set_frame_yuv_from_rgb(uint8_t *rgb) {
    const int in_linesize[1] = { 4 * c->width };
    sws_context = sws_getCachedContext(sws_context,
            c->width, c->height, AV_PIX_FMT_RGB32,
            c->width, c->height, AV_PIX_FMT_YUV420P,
            0, NULL, NULL, NULL);
    sws_scale(sws_context, (const uint8_t * const *)&rgb, in_linesize, 0,
            c->height, frame->data, frame->linesize);
}

void ffmpeg_encoder_start(const char *filename, int codec_id, int fps, int width, int height) {
    AVCodec *codec;
    int ret;
    avcodec_register_all();
    codec = avcodec_find_encoder(codec_id);
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }
    c = avcodec_alloc_context3(codec);
    if (!c) {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(1);
    }
    c->bit_rate = 400000;
    c->width = width;
    c->height = height;
    c->time_base.num = 1;
    c->time_base.den = fps;
    c->gop_size = 10;
    c->max_b_frames = 1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    if (codec_id == AV_CODEC_ID_H264)
        av_opt_set(c->priv_data, "preset", "slow", 0);
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }
    file = fopen(filename, "wb");
    if (!file) {
        fprintf(stderr, "Could not open %s\n", filename);
        exit(1);
    }
    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }
    frame->format = c->pix_fmt;
    frame->width  = c->width;
    frame->height = c->height;
    ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height, c->pix_fmt, 32);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate raw picture buffer\n");
        exit(1);
    }
}

void ffmpeg_encoder_finish(void) {
    uint8_t endcode[] = { 0, 0, 1, 0xb7 };
    int got_output, ret;
    do {
        fflush(stdout);
        ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding frame\n");
            exit(1);
        }
        if (got_output) {
            fwrite(pkt.data, 1, pkt.size, file);
            av_packet_unref(&pkt);
        }
    } while (got_output);
    fwrite(endcode, 1, sizeof(endcode), file);
    fclose(file);
    avcodec_close(c);
    av_free(c);
    av_freep(&frame->data[0]);
    av_frame_free(&frame);
}

void ffmpeg_encoder_encode_frame(uint8_t *rgb) {
    int ret, got_output;
    ffmpeg_encoder_set_frame_yuv_from_rgb(rgb);
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;
    ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
    if (ret < 0) {
        fprintf(stderr, "Error encoding frame\n");
        exit(1);
    }
    if (got_output) {
        fwrite(pkt.data, 1, pkt.size, file);
        av_packet_unref(&pkt);
    }
}

void ffmpeg_encoder_glread_rgb(uint8_t **rgb, GLubyte **pixels, unsigned int width, unsigned int height) {
    size_t i, j, k, cur_gl, cur_rgb, nvals;
    const size_t format_nchannels = 4;
    nvals = format_nchannels * width * height;
    *pixels = realloc(*pixels, nvals * sizeof(GLubyte));
    *rgb = realloc(*rgb, nvals * sizeof(uint8_t));
    /* Get RGBA to align to 32 bits instead of just 24 for RGB. May be faster for FFmpeg. */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, *pixels);
    for (i = 0; i < height; i++) {
        for (j = 0; j < width; j++) {
            cur_gl  = format_nchannels * (width * (height - i - 1) + j);
            cur_rgb = format_nchannels * (width * i + j);
            for (k = 0; k < format_nchannels; k++)
                (*rgb)[cur_rgb + k] = (*pixels)[cur_gl + k];
        }
    }
}

static int model_init(void) {
    angle = 0;
    delta_angle = 1;
    return 0;
}

static int model_update(void) {
    angle += delta_angle;
    return 0;
}

static int model_finished(void) {
    return nframes >= max_nframes;
}

static void init(void)  {
    int glget;

    if (offscreen) {
        /*  Framebuffer */
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        /* Color renderbuffer. */
        glGenRenderbuffers(1, &rbo_color);
        glBindRenderbuffer(GL_RENDERBUFFER, rbo_color);
        /* Storage must be one of: */
        /* GL_RGBA4, GL_RGB565, GL_RGB5_A1, GL_DEPTH_COMPONENT16, GL_STENCIL_INDEX8. */
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, WIDTH, HEIGHT);
        glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo_color);

        /* Depth renderbuffer. */
        glGenRenderbuffers(1, &rbo_depth);
        glBindRenderbuffer(GL_RENDERBUFFER, rbo_depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, WIDTH, HEIGHT);
        glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo_depth);

        glReadBuffer(GL_COLOR_ATTACHMENT0);

        /* Sanity check. */
        assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
        glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &glget);
        assert(WIDTH * HEIGHT < (unsigned int)glget);
    } else {
        glReadBuffer(GL_BACK);
    }

    glClearColor(0.0, 0.0, 0.0, 0.0);
    glEnable(GL_DEPTH_TEST);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glViewport(0, 0, WIDTH, HEIGHT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);

    time0 = glutGet(GLUT_ELAPSED_TIME);
    model_init();
    ffmpeg_encoder_start("tmp.mpg", AV_CODEC_ID_MPEG1VIDEO, 25, WIDTH, HEIGHT);
}

static void deinit(void)  {
    printf("FPS = %f\n", 1000.0 * nframes / (double)(glutGet(GLUT_ELAPSED_TIME) - time0));
    free(pixels);
    ffmpeg_encoder_finish();
    free(rgb);
    if (offscreen) {
        glDeleteFramebuffers(1, &fbo);
        glDeleteRenderbuffers(1, &rbo_color);
        glDeleteRenderbuffers(1, &rbo_depth);
    }
}

static void draw_scene(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glRotatef(angle, 0.0f, 0.0f, -1.0f);
    glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f);
    glVertex3f( 0.0f,  0.5f, 0.0f);
    glColor3f(0.0f, 1.0f, 0.0f);
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glColor3f(0.0f, 0.0f, 1.0f);
    glVertex3f( 0.5f, -0.5f, 0.0f);
    glEnd();
}

static void display(void) {
    draw_scene();
    if (offscreen) {
        glFlush();
    } else {
        glutSwapBuffers();
    }
    frame->pts = nframes;
    ffmpeg_encoder_glread_rgb(&rgb, &pixels, WIDTH, HEIGHT);
    ffmpeg_encoder_encode_frame(rgb);
    nframes++;
    if (model_finished())
        exit(EXIT_SUCCESS);
}

static void idle(void) {
    while (model_update());
    glutPostRedisplay();
}

int main(int argc, char **argv) {
    GLint glut_display;
    glutInit(&argc, argv);
    if (argc > 1)
        offscreen = 0;
    if (offscreen) {
        /* TODO: if we use anything smaller than the window, it only renders a smaller version of things. */
        /*glutInitWindowSize(50, 50);*/
        glutInitWindowSize(WIDTH, HEIGHT);
        glut_display = GLUT_SINGLE;
    } else {
        glutInitWindowSize(WIDTH, HEIGHT);
        glutInitWindowPosition(100, 100);
        glut_display = GLUT_DOUBLE;
    }
    glutInitDisplayMode(glut_display | GLUT_RGBA | GLUT_DEPTH);
    glutCreateWindow(argv[0]);
    if (offscreen) {
        /* TODO: if we hide the window the program blocks. */
        /*glutHideWindow();*/
    }
    init();
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    atexit(deinit);
    glutMainLoop();
    return EXIT_SUCCESS;
}
mzaanser4#

I solved the problem of writing a video file from OpenGL in Python in the following way. In the main section, set up the video file to write to:

#Set up video (requires: import cv2):
width = 640
height = 480
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
#Open video output file:
out = cv2.VideoWriter('videoout.mp4', fourcc, 20.0, (width, height))

In the display function:

#Read frame:
screenshot = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
#Convert from binary to cv2 numpy array:
snapshot = Image.frombuffer("RGB", (width, height), screenshot, "raw", "RGB", 0, 0)
snapshot = np.array(snapshot)
snapshot = cv2.flip(snapshot, 0)
#write frame to video file:
out.write(snapshot)
if (...):  #End movie
    glutLeaveMainLoop()
    out.release()
    print("Exit")

This writes to "videoout.mp4". Note that it needs out.release() at the end; otherwise you will not get a proper mp4 file.

t5fffqht5#

I managed to write a video from OpenGL in Python in the following way:
1. Initialize cv2.VideoWriter():

CODEC = "avc1"  # codec name
(width, height) = (1980, 1080)
fps = 25        # output frame rate (not defined in the original snippet; pick your own)
video_path = "OpenGL_vis.mp4"
fourcc = cv2.VideoWriter_fourcc(*CODEC)
video_writer = cv2.VideoWriter(
                video_path,
                fourcc,
                fps,
                (width, height)
            )

2. In a loop, grab frames one by one from the OpenGL widget (let w = gl.GLViewWidget()), convert them from 4 channels to 3, and write them to the cv2.VideoWriter():

im = w.renderToArray((width, height))
im = cv2.cvtColor(im, cv2.COLOR_BGRA2BGR)
video_writer.write(im)

3. Release the cv2.VideoWriter() after the loop:

if video_writer:
    video_writer.release()
