Android: converting a Bitmap to a WebRTC VideoFrame

h7appiyu · posted 2023-02-14 · Android

I'm developing a WebRTC-based Android application using the native implementation (org.webrtc:google-webrtc:1.0.24064), and I need to send a series of bitmaps alongside the camera stream.
As I understand it, I can subclass org.webrtc.VideoCapturer, do the rendering on a separate thread, and push video frames to the observer; but it expects them in YUV420, and I'm not sure my conversion is correct.
Here is what I have so far: CustomCapturer.java
Are there any examples I could look at for doing this kind of thing? Thanks. (Roughly what I'm aiming for is sketched below.)
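
A minimal sketch of the target, assuming the conversion itself is solved; JavaI420Buffer and VideoFrame are real org.webrtc classes, and filling the planes is the step I can't get right:

import org.webrtc.JavaI420Buffer
import org.webrtc.VideoFrame

// Allocate an I420 buffer, fill its planes, and wrap it in a VideoFrame.
fun makeFrame(width: Int, height: Int, timestampNs: Long): VideoFrame {
    val buffer = JavaI420Buffer.allocate(width, height)
    // ... fill buffer.dataY / buffer.dataU / buffer.dataV with converted pixels ...
    return VideoFrame(buffer, /* rotation= */ 0, timestampNs)
}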


mu0hgdu0 1#
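
You can upload the bitmap into a GL texture and let YuvConverter produce the I420 buffer, along these lines: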

// Note: bitmap, textureHelper and videoFrame come from the enclosing scope.
YuvConverter yuvConverter = new YuvConverter();
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
// Upload the bitmap into the GL texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
// Wrap the RGB texture and convert it to an I420 buffer.
TextureBufferImpl buffer = new TextureBufferImpl(
        bitmap.getWidth(), bitmap.getHeight(),
        VideoFrame.TextureBuffer.Type.RGB, textures[0],
        new Matrix(), textureHelper.getHandler(), yuvConverter, null);
VideoFrame.I420Buffer i420Buf = yuvConverter.convert(buffer);
VideoFrame convertedFrame = new VideoFrame(i420Buf, 180, videoFrame.getTimestampNs());
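
One caveat: YuvConverter.convert issues GL calls, so the snippet above must run on a thread with a live EGL context; posting it to textureHelper.getHandler() (the SurfaceTextureHelper's thread) is the usual way to guarantee that. Remember to delete the texture and release() the frame once it has been handed to the observer, since the buffers are reference-counted.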

nmpmafwu 2#

I tried rendering it manually with GL, as in the answer above, but with a stream of images I ended up with some tearing and frame-rate problems.
Instead, I found that the SurfaceTextureHelper class simplifies things a lot, since you can use ordinary canvas drawing to render bitmaps into a VideoFrame. I'm guessing it still uses GL under the hood, as the performance was otherwise comparable. Below is an example VideoCapturer that takes in arbitrary bitmaps and pushes the captured frames to its observer:

import android.content.Context
import android.graphics.Bitmap
import android.graphics.Matrix
import android.graphics.Paint
import android.os.Build
import android.view.Surface
import org.webrtc.CapturerObserver
import org.webrtc.SurfaceTextureHelper
import org.webrtc.VideoCapturer

/**
 * A [VideoCapturer] that can be manually driven by passing in [Bitmap].
 *
 * Once [startCapture] is called, call [pushBitmap] to render images as video frames.
 */
open class BitmapFrameCapturer : VideoCapturer {
    private var surfaceTextureHelper: SurfaceTextureHelper? = null
    private var capturerObserver: CapturerObserver? = null
    private var disposed = false

    private var rotation = 0
    private var width = 0
    private var height = 0

    private val stateLock = Any()

    private var surface: Surface? = null

    override fun initialize(
        surfaceTextureHelper: SurfaceTextureHelper,
        context: Context,
        observer: CapturerObserver,
    ) {
        synchronized(stateLock) {
            this.surfaceTextureHelper = surfaceTextureHelper
            this.capturerObserver = observer
            surface = Surface(surfaceTextureHelper.surfaceTexture)
        }
    }

    private fun checkNotDisposed() {
        check(!disposed) { "Capturer is disposed." }
    }

    override fun startCapture(width: Int, height: Int, framerate: Int) {
        synchronized(stateLock) {
            checkNotDisposed()
            checkNotNull(surfaceTextureHelper) { "BitmapFrameCapturer must be initialized before calling startCapture." }
            capturerObserver?.onCapturerStarted(true)
            surfaceTextureHelper?.startListening { frame -> capturerObserver?.onFrameCaptured(frame) }
        }
    }

    override fun stopCapture() {
        synchronized(stateLock) {
            surfaceTextureHelper?.stopListening()
            capturerObserver?.onCapturerStopped()
        }
    }

    override fun changeCaptureFormat(width: Int, height: Int, framerate: Int) {
        // Do nothing.
        // These attributes are driven by the bitmaps fed in.
    }

    override fun dispose() {
        synchronized(stateLock) {
            if (disposed) {
                return
            }

            stopCapture()
            surface?.release()
            disposed = true
        }
    }

    override fun isScreencast(): Boolean = false

    fun pushBitmap(bitmap: Bitmap, rotationDegrees: Int) {
        synchronized(stateLock) {
            if (disposed) {
                return
            }

            checkNotNull(surfaceTextureHelper)
            checkNotNull(surface)
            if (this.rotation != rotationDegrees) {
                surfaceTextureHelper?.setFrameRotation(rotationDegrees)
                this.rotation = rotationDegrees
            }

            if (this.width != bitmap.width || this.height != bitmap.height) {
                surfaceTextureHelper?.setTextureSize(bitmap.width, bitmap.height)
                this.width = bitmap.width
                this.height = bitmap.height
            }

            surfaceTextureHelper?.handler?.post {
                val canvas = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
                    surface?.lockHardwareCanvas()
                } else {
                    surface?.lockCanvas(null)
                }

                if (canvas != null) {
                    canvas.drawBitmap(bitmap, Matrix(), Paint())
                    surface?.unlockCanvasAndPost(canvas)
                }
            }
        }
    }
}

https://github.com/livekit/client-sdk-android/blob/c1e207c30fce9499a534e13c63a59f26215f0af4/livekit-android-sdk/src/main/java/io/livekit/android/room/track/video/BitmapFrameCapturer.kt
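
For completeness, wiring this capturer into a video track might look like the sketch below. This assumes a reasonably recent google-webrtc build where VideoSource exposes its CapturerObserver; factory, eglBase and the track id are placeholders:

import android.content.Context
import org.webrtc.EglBase
import org.webrtc.PeerConnectionFactory
import org.webrtc.SurfaceTextureHelper
import org.webrtc.VideoTrack

// Sketch: feed BitmapFrameCapturer into a WebRTC video track.
fun createBitmapTrack(
    factory: PeerConnectionFactory,
    eglBase: EglBase,
    context: Context,
): Pair<VideoTrack, BitmapFrameCapturer> {
    val source = factory.createVideoSource(/* isScreencast = */ false)
    val helper = SurfaceTextureHelper.create("BitmapCapturerThread", eglBase.eglBaseContext)
    val capturer = BitmapFrameCapturer()
    capturer.initialize(helper, context, source.capturerObserver)
    // Width/height/framerate are ignored by this capturer; sizes follow the bitmaps.
    capturer.startCapture(0, 0, 30)
    val track = factory.createVideoTrack("bitmap0", source)
    return track to capturer
}

// Then, whenever a new image is available:
//     capturer.pushBitmap(bitmap, /* rotationDegrees = */ 0)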
