OpenGL: correct GLSL affine texture mapping

kognpnkq · asked 2022-12-03

I'm trying to code correct 2D affine texture mapping in GLSL.
Illustration:

...none of these images is correct for my purposes. The one on the right (marked "correct") has perspective correction, which I don't want. So the solution from Getting to know the Q texture coordinate (without further improvement) isn't what I'm after.
I simply want to "stretch" the texture inside the quadrilateral, like this:

where the quad is made of two triangles. Any suggestions?

rnmwe5a2 · 1#

This approach works well as long as you have a trapezoid whose parallel edges are aligned with a local axis.
GLSL:

varying vec2 shiftedPosition, width_height;

#ifdef VERTEX
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    shiftedPosition = gl_MultiTexCoord0.xy; // left and bottom edges zeroed.
    width_height = gl_MultiTexCoord1.xy;
}
#endif

#ifdef FRAGMENT
uniform sampler2D _MainTex;
void main() {
    gl_FragColor = texture2D(_MainTex, shiftedPosition / width_height);
}
#endif

C#:

// Zero out the left and bottom edges, 
// leaving a right trapezoid with two sides on the axes and a vertex at the origin.
var shiftedPositions = new Vector2[] {
    Vector2.zero,
    new Vector2(0, vertices[1].y - vertices[0].y),
    new Vector2(vertices[2].x - vertices[1].x, vertices[2].y - vertices[3].y),
    new Vector2(vertices[3].x - vertices[0].x, 0)
};
mesh.uv = shiftedPositions;

var widths_heights = new Vector2[4];
widths_heights[0].x = widths_heights[3].x = shiftedPositions[3].x;
widths_heights[1].x = widths_heights[2].x = shiftedPositions[2].x;
widths_heights[0].y = widths_heights[1].y = shiftedPositions[1].y;
widths_heights[2].y = widths_heights[3].y = shiftedPositions[2].y;
mesh.uv2 = widths_heights;
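
As a sanity check, here is a small Python sketch of the same setup (my own example trapezoid; the bottom-left, top-left, top-right, bottom-right vertex order is an assumption matching the zeroing in the C# above):

```python
# Vertex order assumed: 0 = bottom-left, 1 = top-left, 2 = top-right, 3 = bottom-right.
vertices = [(-1.0, 0.0), (-0.5, 2.0), (0.5, 2.0), (1.0, 0.0)]  # example trapezoid
(x0, y0), (x1, y1), (x2, y2), (x3, y3) = vertices

# Zero out the left and bottom edges (mesh.uv in the C# code).
shifted = [
    (0.0, 0.0),
    (0.0, y1 - y0),
    (x2 - x1, y2 - y3),
    (x3 - x0, 0.0),
]

# Per-vertex width/height pairs (mesh.uv2 in the C# code).
wh = [None] * 4
wh[0] = (shifted[3][0], shifted[1][1])
wh[1] = (shifted[2][0], shifted[1][1])
wh[2] = (shifted[2][0], shifted[2][1])
wh[3] = (shifted[3][0], shifted[2][1])

# The fragment shader computes shiftedPosition / width_height per pixel;
# at the vertices this lands exactly on the texture corners.
uv = [(s[0] / w[0], s[1] / w[1]) for s, w in zip(shifted, wh)]
print(uv)  # (0,0), (0,1), (1,1), (1,0)
```

Between the vertices the interpolated division differs from plain per-triangle UVs, which is what removes the diagonal seam.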

ih99xse1 · 2#

I recently managed to come up with a general solution to this problem for any kind of quadrilateral. The math and GLSL might help. There's a working demo in Java (running on Android), but it's compact and readable and should be easy to port to Unity or iOS: http://www.bitlush.com/posts/arbitrary-quadrilaterals-in-opengl-es-2-0

hxzsmxv2 · 3#

In case anyone is still interested, here is a C# implementation that takes a quad defined by clockwise screen vertices (x0,y0), (x1,y1) ... (x3,y3) and an arbitrary pixel at (x,y), and calculates the u and v of that pixel. It was originally written to render an arbitrary quad into a texture, but it's easy to split the algorithm across the CPU, vertex and pixel shaders; I've commented the code accordingly.

float Ax, Bx, Cx, Dx, Ay, By, Cy, Dy, A, B, C;

// These are all uniforms for a given quad. Calculate on CPU.
Ax = (x3 - x0) - (x2 - x1);
Bx = (x0 - x1);
Cx = (x2 - x1);
Dx = x1;

Ay = (y3 - y0) - (y2 - y1);
By = (y0 - y1);
Cy = (y2 - y1);
Dy = y1;

float ByCx_plus_AyDx_minus_BxCy_minus_AxDy = (By * Cx) + (Ay * Dx) - (Bx * Cy) - (Ax * Dy);
float ByDx_minus_BxDy = (By * Dx) - (Bx * Dy);

A = (Ay * Cx) - (Ax * Cy);

// These must be calculated per-vertex, and passed through as interpolated values to the pixel shader.
B = (Ax * y) + ByCx_plus_AyDx_minus_BxCy_minus_AxDy - (Ay * x);
C = (Bx * y) + ByDx_minus_BxDy - (By * x);

// These must be calculated per-pixel using the interpolated B, C and x from the vertex shader along with some of the other uniforms.
u = ((-B) - Mathf.Sqrt((B * B) - (4.0f * A * C))) / (A * 2.0f);
v = (x - (u * Cx) - Dx) / ((u * Ax) + Bx);
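
To check the algebra, here is a plain-Python port (my own sketch; the quad and test point are arbitrary). One caveat: which root of the quadratic is valid depends on the winding of your quad, so this sketch picks the root that lands in [0, 1] rather than hard-coding the minus sign:

```python
import math

def quad_uv(quad, x, y):
    # quad = [(x0, y0), ..., (x3, y3)]; the implied corner mapping is
    # (u, v) = (0, 0) at vertex 1, (1, 0) at vertex 2,
    # (1, 1) at vertex 3 and (0, 1) at vertex 0.
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    Ax = (x3 - x0) - (x2 - x1); Bx = x0 - x1; Cx = x2 - x1; Dx = x1
    Ay = (y3 - y0) - (y2 - y1); By = y0 - y1; Cy = y2 - y1; Dy = y1

    A = Ay * Cx - Ax * Cy
    B = Ax * y - Ay * x + (By * Cx + Ay * Dx - Bx * Cy - Ax * Dy)
    C = Bx * y - By * x + (By * Dx - Bx * Dy)

    # Solve A*u^2 + B*u + C = 0, keeping the root inside [0, 1].
    r = math.sqrt(B * B - 4.0 * A * C)
    u = next(t for t in ((-B - r) / (2 * A), (-B + r) / (2 * A))
             if -1e-9 <= t <= 1 + 1e-9)
    v = (x - u * Cx - Dx) / (u * Ax + Bx)
    return u, v

# Round trip: map (u0, v0) forward with the bilinear patch, then invert.
quad = [(0.0, 2.0), (0.0, 0.0), (2.0, 0.0), (3.0, 3.0)]
(x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
u0, v0 = 0.5, 0.5
x = ((x3 - x0) - (x2 - x1)) * u0 * v0 + (x0 - x1) * v0 + (x2 - x1) * u0 + x1
y = ((y3 - y0) - (y2 - y1)) * u0 * v0 + (y0 - y1) * v0 + (y2 - y1) * u0 + y1
u, v = quad_uv(quad, x, y)  # recovers (0.5, 0.5)
```

Inverting the bilinear map like this produces exactly the "stretched" look the question asks for, without perspective correction.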

qvsjd97n · 4#

Subdividing the quad into more vertices gives the interpolator more hints.
See this video: https://www.youtube.com/watch?v=8TleepxIORU&feature=youtu.be
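
As a sketch of that idea (my own illustration, not taken from the video): split the quad into an n-by-n grid whose cells are each nearly a parallelogram, so plain per-triangle interpolation inside each cell shows far less distortion than across the single original quad:

```python
def subdivide_quad(quad, n):
    """Bilinearly subdivide a quad into an n-by-n grid.

    quad = [bottom-left, top-left, top-right, bottom-right] as (x, y)
    pairs; returns (positions, uvs) for the (n+1)**2 grid vertices.
    """
    bl, tl, tr, br = quad
    lerp = lambda a, b, t: (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)
    positions, uvs = [], []
    for j in range(n + 1):
        v = j / n
        left, right = lerp(bl, tl, v), lerp(br, tr, v)  # walk up both side edges
        for i in range(n + 1):
            u = i / n
            positions.append(lerp(left, right, u))
            uvs.append((u, v))
    return positions, uvs

positions, uvs = subdivide_quad([(-1.0, 0.0), (-0.5, 2.0), (0.5, 2.0), (1.0, 0.0)], 4)
print(len(positions))  # 25 grid vertices for a 4x4 grid of cells
```

Triangulating each cell as two triangles and rendering with ordinary UVs approximates the stretched mapping; the error shrinks as n grows.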

mv1qrgav · 5#

I had a similar problem (https://gamedev.stackexchange.com/questions/174857/mapping-a-texture-to-a-2d-quadrilateral/174871). On gamedev they suggested using an imaginary Z coordinate, which I compute with the C code below; it seems to work in the general case (not just for trapezoids):

//usual euclidean distance
float distance(int ax, int ay, int bx, int by) {
  int x = ax-bx;
  int y = ay-by;
  return sqrtf((float)(x*x + y*y));
}

void gfx_quad(gfx_t *dst //destination texture, we are rendering into
             ,gfx_t *src //source texture
             ,int *quad  // quadrilateral vertices
             )
{
  int *v = quad; //quad vertices
  float z = 20.0;
  float top = distance(v[0],v[1],v[2],v[3]); //top
  float bot = distance(v[4],v[5],v[6],v[7]); //bottom
  float lft = distance(v[0],v[1],v[4],v[5]); //left
  float rgt = distance(v[2],v[3],v[6],v[7]); //right

  // By default all vertices lie on the screen plane
  float az = 1.0;
  float bz = 1.0;
  float cz = 1.0;
  float dz = 1.0;

  // Move Z away from the screen plane based on the edge-length ratios.
  if (top<bot) {
    az *= top/bot;
    bz *= top/bot;
  } else {
    cz *= bot/top;
    dz *= bot/top;
  }

  if (lft<rgt) {
    az *= lft/rgt;
    cz *= lft/rgt;
  } else {
    bz *= rgt/lft;
    dz *= rgt/lft;
  }

  // draw our quad as two textured triangles
  gfx_textured(dst, src
              , v[0],v[1],az, v[2],v[3],bz, v[4],v[5],cz
              , 0.0,0.0,      1.0,0.0,      0.0,1.0);
  gfx_textured(dst, src
              , v[2],v[3],bz, v[4],v[5],cz, v[6],v[7],dz
              , 1.0,0.0,      0.0,1.0,      1.0,1.0);
}

I use this in software to scale and rotate 2D sprites. For an OpenGL 3D application you would need to do it in the pixel/fragment shader, unless you can map these imaginary az, bz, cz, dz into your actual 3D space and use the usual pipeline. DMGregory gives the exact OpenGL shader code here: https://gamedev.stackexchange.com/questions/148082/how-can-i-fix-zig-zagging-uv-mapping-artifacts-on-a-generated-mesh-that-tapers
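
For reference, here is a hedged Python transcription of the Z assignment (my own sketch, not the author's code; quad_z and the example trapezoid are illustrative):

```python
import math

def quad_z(quad):
    """Assign an imaginary Z to each corner from edge-length ratios.

    quad = [a, b, c, d] as (x, y) pairs, with a-b the top edge and
    c-d the bottom edge, mirroring the layout of the C code above.
    """
    a, b, c, d = quad
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    top, bot = dist(a, b), dist(c, d)
    lft, rgt = dist(a, c), dist(b, d)

    # All vertices start on the screen plane, then the shorter side
    # of each opposing pair is pulled away by the length ratio.
    az = bz = cz = dz = 1.0
    if top < bot:
        az *= top / bot; bz *= top / bot
    else:
        cz *= bot / top; dz *= bot / top
    if lft < rgt:
        az *= lft / rgt; cz *= lft / rgt
    else:
        bz *= rgt / lft; dz *= rgt / lft
    return az, bz, cz, dz

# Symmetric trapezoid: top edge half as long as the bottom edge.
print(quad_z([(-1, 0), (1, 0), (-2, 2), (2, 2)]))  # (0.5, 0.5, 1.0, 1.0)
```

Feeding these Z values into perspective-correct interpolation then reproduces the stretched mapping across both triangles.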

kd3sttzy · 6#

I ran into this problem while trying to implement homography warping in OpenGL. Some of the solutions I found rely on a notion of depth, which wasn't feasible in my case since I'm working with 2D coordinates. My solution is based on this article, and it seems to work in all the cases I could try. I'm leaving it here in case it's useful for someone else, since I couldn't find anything similar. The solution makes the following assumptions:

  • The vertex coordinates are the 4 points of the quadrilateral, ordered bottom-right, top-right, top-left, bottom-left.
  • The coordinates are given in OpenGL's reference system (range [-1, 1], origin at the bottom-left corner).

std::vector<cv::Point2f> points; // the 4 quad vertices, in the order above
// Convert points to homogeneous coordinates to simplify the problem.
Eigen::Vector3f p0(points[0].x, points[0].y, 1);
Eigen::Vector3f p1(points[1].x, points[1].y, 1);
Eigen::Vector3f p2(points[2].x, points[2].y, 1);
Eigen::Vector3f p3(points[3].x, points[3].y, 1);

// Compute the intersection point between the lines described by opposite vertices using cross products. Normalization is only required at the end.
// See https://leimao.github.io/blog/2D-Line-Mathematics-Homogeneous-Coordinates/ for a quick summary of this approach.
auto line1 = p2.cross(p0);
auto line2 = p3.cross(p1);
auto intersection = line1.cross(line2);
intersection = intersection / intersection(2);

// Compute the distance from each point to the intersection.
std::vector<float> distances;
for (const auto &pt : points) {
  auto distance = std::sqrt(std::pow(pt.x - intersection(0), 2) +
                            std::pow(pt.y - intersection(1), 2));
  distances.push_back(distance);
}

// Assumes same order as above.
std::vector<cv::Point2f> texture_coords_unnormalized = {
  {1.0f, 1.0f},
  {1.0f, 0.0f},
  {0.0f, 0.0f},
  {0.0f, 1.0f}
};

std::vector<float> texture_coords;
for (int i = 0; i < texture_coords_unnormalized.size(); ++i) {
  float u_i = texture_coords_unnormalized[i].x;
  float v_i = texture_coords_unnormalized[i].y;
  float d_i = distances.at(i);
  float d_i_2 = distances.at((i + 2) % 4);
  float scale = (d_i + d_i_2) / d_i_2;

  texture_coords.push_back(u_i*scale);
  texture_coords.push_back(v_i*scale);
  texture_coords.push_back(scale);
}

Pass the texture coordinates to the shader (as a vec3). Then:

gl_FragColor = vec4(texture2D(textureSampler, textureCoords.xy/textureCoords.z).rgb, 1.0);
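
The same computation can be sketched without Eigen/OpenCV in a few lines of Python (my transcription; projective_uvq and the symmetric trapezoid are illustrative assumptions):

```python
import math

def projective_uvq(points):
    """Per-vertex (u*q, v*q, q) for a quad, via the diagonal intersection.

    points: four (x, y) pairs ordered bottom-right, top-right, top-left,
    bottom-left, as in the C++ code above.
    """
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    h = [(x, y, 1.0) for x, y in points]
    # Intersection of the two diagonals, in homogeneous coordinates.
    ix, iy, iw = cross(cross(h[2], h[0]), cross(h[3], h[1]))
    ix, iy = ix / iw, iy / iw

    d = [math.hypot(x - ix, y - iy) for x, y in points]
    uv = [(1, 1), (1, 0), (0, 0), (0, 1)]  # same corner order as above
    coords = []
    for i, (u, v) in enumerate(uv):
        scale = (d[i] + d[(i + 2) % 4]) / d[(i + 2) % 4]
        coords.append((u * scale, v * scale, scale))
    return coords

# Symmetric trapezoid: bottom edge twice as long as the top edge.
coords = projective_uvq([(1, -1), (0.5, 1), (-0.5, 1), (-1, -1)])
print(coords)  # q = 3 on the bottom corners, q = 1.5 on the top corners
```

Dividing by q in the fragment shader (textureCoords.xy / textureCoords.z) then reproduces the warp without any 3D depth.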

brtdzjyr · 7#

Thanks for the answers, but after experimenting I found a solution of my own.

The two triangles on the left have (s, t, r, q) UVs computed according to this; the two triangles on the right use a modified version of that perspective correction.
Numbers and shader:

tri1 = [Vec2(-0.5, -1), Vec2(0.5, -1), Vec2(1, 1)]
tri2 = [Vec2(-0.5, -1), Vec2(1, 1), Vec2(-1, 1)]

d1 = length of top edge = 2
d2 = length of bottom edge = 1

tri1_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(d2 / d1, 0, 0, d2 / d1), Vec4(1, 1, 0, 1)]
tri2_uv = [Vec4(0, 0, 0, d2 / d1), Vec4(1, 1, 0, 1), Vec4(0, 1, 0, 1)]
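
A few lines of Python can mimic what the rasterizer does with these coordinates (my own sketch, using the d1 = 2, d2 = 1 values above; lerp4 stands in for screen-space interpolation along a triangle edge):

```python
def lerp4(a, b, t):
    """Screen-space (linear) interpolation of a 4-component UV,
    as the rasterizer does along a triangle edge."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(4))

d1, d2 = 2.0, 1.0  # top and bottom edge lengths from the figure above
bl = (0.0, 0.0, 0.0, d2 / d1)      # bottom-left  (s, t, r, q)
br = (d2 / d1, 0.0, 0.0, d2 / d1)  # bottom-right
tl = (0.0, 1.0, 0.0, 1.0)          # top-left

# Emulate the shader: U = s / q (perspective), V = t (linear).
bottom_mid = lerp4(bl, br, 0.5)
left_mid = lerp4(bl, tl, 0.5)
print(bottom_mid[0] / bottom_mid[3])  # 0.5: U stays uniform along the bottom edge
print(left_mid[1])                    # 0.5: V is linear up the left edge
print(left_mid[1] / left_mid[3])      # 2/3: full q-division would bend V instead
```

The last line shows why V must not be divided by q: full perspective division would compress V toward the short edge, which is exactly the "correct" look the question wants to avoid.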

Only the right-hand triangles are rendered with this GLSL shader (the left side uses the fixed pipeline):

void main()
{
    gl_FragColor = texture2D(colormap, vec2(gl_TexCoord[0].x / gl_TexCoord[0].w, gl_TexCoord[0].y));
}

So only U is perspective-corrected; V stays linear.
