android – How to replicate OpenCV's warpPerspective (perspective transform) with OpenGL

I've done image warping using OpenCV in Python and C, and seen the Coca-Cola logo warped into the corners I chose:

[image: the warped Coca-Cola logo result]

Using the following image:

[image: the running track image]

And this one:

[image: the Coca-Cola logo image]

Full album with transition pics and description here

I need to do this exact thing, but in OpenGL. I will have:

>the corners inside which I want to map the warped image
>the homography matrix for the transformation that maps the logo image into its place inside the final image (as used with OpenCV's
warpPerspective; a sketch of how such a matrix is obtained follows this list), like this:

[[  2.59952324e+00,   3.33170976e-01,  -2.17014066e+02],
[  8.64133587e-01,   1.82580111e+00,  -3.20053715e+02],
[  2.78910149e-03,   4.47911310e-05,   1.00000000e+00]]

>the main image (the running track image here)
>the image to overlay (the Coca-Cola image here)
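
For reference, here is a minimal sketch of how such a homography matrix is typically computed with OpenCV's C++ API; the corner coordinates below are made-up placeholders, not values from the question:

#include <opencv2/imgproc.hpp>
#include <iostream>

int main() {
    // Corners of the logo image (source)...
    std::vector<cv::Point2f> src = {
        {0.f, 0.f}, {512.f, 0.f}, {512.f, 512.f}, {0.f, 512.f}};
    // ...and the corners where it should land in the final image (destination).
    // These values are hypothetical.
    std::vector<cv::Point2f> dst = {
        {100.f, 200.f}, {400.f, 180.f}, {420.f, 450.f}, {90.f, 480.f}};

    // Four point pairs give an exact solution; for more than four,
    // cv::findHomography (calib3d) does a least-squares / RANSAC fit.
    cv::Mat H = cv::getPerspectiveTransform(src, dst); // 3x3, CV_64F
    std::cout << H << std::endl;
}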

Is this possible? I've read a lot and have started on OpenGL basics tutorials, but can it be done from what I have? Would an OpenGL implementation be faster, say around 10 ms or so?

I'm currently playing with this tutorial:
http://ogldev.atspace.co.uk/www/tutorial12/tutorial12.html
Am I going in the right direction? Total OpenGL newbie here, so please bear with me. Thanks.

Solution:

After trying a number of solutions proposed here and elsewhere, I ended up solving this by writing a fragment shader that replicates what 'warpPerspective' does.

The fragment shader code looks something like this:

varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;

// NOTE: you will need to pass the INVERSE of the homography matrix, as well as 
// the width and height of your image as uniforms!
uniform highp mat3 inverseHomographyMatrix;
uniform highp float width;
uniform highp float height;

void main()
{
   // Texture coordinates will run [0,1],[0,1];
   // Convert to "real world" coordinates
   highp vec3 frameCoordinate = vec3(textureCoordinate.x * width, textureCoordinate.y * height, 1.0);

   // Compute the homogeneous scale factor: 1 / (third matrix row . frameCoordinate)
   // (component-wise product below, then summed)
   highp vec3 m = inverseHomographyMatrix[2] * frameCoordinate;
   highp float zed = 1.0 / (m.x + m.y + m.z);
   frameCoordinate = frameCoordinate * zed;

   // Determine translated x and y coordinates
   highp float xTrans = inverseHomographyMatrix[0][0] * frameCoordinate.x + inverseHomographyMatrix[0][1] * frameCoordinate.y + inverseHomographyMatrix[0][2] * frameCoordinate.z;
   highp float yTrans = inverseHomographyMatrix[1][0] * frameCoordinate.x + inverseHomographyMatrix[1][1] * frameCoordinate.y + inverseHomographyMatrix[1][2] * frameCoordinate.z;

   // Normalize back to [0,1],[0,1] space
   highp vec2 coords = vec2(xTrans / width, yTrans / height);

   // Sample the texture if we're mapping within the image, otherwise set color to black
   if (coords.x >= 0.0 && coords.x <= 1.0 && coords.y >= 0.0 && coords.y <= 1.0) {
       gl_FragColor = texture2D(inputImageTexture, coords);
   } else {
       gl_FragColor = vec4(0.0,0.0,0.0,0.0);
   }
}

Note that the homography matrix we are passing in here is the INVERSE homography matrix! You have to invert the homography matrix that you would pass into 'warpPerspective' – otherwise this code will not work.
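
A host-side sketch of that inversion and upload, in C++ with OpenCV and OpenGL ES 2.0; the uniform names match the shader above, and the width/height values are whatever your image size is. One subtlety (my reading, not stated in the answer): OpenCV stores matrices row-major while GLSL matrices are column-major, so uploading the row-major data with transpose set to GL_FALSE (the only value ES 2.0 allows) is what makes `inverseHomographyMatrix[0]` in the shader read row 0 of the matrix, exactly as the fragment shader's indexing expects:

#include <opencv2/core.hpp>
#include <GLES2/gl2.h>

// Hypothetical helper: 'program' is the linked shader program, already
// bound with glUseProgram; H is the 3x3 CV_64F homography you would
// otherwise hand to warpPerspective.
void uploadInverseHomography(GLuint program, const cv::Mat& H,
                             float imageWidth, float imageHeight) {
    // Invert, then convert to float for the GL uniform.
    cv::Mat Hinv64 = H.inv();
    cv::Mat Hinv;
    Hinv64.convertTo(Hinv, CV_32F);

    // Row-major data + transpose == GL_FALSE: the shader sees
    // inverseHomographyMatrix[0] as row 0 of H^-1.
    glUniformMatrix3fv(glGetUniformLocation(program, "inverseHomographyMatrix"),
                       1, GL_FALSE, Hinv.ptr<GLfloat>(0));

    glUniform1f(glGetUniformLocation(program, "width"), imageWidth);
    glUniform1f(glGetUniformLocation(program, "height"), imageHeight);
}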

The vertex shader does nothing except pass the coordinates through:

// Vertex shader
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

varying vec2 textureCoordinate;

void main() {
   // Nothing happens in the vertex shader
   textureCoordinate = inputTextureCoordinate.xy;
   gl_Position = position;
}

Pass in unaltered texture coordinates and position coordinates (i.e. textureCoordinates = [(0,0), (0,1), (1,0), (1,1)] and positionCoordinates = [(-1,-1), (-1,1), (1,-1), (1,1)] for a triangle strip), and this should work! A sketch of the corresponding attribute setup and draw call follows.
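
For completeness, a minimal host-side sketch of that setup in C++ against OpenGL ES 2.0. The attribute names match the vertex shader above; the program handle and the bound input texture are assumed to exist already:

#include <GLES2/gl2.h>

// Hypothetical helper: draws the full-screen quad described above.
void drawFullScreenQuad(GLuint program) {
    static const GLfloat positions[] = {
        -1.f, -1.f,   -1.f, 1.f,   1.f, -1.f,   1.f, 1.f };
    static const GLfloat texCoords[] = {
         0.f, 0.f,    0.f, 1.f,    1.f, 0.f,    1.f, 1.f };

    glUseProgram(program);

    GLint posLoc = glGetAttribLocation(program, "position");
    GLint texLoc = glGetAttribLocation(program, "inputTextureCoordinate");

    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 0, positions);
    glEnableVertexAttribArray(texLoc);
    glVertexAttribPointer(texLoc, 2, GL_FLOAT, GL_FALSE, 0, texCoords);

    // Four vertices as a triangle strip cover the viewport; the fragment
    // shader does all of the warping per pixel.
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}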
