Given a set of 2D points, how can I apply the inverse of undistortPoints?
I have the camera intrinsics and distCoeffs, and would like (for example) to create a square and distort it as if the camera had viewed it through the lens.
I found a 'distort' patch here: http://code.opencv.org/issues/1387, but it seems it is only good for images; I want to work on sparse points.
7 Answers

alen0pnh1#
This question is rather old, but since I ended up here from a Google search without seeing a neat answer, I decided to answer it anyway.

There is a function called projectPoints that does exactly this. The C version is used internally by OpenCV when it estimates camera parameters with functions like calibrateCamera and stereoCalibrate.

EDIT: To use 2D points as input, we can set all z-coordinates to 1 with convertPointsToHomogeneous and use projectPoints with no rotation and no translation.

w6mmgewl2#
A simple solution is to use initUndistortRectifyMap to obtain a map from undistorted coordinates to distorted coordinates.

EDIT, to clarify that the code is correct, quoting the documentation of initUndistortRectifyMap: for each pixel (u, v) in the destination (corrected and rectified) image, the function computes the corresponding coordinates in the source image (that is, in the original image from the camera):
map_x(u, v) = x'' · f_x + c_x
map_y(u, v) = y'' · f_y + c_y
brqmpdu13#
undistortPoints is simply an inverted version of projectPoints. In my case I wanted to do the following:

Undistort points:
This restores the points to coordinates very close to those of the original image, but without distortion. This is the default behaviour of the cv::undistort() function.

Re-distort points:
The little trick here is to first project the points onto the z=1 plane with a linear camera model. Then you must project them with the original camera model.

I found these useful; I hope they work for you too.
yruzcnhs4#
I had the same need. Here is a possible solution:
lf3rwulv5#
For those still searching, here is a simple Python function that distorts the points back, with an example:
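The function and its example were not preserved in this copy of the answer. A sketch of such a function, applying the standard OpenCV radial/tangential distortion model directly; the calibration values in the example are made up.

```python
import numpy as np

def distort_points(points, K, dist):
    """Apply the OpenCV radial/tangential model to undistorted pixel points.

    points: (N, 2) array of undistorted pixel coordinates.
    K: 3x3 camera matrix; dist: (k1, k2, p1, p2, k3).
    """
    k1, k2, p1, p2, k3 = dist
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Normalize to ideal coordinates.
    x = (points[:, 0] - cx) / fx
    y = (points[:, 1] - cy) / fy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # Radial plus tangential terms, as in the OpenCV docs.
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Back to pixel coordinates.
    return np.column_stack((x_d * fx + cx, y_d * fy + cy))

# Example with hypothetical calibration values:
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.array([-0.2, 0.05, 0.001, 0.001, 0.0])
pts = np.array([[100.0, 80.0], [400.0, 300.0]])
print(distort_points(pts, K, dist))
```

A point at the principal point is returned unchanged, since all distortion terms vanish at r = 0.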
bqjvbblv6#
This question and its related questions on SO have been around for nearly a decade, but there still isn't an answer that satisfies the criteria below, so I'm proposing a new answer.
Preliminaries
It is important to distinguish between ideal coordinates (also called 'normalized' or 'sensor' coordinates), which are the input variables to the distortion model ('x' and 'y' in the OpenCV docs), and observed coordinates (also called 'image' coordinates, 'u' and 'v' in the OpenCV docs). Ideal coordinates have been normalized by the intrinsic parameters, so they are scaled by the focal length and are relative to the image centroid at (cx, cy). This is important to point out because the undistortPoints() method can return either ideal or observed coordinates depending on the input arguments.

undistortPoints() can essentially do any combination of two things: remove distortions and apply a rotational transformation, with the output either in ideal or observed coordinates, depending on whether a projection mat (InputArray P) is provided in the input. The input coordinates (InputArray src) for undistortPoints() are always in observed or image coordinates.

At a high level, undistortPoints() converts the input coordinates from observed to ideal coordinates and uses an iterative process to remove distortions from the ideal or normalized points. The reason the process is iterative is that the OpenCV distortion model is not easy to invert analytically.

In the example below, we use undistortPoints() twice. First, we apply a reverse rotational transformation to undo image rectification. This step can be skipped if you are not working with rectified images. The output of this first step is in observed coordinates, so we use undistortPoints() again to convert these to ideal coordinates. The conversion to ideal coordinates makes setting up the input for projectPoints() easier (which we use to apply the distortions). With the ideal coordinates, we can simply convert them to homogeneous by appending a 1 to each point. This is equivalent to projecting the points to a plane in 3D world coordinates with a linear camera model.

Currently there isn't a method in OpenCV to apply distortions to a set of ideal coordinates (with the exception of fisheye distortions, using distort()), so we employ the projectPoints() method, which can apply distortions as well as transformations as part of its projection algorithm. The tricky part about using projectPoints() is that its input is in terms of world or model coordinates in 3D, which is why we homogenized the output of the second use of undistortPoints(). By using projectPoints() with a dummy, zero-valued rotation vector (InputArray rvec) and translation vector (InputArray tvec), the result is simply a distorted set of coordinates, conveniently output in observed or image coordinates.

Some helpful links
Difference between undistortPoints() and projectPoints() in OpenCV
https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga1019495a2c8d1743ed5cc23fa0daff8c
https://docs.opencv.org/3.4/da/d54/group__imgproc__transform.html#ga55c716492470bfe86b0ee9bf3a1f0f7e
Re-distort points with camera intrinsics/extrinsics
https://stackoverflow.com/questions/28678985/exact-definition-of-the-matrices-in-opencv-stereorectify#:~:text=Normally%20the%20definition%20of%20a,matrix%20with%20the%20extrinsic%20parameters
https://docs.opencv.org/4.x/db/d58/group__calib3d__fisheye.html#ga75d8877a98e38d0b29b6892c5f8d7765
https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html#ga617b1685d4059c6040827800e72ad2b6
Does OpenCV's undistortPoints also rectify them?
Removing distortions in rectified image coordinates
Before providing the solution for recovering the original image coordinates with distortions, we provide a short snippet that converts the original distorted image coordinates to the corresponding rectified, undistorted coordinates, which can be used for testing the reverse solution below.

The rotation matrix R1 and the projection matrix P1 come from stereoRectify(). The intrinsic parameters M1 and distortion parameters D1 come from stereoCalibrate().
Re-distorting and unrectifying points to recover the original image coordinates
We will need three mats to reverse the rectification: the inverse of the rectification rotation matrix from stereoRectify(), R1, and two others to 'swap' the P1 and M1 projections that happen in undistortPoints(). P1_prime is the rotation-matrix sub-portion of the projection matrix, and M1_prime converts the rectification rotation matrix into a projection matrix with no translation. Note this only works if the output of stereoRectify() has no translation, i.e. the last column of P1 is zeros, which can be easily verified.

With these mats, the reversal can proceed as follows.
To test the results, we can compare them to the benchmark values.
uqjltbpv7#
Here is main.cpp. It is self-contained; it needs nothing but OpenCV. I don't remember where I found it, but it works and I use it in my project. The program consumes a standard set of chessboard images and generates json/xml files with all of the camera distortion parameters.