OpenCV: how do I get real-world points from the camera's intrinsic and extrinsic parameters?

xoshrz7s · published 2023-04-21 in Other
Follow (0) | Answers (1) | Views (119)

I am using OpenCV with a camera to measure the length of an object. (The object lies on a plane.)
For camera calibration I used a chessboard.

// position array of checker corners in real world coordinate
3d_checker_position 

// position array of checker corners in image coordinate
2d_checker_position

// find camera matrix, distortion coefficients,
// camera rotation vector, camera translation vector
// (pseudocode: the real cv.calibrateCamera also takes the image size
// and returns the reprojection error plus per-view rvecs/tvecs)
camera_mat, dist_coeffs, rvec, tvec = cv.calibrateCamera(3d_checker_position, 2d_checker_position)

// 3D -> 2D
test_2d_checker_position 
  = cv.projectPoints(3d_checker_position, rvec, tvec, camera_mat, dist_coeffs)
// test_2d_checker_position == 2d_checker_position

// 2D -> 3D
test_3d_checker_position 
  = what_function_should_I_use(2d_checker_position, rvec, tvec, camera_mat, dist_coeffs)

How should what_function_should_I_use be implemented, or does OpenCV already provide such a function?

Edit 2023-04-07

To fully understand the world-to-image transformation, I first implemented 3D -> 2D by hand, based on the pinhole-model equation below.

sP = I * E * W

where P is the 2-channel (x, y) image point, I is the intrinsic matrix (camera matrix), E is the extrinsic matrix (the rigid transformation), and W is the 3-channel (x, y, z) world point.
Below is the implementation from 3D to 2D. s is the scale factor that converts the homogeneous coordinate into an image coordinate.

// Firstly, assumed that intrinsic parameters(camera matrix), 
// extrinsic parameters(rotation and translation vector), 
// and distortion parameters are pre-calculated from 
// cv::calibrateCamera function.
cv::Mat intrinsic; // (3x3)
cv::Mat rvec; // (3x1)
cv::Mat tvec; // (3x1) 
std::vector<float> distortion;
// ...

// Secondly, make an extrinsic matrix which is [R | t].
cv::Mat extrinsic = cv::Mat(cv::Size(4, 3), CV_64F);
cv::Mat rot_mat;
cv::Rodrigues(rvec, rot_mat);
rot_mat.copyTo(extrinsic(cv::Rect(0, 0, 3, 3)));
tvec.copyTo(extrinsic(cv::Rect(3, 0, 1, 3)));

// Thirdly, make a pinhole camera model matrix 
// by merging intrinsic and extrinsic parameters.
cv::Mat pinhole_model = intrinsic * extrinsic;

// Finally, convert 3D point to 2D point.

cv::Mat world_point_homogeneous = cv::Mat(cv::Size(1, 4), CV_64F);
world_point_homogeneous.at<double>(0, 0) = world_point.x;
world_point_homogeneous.at<double>(1, 0) = world_point.y;
world_point_homogeneous.at<double>(2, 0) = world_point.z;
world_point_homogeneous.at<double>(3, 0) = 1;

cv::Mat projected_homogeneous 
  = pinhole_model * world_point_homogeneous;

// Convert homogeneous coordinate to image coordinate.
// This equals dividing 's' from 'P' in the above equation.
const double w = projected_homogeneous.at<double>(2, 0);
cv::Point2d image_point;
image_point.x = projected_homogeneous.at<double>(0, 0) / w;
image_point.y = projected_homogeneous.at<double>(1, 0) / w;

Result

The code above does convert a 3D point to a 2D point, and the result is very close to that of cv::projectPoints().

Questions

1. When and how should the distortion parameters be applied in this pipeline?
2. Back to the original problem: before converting a 2D point to a 3D point, I need the w factor (s in the equation above). How can I obtain it?

gojuced7

I implemented the solution myself. As @fana mentioned, a 3D point cannot be recovered from a 2D point without fixing the z value. In my case, the target object in the image lies on the same plane as the chessboard, which means its z value equals the chessboard's z value used during camera calibration.
Concretely, a 2D point is converted to 3D through the following pinhole-camera-model formula:

s * [u, v, 1]^T = K * [R | t] * [X, Y, Z, 1]^T

where (u, v) is the point in image coordinates, (X, Y, Z) is the point in world coordinates, the left-hand matrix K is the intrinsic matrix, and the right-hand matrix [R | t] is the extrinsic matrix.
For brevity, call the product of the intrinsic and extrinsic matrices P (with elements Pij).

Since Z is constant (0 in this case), this yields a pair of simultaneous linear equations in X and Y:

(P20*u - P00) * X + (P21*u - P01) * Y = P03 - P23*u
(P20*v - P10) * X + (P21*v - P11) * Y = P13 - P23*v

// Assume world z is 0
std::vector<cv::Point3d> unproject(
    const std::vector<cv::Point2d>& points,
    cv::Mat rvec, cv::Mat tvec,
    cv::Mat& intrinsic,
    const std::vector<float>& distortion) {

  // Reconstruct pinhole-camera model
  cv::Mat transformation = cv::Mat(cv::Size(4, 3), CV_64F);
  cv::Mat rot_mat;
  cv::Rodrigues(rvec, rot_mat);
  rot_mat.copyTo(transformation(cv::Rect(0, 0, 3, 3)));
  tvec.copyTo(transformation(cv::Rect(3, 0, 1, 3)));
  cv::Mat new_intrinsic = cv::getOptimalNewCameraMatrix(intrinsic, distortion, cv::Size(800, 600), 0);
  cv::Mat pinhole_model = new_intrinsic * transformation;

  std::vector<cv::Point3d> world_points(points.size());
  for (size_t i = 0; i < points.size(); i++) {
    const double a = pinhole_model.at<double>(2, 0) * points[i].x - pinhole_model.at<double>(0, 0);
    const double b = pinhole_model.at<double>(2, 1) * points[i].x - pinhole_model.at<double>(0, 1);
    const double c = pinhole_model.at<double>(0, 3) - pinhole_model.at<double>(2, 3) * points[i].x;
    const double d = pinhole_model.at<double>(2, 0) * points[i].y - pinhole_model.at<double>(1, 0);
    const double e = pinhole_model.at<double>(2, 1) * points[i].y - pinhole_model.at<double>(1, 1);
    const double f = pinhole_model.at<double>(1, 3) - pinhole_model.at<double>(2, 3) * points[i].y;

    double world_x, world_y;
    world_y = (a * f - c * d) / (a * e - b * d);
    world_x = (c - world_y * b) / a;
    world_points[i].x = world_x; 
    world_points[i].y = world_y;
    world_points[i].z = 0;
    std::cout << "image point " << points[i] << " to " << world_points[i] << std::endl;
  }
  return world_points;
}

The cv::getOptimalNewCameraMatrix function updates the intrinsic matrix using the distortion coefficients, but I have not verified that it works correctly in this solution.
