I'm writing a ray tracer in C++ and I'm having trouble figuring out how to correctly draw multiple objects (spheres, in this case). I cast a ray for every pixel of the configured screen resolution and, for each ray, iterate over every object in the scene. Here is the code:
for (unsigned int y = 0; y < m_resY; y++) {
    for (unsigned int x = 0; x < m_resX; x++) {
        if (x % 2 == 0 && y % 2 == 0) {
            m_buffer[y * m_resX + x].color = Color(-1, -1, -1);
            continue;
        }
        double u = (double(x) * 2 / (m_resX - 1)) - 1;
        double v = (double(y) * 2 / (m_resY - 1)) - 1;
        RayTracer::Ray r = m_camera.ray(u, v);
        double closestDist = std::numeric_limits<double>::max();
        for (auto &obj : m_objects) {
            if (obj->hits(r)) {
                if (std::abs(r.getHit().dist) < std::abs(closestDist)) {
                    closestDist = r.getHit().dist;
                    r.setClosestObj(obj.get());
                    r.getClosestHit() = r.getHit();
                }
            } else if (m_buffer[y * m_resX + x].computed == false) {
                m_buffer[y * m_resX + x].color = Color(50, 50, 50);
            }
        }
        if (r.getClosestObj() != nullptr) {
            m_light->computeLight(r, *r.getClosestObj(), m_objects);
            m_buffer[y * m_resX + x].color = r.m_finalColor;
            m_buffer[y * m_resX + x].computed = true;
        }
    }
}
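Independent of the depth question, two details in a loop like this are worth tightening: comparing `std::abs(dist)` accepts intersections *behind* the camera (a negative `dist` with a small magnitude can win), and writing the background color inside the per-object loop means it depends on iteration order rather than on "no object was hit". A minimal, self-contained sketch of the usual closest-hit selection, using stand-in types (`Vec3`, `Sphere`, `Ray` here are placeholders for the question's `Math::Vector3` and object classes, not its real API):

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <optional>
#include <vector>

// Minimal stand-ins for the question's types (hypothetical).
struct Vec3 { double x, y, z; };
static double dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

struct Ray    { Vec3 origin, dir; };
struct Sphere { Vec3 center; double radius; };

// Nearest *positive* hit distance along the ray, or nullopt.
std::optional<double> hitDist(const Sphere &s, const Ray &r) {
    Vec3 oc = sub(r.origin, s.center);
    double a = dot(r.dir, r.dir);
    double b = 2.0 * dot(oc, r.dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0) return std::nullopt;
    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    if (t <= 0)                                      // near root is behind the origin,
        t = (-b + std::sqrt(disc)) / (2.0 * a);      // so try the far root
    if (t <= 0) return std::nullopt;                 // sphere entirely behind the ray
    return t;
}

// Index of the closest object in front of the camera, or -1 for
// background. The background decision happens only after every
// object has been tested.
int closestSphere(const std::vector<Sphere> &scene, const Ray &r, double &outDist) {
    int best = -1;
    double bestT = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < scene.size(); ++i) {
        auto t = hitDist(scene[i], r);
        if (t && *t < bestT) {
            bestT = *t;
            best = int(i);
        }
    }
    outDist = bestT;
    return best;
}
```

With this shape, "paint the background" becomes a single check on the return value (`best == -1`) after the loop, instead of a side effect inside it.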
Here is the hits method:
bool Sphere::hits(Ray &ray) const
{
    Math::Vector3<double> ocp = ray.m_origin - m_center;
    Math::Vector3<double> oc = Math::Vector3<double>(ocp.x, ocp.y, ocp.z);
    double a = ray.m_direction.dot(ray.m_direction);
    double b = 2.0 * oc.dot(ray.m_direction);
    double c = oc.dot(oc) - m_radius * m_radius;
    double discriminant = b * b - 4.0 * a * c;
    if (discriminant < 0)
        return false;
    ray.getHit().dist = (-b - sqrt(discriminant)) / (2.0 * a);
    ray.getHit().hitPosition = ray.m_origin + ray.m_direction * ray.getHit().dist;
    ray.getHit().normal = ray.getHit().hitPosition - m_center;
    return true;
}
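One small thing to watch in this method: the normal it stores, `hitPosition - m_center`, has length equal to the sphere's radius, not length 1, which will skew any lighting math that assumes a unit normal. For a sphere there is a cheap fix: dividing by the radius is enough, no general normalize call needed. A tiny sketch (the `Vec3` type and `sphereNormal` helper are illustrative, not part of the question's codebase):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// A point on a sphere's surface is exactly `radius` away from the
// center, so (point - center) / radius is already a unit vector.
Vec3 sphereNormal(const Vec3 &hit, const Vec3 &center, double radius) {
    return {(hit.x - center.x) / radius,
            (hit.y - center.y) / radius,
            (hit.z - center.z) / radius};
}

double length(const Vec3 &v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}
```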
And this is the output:
I have already tried normalizing the ray direction vector, using the discriminant for the comparison, and comparing [-b - sqrt...] against [-b + sqrt...], but the same problem persists.
1 Answer
I found something that works: inverting the objects' z (depth) position somehow makes the distances come out correctly.
My camera is at (0, 0, 0) and shoots rays with z = 1, and the objects were at z = -1. Now that the objects are at z = 1, it produces the correct render.
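One way to see why flipping z helped: with the camera at the origin shooting along +z, a sphere at z = -1 is *behind* the rays, so the quadratic in `Sphere::hits` still has real roots, but both are negative, and since the code never rejects negative `dist` (and the outer loop compares `std::abs(dist)`), those behind-the-camera "hits" get rendered. A minimal numeric sketch of the same quadratic (the `nearRoot` helper is illustrative, hard-coded to a camera at the origin with direction (0, 0, 1)):

```cpp
#include <cmath>

// Solve the same quadratic as Sphere::hits for a camera at the
// origin with direction (0, 0, 1), against a sphere of the given
// radius centered at (0, 0, zCenter). Returns the near root.
double nearRoot(double zCenter, double radius) {
    // oc = origin - center = (0, 0, -zCenter); a = dir.dot(dir) = 1
    double a = 1.0;
    double b = 2.0 * (-zCenter);                  // 2 * oc.dot(dir)
    double c = zCenter * zCenter - radius * radius;
    double disc = b * b - 4.0 * a * c;
    return (-b - std::sqrt(disc)) / (2.0 * a);    // negative if the
}                                                 // sphere is behind us
```

For a radius-0.5 sphere, `nearRoot(-1.0, 0.5)` is -1.5 (a hit behind the camera, the broken setup) while `nearRoot(1.0, 0.5)` is 0.5 (a genuine hit in front, the working setup). So moving the objects to z = 1 works, but the more general fix is to discard intersections with `dist <= 0` in `hits`.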