Screen RayTrace/Selection/Culling

Okay everyone…I once again have another problem…yes…involving OpenGL

What I am trying to do is convert 2D screen coordinates to a 3D vector. You might say that I should use the selection buffer (or whatever it's called) to select primitives, but what I want to do is implement my own selection code so that I can get a 3D vector and trace rays, etc.

The problem I am having is this: my FOV is set to 45.0f, but if I build a frustum (using my own software code for the frustum itself), it just does not work right. I project 2 rays:
one 45 degrees to the left, and the other 45 degrees to the right, and it just does not look right. Objects that are still a tiny bit inside the screen are getting culled!

Do I have to multiply the 45 degrees by the aspect ratio (1.3333) and then do the projection to get the correct angle?

And is fovY the same as fovX?

I know the above example is not about selection, but it was an attempt at projecting a ray at a certain angle from the origin, which is what I will be trying to do later…

If I am trying to project a ray in relation to 2D screen coords, do I have to project a ray from the screen back to the origin (negative zNear)? I am clueless.

I don’t know if this makes sense to anyone, but if anyone can shed some light (or a few) I would be very grateful,

Thanks in adv

[This message has been edited by drakaza (edited 07-14-2000).]

>>What I am trying to do is convert 2D screen coordinates to a 3D vector. You might say that I should use the selection buffer (or whatever it's called) to select primitives, but what I want to do is implement my own selection code so that I can get a 3D vector and trace rays, etc.

The problem I am having is this: my FOV is set to 45.0f, but if I build a frustum (using my own software code for the frustum itself), it just does not work right. I project 2 rays:
one 45 degrees to the left, and the other 45 degrees to the right, and it just does not look right. Objects that are still a tiny bit inside the screen are getting culled!<<

Huh? 45 degrees to the left and 45 degrees to the right gives a field of view of 90 degrees!?

>>Do I have to multiply the 45 degrees by the aspect ratio (1.3333) and then do the projection to get the correct angle?
And is fovY the same as fovX?<<

I haven't looked up gluPerspective, but aspect is defined as width/height, so I would assign the FOV to the width, which means for y you take FOV/aspect.

>>If I am trying to project a ray in relation to 2D screen coords, do I have to project a ray from the screen back to the origin (negative zNear)? I am clueless<<

Haven’t done this with OpenGL yet, so my idea would be:

If the projection matrix does not contain any camera transformations (if it does, move them into the modelview matrix!), then shooting rays in LEFT-handed coordinates from the origin (0,0,0) through pixels on the positive zNear plane should solve the projection problem for you automatically.

[This message has been edited by Relic (edited 07-15-2000).]

I thought the FOV you send to gluPerspective is supposed to be half of the full FOV (90 degrees divided by 2 = 45), as far as I know.

Sending 90 to gluPerspective looks pretty bad.

So what you are saying is that the far right of the screen IS 45 degrees to the right of the origin (camera), is that correct?

Because I don’t quite understand the FOV/aspect thing you said…

I would really appreciate any more info

Thank you,

-Drakaza

Ok, I looked it up now.
The correct parameter description of gluPerspective says ‘fovy’ and it means the field of view angle in degrees in the y direction.

“Sending 90 to gluPersp looks pretty bad”

Your interpretation would mean the maximum value you could use for FOV is 90.
Try sending 120 and it should be worse.

I hacked together this small routine, which might help you.

With this initialization in mind,

  glViewport(0, 0, nWidth, nHeight);
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  gluPerspective(45.0, (double) nWidth / (double) nHeight, 1.0, 5.0);
  // glFrustum(-1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 3.0f);
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
  glTranslatef(0.0f, 0.0f, -2.0f);

here is an experimental routine which calculates the positions of screen points
on the zNear projection plane, and the vector from the origin to that plane, in a
left-handed coordinate system like the one the projection uses.

// Assumes nWidth and nHeight hold the viewport size passed to glViewport().
#include <math.h>

#define DEG2RAD (3.14159265f / 180.0f)

// x and y are Windows screen coordinates with origin top-left.
void ScreenToRay(int x, int y, float ray[3])
{
  // Calculate a ray from the origin through the screen pixel.
  float fov;
  float aspect;
  float fWidthHalf;
  float fHeightHalf;
  float xNear;
  float yNear;
  float zNear;
  float invNorm;

  fov = 45.0f; // Insert value from gluPerspective.
  aspect = (float) nWidth / (float) nHeight;

  // Viewer is at origin.
  // Camera is looking down the z-axis.
  // Projection plane is at (0, 0, zNear).
  // gluPerspective() field of view is for the y direction, which means
  // the position of the top right point on the screen is at:
  zNear = 1.0f; // Insert value from gluPerspective.
  yNear = (float) tan(DEG2RAD * fov * 0.5f) * zNear; // tan, not sin: we want a point on the plane.
  xNear = yNear * aspect; // Coordinates on the plane scale linearly with aspect; don't scale the angle.

  // Calculate the position of the pixel on the zNear plane.
  fWidthHalf = (float) nWidth * 0.5f;
  fHeightHalf = (float) nHeight * 0.5f;
  ray[0] = xNear * ((float) x - fWidthHalf) / fWidthHalf;
  ray[1] = yNear * (fHeightHalf - (float) y) / fHeightHalf;
  ray[2] = zNear;

  // Normalize this position vector to get the direction from the origin to that point.
  invNorm = 1.0f / (float) sqrt(ray[0] * ray[0] + ray[1] * ray[1] + ray[2] * ray[2]);
  ray[0] *= invNorm;
  ray[1] *= invNorm;
  ray[2] *= invNorm;

  // Now this is in a left-handed system.
  // To convert the vector to world coordinates, translate to the viewer origin and invert the z component.
}
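That last comment can be spelled out as a small sketch (`RayToWorld` is just an illustrative name): a direction in the left-handed eye system becomes a right-handed world direction by mirroring z, and the ray itself starts at the viewer position.

```c
/* Sketch (illustrative name): convert the left-handed ray from
   ScreenToRay() into OpenGL's right-handed world space.  A direction
   only needs the z mirror; the ray origin is the viewer position. */
void RayToWorld(const float ray[3], const float viewer[3],
                float origin[3], float dir[3])
{
    dir[0] = ray[0];
    dir[1] = ray[1];
    dir[2] = -ray[2];        /* LH -> RH: mirror in the xy-plane */

    origin[0] = viewer[0];   /* translate to the viewer origin */
    origin[1] = viewer[1];
    origin[2] = viewer[2];
}
```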

[This message has been edited by Relic (edited 07-16-2000).]

Wow, that is one hell of a post.
I read it and it makes total sense!!!

I just did not know how to convert it from concept to formulae. I usually stuff up the LH/RH coordinate calculations.

THANK you so very much!!

-Drakaza

I know this topic has been put away for some time, but I came across it and wondered if someone could clear something up for me.

I'm fairly familiar with most of the code from this post, but I haven't heard much about left/right-handed coordinate systems. Will the fact that a vector/ray is in a left-handed coordinate system interfere with any algorithms that use it to do ray-plane/poly intersections with world objects? In other words, do I need to change it to world coords to use it for selecting objects?

Thanks.

Hi Jo,

vectors with the same xyz components point in different directions in a left-handed versus a right-handed coordinate system (they are mirrored in the xy-plane).

If you want your algorithm to work properly, the coordinate system for ray and poly should be the same.

Do the famous three-finger computer graphics aerobics and you’ll see.

I tried this solution paired with an algorithm to intersect a triangle with a ray. My problem now is that when I try rotating the ray with the camera, I get unpredictable results.

Everything works fine if I’m only doing camera translations, but rotations cause “problems”. My aim here is to be able to get the 3d coords of the intersection point of the ray and the first poly it encounters.

There also seems to be a slight margin of error between where the ray intersects the plane and the place that is actually clicked with the mouse. Do I need a ray that's perpendicular to the viewing plane, rather than one that passes through the origin?

Thanks in advance.

[This message has been edited by Jo (edited 10-09-2000).]

Perhaps it helps if you consider that all camera movements can also be expressed as world transformations, which is actually the way OpenGL is designed.
If you want to rotate your virtual camera in one direction, the world has to be rotated in the opposite direction to achieve the same effect.
If the transformations are always applied to the modelview matrix and never to the projection matrix, the small raycast algorithm from above should work.
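One way to make that concrete (a sketch; names are illustrative): keep the camera's rotation around as a 3x3 matrix, i.e. the inverse of the rotation you feed into the modelview matrix, and apply it to the eye-space ray direction to get a world-space direction.

```c
/* Sketch (illustrative names): bring an eye-space ray direction into
   world space by applying the camera's rotation, the inverse of the
   rotation in the modelview matrix.  rot is a row-major 3x3 matrix. */
void RotateRay(const float rot[9], const float eyeDir[3], float worldDir[3])
{
    worldDir[0] = rot[0] * eyeDir[0] + rot[1] * eyeDir[1] + rot[2] * eyeDir[2];
    worldDir[1] = rot[3] * eyeDir[0] + rot[4] * eyeDir[1] + rot[5] * eyeDir[2];
    worldDir[2] = rot[6] * eyeDir[0] + rot[7] * eyeDir[1] + rot[8] * eyeDir[2];
}
```

For example, a camera yawed 90 degrees about the y-axis turns the straight-ahead view direction (0, 0, -1) into (-1, 0, 0).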

>>
There also seems to be a slight margin of error between where the ray intersects the plane and the place that is actually clicked with the mouse. Do I need a ray that's perpendicular to the viewing plane, rather than one that passes through the origin?
<<

If you click on a pixel you select a volume in the form of a frustum, starting with the pixel's area and getting larger the farther you go. With the raycast method, your hit position in world coordinates depends heavily on the starting position you have chosen within the pixel's area (e.g. the lower left corner if you took the integer coordinates) when objects are far away.
Using the center of the pixel could help.

But that is what the selection method in OpenGL is for. It uses an area to track whether fragments would have been generated there.
With the information about which polygon was hit, the calculation of the real world coordinates can be done with a ray-to-plane intersection.
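A ray-to-plane intersection along those lines might look like this (a sketch; the plane is given as a normal N and distance d with N·P + d = 0, and the names are illustrative):

```c
/* Sketch (illustrative names): intersect a ray (origin O, direction D)
   with the plane N.P + d = 0.  Returns 0 if the ray is parallel to the
   plane or the hit lies behind the origin; otherwise writes the hit
   point into hit[]. */
int RayPlaneIntersect(const float O[3], const float D[3],
                      const float N[3], float d, float hit[3])
{
    float denom = N[0] * D[0] + N[1] * D[1] + N[2] * D[2];
    if (denom > -1e-6f && denom < 1e-6f)
        return 0; /* ray is parallel to the plane */

    float t = -(N[0] * O[0] + N[1] * O[1] + N[2] * O[2] + d) / denom;
    if (t < 0.0f)
        return 0; /* plane is behind the ray origin */

    hit[0] = O[0] + t * D[0];
    hit[1] = O[1] + t * D[1];
    hit[2] = O[2] + t * D[2];
    return 1;
}
```

For example, a ray from the origin down the negative z-axis hits the plane z = -5 (N = (0,0,1), d = 5) at (0, 0, -5).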

Depending on what you're doing, selection is probably not fast enough. I wouldn't do it in shooters, but it's great for CAD and modeling.