fjeronimo

12-14-2002, 05:30 AM

Hello everybody,

I'm currently trying to implement a target sight system. I'm planning to use screen coordinates to move the mouse around and then map them to world coordinates whenever needed. Here are the steps I'm doing (or attempting to do):

* Convert screen coordinates, in pixels, to normalized coordinates, with an origin at the center of the viewport and values on each axis ranging from -1.0 to 1.0.

* Scale the normalized screen coordinates to the field of view. The X and Y values attained will define the slope of the ray away from the center of the frustum in relation to depth.

* Calculate two points on the line that correspond to the near and far clipping planes. These will be expressed in 3D coordinates in view space.

* Obtain the inverse of the current view matrix.

* Multiply these coordinates by the inverse matrix to transform them into world space.

However, I'm not obtaining the expected results. The relevant portion of code follows (I decomposed it into several parts for easier reading); I think the function names are self-explanatory:

Screen to world transform:

// Get screen dimensions
int xx, yy, width, height;
getViewport(xx, yy, width, height);

// Aspect ratio and half-dimensions
float aspect = (float)width / (float)height;
float widthDiv = (float)width * 0.5f;
float heightDiv = (float)height * 0.5f;

float viewX, viewY;
Matrix44f viewMatrix, invViewMatrix;

// Horizontal fov in radians
float hfov = igDegreesToRadiansd((float)_camera->getHFov());

// Normalized screen coordinates
float normX = (_targetSightPosX / widthDiv - 1.0f) / aspect;
float normY = 1.0f - _targetSightPosY / heightDiv;

// View coordinates
viewX = tanf(hfov * 0.5f) * normX;
viewY = tanf(hfov * 0.5f) * normY;

// Calculate view matrix. Just to show you how the
// view matrix is calculated elsewhere...
Vector3f camEye, camView, camUp;
camEye = _camera->getEye();
camView = _camera->getView() + camEye; // View is relative to eye in our implementation
camUp = _camera->getUp();

float nearPlane = _camera->getNearPlane();
float farPlane = _camera->getFarPlane();

viewMatrix.makeLookAt(camEye, camView, camUp);

// Invert view matrix
if (invViewMatrix.invert(viewMatrix) != kSuccess)
    printf("Error inverting\n");

// Two points on the line we want. In our coordinate system, Z is up,
// so I think we must reflect this here (swapping Z with Y).
Vector3f nearPoint(viewX * nearPlane, nearPlane, viewY * nearPlane);
Vector3f farPoint(viewX * farPlane, farPlane, viewY * farPlane);

// The final world coordinates
Vector3f worldTargetSightNear, worldTargetSightFar;
worldTargetSightNear.transformPoint(nearPoint, invViewMatrix);
worldTargetSightFar.transformPoint(farPoint, invViewMatrix);

NOTE: I'm not doing this directly in OpenGL, but rather through an engine running above it. So if you have a solution to this, please provide a generic one.

Do you have any ideas of what I might be doing wrong? Thanks in advance for your help,

fjeronimo
