
So far from world origin



devdept
02-03-2010, 04:54 AM
Hi All,

As you all know, if the model is close to the origin you see it perfectly:
http://www.devdept.com/close.gif
But when it is very far from the origin you see it like this:
http://www.devdept.com/far.gif
This issue can easily be fixed by doing:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(...);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
>>>>>>>>>>>>>> glTranslated(-1000000, -1000000, 0); // moves the model onto the origin
gluLookAt(...);

But what happens when you do a screen to client conversion like:

gluUnProject(mousePos.X, height - mousePos.Y, 0, modelViewMatrix, projectionMatrix, viewport, P0.X, P0.Y, P0.Z);
gluUnProject(mousePos.X, height - mousePos.Y, 1, modelViewMatrix, projectionMatrix, viewport, P1.X, P1.Y, P1.Z);

intPoint = LinePlaneIntersection(P0, P1, planeEquation);
We get the wrong 3D point, right?

Shall we use the following solution, or does a better/more accurate approach exist?
gluUnProject(mousePos.X, height - mousePos.Y, 0, modelViewMatrix, projectionMatrix, viewport, P0.X, P0.Y, P0.Z);
gluUnProject(mousePos.X, height - mousePos.Y, 1, modelViewMatrix, projectionMatrix, viewport, P1.X, P1.Y, P1.Z);

intPoint = LinePlaneIntersection(P0, P1, planeEquation);

intPoint.X += 1000000; // moves the 3D point back to the actual position
intPoint.Y += 1000000;

Thanks,

Alberto

Abdallah DIB
02-03-2010, 06:44 AM
It all depends on your projection matrix. Objects far from your eye position will look smaller with a perspective projection, but not distorted (OpenGL doesn't simulate camera lens distortion).

Now, if you are creating a custom perspective that doesn't respect angles and/or distances, maybe you will get the effect shown in your second image.

With an orthographic projection, objects will not look smaller even when they are very far from the eye.

devdept
02-03-2010, 07:07 AM
Abdallah,

You probably don't know this old and famous issue; it is related to limited floating-point precision.

Thanks,

Alberto

overlay
02-03-2010, 07:56 AM
You are exactly at the limit of single floating-point precision (7 decimal digits).

"Single precision [...] is a binary format that occupies 32 bits (4 bytes) and its significand has a precision of 24 bits (about 7 decimal digits)."
ref: wikipedia (http://en.wikipedia.org/wiki/Floating_point#IEEE_754:_floating_point_in_modern_computers)

Here is a nice blog post discussing this issue:
http://blogs.agi.com/insight3d/index.php/2008/09/03/precisions-precisions/
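
To make the magnitude concrete, here is a tiny standalone C++ sketch (mine, not from the posts above) that shows the spacing between adjacent 32-bit floats near 1,000,000:

#include <cmath>
#include <cstdio>

int main()
{
    // Near 1,000,000 the gap between adjacent floats is 0.0625, so any
    // vertex detail finer than that collapses onto the same representable
    // value and appears to "dance" between steps while zooming.
    float a = 1000000.00f;
    float b = 1000000.03f;                       // rounds to the same float as a
    std::printf("a == b ? %d\n", a == b);        // prints 1
    std::printf("next float above a: %.4f\n", std::nextafter(a, 2.0e6f));  // 1000000.0625
    return 0;
}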

devdept
02-03-2010, 08:00 AM
Hey guys, I know exactly what the problem is. My question was about finding a trick to bypass it.

Thanks,

Alberto

Alfonse Reinheart
02-03-2010, 11:42 AM
Hey guys, I know exactly what the problem is. My question was about finding a trick to bypass it.

The "trick" to bypass it is to change where the "world" space is and keep it near your objects.

devdept
02-04-2010, 01:24 AM
Looks like nobody here knows this issue; I will try posting it again in the OpenGL coding: Advanced forum.

Thanks,

Alberto

Aleksandar
02-04-2010, 05:40 AM
It is a well-known issue. Did you look at the link overlay posted? You should find some solutions there, but generally it depends on the problem you have to solve. Also, you should take into account the possibility of calculating all transformations using doubles instead of single-precision floating-point numbers. I have completely stopped using the standard transformations, even with fixed functionality, but I still have to solve the problem of the coordinate system moving through a large scene.

devdept
02-04-2010, 06:00 AM
Aleksandar, you can use as much precision as you want; sooner or later you'll always reach the limit.

Alberto

Aleksandar
02-04-2010, 06:10 AM
Sure, but it is unlikely that you need to express distances between electrons using light-years. ;)
Everything has its context. What do you exactly want to achieve?

devdept
02-04-2010, 09:22 AM
Aleksandar,

Loading certain AutoCAD DWG files, you get the plan of a building with lines whose coordinates look like X=323,032.33; Y=908,099.32, and when you try to zoom in close to the building plan all the entities dance here and there because of the precision limitations.

What we want to achieve is the same steady entities during zoom, both for plans close to the world origin and for those very far from it.

Thanks,

Alberto

Alfonse Reinheart
02-04-2010, 11:40 AM
As I said: change your world coordinates based on what you're trying to draw.

The coordinates relative to the natural world may be "X=323,032.33; Y=908,099.32", but the coordinates relative to (300,000, 900,000) are much smaller.

It takes some effort, but it's really not that hard. I would suggest changing your world coordinates based on the camera, in 1,000 unit increments (that gives you 4 digits before the decimal, with 3 digits after).
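
As a rough sketch of that idea (the helper names and the snapping scheme are mine, not something Alfonse posted), the re-basing could look like this:

#include <cmath>

struct Vec3d { double x, y, z; };

// Hypothetical helper: snap the camera position to a 1,000-unit grid and
// use that as the local origin for everything handed to OpenGL.
Vec3d chooseLocalOrigin(const Vec3d& camera)
{
    const double step = 1000.0;
    return { std::floor(camera.x / step) * step,
             std::floor(camera.y / step) * step,
             std::floor(camera.z / step) * step };
}

// Every coordinate sent to OpenGL is expressed relative to that origin,
// so the numbers stay small enough for single precision.
Vec3d toLocal(const Vec3d& worldPos, const Vec3d& origin)
{
    return { worldPos.x - origin.x,
             worldPos.y - origin.y,
             worldPos.z - origin.z };
}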

peterfilm
02-04-2010, 06:31 PM
A simple solution to your problem is to make the position of every object relative to the camera. So, in your application you may store an object's position using double-precision floats, but when you go to render it, and you're about to upload the matrix to OpenGL, just subtract the camera's position from the translation part of the matrix. You also need to ensure that you've set your initial modelview matrix translation to zero. Most implementations of OpenGL (NVIDIA/AMD) downcast doubles to floats even when you use the GLdouble entry points of the API, so you need to get your numbers smaller to maintain precision. If you think about it, you really don't need double precision if you render everything relative to the camera.
When doing your selection (or "screen to client" as you're wrongly calling it), just add the camera's position back onto your result.
If you don't like the idea of subtracting from the translation because it seems hacky, then by all means multiply the object's matrix by the inverse of the camera matrix using your own double-precision matrix function, then upload this matrix directly (i.e. using glLoadMatrix, because you no longer need the camera matrix in the modelview). I use this method, but the first method is simpler to hack into your existing code, I would imagine.
Some scene graphs have a special double-precision transform node for 'zones'. It holds a double-precision local matrix for a huge translation. This means you don't need every position or object matrix to be double precision, just the root node of the 'zones' that are very far from the world origin.
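
A minimal sketch of the first method peterfilm describes (the matrix layout and function name are assumptions, not code from his engine): the object transform is kept in doubles, the camera position is subtracted from its translation part, and only then is the result downcast and handed to OpenGL:

#include <GL/gl.h>

struct Vec3d { double x, y, z; };

// 'objectWorld' is the object's world matrix in double precision,
// column-major like OpenGL (elements 12, 13, 14 hold the translation).
// The current modelview is assumed to contain only the camera's rotation,
// with its translation set to zero, as peterfilm suggests.
void multCameraRelativeMatrix(const double objectWorld[16], const Vec3d& cameraPos)
{
    double m[16];
    for (int i = 0; i < 16; ++i)
        m[i] = objectWorld[i];

    // Subtract the camera position while still in double precision; the
    // remaining offsets are small, so the cast to float loses nothing visible.
    m[12] -= cameraPos.x;
    m[13] -= cameraPos.y;
    m[14] -= cameraPos.z;

    float mf[16];
    for (int i = 0; i < 16; ++i)
        mf[i] = static_cast<float>(m[i]);

    glMultMatrixf(mf);
}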

devdept
02-05-2010, 01:06 AM
Thanks Peterfilm,

Yes, I meant 'screen to world', sorry.

I don't see any benefit in multiplying by an inverse matrix instead of doing a translation; in the end they are both 4x4 matrix multiplications, aren't they?

Thanks,

Alberto

Aleksandar
02-07-2010, 04:46 AM
I have implemented GPU RTE, and ... it works like magic!
Thank you overlay for the link!

Devdept, if you don't like subtracting the origin of the scene from the objects' world coordinates (which is optimal for scenes with a diameter of less than 100 km at cm precision, or a few km at sub-cm precision, enough for displaying buildings in an urban area, so for a CAD application I think it is suitable), then GPU RTE is an excellent choice.
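
For reference, the CPU side of the GPU RTE technique from the linked article boils down to splitting each double coordinate into a high and a low float that are passed as two vertex attributes; the vertex shader then subtracts the (similarly split) eye position part by part so the large magnitudes cancel before single-precision rounding can hurt. A sketch of just the split (the shader half is omitted here):

// Splits a double into two floats whose sum reproduces the original value
// to roughly double precision. Sketch based on the AGI blog post linked
// earlier in the thread, not on Aleksandar's actual implementation.
inline void doubleToTwoFloats(double value, float& high, float& low)
{
    high = static_cast<float>(value);
    low  = static_cast<float>(value - static_cast<double>(high));
}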

devdept
02-07-2010, 08:25 AM
Thanks Aleksandar, I will make some tests this week.

Alberto

devdept
02-08-2010, 03:49 AM
Aleksandar,


We are trying to implement your suggestion:


... subtracting the origin of the scene from objects world coordinates...

Normally we have the model center at (1000000, 1000000, 0) and the camera location and target at (1000000, 1000000, 100) and (1000000, 1000000, 0). Where is the best place to add the glTranslated(-1000000, -1000000, 0) call?

Using the code below, the visual jitter is still present!


glMatrixMode(GL_MODELVIEW);
gluLookAt(location - modelCenter,
          target - modelCenter,
          0, 0, 1);
glTranslated(-modelCenter.X, -modelCenter.Y, -modelCenter.Z);
DrawModel();

Why?


Thanks,

Alberto

Aleksandar
02-08-2010, 05:09 AM
You didn't understand me. I said to subtract the center of the scene from all coordinates. Translating by (-1000000, -1000000, 0) you are still dealing with extremely large numbers. The coordinates of your model should be less than 100000 in order to have 1 m precision (in fact a little bit less, but that is not important at this point).

If you want to deal with such large numbers, turn to GPU RTE or a similar technique.

devdept
02-08-2010, 05:30 AM
Aleksandar,

At this point I am forced to use something else; subtracting those coordinates from each model vertex is not possible for us, because users need to work with real coordinates.

I took a look at GPU RTE, and it didn't look so easy to me. Is it the easiest approach to implement for this issue?

Thanks,

Alberto

Aleksandar
02-08-2010, 07:10 AM
The users will work with real coordinates. You will accept and display them in the way they expect, but internally they should be converted to smaller numbers before being passed to the GPU. I'm using custom functions Scene2World() and World2Scene() for that purpose.

I spent one day implementing GPU RTE in my algorithm. It is, so far, the best method I have tried that enables working with double-precision coordinates.
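
A guess at what that pair of helpers could look like (Aleksandar only names them; the fixed-offset implementation and the global below are assumptions): World2Scene subtracts a chosen scene origin before anything is passed to the GPU, and Scene2World adds it back for every value reported to the user:

struct Vec3d { double x, y, z; };

// Chosen once per scene, e.g. the model's bounding-box center expressed
// in the real (user) coordinates. Hypothetical global for the sketch.
static Vec3d gSceneOrigin = { 1000000.0, 1000000.0, 0.0 };

// Real-world coordinates -> small scene coordinates used for rendering.
Vec3d World2Scene(const Vec3d& w)
{
    return { w.x - gSceneOrigin.x, w.y - gSceneOrigin.y, w.z - gSceneOrigin.z };
}

// Small scene coordinates -> real-world coordinates shown to the user.
Vec3d Scene2World(const Vec3d& s)
{
    return { s.x + gSceneOrigin.x, s.y + gSceneOrigin.y, s.z + gSceneOrigin.z };
}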

devdept
02-08-2010, 07:39 AM
I know, I know, but our program gives the user access to object vertices and we can't convert them all the time. I will try to study this magic GPU RTE approach more deeply...

Thanks again,

Alberto

devdept
02-08-2010, 07:56 AM
Looks like the GPU RTE approach needs VBOs & shaders. Am I right? We need something that works on any OpenGL implementation. I would try the links suggested at the beginning of the article...

Alberto

Aleksandar
02-08-2010, 08:39 AM
Yes, it requires shaders! Unfortunately, it cannot be implemented on the CPU. In the same article you can also find a CPU-based solution, but it is not "so magical".

I don't see a problem with letting users change real coordinates; just call the World2Scene() function before sending them to rendering, and vice versa.

devdept
02-08-2010, 08:41 AM
Can you point me to the CPU based approach?

Thanks,

Alberto

Aleksandar
02-08-2010, 08:46 AM
The same article as for GPU RTE (@ http://blogs.agi.com/insight3d/index.php/2008/09/03/precisions-precisions/)

The section The Center of All Things describes a CPU approach.

devdept
02-23-2010, 01:17 AM
Aleksandar,

I tried to implement the CPU approach from The Center of All Things without success, and we are sure that something is missing.

The article suggests altering the MV matrix from
0.000000 -0.976339 0.216245 -13.790775
0.451316 -0.192969 -0.871249 -7,527,123.004836
0.892363 0.097595 0.440638 -14,883,050.114944
0.000000 0.000000 0.000000 1.000000
to
0.000000 -0.976339 0.216245 -13.790775
0.451316 -0.192969 -0.871249 0.406574
0.892363 0.097595 0.440638 -81.089615
0.000000 0.000000 0.000000 1.000000

But this of course is not enough.

Can you please help me discover what is missing? Thanks.


Alberto

Aleksandar
02-24-2010, 01:42 PM
I have already told you that this method does not solve everything. First, there are errors in the example; only the X coordinate is correct. But that changes nothing.

The problem with this solution is that it assumes all objects are drawn in their local coordinate systems. By replacing the last column of the model-view matrix, only the huge translation is eliminated. So the shuttle is drawn correctly, because it is at the origin of its coordinate system and the viewer is moved just a little bit away from it.

In order to use this method, you have to express the coordinates of your objects locally (relative to some origin that is in or near the object), and store the displacement of that origin in world coordinates. Remember, fixed functionality (thus far) does not permit using double-precision values. Single-precision floats have only 5-6 significant decimal digits. Having UTM or Lat/Lon coordinates does not permit even meter resolution, and that is not enough for CAD applications. Consider revising the way the coordinates of your objects are stored, or turn to the method I proposed earlier.
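
In code, the data layout Aleksandar describes might look something like this (a sketch under those assumptions, not his implementation): vertices are stored as floats relative to a per-object origin, and only that origin carries the large displacement in double precision:

#include <vector>

struct Vec3f { float x, y, z; };
struct Vec3d { double x, y, z; };

// Local vertex coordinates are small numbers, safe for single precision;
// only 'origin' holds the huge world displacement, kept in doubles.
struct Object
{
    Vec3d origin;                 // e.g. (1000000.0, 1000000.0, 0.0)
    std::vector<Vec3f> vertices;  // coordinates relative to 'origin'
};

// World position of a vertex, recovered in doubles when needed
// (picking, measuring, exporting).
inline Vec3d worldPosition(const Object& obj, const Vec3f& v)
{
    return { obj.origin.x + v.x, obj.origin.y + v.y, obj.origin.z + v.z };
}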

devdept
02-24-2010, 11:48 PM
Aleksandar,

I wish I could find a single-triangle example that draws without jitter at 1,000,000 units from the origin, so I can study how to apply it to our code...

Thanks,

Alberto

Alfonse Reinheart
02-24-2010, 11:59 PM
I wish I could find a single-triangle example that draws without jitter at 1,000,000 units from the origin, so I can study how to apply it to our code...

The problem is that no such example exists, because the numbers passed to OpenGL are never that big. The way to do this is to never give OpenGL large coordinates. So you have to restructure the code you use to set up rendering so that it doesn't pass OpenGL large coordinates.

Let's say that your camera is at (50234294.3282, 93947293.2983). You want to draw something that is in sight of the camera, and it is at (50234227.3282, 93947219.2983).

All of the coordinates you pass to OpenGL must be relative to the camera's position. So, given the above, the camera matrix you give OpenGL should be at (0.0, 0.0), and the matrix for that something should place it at (-67, -74).

That's all. The coordinates in your code should be doubles, but when you convert them to floats you first convert them into offsets from the camera (also in doubles).
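
With the exact numbers from this post, the conversion is only a couple of lines (a sketch; the point is that the subtraction happens in doubles and only the small result is downcast):

#include <cstdio>

int main()
{
    double cameraX = 50234294.3282, cameraY = 93947293.2983;
    double objectX = 50234227.3282, objectY = 93947219.2983;

    // Subtract in double precision, then downcast: the offsets are tiny,
    // so the floats handed to OpenGL keep full visible precision.
    float relX = static_cast<float>(objectX - cameraX);
    float relY = static_cast<float>(objectY - cameraY);

    // The camera itself is drawn at (0, 0); the object at (relX, relY).
    std::printf("offset: %.1f, %.1f\n", relX, relY);   // -67.0, -74.0
    return 0;
}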

Dark Photon
02-26-2010, 05:49 AM
http://www.devdept.com/far.gif
...But what happens when you do screen to client conversion like... We get the wrong 3D point
For the magnitudes you're talking about, just do all MODELVIEW computations in doubles (fp64) in your own code. Do not use GL/GLU to do MODELVIEW computations; they can only handle floats (fp32), and therein lies your problem.

Or resort to local origins as described.

devdept
02-26-2010, 07:22 AM
Thanks Dark Photon, we have made a step forward; please read this:

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=272879#Post272879

Now the problem is to recover the camera space. Can you help me with this?


Thanks,

Alberto

Dark Photon
02-26-2010, 07:28 AM
I suggest you just use doubles. Then there's no need to "recover the camera space".

devdept
02-26-2010, 07:48 AM
Dark Photon,

Did you check the C++ sample provided by wSpace?

It includes the following trick, but when you need the camera space you cannot use the conventional eye, target, up as before because of the additional matrix multiplication. We need it to compute the bounding volume of the model aligned with the camera frame. What is your opinion? We even tried subtracting gBoxCenter from both the gEye and gBoxCenter points, but it is still not correct.


//
// RTC method
//
// Here, we are doing the math in double precision and
// then submitting the result to OpenGL as floats.
//
double modelViewMatrix[16];
lookAt(gEye, gBoxCenter, gUp, modelViewMatrix);

double boxTranslationMatrix[16] = {
    1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    gBoxCenter.x, gBoxCenter.y, gBoxCenter.z, 1.0 };

double boxModelViewMatrixd[16];
matrixXMatrix(modelViewMatrix, boxTranslationMatrix, boxModelViewMatrixd);

//
// Note that while we called it this way, we could have called
//
//     glLoadMatrixd(boxModelViewMatrixd);
//
// OpenGL would have downcast the doubles to float. We did it below
// with floats just to emphasize the fact that in the end, we
// are giving OpenGL float values.
//
float boxModelViewMatrixf[16] = {
    (float)boxModelViewMatrixd[0],  (float)boxModelViewMatrixd[1],  (float)boxModelViewMatrixd[2],  (float)boxModelViewMatrixd[3],
    (float)boxModelViewMatrixd[4],  (float)boxModelViewMatrixd[5],  (float)boxModelViewMatrixd[6],  (float)boxModelViewMatrixd[7],
    (float)boxModelViewMatrixd[8],  (float)boxModelViewMatrixd[9],  (float)boxModelViewMatrixd[10], (float)boxModelViewMatrixd[11],
    (float)boxModelViewMatrixd[12], (float)boxModelViewMatrixd[13], (float)boxModelViewMatrixd[14], (float)boxModelViewMatrixd[15] };
glLoadMatrixf(boxModelViewMatrixf);