Too far from origin (again)

devdept

02-24-2010, 01:28 PM

Hi All,

We are still struggling with this issue despite the following discussion:

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=271405#Post271405

In the following paper:

http://www.floatingorigin.com/pubs/thorneC-FloatingOrigin.pdf

We read:

As shown in Figure 3, reverse transforming the world can be achieved by placing a top level transform, the world transform (WT), over the entire set of objects. Whenever a viewpoint is selected, the inverse of the viewpoint's coordinate is applied to the WT. The result is that the objects are shifted in reverse towards the viewer who stays at the origin.

What exactly do we need to do in our code? Is there a working C/C++ sample to study somewhere? Any tutorials on the subject?

Thanks,

Alberto

Mikkel Gjoel

02-25-2010, 07:41 AM

Going over the old thread, I think the basic thing you're missing is:

Never use glTranslate.

You need to use a matrix library in your own code, and only send the final matrices to the GPU (using glMultMatrix). Also, use doubles in these matrices.

So rather than having OpenGL compute ObjectSpace->WorldSpace->ViewSpace, do the calculation yourself (in double-precision), so OpenGL only computes ObjectSpace->ViewSpace (in single-precision), which should give you much better precision.

Hope this helps.

devdept

02-25-2010, 07:46 AM

Thanks Mikkel,

I believe that beyond the float's 7 digits of precision you can do whatever you want on the CPU, but you always lose precision in the final step to the GPU.

The only reliable solution I know of so far is to subtract the viewpoint from all the vertices, but it's impractical, all the more so because whenever you change the viewpoint you need to do all the work again.

Thanks,

Alberto

Mikkel Gjoel

02-25-2010, 09:16 AM

I believe that beyond the float's 7 digits of precision you can do whatever you want on the CPU, but you always lose precision in the final step to the GPU.

True, but the point of doing it in double-precision is that your end result (viewspace coordinates) will suffer less from the conversion to single-precision (as they are now "2m from the eye" rather than "at this very distant position in the world").

The only reliable solution I know of so far is to subtract the viewpoint from all the vertices, but it's impractical, all the more so because whenever you change the viewpoint you need to do all the work again.

Your OpenGL implementation would likely be doing all this work "again" for you anyway, so you shouldn't be losing any performance from it; you just have to do it yourself rather than have the driver handle it. Of course, you do lose some performance doing it in double precision rather than float.

devdept

02-25-2010, 09:48 AM

Mikkel,

What about display lists, do we have to recompile everything every time the viewpoint changes?

Thanks,

Alberto

Mikkel Gjoel

02-25-2010, 10:18 AM

Yes. You can create a display-list containing only the matrix-multiplication, and call it from your "drawObject" displaylist. That way you only have to recompile a very small list.

Side note: If you want your program to run on "everything", you should probably consider moving away from display lists going forward.

devdept

02-25-2010, 11:38 AM

Yes. You can create a display-list containing only the matrix-multiplication, and call it from your "drawObject" displaylist. That way you only have to recompile a very small list.

I don't fully understand what you mean. If I need to subtract the viewpoint from all vertices, what matrix do I need? If I put a matrix inside a display list, it would also be of low precision, wouldn't it?

Thanks,

Alberto

Mikkel Gjoel

02-25-2010, 02:26 PM

As you say: If you put a matrix inside a display list, it will be of low precision - it makes no difference. So you still need to only put the "final" transformation-matrices into lists.

What I mentioned about a display list with only the matrix multiplication in it (that you then call from your draw display list) is meant to decrease the overhead of updating the lists.

wSpace

02-25-2010, 08:23 PM

Alberto,

This GLUT example (http://blogs.agi.com/insight3d/wp-content/uploads/2010/02/rtcDemo.zip) demonstrates the visual jitter and one possible solution, the RTC method described in my blog post (http://blogs.agi.com/insight3d/index.php/2008/09/03/precisions-precisions/). The display() function comments describe the problem and solution.

The example is a Visual C++ solution. As it is a GLUT example, you should be able to easily convert it to any other platform. I hope it is applicable to your problem.

devdept

02-26-2010, 12:55 AM

Wow, I'm honored to speak with the author of that amazing article on visual jitter!

I will for sure check out your sample and come back with some questions. Thanks so much for your effort to eliminate visual jitter from all our newbie applications!

Alberto

devdept

02-26-2010, 02:11 AM

wSpace,

I really want to thank you for the C++ sample provided; in a few lines of code it explains everything so well! Please consider linking it from your amazing article on the subject.

The first issue we faced during integration in our app is the recovery of the camera space (also called the camera frame or camera coordinate system). Of course, using the gEye, gBoxCenter and gUp points is not correct anymore. We also tried adding or subtracting gBoxCenter from those points, without success.

Do you know what we need to do to recover the real camera space after applying RTC?

Thanks so much again. If one day you come to Italy, you have an all-you-can-eat pizza dinner on me, you can bet on it! :)

Alberto

rexguo

02-27-2010, 06:23 AM

Something related to this: if you want a robust understanding of floating-point, check out this recent article, written in the context of networking in multi-player games:

http://gafferongames.com/networking-for-game-programmers/floating-point-determinism/

wSpace

02-27-2010, 03:12 PM

Alberto, thanks for the kind words.

I'm not certain what you mean by camera space in your context. In what frame is camera space? In my example code, is it the box's local coordinates, or is it the world coordinates? Are you trying to determine the camera up vector, eye position, and box center in a particular coordinate system?

Out of curiosity, what are you using "real camera space" for?

Regards,

Deron

devdept

02-28-2010, 09:29 AM

Hi wSpace,

What I mean is that in our program we need to create a camera-aligned bounding box of the model (by camera-aligned I mean a bounding box with the Z dir along the camera upVector, the Y dir along the camera [target - eye], and the X dir along the cross product of the two).

This was straightforward without the additional RTC matrix multiplication, using the code below:

public void GetFrame(Point3D& camOrigin, Vector3D& camX, Vector3D& camY, Vector3D& camZ)
{
    camOrigin = eye;
    camZ = Vector3D.Subtract(eye, target);
    camZ.Normalize();
    camY = upVector;
    camX = Vector3D.Cross(camY, camZ);
}

This code doesn't work correctly anymore and we don't know where to start. We tried adding/subtracting gBoxCenter from camOrigin without success.

Thanks,

Alberto

devdept

02-28-2010, 12:00 PM

wSpace,

It was my fault: I should have imagined there are so many references in the code to update and shift by gBoxCenter that at first glance it seemed the wrong approach, but it isn't...

Thanks again for your help.

Alberto

devdept

03-01-2010, 06:17 AM

Hi Deron,

To tell the truth, there is still something I cannot fully understand. Can you please try changing the following two funcs in your rtcDemo?

In practice we are loading land in DXF format (frequently located very far from the world origin) and need to zoom close to some building details. This is where the jitter starts showing. The problem is that, as in the code below, we have most of the land curves compiled into display lists with their big coordinates.

Can you please try to solve the jitter issue with the model in the init() func below, using your rtcDemo sample?

What is the best approach to get rid of jitter here?

Thanks,

Alberto

void init()
{
    //
    // State
    //
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glDisable(GL_DEPTH_TEST);

    //
    // Box
    //
    gBox = glGenLists(1);
    glNewList(gBox, GL_COMPILE);
    glColor3f(1.0f, 1.0f, 1.0f);

    // ADDED THESE LINES
    glBegin(GL_TRIANGLES);
    glVertex3d(gBoxCenter.x,     gBoxCenter.y,     gBoxCenter.z);
    glVertex3d(gBoxCenter.x + 2, gBoxCenter.y,     gBoxCenter.z);
    glVertex3d(gBoxCenter.x + 2, gBoxCenter.y + 2, gBoxCenter.z);
    glEnd();

    // COMMENTED THIS LINE
    // glutWireCube(1.0);

    glEndList();

    //
    // Set the eye position
    //
    setEye();

    //
    // Help
    //
    printf("press 'r' to toggle between OpenGL and RTC modelview matrix computation modes.\n");
}

and:

if (!gUseRTC)
{
    //
    // OpenGL method
    //
    // This uses OpenGL to do the matrix math. While we are sending in
    // double values, OpenGL is doing all the math in float precision.
    // Because of that, we will get precision errors, as floats are only
    // good for about 7 decimal digits of precision.
    //
    glLoadIdentity();
    gluLookAt(gEye.x, gEye.y, gEye.z,
              gBoxCenter.x, gBoxCenter.y, gBoxCenter.z,
              gUp.x, gUp.y, gUp.z);

    // COMMENTED THIS LINE
    // glTranslated(gBoxCenter.x, gBoxCenter.y, gBoxCenter.z);
}

Mikkel Gjoel

03-01-2010, 10:53 AM

The problem is that, as in the code below, we have most of the land curves compiled into display lists with their big coordinates.

This is not going to work - you need to change the coordinates to something that will fit reasonably into floats.

If I understand you correctly, you have a chunk of land "far away". One way to solve the problem would be to make all vertex positions relative to some local coordinate system, for example the center (average position) of your land chunk. If the extent of the chunk is too big, you could dice it up into several chunks.

Hope this helps.

devdept

03-01-2010, 11:01 PM

Mikkel,

I knew that sooner or later I would need to subtract the viewpoint from the model vertices. The problem is that if I add things here and there, I need to recompile all the objects every time to adjust to the new local origin...

Thanks,

Alberto

Mikkel Gjoel

03-02-2010, 04:51 AM

You don't need to subtract the viewpoint from the model-vertices.

You need to create a single origin for your model-vertices (e.g. the average), and make your vertices relative to this origin.

A translation from model->world is then put in a matrix. The matrix needs to be of doubles, because the position of your chunk of land is far away from the world origin.

devdept

03-02-2010, 09:32 AM

The problem is we cannot change the original land coords for many reasons. The only option is to make a display list that contains each land point with the local origin subtracted...

yooyo

03-03-2010, 05:37 AM

Usually AutoCAD drawings are in some units (m, cm, mm, inch, ...). Why don't you convert the units to something more useful to your application? Multiply or divide all vertex positions from the ACAD drawings during import.

The next thing you can do is adjust your perspective matrix. Don't use near = 0.001f, far = 1000000.0f. Instead, find the closest and farthest objects from the camera and adjust near and far according to those values. Learn to love your z-buffer.

I'm not sure if this is helpful for you, because I don't understand your problem, except that you have to deal with objects far, far away from the camera. What exactly is your problem? Z-fighting? Selection & picking? Lighting?

devdept

03-03-2010, 06:18 AM

Hi yooyo,

Thanks for joining the discussion.

The problem is entities 'shaking' when we zoom very close to some detail of land far away from the world origin.

And yes, we adjust near and far planes based on the vertices inside frustum.

After discussing this topic so many times, it looks like the best approach is to recompile the object display lists on demand, subtracting the best local origin from the objects' vertices...

Aleksandar

03-03-2010, 06:46 AM

And if you remember the very first advice, that is exactly what it stated. :)

The only solution that really works with huge numbers is GPU RTE. I'm very grateful for Deron's blog!

devdept

03-03-2010, 07:01 AM

Aleksandar,

The small difference I didn't grasp is that you don't change the geometry coords, only what you pass to the display list.

Thanks,

Alberto

Dark Photon

03-03-2010, 07:09 AM

The only solution that really works with huge numbers is GPU RTE. I'm very grateful for Deron's blog!

It's good that he wrote it up, but it's a lot of verbiage for this:

Use doubles to compute your MODELVIEW on the CPU, then and only then thunk down to float and give it to the GPU.

Why does this fix it? The MODELING transform contains a "huge" translate. The VIEWING transform contains a "huge" negative translate. Multiply to get MODELVIEW, and for things close to the eye, these translates largely cancel. Float loses too much precision in this big_number1 - big_number2 game, so use doubles. This doesn't "solve" it, but it pushes your error out to where it's often acceptable for earth scales.

Dark Photon

03-03-2010, 07:11 AM

The small difference I didn't grasp is that you don't change the geometry coords, only what you pass to the display list.

Actually, neither one, assuming you build geometry-only display lists. Same display list. You just change how you compute the MODELVIEW transform that's active when you render the display list to ensure greater precision.

devdept

03-03-2010, 07:31 AM

DarkPhoton,

We need to keep the vertices in memory as big numbers (1000000); this is why I say we need to compile the display list subtracting this big number from the eye point.

Thanks,

Alberto

Aleksandar

03-03-2010, 07:34 AM

You didn't get the point, Dark!

There is a difference in the way the coordinates are generated. They are split into two floats, so it is not the same display list (or VBO, in my case). The first number serves for large-distance viewing and has almost no contribution for objects that are near the viewer. The second one is a fraction that has no contribution for distant objects and is only relevant for close ones.

I agree that this method creates errors in the real values of the coordinates, but who cares if it is not visible?

Aleksandar

03-03-2010, 07:40 AM

We need to keep the vertices in memory as big numbers (1000000); this is why I say we need to compile the display list subtracting this big number from the eye point.

It is not a problem, because you have to rebuild DLs only when the coordinates change, and I hope that is not very often. In any case, if the coordinates change you should rebuild them. My advice is to keep "local" (small) coordinates and "add" the displacement only for displaying/storing (I mean displaying in the interface while the cursor is being moved across objects, or something similar).

Dark Photon

03-03-2010, 08:20 AM

We need to keep the vertices in memory as big numbers (1000000); this is why I say we need to compile the display list subtracting this big number from the eye point.

How about keeping the vertices relative to a local object origin, and have that object positioned into the world by a MODELING transform. Then the vertices and thus your display list never need to change.

But maybe I'm missing something about your problem... (?)

Dark Photon

03-03-2010, 08:23 AM

...this is why I say we need to compile the display list subtracting this big number from the eye point

It is not a problem, because you have to rebuild DLs only when the coordinates change...

Doing that in realtime however is problematic. Display list compilation is expensive!! Snap some timing calipers on it and see!

devdept

03-04-2010, 12:43 AM

Dark Photon,

Yes, I was thinking of offering a regen command when jitter appears, to allow a full regen around the new best local origin. What do you think?

Thanks,

Alberto

Dark Photon

03-04-2010, 06:03 AM

Yes, I was thinking of offering a regen command when jitter appears, to allow a full regen around the new best local origin. What do you think?

Sorry, I'm just not getting the purpose of the whole regen thing. Just use the same one.

More detail:

Take each one of your scene entities. If you center it about the origin, can you represent the vertex positions with sufficient accuracy using float? No? Stop. Reorganize your entities so that this is the case. When done, each entity has a local origin.

Now, position these entities into the world using a MODELING transform in double (float64). Don't worry about big numbers at this point. Chances are you don't care about millimeter precision for something the scale of the planet. Now position the eye into the world using the VIEWING transform (actually inverse VIEWING, but you get my drift) in double. Again, don't worry about big numbers. Same caveat.

Now when rendering the scene, multiply your MODELING and VIEWING matrices in double on the CPU, and only once you have the aggregate MODELVIEW for a batch in double do you hand it to the GPU and let it thunk down to float32.

As you can see, with this method, if you use geometry-only display lists (that is, only store batches in your display lists, never MODELING transforms), then there's no reason to ever rebuild a display list after you generate it (assuming the batch data is static, of course).

If it's not static, you probably would be using VBOs instead of display lists anyway...

There are other approaches too -- using ints for instance. See below.

Further reading:

A matter of precision (Tom Forsyth) (http://home.comcast.net/~tom_forsyth/blog.wiki.html#%5B%5BA%20matter%20of%20precision%5D%5D)

How to Scroll the OpenGL World (http://hacksoflife.blogspot.com/2010/02/how-to-scroll-opengl-world.html)

Huge world, little precision (GDAlgorithms) (http://sourceforge.net/search/index.php?group_id=7932&search_subject=1&type_of_search=mlists&all_words=&exact_phrase=Huge+world&some_word=&without_words=&ml_name%5B%5D=gdalgorithms-list&posted_date_start=&posted_date_end=&form_submit=Search%20link)

The Continuous World of Dungeon Siege (http://www.floatingorigin.com/mirror/continuous-world.htm)

Precisions, Precisions (AGI Dev Blog) (http://blogs.agi.com/insight3d/index.php/2008/09/03/precisions-precisions/)

dorbie

03-04-2010, 06:21 AM

We need to keep the vertices in memory as big numbers (1000000); this is why I say we need to compile the display list subtracting this big number from the eye point.

How about keeping the vertices relative to a local object origin, and have that object positioned into the world by a MODELING transform. Then the vertices and thus your display list never need to change.

But maybe I'm missing something about your problem... (?)

You are correct. If you maintain viewing and model matrices with high precision and have local object space coordinates, then the modelview result automatically produces a transformation matrix with low numbers for objects around the viewer. Object space numbers then transform to eyespace with high precision and never see large numbers.

You are not missing anything, this is the right way to do things, and with shader based implementation and software matrix stacks passed in as uniforms there's nothing left for developers to complain about here.

devdept

03-04-2010, 06:57 AM

Dark Photon, Dorbie,

I like your approach, and a small C/C++ sample would be great to understand it completely. Do you know where I can find one?

I find it difficult to understand the concept because I have always lived with only PROJECTION and MODELVIEW matrices, not with PROJECTION, MODELING and VIEWING.

Let me check if I understood well:

1) I have the Pluto class compiled, with coordinates that refer to the planet center, in the drawList display list

2) the Pluto class includes a MODELING matrix that I use with glMultiplyMatrix() to place the planet at its solar system position

3) PLEASE HELP ME HERE !

4) I multiply the Pluto class MODELING matrix with the VIEWING matrix from point 3) on the CPU and load it as GL_MODELVIEW for each object in the scene.

Thanks,

Alberto

Dark Photon

03-04-2010, 07:49 AM

I like your approach, and a small C/C++ sample would be great to understand it completely. Do you know where I can find one?

Check out the references I posted. They may have one. But it's really not hard. Just do not use OpenGL for your matrix math. It only supports float32 internally, and that's what kills you. Do your own math on the CPU, in double where necessary.

I find it difficult to understand the concept because I have always lived with only PROJECTION and MODELVIEW matrices, not with PROJECTION, MODELING and VIEWING.

Very simple really:

* clip-space = PROJECTION * MODELVIEW * object-space

* clip-space = PROJECTION * (VIEWING * MODELING) * object-space
                                     ^ world-space is here!

It's the big world-space coords that're killing you.

Note that here I use OpenGL's operator-on-the-left notation.

Let me check if I understood well:

1) I have the Pluto class compiled, with coordinates that refer to the planet center, in the drawList display list

If you're never gonna get close to Pluto maybe. But if you're gonna be flying the surface, that's doubtful. Can you really represent Pluto to the accuracy you need with float32 precision vertex coords? Pluto is 2400km in diameter! With float32, you've got maybe ~1 meter accuracy. For flying the surface, you're probably going to have to bust Pluto up.

2) the Pluto class includes a MODELING matrix that I use with glMultiplyMatrix() to place the planet at its solar system position

No! Absolutely not! Again, don't use OpenGL for your matrix math (it's glMultMatrix* BTW). It only supports float32, and big numbers + float32 kills your available precision. This causes the jitter.

If you use doubles to compute your MODELING and VIEWING matrices on the CPU, the only GL MODELVIEW matrix API you should use is glLoadMatrixd.

ViolentHamster

03-04-2010, 08:28 AM

If you use doubles to compute your MODELING and VIEWING matrices on the CPU, the only GL MODELVIEW matrix API you should use is glLoadMatrixd.

.. or glMatrixLoaddEXT.

Dark Photon

03-04-2010, 08:35 AM

.. or glMatrixLoaddEXT.

Touché! ;) Yeah getting rid of selectors is a good thing for readability and reusability (EXT_direct_state_access (http://www.opengl.org/registry/specs/EXT/direct_state_access.txt))

Aleksandar

03-04-2010, 10:24 AM

You are correct. If you maintain viewing and model matrices with high precision and have local object space coordinates, then the modelview result automatically produces a transformation matrix with low numbers for objects around the viewer. Object space numbers then transform to eyespace with high precision and never see large numbers.

You are not missing anything, this is the right way to do things, and with shader based implementation and software matrix stacks passed in as uniforms there's nothing left for developers to complain about here.

Do you really not understand, or do you not want to understand? :(

Everything you have said has been known since the dawn of computer graphics, and nobody denies it. But there are some cases where it is expensive to rebuild lists, buffers or whatever technology is used. Building a whole planet is such a case. I still firmly claim that the proposed method is VERY useful in some particular cases (certainly not for CAD drawings), and that it cannot be reproduced in fixed functionality.

I'm sorry for the late answer...

devdept

03-04-2010, 01:47 PM

Dark Photon,

In my example, Pluto is just something far away from the world origin, not necessarily the real planet Pluto. Our problem is zooming close to a CAD building plan far from the world origin (at something like 800000, 1999900, 0).

Please write some one-triangle pseudocode for me; I still find it difficult to fully understand without seeing a few lines of code...

Thanks so much for sharing your experience.

Alberto

Dark Photon

03-04-2010, 06:01 PM

Our problem is zooming close to a CAD building plan far from the world origin (at something like 800000, 1999900, 0).

Define "zoom". Do you mean, move the eyepoint up near that CAD building so it fills a large portion of the field-of-view?

devdept

03-05-2010, 12:42 AM

Yes, exactly.

Alberto

Dark Photon

03-05-2010, 09:36 AM

In my example, Pluto is just something far away from the world origin, not necessarily the real planet Pluto. Our problem is zooming close to a CAD building plan far from the world origin (at something like 800000, 1999900, 0).

Ok, let's just think for a second about what this means. Suppose your units are meters just for ease of discussion.

Suppose you model the building around its own local origin (the center of the building is 0,0,0). The tallest building on earth is 828 meters. That means that, representing the building positions in float32 (where you get ~7 decimal digits of precision), your coordinates are going to be accurate to 0.001m-0.0001m (i.e. ~1mm or maybe slightly better), assuming you compute them exactly and then just store them in float32.

So representing the building to the required accuracy with float32 is no problem! Take the vertex 1,1,1 in building space. We can represent that pretty much exactly, right?

What about transforms? The MODELING transform (which positions this building, modeled about its own local origin, into the world) is going to have a translate component of (800000, 1999900, 0) (these are your numbers):

MODELING = ( .  .  .   800000 )
           ( .  .  .  1999900 )
           ( .  .  .        0 )
           ( 0  0  0        1 )

Similarly, since you've just said that the eye is close to the building, the VIEWING transform is going to have a translate component of very nearly (-800000, -1999900, 0):

VIEWING  = ( .  .  .  -800000 )
           ( .  .  . -1999900 )
           ( .  .  .        0 )
           ( 0  0  0        1 )

So what's the problem? See, those big numbers just ate up all or nearly all of the 6-7 decimal digits of precision we have with float32 representing the sheer magnitude of the numbers, leaving little or nothing for sub-meter accuracy. As you're computing MODELVIEW, right after you stacked on the VIEWING transform with that huge translate (e.g. with gluLookAt), you've trashed any accuracy your MODELVIEW could have had for preserving sub-meter detail.

In other words, your "world coordinates" are huge (compared to float32 precision), and that's what causes the problem.

Take your 1,1,1 building object-space point. After you transform it by the MODELING transform, you get 800001, 1999901, 1. You've got 6-7 digits to the left of the decimal, so you've only got 0-1 left to the right of the decimal. So when you represent this in float32, what you actually get is maybe accurate to the nearest meter or so if you're lucky -- you've just lost all your sub-meter precision.

In diagram form:

eye-space = MODELVIEW * object-space

eye-space = (VIEWING  *  MODELING) * object-space
  ^ small    ^ HUGE!!!   ^ HUGE!!!   ^ small

So looking at this diagram, you can see that the two HUGEs tend to cancel each other out (for objects close to the eye anyway, which is all you care about). If only there were a way to compute the aggregate MODELVIEW transform more accurately, so that you didn't lose all that precision in computing it....

Well, there is. Use doubles, if that provides sufficient precision (a double has ~15 decimal digits of precision instead of float32's ~7), which should be enough for your example. If doubles don't offer enough, use 64-bit integers or something -- whatever you have to do to compute the aggregate transform accurately.

devdept

03-05-2010, 10:23 AM

Thanks Dark Photon,

I will make a sample to check if I have understood everything well.

Thanks again,

Alberto

dorbie

03-06-2010, 10:08 PM

You are correct. If you maintain viewing and model matrices with high precision and have local object space coordinates, then the modelview result automatically produces a transformation matrix with low numbers for objects around the viewer. Object space numbers then transform to eyespace with high precision and never see large numbers.

You are not missing anything, this is the right way to do things, and with shader based implementation and software matrix stacks passed in as uniforms there's nothing left for developers to complain about here.

Do you really not understand, or do you not want to understand? :(

Everything you have said has been known since the dawn of computer graphics, and nobody denies it. But there are some cases where it is expensive to rebuild lists, buffers or whatever technology is used. Building a whole planet is such a case. I still firmly claim that the proposed method is VERY useful in some particular cases (certainly not for CAD drawings), and that it cannot be reproduced in fixed functionality.

I'm sorry for the late answer...

I understand perfectly well.

It is well known and it works in any system including CAD and planets.

There is no need to rebuild lists etc. The vertex numeric positions never change; only the view_matrix * model_matrix result changes. You can use a fixed offset, but that's a bit outmoded IMO; you can simply use continuous double-precision view and model matrices and it'll work beautifully (casting to single before you send in your uniforms).

You can store coordinates as double precision, but in fact that is overkill and there is no hardware support. All you really need is to maintain double precision matrix offsets.

If you have a complaint you're certainly not articulating it well.

You mention display lists; I say fix your code and use VBOs. The days of writing War and Peace with branch bloat in your dispatch, and using display lists to sort out the mess, should be over. At a minimum, you can call your display lists with no transforms (or only local transforms, if that) in there and it'll still work.

People have been asking for double precision graphics hardware for a long time, I hope the HW guys are not foolish enough to listen, at least not for a few more generations.

Alfonse Reinheart

03-07-2010, 02:36 AM

If you have a complaint you're certainly not articulating it well.

He's talking about cases where the geometry itself has large numbers that must be represented by doubles rather than floats.

Imagine a single mesh that has millimeter precision that must extend out +/- 10,000 kilometers from the origin. The vertices themselves must be represented by doubles.

Of course, the right thing to do in that case is to break up the mesh into pieces.

Aleksandar

03-07-2010, 05:59 AM

If you have a complaint you're certainly not articulating it well.

I have to make some things clearer, obviously. Imagine that you have to model Earth. A semi-major axis is 6,378,137.0 m. Using floats for calculation or displaying does not allow any object less than few hundred meters to be displayed at all. The only way to handle that problem is to restrict minimal hight of the viewer to at least 2000m, or to divide a planet into blocks. Each block can have its own coordinate system, with the origin in the center of the block. Thus far everything perfectly fits into our story of using single precision...

The size of the blocks depends on the resolution we want to achieve. For example, if we want a decimeter precision, we need to confine one block to a diameter less than, let's say, 150km. So, in order to implement our Virtual Earth, we have to deal with hundreds or thousands of local coordinates systems. As long as we are inside the boundary of the single block, that is not important. But when we are crossing the border, we have to deal with many blocks. In which coordinates system we should draw all of them? If we use a single coordinate system we have to rebuild all visible blocks except one (which CS we are using). On the other hand, we can draw each block in its own coordinate system, but on this way we can have a large translation (the thing we wanted to avoid) and gaps at the boundaries (because of differences in calculations).

To make things even worse, blocks 150 km in diameter cannot be monolithic. In order to exploit the spatial coherency of the terrain we are walking through, we have to subdivide them. The only solution is to juggle multiple coordinate systems and fill the gaps.

So, if we have to deal with huge objects, we must divide them into many subobjects, each with its own CS. That is tricky and error prone (and even slow if we have to refill VBOs). In my tests I have confirmed that the overhead of sending two floats instead of one for each vertex coordinate of such a huge object is not significant (a few percent), and the implementation is clean and fast. Of course, small objects should not use two floats per coordinate value. Objects inside a terrain block are represented in the way you have explained.

You mention display lists, I say fix your code and use VBOs, the days of writing war and peace with branch bloat in your dispatch and using display lists to sort out the mess should be over. At a minimum you can call your display lists with no transforms or only local transforms in there (if that) and it'll still work.

I ceased using DLs two years ago, when the new spec declared them deprecated. But even before that I used DLs like VBOs, just for storing vertices, not transformations. The only reason for using DLs was their speed (and they are still faster than VBOs).

People have been asking for double precision graphics hardware for a long time, I hope the HW guys are not foolish enough to listen, at least not for a few more generations.

Double precision support has existed since the GeForce GTX 260, or, to be more precise, since CUDA compute capability 1.3 devices (GTX 260, GTX 280, GTX 285, GTX 295, Tesla S1070, Tesla C1060, Quadro Plex 2200 D2, Quadro FX 5800, FX 4800). The problem is that DP operations are expensive on these GPUs. I hope Fermi will change that. (For the broader audience: OpenGL still does not expose DP operations. Everything mentioned above concerns CUDA and OpenCL. But it is just a matter of time before it is included.)

devdept

03-08-2010, 02:13 AM

Hi Dark Photon,

Can you please check this essential GLUT example (http://blogs.agi.com/insight3d/wp-content/uploads/2010/02/rtcDemo.zip) and confirm that it uses the approach you recommended?

Thanks,

Alberto

Dark Photon

03-08-2010, 08:18 AM

Can you please check this essential GLUT example (http://blogs.agi.com/insight3d/wp-content/uploads/2010/02/rtcDemo.zip) and confirm that it uses the approach you recommended?

Exactly! You got it.

(And after hacking away the Windows-isms, I can confirm it works perfectly here on NVidia/Linux.)

Aleksandar

03-11-2010, 09:20 AM

People have been asking for double precision graphics hardware for a long time, I hope the HW guys are not foolish enough to listen, at least not for a few more generations.

Dorbie, as you can see, "a long time" lasted just 5 days, because OpenGL 4.0 supports 64-bit double precision!!! :)

The revolution has really started!

devdept

03-12-2010, 02:05 AM

Does it mean that doing:

glVertex3d(x,y,z);

will pass real doubles?

Thanks,

Alberto

Alfonse Reinheart

03-12-2010, 03:01 AM

will pass real doubles?

It will if you happen to be running a GL 4.0 implementation. And it will do so at half performance. And while the HD 5xxx cards have sold reasonably well, they're far from the majority at the moment.

Also, ATI isn't exactly known for quality drivers, and right now, they're the only GL 4.0 game in town. When NVIDIA finally gets around to releasing Fermi, you could expect some reliability. Though it'll still cost you half performance.

Or, you know, you could do some simple subtraction on the CPU and get it all to work on any GL implementation.

Aleksandar

03-12-2010, 05:07 AM

When NVIDIA finally gets around to releasing Fermi, you could expect some reliability. Though it'll still cost you half performance.

Cutting performance in half is a very rough estimate. How fast it will be, we will see when Fermi finally arrives. Current hardware has a serious problem with doubles because the number of DP computation units is very small (apart from the fact that DP operations are generally slower than SP ones).

Alfonse Reinheart

03-12-2010, 12:20 PM

Cutting performance in half is a very rough estimate.

It's the only estimate we have. NVIDIA says that all double-precision operations happen at half the speed of single-precision. Sure, they may well be lying, but I'd wait for benchmarks to come out before deciding on that.

It should also be pointed out that the best chance for NVIDIA to survive the coming GPU/CPU merger (since they don't make CPUs, that pushes them out of the game) is to make GPUs that are useful to as many people as possible. And that means pushing features like double-precision and IEEE-754-2008, which are things that scientific analysis and such really, really want.

dorbie

03-12-2010, 02:33 PM

People have been asking for double precision graphics hardware for a long time, I hope the HW guys are not foolish enough to listen, at least not for a few more generations.

Dorbie, as you can see, "a long time" lasted just 5 days, because OpenGL 4.0 supports 64-bit double precision!!! :)

The revolution has really started!

Yea, I noticed that too :-)

Groan! It's not what I'd call a revolution; this will be the red-headed stepchild for a long time to come.

If you've been tracking the GPGPU stuff you'll know that some hardware already had a few double precision floating point units, but they're vastly outnumbered by single precision units. At best the rest would be left to emulation on single precision if it's even possible. Consider yourself saved by the GPGPU war, but they may only have given you enough rope to hang yourself with.

It's still advisable to do DP transforms in software and cast to single-precision modelview matrices.

dorbie

03-12-2010, 02:39 PM

Does it mean that doing:

glVertex3d(x,y,z);

will pass real doubles?

Thanks,

Alberto

Oh God, you're making me ill. Stop it.

I hope nobody sees a shiny new OpenGL 4 DP feature and anticipates dispatching their DP verts one at a time to it.

One thing to note here is that real DP uniforms and matrix transforms are just as important as attributes for this class of problem, and even more so for some applications, because you can always promote attributes for multiple DP xforms.

dorbie

03-12-2010, 02:41 PM

If you have a complaint you're certainly not articulating it well.

He's talking about cases where the geometry itself has large numbers that must be represented by doubles rather than floats.

Imagine a single mesh that has millimeter precision that must extend out +/- 10,000 kilometers from the origin. The vertices themselves must be represented by doubles.

Of course, the right thing to do in that case is to break up the mesh into pieces.

Thanks for the explanation, but this is known, the pertinent part is your last sentence.

Aleksandar

03-13-2010, 03:48 AM

If you've been tracking the GPGPU stuff you'll know that some hardware already had a few double precision floating point units, but they're vastly outnumbered by single precision units. At best the rest would be left to emulation on single precision if it's even possible. Consider yourself saved by the GPGPU war, but they may only have given you enough rope to hang yourself with.

I like your temper! ;)

Of course I'm interested in GPGPU. All my previous posts make that obvious. By splitting some uniforms and only the position coordinates into two floats per DP value, I have solved several problems with my terrain. If next-generation hardware really can perform fast DP operations, I'll "transfer" more calculations (e.g. geographic-to-Cartesian transformation) to the GPU.

According to NVIDIA's Next Generation CUDA Compute Architecture: Fermi (http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIAFermiComputeArchitectureWhitepaper.pdf)

The Fermi architecture has been specifically designed to offer unprecedented performance in double precision; up to 16 double precision fused multiply-add operations can be performed per SM, per clock, a dramatic improvement over the GT200 architecture.

The chart on page 9 shows a 4.2x speed gain compared to the GT200 architecture. I cannot claim that it is true, but we will see...

devdept

03-15-2010, 02:39 AM

Sorry Dorbie,

I'm not an OpenGL expert like you; by the way, it was only an example, and we are not passing one vertex at a time in our program either.

Considering the performance drop of using GPU DP we are not interested in this precision any more.

Thanks,

Alberto

Pierre Boudier

03-15-2010, 07:36 AM

Considering the performance drop of using GPU DP we are not interested in this precision any more.

if you are only interested in using double precision in your object * mvp transform, then performance on high-end hardware will not be bad. you might not even notice any drop at all.

on HD5870 (amd):

- you have ~550 Gflops of DP

- your primitive rate is 850M triangles

- then you have ~650 flops per triangle

- with non indexed vertices, you have 250 flops / vertex before you are ALU bound

in practice, many other parts of the GPU will impact performance, but it is pretty rare to be limited by the length of the vertex shader.

devdept

03-16-2010, 02:34 AM

Thanks Pierre!
