Back/front clipping plane ratio

I use an API called Java3D that works on top of OpenGL. The API docs say that the ratio between the distances of the back clipping plane and the front clipping plane must be at most 3000 to accommodate a 16-bit z-buffer.
From my own calculations, I think 4 bits of the 16-bit z value must be used for other things, since 2^12 ≈ 4000.

The problem is that I need to render vast landscapes the size of an entire planet like Earth, and still have a precision of 1 cm.
So I thought that the new graphics cards with a 32-bit z-buffer would give me 2^32 ≈ 4e9 cm (taking cm as the unit, since OpenGL doesn't have units), or 40,000 km, which is barely enough for what I want.
But OpenGL will probably cut 4 bits or more from those 32 bits, and then I am in trouble.

Is there any way to overcome this limitation? Something like a special perspective matrix that rescales everything far away into a short range? Imagine an application that would allow the user to navigate a realistic representation of the universe, i.e. you would be seeing objects that are, in theory, farther away than any combination of back/front clipping planes would allow. For planets the size of Earth, I think at least 40,000 km is required.
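To sanity-check my own arithmetic above, here is a throwaway Java sketch (mine, not anything from the Java3D docs): a perfectly linear depth buffer covering 40,000 km at 1 cm resolution needs about log2(4e9) ≈ 31.9 bits, so even a full linear 32-bit buffer would only just fit.

public class DepthBitsCheck {
    public static void main(String[] args) {
        double rangeCm = 40000.0 * 1000.0 * 100.0; // 40,000 km expressed in cm
        double precisionCm = 1.0;                  // desired resolution
        double bits = Math.log(rangeCm / precisionCm) / Math.log(2);
        System.out.printf("Linear depth bits needed: %.1f%n", bits); // prints ~31.9
    }
}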

Remember that the Z-buffer is not linear.

Also, note that, even on recent graphics cards, you don’t get a 32-bit z-buffer. You only get a 24-bit one; the other 8 bits are reserved for a stencil buffer. Even if you don’t use, or even ask for, the stencil buffer, those 8 bits remain unusable.

As such, your depth precision is only going to be 24 bits. That, coupled with the inherent non-linearity of the depth buffer, makes what you’re trying to do impossible.

However, you could use a fragment program to compute the linear depth and use that as the fragment’s depth. Just keep in mind that this is expensive, because depth-replacing fragment programs can’t use the advanced depth culling hardware that modern graphics cards have.

Don’t forget that most video cards don’t even support a 32-bit z-buffer. Generally they use 24 bits for depth and 8 bits for stencil. So I’d suggest simply forgetting about a quick pixel-format fix.

If you’re trying to fight Z-fighting (I know, bad joke), one solution is to dynamically adjust your zNear plane depending on the distance to the “closest feature” being rendered. If your camera is 100 meters above the ground, maybe a zNear distance of 1 meter is good enough? And if you’re standing on the ground, 10 cm? At ground level, due to natural hills/mountains, you will hardly see more than a few kilometers anyway. That’s what I did for my own planet renderer and it seems to work quite well.
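In Java3D that adjustment is just two calls on the View; a minimal sketch, assuming the “closest feature” is the ground straight below the camera (the 1% heuristic and the 0.1 m clamp are my own arbitrary picks):

import javax.media.j3d.View;

public class ClipPlaneUpdater {
    // Adapt the clip planes to the distance of the closest feature.
    public static void update(View view, double heightAboveGround) {
        double near = Math.max(0.1, heightAboveGround * 0.01); // ~1 m at 100 m altitude
        double far = near * 3000.0; // stay inside the ratio the Java3D docs recommend
        view.setFrontClipDistance(near);
        view.setBackClipDistance(far);
    }
}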

If you’re fighting against the resolution/precision problem of vertex positions that are too big for the 32-bit floating-point range, I’d suggest splitting your world into “chunks” (whatever that means for your data) and generating them “locally”.

A world chunk that is 10,000 km away from the world origin (perhaps the center of the planet?), with a camera inside that chunk (also 10,000 km from the origin), is basically equivalent to generating that chunk at the origin and placing the camera at the origin, kind of like you do for a sky box/dome. Except that your vertex positions, your camera position, and all the intermediate matrices never have to handle “big numbers”.
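A minimal Java sketch of that idea (my own illustration; the Chunk class is hypothetical): keep all absolute positions in doubles, and only the small camera-relative differences ever get converted to the floats you hand to the renderer.

import javax.vecmath.Point3d;
import javax.vecmath.Vector3f;

public class ChunkOffsets {
    // Hypothetical chunk: its origin is stored in double precision.
    static class Chunk {
        Point3d origin = new Point3d();
    }

    // Both absolute positions may be ~1e7 m from the world origin, but their
    // difference is small, so it fits comfortably into 32-bit floats.
    static Vector3f relativeToCamera(Chunk chunk, Point3d cameraPos) {
        return new Vector3f(
                (float) (chunk.origin.x - cameraPos.x),
                (float) (chunk.origin.y - cameraPos.y),
                (float) (chunk.origin.z - cameraPos.z));
    }
}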

Hope that helps.

Y.

I think this comes from the OpenGL manual:

“The greater the ratio of zFar to zNear is, the less effective the depth buffer will be at distinguishing between surfaces that are near each other. If
r = zFar / zNear
roughly log2(r) bits of depth buffer precision are lost. Because r approaches infinity as zNear approaches 0, zNear must never be 0.”

I don't understand very well what log2 has to do with clipping ranges and losing bits. There is obviously more going on here.

Does this mean that the non-linearity of the z-buffer values depends on OpenGL code, or is it related directly to the hardware? In the first case, DirectX could handle the situation differently, right?

The depth buffer is, as said, non-linear. The greater the ratio, the greater the non-linearity. The non-linearity shifts the precision towards the near clip plane, leaving less precision in the rest of the view volume. It comes from the projection matrix, which performs a non-linear mapping from values between the near and far planes to the range [0, 1]; this is how the projection matrix is defined to work by the spec. I'm not sure about this, but there may be projection matrices that can do linear mappings (orthographic ones can, but I'm talking about perspective ones), at the cost of screwing up the perspective-correct interpolation of, say, texture coordinates and color, which depends on this non-linearity. But as I said, I'm not really sure about that.
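For reference, the mapping the standard perspective (glFrustum-style) matrix produces is, for an eye-space distance d between the near plane n and the far plane f (my own derivation from the standard matrix, so double-check it before relying on it):

depth(d) = f * (d - n) / (d * (f - n))

This is 0 at d = n and 1 at d = f, but it rises very steeply just past n, which is exactly the precision shift described above.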

The log2(r) formula is just an approximation of what happens to the precision overall. Think of it this way: if log2(r) = 5 (that is, r = 32), then the precision would be roughly as if the mapping were linear and 5 bits had been removed (from 16 down to 11 bits). But you still have all the bits, just with a different distribution of precision.
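To put concrete numbers on that estimate, a minimal Java sketch of the manual's log2(r) rule (the ratios, including the planetary 1 cm near / 40,000 km far case, are my own picks):

public class EffectiveDepthBits {
    public static void main(String[] args) {
        int bufferBits = 16;
        // Ratios to test; the last is the planetary case: 1 cm near, 40,000 km far.
        double[] ratios = {1000, 3000, 10000, 4.0e9};
        for (double r : ratios) {
            double lost = Math.log(r) / Math.log(2); // the manual's log2(r) estimate
            System.out.printf("r = %.0f: ~%.1f bits lost, ~%.1f effective bits%n",
                              r, lost, bufferBits - lost);
        }
    }
}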

To give you a general idea of what this non-linearity really does to the precision:
With a ratio of 1,000, 99% of the precision is located in the nearest 10% of the view volume.
With a ratio of 10,000, 99% of the precision is located in the nearest 1% of the view volume.
With a ratio of 10,000 and a 16-bit depth buffer, only about 8 of the 65,536 possible values map to the far 50% of the view volume.
With a ratio of 100,000, 99% of the precision is located in the nearest 0.1% of the view volume.

And the distribution keeps going the same way as the ratio increases. As you can see, at a ratio of 10k, a 16-bit depth buffer is practically useless. I got these values from a Matlab script I wrote to simulate the depth buffer; they are estimated from a graph.
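For anyone who wants to reproduce those numbers, here is a small Java sketch in the same spirit as that Matlab script (my own reconstruction, using the standard mapping depth(d) = f * (d - n) / (d * (f - n)) given earlier in the thread):

public class DepthDistribution {
    public static void main(String[] args) {
        double near = 1.0;
        for (double ratio : new double[] {1000, 10000, 100000}) {
            double far = near * ratio;
            // Fraction of the [0,1] depth range used up by the nearest 10%,
            // 1% and 0.1% of the view volume (measured linearly from the near plane).
            for (double frac : new double[] {0.10, 0.01, 0.001}) {
                double d = near + frac * (far - near);
                double depth = far * (d - near) / (d * (far - near));
                System.out.printf("ratio %7.0f: nearest %5.1f%% of volume -> %.2f%% of depth values%n",
                                  ratio, frac * 100, depth * 100);
            }
        }
    }
}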

I have often thought that the solution to this problem might be to sort your geometry by distance, let's say into 2 (or more) bins: that which is farther than 1000 units away, and that which is 1000 units away or closer (for example). Then set your view frustum with a zNear of 1000 and a zFar of whatever huge number you need, and draw all the far geometry. Then reset the view frustum with a zNear of 0.5 and a zFar of 1000 and draw the close geometry. For certain applications this might be a viable solution.
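As a rough illustration of that binning (my own sketch; the Drawable and Renderer interfaces are hypothetical stand-ins, since the actual calls depend on whether you use Java3D or raw OpenGL):

import java.util.ArrayList;
import java.util.List;

public class DepthPartition {
    // Hypothetical: anything that knows its camera distance and can draw itself.
    interface Drawable {
        double distanceToCamera();
        void draw();
    }

    // Hypothetical renderer hooks for setting the frustum and clearing depth.
    interface Renderer {
        void setFrustum(double zNear, double zFar);
        void clearDepth();
    }

    // Two passes, far-to-near, each with its own near/far planes so the
    // near/far ratio stays small in both passes.
    static void render(List<Drawable> objects, Renderer r) {
        List<Drawable> nearBin = new ArrayList<Drawable>();
        List<Drawable> farBin = new ArrayList<Drawable>();
        for (Drawable d : objects) {
            if (d.distanceToCamera() > 1000.0) farBin.add(d); else nearBin.add(d);
        }
        r.setFrustum(1000.0, 1.0e7); // far pass: ratio 10^4
        for (Drawable d : farBin) d.draw();
        r.clearDepth();              // so near geometry always wins over far
        r.setFrustum(0.5, 1000.0);   // near pass: ratio 2000
        for (Drawable d : nearBin) d.draw();
    }
}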

Good luck.
-Mike

I think I understand the z-buffer now. That explanation makes a lot more sense.

About the solution: I want to avoid doing several render passes or messing with the view frustum. I don't know how much that would affect performance.

But I think a similar solution could be arranged by creating 9 containers for geometry. Here's my idea:

These are terrain tiles forming a 3x3 grid:

123
456
789

Suppose the ratio between the front/back planes is 1000 and all the containers together form a terrain region about 10,000,000 meters across.
Container 5 holds the geometry around the viewer; it is something like 500 meters in size and is centered on the entire region.
Container 1 is about 10,000,000/2 - 500 meters square, container 2 is 500 meters wide by 10,000,000/2 - 500 meters long, and so on.
By using LOD techniques I would ensure that, in every container, the LOD level selected for each object keeps the container under a maximum number of triangles to render.
The final trick would be to scale down containers 1, 3, 7, 9 both horizontally and vertically until they fit in a 500x500 square, and to scale down containers 2, 4, 6, 8 either horizontally or vertically to achieve the same objective.

I don't know which is faster: doing multi-pass rendering while changing the view frustum, or doing this.
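A quick sketch of how I would compute those scale factors, with the numbers from the example above:

public class TileScales {
    public static void main(String[] args) {
        double region = 1.0e7;               // whole terrain region, in meters
        double center = 500.0;               // center tile (around the viewer)
        double outer = region / 2 - center;  // size of each outer tile, as above
        double squeeze = center / outer;     // scale fitting an outer tile into 500 m
        System.out.printf("corner tiles 1,3,7,9: scale %.2e on both grid axes%n", squeeze);
        System.out.printf("edge tiles 2,4,6,8:   scale %.2e on one grid axis%n", squeeze);
        // After squeezing, the whole 3x3 grid fits in a 1500 x 1500 m box,
        // so a front/back ratio of ~1000 can cover it.
    }
}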
