Depth buffer resolution

Hi there,

can anybody help me get a formula that translates z-buffer resolution back to model-view coordinates? I.e. I’d like to calculate a 3D vector that, when added to some geometry in model space, offsets this geometry by one z-buffer unit in window space.

thanks, roland

What keeps you from using glPolygonOffset?

I don’t want to offset polygons. I want to offset line primitives. These line primitives are deliberately drawn on top of each other.

  1. Draw a line raster on screen.
  2. Draw some lines on top of this raster with some style.
  3. Draw even more lines on top of both with yet another line style (see the sketch below).

AFAIK, polygon offset only works for drawing polygons.
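For illustration, here’s a minimal sketch of the three passes, assuming an eye-space offset dz of at least one z-buffer unit (the value I’m asking for), an axis-aligned orthographic view down -z, and hypothetical drawGrid()/drawStyledLines() helpers:

#include <GL/gl.h>

void drawGrid(void);             /* hypothetical: draws the pass-1 raster */
void drawStyledLines(int style); /* hypothetical: draws one styled line set */

void drawLayers(double dz)
{
    glMatrixMode(GL_MODELVIEW);

    drawGrid();                        /* pass 1: the line raster */

    glPushMatrix();
    glTranslated(0.0, 0.0, dz);        /* pull pass 2 one z-buffer unit closer */
    drawStyledLines(1);                /* pass 2: styled lines on top */
    glPopMatrix();

    glPushMatrix();
    glTranslated(0.0, 0.0, 2.0 * dz);  /* pull pass 3 closer still */
    drawStyledLines(2);                /* pass 3: another style on top of both */
    glPopMatrix();
}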

The easiest thing would be to look up how a software OpenGL implementation like Mesa handles the “units” parameter of the glPolygonOffset call and invert the operations.
You need at least the glDepthRange parameters, the frustum’s znear and zfar values, and the number of depth buffer bits. Implementations may differ, so offset a little more to be on the safe side.
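For example, the non-frustum values can be read back from GL directly (a sketch; znear/zfar are whatever you passed to glFrustum/glOrtho yourself):

#include <GL/gl.h>

void queryDepthParams(GLint *depthBits, GLdouble depthRange[2])
{
    glGetIntegerv(GL_DEPTH_BITS, depthBits);   /* depth buffer precision in bits */
    glGetDoublev(GL_DEPTH_RANGE, depthRange);  /* current glDepthRange near/far */
}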

Note that OpenGL’s line rendering for different styles isn’t required to be invariant. If you want to draw colored stipple patterns on top of each other, make sure you don’t mix them with solid lines, or you might get artefacts.
Also, drawing a line from A to B is not necessarily the same as drawing it from B to A, due to the diamond-exit rule.

Hello,

What about using two triangles or a quad to draw a line?
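Something like this (a rough sketch for the 2D/orthographic case; drawLineAsQuad is a made-up helper):

#include <math.h>
#include <GL/gl.h>

void drawLineAsQuad(double x0, double y0, double x1, double y1, double width)
{
    double dx = x1 - x0, dy = y1 - y0;
    double len = sqrt(dx * dx + dy * dy);
    if (len == 0.0)
        return;

    /* unit perpendicular, scaled to half the desired line width */
    double px = -dy / len * 0.5 * width;
    double py =  dx / len * 0.5 * width;

    glBegin(GL_QUADS);
    glVertex2d(x0 + px, y0 + py);
    glVertex2d(x0 - px, y0 - py);
    glVertex2d(x1 - px, y1 - py);
    glVertex2d(x1 + px, y1 + py);
    glEnd();
}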

I checked SGI’s sample implementation. It maintains a minResolution value for the z-buffer (initialized upon context creation), and whenever a polygon is drawn, the polygon’s vertices are biased by polygonOffset.units * depthBuffer.minResolution.
So this seems to be a dead end - at least to me.

Using triangles/quads - hmmm, I don’t like this idea. But I’ll keep it in mind.

Meanwhile I found an article in the OpenGL FAQ:
http://www.opengl.org/resources/faq/technical/depthbuffer.htm,
Question 12.050. However, it only deals with perspective viewing, and I’m not familiar enough with the math to convert it to the parallel (orthographic) viewing case.

Alright. Also keep in mind that triangles and quads are usually hardware-accelerated, and often faster than lines on hardware. If you are using an orthographic projection, it will be even easier to use them. However, the tables may be turned for software implementations (in terms of speed).
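Incidentally, the orthographic case of your original question is the easy one, because window depth is linear in eye-space z. With glOrtho near/far distances n and f and the default glDepthRange(0,1),

z_win = (-z_eye - n) / (f - n)

so one z-buffer step of 1/(2^bits - 1) corresponds to a constant eye-space offset, independent of depth. A sketch:

#include <math.h>

/* Eye-space size of one z-buffer unit under an orthographic projection,
   assuming glOrtho with near/far distances n and f, the default
   glDepthRange(0,1), and a 'bits'-deep depth buffer. */
double orthoOneDepthUnitEye(double n, double f, int bits)
{
    double steps = pow(2.0, (double)bits) - 1.0;  /* addressable depth steps */
    return (f - n) / steps;
}

Offset your geometry by that amount (times a small safety factor, per the earlier advice) toward the viewer, i.e. along +z in eye space.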

Dorbie just posted some useful links here: http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=3;t=012358

More specifically, to go from depthbuffer z-value to eye space z, you have 2 steps:

  1. Remap depth buffer z to normalized device coordinates (NDC), which have a range of [-1,1]. Assuming you have the default depth buffer range of [0,1], this comes out to

NDCz = bufferz * 2.0 - 1.0;

  2. Convert NDCz to eyez. The equation for going from eyez to NDCz is available, so you just have to invert it and you get this:

eyez = (2.0 * f * n) / ((f - n) * NDCz - (f + n))

where f = far clip plane distance and n = near clip plane distance (both positive).
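Putting both steps together (a sketch assuming the default glDepthRange(0,1); n and f are the positive distances you pass to glFrustum/gluPerspective):

#include <math.h>

double depthToEyeZ(double bufferz, double n, double f)
{
    double NDCz = bufferz * 2.0 - 1.0;                  /* step 1: window depth -> NDC */
    return (2.0 * f * n) / ((f - n) * NDCz - (f + n));  /* step 2: NDC -> eye space */
}

/* Eye-space size of one z-buffer unit at a given depth; note that with a
   perspective projection this grows with distance from the near plane. */
double oneDepthUnitEye(double bufferz, double n, double f, int bits)
{
    double step = 1.0 / (pow(2.0, (double)bits) - 1.0);  /* one z-buffer increment */
    return fabs(depthToEyeZ(bufferz + step, n, f) - depthToEyeZ(bufferz, n, f));
}

As a sanity check, depthToEyeZ(0.0, n, f) gives -n and depthToEyeZ(1.0, n, f) gives -f, as expected for eye space (which looks down -z).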