Calculating the camera -> pixel distance

I need to reconstruct the real distance from the camera to each rendered pixel (i.e., NOT just the value stored in the z-buffer) for a holography application. Is there any way to get the graphics board to do the calculation for me? I need 24 or 32 bit precision.

Scan conversion hardware is likely not going to do the job for you. You might be able to come up with some special register combiner sequence to do it, but I doubt it.

This sort of thing pretty much requires ray tracing.

In the worst case, I can just read back the z-buffer and calculate dist = z * sqrt(x^2 + y^2 + 1), where the center of the screen is at x = y = 0, using a lookup table for the square-root factor. It seems like there should be some way to exploit the hardware to calculate this function.
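For reference, a rough sketch of what that per-pixel lookup table could look like (the names are mine, not anything standard; it assumes a symmetric perspective frustum described by fovY and aspect, and it expects z to already be a linear eye-space distance along the view axis, not the raw z-buffer value):

#include <cmath>
#include <vector>

// Precompute sqrt(x^2 + y^2 + 1) for every pixel, with (x, y) being
// view-plane coordinates and the center of the screen at x = y = 0.
// fovY is the vertical field of view in radians.
std::vector<float> buildRadialTable(int width, int height,
                                    float fovY, float aspect)
{
    std::vector<float> table(width * height);
    const float tanY = std::tan(fovY * 0.5f);
    const float tanX = tanY * aspect;

    for (int py = 0; py < height; ++py) {
        for (int px = 0; px < width; ++px) {
            // Map the pixel center to [-1, 1], then onto the view plane at z = 1.
            float x = ((px + 0.5f) / width  * 2.0f - 1.0f) * tanX;
            float y = ((py + 0.5f) / height * 2.0f - 1.0f) * tanY;
            table[py * width + px] = std::sqrt(x * x + y * y + 1.0f);
        }
    }
    return table;
}

dist for a pixel would then be its linear eye-space depth multiplied by the corresponding table entry.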

It’s not that easy due to some non-linear property of the z-buffer. Don’t ask me why, though…

There may be a z-scale related to the field of view. I think a 90 degree (45?) field of view would be neutral (no scale).

Originally posted by Michael Steinberg:
It’s not that easy due to some non-linear property of the z-buffer. Don’t ask me why, though…

The Z buffer has far more precision for nearby fragments than it does for far away ones. This is because of the perspective division:

D = Z / W

where D is the fragment depth written to the Z buffer, Z is the incoming depth value of the point that you’re rendering (in eye space, i.e. in the [zNear, zFar] range), and W is the W coordinate of the point that you’re rendering.

Usually, W will be 1, but after multiplying the incoming point with the projection matrix, this will no longer be the case.

If you want to see for yourself, plot a graph of Z/W values for points that you multiply with the projection matrix, with Z going from zNear to zFar.
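For instance, a quick standalone sketch of that graph (made-up zNear/zFar values; the standard glFrustum/gluPerspective depth mapping is assumed):

#include <cstdio>

// Print the window-space depth d for eye-space distances between zNear
// and zFar, showing how the mapping bunches up near zFar.
int main()
{
    const float zNear = 1.0f, zFar = 100.0f;               // assumed planes
    for (int i = 0; i <= 10; ++i) {
        float dist = zNear + (zFar - zNear) * i / 10.0f;   // eye-space distance
        float ndcZ = (zFar + zNear) / (zFar - zNear)
                   - 2.0f * zFar * zNear / ((zFar - zNear) * dist);
        float d    = 0.5f * (ndcZ + 1.0f);                 // depth-buffer value
        std::printf("dist = %6.2f  ->  d = %.6f\n", dist, d);
    }
    return 0;
}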

The solution is to multiply with a projection matrix as used for W-buffering. This will create depth values that go linearly from zNear to zFar.

I have some code to illustrate this (with graphs). If anyone wants it, I’ll clean it up and dump it on my site.

Returning to the original question, you can use what I just explained to calculate actual fragment depth, but I don’t know if you can just get OpenGL to calculate it for the entire framebuffer. The problem is that the GL will clamp your values to the depth range. It might work if you set glDepthRange(zNear, zFar), but I’m not sure.
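As a rough CPU-side sketch of that reconstruction (assuming a standard glFrustum/gluPerspective projection and the default glDepthRange(0, 1); readLinearEyeDepth is just a name made up here):

#include <GL/gl.h>
#include <vector>

// Read back the depth buffer and undo the perspective depth mapping.
// For a standard perspective projection with the default depth range,
// a window-space depth d in [0, 1] corresponds to the eye-space distance
//   eyeZ = zNear * zFar / (zFar - d * (zFar - zNear))
// measured along the view axis.
std::vector<float> readLinearEyeDepth(int width, int height,
                                      float zNear, float zFar)
{
    std::vector<float> depth(width * height);
    glReadPixels(0, 0, width, height,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth[0]);

    for (size_t i = 0; i < depth.size(); ++i)
        depth[i] = zNear * zFar / (zFar - depth[i] * (zFar - zNear));

    return depth;
}

Multiplying each value by sqrt(x^2 + y^2 + 1) for its pixel (the lookup table idea from earlier in the thread) then gives the radial camera -> pixel distance.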

  • Tom

Hmm… I just had another wacky idea that might actually work.

Clear the framebuffer to white, and set the current color to black. Draw your scene with no textures, no lighting and no blending. Enable range-based linear white fog so that the amount of fog goes from 0% to 100% between zNear and zFar.

The framebuffer now contains the distance between each pixel and the eye?
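Something like this, as a rough sketch of the fixed-function setup (zNear, zFar and drawScene are placeholders for whatever your app uses):

// White linear fog over black geometry: the fog factor becomes the pixel value.
GLfloat white[4] = { 1.0f, 1.0f, 1.0f, 1.0f };

glClearColor(1.0f, 1.0f, 1.0f, 1.0f);               // clear to white
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glDisable(GL_TEXTURE_2D);                           // no textures
glDisable(GL_LIGHTING);                             // no lighting
glDisable(GL_BLEND);                                // no blending
glColor3f(0.0f, 0.0f, 0.0f);                        // draw everything black

glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_LINEAR);                     // linear fog
glFogfv(GL_FOG_COLOR, white);                       // white fog color
glFogf(GL_FOG_START, zNear);                        // 0% fog at zNear
glFogf(GL_FOG_END, zFar);                           // 100% fog at zFar
// NOTE: whether fog distance is radial or planar is implementation-dependent.

drawScene();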

  • Tom

Well, the problem is that you only get precision up to your color depth, because no matter what you select your fog color to be, all 3 channels get interpolated in parallel. If you could somehow set your fog distance parameter to eye_radial AND index a 1D texture, then you can get that extra precision. I have no idea how this might be accomplished, however…

–Won

Oops. Forgot about the precision issue.

Originally posted by Won:
If you could somehow set your fog distance parameter to eye_radial AND index a 1D texture, then you can get that extra precision.

If I understand you correctly, your idea is to use the fog percentage of a given pixel as a 1D texture coordinate? This could be done with a vertex program, but how would that give you extra precision? Aren’t you still stuck with R, G, B and A being separately interpolated at 8-bit precision?

The depth buffer (24 bits) and the accumulation buffer (16 bits) are the only places I can think of where you can really store numbers with more than 8-bit precision. Those, and system RAM, of course.

  • Tom

It’s kinda weird to explain, but you can have a 1D texture that’s a “rainbow” where each color (not each color component) corresponds to a different depth. A naive implementation would be able to distinguish as many depths as the max 1D texture size (what is this, typically?).

You can try to be more clever by taking advantage of interpolation between “unique” texture colors to get extra precision, or just to save texture memory.
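As a rough sketch of building such a texture (the color pattern here is just one arbitrary way of giving each texel a distinct color; rainbowTex would be a texture id generated with glGenTextures beforehand):

#include <GL/gl.h>
#include <vector>

GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);        // the size limit in question

int size = (maxSize < 256) ? maxSize : 256;          // keep the demo small
std::vector<GLubyte> texels(size * 3);
for (int i = 0; i < size; ++i) {
    // Give every texel a distinct color; readback is decoded with the
    // inverse of this table.
    texels[i * 3 + 0] = (GLubyte)(i & 0xFF);
    texels[i * 3 + 1] = (GLubyte)((i * 7) & 0xFF);
    texels[i * 3 + 2] = (GLubyte)((i * 31) & 0xFF);
}

glBindTexture(GL_TEXTURE_1D, rainbowTex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // no blending
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);  // between colors
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB8, size, 0,
             GL_RGB, GL_UNSIGNED_BYTE, &texels[0]);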

–Won

How about setting the primary color of every vertex according to the distance?

vertexColR = vertexDistanceBits0To7;
vertexColG = vertexDistanceBits8To15;
vertexColB = vertexDistanceBits16To23;

Draw the scene with smooth shading, no lighting, no textures, no fog, no nothing

That way you’d only have to do distance computation for each vertex, then read the interpolated distance out of the frame buffer and decode the distance value from the color.
I’m not sure how perspective correction influences that, though… am I overlooking something?
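A rough sketch of that encode/decode (maxDist is my own normalization constant, not something from the thread):

#include <cmath>

// Encode a camera -> vertex distance into an RGB triple (8 bits per channel).
// normalized = distance / maxDist, assumed to lie in [0, 1).
// Caveat: smooth shading interpolates each channel separately, so carries
// between the bytes are lost for interpolated fragments.
void encodeDistance(float normalized, unsigned char rgb[3])
{
    unsigned int fixedPoint = (unsigned int)(normalized * 16777215.0f); // 2^24 - 1
    rgb[0] = (unsigned char)( fixedPoint        & 0xFF);   // bits 0..7
    rgb[1] = (unsigned char)((fixedPoint >> 8)  & 0xFF);   // bits 8..15
    rgb[2] = (unsigned char)((fixedPoint >> 16) & 0xFF);   // bits 16..23
}

// Decode a pixel read back from the framebuffer.
float decodeDistance(const unsigned char rgb[3], float maxDist)
{
    unsigned int fixedPoint = (unsigned int)rgb[0]
                            | ((unsigned int)rgb[1] << 8)
                            | ((unsigned int)rgb[2] << 16);
    return maxDist * (float)fixedPoint / 16777215.0f;
}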

I guess you don’t need that 1d texture read after all. This would be a good use for a vertex program (emulated, even), tho.

–Won

Originally posted by Dodger:
How about setting the primary color of every vertex according to the distance?

vertexColR = vertexDistanceBits0To7;
vertexColG = vertexDistanceBits8To15;
vertexColB = vertexDistanceBits16To23;

Draw the scene with smooth shading, no lighting, no textures, no fog, no nothing

That way you’d only have to do distance computation for each vertex, then read the interpolated distance out of the frame buffer and decode the distance value from the color.
I’m not sure how perspective correction influences that, though… am I overlooking something?

Yeah, because the interpolation will be done in parallel, channel by channel… it would break that 24-bit value.
You can’t decode it again.

If I’m mistaken, we could toss in the alpha channel and get 32-bit resolution.

You could use a 3D texture with a precalculated value of sqrt(s^2 + t^2 + r^2) instead of your 1D texture. Then you just have to generate your texture coordinates equal to the axes (s = x, t = y, r = z).

You can also use multitexturing with a 2D texture containing the values s^2 + t^2 and a 1D texture with r^2, and add the textures. Then you have to do the sqrt() manually.

With the 3D texture you can get up to 32 bit precision (when using RGBA textures). I’m not sure about the multitexture approach, because the color components will get clamped independently, not as a single 32 bit number.
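As a rough sketch of the 3D texture variant (assuming glTexImage3D is available, i.e. GL 1.2 or EXT_texture3D; N and maxDist are made-up parameters, and this simple version only stores 8 bits per texel via a luminance texture):

#include <GL/gl.h>
#include <cmath>
#include <vector>

// Build an N^3 luminance volume containing sqrt(x^2 + y^2 + z^2).
// The s/t axes cover eye-space x/y in [-maxDist/2, maxDist/2]; the r axis
// covers the distance along the view axis (-z in eye space) in [0, maxDist].
const int N = 64;
const float maxDist = 100.0f;                        // assumed scene extent

std::vector<GLubyte> volume(N * N * N);
for (int r = 0; r < N; ++r)
    for (int t = 0; t < N; ++t)
        for (int s = 0; s < N; ++s) {
            float x = (s / (float)(N - 1) - 0.5f) * maxDist;
            float y = (t / (float)(N - 1) - 0.5f) * maxDist;
            float z = (r / (float)(N - 1)) * maxDist;
            float d = std::sqrt(x * x + y * y + z * z);
            // Normalize the distance into the 8-bit texel range.
            volume[(r * N + t) * N + s] =
                (GLubyte)(255.0f * d / (maxDist * std::sqrt(1.5f)));
        }

glEnable(GL_TEXTURE_3D);
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, N, N, N, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, &volume[0]);

// Generate (s, t, r) from eye-space (x, y, z).  Eye-linear planes are
// transformed by the inverse modelview at specification time, so set
// them while the modelview matrix is the identity.
const GLfloat sPlane[4] = { 1.0f / maxDist, 0.0f, 0.0f, 0.5f };
const GLfloat tPlane[4] = { 0.0f, 1.0f / maxDist, 0.0f, 0.5f };
const GLfloat rPlane[4] = { 0.0f, 0.0f, -1.0f / maxDist, 0.0f }; // -z: view axis
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGenfv(GL_S, GL_EYE_PLANE, sPlane);
glTexGenfv(GL_T, GL_EYE_PLANE, tPlane);
glTexGenfv(GL_R, GL_EYE_PLANE, rPlane);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);

Getting more than 8 bits out of that lookup would mean spreading the distance across the RGBA channels, along the lines of the color-coding ideas above.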