How to get depth into a fragment program?

I’m trying to draw a scene in a first pass and then draw a full-screen quad that modifies the color based on depth (fog-like effects).

The problem is creating a depth texture from the original scene that can be accessed as a real float depth in the second pass.

Note: I’m trying to get this working on a GeForce FX, so NVIDIA-only methods are OK.

glCopyTexImage2D with GL_DEPTH_COMPONENT works, except that sampling the texture in the fragment program only gives 8-bit accuracy, not a float (shadow-map comparisons would work at full accuracy, but I need the depth itself, not a shadow compare result).
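For reference, the copy itself is just this (a minimal sketch; the texture object, the 1024x1024 size, and the internal format are placeholders for my actual setup):

    /* copy the scene's depth buffer into a depth texture */
    glBindTexture(GL_TEXTURE_2D, depthTex);
    /* shadow compares off, so the fragment program sees depth values */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB,
                     0, 0, 1024, 1024, 0);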

glReadPixels with GL_DEPTH_COMPONENT/GL_FLOAT and glTexImage2D with GL_FLOAT_R_NV works, but is slow.
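For the record, that slow path looks roughly like this (the texture object and the 1024x1024 size are placeholders, and as far as I can tell the NV_float_buffer formats only work with the rectangle target):

    /* read the depth buffer back as floats -- this is the part that kills
       performance, since the data crosses the bus twice */
    static GLfloat depthData[1024 * 1024];
    glReadPixels(0, 0, 1024, 1024, GL_DEPTH_COMPONENT, GL_FLOAT, depthData);

    /* upload into a single-channel float texture */
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, floatDepthTex);
    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R_NV,
                 1024, 1024, 0, GL_RED, GL_FLOAT, depthData);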

What would also work is if I could access the GL_DEPTH_COMPONENT texture as RGBA in the fragment program. Then a simple DP4 could be used to convert the depth data (which I assume is stored in an unsigned integer or similar format) to a full-accuracy float. I tried this by doing a glReadPixels with GL_DEPTH_COMPONENT/UNSIGNED_INT_24_8_NV followed by a glTexImage2D with GL_RGBA/GL_UNSIGNED_BYTE. Works great, except it’s slow because the data again goes over the bus.
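The conversion itself would be trivial -- something like this sketch in ARB_fragment_program syntax, assuming the most significant depth byte ends up in R (the byte order and exact weights depend on how the driver lays out UNSIGNED_INT_24_8_NV in the RGBA bytes, so treat the constants as approximate):

    /* reconstruct a ~24-bit depth value from the RGBA bytes of the texture */
    static const char reconstructFp[] =
        "!!ARBfp1.0\n"
        "TEMP texel, zval;\n"
        "TEX  texel, fragment.texcoord[0], texture[0], 2D;\n"
        /* depth ~= R + G/256 + B/65536 (ignores the small 255-vs-256 error) */
        "DP3  zval.x, texel, { 1.0, 0.00390625, 0.0000152587890625 };\n"
        "MOV  result.color, zval.x;\n"
        "END\n";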

I know I could render to a separate float pbuffer and store the depth there as a float. But this adds an extra pass, and there is a lot of geometry.

The functionality I’m looking for sounds like something that would be very useful, so I’m hoping there is a way and I’m just overlooking it. Any help would be really appreciated!

(EDIT: deleted useless junk, shouldn’t reply before having properly read the post)

Use fragment.position.z?

– Tom

[This message has been edited by Tom Nuydens (edited 04-16-2003).]

I guess I didn’t explain it too well.

In the first pass I want to draw a normal scene writing to color & Z.

In the second pass I want to modify the colors based on the Z stored in the depth buffer in the first pass.

The purpose is to calculate fog/visibility effects in a separate pass, where I would calculate distance/direction to each pixel and apply effects based on that. Direction comes directly from pixel coordinates, distance would come from the depth buffer (or depth texture, since reading the depth buffer directly is probably too much to hope for).

fragment.position.z would work if I drew all the geometry again in the second pass. But the whole purpose of this is to avoid having to process the geometry twice.

Hi,

I have a similar problem, but instead of using a fragment program (which in OpenGL is only available on GeForce FX chips), I need to know the fragment’s depth (more precisely, the distance of a fragment from the light source) in a register combiner program.

I have tried to calculate the distance per vertex in a vertex program and store the result in o[COL0], clamping the distance to the range [0,1]. My question is: when I use the col0 register in the combiner program, does it contain the correctly interpolated (and [0,1]-clamped) depth of the fragment? The result I get looks rather strange. Is there a mistake in my approach, or have I made an error in reasoning?
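To be concrete, the value I compute per vertex is essentially the following (written here as plain C for clarity -- my real code is an NV vertex program, and lightPos/maxDist stand in for my actual parameters):

    #include <math.h>

    /* value that ends up in o[COL0] for one vertex (vx, vy, vz) */
    float vertex_light_distance(float vx, float vy, float vz,
                                const float lightPos[3], float maxDist)
    {
        float dx = vx - lightPos[0];
        float dy = vy - lightPos[1];
        float dz = vz - lightPos[2];
        float d  = sqrtf(dx * dx + dy * dy + dz * dz) / maxDist;
        return d > 1.0f ? 1.0f : d;   /* clamp to [0,1] */
    }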

Mako

Nitpick: Radeon 9500s and better can do fragment programs, too...

Back on topic: You could render depth to a texture then perform an orthographic projection onto the scene from the camera position. That way, each projected depth pixel -should- land on the position that it represents – sorta like how depth shadow mapping is done.

[This message has been edited by Ostsol (edited 04-16-2003).]

> The purpose is to calculate fog/visibility effects in a separate pass, where I would calculate distance/direction to each pixel and apply effects based on that. Direction comes directly from pixel coordinates, distance would come from the depth buffer (or depth texture, since reading the depth buffer directly is probably too much to hope for).

First of all, the z-depth of a pixel is not the distance from the eye. As such, using it for fog is not the best way to go about fogging.

Second, if all you want to do is use the z-depth as the distance, two passes are not necessary and waste fill rate. A fragment program can access the depth value and use it in a fog computation.

If you want the direction to factor into the computation, this can happen in one pass as well. Simply interpolate the direction to the eye as a texture coordinate, and use that in your computations.
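For example, a sketch of the single-pass version in ARB_fragment_program syntax (the fog falloff and parameter layout here are just placeholders; the idea is to pass the unnormalized vector from the eye to the vertex in texcoord 1, so its interpolated length is the per-fragment distance):

    static const char fogFp[] =
        "!!ARBfp1.0\n"
        "PARAM fogColor  = program.local[0];\n"
        "PARAM fogParams = program.local[1];\n"   /* x = 1/fog distance, say */
        "TEMP  base, dist, fog;\n"
        "TEX   base, fragment.texcoord[0], texture[0], 2D;\n"
        /* length of the interpolated eye-to-fragment vector */
        "DP3   dist.x, fragment.texcoord[1], fragment.texcoord[1];\n"
        "RSQ   dist.x, dist.x;\n"
        "RCP   dist.x, dist.x;\n"
        /* simple linear fog factor; swap in whatever falloff you like */
        "MUL_SAT fog.x, dist.x, fogParams.x;\n"
        "LRP   result.color, fog.x, fogColor, base;\n"
        "END\n";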

Sorry I didn’t make this clearer. When I said to use fragment.position.z, that obviously implied collapsing everything to a single pass.

BTW, what makes you think that sampling a depth texture in a fragment program only gives you 8-bit precision? That sounds very unlikely, as it would make any fragment program technique that involves depth textures completely worthless!

– Tom

Ostsol: I could render a separate depth texture, but that would mean rendering the geometry twice. And since the info is already in the depth buffer and can be easily copied to a shadow texture, it seems such a small step to get it out of there.

Korval: I know it’s 1-1/Z or somesuch, but it’s possible to convert that back to real Z or even world space in a few instructions. I guess I’ll have to try the single-pass approach next, as you suggested.
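(For reference, with a standard perspective projection and the default [0,1] depth range, the conversion I mean is roughly this -- n and f being the near/far plane distances:)

    /* recover eye-space Z (distance along the view axis, not the radial
       distance) from a window-space depth d in [0,1]; assumes a standard
       glFrustum/gluPerspective projection and glDepthRange(0, 1) */
    float eye_z_from_window_depth(float d, float n, float f)
    {
        return (n * f) / (f - d * (f - n));
    }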

Tom: I also expected to get a float from a depth texture. What I did was read from the texture into a float register, then multiply the value by 16 and output it to color. There were only 16 shades in the image, so it looks like the original was 8-bit. I think the spec says that if shadow testing is not enabled, depth textures are treated as luminance (or similar) textures, and those are typically 8-bit, so this might be the correct (but unfortunate) behavior. The Z info is in 24-bit unsigned format, so it is not directly usable as a float (although the conversion would be simple).

If somebody has gotten a true float by reading a depth texture, I’d like to know! It is possible I’m doing something stupid which disables this capability.

These are the reasons I’m trying to get dual pass working:

  1. Fog calculations are only done for visible pixels (useful when there is a lot of overdraw).

  2. It would make it easy to combine various techniques when drawing the original scene; going to one pass means converting everything to fragment programs.

  3. Volumetric/localized fog might be easier (although I was going to start with just layered/radial fog). Instead of each fragment having to check whether it is inside any of the many fog volumes, I could just draw each volume once in the second pass and use the depth-buffer info to see how much of the fog is in front of each pixel.

Ah, you’re right – re-reading the ARB_depth_texture spec, I see that it explicitly warns you that using a depth texture as GL_LUMINANCE can (and probably will) cost you precision.

I guess one possible workaround would be to use NV_pixel_data_range to get the Z-buffer, put that (raw) data in an RGB texture, and reconstruct the Z value from the RGB in the fragment program. I don’t know whether this will still be fast enough, though.
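Roughly like this (untested, and the NV_packed_depth_stencil read format plus the wglAllocateMemoryNV parameters are guesses from memory -- check the specs; the texture object and sizes are placeholders):

    /* allocate AGP/video memory for the pixel data range */
    GLubyte *pdrMem = (GLubyte *) wglAllocateMemoryNV(1024 * 1024 * 4,
                                                      0.0f, 1.0f, 1.0f);
    glPixelDataRangeNV(GL_READ_PIXEL_DATA_RANGE_NV, 1024 * 1024 * 4, pdrMem);
    glEnableClientState(GL_READ_PIXEL_DATA_RANGE_NV);

    /* read the packed 24/8 depth/stencil into the range... */
    glReadPixels(0, 0, 1024, 1024, GL_DEPTH_STENCIL_NV,
                 GL_UNSIGNED_INT_24_8_NV, pdrMem);
    /* (a fence/glFlushPixelDataRangeNV may be needed here -- see the spec) */

    /* ...and re-upload it as plain RGBA bytes for the fragment program */
    glBindTexture(GL_TEXTURE_2D, rgbaDepthTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 1024,
                    GL_RGBA, GL_UNSIGNED_BYTE, pdrMem);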

If not, something else you could consider is to render the scene to a floating-point RGBA texture, and to put the depth in the alpha channel. You can then do your second pass (fullscreen quad) pretty much exactly like you do now. This gives you items (1) and (3) on your list, but not (2) – you’d have to use a fragment program while drawing your scene the first time.
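The first-pass fragment program would then just write depth into alpha alongside the normal color -- something like this, where the shading is a stand-in and only the last MOV matters:

    static const char firstPassFp[] =
        "!!ARBfp1.0\n"
        "TEMP col;\n"
        /* stand-in shading: plain texture modulated by primary color */
        "TEX col, fragment.texcoord[0], texture[0], 2D;\n"
        "MUL col, col, fragment.color;\n"
        "MOV result.color.xyz, col;\n"
        /* stash window-space depth in alpha for the fog pass */
        "MOV result.color.w, fragment.position.z;\n"
        "END\n";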

– Tom