Depth Images

Something that has always interested me, and at the same time confused me, is using images with depth information to replace geometry in certain places, like background rendering, or for volumetric-type effects.

What has always confused me, though, is this: your depth image and depth buffer are essentially LUMINANCE buffers, storing values on [0.0, 1.0]. The depth buffer is filled (depth mask permitting) with values derived from the projection matrix, such that objects *barely* not culled end up at ~0.0 or ~1.0.
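If I understand it correctly, the mapping looks roughly like this. Just a sketch of the math, assuming a gluPerspective-style projection and the default glDepthRange(0, 1); `window_depth` is a name I made up for the example:

```c
/* Sketch: eye-space distance -> window-space depth for a gluPerspective-style
 * projection with the default glDepthRange(0, 1).  'dist' is the positive
 * distance in front of the eye, 'n' and 'f' are the near and far clip
 * distances.  Gives ~0.0 at the near plane, ~1.0 at the far plane, and is
 * non-linear in between. */
float window_depth(float dist, float n, float f)
{
    float z_ndc = (f + n) / (f - n) - (2.0f * f * n) / ((f - n) * dist);
    return 0.5f * z_ndc + 0.5f;
}
```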

Say you have a depth image of a sphere, which is visually a grayscale image with intensities ranging from 0.0 to 1.0. If you were to render it as a depth image, how would the [0.0, 1.0] range of the image be converted into the depth buffer? Is it a straight copy? Are the values changed by the depth at which you render (for instance, two quads with two different eye Z values)? (I doubt this is true, unless you can do mipmapping with depth images.)
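Concretely, the kind of depth image I'm imagining could be built something like this. Purely hypothetical; `make_sphere_depth_image` is just a name for the sake of the example:

```c
#include <math.h>

/* Hypothetical example of the depth image described above: a w x h float
 * array on [0.0, 1.0], nearest (~0.0) at the sphere's centre, ~0.5 at its
 * silhouette, and 1.0 (far plane / background) outside it. */
void make_sphere_depth_image(float *img, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float u = 2.0f * x / (w - 1) - 1.0f;       /* [-1, 1] across the image */
            float v = 2.0f * y / (h - 1) - 1.0f;
            float r2 = u * u + v * v;
            img[y * w + x] = (r2 <= 1.0f)
                ? 0.5f * (1.0f - sqrtf(1.0f - r2))     /* front half of a unit sphere */
                : 1.0f;                                /* background at the far plane */
        }
    }
}
```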

I would very much like to play around with depth images, as they are one of the more interesting aspects of OpenGL that I have yet to play with, and I can think of some really cool effects to accomplish with them.

(For the sake of discussion, if you're going to say, "Dude, just use fragment programs, they do everything," let's pretend I don't have fragment programs. I learn everything the straight OGL way before I translate it into a fragment program.)

Without shaders, I think the only way is to use glDrawPixels with GL_DEPTH_COMPONENT, in which case the pixel transfer depth scale and bias have an effect, plus the current raster position.
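Roughly like this (untested sketch; `splat_depth_image`, `depth_image`, `w`, `h`, `x`, `y` are just placeholders for your own image and position, and glWindowPos2i assumes GL 1.4+):

```c
#include <GL/gl.h>

/* Untested sketch: write a w x h array of floats on [0.0, 1.0] straight into
 * the depth buffer at window position (x, y).  The values are copied as-is
 * (apart from the pixel-transfer scale/bias below); they are NOT re-projected
 * by whatever eye-space Z you imagine the image sitting at. */
void splat_depth_image(const GLfloat *depth_image, GLsizei w, GLsizei h,
                       GLint x, GLint y)
{
    glEnable(GL_DEPTH_TEST);                               /* depth writes need the test enabled */
    glDepthFunc(GL_ALWAYS);                                /* or GL_LESS to test against what's there */
    glDepthMask(GL_TRUE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   /* touch the depth buffer only */

    glPixelTransferf(GL_DEPTH_SCALE, 1.0f);                /* defaults: a straight copy ...        */
    glPixelTransferf(GL_DEPTH_BIAS,  0.0f);                /* ... change these to remap the values */

    glWindowPos2i(x, y);                                   /* GL 1.4+; glRasterPos works too */
    glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth_image);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}
```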

> (I doubt this is true, unless you can do mipmapping with depth images.)
Nope, I’m talking about glDrawPixels.
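If you do want to "place" the image nearer or farther, the closest thing I know of without shaders is the pixel transfer depth bias. Again just a sketch, reusing the same placeholder names as above:

```c
/* Sketch: add 0.25 to every incoming depth value (it is then clamped to
 * [0.0, 1.0] before being written), which is about as close as glDrawPixels
 * gets to "drawing the same image at a different depth". */
glPixelTransferf(GL_DEPTH_SCALE, 1.0f);
glPixelTransferf(GL_DEPTH_BIAS,  0.25f);
glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, depth_image);
```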

Dude just use fragment programs they do everything.

:smiley: