Depth texture usage and multitexturing

I’m rendering 512x512 colour and depth data from an external source. Right now I’m trying to use multitexturing to get both the depth and colour data into the frame buffer. I create a colour texture in GL_TEXTURE0_ARB and a depth texture in GL_TEXTURE1_ARB.
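In case it matters, the setup looks roughly like this (colourData and depthData stand in for the arrays I get from the external renderer):

GLuint tex[2];
glGenTextures(2, tex);

glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, colourData);

glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 512, 512, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, depthData);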

Right now, I’m getting the colour buffer to draw ok, but the depths aren’t making it into the frame buffer. How can I get the depths from the depth texture into the framebuffer? (If I try to use the same method as the colour texture: a quad and glMultiTexCoord, then I just get depth values equal to whatever Z-coord I set for the quad.)
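The quad pass is just this, which is why every fragment ends up with the quad’s Z rather than the texture’s depths:

glBegin(GL_QUADS);
glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, z);
glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, z);
glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 1.0f);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 1.0f);
glVertex3f( 1.0f,  1.0f, z);
glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 1.0f);
glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 1.0f);
glVertex3f(-1.0f,  1.0f, z);
glEnd();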

It looks like I could at least get the depths into alpha values by setting:

 glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_ALPHA) 

and then I could use alpha test instead of depth test. But I really want it in the depth buffer.
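For the record, the alpha-test route would look something like this (untested, and referenceDepth is just a placeholder for a single fixed threshold, which is much weaker than a real per-pixel depth test):

glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_ALPHA);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_LESS, referenceDepth); /* pass only fragments nearer than the threshold */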

I know of no way to do this. If you rasterize a primitive, you’ll get the primitive’s depth. If you use draw pixels, you’ll get the raster position’s depth. It’s a no-win scenario. This may change in the future. What are you trying to do that you need to do this?

For the second part of the question, I think you’re just a bit confused about the meaning of GL_DEPTH_TEXTURE_MODE. That really just gives the depth texture a viewable format, for debugging and whatnot. Here’s the whole song and dance:

http://oss.sgi.com/projects/ogl-sample/registry/ARB/depth_texture.txt

:)

Thanks for the answer.

To answer your question: my ultimate goal is to render the back portion of my background scene opaque and the front translucent, and draw three pipe-like models into the result. The models may appear in front of the translucent part, behind the opaque part or anything in between.

The big constraint is that my scene is coming from an external renderer, and I get output arrays with colour and depth data for a single view at a time (i.e. no information about anything that’s obscured).

To get enough information to support translucency I’ll need two passes from my renderer, one for the solid portion and one for the translucent portion. I’m looking for the best way to combine the two scenes with the models such that the depths of the models are handled correctly when it’s all put together.

There’s no problem drawing my solid background and the models. I’ve been trying to find a way to draw the translucent part such that it only blends with things that are actually behind it. The background should always be ok, but some portion of the models may be in front of the translucent part.

I was hoping there was something tricky I could do with textures to control the blend based on the relative depths of the models and the translucent parts of my scene.

I guess I can always draw the translucent part to a separate pbuffer and then glCopyPixels into the frame buffer with blend and depth test enabled, but I’m not happy about what that’ll do to performance - especially since I’ll have to redo that copy whenever the models move.

Ah, poking around with probes in other people’s bodies again? ;)
You need the depth information of the transparent front or this won’t work!
If you have the depth information of both the background and the foreground, you can use a stencil algorithm to mask the “inside” space of your probes.
The key here is the glStencilOp zfail and zpass operations. With these you can count whether the probe passes the depth test against the background depth in a first drawing pass and fails the depth test against the front depth in a second pass.
In the final drawing you can choose the stencil comparison to select the “probe inside” region and blend the front with the probe only there.
The composition should be: draw the background (color and depth), draw the probes with depth test and stencil test in the inside area, disable the stencil test, then draw the transparent front. Now you have two methods for the probes’ outside part: either move the front depth into the depth buffer and draw the probes (outside, no blending), or keep the depth buffer from the background and draw the probes without blending, with the stencil test masking out the previously rendered inside. As the stencil has already decided which parts of the probe are in front or behind, this should work in both cases.
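Roughly, the counting could look like this (an untested sketch; drawProbes(), drawTransparentFront() and loadFrontDepth() are placeholders, with loadFrontDepth() standing for whatever gets the front depth into the depth buffer, e.g. the fragment program idea below):

/* Background color and depth are already in the framebuffer. */
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);

/* Pass 1: increment where the probes pass against the background depth. */
glStencilFunc(GL_ALWAYS, 0, ~0u);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);   /* INCR on zpass */
drawProbes();

/* Pass 2: front depth in the depth buffer, increment where the probes fail. */
loadFrontDepth();
glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);   /* INCR on zfail */
drawProbes();

/* Stencil == 2 now marks "probe inside": in front of the background
   but behind the translucent front, so blend the front only there. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 2, ~0u);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTransparentFront();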
Happy probing.

BTW, I’ve not tried it yet, but with fragment programs you can write both color and depth, so if you have a suitable GL_DEPTH_COMPONENT24 texture set up, you can write the depth data from there into the depth buffer.
A fragment shader with access to both the front and back color and depth textures simultaneously should be able to render the probes inside and outside in one pass.
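Untested, but I’d imagine something along these lines (assumes ARB_fragment_program, with GL_DEPTH_TEXTURE_MODE set to GL_LUMINANCE so the depth value arrives in the first component):

/* Hypothetical fragment program: color from unit 0, depth from unit 1,
   written out as the fragment's color and depth. */
static const char *fp =
    "!!ARBfp1.0\n"
    "TEMP col, dep;\n"
    "TEX col, fragment.texcoord[0], texture[0], 2D;\n"
    "TEX dep, fragment.texcoord[1], texture[1], 2D;\n"
    "MOV result.color, col;\n"
    "MOV result.depth.z, dep.x;\n"    /* replaces the quad's Z */
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(fp), fp);
glEnable(GL_FRAGMENT_PROGRAM_ARB);
/* Then draw the textured quad as usual; each fragment takes its depth
   from the depth texture instead of the quad's Z. */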

“Ah, poking around with probes in other people’s bodies again?”
Yep - what can I say, everybody needs a hobby…

Since I got things working fairly quickly the first time 'round (with your help!) they decided to throw some more requirements at me… figures!

Thanks Relic!