24b Depth from texture to ARB_FP vector component

Hi!

I’d like to do this:

  1. Render my scene to the depth buffer
  2. Store the depth in a texture (24 bits)
  3. Access the depth of a pixel in an ARB_FP, i.e. put the value into a TEMP vector's .z component.

Would this work?

Do I have to use ARB_depth_texture?

Which is the fastest/best way of render-to-texture?

Any help appreciated.

/Chris


Would this work?

It worked for me, but not exactly like that. Depth textures are looked up as luminance only, so the looked-up value is not just in .z but replicated across .xyz, just as the spec says luminance-only lookups should behave.
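A minimal ARB_fp sketch of that lookup (assuming texture unit 0 holds the depth texture; the unit number and the pass-through to result.color are just for illustration):

```
!!ARBfp1.0
TEMP depth;
# Depth textures sample as luminance, so the same value
# lands in depth.x, depth.y and depth.z alike.
TEX depth, fragment.texcoord[0], texture[0], 2D;
# depth.z now holds the stored depth value.
MOV result.color, depth;
END
```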

Do I have to use ARB_depth_texture?

Yes, if you want the full resolution, unless you use a fragment program to pack the depth across several color channels or you have higher color-depth textures available.
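For the packing route, a hedged ARB_fp sketch of splitting window-space depth across three 8-bit channels (the scale constants are one common choice; a production version would also subtract the carry between channels to avoid rounding error):

```
!!ARBfp1.0
# Sketch: pack fragment depth in [0,1) into R, G, B at
# increasing precision. Approximate - ignores inter-channel rounding.
PARAM scale = { 1.0, 256.0, 65536.0, 0.0 };
TEMP packed;
MUL packed, fragment.position.z, scale;
FRC packed, packed;      # keep the fractional part per channel
MOV result.color, packed;
END
```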

Which is the fastest/best way of render-to-texture?

Sadly enough, it looks like using pbuffers for render-to-texture is pretty slow, as stated some time ago in a forum thread… I guess it’s better to render to the backbuffer and then copy to a texture.
This is implementation dependent, however. I know for sure NV video cards do not like the context change too much… maybe it goes better with ATi, since in my tests Radeons were much faster.
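The backbuffer-then-copy path can be sketched roughly like this (assumes a current GL context, ARB_depth_texture support, and a texture object `depth_tex` already created with glGenTextures; `width`/`height` are the window size):

```c
/* One-time setup: allocate a 24-bit depth texture. */
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB,
             width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

/* Each frame: render the scene to the backbuffer as usual, then... */

/* ...copy the depth buffer into the texture. */
glBindTexture(GL_TEXTURE_2D, depth_tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
```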

Both glCopyTexImage2D and P-Buffers are fast without FSAA enabled on ATI cards. Once FSAA is enabled, though, glCopyTexImage2D slows to a crawl when extracting the depth buffer. Apparently P-Buffers are still fast, but I was under the impression that there’s no pixel format that combines multisampling and P-Buffers… :P