View Full Version : wgl_render_texture clarification

01-03-2005, 07:21 AM
According to the spec, depth buffers cannot be bound as textures. Although it does indicate that this would be "an interesting additional extension" and easily added.

Are there any plans for adding this?

Thanks in advance for any response

Daniel Wesslen
01-03-2005, 07:57 AM
WGL_NV_render_depth_texture (http://oss.sgi.com/projects/ogl-sample/registry/NV/render_depth_texture.txt)

01-03-2005, 08:26 AM
Thanks Daniel - That is what I was looking for. I assume ATI does not support it - is there a similar ATI extension? Or better yet, plans for an ARB version?

Thanks again for any help....

01-03-2005, 09:03 AM
I haven't heard of any, but most probably I'm not up to date :(
However, you can write depth into the color buffer (through shaders) and then use it as a depth texture. Works pretty well.

01-03-2005, 09:25 AM
ATI doesn't support it; as mentioned above, you need to encode the depth into an RGBA texture instead (code can be supplied in GLSL format if needed).

Hopefully, though, once the framebuffer_object extension gets into drivers this shouldn't be an issue, as I would hope we could just bind one texture to RGBA and another to depth and have it work on all hardware.

01-03-2005, 05:18 PM
Lurker - I like that idea. Do you know of any samples or tutorials that do that? Should it be a float texture? Bobvodka, glsl would be greatly appreciated... ;-)

Also, is it slower to do the depth comparison in my own fragment shader as opposed to using the hardware extension (like ARB_SHADOW)?

Thanks again for any feedback.....

01-03-2005, 10:42 PM
Personally I use a floating-point (16 or 32 bit) pbuffer (no clamp, and no need to encode the number into 4 separate channels). I set the color mask to write to only one channel, and I write the length of the transformed vertex.
However, as bobvodka said, it can be done with a regular 8-bit-per-channel RGBA texture.
I didn't compare the performance, but I *guess* the extension would be faster (although I read somewhere on these forums that it's emulated through the programmable pipeline on ATI hardware).
Still, through a shader you can get soft shadows, and by writing the depth manually you can do depth peeling (is the term correct?) - write the depth of back faces to red, the depth of front faces to green, and then average. It gets rid of some ugly artifacts :)
As for the tutorials, I don't remember where they are :( However, I can post my GLSL shaders if you ask - they are pretty simple. But if you analyze ARB_SHADOW examples, doing a similar thing with GLSL and color pbuffers should be easy.

Hope my wordiness helps:)
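To make the back/front averaging idea above concrete, here is a minimal numeric sketch (plain Python, hypothetical depth values; in practice this comparison runs per-fragment in a GLSL shader):

```python
def shadow_test(frag_depth, back_depth, front_depth, bias=0.005):
    """Return True if the fragment is lit (not in shadow).

    back_depth/front_depth play the role of the red and green
    channels described above; all values here are hypothetical.
    """
    # Averaging the back-face and front-face depths places the stored
    # depth in the middle of the occluder, which hides the acne you
    # get when comparing against the front face alone.
    stored = 0.5 * (back_depth + front_depth)
    return frag_depth <= stored + bias

print(shadow_test(0.40, 0.55, 0.45))  # fragment in front of the occluder: True
print(shadow_test(0.70, 0.55, 0.45))  # fragment behind the occluder: False
```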

01-04-2005, 02:00 AM
Originally posted by azcoder:
Also, is it slower to do the depth comparison in my own fragment shader as opposed to using the hardware extension (like ARB_SHADOW)?
I don't know if it is faster, but this example (GLSL shaders) runs pretty fast on my GF6800GT.


Hope this helps.

01-04-2005, 04:30 AM
Originally posted by azcoder:
Bobvodka, glsl would be greatly appreciated... ;-)
As requested :)
Pack factors are from a Humus demo, which is also where I converted the code from :D

// fragment program which packs the z-value into an RGBA8 texture
const vec4 packFactors = vec4( 1.0, 256.0, 65536.0, 16777216.0 );
void main()
{
    gl_FragColor = vec4(fract(packFactors * gl_FragCoord.z));
}

To extract, do a normal projective lookup in the fragment program and then do

float shadow = dot(shadowValue, extract);

where extract is

const vec4 extract = vec4( 1.0, 0.00390625, 0.0000152587890625, 0.000000059604644775390625 );

The encode/extract values can be worked out at compile time rather than using those values above, but I can't remember off the top of my head how to make them :D
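The packing trick above can be checked numerically outside the shader. A small sketch in Python (it models the 8-bit framebuffer store as truncation, which is an approximation of real UNORM8 rounding behavior):

```python
import math

PACK_FACTORS = (1.0, 256.0, 65536.0, 16777216.0)
EXTRACT = (1.0, 1.0 / 256.0, 1.0 / 65536.0, 1.0 / 16777216.0)

def pack(z):
    """fract(packFactors * z), each channel then truncated to 8 bits
    to model storage in an RGBA8 render target."""
    channels = []
    for f in PACK_FACTORS:
        frac = math.modf(f * z)[0]                          # GLSL fract()
        channels.append(math.floor(frac * 256.0) / 256.0)   # 8-bit truncation
    return channels

def unpack(channels):
    """dot(shadowValue, extract) from the shader above."""
    return sum(c * e for c, e in zip(channels, EXTRACT))

# The round trip recovers z to within the precision of 32 packed bits:
z = 0.73
assert abs(unpack(pack(z)) - z) < 1.0 / 16777216.0
```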

01-04-2005, 07:19 AM
As far as I remember:
you multiply by
1.0 = 2^0
256.0 = 2^8
65536.0 = 2^16
16777216.0 = 2^24

so to reverse you have to divide:
1.0 = 1/2^0
0.00390625 = 1/2^8
0.0000152587890625 = 1/2^16
0.000000059604644775390625 = 1/2^24

The exponents are multiples of 8 because you have 8 bits of precision per channel.

I hope I haven't messed anything up :)
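That also answers how the constants can be generated instead of hard-coded. A quick sketch of the arithmetic in Python (in GLSL the same constants could be built the same way at compile time):

```python
# Generate the pack factors (2^(8*i)) and the matching extract
# factors (2^(-8*i)) rather than typing out the literals.
pack_factors = [float(2 ** (8 * i)) for i in range(4)]
extract = [1.0 / f for f in pack_factors]

print(pack_factors)  # [1.0, 256.0, 65536.0, 16777216.0]
print(extract)       # [1.0, 0.00390625, 1.52587890625e-05, 5.960464477539063e-08]
```

Because every factor is an exact power of two, the reciprocals are exact in floating point and match the literals in the shader above exactly.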

01-04-2005, 08:23 AM
ah yes, that's it :)