wgl_render_texture clarification

According to the spec, depth buffers cannot be bound as textures, although it does indicate that this would be “an interesting additional extension” and easy to add.

Are there any plans for adding this?

Thanks in advance for any response

WGL_NV_render_depth_texture

Thanks Daniel - That is what I was looking for. I assume ATI does not support it - is there a similar ATI extension? Or better yet, plans for an ARB version?

Thanks again for any help…

I haven’t heard of any, but I’m probably not up to date :(
However, you can write the depth into a color buffer (through shaders) and then use it as a depth texture. Works pretty well.
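In its simplest form that is a one-line fragment shader (a minimal sketch, assuming a float render target so the value isn’t clamped):

void main()
{
	// write window-space depth into the color buffer instead of a depth texture
	gl_FragColor = vec4(gl_FragCoord.z);
}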

ATI doesn’t support it; as mentioned above, you need to encode the depth into an RGBA texture instead (code can be supplied in GLSL if needed).

Hopefully, though, once the framebuffer_object extension gets into drivers this shouldn’t be an issue, since I would hope we could just bind one texture as RGBA and another as depth and have it work on all hardware.

Lurker - I like that idea. Do you know of any samples or tutorials that do that? Should it be a float texture? Bobvodka, GLSL would be greatly appreciated… :wink:

Also, is it slower to do the depth comparison in my own fragment shader as opposed to using the hardware extension (like ARB_shadow)?

Thanks again for any feedback…

Personally I use a floating-point (16 or 32 bit) pbuffer (no clamping, and no need to encode the number into 4 separate channels). I set the color mask to write to only one channel, and I write the length of the transformed vertex.
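A minimal sketch of that write pass (the names here are assumptions, not Lurker’s actual shaders; the view is assumed to be placed at the light):

// vertex shader: with the camera at the light, eye-space length = distance to the light
varying float dist;
void main()
{
	dist = length((gl_ModelViewMatrix * gl_Vertex).xyz);
	gl_Position = ftransform();
}

// fragment shader: the color mask restricts this write to a single float channel
varying float dist;
void main()
{
	gl_FragColor = vec4(dist);
}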
However, as bobvodka said, it can be done with a regular 8-bit-per-channel RGBA texture.
I didn’t compare the performance, but I guess the extension would be faster (although I read somewhere on these forums that it’s emulated through the programmable pipeline on ATI hardware).
Still, with a shader you can get soft shadows, and by writing the depth manually you can do depth peeling (is the term correct?) - write the depth of back faces to red, the depth of front faces to green, and then average. It gets rid of some ugly artifacts :)
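The lookup side of that averaging trick could look roughly like this (a sketch; the sampler and varying names are assumptions):

uniform sampler2D depthMap; // .r = back-face depth, .g = front-face depth, from two masked passes
varying vec4 shadowCoord;   // light-space coordinate from the vertex shader
void main()
{
	vec2 depths = texture2DProj(depthMap, shadowCoord).rg;
	float midpoint = (depths.r + depths.g) * 0.5; // surface midway between front and back
	float lit = (shadowCoord.z / shadowCoord.w) <= midpoint ? 1.0 : 0.0;
	gl_FragColor = vec4(vec3(lit), 1.0);
}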
As for the tutorials, I don’t remember where they are :( However, I can post my GLSL shaders if you ask - they are pretty simple. But if you analyze the ARB_shadow examples, doing a similar thing with GLSL and color pbuffers should be easy.

Hope my wordiness helps :)

Originally posted by azcoder:
Also, is it slower to do the depth comparison in my own fragment shader as opposed to using the hardware extension (like ARB_shadow)?

I don’t know if it is faster, but this example (GLSL shaders) runs pretty fast on my GF 6800 GT:

http://www.ampoff.org/modules.php?name=Forums&file=viewtopic&t=15

Hope this helps.

Originally posted by azcoder:
Bobvodka, GLSL would be greatly appreciated… :wink:

As requested :slight_smile:
The pack factors are from a Humus demo, which is also where I converted the code from :smiley:

// fragment program which packs the z-value into an RGBA8 texture
const vec4 packFactors = vec4( 1.0, 256.0, 65536.0, 16777216.0 );
void main()
{
	// each channel keeps the fractional part of z scaled by a successive power of 256;
	// the 8-bit quantization then holds just that channel’s 8 bits
	gl_FragColor = fract(packFactors * gl_FragCoord.z);
}

To extract, do a normal projective lookup in the fragment program and then do

float shadow = dot(shadowValue, extract);

where extract is

 const vec4 extract = vec4( 1.0, 0.00390625, 0.0000152587890625, 0.000000059604644775390625 ); 

The encode/extract values can be worked out at compile time rather than using those values above, but I can’t remember off the top of my head how to make them :smiley:
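Putting the pieces together, the lookup side might look like this (a sketch only; the sampler, varying, and bias value are assumptions, not from the post above):

const vec4 extract = vec4( 1.0, 0.00390625, 0.0000152587890625, 0.000000059604644775390625 );
uniform sampler2D shadowMap; // the RGBA8 texture written by the pack shader
varying vec4 shadowCoord;    // light-space position from the vertex shader
void main()
{
	vec4 shadowValue = texture2DProj(shadowMap, shadowCoord); // projective lookup
	float storedZ = dot(shadowValue, extract);          // unpack back to a single depth
	float fragZ = shadowCoord.z / shadowCoord.w;        // this fragment’s light-space depth
	float lit = fragZ <= storedZ + 0.0005 ? 1.0 : 0.0;  // small assumed bias against acne
	gl_FragColor = vec4(vec3(lit), 1.0);
}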

As far as I remember, you multiply by:
1.0 = 2^0
256.0 = 2^8
65536.0 = 2^16
16777216.0 = 2^24

so to reverse you have to divide:
1.0 = 1/2^0
0.00390625 = 1/2^8
0.0000152587890625 = 1/2^16
0.000000059604644775390625 = 1/2^24

The exponents are multiples of 8 because you have 8 bits of precision per channel.
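In GLSL the reverse factors can be derived from the pack factors and folded by the compiler, rather than typing out the decimals (a sketch):

const vec4 packFactors = vec4( 1.0, 256.0, 65536.0, 16777216.0 );
// constant expression - the compiler evaluates the division at compile time
const vec4 extract = vec4(1.0) / packFactors;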

I hope I haven’t messed anything up :)

Ah yes, that’s it :slight_smile: