View Full Version : About Render to depth texture in ATI Cards?
09-13-2003, 05:16 PM
As we all know, on NVIDIA cards you can render to a depth texture with a pbuffer via the WGL_NV_render_depth_texture OpenGL extension, which binds the pbuffer's depth buffer directly to a depth texture. This is very useful for shadow mapping applications, but ATI cards do not support this extension.
So the question is: how should I render to a depth texture on ATI cards, and which OpenGL extension can replace it?
09-13-2003, 05:25 PM
There is no way to do this on ATI hardware AFAIK, but this feature is pretty much essential for doing shadow mapping efficiently in OpenGL.
I wrote to ATI's developer relations about this (firstname.lastname@example.org), asking if they planned to implement it. The response I got back basically said they had no idea, and suggested they weren't sure it was even possible on current hardware. They said something about contacting the driver guys, but I haven't heard anything since.
If you want to see something done about this, I suggest you contact ATI yourself.
[This message has been edited by bunny (edited 09-13-2003).]
09-13-2003, 06:32 PM
But this feature is pretty essential in order to do shadowmapping in OpenGL efficiently.
Says who? All you need to do is render the depth to a render target (which can be a 32-bit luminance texture). Since ARB_fp lets you read the fragment's depth, you can simply write that value out as the color.
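A minimal ARB_fragment_program sketch of this idea (illustrative only; it assumes the pbuffer's color buffer is the target and writes the window-space depth into every channel):

```
!!ARBfp1.0
# fragment.position.z is the window-space depth in [0,1];
# replicate it into all four color channels of the render target.
MOV result.color, fragment.position.z;
END
```

In the shadow pass you would bind this program, render the scene from the light's point of view into the pbuffer, and then use the pbuffer's color buffer as your "depth" texture.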
[This message has been edited by Korval (edited 09-13-2003).]
09-13-2003, 11:34 PM
All you need to do is render the depth to the render target (which can be a 32-bit luminance texture).
I mean, how do I solve this problem in OpenGL? I haven't learned D3D. In OpenGL there is no "render target"; what I know is the pbuffer.
09-14-2003, 03:30 AM
Uhm... a render target is just a target you can render to. In your case, a pbuffer.
He IS talking about OpenGL.
09-14-2003, 03:51 AM
Just use the standard ARB_depth_texture extension on ATI, and do a glCopyTexSubImage2D to copy your pbuffer's depth buffer into the depth texture.
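A sketch of that copy path (assuming a GL context with ARB_depth_texture; the texture name and 512x512 size are illustrative):

```c
/* One-time setup: allocate a depth texture via ARB_depth_texture. */
GLuint shadow_map;
glGenTextures(1, &shadow_map);
glBindTexture(GL_TEXTURE_2D, shadow_map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, 512, 512, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* Each frame: with the pbuffer current, render the shadow pass,
   then copy its depth buffer into the texture. */
glBindTexture(GL_TEXTURE_2D, shadow_map);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);
```

Because the texture's internal format is GL_DEPTH_COMPONENT, glCopyTexSubImage2D reads from the depth buffer of the current read surface rather than the color buffer.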
09-14-2003, 11:29 AM
Last time I checked, it was faster on my GeForce FX to use glCopyTex(Sub)Image than to bind the buffer to a depth texture. I know there were performance issues with pbuffers on NVIDIA hardware some time ago; can somebody confirm that (at least for depth textures) copying is still faster?
09-14-2003, 03:24 PM
What about this:
Render to a floating-point texture, using a fragment program that writes the depth value as its color output?
If you don't like floating-point textures, you can even pack the depth into a plain 32-bit RGBA buffer.
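A hypothetical sketch of that packing trick in ARB_fragment_program assembly (the scale/carry constants spread the depth across 8-bit channels; rounding and quantization details are glossed over):

```
!!ARBfp1.0
# Pack window-space depth into an 8-bit-per-channel RGBA target.
PARAM scale = { 1.0, 256.0, 65536.0, 0.0 };
PARAM carry = { -0.00390625, -0.00390625, -0.00390625, 0.0 };  # -1/256
TEMP packed;
MUL packed, fragment.position.z, scale;
FRC packed, packed;
# remove the bits already stored in the next-higher channel
MAD packed, packed.yzww, carry, packed;
MOV result.color, packed;
END
```

To recover the depth when sampling the texture later, dot the RGB channels with { 1, 1/256, 1/65536 }.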
[This message has been edited by AdrianD (edited 09-14-2003).]