GL_EXT_depth_bounds_test



_arts_
12-09-2011, 08:29 AM
Hi,

it seems this extension (GL_EXT_depth_bounds_test) is not supported on ATI cards. This was already the case several years ago (cf. this topic: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=230355#Post230355) and it seems they still do not support it (I can't find it in the extension string).
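
For illustration, here is a minimal sketch of such a runtime check, assuming a compatibility-profile context where glGetString(GL_EXTENSIONS) is still valid (has_depth_bounds_test is a made-up name for this example):

#include <GL/gl.h>
#include <string.h>

/* Returns non-zero if GL_EXT_depth_bounds_test appears in the extension
   string. Requires a current GL context; a plain strstr() is good enough
   here, since no other extension name contains this one as a substring. */
int has_depth_bounds_test(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_EXT_depth_bounds_test") != NULL;
}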

Regarding the spec (http://www.opengl.org/registry/specs/EXT/depth_bounds_test.txt), it was written (and copyrighted) by NVIDIA. So why is it an EXT and not an NV extension? And even so, there are several NV extensions supported by my ATI card (I haven't checked, but I'd guess they are the ones without an NVIDIA copyright).

Does this mean depth bounds tests absolutely can't be done on ATI? Are there any similar functionalities/extensions? (I haven't found any.)

PS: Judging from this old thread: http://www.gamedev.net/topic/436169-depth-bounds-test-on-ati/ it seems there's no chance of getting it on ATI cards.

Thanks.

V-man
12-09-2011, 10:57 AM
Copyright applies to documentation and such; it is not the same as a patent. See, for example, this case: http://www.opengl.org/registry/specs/S3/s3tc.txt

There used to be a GL_NV_depth_bounds_test:
http://www.nvnews.net/vbulletin/archive/index.php/t-12405.html

With an EXT suffix, I imagine it was an invitation for others to implement it.

_arts_
12-09-2011, 01:32 PM
Thanks for the answer. Yes, I was confusing copyrights with patents.

When you say it was an invitation for others to implement it, and since this was done in 2003, I conclude that no one else wanted to implement it. That's a bit of a pity, since I can see here and there on the Internet several applications using this functionality.

Of course, we can do it ourselves with shaders (it seems quite easy to send some uniform values and check them in the fragment shader; see the sketch below), but it also seems that this extension, when implemented by a constructor, provides some optimizations that normal shader code can't match.
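
For illustration, a minimal sketch of that shader fallback (the uniform names uDepthTex, uDepthBounds and uInvViewport are invented for this example). Note it is only an approximation: the real depth bounds test checks the depth value already stored in the framebuffer at the fragment's window position, so the fallback has to sample a copy of the depth buffer rather than test the incoming fragment's own depth:

/* Hypothetical GLSL 1.20 fragment shader emulating the depth bounds test,
   kept here as a C string. All uniform names are made up for this sketch. */
static const char *depth_bounds_fallback_fs =
    "uniform sampler2D uDepthTex;   /* copy of the current depth buffer */\n"
    "uniform vec2 uDepthBounds;     /* (zmin, zmax) in window space     */\n"
    "uniform vec2 uInvViewport;     /* 1.0 / viewport size in pixels    */\n"
    "void main()\n"
    "{\n"
    "    float storedZ = texture2D(uDepthTex,\n"
    "                              gl_FragCoord.xy * uInvViewport).r;\n"
    "    if (storedZ < uDepthBounds.x || storedZ > uDepthBounds.y)\n"
    "        discard;\n"
    "    gl_FragColor = vec4(1.0); /* real shading would go here */\n"
    "}\n";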

Alfonse Reinheart
12-09-2011, 03:39 PM
when implemented by a constructor

Constructor of what?

In any case, there's no guarantee that modern NVIDIA hardware doesn't simply implement this by inserting a few opcodes in front of your shader.

arekkusu
12-09-2011, 06:18 PM
FWIW, Apple's software renderer supports this extension. So at least two vendors support it.

_arts_
12-09-2011, 11:50 PM
Constructor of what?

I meant IHV.


In any case, there's no guarantee that modern NVIDIA hardware doesn't simply implement this by inserting a few opcodes in front of your shader.

OK. It's a good thing to know.


Apple's software renderer supports this extension. So at least two vendors support it.

AFAIK Apple's software renderer is not hardware-accelerated. And by the way, isn't it simply Mesa under a different name?

arekkusu
12-10-2011, 05:35 PM
AFAIK Apple's software renderer is not hardware-accelerated. And by the way, isn't it simply Mesa under a different name?
It's a CPU implementation. But no, it has nothing to do with Mesa.

Eric Lengyel
12-10-2011, 07:56 PM
In any case, there's no guarantee that modern NVIDIA hardware doesn't simply implement this by inserting a few opcodes in front of your shader.

Depth bounds test is supported directly in hardware on all GeForce GPUs in existence from the GeForce FX 5900 onward. It is part of the Z-cull hardware where it can operate at tile granularity and reject fragments at a super-fast rate. It is never implemented by adding instructions to your shader.

On Nvidia's SM5 hardware, the command buffer register that enables DBT is 0x066F, and the registers that hold the min and max depth bounds are 0x03E7 and 0x03E8.

By making uninformed guesses about how OpenGL features are implemented, you're not really helping anyone.

Alfonse Reinheart
12-10-2011, 08:05 PM
I didn't say that it wasn't implemented in hardware, either; I said there was no guarantee that it was. Absence of evidence is not evidence of absence.

Also, there's no guarantee it will be implemented in future hardware.

In any case, it's certainly not exposed on any ATI or Intel hardware. So as with everything else, if your needs allow you to use only NVIDIA hardware, then feel free to use this extension. Otherwise, you'll have to do it with opcodes.
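
For reference, the NVIDIA path itself is tiny. The enum and function come straight from the extension spec, and the function pointer type is the one declared in glext.h; the pointer has to be fetched at runtime (e.g. with wglGetProcAddress or glXGetProcAddress):

#include <GL/gl.h>
#include <GL/glext.h>   /* GL_DEPTH_BOUNDS_TEST_EXT, PFNGLDEPTHBOUNDSEXTPROC */

/* Fetched once at startup via wglGetProcAddress/glXGetProcAddress. */
static PFNGLDEPTHBOUNDSEXTPROC glDepthBoundsEXT;

static void enable_depth_bounds(GLclampd zmin, GLclampd zmax)
{
    glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
    /* Fragments whose *stored* depth falls outside [zmin, zmax] are culled. */
    glDepthBoundsEXT(zmin, zmax);
}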

_arts_
12-11-2011, 02:56 AM
It's a CPU implementation. But no, it has nothing to do with Mesa.

Oh OK. Thanks for this.


Depth bounds test is supported directly in hardware on all GeForce GPUs in existence from the GeForce FX 5900 onward. It is part of the Z-cull hardware where it can operate at tile granularity and reject fragments at a super-fast rate. It is never implemented by adding instructions to your shader.

This is what makes the lack of this extension on other IHVs' graphics cards a real pity. If one wants this functionality, one either has to do it all in the shaders and forget about this extension, or maintain two different sets of shaders (or some kind of ubershader; see the sketch below).
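
To make that concrete, a rough sketch of such a dual path, reusing the hypothetical helpers from the sketches above (fallback_program is an assumed, already-linked GLSL program built from the fallback shader):

/* Dual code path: hardware depth bounds test where the extension exists,
   shader emulation otherwise. has_depth_bounds_test(), glDepthBoundsEXT
   and the uDepthBounds uniform come from the earlier sketches;
   fallback_program is an assumed global holding the fallback program. */
extern GLuint fallback_program;

void set_depth_bounds(GLclampd zmin, GLclampd zmax)
{
    if (has_depth_bounds_test()) {
        glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
        glDepthBoundsEXT(zmin, zmax);
    } else {
        glUseProgram(fallback_program);
        glUniform2f(glGetUniformLocation(fallback_program, "uDepthBounds"),
                    (GLfloat)zmin, (GLfloat)zmax);
    }
}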

Thanks.