Small thing about depth test.

There’s always a point at which you have to disable depth testing to do something. I usually do that to draw HUD-like things. Now, conventional wisdom tells us this is as easy as calling

Disable(DEPTH_TEST);

Now, when depth testing is disabled, all fragments will pass the test. Fine. I was pretty sure this was equivalent to leaving the depth test enabled with

DepthFunc(ALWAYS);

However, I get different results from one and the other. The first works correctly while the second does not: objects can be “hidden” behind other “normal” objects, which is obviously wrong.
If I move the camera the right way, I can make the HUD-like thing intersect the normal things and get clipped.
So I guess those two things are not the same. Is this a bug, or was I just wrong from the beginning?

Thank you in advance.

No, they should be the same (visually).
What’s your video card? Drivers?

Hello.

Disabling the depth test DOES NOT PERFORM ANY TEST.
Making the depth function always pass means the test is still performed, it just always passes. I don’t know what abnormalities can result from that, since no one has clearly stated what happens when OpenGL performs the depth test yet makes it constantly pass.

As for drawing your head-up display, I don’t know if disabling the depth test is that good an idea. If you have a complex HUD (one where dynamic tapes have to come on top of each other and stencils must cut out regions of unwanted information, like the one I build) and need to display one symbol on top of another, I doubt disabling the depth test will do you any good. I would recommend a more common approach: draw things in the near range of the Z buffer (after all, Z-buffer precision is non-linear and is very good at close range) and, to keep anomalies from happening, use glDepthMask(GL_FALSE) when the HUD drawing begins.
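
A rough sketch of what I mean, in fixed-function GL (the 640x480 numbers are only placeholders, and the actual symbol drawing is omitted):

/* HUD pass: keep the depth test on, park the HUD near the near plane
 * where z precision is best, and disable depth writes so the HUD
 * never pollutes the depth buffer for later passes. */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_FALSE);          /* no depth writes while drawing the HUD */

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 640.0, 0.0, 480.0, 0.0, 1.0);  /* placeholder screen size */
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

/* ... draw the HUD symbols here, ordered in z ... */

glPopMatrix();                  /* restore modelview */
glMatrixMode(GL_PROJECTION);
glPopMatrix();                  /* restore projection */
glMatrixMode(GL_MODELVIEW);
glDepthMask(GL_TRUE);           /* restore depth writes */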

Good luck!

Disabling depth testing also disables depth writing. If you want to write depth but always pass the depth test, you need to enable the depth test, enable depth writing, and set the depth function to GL_ALWAYS.
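
As a minimal sketch, that combination boils down to:

glEnable(GL_DEPTH_TEST);   /* test must stay enabled, or nothing is written */
glDepthMask(GL_TRUE);      /* allow depth writes */
glDepthFunc(GL_ALWAYS);    /* every fragment passes the comparison */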

As for the HUD display - if the HUD really is an overlay on top of what’s already been rendered, you can simply clear the depth buffer (but not the color buffer!) and then render the HUD, possibly with an orthographic projection. Alternatively, you can set the depth test to always pass and draw your HUD that way.
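
For the overlay route, a minimal sketch (it assumes the 3D scene has already been drawn, and the 640x480 numbers are placeholders):

/* the scene is already in the framebuffer at this point */
glClear(GL_DEPTH_BUFFER_BIT);               /* wipe depth only, keep color */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 640.0, 0.0, 480.0, -1.0, 1.0); /* 2D overlay projection */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* ... draw the HUD here; it now wins any depth comparison ... */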

Originally posted by V-man:
No, they should be the same (visually).
What’s your video card? Drivers?

So I have to point my finger at (another) bug in nvidia’s 44.03 drivers?

nVidia 44.03 drivers - GeForce4 Ti 4400, Windows 2000 SP3.
I don’t think the rest is important.

Originally posted by phoenix_wrath:
Disabling the depth test DOES NOT PERFORM ANY TEST.
Making the depth function always pass means the test is still performed, it just always passes.

… which is equivalent to disabling the depth test. You could argue that a hardware implementation should care about this distinction, but then I would argue back that it’s a perfectly valid optimization to turn off the depth test when the depth function is GL_ALWAYS. This saves the bandwidth required for reading the depth buffer. Good thing.

Originally posted by al_bob:
Disabling depth testing also disables depth writing.
No.

Originally posted by al_bob:
Disabling depth testing also disables depth writing.

No.

It does … I ran into the same problem.

Klaus

But what about the almighty spec?

The depth buffer can be enabled or disabled for writing z/w values using
void DepthMask( boolean mask );
If mask is non-zero, the depth buffer is enabled for writing; otherwise, it is disabled. In the initial state, the depth buffer is enabled for writing.

Where’s the exception to this rule for GL_ALWAYS as the depth function, in the spec I mean?

As long as no such exception is found, let’s just consider it a driver bug.

You know, it’s a legal optimization to turn off depth writes when the depth test is GL_EQUAL.
Maybe something just got mixed up there …

API Doc: glEnable


GL_DEPTH_TEST
If enabled, do depth comparisons and update the depth buffer. Note that even if the depth buffer exists and the depth mask is non-zero, the depth buffer is not updated if the depth test is disabled. See glDepthFunc and glDepthRange.

A better source is the GL spec itself:

When [the depth buffer test is] disabled, the depth comparison and subsequent possible updates to the depth buffer value are bypassed and the fragment is passed to the next operation.

It looks like al_bob was right. I could swear I’ve seen the depth buffer written to even if DEPTH_TEST was disabled. Must have been a driver bug.

But I noticed a slight difference between glDepthFunc(GL_ALWAYS) and glDisable(GL_DEPTH_TEST): the former updates the z buffer for each fragment drawn, while the latter never updates it. As such, it would be better to use glDisable, because it prevents unnecessary depth writes.
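
Side by side, per the spec passages quoted above, the two states look like this:

/* state A: no test at all - every fragment passes, depth is never written */
glDisable(GL_DEPTH_TEST);

/* state B: test enabled but always passing - every fragment passes,
 * and its depth IS written (as long as glDepthMask is GL_TRUE) */
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);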

Obli: are you sure you’re rendering the HUD last?

Originally posted by Aaron:
A better source is the GL spec itself:

Better how exactly? Personally I find the docs much easier to read.

Originally posted by roffe:
Better how exactly? Personally I find the docs much easier to read.

Maybe it’s the difference between a normative and a non-normative document.

I didn’t mean that the 3dLabs API docs are a bad source, I just meant that the spec is the authority on OpenGL. If the spec says one thing, and every book and every driver say and do another, then the spec is still right and all of the books and drivers are wrong, because the spec defines OpenGL. All other documents just describe it, and they might be wrong. Anyways, I don’t mean to be argumentative; I’m just never satisfied until I actually see it in the spec.

Oh my, you really never stop learning …
My apologies.

Originally posted by Aaron:
I didn’t mean that the 3dLabs API docs are a bad source, I just meant that the spec is the authority on OpenGL. If the spec says one thing, and every book and every driver say and do another, then the spec is still right and all of the books and drivers are wrong, because the spec defines OpenGL. All other documents just describe it, and they might be wrong. Anyways, I don’t mean to be argumentative; I’m just never satisfied until I actually see it in the spec.

This is kind of a silly discussion but, anyway, here I go…

First of all, every copy of the OpenGL man pages I’ve ever seen (opengl.org, 3dlabs, Sun…) is the same. I’ve never seen vendors adding things of their own to them. Maybe I’m wrong?

And to my knowledge, the man pages are derived directly from the spec (no surprise there). They are a condensed version of the spec aimed at users of the OpenGL API.
As a user of the API, I prefer this version because it’s short and quickly accessed. The man pages of course only cover OpenGL 1.1, which is one reason to look in the spec for a more “complete” description. I just don’t like the idea of people thinking that the man pages are any less accurate, at least regarding GL 1.1. And of course, anyone with uber knowledge in this area is more than welcome to correct me if I’m wrong.


Originally posted by roffe:
I just don’t like the idea of people thinking that the man pages are any less accurate, at least regarding GL 1.1.
I think it is highly likely that most OpenGL API references (especially 3dLabs’) are 100% correct. But there is the slight possibility that the person(s) writing the document might misinterpret the spec or otherwise make an unintentional error. Thus I feel that it is better, when there is a disagreement about the behavior of OpenGL, to go to the one authoritative document to settle it: the specification. I really have no problem with 3dLabs’ or any other GL API reference.

Please, let’s stay on topic.
I have checked it again. The ${thing} being rendered is the last thing that goes to the frame buffer. It really does not matter what it is (that’s why I’m using a Perl-like variable here - substitute whatever you like for this string), and it doesn’t matter whether depth buffer writes are enabled or not, since whatever happens happens during the rendering of that ${thing} itself.

The point is that some of ${thing}’s fragments are somehow getting discarded with DepthFunc(ALWAYS), while this does not happen when I use Disable(DEPTH_TEST).

How the depth buffer is left by those two operations is an important question, but it is not the point right now. Maybe I will just do some z-buffer dumps and check, but not now… one problem at a time!
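
(For reference, such a dump is just a glReadPixels call; a sketch, with a made-up 640x480 window:)

/* read the whole depth buffer back into client memory for inspection */
GLfloat depth[480][640];    /* assumes a 640x480 window */
glReadPixels(0, 0, 640, 480, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
/* depth[y][x] now holds the window-space depth in [0, 1] */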

Here’s another very simple description of what’s happening with DepthFunc(ALWAYS):

You have your frustum, which is working fine. Most of the time, ${thing} renders on top of other things. Let’s not focus on what ${thing} is; it really does not matter. For whoever’s curious, it’s a bunch of billboarded quads in an ortho projection.
Now, there’s a quad somewhere with depth writes on.
If the camera approaches the quad, there’s a certain point at which some of ${thing}’s fragments get discarded. It’s also possible to “hide” ${thing} behind the quad. This is wrong.

Using Disable(DEPTH_TEST) makes all the fragments always pass. ${thing} is always above the other things. Fine.

Now, aside from whether z gets updated or not, the two should be equivalent - all the fragments should pass, as far as I know.


From your description, this certainly sounds like a driver bug. I wrote a simple test app, and I did not see this behavior on my system (WinME, nForce). You should probably submit some code that produces this erroneous behavior to nVidia.
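
For reference, here’s a minimal sketch of the kind of test that should expose the difference (GLUT-based; the geometry, colors, and key binding are made up, and this is not the exact code I ran): one quad is drawn normally with depth writes on, then a second quad that would fail an ordinary GL_LESS test is drawn with either of the two states. On a conforming driver the second quad is fully visible in both cases.

#include <GL/glut.h>

/* Press 'd' to toggle between the two supposedly equivalent states:
 * 0 = glDisable(GL_DEPTH_TEST), 1 = glDepthFunc(GL_ALWAYS). */
static int use_always = 0;

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* a "normal" quad: ordinary depth testing, depth writes on */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glColor3f(0.3f, 0.3f, 0.8f);
    glBegin(GL_QUADS);
    glVertex3f(-0.8f, -0.8f, -0.5f);
    glVertex3f( 0.8f, -0.8f, -0.5f);
    glVertex3f( 0.8f,  0.8f, -0.5f);
    glVertex3f(-0.8f,  0.8f, -0.5f);
    glEnd();

    /* the "HUD" quad, drawn last at a depth that would FAIL GL_LESS */
    if (use_always) {
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_ALWAYS);
    } else {
        glDisable(GL_DEPTH_TEST);
    }
    glColor3f(1.0f, 1.0f, 0.0f);
    glBegin(GL_QUADS);
    glVertex3f(-0.4f, -0.4f, 0.0f);
    glVertex3f( 0.4f, -0.4f, 0.0f);
    glVertex3f( 0.4f,  0.4f, 0.0f);
    glVertex3f(-0.4f,  0.4f, 0.0f);
    glEnd();

    glutSwapBuffers();
}

static void key(unsigned char k, int x, int y)
{
    (void)x; (void)y;
    if (k == 'd') { use_always = !use_always; glutPostRedisplay(); }
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("ALWAYS vs. disabled depth test");
    glutDisplayFunc(display);
    glutKeyboardFunc(key);
    glutMainLoop();
    return 0;
}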

So I was right on that.
Lame. Lame. Lame.

I will file a bug report with them as soon as my Linux box boots again (my email client is only installed under Linux and I don’t want to mess up my archives using Windows).

Thanks for the confirmation (sigh) - now that I’ve heard another opinion, I’m 99% sure it’s a bug (before it was 80%).