Depth value in a fragment program

Sorry if this is a trivial question (and I am quite sure it is), but I am asking it anyway (and doing it here because the advanced forum, where I guess it would rather belong, seems inaccessible).

How do I obtain the depth value of a fragment in an NV_fragment_program (NV, not ARB!!!)? I tried f[WPOS].z, as proposed in the spec, but it simply doesn’t work… f[WPOS].z always seems to be 1.0.

So, what is going wrong?

thx
Jan

It should just work.
“What’s going wrong” is not an advanced question if you don’t explain what the heck you’re doing.
What are you drawing at which coordinates while it’s 1.0?
How do you know it’s always 1.0?
What’s your hardware?
More fragment code please.

OK… I wrote a highly advanced fragment program for testing purposes:

MUL H1, f[WPOS].zzzz, 0.5; # scale the window-space depth by 0.5
MOV o[COLH], H1;           # write the result as the fragment color

this makes every fragment grey. If I use 0.1 instead of 0.5, it becomes nearly black; if I use 0.9, nearly white. This obviously means that every fragment gets colored with the constant in the first line of code, i.e. f[WPOS].zzzz is 1/1/1/1.

The hardware obviously is a GeForce FX (or does NV_fragment_program run on ATI?), a 5700 non-Ultra to be more specific.

Thanks
Jan

And what’s the geometry setup?
Matrices, vertex programs, primitives rendered?

Does this matter? As everything looks perfectly fine (when using the proper fragment program instead of this test one) and depth buffering seems to work perfectly, I thought the geometry setup would be OK. It is triangle strips rendered with display lists, and a vertex program is used for the transformation (and several other things). But as I said, since everything a) looks fine, b) looks the same as when not using the vp and fp, and c) depth buffering works, I guess the program is OK in general. So I have no idea what could be wrong.

does this matter?

Yes! The depth value comes from interpolating the post-transformed vertex positions. The matrices and vertex data, therefore, are vital for solving the problem.

For example, if your depth happened to actually be 1.0, you don’t have a problem.

Yes, of course, but I find it strange that in a totally normal-looking scene, the f[WPOS].z value of all fragments of the ground seems to be 1.0. The scene looks like this:
http://de.geocities.com/westphj2003/refl.html

and as you can see, depth buffering seems to work, and not all fragments have the same z coordinate. But nonetheless, if I use my test fp above, the whole ground gets a uniform grey/white/black, namely the number I use in the MUL statement. So I think this is strange, or I am missing something obvious.

What near/far plane values are you using?

Window-space Z values are very non-linear and will often look like white if you copy them to color.

Originally posted by JanHH:
[b]Yes, of course, but I find it strange that in a totally normal-looking scene, the f[WPOS].z value of all fragments of the ground seems to be 1.0. The scene looks like this:
http://de.geocities.com/westphj2003/refl.html

and as you can see, depth buffering seems to work, and not all fragments have the same z coordinate. But nonetheless, if I use my test fp above, the whole ground gets a uniform grey/white/black, namely the number I use in the MUL statement. So I think this is strange, or I am missing something obvious.[/b]

near: 1.0
far: 15000.0

See how the depth buffer evolves with distance: http://www.sgi.com/software/opengl/advanced98/notes/node25.html
When you have distant objects, they will all look the same in terms of color (that is, near 1.0).
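This claim is easy to check numerically. Below is a minimal sketch in plain Python (not part of the original thread) of the window-space depth produced by a standard perspective projection (glFrustum/gluPerspective) with the default glDepthRange(0, 1):

```python
def window_depth(d, near, far):
    """Window-space depth for a point at eye-space distance d from
    the viewer, assuming a standard perspective projection and the
    default glDepthRange(0, 1)."""
    return far * (d - near) / (d * (far - near))

# With near = 1.0 and far = 15000.0 (the values given above), almost
# the entire view volume maps to depths above 0.9:
print(window_depth(10.0, 1.0, 15000.0))    # roughly 0.90
print(window_depth(100.0, 1.0, 15000.0))   # roughly 0.99
```

So a ground plane even a few dozen units away has f[WPOS].z of about 0.99 everywhere, which is visually indistinguishable from a constant once multiplied by 0.5.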

You should maybe do a more discriminative test (like color = (depth > 0.8 ? 1 : 0)) or, if you can, rectify the non-linearity of the depth buffer (though that is maybe not so easy).
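Both suggestions can be sketched in plain Python (the helper names are hypothetical, and the inversion assumes the same glFrustum-style depth mapping):

```python
def thresholded_depth(z_w, threshold=0.8):
    """The 'discriminative test' suggested above: output pure white
    only for fragments whose window-space depth exceeds a threshold."""
    return 1.0 if z_w > threshold else 0.0

def eye_distance(z_w, near, far):
    """'Rectify' the non-linearity: invert the perspective depth
    mapping to recover eye-space distance from window depth z_w."""
    return far * near / (far - z_w * (far - near))
```

The threshold test makes small depth differences near 1.0 visible; the inversion gives a value that varies linearly with distance, at the cost of a division per fragment.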

Anyway, let me know the results of your investigations. I’ll soon have to use depth in my fragment shader, and I would be interested to know whether you run into other difficulties.

[This message has been edited by divide (edited 03-17-2004).]

Everything looks exactly the same. The nearest parts of the ground have exactly the same color as the ones that are farthest away; there is not the slightest difference in color visible. It’s really exactly the same. If you still do not believe me, I will post some screenshots.

At the moment I am thinking about computing the distance from the viewer to each vertex in the vertex program, passing it as a texture coordinate, and using that instead… or sticking with the fog coordinate.
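That per-vertex idea can be sketched as follows (plain Python rather than vertex-program assembly; dividing by the far plane is an assumption, used only to bring the value into [0, 1] so it fits in a texture coordinate):

```python
import math

def normalized_view_distance(vertex_eye, far=15000.0):
    """Distance from the viewer (the eye-space origin) to a vertex,
    divided by the far plane so the result lies in [0, 1].  Unlike
    window-space z, which is compressed toward 1.0, this value grows
    in direct proportion to the distance."""
    x, y, z = vertex_eye
    return math.sqrt(x * x + y * y + z * z) / far
```

The fragment program would then read this interpolated value from the texture coordinate instead of f[WPOS].z.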

Is there no one around who has gotten this working and is able to tell me what the heck is going on?

Thanks
Jan

Originally posted by JanHH:
Is there no one around who got that working and is able to tell me what the heck is going on?

I’ve only used fragment depth with ARB_fragment_program. Maybe you can try to substitute your shader for an ARB one to rule out driver issues?

– Tom

Sigh… OK, I will do so. Maybe I should substitute my graphics board for an ARB one, too.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.