Depth Buffer issue?

So, I’ve rendered an engine. I’ve uploaded the pictures here: pictures. The casing is supposed to cover up the internals of the engine. The top is a front view, and in the lower two pictures I’ve rotated the engine clockwise, as you look from the top.
I’m using shaders, and I suspect I have a problem with the depth buffer and depth testing. I don’t really understand how the shaders interact with the depth buffer. What I’m doing now is setting gl_FragDepth to gl_Position.z/gl_Position.w.
I also suspect that the problem may be caused by face culling, though I turned that off…

Any help, or just a pointer to something that describes how shaders and the depth buffer work would be great.

As long as you do not write to gl_FragDepth at all, you get the default behaviour, which is usually what you want.

I don’t see how you can set gl_FragDepth with values contained in gl_Position:
[ul][li]gl_FragDepth is an output of the fragment shader (section Shader Outputs, spec 2.1 page 199), whereas gl_Position is an output of the vertex shader (section Varying Variables, spec 2.1 page 84).[/li][li]The input of a fragment shader is gl_FragCoord, which is very different from gl_Position.[/li][/ul]
Syntactically, your code should not even compile.
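A minimal sketch of where each built-in lives (GLSL 1.20; the fixed-function matrices are used here only for brevity):

```glsl
// --- vertex shader ---
void main()
{
    // gl_Position is a vertex-shader OUTPUT (clip coordinates).
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// --- fragment shader (a separate compilation unit) ---
void main()
{
    // gl_FragCoord is a fragment-shader INPUT (window coordinates);
    // gl_FragDepth is a fragment-shader OUTPUT. gl_Position does not
    // exist in this stage, so referencing it will not compile.
    gl_FragDepth = gl_FragCoord.z; // same value as the default behaviour
    gl_FragColor = vec4(1.0);
}
```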

Look at what happens to gl_Position (see 2.15.4 Shader Execution, “The following operations are applied to vertex values that are the result of executing the vertex shader”, spec 2.1 page 85):

gl_Position contains the clip coordinates of a vertex. Let’s note these coordinates (x_c,y_c,z_c,w_c). It means gl_Position.xyzw=(x_c,y_c,z_c,w_c).
[ul][li]1. The perspective division divides the clip coordinates by w_c and gives the normalized device coordinates (x_d,y_d,z_d).[/li][li]2. The viewport transformation, followed by an optional polygon depth offset, is applied. The result of this transformation is the window coordinates of the vertex (x_w,y_w,z_w).[/li][/ul]
This is the notation of section 2.11.1, spec 2.1, page 42.

At this point we have the position of the vertex expressed in window coordinates.
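Written as GLSL-style pseudocode, the two steps look like this (they happen in fixed hardware after the vertex shader, not in any shader you write; the viewport parameter names follow section 2.11.1):

```glsl
vec4 clip = gl_Position;          // (x_c, y_c, z_c, w_c)
vec3 ndc  = clip.xyz / clip.w;    // perspective division: (x_d, y_d, z_d)

// Viewport transform (o_x, o_y: viewport center; p_x, p_y: viewport
// width and height; n, f: the glDepthRange values):
float x_w = p_x / 2.0 * ndc.x + o_x;
float y_w = p_y / 2.0 * ndc.y + o_y;
float z_w = (f - n) / 2.0 * ndc.z + (n + f) / 2.0;
```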

I know that’s a lot of “w” with different meanings, and that’s why it is confusing:
[ul][li]_w: for a component expressed in window coordinates[/li][li].w: for the fourth element of a vector in a shader[/li][li]w_c: for the fourth component of the vertex position expressed in clip coordinates[/li][/ul]
It becomes nasty when it comes to the fourth element of the fragment shader input, gl_FragCoord.w.

Now, what happens during the rasterization of a primitive?

For the purpose of this discussion, let’s take a simple line segment between vertices V0 and V1.

x_w and y_w are linearly interpolated between V0 and V1.
Let’s call these interpolated values x_i and y_i.

gl_FragCoord.x=x_i
gl_FragCoord.y=y_i

z_w is linearly interpolated as well: (equation 3.7, spec 2.1, page 104)

gl_FragCoord.z=z_i

see “Shader Inputs” (spec 2.1, page 198)

Finally, here is the really confusing notation: gl_FragCoord.w

It is not an interpolated value expressed in window coordinates, but the interpolated value of the inverse of w_c, the fourth coordinate in clip coordinates.

gl_FragCoord.w=(1/w_c)_i
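So, if you ever need it, the clip-space w of the current fragment can be recovered in a fragment shader by inverting it. A small sketch (the scale factor is just an assumption for visualization):

```glsl
void main()
{
    // gl_FragCoord.w holds the interpolated 1/w_c, so invert it
    // to get back the clip-coordinate w for this fragment.
    float w_clip = 1.0 / gl_FragCoord.w;

    // Visualize it as a grey level, scaled by an arbitrary factor.
    gl_FragColor = vec4(vec3(w_clip * 0.01), 1.0);
}
```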

Also, as Jan said, if you don’t write anything in gl_FragDepth, the value of gl_FragCoord.z is assigned to it automatically.

ref: Shader Outputs, spec 2.1, page 199:

“If the active fragment shader does not statically assign a value to gl_FragDepth, then the depth value generated during rasterization is used by subsequent stages of the pipeline.”
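In other words, the common case needs nothing special at all; a fragment shader like this minimal sketch gets correct depth testing for free:

```glsl
void main()
{
    // No write to gl_FragDepth anywhere: the interpolated z_w
    // (also available as gl_FragCoord.z) is used for the depth test.
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```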

I hope it will make you less confused about all the meanings of “w” and depth values.

I was under the impression that the matrix stack got deprecated, so I transformed the vertices using custom uniforms. Since I’m not using the provided matrix stack, I don’t think that would still work. I know that at this point I could still use the default matrix stack, but I decided to be future-proof and actually understand the process.

I set a varying variable in my vertex shader, then used that in the fragment shader, effectively determining the depth in the vertex shader, so I could use gl_Position.

If I leave gl_FragDepth alone, as you both suggest, nothing changes…

Given what I said above, what you compute with gl_Position.z/gl_Position.w is actually z_d, the normalized device z coordinate.

Doing that, you miss the viewport transformation and the optional polygon depth offset, which give

z_w = (f - n)/2 * z_d + (n + f)/2 + o

n, f: the depth range values set by glDepthRange
o: optional depth offset

The interpolated value of z_w is the depth value for a given fragment.

see section 2.11.1 Controlling the Viewport, spec 2.1 page 42.
See section 3.5.5 Depth Offset, spec 2.1 page 112.
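If you really wanted to reproduce z_w yourself from z_d, it would have to include that transformation. A sketch, assuming you pass the glDepthRange values in as uniforms (the names are mine):

```glsl
uniform float u_depthNear; // n passed to glDepthRange (default 0.0)
uniform float u_depthFar;  // f passed to glDepthRange (default 1.0)

// Section 2.11.1: z_w = (f - n)/2 * z_d + (n + f)/2 (+ optional offset o)
float windowDepth(float z_d)
{
    return (u_depthFar - u_depthNear) / 2.0 * z_d
         + (u_depthNear + u_depthFar) / 2.0;
}
```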

This transformation is not deprecated: see section 2.12.1 Controlling the Viewport, spec 3.1, page 74.

Yes, the transformations from object coordinates (gl_Vertex) to clip coordinates (gl_Position) are deprecated in OpenGL >= 3.0. In OpenGL < 3.0, if you define a vertex shader you bypass the fixed-function transformations, but you still have access to the matrices from the shader through gl_ModelViewMatrix and gl_ProjectionMatrix (see GLSL spec 1.20, Built-in Uniform State, page 50).
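A future-proof vertex shader along those lines just uploads its own matrices; a sketch (the uniform and attribute names are assumptions):

```glsl
uniform mat4 u_projection;
uniform mat4 u_modelView;
attribute vec3 a_position;

void main()
{
    // Output clip coordinates and let the fixed stages do the
    // perspective division and viewport transform; the depth
    // buffer then gets the correctly interpolated z_w for free.
    gl_Position = u_projection * u_modelView * vec4(a_position, 1.0);
}
```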

Thanks for the detailed responses. They really help.
I think I found the real problem:
When I projected the vertices, they all ended up with the same z-values, so the depth test did no good whatsoever.

Thanks.