very simple shader program renders black artefacts

Since Catalyst 7.12, my headlight shader program is broken. It no longer produces just a nice round shine, but also a lot of black artifacts, as if the scene were somehow being dithered. I have played around with driver settings, but to no avail.

Finally, I reduced the shader program to a very simple form that just sets the fragment color to twice the texture color. I have no bloody idea what is going wrong here. Please help.

Artefacts:

How it should look (w/o headlight):

vertex shader:

void main(void)
{
    // pass the first set of texture coordinates through to the fragment shader
    gl_TexCoord[0] = gl_MultiTexCoord0;
    // standard transformation of the vertex position
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // forward the vertex color
    gl_FrontColor = gl_Color;
}

fragment shader:

uniform sampler2D btmTex;

void main(void)
{
    // sample the base texture and double its color
    // (2.0 instead of 2: strict GLSL 1.10 has no implicit int-to-float conversion)
    vec4 btmColor = texture2D(btmTex, gl_TexCoord[0].xy);
    gl_FragColor = btmColor * 2.0;
}

The same problem existed with NVidia drivers for quite a while already. With Catalyst 7.10, everything worked.

Before the color pass, a depth-only pass is rendered. If I turn on color writes in that pass, I get gray artifacts instead. Could I have a problem with client states?

Are stencil shadows involved in some way?

Never heard of gl_TexCoord. Is that a built-in variable? Try using your own varyings to pass information from the vertex-shader to the fragment-shader.
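
For example, a minimal sketch with a user-defined varying instead of gl_TexCoord (the name texCoord is just illustrative); everything else stays as in your shaders:

vertex shader:

varying vec2 texCoord;

void main(void)
{
    // user-defined varying instead of the built-in gl_TexCoord
    texCoord = gl_MultiTexCoord0.xy;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;
}

fragment shader:

uniform sampler2D btmTex;
varying vec2 texCoord;

void main(void)
{
    vec4 btmColor = texture2D(btmTex, texCoord);
    gl_FragColor = btmColor * 2.0;
}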

Do the black artifacts flicker somehow or are they always fixed at the same position?

Since you only do the standard transformations, you should use ftransform() in the vertex-shader, especially when doing multi-pass rendering.

Jan.

That’s ATI’s hierarchical Z-buffer! I recognize it. :)

I experienced it when I ran a program rendering both a GL and a DX scene in parallel, and it seemed like the hierarchical Z (HyperZ?) was shared between at least those two (but more likely all users). The result was 4x4 pixel blocks that were not rendered properly.

Funny, though, that you should encounter it like this. Perhaps my experience gives some clues? It seems you could have made that hierarchical Z-buffer throw a fit.

“The same problem existed with NVidia drivers for quite a while already.” The very same problem? Not z-fighting, but this display of (seemingly) hierarchical Z, i.e. blocks of 4x4 pixels? On NVidia?

Some more ideas:

Your description sounds much like z-fighting, so check your camera settings (near and far plane), your depth-comparison mode (it should be GL_LEQUAL or GL_EQUAL) and maybe your depth-buffer precision (24 bit?).

Do you have alpha-test enabled? You read an alpha value from your texture (do the textures have an alpha channel?) and write it as the fragment’s alpha value, so if alpha-test is enabled, it could discard your fragments.
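
As a quick test (just a sketch, the constant 1.0 is only illustrative), you could write an alpha value that a typical “reject low alpha” test will not discard:

uniform sampler2D btmTex;

void main(void)
{
    // brighten the texture color, but force alpha to 1.0 so a typical
    // enabled alpha-test cannot silently discard the fragment
    vec4 btmColor = texture2D(btmTex, gl_TexCoord[0].xy);
    gl_FragColor = vec4(btmColor.rgb * 2.0, 1.0);
}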

Jan.

If you’re mixing fixed functionality and shaders in different passes, it’s also best to use the ftransform() function in the vertex shader to make sure the results are the same.

N.

Thanks for all the replies.

Where/how exactly do I insert the ftransform() call(s)?

The artifacts move when I change the view direction or viewer position. Sometimes entire faces are black.

I have found out that the problem is caused by the initial depth-only render pass. If I omit it, I don’t get these artifacts.

Just insert:
gl_Position = ftransform();
instead of:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

By the way, I’ve experienced similar artifacts when dividing by zero in shaders, so maybe it’s related.

Cheers,
N.

It has something to do with depth-only rendering and client states. It looks like I could fix it using ftransform(), but now my program frequently crashes in the ATI driver, and occlusion queries don’t work anymore either.

If you can, try to avoid using the fixed function pipeline and use vertex shaders for all passes. Make sure you use the same shader code to produce your output position in each shader (a single vertex.position * mvp would be ideal), and it would be best to put this as the first bit of code in your shader. Any difference in the code used to calculate the output position (even just having other, unrelated code executed before it or in the middle of it) can cause a slight difference and result in z-fighting.
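
A minimal sketch of what that could look like for the color-pass vertex shader (assuming the depth-pass vertex shader starts its main() with the identical position line):

void main(void)
{
    // identical position computation, first statement in every pass
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    // pass-specific work only after the position has been written
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
}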

The headlight works now, but my occlusion queries do not always work anymore.

“I have found out that the problem is caused by the initial depth-only render pass.”

That reaffirms my suspicion. Just as a test: could you try pushing the scene out, just the tiniest bit, before running the depth rendering pass, and then returning to the actual projection matrix for the rest?

If my suspicion is correct, it could mean you have to run the depth-only pass with a shader program too, just to get the same depth-calculation path. Performance could suffer, and, if you are unlucky, it could introduce other artifacts. :(
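
If it comes to that, the depth-only program can be tiny. Just a sketch, assuming color writes are masked off on the application side so the fragment shader does not need to output anything:

vertex shader:

void main(void)
{
    // same transformation path as the color pass
    gl_Position = ftransform();
}

fragment shader:

void main(void)
{
    // no color output; only the depth value matters in this pass
}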

What hardware do you use anyway? I thought that modern hardware uses the same path for the ffp and shaders.

Jan.

Jan, from experience ATI uses a hierarchical depth buffer. It stores the depth (or something like it) for 4x4 framebuffer blocks as a “first measure”. When the depth buffer is written from a shader, I’m not willing to bet it behaves the same as when the hardware as such writes to it (for a depth pass).

As I also only ever saw either 1) black blocks or 2) proper rendering, I’m almost willing to bet they store the state of those 4x4 pixel blocks in a single bit (meaning they can make a depth-scan pass over a whole framebuffer with 256 times lower bandwidth requirements). If that’s the case, I’m again willing to bet there is a hardware (fixed function) path quite different from the software (shader) path.

Modern hardware does not have a fixed function pipeline anymore. The ffp is only an OpenGL legacy construct, and under the hood it is emulated through shaders. ftransform was only introduced back in the day, when there actually were two different hardware pipelines, to make sure you could mix them and get the same depth values.

Since there is no fixed function pipeline (in the hardware) anymore and everything should (in theory) run through the same transistors, it surprises me that the depth values produced can be different.

Hence the question of what hardware he uses. If it is a Geforce 5, I understand it, but I assume it’s newer hardware.

Jan.

The problem occurred on a lot of different NVidia hardware, up to a Geforce 8800 GTS 320 MB. It also occurs with a Radeon X1900 XT and Catalyst 7.12 (not with Catalyst 7.10 though).