Artifact with deferred shading on ATI cards

I get this strange edge-highlight artifact with my deferred renderer on ATI cards. NVidia cards are unaffected.

It seems to occur at the junction where the texcoords of the unwrapped (fake cubemap) shadowmaps meet, at the junction between CSM stages, and at the edge where a pixel stops being affected by a spotlight, as seen in this image.

It seems to be a function of the screen normal because normal maps affect the artifact.

I had seen something similar with the latest ATI drivers in one of my postprocessing shaders. In my case the artifact appeared in pixels on the border between two areas. Those two areas differed in the result of an if branch that was put around an additional texture sample as a speed optimization (i.e. the calculation would be correct even without the if test). Since the texture in question does not have mipmaps (and thus should not depend on derivative calculation within the quad), I am currently unsure whether there is a legitimate reason for that artifact to appear or whether it is a problem with the shader compiler.
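
The shader had roughly this shape (just a sketch; the names are made up to illustrate the pattern, not my actual code):

uniform sampler2D extraTex;   // the additional texture, no mipmaps

vec3 shade( vec2 uv, vec3 baseColor, float mask )
{
    vec3 result = baseColor;
    // The branch exists purely as a speed optimization: when mask is 0.0
    // the extra sample would contribute nothing anyway.
    if ( mask > 0.0 ) {
        result += mask * texture2D( extraTex, uv ).rgb;
    }
    return result;
}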

Do you use anti-aliasing?

No anti-aliasing is used.

However, a logical operation causes some pixels to skip lighting because they face away from the light source. The artifact seems to occur at the border where the logical results differ.
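
Roughly, the test has this shape (just a sketch with made-up names; computeLighting stands in for whatever the real lighting code is):

vec3 shadePixel( vec3 normal, vec3 lightDir )
{
    vec3 lighting = vec3( 0.0 );
    float NdotL = dot( normal, lightDir );
    // Skip the lighting entirely for pixels facing away from the light.
    if ( NdotL > 0.0 ) {
        lighting = computeLighting( normal, lightDir, NdotL );
    }
    return lighting;
}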

Maybe you can avoid the artifacts by replacing the test with a step or smoothstep operation.
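
Something along these lines, say (reusing the hypothetical names from the sketch above):

// Instead of the branch, weight the result by a 0/1 (or smoothed) factor.
float facing = step( 0.0, NdotL );                  // hard 0.0 / 1.0 cutoff
// float facing = smoothstep( 0.0, 0.05, NdotL );   // softer transition
vec3 lighting = facing * computeLighting( normal, lightDir, NdotL );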

The purpose of the logical operation (i.e. the if branch), at least in my case, is to speed up the rendering of areas which do not need those additional calculations on cards that have dynamic branching capabilities. A purely mathematical operation will not do that.

It might be possible to work around the artifact with something like:


result = 0.0;
if ( some_condition( x ) ) {
    result = do_calculations();
}
// Re-apply the condition as arithmetic so the final value no longer
// depends on which side of the branch a pixel took.
result *= mathematical_equivalent_of_the_condition( x );
use( result );

I will test it on Monday.

ATI is pretty good about fixing errors if you submit a complete bug report. I’ll send something to them soon.

I tried the workaround. However, simply calculating the result of the if branch into a separate variable and adding it after the branch, instead of doing the addition directly inside the branch, stopped the artifact from appearing in my case, so this is almost certainly a driver problem.
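
For the record, the change was roughly this (same made-up names as in my earlier sketch):

vec3 extra = vec3( 0.0 );
if ( mask > 0.0 ) {
    // Compute the branch's contribution into a temporary...
    extra = mask * texture2D( extraTex, uv ).rgb;
}
// ...and apply it after the branch instead of adding inside it.
result += extra;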

You might be aware of this already, in which case please ignore it, but perhaps it is related to what’s described here: http://www.opengl.org/pipeline/article/vol003_6/ ?

I am using a GL_NEAREST filter on the depth and normal textures, and no downsampling is used, so I do not think that article is describing my problem. But I also didn’t read it very carefully or understand what they were talking about.

Sorry, I think I had a bit of a mental short circuit. The article is about varyings, but of course your problem is related to texture filtering somehow. Sorry :stuck_out_tongue:

I think it is some kind of logical problem. I sent a very thorough report to AMD.
