Derivatives (dFdx/dFdy) not available on ATI (VSM)

Hi,

I’ve got Variance Shadow Maps working in GLSL, based on the code in GPU Gems 3. For example, my code for calculating the moments when generating the shadow map is:

vec2 ComputeMoments(float depth)
{
    // First moment: depth. Second moment: depth squared.
    vec2 moments;
    moments.x = depth;
    moments.y = depth * depth;

    // Bias the second moment with the screen-space depth derivatives,
    // as in the GPU Gems 3 chapter.
    float dx = dFdx(depth);
    float dy = dFdy(depth);
    moments.y += 0.25 * (dx * dx + dy * dy);

    return moments;
}
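
For context, the depth pass that calls this looks roughly like the sketch below (v_depth is an illustrative varying holding light-space depth; the actual setup may differ):

varying float v_depth;  // light-space depth, assumed written by the vertex shader

void main()
{
    // Write the two moments into the shadow map's red/green channels.
    gl_FragColor = vec4(ComputeMoments(v_depth), 0.0, 1.0);
}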

It works great on my nVidia card, but the shader does not compile on ATI cards (Radeon HD 3670 and 3870) with the latest drivers, returning the errors:

“error(#202) No matching overloaded function found dFdx”
“error(#202) No matching overloaded function found dFdy”

Can I get this to compile on these cards (maybe by enabling an extension) or are the derivative functions just not available on older ATI cards? Or is there a way to calculate the derivatives manually?

Thanks,
Chris.

http://www.gamedev.net/community/forums/viewreply.asp?ID=3636974

Stepher A, thanks for linking… :wink:

Found the problem! I was compiling the ComputeMoments function into both the vertex and fragment shaders. On nVidia this was okay, but on ATI it gave the error. Don’t know why, but it works fine now.

I’ve added the “#version 120” declaration anyway since it’s good practice.

dFdx and dFdy are only available in fragment shaders according to the GLSL spec.

frank, I just figured that out, thanks… :slight_smile:

Copied from my ‘other’ post: Just out of interest, can the derivatives be calculated manually for hardware that doesn’t provide them?

I think I know why; we’ve tripped over this too.

Pretty sure the issue is that the NVidia compiler (apparently) does dead-code elimination up front. So if you reference a fragment-shader-specific identifier or function (gl_FragCoord, dFdx, etc.) and, after dead-code elimination, it only ends up in your fragment shader, NVidia is fine with that; ATI is not. You end up having to #ifdef your ubershaders for ATI, which is annoying.

One specific example: you have a utility function that references dFdx, and while it is only called from your fragment shader’s main, its body is included in both the vertex and fragment shader sources. ATI won’t even tolerate it in the raw vertex shader source, even though you never actually pull it into the assembled shader logic; NVidia’s aggressive dead-code elimination just silently throws it out (which leads to simpler ubershader source code). The guard ends up looking like the sketch below.
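
Rough sketch; FRAGMENT_SHADER here is a symbol you have to define yourself when assembling the source, since GLSL doesn’t provide one built in:

#ifdef FRAGMENT_SHADER
// Referencing dFdx outside a fragment shader is what the ATI
// compiler rejects, so the whole body gets guarded.
vec2 ComputeMoments(float depth)
{
    vec2 moments = vec2(depth, depth * depth);
    float dx = dFdx(depth);
    float dy = dFdy(depth);
    moments.y += 0.25 * (dx * dx + dy * dy);
    return moments;
}
#endif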

It would be helpful if GLSL specced a standard for aggressive dead-code elimination to avoid this needless #ifdef ugliness.

This sounds like you want the specs to say: “Code may be illegal as long as it is not referenced by the legal rest of the code.” Wouldn’t that open a can of worms for unspecified behaviour?

I’d like to see some standardized #defines instead, something like GL_VERTEX_SHADER, GL_FRAGMENT_SHADER and so on, defined by GLSL automatically. Additionally, I beg for functionality to define my own preprocessor symbols from within my code without having to manipulate the shader source strings beforehand; the closest you can get today is sketched below.
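
A rough sketch of that workaround, leaning on the fact that glShaderSource accepts multiple strings which the driver concatenates (shader and fragmentBody are assumed to exist already, and the #version line still has to come first):

const GLchar *sources[3];
sources[0] = "#version 120\n";             /* #version must precede everything else */
sources[1] = "#define FRAGMENT_SHADER\n";  /* our own symbol, not built into GLSL */
sources[2] = fragmentBody;                 /* shader body without its own #version line */
glShaderSource(shader, 3, sources, NULL);  /* NULL: strings are null-terminated */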

Just out of interest, can the derivatives be calculated manually for hardware that doesn’t provide them?

And what hardware would that be?
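
That said, if the derivative functions genuinely were missing, one manual approximation would be a separate pass that takes finite differences of a depth texture written earlier. Rough sketch (u_depthTex, u_texelSize and v_uv are illustrative names):

uniform sampler2D u_depthTex;  // depth written by a previous pass
uniform vec2 u_texelSize;      // 1.0 / texture dimensions
varying vec2 v_uv;

void main()
{
    float depth = texture2D(u_depthTex, v_uv).r;
    // Forward differences stand in for dFdx/dFdy.
    float dx = texture2D(u_depthTex, v_uv + vec2(u_texelSize.x, 0.0)).r - depth;
    float dy = texture2D(u_depthTex, v_uv + vec2(0.0, u_texelSize.y)).r - depth;
    vec2 moments = vec2(depth, depth * depth + 0.25 * (dx * dx + dy * dy));
    gl_FragColor = vec4(moments, 0.0, 1.0);
}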

You end up having to #ifdef ubershaders for ATI, which is annoying.

Yeah, having to follow the actual specification is so annoying :wink:

Yeah, isn’t it? You can’t use stuff such as ‘saturate’, implicit casts and whatnot. :wink:
