OpenGL.org


Thread: AMD Catalyst 13.3 Beta Linux - Interface Block Redeclaration - GLSL Bugs

  1. #1
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,129

    AMD Catalyst 13.3 Beta Linux - Interface Block Redeclaration - GLSL Bugs

    Ahoi. Reading this thread triggered my curiosity once again. So I tried my luck with redeclaration of built-in interface blocks and, if I haven't fallen victim to my own incredible stupidity here, the GLSL compiler seems to violate a multitude of compilation/linkage rules from the spec (all quotes from the GLSL 4.30 Spec):

    The gl_PerVertex block can be redeclared in a shader to explicitly indicate what subset of the fixed pipeline interface will be used. This is necessary to establish the interface between multiple programs. [..] It must be a subset of the built-in members of gl_PerVertex.
    First of all, the wording is awful. The words can and necessary don't go well together: the block can be redeclared when using non-separable programs, but the redeclaration is necessary when using a program pipeline. Maybe a spec bug - maybe a driver bug. At least the driver seems convinced that can isn't the right wording. Anyway, using the following shaders separably works just fine, even though as per the GLSL spec it shouldn't:

    Code :
    // vertex shader
    #version 420
    layout(location = 0) in vec4 position;
     
    out gl_PerVertex
    {
        vec4 gl_Position;
        vec2 wtf_AMD;
    };
     
    void main()
    {
        gl_Position = position;
        wtf_AMD    = position.xy;
    }
     
    // geometry shader
    #version 420
     
    layout(points) in;
    layout(points, max_vertices = 1) out;
     
    in gl_PerVertex
    {
        vec4 gl_Position;
        vec2 wtf_AMD;
    } gl_in[];
     
    out gl_PerVertex
    {
        vec4  gl_Position;
        float  gl_PointSize;
    };
     
    void main()
    {
        gl_PointSize = 5.f;
        gl_Position  = gl_in[0].gl_Position + vec4(gl_in[0].wtf_AMD, 0.0, 0.0);
     
        EmitVertex  ();
        EndPrimitive();
    }

    No complaints whatsoever - not even when insulting the compiler with a nasty "WTF". Clearly, the vec2 is not a built-in member of gl_PerVertex and therefore does not belong to aforementioned subset.
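
    For contrast, here is a sketch of what a conformant redeclaration should look like - built-in members of gl_PerVertex only, per the subset rule quoted above:

    Code :
    // vertex shader - conformant redeclaration of gl_PerVertex
    #version 420
    layout(location = 0) in vec4 position;
     
    out gl_PerVertex
    {
        vec4  gl_Position;   // built-in member: allowed
        float gl_PointSize;  // built-in member: allowed
        // a user-defined member here should trigger a compile-time error
    };
     
    void main()
    {
        gl_PointSize = 1.0;
        gl_Position  = position;
    }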


    If a built-in interface block is redeclared, it must appear in the shader before any use of any member included in the built-in declaration, or a compile-time error will result.
    Unless my interpretation is wrong, this simply means that the block needs to appear before any use of any of its members. However, this compiles and works:

    Code :
    #version 420
    in vec4 position;
     
    void main()
    {
        gl_Position = position;
        amd_WTF  = position.xy;
    }
     
    out gl_PerVertex
    {
        vec4  gl_Position;
        vec2  amd_WTF;
    };

    This seems to violate the spec on two counts... As I understand it, I should not be able to use a member of a redeclared interface prior to its redeclaration and I assumed this goes for gl_Position as well. However, the compiler will only complain about the vec2 not being declared at that point.
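
    For reference, a version that respects the ordering rule - redeclaration first, use afterwards - and sticks to built-in members (a minimal sketch):

    Code :
    // vertex shader - redeclaration precedes any use of its members
    #version 420
    in vec4 position;
     
    out gl_PerVertex
    {
        vec4 gl_Position;   // declared before its first use in main()
    };
     
    void main()
    {
        gl_Position = position;  // legal: the redeclaration was already seen
    }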

    Now, what happens if we do all this with an old-school, linked program? The following will compile fine even though it clearly violates the rules of interface matching (which don't only apply to separable programs):

    Matched block names within an interface (as defined above) must match in terms of having the same number of declarations with the same sequence of types and the same sequence of member names, as well as having the same member-wise layout qualification (see next section).[..] Any mismatch will generate a link-time error.
    Consider the following clearly mismatching interfaces:

    Code :
    // vertex shader
    out gl_PerVertex
    {
        vec4 gl_Position;
        float wtf_AMD_float;
        vec2 wtf_AMD;
        float wtf_Padding;
    };
     
    // geometry shader
    in gl_PerVertex
    {
        vec4 gl_Position;
        vec2 wtf_AMD;
        float wtf_AMD_float;
    } gl_in[];

    The only thing that matches is the number of declarations. Neither the type sequence nor the name sequence matches. Two violations. Does the linker complain, as it should? No. Actually, it compiles, links and runs but generates no primitives - at least not until wtf_AMD_float is actually used:

    Code :
    // write position in the GS
    gl_Position  = gl_in[0].gl_Position + vec4(gl_in[0].wtf_AMD * gl_in[0].wtf_AMD_float, 0.0, 0.0);

    This will link and run just as fine but, lo and behold, generate incorrect output primitives for which actual fragments are rasterized, which can additionally be controlled by adjusting the point size... It shouldn't even link - and seriously, if for linked programs it also holds that the redeclared members must be a subset of the built-in members of gl_PerVertex, it shouldn't even compile IMHO. Too bad the spec doesn't explicitly state that the latter actually should occur.
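
    For comparison, a pair of blocks that would actually satisfy the matching rules - same number of declarations, same type sequence, same name sequence - while also staying within the built-in subset (a sketch):

    Code :
    // vertex shader
    out gl_PerVertex
    {
        vec4  gl_Position;
        float gl_PointSize;
    };
     
    // geometry shader
    in gl_PerVertex
    {
        vec4  gl_Position;
        float gl_PointSize;
    } gl_in[];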

    Please, someone tell me that I'm making some horrible mistakes above. Otherwise, the whole redeclaration thing seems broken as ****.

    EDIT: Just ran some of it with 13.1 again. The first example simply flickers, i.e. sometimes it generates primitives and sometimes it doesn't, but the compiler/linker still don't complain. The last example, ironically, generates the correct primitives. Obviously, 13.3 has regressed from incorrect to even more incorrect behavior.
    Last edited by thokra; 04-22-2013 at 09:28 AM.

  2. #2
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    They can be redeclared when using non-separable programs, they are necessary, however, when using a program pipeline. Maybe a spec bug - maybe a driver. At least the driver seems to be sure that can isn't the right wording.
    No, you have the right understanding. You have the ability to redeclare it; you must use this ability when using separate programs. It's really quite simple, and the spec seems pretty clear on this. The ability exists, and it must be used in certain cases. That's why it uses "can" to describe the ability, and "necessary" to describe when you have to use it.

    Too bad the spec doesn't explicitly state that the latter actually should occur.
    Except it does. I don't know how you didn't see it, when you quoted the spec where it says exactly that. "It must be a subset of the built-in members of gl_PerVertex." Not only that, in the paragraph above this quote, it shows an example that has your exact issue:

    Code :
    out gl_PerVertex {
      vec4 gl_Position; // will use gl_Position
      float gl_PointSize; // will use gl_PointSize
      vec4 t; // error, only gl_PerVertex members allowed
    }; // no other members of gl_PerVertex will be used

    The spec is clear on this. I have no idea why you're complaining about the quality of the spec.

    That being said, this may not be a driver bug. In accord with section 1.4, Error Handling, the only time that a driver is required to report an error and fail compilation/linking is when the spec explicitly states that a compile-time/link-time error will result. The specification uses "must" here, so violating that condition means the program is ill-formed. But that does not mean that the program won't compile without errors. Since no specific error message is required, the compiler is not required to diagnose this and halt compilation.

    In short, these are the kinds of errors that you need to resolve by not doing the wrong thing. Much like inappropriate casting operations in C++, they can only be resolved by the programmer doing things correctly.

    So stop trying to break the implementation and do things right.

  3. #3
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,129
    Quote Originally Posted by Alfonse
    Except it does. I don't know how you didn't see it, when you quoted the spec where it says exactly that.
    I didn't see what? I said myself right in the sentence before that one:

    Quote Originally Posted by myself
    it should not even compile IMHO.
    Yes, you have the code comment and you have this one word "must" which I actually used to bash the implementation before, but the spec is clear in other places by stating explicitly that a "compile-time error will occur" or something similar. And I actually wish that it did in this case. Maybe that's why the driver doesn't actually complain. Or the implementation is incorrect. In any case, there is a culprit here and it is either the spec still not being clear enough for a driver engineer to understand or the implementors not being thorough enough.

    these are the kinds of errors that you need to resolve by not doing the wrong thing.
    So, why bother writing conformance tests? Yes, I'm not a test suite - but do you care to explain to me why it's a bad thing to manually check for conformance to help improve driver quality, in cases where there is obviously incorrect behavior? Maybe I should apply at AMD as an implementation breaker ...
    Last edited by thokra; 04-23-2013 at 12:23 AM.

  4. #4
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    Yes, you have the code comment and you have this one word "must" which I actually used to bash the implementation before, but the spec is clear in other places by stating explicitly that a "compile-time error will occur" or something similar.
    I pointed out the difference. When the spec says that something "must" be the case, it must be the case. However, failure to abide by a rule is only a compiler/linker error if the spec says that it is explicitly or it's a grammar/lexical issue.

    Failure to abide by the rule is still ill-formed, and it will result in undefined behavior. But the compiler/linker are not required to detect that this rule was broken.

    Should the spec require a diagnostic from a failure to abide by this rule? It would probably be a good idea, if for no other reason than to catch typos. File a bug on it if you think a diagnostic should be reported.

    So, why bother writing conformance tests? Yes, I'm not a test suite - but do you care to explain to me why it's a bad thing to manually check for conformance to help improve driver quality, in cases where there is obviously incorrect behavior?
    Because this behavior is conformant. No error is required by the spec. No error was provided by the implementation. This is conformant with the spec. Since the shader is ill-formed according to the spec (ie: violating the rules of the spec), the behavior you get is undefined. But no error needs to result from compiling or linking.

    This is a quality of implementation issue, not a correctness of implementation issue. The driver ought to provide an error, but it does not have to in order to be compliant with the spec.

    In short, this should not be part of any conformance test.

    As for writing "conformance tests", there's no point in it. AMD's not going to care; they don't care about Piglit, and they won't care about what you're doing. Unless someone makes them abide by a test suite, they will only care about specific bugs that are brought to their attention or issues raised by developers that actually matter. And the former are lower priority than the latter.

    This is not a driver bug. So odds are good that they won't care.

  5. #5
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,129
    File a bug on it if you think a diagnostic should be reported.
    Will do. I feel this is something that can be easily caught and reported and it should be.
