Drawing without attrib array enabled gets "optimized out" by AMD driver?
I recently had to replace my laptop and in the process got something with an AMD GPU to complement my Nvidia "testing hardware". It turned out that two of my examples didn't work. The first was a clear case of the Nvidia driver letting me get away with something it shouldn't have (I wasn't binding the whole relevant range of a buffer with BindBufferRange), but the second one is puzzling me.
In my tessellation example I draw without any vertex attribs by deriving vertex positions from gl_VertexID and gl_InstanceID and then tessellating the result according to a "displacement texture". On AMD it simply does nothing: black screen, no errors, and a GL_PRIMITIVES_GENERATED query reveals that it doesn't even generate primitives. Just creating a dummy VAO+VBO and specifying an attrib array with stride 0 (one that isn't read in the shader) makes it work.
So my interpretation is that the driver "optimizes out" the draw call when there is no attrib array enabled? I can make it work with the dummy array, but what I really wonder is whether that is valid behavior. So far I couldn't clarify this just by reading the spec, but I might be looking in the wrong places.
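For reference, here is a minimal sketch of the workaround I mean. The names (dummyVao, dummyVbo, vertexCount, instanceCount) are mine, not from my actual example, and this assumes an existing GL context with the tessellation program already bound:

```c
/* Dummy VAO + VBO so that at least one attrib array is enabled.
 * The attribute is never read in the shader; its only purpose is to
 * keep the AMD driver from discarding the attribute-less draw call. */
GLuint dummyVao, dummyVbo;
glGenVertexArrays(1, &dummyVao);
glGenBuffers(1, &dummyVbo);

glBindVertexArray(dummyVao);
glBindBuffer(GL_ARRAY_BUFFER, dummyVbo);

/* A single float of storage is enough since nothing reads it. */
float dummy = 0.0f;
glBufferData(GL_ARRAY_BUFFER, sizeof(dummy), &dummy, GL_STATIC_DRAW);

/* Attribute 0 with stride 0, unused by the shader. */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, 0, (void*)0);

/* Positions are still derived from gl_VertexID / gl_InstanceID in the
 * vertex shader; the draw just needs this VAO bound to work on AMD. */
glDrawArraysInstanced(GL_PATCHES, 0, vertexCount, instanceCount);
```

Without the glEnableVertexAttribArray/glVertexAttribPointer pair the draw produces nothing on AMD; with it, everything renders as expected.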
Last edited by JakobProgsch; 10-04-2012 at 04:41 AM.