Explicit Attribute Location on ATI

Hi,

I have a simple vertex shader that works on NVIDIA but not on ATI. I’m trying to determine if it is a driver bug. The attribute inputs are:


layout(location = 1) in vec3 position;
layout(location = 2) in vec3 normal;

glGetAttribLocation returns 1 and 2 for position and normal, respectively. Then I use these indices when preparing a VAO:


glEnableVertexAttribArray(1);
glVertexAttribPointer(1, /* ... */);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, /* ... */);
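
For reference, here is a filled-out sketch of that setup, assuming an interleaved buffer of a vec3 position followed by a vec3 normal per vertex (the vao and vertexBuffer names and the vertex layout are just for illustration):


struct Vertex
{
    float position[3];   // fed to attribute location 1
    float normal[3];     // fed to attribute location 2
};

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);

glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);

glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)(3 * sizeof(float)));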

This works fine on NVIDIA, but no geometry is rendered on an ATI Radeon HD 5870 with Catalyst 10.10 on Windows Vista 64-bit. If I change the attribute locations from 1 and 2 to 0 and 1, it works on ATI. I can’t find anything in the GLSL or OpenGL spec that says attribute locations must start at zero, though. Thoughts?

Regards,
Patrick

Are you using the core profile or the compatibility profile?

According to the compatibility specification, glLinkProgram should fail if you do not bind an attribute to attribute zero (or don’t use gl_Vertex). The core profile says nothing about attribute zero, so it should be fine.
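
In the compatibility profile, you can make sure something lands on attribute zero before linking, for example (just a sketch; an explicit layout(location) in the shader would override this call):


glBindAttribLocation(program, 0, "position");
glLinkProgram(program);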

Good call, Alfonse. I am using the GL 3.3 core profile. glGetString(GL_VERSION) returns:

“3.3.10243 Core Profile Forward-Compatible/Debug Context”

Also, when I switch the locations from 1 and 2 to 0 and 2, it works as I would expect given what you mentioned. Perhaps this is a driver bug?

Thanks,
Patrick

I don’t know if it is really necessary, but start allocating your attribute locations from 0 and, if possible, without holes.
I’m not sure whether the spec requires it, but I remember some discussion that if attributes are not allocated this way, the implementation may not provide the best possible performance. I think that’s why it works like that on ATI.
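
For example, the declarations from the first post, packed to start at zero (the matching glEnableVertexAttribArray/glVertexAttribPointer indices would have to change to 0 and 1 as well):


layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;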

Strange. I am using location 0 with core profile and I don’t have any issue with AMD drivers. Something tells me that the problem comes from somewhere else.

I am using location 0 with core profile

Did you miss a “not” in that sentence? Because using location 0 does in fact work. The problem is that not using location 0 doesn’t work, and the spec doesn’t say that this should be a problem.

Ahhh… I tried without 0 and I reproduced the issue.

This is certainly one option, and I’ll do it if I have to (or perhaps if there is a performance advantage), but being able to assign arbitrary attribute locations is nice for software design. You can have C++ and GLSL constants like:


#define position 0
#define normal 1
#define texturecoord 2

These allow consistent locations throughout an engine, which is particularly useful if the same VAO is used with more than one shader.
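
One way to keep the two sides in sync might be a small shared header that the C++ code #includes and the shader loader prepends to the GLSL source just after the #version line (the names below are only illustrative, and are chosen so they don’t collide with the GLSL variable names):


// VertexLocations.h -- plain #defines, usable by both the C++ and GLSL preprocessors
#define POSITION_LOCATION 0
#define NORMAL_LOCATION   1
#define TEXCOORD_LOCATION 2

// C++ side
glEnableVertexAttribArray(POSITION_LOCATION);

// GLSL side, after VertexLocations.h has been prepended to the source
layout(location = POSITION_LOCATION) in vec3 position;
layout(location = NORMAL_LOCATION)   in vec3 normal;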

If you are curious why I am not using “position”, it is because I am passing a 64-bit position as two 32-bit attributes; it is cleaner not to use “position” when we really want positionHigh and positionLow. Of course, I could just define positionHigh to be zero and move on, but I think it is important to find out whether this is a bug or not.
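
In case it helps to see what I mean by the high/low split, something along these lines is the basic idea (just a sketch):


// "high" carries the coarse value; "low" carries what float precision lost.
void doubleToTwoFloats(double value, float& high, float& low)
{
    high = static_cast<float>(value);
    low  = static_cast<float>(value - static_cast<double>(high));
}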

Regards,
Patrick

Good to know. Thanks. I’ll send an email to devrel@amd.com.

Patrick

Update: ATI confirmed this as a bug and said they are working on it.

Patrick
