VertexAttribute Components Mixup

Hi. I’m seeing some strange behavior when accessing a vertex attribute in my shader. In particular, the y-component appears to be identical to the x-component when I swizzle .yy; the results of .xx and .yy are indistinguishable. When I swizzle .xy, the y-component appears to be correct.

I’ve worked up as self-contained an example as I can:


#include <stdio.h>
#include <string>
#define GL_GLEXT_PROTOTYPES
#include <GL/glut.h>

GLuint hands_aid;
GLuint hands_pid;

void render() {
  glViewport(0, 0, 512, 512);
  glClear(GL_COLOR_BUFFER_BIT);

  glUseProgram(hands_pid);
  glBindVertexArray(hands_aid);
  glDrawArrays(GL_POINTS, 0, 3);
  glBindVertexArray(0);
  glUseProgram(0);

  glutSwapBuffers();
}

int main(int argc, char **argv) {
  glutInit(&argc, argv);  // glutInit must precede other GLUT calls
  glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
  glutInitWindowSize(512, 512);
  glutCreateWindow("Test");
  glutDisplayFunc(render);

  glPointSize(3.0f);

  float positions[] = {
    -0.2f, 0.1f, 0.7f, 1.0f,
    -0.3f, 0.5f, 0.7f, 1.0f,
    -0.4f, 0.9f, 0.7f, 1.0f,
  };

  // Create a vertex array to bundle all vertex state.
  glGenVertexArrays(1, &hands_aid);
  glBindVertexArray(hands_aid);

  // Create a buffer for the spatial coordinates.
  GLuint position_bid;
  glGenBuffers(1, &position_bid);
  glBindBuffer(GL_ARRAY_BUFFER, position_bid);
  glBufferData(GL_ARRAY_BUFFER, 3 * 4 * sizeof(float), positions, GL_STATIC_DRAW);
  glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
  glEnableVertexAttribArray(0);
  glBindBuffer(GL_ARRAY_BUFFER, 0);
  glBindVertexArray(0);

  // Compile vertex shader.
  std::string src =
    "#version 330\n"
    "\n"
    "in vec4 position;\n"
    "\n"
    "void main() {\n"
    "  gl_Position = vec4(position.yy, 0.0, 1.0);\n"
    "}\n";
  // If I change position.yy to position.xy, the values seem correct. But yy
  // is no different than xx.
  GLuint vertex_sid = glCreateShader(GL_VERTEX_SHADER);
  const char *buffer = src.c_str();
  glShaderSource(vertex_sid, 1, &buffer, NULL);
  glCompileShader(vertex_sid);

  // Compile fragment shader.
  src =
    "#version 330\n"
    "\n"
    "out vec4 fragColor;\n"
    "\n"
    "void main() {\n"
    "  fragColor = vec4(1.0);\n"
    "}\n";
  GLuint fragment_sid = glCreateShader(GL_FRAGMENT_SHADER);
  buffer = src.c_str();
  glShaderSource(fragment_sid, 1, &buffer, NULL);
  glCompileShader(fragment_sid);

  // Link.
  hands_pid = glCreateProgram();
  glAttachShader(hands_pid, vertex_sid);
  glAttachShader(hands_pid, fragment_sid);
  glBindAttribLocation(hands_pid, 0, "position");
  glLinkProgram(hands_pid);

  glutMainLoop();
  return 0;
}

This image shows x and y correctly retrieved from the VBO when I swizzle .xy: the x’s are negative, the y’s positive.

Here I swizzle .xx; x is used for both the x and y positions of each vertex.

And here I swizzle .yy; y is used for both the x and y positions of each vertex, and it looks exactly like the .xx case.

Any ideas where I’m going wrong? I appreciate your help!

  • Chris

The code looks correct to me. Try outputting the position into the point color to see whether anything changes between the .xx and .yy cases.
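For example, a rough sketch (the dbg_pos name is just illustrative):

#version 330
in vec4 position;
out vec4 dbg_pos;  // pass the raw attribute through for inspection

void main() {
  gl_Position = vec4(position.yy, 0.0, 1.0);
  dbg_pos = position;
}

and in the fragment shader:

#version 330
in vec4 dbg_pos;
out vec4 fragColor;

void main() {
  // Map x and y into red and green; negative values clamp to black.
  fragColor = vec4(dbg_pos.x, dbg_pos.y, 0.0, 1.0);
}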

When I do that, the positions resolve correctly with .yy. It seems that as long as I reference position.x in some fashion, position.y is correct. If I don’t, position.y takes on position.x’s value. Perhaps a compilation problem in the NVIDIA driver? I’m using the latest, 256.44.

  • Chris

Also, I’m on Linux. I’ll go figure out how to submit a bug report to NVIDIA.

Sure, if the facts are exactly as you described, it looks like a bug. There is always something weird with vertex attribute 0 :wink:

http://nvdeveloper.nvidia.com
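If you want to rule out attribute-0 weirdness, one quick test (hypothetical; the choice of index 1 is arbitrary) is to bind the attribute somewhere other than location 0 and use the same index in the VAO setup:

glBindAttribLocation(hands_pid, 1, "position");  // before glLinkProgram
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);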

Have you tried changing:
gl_Position = vec4(position.yy, 0.0, 1.0);

to:
gl_Position = vec4(position.y, position.y, 0.0, 1.0);

It’s not as concise, but it’s just as efficient.

Yes, I’d tried that. It made no difference.

You know, the easiest way to test this effect on a variety of hardware is just to share the program binaries/sources here. I can test it on ATI.

I think I did that in my first post. You only need GLUT installed. You can change the swizzling in the literal shader source.
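On Linux it should build with something along the lines of g++ test.cpp -o test -lglut -lGL (the exact library names are my assumption and may vary by distribution).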

  • Chris

Verified: works correctly on ATI Catalyst 10.7 under Windows XP.

Confirmed: .yy behaves incorrectly here on NVIDIA 256.44 under Linux. The points should be in the upper right (y is positive), but they’re in the lower left (x is negative), which suggests .xx is being used.

It’s good to know I’m not crazy. I submitted a bug report to the linux-bugs address listed by nvidia-bug-report.sh.

  • Chris

Just FYI: if you don’t hear back soon, I’ve had the best success with bug reports filed through https://nvdeveloper.nvidia.com

Chris –

Your bug report has been brought to my attention. You didn’t mention it in your report, but I assume you are running a pre-Fermi GPU? I was able to reproduce the failure with your test case on my laptop (GeForce 9500M), but had no problems on my Fermi-based Quadro board.

I root-caused it to a code-generation bug triggered by your specific corner case: the only live input component is a single component other than attribute 0’s X. You should be able to avoid the issue by ensuring that your shader needs at least two input components, or that the single component you use is the X component of attribute zero. If you read an extra component as a workaround, you’ll have to outsmart the compiler: make sure it can’t figure out that the input isn’t really needed. (For example, if you copy the extra component to a vertex shader output that isn’t read by the fragment shader, the GLSL compiler is smart enough to figure out that the extra input is irrelevant and will eliminate it.)
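To make that concrete, here is a minimal sketch of one possible workaround along those lines (the keep_x name and the 1e-6 weight are my own choices, not anything the driver requires): forward position.x to the fragment shader and fold it into the output with a negligible weight, so the compiler cannot prove the read dead.

#version 330
in vec4 position;
out float keep_x;  // carries position.x so the read stays live

void main() {
  gl_Position = vec4(position.yy, 0.0, 1.0);
  keep_x = position.x;
}

with a fragment shader that actually consumes it:

#version 330
in float keep_x;
out vec4 fragColor;

void main() {
  // The tiny weight makes keep_x contribute (imperceptibly) to the
  // output, so the read cannot be eliminated as dead code.
  fragColor = vec4(1.0) + 1e-6 * vec4(keep_x);
}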

I’ve passed the bug along to our compiler team to fix.

Excellent. For my application I only needed a one-component attribute, but I was playing around with multiple components when the problem arose. And yes, I have a GeForce GTX 260M. Thanks for getting this fixed!

  • Chris

No problem. I’m sorry for any difficulty this bug may have caused you.

I’m having a similar problem using EXT_transform_feedback to update VBOs.