Converting to 3.3

I’m attempting to convert some code I’ve got to OpenGL 3.3, core profile, with little success.

Currently, I’m trying to render a 2D grid of triangles. Just that; nothing fancy, no texture-mapping, nothing. I’ve got a vertex shader that does nothing but write a vertex attribute array’s vertices to gl_Position, and a fragment shader that then goes on to write red to gl_FragColor.[1]

The vertex coordinates range from (0,0) to (1,1); rendered properly, they should cover the upper right quarter of the screen.

Problem, though… the screen coordinates, instead of ranging from (-1,-1) to (1,1) as I’d expect, range from approximately (-0.000000000000000000021,-0.000000000000000000021) to (0.000000000000000000021,0.000000000000000000021). (0,0) is still in the center, thankfully.

I’ve established this range by experiment - adding and subtracting very tiny values to the coordinates written to gl_Position in the vertex shader. The number probably has a meaning of some kind - epsilon times the glViewport parameters? I don’t know.

To be honest, I don’t know where to start debugging this. I’ve been perturbing the program for the last hour, with less than impressive results… the next step would be to try a minimal test-case, but I’m not sure what a minimal test-case /is/, for core 3.3.

So… what’s the smallest amount of code required to put a 2D triangle on-screen, with some trivial shaders involved? (Perspective correction is irrelevant; it’s all 2D).

And do my symptoms look familiar to anyone?

[1] Yes, gl_FragColor is deprecated. I don’t understand the replacement yet, and at least that part works as-is.
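For the minimal-test-case question: assuming a 3.3 core context, about the smallest thing that puts a coloured triangle on screen is below. This is a sketch using GLFW and glad for the window/context and function loading, since you don’t say what you’re using - substitute whatever your project already has.

    // Minimal GL 3.3 core triangle - a sketch, not drop-in code.
    // GLFW + glad are assumptions for window/context and function loading.
    #include <glad/glad.h>
    #include <GLFW/glfw3.h>
    #include <cstdio>

    static const char *vsrc =
      "#version 330 core\n"
      "in vec2 position;\n"
      "void main() { gl_Position = vec4(position, 0.0, 1.0); }\n";

    static const char *fsrc =
      "#version 330 core\n"
      "out vec4 fragColor;\n"
      "void main() { fragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

    static GLuint compile(GLenum type, const char *src) {
      GLuint s = glCreateShader(type);
      glShaderSource(s, 1, &src, 0);
      glCompileShader(s);
      GLint ok;
      glGetShaderiv(s, GL_COMPILE_STATUS, &ok);
      if (!ok) {
        char log[1024];
        glGetShaderInfoLog(s, sizeof(log), 0, log);
        std::printf("shader error: %s\n", log);
      }
      return s;
    }

    int main() {
      glfwInit();
      glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
      glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
      glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
      GLFWwindow *win = glfwCreateWindow(640, 480, "triangle", 0, 0);
      glfwMakeContextCurrent(win);
      gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);

      // Core profile: a VAO must be bound before attribute setup or drawing.
      GLuint vao;
      glGenVertexArrays(1, &vao);
      glBindVertexArray(vao);

      // One triangle, already in clip space; w stays 1.0 in the shader.
      GLfloat verts[] = { 0.0f, 0.0f,  1.0f, 0.0f,  0.0f, 1.0f };
      GLuint vbo;
      glGenBuffers(1, &vbo);
      glBindBuffer(GL_ARRAY_BUFFER, vbo);
      glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

      GLuint prog = glCreateProgram();
      glAttachShader(prog, compile(GL_VERTEX_SHADER, vsrc));
      glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fsrc));
      glBindFragDataLocation(prog, 0, "fragColor"); // must precede the link
      glLinkProgram(prog);
      glUseProgram(prog);

      GLint pos = glGetAttribLocation(prog, "position");
      glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, 0);
      glEnableVertexAttribArray(pos);

      while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glfwSwapBuffers(win);
        glfwPollEvents();
      }
      return 0;
    }

The VAO is the easy thing to forget when moving to core profile - glVertexAttribPointer and glDrawArrays both need one bound.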

frag shader:

#version 330
out vec4 fragColor;

void main(void)
{
  fragColor = vec4(1.0, 0.0, 0.0, 1.0); // or whatever you were writing to gl_FragColor
}

and call glBindFragDataLocation(progid, 0, "fragColor"); before linking the shader program - the binding only takes effect at link time.
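In GLSL 3.30 you can also skip the C-side call by giving the output an explicit location in the shader itself (explicit attribute/output locations are core in 3.3):

#version 330
layout(location = 0) out vec4 fragColor;

void main(void)
{
  fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

And as far as I know, with only a single user-defined output most implementations assign it colour number 0 anyway, so either form works.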

I suppose that covers the deprecated bit, yep. No ideas about the actual problem I was facing, then? It /is/ pretty bizarre, I guess…

Out of curiosity, what would happen if I bound multiple out vectors from the fragment shader, to different indexes? There’s only one framebuffer…
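(For what it’s worth: each out variable gets a colour number, and each colour number feeds a draw buffer. The default framebuffer only has one colour buffer, so anything bound past index 0 is simply discarded; the extra outputs only become useful once you render into a framebuffer object with several colour attachments, roughly like this sketch - fragNormal and the FBO are hypothetical:)

    // In the fragment shader: out vec4 fragColor; out vec4 fragNormal;
    glBindFragDataLocation(progid, 0, "fragColor");   // -> draw buffer 0
    glBindFragDataLocation(progid, 1, "fragNormal");  // -> draw buffer 1
    glLinkProgram(progid);

    // With an FBO bound that has two colour attachments, route the draw
    // buffers to them (not available on the default framebuffer):
    GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);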

> Problem, though… the screen coordinates, instead of ranging from (-1,-1) to (1,1) as I’d expect, range from approximately (-0.000000000000000000021,-0.000000000000000000021) to (0.000000000000000000021,0.000000000000000000021). (0,0) is still in the center, thankfully.

Sounds like your vertex position of 1 is being sent as a normalized short or integer, which would mean it’s actually arriving as 1/65535 or 1/(2^32 - 1).

Can you post the line that sets up the vertex attribute? Are you sure you’re linking your attributes from your code to the shader correctly?
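For illustration, it’s the type/normalized arguments in the attribute setup that decide how the buffer contents reach the shader (loc here is just a placeholder for whatever attribute location you’re using):

    // Floats, passed through unchanged - what a 0..1 grid presumably wants:
    glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, 0);

    // Unsigned shorts with normalized = GL_TRUE: a stored 1 reaches the
    // shader as 1/65535, which is the sort of shrinkage I'm describing:
    glVertexAttribPointer(loc, 2, GL_UNSIGNED_SHORT, GL_TRUE, 0, 0);

    // Integer data kept as integers (shader input must then be ivec2/uvec2):
    glVertexAttribIPointer(loc, 2, GL_UNSIGNED_SHORT, 0, 0);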

Sure. I’m not sure I understand, though; if the input coordinates were out of whack in that way, shouldn’t the 4000-something triangles get drawn as a single dot in the center of the screen, instead of getting blown up to several quintillion times their normal size?

To reiterate: Manually altering the vertex shader’s output gl_Position shows that the range of the screen, as measured by that coordinate system, is ~10^-20 times the size it should be.

Unless I actually have to set its range manually and there’s no default (an option I’ve considered, and failed to find documentation for), I don’t see how that can happen.

But okay. Code:


    // Construct tile points
    glGenBuffers(1, &tiles);
    glBindBuffer(GL_ARRAY_BUFFER, tiles);
    vector<GLfloat> points;
    points.reserve(w*h*2);
    for (int x=0; x < w; x++) {
      for (int y=0; y < h; y++) {
        points.push_back(double(x * init.font.dispx) / w); // Top left
        points.push_back(double(y * init.font.dispy) / h);
        points.push_back(double((x+1) * init.font.dispx) / w); // Top right
        points.push_back(double(y * init.font.dispy) / h);
        points.push_back(double(x * init.font.dispx) / w); // Bottom left
        points.push_back(double((y+1) * init.font.dispy) / h);

        points.push_back(double(x * init.font.dispx) / w); // Bottom left
        points.push_back(double((y+1) * init.font.dispy) / h);
        points.push_back(double((x+1) * init.font.dispx) / w); // Top right
        points.push_back(double(y * init.font.dispy) / h);
        points.push_back(double((x+1) * init.font.dispx) / w); // Bottom right
        points.push_back(double((y+1) * init.font.dispy) / h);
      }
    }
    glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(GLfloat), &points[0], GL_STATIC_DRAW);
    printGLError();

    // Load shaders
    shader.reload();
    // And bind the input points
    glVertexAttribPointer(shader.attrib("position"), 2, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(shader.attrib("position"));

shader.reload() does about what you’d expect, loading the vertex and fragment shaders into memory, compiling, glUseProgram’ing, etc. I’m quite certain that part works.

The drawing code is just glDrawArrays(GL_TRIANGLES, 0, gps.dimx * gps.dimy * 6); since I never actually undo any of this state.
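(For completeness, the compile/link/use it boils down to is the standard sequence - sketched below from memory rather than the actual code. The info-log checks are the part worth double-checking, since a silently failed compile or link would leave the attribute locations meaningless.)

    #include <iostream>
    #include <string>

    // Hypothetical helper - roughly what a reload-style function does.
    static GLuint compileOne(GLenum type, const std::string &src) {
      GLuint id = glCreateShader(type);
      const char *p = src.c_str();
      glShaderSource(id, 1, &p, 0);
      glCompileShader(id);
      GLint ok = GL_FALSE;
      glGetShaderiv(id, GL_COMPILE_STATUS, &ok);
      if (!ok) {
        char log[2048];
        glGetShaderInfoLog(id, sizeof(log), 0, log);
        std::cerr << "compile failed:\n" << log << std::endl;
      }
      return id;
    }

    static GLuint reloadProgram(const std::string &vsrc, const std::string &fsrc) {
      GLuint prog = glCreateProgram();
      glAttachShader(prog, compileOne(GL_VERTEX_SHADER, vsrc));
      glAttachShader(prog, compileOne(GL_FRAGMENT_SHADER, fsrc));
      glLinkProgram(prog);
      GLint ok = GL_FALSE;
      glGetProgramiv(prog, GL_LINK_STATUS, &ok);
      if (!ok) {
        char log[2048];
        glGetProgramInfoLog(prog, sizeof(log), 0, log);
        std::cerr << "link failed:\n" << log << std::endl;
      }
      glUseProgram(prog);
      return prog;
    }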

Vertex and fragment shader:


#version 140 // -*- Mode: C++ -*-

// Inputs from DF
in vec2 position;
in int gl_VertexID;

flat out int tile;
out vec4 gl_Position;

void main() {
  tile = gl_VertexID / 6;

  // You can change the location of the tile edges here, but normally you should just let them pass through.
  gl_Position = vec4(position.x-0.000000000000000000017, position.y-0.00000000000000000005, 0.0, 0.0);
  // gl_Position = vec4(0,0,0,0);
}


====== fragment shader =======

#version 140 // -*- mode: C++ -*-

flat in int tile;

void main() {
  // gl_FragColor = vec4(1,1,0,1);
  // gl_FragColor = gl_FragCoord/600;
  if (tile < 2)
    gl_FragColor = vec4(tile + 0.5, 0,0,1);
  else
    discard;
}

Actual output: http://brage.info/~svein/meh.jpg
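For reference, the same shader pair written against #version 330 core, without the redeclared built-ins, would look something like the sketch below. This isn’t necessarily the fix for the symptom above, just the boilerplate differences: gl_Position and gl_VertexID are built in and don’t need declaring, and the colour goes to a user-defined output instead of gl_FragColor.

#version 330 core

in vec2 position;
flat out int tile;

void main() {
  tile = gl_VertexID / 6;
  // w = 1.0 so the perspective divide leaves x and y untouched
  gl_Position = vec4(position, 0.0, 1.0);
}

====== fragment shader =======

#version 330 core

flat in int tile;
out vec4 fragColor; // colour number 0, via glBindFragDataLocation or layout(location = 0)

void main() {
  if (tile < 2)
    fragColor = vec4(float(tile) + 0.5, 0.0, 0.0, 1.0);
  else
    discard;
}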