OpenGL 3.3 core, GLSL 3.30, render fails

I’m struggling with the OpenGL 3.3 core profile. I’m trying to get a simple shader to render on screen, but all I get is a black screen. I’m using an Intel® HD Graphics 4000.

The vertex shader


#version 330
layout(location = 0) in vec2 pos;
out vec2 c;

void main(void) {
    gl_Position = vec4(pos, 0.0, 0.0);
    c = (pos + 1.0) * 0.5;
}

The fragment shader


#version 330
in vec2 c;
out vec4 color;

void main(void) {
    color = vec4(c, 1, 1);
}

The vertex positions are stored in vbo[0]; vbo[1] stores the indices:

    
GLFWwindow* window;

GLuint vao;
GLuint vbo[2];
GLuint program;

const GLfloat square[8] = {
    -1.0, -1.0,
    -1.0,  1.0,
     1.0, -1.0,
     1.0,  1.0
};

const GLfloat indices[4] = { 0, 1, 2, 3 };

Initialization of the OpenGL 3.3 core context. glGetString(GL_VERSION) returns “3.3 (Core Profile) Mesa 10.2.1” and glGetString(GL_SHADING_LANGUAGE_VERSION) returns “3.30”.


if( !glfwInit() ) {
    std::cerr << "Failed to initialize GLFW
";
    return -1;
}

glfwWindowHint(GLFW_SAMPLES, 0);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

// Open a window and create its OpenGL context
window = glfwCreateWindow( 1024, 768, "", 0, 0);
if( window == NULL ) {
    std::cerr << "Failed to open GLFW window.
";
    glfwTerminate();
    return -1;
}
glfwMakeContextCurrent(window);

// Initialize GL3W
if (gl3wInit()) {
    std::cerr << "Failed to initialize GL3W" << std::endl;
    return -1;
}

if (!gl3wIsSupported(3, 3)) {
    std::cerr << "OpenGL Version 3.0 not supported" << std::endl;
    return -1;
}

std::cout << "OpenGL " << glGetString(GL_VERSION) << ", GLSL " <<  glGetString(GL_SHADING_LANGUAGE_VERSION) << std::endl;


The shaders compile successfully; I’ve checked with glGetShaderiv.
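Roughly the kind of check I mean (a sketch; shader stands for whatever handle my loadShader_FILE helper created with glCreateShader):

GLint status = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE) {
    // glGetShaderInfoLog null-terminates, so a fixed buffer is enough here
    GLchar log[1024];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    std::cerr << "shader compile failed: " << log << std::endl;
}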


glGenBuffers(2, vbo);

glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 8*sizeof(GLfloat), square, GL_STATIC_DRAW);

glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, 4*sizeof(GLushort), indices, GL_STATIC_DRAW);

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

program = glCreateProgram();

GLuint vertex_shader, fragment_shader;

loadShader_FILE(vertex_shader, "shader/default.vsh", GL_VERTEX_SHADER);
glAttachShader(program, vertex_shader);

loadShader_FILE(fragment_shader, "shader/default.fsh", GL_FRAGMENT_SHADER);
glAttachShader(program, fragment_shader);

glLinkProgram(program);

glUseProgram(program);

glViewport(0, 0, 1024, 768);

glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glVertexAttribPointer(
        0,
        2,
        GL_FLOAT,
        GL_FALSE,
        sizeof(GLfloat)*2,
        (void*)0);
glEnableVertexAttribArray(0);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[1]);
glDrawElements(
        GL_TRIANGLE_STRIP,
        4,
        GL_UNSIGNED_SHORT,
        (void*)0);

glDisableVertexAttribArray(0);
glfwSwapBuffers(window);

Filtered output of glxinfo


$ glxinfo | grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile 
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.2.1
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 10.2.1
OpenGL shading language version string: 1.30
OpenGL context flags: (none)

[QUOTE=banana joe;1260090]I get only a black screen.


    gl_Position = vec4(pos, 0.0, 0.0);

[/QUOTE]

Setting clip-space gl_Position.w = 0.0 means the perspective divide produces NDC xyz = infinity for every vertex, so the whole quad is clipped away.
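For a fullscreen quad the position can be passed straight through with w = 1.0 (using your existing pos input, nothing else needs to change):

// w = 1.0: NDC xyz = clip xyz / 1.0, so the quad covers the viewport
gl_Position = vec4(pos, 0.0, 1.0);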

I’ve corrected the vertex shader, but I still get a black screen.

const GLfloat indices[4] = { 0, 1, 2, 3 };

You’ve told GL the indices are GLushorts (in the glBufferData size and in glDrawElements), but declared the array as floats. So your actual index data is [0000 0000 0000 3f80], which draws one degenerate triangle and then dereferences far outside your allocated VBO (i.e. a possible crash, or a GPU reset without a robust context).
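One possible fix, keeping your vbo[1] upload path and only correcting the element type:

// GLushort now matches both the sizeof() used for the upload
// and GL_UNSIGNED_SHORT at draw time
const GLushort indices[4] = { 0, 1, 2, 3 };

glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(GLushort), indices, GL_STATIC_DRAW);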

Thanks, it works now.