Hello all,
My renderer has been following OpenGL 3.3 practice, with all the deprecated functionality gone, for some time now, so I thought it was high time I made it official by creating an OpenGL 3.2 forward-compatible core profile context. Until now (on Windows) I created a normal context with wglCreateContext (using the dummy-window method to enable multisampling), and everything worked fine.
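For reference, the 3.2 context itself is created through wglCreateContextAttribsARB; a minimal sketch of what I do (error handling and the dummy-window setup omitted, tokens and the function pointer typedef come from wglext.h):

//assumes a dummy context is already current so that wglGetProcAddress
//can resolve the extension entry point (declarations from <wglext.h>)
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    WGL_CONTEXT_FLAGS_ARB,         WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0 //terminates the attribute list
};

HGLRC hRC = wglCreateContextAttribsARB(hDC, NULL, attribs);
wglMakeCurrent(hDC, hRC);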
First Question: the core profile does away with all the deprecated functions, so one would assume this should noticeably improve an application's performance. Is that a correct assumption?
Continuing: after I created the context I could see nothing on the screen, so I assumed I had done something wrong during context creation. But after a bit of testing it seems that the OpenGL 3.2 context must have been created, since many functions appear to be working (no GL_INVALID_OPERATION or anything).
So I planted glGetError() in various places and managed to spot the first GL_INVALID_OPERATION. It happens in glVertexAttribPointer(), which I use like this:
//for the 2d vbo
glBindBuffer(GL_ARRAY_BUFFER,*(bufferID+Vertex2D_BUFFER_OBJECT));
//will submit vertex coords on index 0
glEnableVertexAttribArray(VERTEXCOORD_ATTRIB_INDEX);
glVertexAttribPointer(VERTEXCOORD_ATTRIB_INDEX, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex2D), BUFFER_OFFSET(0));
In the code above, VERTEXCOORD_ATTRIB_INDEX is 0 and Vertex2D is composed of two floats. As I said, all of this works fine if I don't specifically ask for an OpenGL 3.2 context. If I do, the first GL error happens at the first call to glVertexAttribPointer.
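For completeness, these are the definitions the snippet relies on (BUFFER_OFFSET is the usual byte-offset macro):

//the usual macro for a byte offset into the currently bound VBO
#define BUFFER_OFFSET(i) ((char*)NULL + (i))

//two floats, so the stride passed above is sizeof(Vertex2D) == 8 bytes
typedef struct {
    float x, y;
} Vertex2D;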
Second Question: is there something wrong with the way I use glVertexAttribPointer? The above code appears in the drawing loop of some 2D elements.
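(In case it matters, the glGetError() checks I mentioned are just a small helper I sprinkle around the code; CHECK_GL is my own name for it:)

//needs <stdio.h>; prints the raw error code plus a label for the call site
#define CHECK_GL(label) \
    do { \
        GLenum err = glGetError(); \
        if (err != GL_NO_ERROR) \
            printf("GL error 0x%04X at %s\n", err, (label)); \
    } while (0)

//usage:
glVertexAttribPointer(VERTEXCOORD_ATTRIB_INDEX, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex2D), BUFFER_OFFSET(0));
CHECK_GL("glVertexAttribPointer");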
Checking what the GL 3.3 specification says about vertex arrays, I thought that maybe VERTEXCOORD_ATTRIB_INDEX should be something other than zero, but changing it still did not work.
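(For context, this is how the attribute index gets tied to the shader in my code, bound before linking; "inCoord" here just stands in for whatever the vertex shader input is actually named:)

//the binding takes effect when the program is (re)linked
glBindAttribLocation(programID, VERTEXCOORD_ATTRIB_INDEX, "inCoord");
glLinkProgram(programID);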
Proceeding to another question: while checking around the net I saw various guides about creating an OpenGL 3.x context on Windows. There are some things in them which are not explained and which I cannot understand. Take this code, for example, from http://sites.google.com/site/opengltutorialsbyaks/introduction-to-opengl-3-2---tutorial-01:
void CGLRenderer::Reshape(CDC *pDC, int w, int h)
{
wglMakeCurrent(pDC->m_hDC, m_hrc);
//---------------------------------
glViewport (0, 0, (GLsizei) w, (GLsizei) h);
//---------------------------------
wglMakeCurrent(NULL, NULL);
}
The function DrawScene() actually draws the scene.
void CGLRenderer::DrawScene(CDC *pDC)
{
wglMakeCurrent(pDC->m_hDC, m_hrc);
//--------------------------------
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 3);
//--------------------------------
glFlush ();
SwapBuffers(pDC->m_hDC);
wglMakeCurrent(NULL, NULL);
}
Third Question: I saw this in other guides dealing with OpenGL 3.x contexts too, and I cannot understand it. Why do they make the context current in every frame and then release it again? I even saw a guide that deleted and recreated the context in every drawing loop. What is the reason behind this behaviour?
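For comparison, what I do (single window, single thread) is make the context current once right after creating it and never touch it again, roughly like this:

//once, at startup, right after context creation
wglMakeCurrent(hDC, hRC);

//then every frame, with no wglMakeCurrent calls at all
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 3);
SwapBuffers(hDC);

This has always worked for me, which is why the make/unmake pattern in these guides puzzles me.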