OpenGL 3.x Core Profile Questions

Hello all,

My system has been using OpenGL 3.3 methodology and has done away with all the deprecated functionality for some time now, so I thought it was high time I made it official by creating an OpenGL 3.2 forward-compatible core profile context. So far (on Windows) I had created a normal context with wglCreateContext (using the dummy window method to enable multisampling), and everything was working fine back then.

First Question: The core profile does away with all the deprecated functions. So one would assume that this would significantly improve an application’s performance. Is that a correct assumption?

Continuing, after I created the context I could see nothing on the screen, so I assumed I did something wrong with the context creation. But after a bit of testing it seems that the OpenGL 3.2 context must have been created, since many functions seem to be working (not returning illegal operation errors or anything).

So I planted glGetError() in various places and managed to spot the first illegal operation. It happens in glVertexAttribPointer(). The way I use it is like below:


    // for the 2D VBO
    glBindBuffer(GL_ARRAY_BUFFER,*(bufferID+Vertex2D_BUFFER_OBJECT));

    // will submit vertex coords on index 0
    glEnableVertexAttribArray(VERTEXCOORD_ATTRIB_INDEX);
    glVertexAttribPointer(VERTEXCOORD_ATTRIB_INDEX, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex2D), BUFFER_OFFSET(0));

In the code above, VERTEXCOORD_ATTRIB_INDEX is 0 and Vertex2D is composed of two floats. As I said, all of the functionality works fine if I don’t specifically ask for an OpenGL 3.2 context. If I do, the first GL error happens at the first glVertexAttribPointer call.
Second Question: Is there something wrong with the way I use glVertexAttribPointer? The above code appears in the drawing loop of some 2D elements.
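
For completeness, the kind of check I planted after suspect calls is essentially the following sketch (the helper name is just something of my own, nothing standard):

    #include <cstdio>

    // drains and prints any pending GL errors, tagged with where the check was planted
    static void checkGLError(const char *where)
    {
        GLenum err;
        while ((err = glGetError()) != GL_NO_ERROR)
            printf("GL error 0x%04X after %s\n", (unsigned)err, where);
    }

    // usage: checkGLError("glVertexAttribPointer");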

Checking the GL 3.3 specification and what it says about vertex arrays, I thought that maybe VERTEXCOORD_ATTRIB_INDEX should be something other than zero, but that did not work either.

Proceeding to another question: while looking around the net I saw various guides about creating an OpenGL 3.x context on Windows. There are some things in them which are not explained and which I cannot understand. Take this code, for example, from http://sites.google.com/site/opengltutorialsbyaks/introduction-to-opengl-3-2—tutorial-01


void CGLRenderer::Reshape(CDC *pDC, int w, int h)
{
      wglMakeCurrent(pDC->m_hDC, m_hrc);
      //---------------------------------
      glViewport (0, 0, (GLsizei) w, (GLsizei) h); 
      //---------------------------------
      wglMakeCurrent(NULL, NULL);
}

The function DrawScene() actually draws the scene.

void CGLRenderer::DrawScene(CDC *pDC)
{
      wglMakeCurrent(pDC->m_hDC, m_hrc);
      //--------------------------------
      glClear(GL_COLOR_BUFFER_BIT);

      glDrawArrays(GL_TRIANGLES, 0, 3);
      //--------------------------------
      glFlush (); 
      SwapBuffers(pDC->m_hDC);
      wglMakeCurrent(NULL, NULL);
}

Third Question: I have seen this in other guides dealing with OpenGL 3.x contexts too, and I cannot understand it. Why do they make the context current in every frame and then release it? I even saw a guide that was deleting and recreating the context in every drawing loop. What is the reason behind this behaviour?

My system has been using OpenGL 3.3 methodology and has done away with all the deprecated functionality for some time now, so I thought it was high time I made it official by creating an OpenGL 3.2 forward-compatible core profile context.

Please stop using “forward compatible” profiles. That was a temporary measure back in the 3.0 days, when nothing was actually removed yet. Now, it’s just “core” and “compatibility”.

So one would assume that this would significantly improve an application’s performance. Is that a correct assumption?

No.

glBindBuffer(GL_ARRAY_BUFFER,*(bufferID+Vertex2D_BUFFER_OBJECT));

There is only one way this code makes sense: if bufferID is not a single ID but an array of IDs generated with glGenBuffers, and Vertex2D_BUFFER_OBJECT is an index within the bounds of that array. In that case, the most reasonable way to write it would be:

glBindBuffer(GL_ARRAY_BUFFER, bufferID[Vertex2D_BUFFER_OBJECT]);

And if that’s not the case, then this code is confused.
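
To be explicit, I’m assuming a setup roughly like the following sketch (the buffer count is made up for the example):

// an array of buffer object names, filled in by glGenBuffers
GLuint bufferID[NUM_BUFFER_OBJECTS];     // NUM_BUFFER_OBJECTS: however many buffers you use
glGenBuffers(NUM_BUFFER_OBJECTS, bufferID);

// Vertex2D_BUFFER_OBJECT is then just an index into that array
glBindBuffer(GL_ARRAY_BUFFER, bufferID[Vertex2D_BUFFER_OBJECT]);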

So I planted glGetError() in various places and managed to spot the first illegal operation.

You did a lot of work to track down where the error happens. Yet you neglect to say what the error actually is. Is it an INVALID_OPERATION? Is it an INVALID_ENUM or INVALID_VALUE?

I have seen this in other guides dealing with OpenGL 3.x contexts too, and I cannot understand it. Why do they make the context current in every frame and then release it?

It’s safe practice, especially in a generic object like that. If you created two CGLRenderer objects, they would each have their own windows and their own contexts. If their code just assumed that their context was current, they’d break in the event of creating multiple objects.

If you’re writing an application where multiple windows are not a possibility (and you should ensure that they can’t happen), then you don’t need to do this.

I even saw a guide that was deleting and recreating the context in every drawing loop. What is the reason behind this behaviour?

Why do you assume that everything you see on the Internet is reasonable? If something smells fishy to you, then odds are it’s fishy.

I’d like to see a link to that “guide”.

Hi,
I think you should pass the ID returned by glGenBuffers to glBindBuffer, like this:


GLuint bufferID;
glGenBuffers(1, &bufferID);
glBindBuffer(GL_ARRAY_BUFFER, bufferID);

Why do you add an offset to bufferID?

Another thing I would recommend is to start out simple. Try these tutorials (http://arcsynthesis.org/gltut/). Once you understand the basics, go ahead with handling multiple DCs. When you have multiple device contexts, as in the case of multiple rendering views/windows, you need to make the right context current so that subsequent OpenGL calls are directed to it.

EDIT: just realized Alfonse had given a comprehensive answer seconds before mine.

Hmm, so what do you put in the wglCreateContextAttribsARB attribute WGL_CONTEXT_FLAGS_ARB? Nothing?

Why not? Can you elaborate? And if not, then what is the point of even requesting an OpenGL 3.x context?

Yes, that is the case. It is an array of IDs, and yes, Vertex2D_BUFFER_OBJECT is an index into that array. No problem there. I specified that all of my code works correctly when an OpenGL 3.x context is not specifically requested.

I assumed you would interpret “illegal operation” as an invalid operation. To be specific, I meant GL_INVALID_OPERATION (hex 0x0502).

Oh, I see what they meant in that guide then. Yes, it does make sense, thanks.

See the assumption that Alfonse made and the answer I gave him. It is an array of buffer object IDs. I know how to bind buffer objects and how to use them. As I said at the start, the problem appears only when I attempt to create an OpenGL 3.x context.

So I am trying to understand why I am getting a GL_INVALID_OPERATION for the above glVertexAttribPointer() call if I ask for an OpenGL 3.x context, and why everything works fine when I don’t.

Quick Edit: I forgot to add something important. If, during OpenGL 3.x context creation, I request a compatibility profile, then everything works fine. So I guess that something is wrong with the way I am using vertex attributes.

When using a core profile, you are required to have a VAO bound. This is definitely a gotcha when starting to work with core profile contexts.

Even doing something as simple as the following when initializing a new GL context is sufficient:

GLuint vao = 0;
glGenVertexArrays(1, &vao);   // create a single vertex array object
glBindVertexArray(vao);       // and leave it bound for the lifetime of the context

-David

I knew it was something simple! Yes, my friend, you are indeed correct: in my code I was not using VAOs at all, and as you said, the 3.x core profile requires them. Everything works fine now. Thanks a lot.

I guess I have some reading ahead of me on how to properly use VAOs and what they have to add to the way I am already doing things.

SOLUTION
So, in case someone else has the same problem, let’s reiterate: if, after creating an OpenGL 3.2 core context, glVertexAttribPointer produces a glGetError() value of GL_INVALID_OPERATION, then chances are you are not using VAOs at all in your application.

The solution would be to start doing so. :)
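
For reference, the change in my case boiled down to something like this at initialization (the VAO name is just illustrative); with a VAO bound, the original attribute setup becomes legal again:

    // a core profile requires a vertex array object to be bound before
    // glVertexAttribPointer and friends are called
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    // the existing setup now works without raising GL_INVALID_OPERATION
    glBindBuffer(GL_ARRAY_BUFFER, bufferID[Vertex2D_BUFFER_OBJECT]);
    glEnableVertexAttribArray(VERTEXCOORD_ATTRIB_INDEX);
    glVertexAttribPointer(VERTEXCOORD_ATTRIB_INDEX, 2, GL_FLOAT, GL_FALSE,
                          sizeof(Vertex2D), BUFFER_OFFSET(0));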

Hmm, so what do you put in the wglCreateContextAttribsARB attribute WGL_CONTEXT_FLAGS_ARB? Nothing?

Who says you need CONTEXT_FLAGS? Put whatever you need there. If you’re making a debug context, set that. But if you’re not using a flag, don’t set it at all. Or if it’s easier for you, just set it to zero.

The most important settings are the CONTEXT_MAJOR_VERSION, CONTEXT_MINOR_VERSION, and CONTEXT_PROFILE_MASK values.
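
For example, a minimal attribute list for a 3.2 core context could look like the following sketch (hDC is whatever device context you are creating against, and error checking is omitted):

const int attribs[] =
{
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0   // zero-terminated; note: no WGL_CONTEXT_FLAGS_ARB entry at all
};

HGLRC hrc = wglCreateContextAttribsARB(hDC, NULL, attribs);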

Why not? Can you elaborate? And if not, then what is the point of even requesting an OpenGL 3.x context?

It doesn’t matter why or why not; it has little if any effect on performance. Some have reported slightly slower execution, others on different hardware or driver versions have reported slightly faster execution.

The point of asking for a core context is to get a core context. To have the API prevent you from being able to access things that may not be in your best interests.

Who says you need CONTEXT_FLAGS? Put whatever you need there. If you’re making a debug context, set that. But if you’re not using a flag, don’t set it at all. Or if it’s easier for you, just set it to zero.

Thanks. From this post and your original reply to the topic I realized you don’t actually need it.

It doesn’t matter why or why not; it has little if any effect on performance. Some have reported slightly slower execution, others on different hardware or driver versions have reported slightly faster execution.

Hmm… that’s kind of disheartening. I had assumed that you would get better performance because of the things that were deprecated and removed from the core profile, such as built-in shader variables like gl_NormalMatrix, etc.

Taking gl_NormalMatrix as an example: it has to be calculated somewhere, and it is a costly calculation. So if that is no longer happening under the hood, then I assumed it would result in a performance increase.

Of course you do have to calculate it yourself, but there are also occasions where you might not need to do so, hence you could gain some performance there… or so I thought.
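
For example, what I compute on the CPU side now is roughly the following sketch (I’m using GLM purely for illustration; modelView and normalMatrixLoc are my own variables):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_inverse.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // what gl_NormalMatrix used to provide: the inverse-transpose of the
    // upper-left 3x3 of the modelview matrix
    glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(modelView));
    glUniformMatrix3fv(normalMatrixLoc, 1, GL_FALSE, glm::value_ptr(normalMatrix));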

Taking gl_NormalMatrix as an example: it has to be calculated somewhere, and it is a costly calculation. So if that is no longer happening under the hood, then I assumed it would result in a performance increase.

It would only be calculated if you were using it. If you’re not using it, then it won’t be generated.

The fastest code is the code that never gets called.

The tests in question were on core GL code that would be run in a compatibility or core context. So it’s not a comparison of “fixed-function vs shaders” or something like that. There weren’t changes to use non-core functionality; they were tests based solely on whether the profile has an impact on application performance.

Some drivers implement the core profile by adding extra checks on top of the compatibility profile. That is why, on such implementations, core can be (slightly) slower than compatibility.

Strictly speaking, forward-compatible still has a use. It removes all deprecated features; not everything that is deprecated is removed in core. I see both core and forward-compatible as suitable for development builds. For release it is a different story: you might not want either. Forward-compatible could even break your app in the future if something new is deprecated.

I see both core and forward-compatible as suitable for development builds.

This is also a good thing to do if you think you may port to OSX in the future. OSX 10.7’s GL3.2 context is core only, and while they also have a legacy GL2.1 driver with some 3.x extensions, it’s notably missing features such as uniform buffers, texture buffers, multisample textures, ARB geometry shaders, and any GLSL version > 1.20.

It removes all deprecated features; not everything that is deprecated is removed in core.

And what good is that? If a deprecated feature wasn’t removed from 3.1, that’s because there is no actual desire from the ARB to remove it. It’s not going away, so why bother pretending it’s not there?

This is also a good thing to do if you think you may port to OSX in the future.

How? As you point out, OSX 10.7 is core only. Forward compatible has nothing to do with that.

If you query GL_CONTEXT_FLAGS, you’ll see that OS X’s Core Profile contexts are in fact forward-compatible.
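
A quick way to check (sketch, no error handling):

GLint flags = 0;
glGetIntegerv(GL_CONTEXT_FLAGS, &flags);

if (flags & GL_CONTEXT_FLAG_FORWARD_COMPATIBLE_BIT)
{
    // the context was created as forward-compatible
}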

This makes a big difference if you look at the GLSL interaction:

The OpenGL API has a forward compatibility mode that will disallow use of deprecated features. If compiling in a mode where use of deprecated features is disallowed, their use causes compile time errors.

As mentioned earlier in this thread, many APIs were removed from the core profile. However, that doesn’t apply to GLSL: if you want to guarantee that you can’t use deprecated things like gl_FragColor or texture2D(), then you need a forward-compatible context.
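
To make that concrete, here is a sketch of the difference, with the shader sources written as C strings (whether the deprecated-style one compiles in a plain core context is driver-dependent, but in a forward-compatible context its use of deprecated features is a compile-time error per the spec language above):

// deprecated style: gl_FragColor and texture2D() are deprecated features
const char *deprecatedFS =
    "#version 120\n"
    "uniform sampler2D tex;\n"
    "varying vec2 uv;\n"
    "void main() { gl_FragColor = texture2D(tex, uv); }\n";

// core style: user-declared output variable and the overloaded texture()
const char *coreFS =
    "#version 150 core\n"
    "uniform sampler2D tex;\n"
    "in vec2 uv;\n"
    "out vec4 fragColor;\n"
    "void main() { fragColor = texture(tex, uv); }\n";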