Choosing Core vs Compatibility contexts

Is there a good reason to choose to create core profile contexts over compatibility?

There are some conveniences in the compatibility profile that I don’t want to do without, such as drawing lines in immediate mode, simple single-color drawing, polygon stipple, and the built-in alpha test. My understanding is that these things are still hardware accelerated anyway on mainstream NVIDIA and ATI cards.

Also, I can still choose to write core-profile code in a compatibility context - just ignore all the “old” calls. So, what is the benefit of creating a core OpenGL context, other than the reduced API for programming it? Again, assuming I am targeting mainstream ATI/NVIDIA cards no more than, say, 2 or 3 years old.

You’ve hit the nail on the head. Generally target the core API, but if you find some compatibility (or extension) functionality useful (either because it was already written and would be a risk/waste of time/money to rewrite, or because you’re time-constrained on new development and legacy functionality would speed it up), then just allocate a compatibility profile context and use whichever APIs you want, core or not. That’s what I do.
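
For concreteness, here is roughly what that choice looks like at context-creation time. This sketch assumes GLFW 3 (nobody in this thread has named a windowing library); with raw WGL/GLX you would pass the equivalent WGL_ARB_create_context / GLX_ARB_create_context_profile attributes instead.

```cpp
// Minimal sketch, assuming GLFW 3 (an assumption, not something from this thread).
#include <GLFW/glfw3.h>

GLFWwindow* createGlWindow(bool wantCore)
{
    glfwInit();

    // Request a 3.2 context; the profile hint is where core vs compatibility is decided.
    // (On macOS a forward-compatible core context is also required: GLFW_OPENGL_FORWARD_COMPAT.)
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE,
                   wantCore ? GLFW_OPENGL_CORE_PROFILE
                            : GLFW_OPENGL_COMPAT_PROFILE);

    GLFWwindow* window = glfwCreateWindow(1280, 720, "core vs compat", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    return window;
}
```

Everything else stays the same; only the APIs you allow yourself to call change.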

The one thing I can think of that eventually nudges you away from some compatibility functionality a little bit is if you want to use GLSL 1.3+ features. You “might” always be able to use #extension in a GLSL 1.2 shader and pull them in, but if you actually declare that you’re using GLSL 1.3+ in shaders (e.g. #version 130) to get the new features, then you lose the ability to access built-in uniforms (e.g. modelview/projection/material/lighting/fogging unifoms, etc.). You can still use a compatibility profile, but this may tempt you to stop using some of the built-in legacy functionality on the C++ side and roll your own. But if GLSL 1.2 is enough for you for now, just use it.
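
To make that concrete, here’s a rough sketch of what “rolling your own” looks like once you declare #version 130 without the compatibility keyword: you pass your own matrix instead of relying on gl_ModelViewProjectionMatrix. The names (u_mvp, a_position) and the GLEW loader are just assumptions for the sake of the example.

```cpp
// Hedged sketch: assumes a GL function loader (GLEW here) and an already-linked program.
#include <GL/glew.h>

// With "#version 130" (no compatibility keyword) the built-in matrix uniforms are gone,
// so the shader declares its own:
const char* kVertexShader =
    "#version 130\n"
    "uniform mat4 u_mvp;          // stands in for gl_ModelViewProjectionMatrix\n"
    "in vec3 a_position;\n"
    "void main() { gl_Position = u_mvp * vec4(a_position, 1.0); }\n";

// C++ side: upload the matrix yourself instead of using the legacy matrix stack.
void setModelViewProjection(GLuint program, const float mvp[16])
{
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "u_mvp");
    glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);   // column-major, like the old stack
}
```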

Actually, the shader model is an interesting point. Personally I don’t really care about the built-in materials/lighting (that’s the part I want to write myself), but I might want to use the built-in vertex attributes such as gl_Normal, gl_MultiTexCoord0, etc. (I have legacy code that is written around them.) The spec seems to show that they are still available if I use “#version 420 compatibility”.

Unfortunately they don’t seem to provide a good way of mixing the built-in vertex attributes with custom ones. It seems to be all or nothing there.

I guess the question comes back to: why would I ever create a core profile context? :slight_smile:

Is there a good reason to choose to create core profile contexts over compatibility?

I don’t know; do you want to use OpenGL 3.0+ on a Mac? Because Apple isn’t supporting compatibility contexts. You have a choice: 2.1+extensions, or 3.2 core.

Also, I can still choose to write core-profile code in a compatibility context - just ignore all the “old” calls.

How do you know you’re writing core-profile code? Are you using an enum in a place where it’s not allowed in core? Did you forget to remove a GL_QUADS or something?

You can’t be sure of anything unless you make the driver check for you.
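
A rough illustration of “make the driver check for you”: create the context as core, then look at glGetError in debug builds. Removed enums such as GL_QUADS come back as GL_INVALID_ENUM, so stray legacy usage surfaces immediately. (On 4.3+/KHR_debug you could hook glDebugMessageCallback instead; this is just the lowest-common-denominator sketch, assuming a GLEW-style loader.)

```cpp
// Sketch of letting a core context be the referee (assumes a GL loader such as GLEW).
#include <GL/glew.h>
#include <cstdio>

void checkGl(const char* where)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::fprintf(stderr, "GL error 0x%04X at %s\n", err, where);
}

// In a core context:
//   glDrawArrays(GL_QUADS, 0, 4);  // GL_QUADS was removed from core
//   checkGl("widget pass");        // reports GL_INVALID_ENUM, catching the slip
```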

Unfortunately they don’t seem to provide a good way of mixing the built-in vertex attributes with custom ones. It seems to be all or nothing there.

How do you figure? You can combine them all you want; neither can affect the other. Unless, of course, you’re on NVIDIA, where they alias the built-in attributes with the custom ones, which goes against what the spec says.
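
For what it’s worth, mixing them in a compatibility-profile shader can look roughly like this. The custom attribute name (a_temperature) is made up for the example, a GL loader and a compatibility context are assumed, and as noted above NVIDIA’s aliasing of built-ins to generic slots can still bite.

```cpp
// Hedged sketch: built-in and generic attributes side by side under a compatibility profile.
#include <GL/glew.h>

const char* kVertexShader =
    "#version 420 compatibility\n"
    "in float a_temperature;                    // custom generic attribute\n"
    "out float v_temperature;\n"
    "void main() {\n"
    "    v_temperature = a_temperature;\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;  // built-ins still work\n"
    "}\n";

void drawMixed(GLuint program, const float* positions, const float* temperatures, GLsizei vertexCount)
{
    glUseProgram(program);

    // Built-in gl_Vertex is fed through the legacy client-array path...
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions);

    // ...while the custom attribute goes through the generic path.
    GLint loc = glGetAttribLocation(program, "a_temperature");
    glEnableVertexAttribArray((GLuint)loc);
    glVertexAttribPointer((GLuint)loc, 1, GL_FLOAT, GL_FALSE, 0, temperatures);

    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}
```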

I guess the question comes back to: why would I ever create a core profile context?

Because you want to make it possible for IHVs to do what Apple does and finally stop supporting compatibility. It won’t happen on its own; the more people use compatibility, the harder it will be to eventually get rid of it.

Aha! I didn’t know that Mac behaved that way. Thanks.

To this I just say: I use the spec as a guide, write core-like code for things like main geometry lighting and advanced effects, and use some simple immediate-mode rendering to draw certain types of widgets. The point of this is to keep the convenience of some aspects of the prior API without a loss of performance.

Yes, nvidia is the problem there.

Legacy support is the bane of existence, both for IHV driver writers AND for app developers. I can’t just upgrade my entire codebase to the core profile without much pain, time, and expense. Change hurts.

And as soon as you’re rendering in immediate mode, your core ambitions are gone. :slight_smile: IMHO, if you’re serious about writing core, striving for legacy convenience isn’t the right move. From an architectural standpoint core isn’t less convenient anyway: once you automate vertex buffer setup and invoke the right draw calls based on the buffer’s parameters, using core features boils down to a few C++ calls for almost anything you want to render.
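
As a rough sketch of that “few C++ calls” claim: once buffer setup is wrapped, the core path for drawing, say, a couple of lines is not much longer than the immediate-mode one. Function and attribute names here are illustrative, and a bound shader program with its position attribute at location 0 (plus, in a strict core context, a bound VAO) is assumed.

```cpp
// Hedged sketch of a thin core-profile wrapper (assumes a GL loader such as GLEW,
// a bound VAO, and a program whose position attribute lives at location 0).
#include <GL/glew.h>

GLuint makeBuffer(const float* data, GLsizeiptr bytes)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, data, GL_STATIC_DRAW);
    return vbo;
}

void drawLines(GLuint vbo, GLsizei vertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);                                // position attribute
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_LINES, 0, vertexCount);
}
```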

Furthermore, I agree with Alfonse that, as the ultimate compatibility barrier, the only way to ensure core compliance is to use a core context and check for errors - especially with existing code. Still, if you’re writing from scratch and stick to the spec, chances are you’ll be mostly compliant.

Immediate mode isn’t fast, but for just drawing a line or two it is fine.
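
For comparison, this is the kind of legacy convenience being weighed here - a sketch of drawing one line in immediate mode under a compatibility context, with made-up coordinates.

```cpp
// Compatibility-profile convenience: one line, no buffers or shaders.
// (Legal only in a compatibility context; coordinates are arbitrary.)
#include <GL/gl.h>

void drawMarkerLine()
{
    glBegin(GL_LINES);
        glColor3f(1.0f, 0.0f, 0.0f);
        glVertex2f(-0.5f, 0.0f);
        glVertex2f( 0.5f, 0.0f);
    glEnd();
}
```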

I think polygon stipple has never been hardware accelerated.

It is up to you whether you need to cut corners.

I recommend avoiding the old stuff if you are working on a multi-platform, multi-API engine.

In theory immediate mode is horrible, but in practice drivers seem to be well optimised to handle it. Unless you need to push a lot of vertices, it’s fine for GUI/2D stuff.