
Thread: choosing Core vs Compatibility contexts


  1. #1
    Junior Member Newbie
    Join Date
    Apr 2008
    Posts
    3

    choosing Core vs Compatibility contexts

    Is there a good reason to choose to create core profile contexts over compatibility?

    There are some conveniences in compatibility that I don't want to do without, such as drawing lines in immediate mode, simple single-color drawing, polygon stipple, and the built-in alpha test. My understanding is that these things are still hardware accelerated anyway on mainstream NVIDIA and ATI cards.

    Also, I can still choose to write core-profile code in a compatibility context - just ignore all the "old" calls. So, what is the benefit of choosing to create a core OpenGL context, other than the reduced API to program against? Again, assuming I am targeting mainstream ATI/NVIDIA cards no more than, say, two or three years old.

  2. #2
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    3,220

    Re: choosing Core vs Compatibility contexts

    Quote Originally Posted by Radagast
    I can still choose to write core-profile code in a compatibility context - just ignore all the "old" calls. So, what is the benefit of choosing to create a core OpenGL context,
    You've hit the nail on the head. Generally target the core API, but if you find some compatibility (or extension) functionality useful (either because it's already written and rewriting it would be a risk and a waste of time and money, or because you're time-constrained on new development and the legacy functionality would speed it up), then just allocate a compatibility profile and use whatever APIs you want, core or not. That's what I do.

    The one thing I can think of that eventually nudges you away from some compatibility functionality is wanting to use GLSL 1.3+ features. You "might" always be able to pull them in with #extension in a GLSL 1.2 shader, but if you actually declare that you're using GLSL 1.3+ (e.g. #version 130) to get the new features, then you lose access to the built-in uniforms (modelview/projection matrices, material, lighting, fog, etc.). You can still use a compatibility profile, but this may tempt you to stop using some of the built-in legacy functionality on the C++ side and roll your own. But if GLSL 1.2 is enough for you for now, just use it.
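    To illustrate the difference, a minimal sketch (the names u_MVP and a_Position are placeholders of mine, not anything mandated by the spec):

        // GLSL 1.20 style: the legacy built-in uniforms and attributes are available.
        const char* vs120 = R"(
            #version 120
            void main() {
                gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
            }
        )";

        // GLSL 1.30 core style: the application supplies its own matrix uniform
        // (via glUniformMatrix4fv) and vertex attribute (via glVertexAttribPointer).
        const char* vs130 = R"(
            #version 130
            uniform mat4 u_MVP;
            in vec4 a_Position;
            void main() {
                gl_Position = u_MVP * a_Position;
            }
        )";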

  3. #3
    Junior Member Newbie
    Join Date
    Apr 2008
    Posts
    3

    Re: choosing Core vs Compatibility contexts

    Actually the shader model is an interesting point. Personally I don't really care about built-in materials/lighting (that's the part I want to write myself), but I might want to use the built-in vertex attributes such as gl_Normal, gl_MultiTexCoord0, etc. (I have legacy code written around them.) The spec seems to show that they are still available if I declare "#version 420 compatibility".
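    For instance, something along these lines appears to be legal, going by my reading of the spec (just a sketch):

        // Compatibility-profile vertex shader: the legacy built-in attributes
        // and matrices are still visible under "#version 420 compatibility".
        const char* vsCompat = R"(
            #version 420 compatibility
            void main() {
                gl_TexCoord[0] = gl_MultiTexCoord0;
                gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
            }
        )";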

    Unfortunately they don't seem to provide a good way of mixing the built-in vertex attributes with custom ones. It seems to be all or nothing there.

    I guess the question comes back to: why would I ever create a core profile context?

  4. #4
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948

    Re: choosing Core vs Compatibility contexts

    Is there a good reason to choose to create core profile contexts over compatibility?
    I don't know; do you want to use OpenGL 3.0+ on a Mac? Because Apple isn't supporting compatibility contexts. You have a choice: 2.1+extensions, or 3.2 core.

    Also, I can still choose to write core-profile code in a compatibility context - just ignore all the "old" calls.
    How do you know you're writing core-profile code? Are you using an enum in a place where it's not allowed in core? Did you forget to remove a GL_QUADS or something?

    You can't be sure of anything unless you make the driver check for you.
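    For example, a sketch using GLFW to request the context (GLFW is just my example here; any windowing layer that exposes the ARB_create_context profile bits works the same way):

        #include <GLFW/glfw3.h>
        #include <cstdio>

        int main() {
            glfwInit();
            // Ask for a 3.2 core profile so the driver rejects removed functionality.
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
            glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // needed on OS X
            GLFWwindow* win = glfwCreateWindow(640, 480, "core check", nullptr, nullptr);
            glfwMakeContextCurrent(win);

            // ... run your supposedly core-only rendering code here ...

            // In a core context a leftover legacy call (glBegin, GL_QUADS, ...)
            // now raises a GL error instead of silently working.
            if (GLenum err = glGetError())
                std::printf("GL error: 0x%x\n", err);

            glfwTerminate();
        }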

    Unfortunately they don't seem to provide a good way of mixing the built-in vertex attributes with custom ones. It seems to be all or nothing there.
    How do you figure? You can combine them all you want; neither can affect the other. Unless, of course, you're on NVIDIA, which aliases the built-ins with generic attributes, against what the spec says.
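    If that aliasing is the worry, one workaround (a sketch; the attribute name, the slot, and the aliasing map are from memory, so double-check) is to pin your generic attributes to locations that aren't aliased to any built-in you actually use:

        // Link a program while keeping the custom attribute away from the slots
        // NVIDIA historically aliases to built-ins (0 = gl_Vertex, 2 = gl_Normal,
        // 8..15 = gl_MultiTexCoord0..7, ...). Assumes a GL loader is already set up.
        GLuint linkWithTangent(GLuint vertexShader, GLuint fragmentShader) {
            GLuint prog = glCreateProgram();
            glAttachShader(prog, vertexShader);
            glAttachShader(prog, fragmentShader);
            glBindAttribLocation(prog, 6, "a_Tangent"); // must be called before linking
            glLinkProgram(prog);
            return prog;
        }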

    I guess the question comes back to: why would I ever create a core profile context?
    Because you want to make it possible for IHVs to do what Apple does and finally stop supporting compatibility. It won't happen on its own; the more people use compatibility, the harder it will be to eventually get rid of it.

  5. #5
    Junior Member Newbie
    Join Date
    Apr 2008
    Posts
    3

    Re: choosing Core vs Compatibility contexts

    Quote Originally Posted by Alfonse Reinheart
    I don't know; do you want to use OpenGL 3.0+ on a Mac? Because Apple isn't supporting compatibility contexts. You have a choice: 2.1+extensions, or 3.2 core.
    Aha! I didn't know that Mac behaved that way. Thanks.

    Quote Originally Posted by Alfonse Reinheart
    Also, I can still choose to write core-profile code in a compatibility context - just ignore all the "old" calls.
    How do you know you're writing core-profile code? Are you using an enum in a place where it's not allowed in core? Did you forget to remove a GL_QUADS or something?

    You can't be sure of anything unless you make the driver check for you.
    To this I just say: I use the spec as a guide, and write core-style code for the main geometry, lighting, and advanced effects, plus some simple immediate-mode rendering to draw certain types of widgets. The point is the convenience of some aspects of the prior API without a loss of performance.

    Quote Originally Posted by Alfonse Reinheart
    Unfortunately they don't seem to provide a good way of mixing the built-in vertex attributes with custom ones. It seems to be all or nothing there.
    How do you figure? You can combine them all you want; neither can affect the other. Unless, of course, you're on NVIDIA, which aliases the built-ins with generic attributes, against what the spec says.
    Yes, nvidia is the problem there.

    Quote Originally Posted by Alfonse Reinheart
    Because you want to make it possible for IHVs to do what Apple does and finally stop supporting compatibility. It won't happen on its own; the more people use compatibility, the harder it will be to eventually get rid of it.
    Legacy support is the bane of existence for both IHV driver writers and application developers. I can't just upgrade my whole codebase to the core profile without a lot of pain, time, and expense. Change hurts.

  6. #6
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128

    Re: choosing Core vs Compatibility contexts

    Quote Originally Posted by Radagast
    [..] plus some simple immediate-mode rendering to draw certain types of widgets. The point is the convenience of some aspects of the prior API without a loss of performance.
    And as soon as you're rendering in immediate mode, your core ambitions are gone. IMHO, if you're serious about writing core code, striving for legacy convenience isn't the right move. From an architectural standpoint core isn't less convenient anyway: once you automate vertex buffer setup and invoke the right draw call based on the buffer's parameters, using core features boils down to a few C++ calls for almost anything you want to render.
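    A rough sketch of the kind of wrapper I mean (the struct, its names, and the interleaved layout are my own choices, nothing standard):

        // Tiny core-profile mesh wrapper: once buffer setup is automated like this,
        // drawing anything comes down to a couple of calls.
        struct Mesh {
            GLuint vao = 0, vbo = 0;
            GLsizei count = 0;

            // Interleaved position (vec3) + normal (vec3).
            void upload(const float* data, GLsizei vertexCount) {
                count = vertexCount;
                glGenVertexArrays(1, &vao);
                glGenBuffers(1, &vbo);
                glBindVertexArray(vao);
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                glBufferData(GL_ARRAY_BUFFER, vertexCount * 6 * sizeof(float), data, GL_STATIC_DRAW);
                glEnableVertexAttribArray(0);
                glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
                glEnableVertexAttribArray(1);
                glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
            }

            void draw(GLenum mode = GL_TRIANGLES) const {
                glBindVertexArray(vao);
                glDrawArrays(mode, 0, count);
            }
        };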

    Furthermore, I agree with Alfonse that the only reliable way to ensure core compliance is to use a core context and check for errors, especially with existing code. Still, if you're writing from scratch and stick to the spec, chances are you'll be mostly compliant.

  7. #7
    Super Moderator OpenGL Guru
    Join Date
    Feb 2000
    Location
    Montreal, Canada
    Posts
    4,264

    Re: choosing Core vs Compatibility contexts

    Quote Originally Posted by Radagast
    There are some conveniences in compatibility that I don't want to do without, such as drawing lines in immediate mode, simple single-color drawing, polygon stipple, and the built-in alpha test. My understanding is that these things are still hardware accelerated anyway on mainstream NVIDIA and ATI cards.
    Immediate mode isn't fast, but for drawing just a line or two it's fine.
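    i.e. something on this scale (compatibility profile only, of course; drawWidgetLine is just a throwaway sketch):

        // A single widget line in immediate mode: fine at this scale,
        // but this entire path is removed from the core profile.
        void drawWidgetLine(float x0, float y0, float x1, float y1) {
            glColor3f(1.0f, 1.0f, 0.0f);   // simple single-color drawing
            glBegin(GL_LINES);
                glVertex2f(x0, y0);
                glVertex2f(x1, y1);
            glEnd();
        }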

    I think polygon stipple has never been hardware accelerated.

    It is up to you if you need to cut corners.

    I recommend avoiding the old stuff if you are working on a multi-platform, multi-API engine.
    ------------------------------
    Sig: http://glhlib.sourceforge.net
    an open source GLU replacement library. Much more modern than GLU.
    float matrix[16], inverse_matrix[16];
    glhLoadIdentityf2(matrix);
    glhTranslatef2(matrix, 0.0, 0.0, 5.0);
    glhRotateAboutXf2(matrix, angleInRadians);
    glhScalef2(matrix, 1.0, 1.0, -1.0);
    glhQuickInvertMatrixf2(matrix, inverse_matrix);
    glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
    glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);

  8. #8
    Member Regular Contributor
    Join Date
    Dec 2007
    Posts
    253

    Re: choosing Core vs Compatibility contexts

    In theory immediate mode is horrible, but in practice drivers seem to be well optimised to handle it. Unless you need to push a lot of vertices, it's fine for GUI/2D stuff.
