Thread: GL 4 Core Function Loading in Core 3.1+ Confusion

  1. #11
    Junior Member (Newbie)
    Join Date: Dec 2013
    Posts: 10
    Quote Originally Posted by Alfonse Reinheart
    And what about "pre-tessellation hardware" that cannot actually implement transform_feedback2? Your problem is that you believe that this functionality is purely an API change. It is not.
    This is basically my problem. I assumed that it was fully implementable in hardware without "newer" tessellation hardware. I thought it was some kind of cure for the problem of not having ARB_draw_indirect on OpenGL 3.x compatible hardware... OK, I see my problem now.

    Quote Originally Posted by Alfonse Reinheart
    I don't see why you're so up in arms about things like transform_feedback_2, when there are far more egregious pieces of functionality that could be implemented on GL 3.3 hardware but aren't core in 3.3. Like ARB_separate_shader_objects, most of ARB_enhanced_layouts, ARB_explicit_uniform_locations, ARB_texture_storage, ARB_buffer_storage, and so forth. None of them are specific to hardware versions.
    Because I implemented an algorithm that mainly uses transform_feedback_2 for computation, and I was under the belief that the algorithm could run on older hardware that barely supports OpenGL 3.3 but not OpenGL 4. I was wrong, just as I was wrong in my thoughts about OpenGL ES 3. I didn't look it up carefully enough.

    Quote Originally Posted by Alfonse Reinheart
    I wouldn't specify an OpenGL version. I would specify the specific hardware that I know will work. Like GeForce GT 2xx or better, Radeon HD 3xxx or better, Intel HD 4xxx or better. Not only is that more specific, it's easier for a user to know when they have the right hardware. They don't have to look up a version number; they just look up what their hardware is.
    But that requires the user to check whether her graphics card is newer than the one you specified, and it requires you, the developer, to check whether the driver actually implements that feature for all of those graphics cards. Maybe I am wrong about that too, but if I read "this graphics card is compatible with DirectX 11", can I assume, both as a consumer and as a developer, that this graphics card supports ALL DirectX 11 features? Could I equally say "this product requires core OpenGL 3.3", so that everyone can assume that ALL features listed in glcorearb.h, up to the section for that version, are implemented in the driver?
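    (To illustrate what I mean by "sections": glcorearb.h groups its typedefs and prototypes under per-version guards, roughly like this paraphrased excerpt:)

```cpp
// Paraphrased excerpt showing how glcorearb.h is sectioned per core version;
// glVertexAttribDivisor, for example, sits in the GL_VERSION_3_3 block.
#ifndef GL_VERSION_3_3
#define GL_VERSION_3_3 1
typedef void (APIENTRYP PFNGLVERTEXATTRIBDIVISORPROC) (GLuint index, GLuint divisor);
#ifdef GL_GLEXT_PROTOTYPES
GLAPI void APIENTRY glVertexAttribDivisor (GLuint index, GLuint divisor);
#endif
#endif /* GL_VERSION_3_3 */
```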

    Quote Originally Posted by Alfonse Reinheart
    It's important to note that "written against" and "minimal version" are not always the same version number. For example, transform_feedback2 is written against OpenGL 2.1, so all of its changes are relative to that specification. But the minimum version it says that it requires is 2.0.
    This is very confusing, and I assume that I cannot derive any association with core features from the raw spec files?!

    Quote Originally Posted by Alfonse Reinheart
    If you want to prevent such mistakes in the future, you should use an OpenGL Loading Library that provides version-specific loaders. Where you can say "only give me what core OpenGL 3.3 provides", and the headers you get will only have core functions from that version.
    I would like to cut some dependencies out of my engine, and GL loading was the first to go. Even if it means that I make mistakes. To be perfectly honest, just because we have Unity or UE doesn't mean the world has to stop coding OpenGL, C++, or anything else that is fun.
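    Roughly the shape of what I'm building (only a sketch; GetGLProcAddress is a hypothetical wrapper over wglGetProcAddress/glXGetProcAddress, and the PFN typedefs come from glcorearb.h):

```cpp
#include <GL/glcorearb.h>

// Platform-specific lookup (wgl/glX/egl), assumed to be defined elsewhere.
void* GetGLProcAddress(const char* name);

// Function pointers for the entry points the engine actually uses.
PFNGLGENVERTEXARRAYSPROC     glGenVertexArrays     = nullptr; // core since 3.0
PFNGLVERTEXATTRIBDIVISORPROC glVertexAttribDivisor = nullptr; // core since 3.3

// Load only what core 3.3 guarantees. Entry points that are core in 4.0,
// such as the ARB_transform_feedback2 functions, are deliberately not
// declared here, so a mistaken call fails at compile time rather than
// crashing at run time.
bool LoadCore33()
{
    glGenVertexArrays = reinterpret_cast<PFNGLGENVERTEXARRAYSPROC>(
        GetGLProcAddress("glGenVertexArrays"));
    glVertexAttribDivisor = reinterpret_cast<PFNGLVERTEXATTRIBDIVISORPROC>(
        GetGLProcAddress("glVertexAttribDivisor"));
    return glGenVertexArrays && glVertexAttribDivisor;
}
```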

  2. #12
    Senior Member (OpenGL Lord)
    Join Date: May 2009
    Posts: 5,932
    Quote Originally Posted by master_of_the_gl View Post
    This is basically my problem. I assumed that it was fully implementable in hardware without "newer" tessellation hardware. I thought it was some kind of cure for the problem of not having ARB_draw_indirect on OpenGL 3.x compatible hardware... OK, I see my problem now.
    Well, it very well may be. But you won't know from looking at extension lists or version numbers. You will only know by looking at the actual hardware itself.

    For example, this database or this tool that contains a database. If you want to know what hardware supports which features, those are good resources. Not the XML spec files.

    In this case, you can see that there's quite a lot of OpenGL 3.3 hardware that does support ARB_transform_feedback2. Is it universal? No idea. But it seems pretty substantial. I personally would not feel uncomfortable saying that Radeon HD 3xxx+, GeForce GT 2xx+, and Intel HD 4xxx+ would support it.
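    The runtime check itself is short; something like this sketch, assuming a 3.0+ core context so that glGetStringi is available and already loaded:

```cpp
#include <cstring>

// Walk the indexed extension list and look for a specific extension,
// independent of whatever GL version the driver reports.
bool HasExtension(const char* name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(
            glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}
```

    A 3.3 driver may report GL_ARB_transform_feedback2 here even though the functionality is only core in 4.0.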

    Quote Originally Posted by master_of_the_gl View Post
    But that requires the user to check whether her graphics card is newer than the one you specified, and it requires you, the developer, to check whether the driver actually implements that feature for all of those graphics cards. Maybe I am wrong about that too, but if I read "this graphics card is compatible with DirectX 11", can I assume, both as a consumer and as a developer, that this graphics card supports ALL DirectX 11 features? Could I equally say "this product requires core OpenGL 3.3", so that everyone can assume that ALL features listed in glcorearb.h, up to the section for that version, are implemented in the driver?
    I wouldn't. Remember: there are always driver bugs; even if it claims to support something, that doesn't mean it's correctly supported. So comprehensive testing is always important.
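    For ARB_transform_feedback2, even a crude startup smoke test (a sketch, not a substitute for real testing) can catch a badly broken driver path:

```cpp
// Create, bind, and delete a transform feedback object once at startup
// and verify that no GL error is raised along the way.
bool TransformFeedback2Works()
{
    while (glGetError() != GL_NO_ERROR) {} // flush any stale errors first
    GLuint tf = 0;
    glGenTransformFeedbacks(1, &tf);
    glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, tf);
    const bool ok = (glGetError() == GL_NO_ERROR);
    glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, 0);
    glDeleteTransformFeedbacks(1, &tf);
    return ok;
}
```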

    Quote Originally Posted by master_of_the_gl View Post
    This is very confusing, and I assume that I cannot derive any association with core features from the raw spec files?!
    The confusion comes from you trying to equate version numbers with the hardware that can implement them. If you stop doing that, the confusion goes away.

    EXT_transform_feedback is written against OpenGL 2.1. ARB_transform_feedback2 does not in any way rely on features from higher OpenGL versions. So... why would it be written against 3.3 or 4.0, when it could be written against 2.1 and thus used in tandem with EXT_transform_feedback?

    And the raw spec files make it clear that the functions/enums in ARB_transform_feedback2 are incorporated into core OpenGL 4.0, not 3.3 or 2.1.
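    Which means the runtime decision looks something like this sketch (HasExtension being the helper above):

```cpp
// The transform_feedback2 entry points are usable either because the
// context is 4.0+ (where the feature is core) or because the extension
// is advertised on older hardware.
bool CanUseTransformFeedback2()
{
    GLint major = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    if (major >= 4)
        return true; // core since OpenGL 4.0
    return HasExtension("GL_ARB_transform_feedback2");
}
```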

    Quote Originally Posted by master_of_the_gl View Post
    I would like to cut some dependencies out of my engine, and GL loading was the first to go. Even if it means that I make mistakes.
    Well, I hope the time you've spent on this issue was totally worth the whole... two files you gave up in external dependencies.

    Quote Originally Posted by master_of_the_gl View Post
    To be perfectly honest, just because we have Unity or UE doesn't mean the world has to stop coding OpenGL, C++, or anything else that is fun.
    I will never understand the need some people have to make things harder for themselves. I can understand writing a loader if you have specific features you want that no loader provides. Like not exposing functions/enums outside of certain versions, or using a C++11-style interface or exporting to Python or whatever. But if a tool exists, works, is simple to use, and does exactly what you need... I see no advantage to throwing it away.

    There's a huge chasm between "save myself some trouble in using OpenGL" and "give up total control of my application to an engine". Trying to equate them is just false equivalence.

  3. #13
    Junior Member (Newbie)
    Join Date: Dec 2013
    Posts: 10
    I think I see the problem with having a GPU vendor commit to an OpenGL version. Say a vendor makes the statement "GPU X supports OpenGL 4". Could we say that this GPU will also support later OpenGL 4.x versions? I don't know. But this could be a problem, because if that GPU does not implement some feature of, say, a hypothetical OpenGL 4.10, the game of "you need OpenGL 4 plus this list of extensions" would start all over again. I don't think you can sync a GPU architecture change with some OpenGL version, so it could be the case that one GPU supports OpenGL 4.0-4.3 and the next architecture supports OpenGL 4.0-4.10... I even thought there was some sort of synchronization...
    And so it is easier to name some minimum version and top it up with extensions, trusting that vendors have very probably implemented those extensions (ARB or not). So it totally makes sense to name the GPU and NOT the OpenGL version.

    Quote Originally Posted by Alfonse Reinheart
    There's a huge chasm between "save myself some trouble in using OpenGL" and "give up total control of my application to an engine". Trying to equate them is just false equivalence.
    Well, if you put it that way, you might be right. But you could also abstract things and ask "what saves me the most trouble?". I would say it is the path of least resistance. And by that measure you could skip straight past all that OpenGL/fundamental C++/APIs/massive learning/investing huge amounts of time/(and so many other things) and go straight to Unity or any other very high-level engine (with an editor and what not), because that will save you the most trouble. Right?

    And with your following statement, I assume you mean the abstract version of "what saves me the most trouble?".

    Quote Originally Posted by Alfonse Reinheart
    I will never understand the need some people have to make things harder for themselves.
    I would say, it's passion!

    Thank you very much for taking so much time answering my questions! I think I understand things better now and I'll be more aware of some details in loading the GL functions myself.
