Thread: OpenGL and depressing deprecation!

  1. #1
    Junior Member Newbie
    Join Date
    Jan 2010
    Posts
    11

    Question OpenGL and depressing deprecation!

    Hey!

    Been a while since I was here and I don't know where to post this, but I feel I must vent my thoughts about OpenGL somewhere.
    After completing several OpenGL projects, including one commercial one, with considerable success and ease, I'm becoming more and more worried about the future of OpenGL.

    Since I'm upgrading the graphics in a current project, I have been experimenting quite a lot with shadow volumes, GLSL and all that stuff, and I must say I found that mixing good old OpenGL with some shaders is by far the easiest and fastest way to get something drawn on the screen.

    I just started with geometry shaders and they work great with the old deprecated functions. I found that in my biggest project about 99.9% of the code uses deprecated functions.

    Since I'm in the middle of a general upgrade of a project, I'm getting a bit concerned about whether I should start all over.
    The next issue is that if I do start all over, I'm not sure whether I should stick to OpenGL.
    Don't get me wrong here, I love OpenGL. In its current form. I started with DirectX 5 about 100 years ago, and after struggling with it for a year it took me almost a week to accomplish the same in OpenGL with better results and speed.
    So in my opinion the reason for its widespread adoption is that it's so easy to get the hang of. The latest versions actually deprecate almost EVERYTHING that makes GL so easy to use.

    I know I will get a lot of arguments about all the "advanced" users who have the hang of all the maths behind 3D graphics, and I'm not a total idiot myself. But to me it sounds like 3D programming in OpenGL is moving further and further away from the independent users with the cool ideas and more towards the "professionals" who know how to code but not what to code.
    It could be a dangerous future if OpenGL gets too difficult to use. Especially when a lot of helper libraries and other resources are available for the "other" API.

    I really hope the old functions will stay as they are when it comes to graphics card drivers. It's still amazing to download drivers that are the size of medium-sized full games, and the deprecated OpenGL part can't account for many of those megabytes anyway.

    It still shocks me that display lists were deprecated. I will not be able to do without them. You can put up many arguments that VBOs are more flexible and all that. But since shaders became available it's almost a brand new start for display lists, since you can animate primitives in display lists using GLSL.
    And I have never seen a VBO that comes even close to executing faster than a display list.

    Finally, my actual question in the matter: WHEN should you actually stop using the deprecated functions?
    And don't give me the standard ASAP answer. I hope someone with deeper insight into the Khronos Group can give an educated guess anyway.

  2. #2
    Advanced Member Frequent Contributor arekkusu's Avatar
    Join Date
    Nov 2003
    Posts
    783
    Quote Originally Posted by AstroM View Post
    WHEN should you actually stop using the deprecated functions?
    How about: "when you want to port your code to a phone."

  3. #3
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,215
    Now.
    Yesterday.
    Tomorrow.
    Next week.
    Never.

    These are all valid answers and just illustrate the point: if the deprecated functions continue to work well for what you're doing, then just continue using them.

    Part of the drive towards deprecation was a feeling that the OpenGL API was getting too big, too unwieldy, too complex, with 17 different ways of specifying vertex formats and 53 different ways of actually drawing them. By focussing it down to one way, in theory there is only one fast path, driver writers get to target optimizations better, programmers know exactly what they need to do to hit that fast path, and consumers get a more predictable and consistent experience. We get to wave goodbye to silly situations like: "draw it this way and it's fast on NV but grinds to single-digit framerates on AMD, draw it the other way and it's fast on AMD but NV goes into software emulation unless these 3 specific states are enabled, which in turn cause Intel and AMD to explode unless those other 2 states are disabled but doing that causes AMD to ... f-ck it, I'll just write eight different rendering backends and be done with it".

    In practice deprecation is not mandatory. Your current code will continue to work in future. Where you might hit trouble is if you're using a really new feature that doesn't define any specific interaction with the older drawing commands, so check extension specs, check which OpenGL version they're written against, have some degree of familiarity with what drawing commands you can use with that version (you don't need to know them in detail) and make some informed decisions based on actual facts. Above all, test on different hardware configurations so that you don't fall into the "works on NV only" trap.

  4. #4
    Member Regular Contributor
    Join Date
    Jun 2013
    Posts
    495
    Quote Originally Posted by AstroM View Post
    Since I'm upgrading the graphics in a current project, I have been experimenting quite a lot with shadow volumes, GLSL and all that stuff, and I must say I found that mixing good old OpenGL with some shaders is by far the easiest and fastest way to get something drawn on the screen.
    Unfortunately, "hello world" programs aren't the priority. As software becomes more complex, the overhead of the "boilerplate" becomes proportionally less significant.

    Quote Originally Posted by AstroM View Post
    I really hope the old functions will stay as they are when it comes to graphics card drivers.
    While the compatibility profile won't be going away any time soon, eventually you're likely to be forced to make a choice between the legacy API and the new features. In particular, Apple have said that they won't be adding any new extensions to the compatibility profile. So if you want the newest features, you'll have to use the core profile.
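
    For illustration, explicitly requesting a core profile context looks roughly like this. This sketch assumes GLFW for context creation, which is just one option and not something this thread depends on:

    Code:
    /* Minimal sketch, assuming GLFW (an assumption, not part of the thread),
     * of explicitly requesting a core profile context. */
    #include <GLFW/glfw3.h>

    int main(void)
    {
        if (!glfwInit())
            return -1;

        /* Ask for a 3.2 core profile: the deprecated functions simply
         * don't exist in the resulting context. */
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); /* needed on OS X */

        GLFWwindow *win = glfwCreateWindow(640, 480, "core profile", NULL, NULL);
        if (!win) { glfwTerminate(); return -1; }
        glfwMakeContextCurrent(win);

        /* ... render loop using core-profile calls only ... */

        glfwDestroyWindow(win);
        glfwTerminate();
        return 0;
    }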

    Quote Originally Posted by AstroM View Post
    It's still amazing to download drivers that are the size of medium-sized full games, and the deprecated OpenGL part can't account for many of those megabytes anyway.
    You can't separate the code into the "legacy" and "new" parts. If you can use both the legacy features and the new features at the same time, then the code which implements the new features has to take account of all the legacy features. The result is that complexity grows exponentially.

    Quote Originally Posted by AstroM View Post
    It still shocks me that display lists were deprecated.
    It would seem that X11 (and specifically the network-transparency aspect) isn't as important a platform as it once was. Bear in mind that the original motivation for display lists was to avoid sending the same sequence of commands over the network (which, in those days, was limited to 10 Mbit/sec) every frame. Outside of that use-case, display lists aren't all that important.
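
    To make that original use-case concrete, here is roughly what the display-list pattern looks like in legacy GL: the command sequence is recorded once and replayed with a single call, which is exactly what mattered over a slow network link:

    Code:
    /* Legacy-GL sketch: record commands once, replay them each frame. */
    GLuint list = glGenLists(1);

    glNewList(list, GL_COMPILE);            /* record, don't execute */
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
    glEndList();

    /* per frame: a single call, and over remote X11 a single round trip */
    glCallList(list);

    /* cleanup */
    glDeleteLists(list, 1);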

    Quote Originally Posted by AstroM View Post
    WHEN should you actually stop using the deprecated functions?
    When you no longer need to support systems which lack OpenGL 3.x.

  5. #5
    Intern Contributor
    Join Date
    Mar 2014
    Posts
    50
    To be honest, 90% of the deprecated stuff belongs in the garbage bin.

    The only regrettable thing is that they threw away the immediate mode drawing commands without providing an efficient way to draw large amounts of small, dynamic, low-polygon batches. The vertex buffer upload times for these are a true performance killer when using glBuffer(Sub)Data, so this was probably the biggest roadblock for core profile adoption by software.

    Only some very recent features provide a viable alternative for this.

    Granted, I occasionally miss the convenience of the built-in matrices, but all the rest - and that includes the entire fixed function pipeline and especially the display lists - is heavy baggage that needs to be carried around by the drivers but offers very little use aside from supporting ancient hardware.

    The main problem I have with deprecation is that everything was removed all at once, instead of doing it gradually. The deprecation in 3.0 was a clear indicator of which functionality needed to go, but the time between deprecation and removal was just too short for removing everything in one move. No software can adapt to changes that pull the rug out from under its feet in one move. The result of this ill-thought-out strategy is the compatibility profile, which will probably haunt OpenGL for all eternity.

    I think it would have gone a lot more smoothly if 3.x had only deprecated, and finally removed, fixed function, leaving the rest for 4.x. By doing it all at the same time, the likelihood of software being upgraded is a lot lower than it could have been.

  6. #6
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,215
    The problem is that buffer objects have been in core since GL1.5. They're not anything new, yet we frequently see people making this complaint as if they were. There are plenty of intermediate porting steps available - one of them is called GL2.1 - and it's difficult to see how adding yet another intermediate porting step could have made the situation any different. Odds are we'd still be seeing the same complaints if that had been done.

    Right now you have three main options when it comes to older code:

    1. Don't port it at all. It will continue to work; some day you may encounter a case where deprecated functionality doesn't coexist nicely with new higher functionality, but if you're maintaining a legacy program you may usefully decide to just stay with GL2.1 or lower, and your program will just continue working as before.
    2. Bring it up to GL2.1, move it over to buffer objects only, move it over to shaders only, then make the jump to core contexts. This is what I consider the most sensible approach for those who don't want to take option 1; you get to make the transition to buffer objects only and shaders only at a pace that suits you, then, when and only when that's complete, you jump up and start accessing higher functionality.
    3. Do a full rewrite to bring it up to core contexts in one go. This is problematical for reasons outlined here and elsewhere.

    Thing is, a lot of complaints about deprecation and removal of deprecated functionality are written as if options 1 and 2 did not exist.

    Regarding "large amounts of small, dynamic low polygon batches", buffer objects in GL2.1 actually do have options that will let you handle them; the real performance problem comes with a naive implementation, where you take each glBegin/glEnd pair and treat it as a separate upload/separate draw. That's not a fault of the API, it's a fault of how you're using it, and arguably also a fault of the older specifications for not providing clarity on how things should be used in a performant manner. The solution of course is to batch your updates so that you're only making a single glBufferSubData call per-frame, then fire off draw calls. Yes, that means work and restructuring of code if you're porting from glBegin/glEnd, but that brings me back to the first point: none of this is anything new; we've had since GL1.5 to make the transition to buffer objects, so complaining about it now seems fairly silly.
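
    A rough sketch of what I mean (the helper names here are made up, and nothing beyond GL1.5-level buffer calls is used): accumulate the frame's dynamic vertices in client memory, do the upload once per frame, then draw:

    Code:
    /* Sketch of per-frame batching with plain GL1.5 buffer objects;
     * all helper names are made up for illustration. */
    typedef struct { float pos[3]; float uv[2]; } Vertex;

    #define MAX_VERTS 65536
    static Vertex  staging[MAX_VERTS];     /* CPU-side accumulation */
    static GLsizei staged = 0;
    static GLuint  vbo;                    /* created once with glGenBuffers */

    static void queue_triangle(const Vertex v[3])
    {
        /* replaces one glBegin/glEnd pair: no upload, no draw call here */
        for (int i = 0; i < 3 && staged < MAX_VERTS; ++i)
            staging[staged++] = v[i];
    }

    static void flush_frame(void)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        /* orphan the old storage, then do ONE upload for the whole frame */
        glBufferData(GL_ARRAY_BUFFER, sizeof(staging), NULL, GL_STREAM_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0, staged * sizeof(Vertex), staging);

        /* (vertex attrib pointers for this VBO are assumed to be set up already) */
        glDrawArrays(GL_TRIANGLES, 0, staged);
        staged = 0;
    }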

  7. #7
    Intern Contributor
    Join Date
    Mar 2014
    Posts
    50

    Quote Originally Posted by mhagain View Post
    [*]Bring it up to GL2.1, move it over to buffer objects only, move it over to shaders only, then make the jump to core contexts. This is what I consider the most sensible approach for those who don't want to take option 1; you get to make the transition to buffer objects only and shaders only at a pace that suits you, then, when and only when that's complete, you jump up and start accessing higher functionality.
    I see you completely fail to see the issue here, namely that this is a non-trivial transition that may require a MASSIVE investment of time, if at all possible. Let's make it clear: There are situations where the limitations of GL 3.x buffers will make this entirely impossible (because all of the existing buffer upload methods are too slow. Fun part: In Direct3D this was significantly less of an issue!) I have been working on a project that makes very liberal use of immediate mode drawing whenever something is available to be drawn. This project is also (quite unsurprisingly) 100% CPU-bottlenecked. But to ever make this work with buffers we have to do even more maintenance on the CPU without experiencing any benefit whatsoever from faster GPU access, just to optimize buffer upload times. In other words, up until very recently, porting was a no-go - just because of immediate mode. Getting the code away from the fixed function pipeline and the built-in matrix stack was a simple and straightforward matter by comparison - but with immediate mode we had to wait for almost 6 years until a viable replacement came along (with the emphasis being on 'viable'!)

    Apparently you don't see the ramifications of the handling of the core profile. It was utterly shortsighted and apparently only concerned with newly developed software - no thought was given to how the deprecation mechanism could be used to get old, existing software upgraded to newer features. The plain and simple fact is that the amount of ported software will be inversely proportional to the amount of work required. The more work is needed to port an existing piece of software, the less likely it is to be ported because if porting involves throwing away large pieces and starting from scratch the inevitable result will be your first option: Don't port it at all!

    It's just - do the OpenGL maintainers really want that? It should have been clear from the outset that using new features has to mean playing by the new rules exclusively. Deprecation should help steer developers to leave the old behind and start using the new - so that in the future the old stuff will be gone. That means, you have to take a gradual approach. If you advance too brutally you lose touch with your developers.

    And in GL 3.x land it is far easier to port old code away from the fixed function pipeline to shaders than it is to rewrite code to use buffers - in fact in many cases it's utterly impossible because the internal organization simply doesn't allow it.

    But that's simply not possible if such a careless approach is taken. If you want to let go of the old you have to make 200% certain that people will adopt.
    That means, you deprecate stuff that's more or less equivalently replaced by an existing modern feature - you DO NOT(!!!) deprecate (and don't even think about removing) stuff that forces a complete application rewrite! If you can't deprecate immediate mode without providing an equivalent replacement that's better tied into the 'modern' ways, you hold off on deprecation!
    So, at the time of 3.0, fixed function was more or less equivalently replaced by shaders, and the matrix stack is a special case, but since it's barely part of the hardware state it's not a big deal, so deprecating both was fine.
    On the other hand, if you wanted to replace code that's inherently tied to liberally invoking draw calls via immediate mode - how were you supposed to do it instead?!? The answer is, you can't!
    It might have messed up the core profile for a few more versions, but I'd guarantee you that if that had happened there would not be a compatibility profile now! I'd rather have taken such a temporary mess than the permanent one we have now.

    And to be clear about it: Yes, immediate mode needed to go, but it was removed at the wrong time! Now, with persistently mapped buffers I can suddenly port over all my old legacy code without any hassle (as in: no need to restructure the existing logic because there is no performance penalty anymore for just putting some data into a buffer and issuing a draw call) - but wait - there's a 'BUT'! Immediate mode has been gone from the core profile for years and there are already drivers out there which implement only the core profile for post-3.0 versions. Although this doesn't prevent porting, it definitely makes it harder, in case some co-workers stuck with such a driver also need to work on the project. So we are now in a situation that a properly designed deprecation mechanism was supposed to avoid (as in, you always have one version at your disposal where the old feature you want to replace and the new one you want to replace it with both exist and are fully working!)
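
    For reference, the 'no hassle' path I'm talking about looks roughly like this with GL_ARB_buffer_storage (sizes and names here are arbitrary):

    Code:
    /* Sketch of a persistently mapped buffer (GL 4.4 / GL_ARB_buffer_storage).
     * The buffer is created as immutable storage, mapped once, and the pointer
     * stays valid while draw calls source from it. */
    #define BUF_SIZE (4 * 1024 * 1024)

    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);

    GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glBufferStorage(GL_ARRAY_BUFFER, BUF_SIZE, NULL, flags);

    /* map once, keep the pointer around for the lifetime of the buffer */
    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, BUF_SIZE, flags);

    /* per frame: write vertex data through ptr, then issue draw calls that
     * read from the buffer; the application itself must make sure it isn't
     * overwriting data the GPU hasn't finished reading yet. */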


    Face it: 90% of all existing legacy code is structurally incompatible with the 3.x core profile! That means, 90% of all existing legacy code is never getting ported at all to 3.x core! The reason it's structurally incompatible is not the deprecation of the fixed function pipeline - it's also not the deprecation of the matrix stack - it's solely the deprecation of immediate mode rendering without having any performant means to simply replace it.
    And the inevitable result of this was pressure to compromise - and behold - that compromise has become the monster called 'compatibility mode'. Had deprecation been done properly in a way to drive developers toward updating their code instead of having a preservationist crutch hacked into the driver, things might look better now.

    It doesn't matter one bit that buffers had been core since 1.5. Before deprecation of immediate mode, developers chose between those two based on which approach was better suited to the problem at hand. Sometimes a buffer works better, but at other times it performs far worse. And it's these 'far worse' situations that were inadequately dealt with in GL 3.x core.

  8. #8
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    AstroM: My take is simple: no legacy GL in new code.

    If you're forced to maintain a legacy code base, usually due to economic, time and compatibility constraints, then by all means keep the legacy code well maintained and clean. As mhagain already stated: there are core GL 4.4 features you can already use even in legacy code, the most prominent being plain vertex buffer objects.

    Quote Originally Posted by Nikki_k
    I see you completely fail to see the issue here, namely that this is a non-trivial transition that may require a MASSIVE investment of time
    I see you completely fail to see that it will be a huge or possibly massive investment of time anyway. The question is, do you invest the time in small steps, porting feature by feature, or do you go ahead and rewrite everything? Going from legacy to modern core OpenGL takes time and care - no doubt. Still, mhagain already proposed the first option - and he's right to do so IMO.

    Quote Originally Posted by Nikki_k
    [..]because all of the existing buffer upload methods are too slow. Fun part: In Direct3D this was significantly less of an issue!
    Proof please. I'm not aware of any D3D10 feature that so massively kicks the GL's ass. Or am I simply not aware of something similar to persistently mapped buffers in D3D10? I thought the only thing giving you an advantage over GL is D3D10_MAP_WRITE_NO_OVERWRITE with a D3D10_MAP_WRITE_DISCARD at frame begin.

    Quote Originally Posted by Nikki_k
    [..]this work with buffers we have to do even more maintenance on the CPU without experiencing any benefit whatsoever from faster GPU access, just to optimize buffer upload times.[..]but with immediate mode we had to wait for almost 6 years until a viable replacement came along[..].
    Yeah yeah, you mentioned that already - several times - in another thread. It's high time you told us what frickin' exotic scenario you're talking about. Otherwise you'll simply stay in that magical position that no one here can disagree with because there aren't enough hard facts to do so. Cut the crap and get real.

    Quote Originally Posted by Nikki_k
    The more work is needed to port an existing piece of software, the less likely it is to be ported because if porting involves throwing away large pieces and starting from scratch the inevitable result will be your first option: Don't port it at all! [..] That means, you have to take a gradual approach. If you advance too brutally you lose touch with your developers.
    Microsoft did it. Maintaining backwards compatibility for over 20 years is ludicrous for something like OpenGL. D3D10/11 doesn't give a crap about the D3D9 API. The thing is, even if you only leverage the features that comply with the D3D9 feature subset still supported by D3D11, you still have to code against the D3D11 API. You can't even use the old D3D9 format descriptors. No way you're gonna have a D3D11 renderer and still write stuff similar to glEnableClientState(GL_FOG_COORD_ARRAY), to mention just one example that makes me want to jump out the window, while at the same time wanting to have kids with the GL 4.4 spec because of GL_ARB_buffer_storage.

    And what are we gonna do anyway? Suppose there had been a compatibility break and we now were forced to either stay with GL3.0 at max OR start rewriting our code bases to use GL3.1+ core features - what would have been the alternative? Transition to D3D and a complete rewrite of everything? Also, where I work, we're supporting Win/Linux/Mac - go to D3D and you have to write another renderer if you want to keep Linux and Mac around.

    Quote Originally Posted by Nikki_k
    If you want to let go of the old you have to make 200% certain that people will adopt.
    IMO, you have to make sure that people have to adopt - see D3D. That's where the ARB failed - by letting us use the new stuff and the old crap side-by-side. I seriously doubt many companies would have been pissed off enough to leave their GL renderers behind.

    And there is no substantial problem I know of that's solvable with GL2.1 but not with GL 3.1+ - if you have one, stop rambling and prove it with an example.

    Quote Originally Posted by Nikki_k
    That means, you deprecate stuff that's more or less equivalently replaced by an existing modern feature - you DO NOT(!!!) deprecate (and don't even think about removing) stuff that forces a complete application rewrite!
    Name one feature you're missing from GL 3.1+ that forced you to rewrite your entire application. I'm very, very curious. If your answer is gonna be what you repeatedly mentioned, i.e. immediate mode vertex attrib submission is king and everything else is not applicable or too slow (which is a hilarious observation in itself), I refer you to my earlier proposition.

    Quote Originally Posted by Nikki_k
    If you can't deprecate immediate mode without providing an equivalent replacement that's better tied into the 'modern' ways, you hold off on deprecation!
    Oh, my bad, there it is again ...

    Quote Originally Posted by Nikki_k
    if you wanted to replace code that's inherently tied to liberally invoking draw calls via immediate mode - how were you supposed to do it instead?!? The answer is, you can't!
    Liberally invoking draw calls? Since when is someone writing a real-world application processing large vertex counts interested in liberally invoking draw calls? Please define 'liberally', and please state why you can't batch multiple liberal draw calls into one and source the attribs from a buffer object. Otherwise, this is just as vague as everything else you've stated so far to defend immediate mode attrib submission.

    Quote Originally Posted by Nikki_k
    And to be clear about it: Yes, immediate mode needed to go, but it was removed at the wrong time!
    More than 15 years isn't enough? Seriously?

    Quote Originally Posted by Nikki_k
    as in: no need to restructure the existing logic because there is no performance penalty anymore for just putting some data into a buffer and issuing a draw call)
    See? That's what I'm talking about ... the code to do that, except for a few lines, is exactly the same. In fact, with persistent mapping, you have to do synchronization inside the draw loop yourself - a task that's non-trivial for non-trivial applications.

    Persistent mapping is an optimization and it doesn't make rewriting your code hundreds of times easier. You, however, continue to state this perverted notion that persistently mapped buffers are the only viable remedy for something that was previously only adequately solvable with immediate mode ... Have you ever had a look at the "Approaching Zero Driver Overhead" presentation from GDC 2014? Did you have a look at the code sample that transformed a non-persistent mapping into a persistent mapping? Your argument before was that you cannot replace immediate mode with anything other than persistently mapped buffers. If you're so sure about what you're saying, please explain the supposedly huge difference between an async mapping implementation and a persistent mapping implementation - because you didn't say that async mapping was too slow because of implicit synching inside the driver or something (and AFAIK that's only reportedly so in the case of NVIDIA drivers, which really seem to hate MAP_UNSYNCHRONIZED), you said you couldn't do it at all.
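
    And just so we're talking about the same thing, the synchronization I mean is roughly this - a minimal sketch with an arbitrary region count, building on a persistently mapped buffer like the one sketched earlier in the thread:

    Code:
    /* Sketch of manual synchronization for a persistently mapped buffer:
     * split it into three regions and fence each one so the CPU never
     * overwrites data the GPU is still reading. Names are arbitrary. */
    #define REGIONS     3
    #define REGION_SIZE (BUF_SIZE / REGIONS)   /* BUF_SIZE as in the earlier sketch */

    static GLsync fences[REGIONS];
    static int    region = 0;

    void begin_region(void)
    {
        /* wait until the GPU has finished with this third of the buffer */
        if (fences[region]) {
            glClientWaitSync(fences[region], GL_SYNC_FLUSH_COMMANDS_BIT,
                             1000000000ull /* 1 second timeout, in ns */);
            /* a robust version would check the return value for GL_TIMEOUT_EXPIRED */
            glDeleteSync(fences[region]);
            fences[region] = 0;
        }
        /* now it's safe to write into ptr + region * REGION_SIZE */
    }

    void end_region(void)
    {
        /* fence the draw calls that read this region, then move on */
        fences[region] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        region = (region + 1) % REGIONS;
    }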

    Quote Originally Posted by Nikki_k
    So we are now in a situation that a properly designed deprecation mechanism was supposed to avoid (as in, you always have one version at your disposal where the old feature you want to replace and the new one you want to replace it with both exist and are fully working!)
    Again, there is nothing of importance you can do with GL 2.1 that you can't do with core GL 3.1+ - except for quads maybe. You have everything you need at your disposal to go from GL2.1 to core GL 3.0 - and everything you write then is still usable even if you then move directly to a GL 4.4 core context.

    Even if it means a little more work, it's almost definitely solvable and never a worse solution. If I'm wrong, please correct me with concrete examples.

    Quote Originally Posted by Nikki_k
    Face it: 90% of all existing legacy code is structurally incompatible with the 3.x core profile! That means, 90% of all existing legacy code is never getting ported at all to 3.x core!
    Nice assumption. Got any proof? You are aware that you're talking about almost every application out there using OpenGL, right? Also, a rewrite is essentially also a port - your renderer doesn't cease to exist, nor do the problems you solved before completely vanish, just because you're bumping the GL version.

    Quote Originally Posted by Nikki_k
    [..]it's solely the deprecation of immediate mode rendering without having any performant means to simply replace it.
    You have not produced any statistics that support this statement, also no code or a high-level description of your problem at hand. You're just rambling on and on ...

    Quote Originally Posted by Nikki_k
    Before deprecation of immediate mode, developers chose between those two based on which approach was better suited to the problem at hand.
    Wrong again. Developers chose client side vertex arrays before VBOs because for amounts of data above a certain threshold, client side vertex arrays substantially improve transfer rates and substantially reduce draw call overhead. Plus, there is no way of rendering indexed geometry with immediate mode because you needed either an index array or, surprise, a buffer object holding indices.
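
    For completeness, indexed drawing sourced entirely from buffer objects is as short as this (the index data and count are placeholders):

    Code:
    /* Sketch: indices live in an element array buffer object, which is
     * something immediate mode never had an equivalent for.
     * `indices` and `index_count` are placeholders. */
    GLuint ibo;
    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, index_count * sizeof(GLushort),
                 indices, GL_STATIC_DRAW);

    /* vertices come from the bound GL_ARRAY_BUFFER and its attrib pointers,
     * indices from the bound element array buffer */
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, (void *)0);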

    Quote Originally Posted by Nikki_k
    Sometimes a buffer works better, but at other times it performs far worse.
    Again, pure speculation - and stating that a buffer object supposedly performs better than immediate mode sometimes ... that's really something to behold. Unless the driver is heavily optimized to batch the vertex attributes you submit and send the whole batch once you hit glEnd(), or even uses some more refined optimizations, there is no way immediate mode submission can be faster than sourcing directly from GPU memory - not in theory and not in practice.

    Quote Originally Posted by Nikki_k
    And it's these 'far worse' situations that were inadequately dealt with in GL 3.x core.
    Name three.
    Last edited by thokra; 06-05-2014 at 06:55 AM.

  9. #9
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,215
    Quote Originally Posted by Nikki_k View Post
    I see you completely fail to see the issue here, namely that this is a non-trivial transition that may require a MASSIVE investment of time, if at all possible. Let's make it clear: There are situations where the limitations of GL 3.x buffers will make this entirely impossible (because all of the existing buffer upload methods are too slow. Fun part: In Direct3D this was significantly less of an issue!).
    No.

    The point is that this isn't a GL3.x+ problem; this is a problem that goes all the way back to GL1.3 with the GL_ARB_vertex_buffer_object extension, so you've had more than ample time to get used to the idea of using buffer objects, and more than ample time to learn how to use them properly.

    Talking about it as though it were a GL3.x+ problem and as if it were something new and horrible isn't helping your position. How about you try doing something constructive like dealing with the problem instead?

    And for the record, I also know and have programmed in D3D8, 9, 10 and 11, so I'm well aware of the no-overwrite/discard pattern and of the differences between it and GL buffers.

  10. #10
    Member Regular Contributor malexander's Avatar
    Join Date
    Aug 2009
    Location
    Ontario
    Posts
    326
    As long as you're only targeting AMD and Nvidia cards on Windows or Linux, you can use a GL4.4 compatibility context. However, if you intend to ever support GL3/4 features on Windows with Intel graphics (which is becoming a larger segment) or on OSX, you'll need to use a core profile context. OSX only gives you the choice between an older GL2.1-based context (with a few rather old GL3 extensions thrown in) and a pure core profile context: GL3.2 (10.7), GL4.1 (10.9) or GL4.4 (10.10).

    Since our application (with TONS of GL1.x code in it, on the order of tens of thousands of lines of GL-specific code) runs on OSX, we had to undertake a conversion to modern GL from display lists & immediate mode (in the worst case), as supporting both a GL3.2 and a GL2.1 rendering backend proved to be more problematic than taking the core-profile plunge. We were also concerned that Apple might simply drop the GL2.1 profile at some point, as they're known to do with old APIs.

    Most of our GL code was for drawing 2D UI elements, but we also have an extensive 3D viewport (polys, NURBS, volumes, etc). We completely rewrote the 3D rendering code (which took quite an investment), but conversion of the 2D UI elements took significantly less time (2-3 months). The 3D conversion was done mainly to improve performance and appearance, and used modern core GL3.2+ features. It was the 2D UI conversion that was done specifically because of core-GL platform issues. Someone way back had wrapped all the immediate mode and basic GL commands in our own functions (which had debugging and assertion code, etc), so we kept those wrappers and replaced the underlying GL mechanism. It now streams the vertex values into VBOs, and only flushes them when the GL state changes significantly enough to warrant it (a texture change, for example). After that we looked at rendering bottlenecks, found the slow rendering paths that still went through our pseudo-immediate-mode code, and hand-converted those to modern GL.
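
    In rough pseudo-C, the wrapper ended up working something like this (the names here are invented; the real one obviously does more):

    Code:
    /* Sketch of a pseudo-immediate-mode wrapper: call sites keep their
     * immediate-mode shape, but vertices are streamed into a VBO and only
     * flushed when the GL state changes enough to matter. Names invented. */
    typedef struct { float pos[3]; float color[4]; float uv[2]; } ImmVertex;

    static ImmVertex pending[16384];
    static int       pending_count = 0;
    static GLenum    pending_mode  = GL_TRIANGLES;
    static GLuint    stream_vbo;           /* created once at startup */

    void imm_flush(void)
    {
        if (pending_count == 0) return;
        glBindBuffer(GL_ARRAY_BUFFER, stream_vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(pending), NULL, GL_STREAM_DRAW); /* orphan */
        glBufferSubData(GL_ARRAY_BUFFER, 0, pending_count * sizeof(ImmVertex), pending);
        glDrawArrays(pending_mode, 0, pending_count);
        pending_count = 0;
    }

    void imm_bind_texture(GLuint tex)      /* a "significant" state change */
    {
        imm_flush();                       /* draw what we have under the old state */
        glBindTexture(GL_TEXTURE_2D, tex);
    }

    void imm_vertex(const ImmVertex *v)    /* replaces a glVertex + glColor + glTexCoord group */
    {
        if (pending_count == (int)(sizeof(pending) / sizeof(pending[0])))
            imm_flush();
        pending[pending_count++] = *v;
    }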

    So it's not impossible, but certainly isn't trivial either. If you are thinking about Mac OSX as a potential platform and want to use modern GL features, you'll be faced with this problem. Otherwise, I wouldn't worry about the core profile at all, and instead gradually upgrade the parts of your application that will benefit from modern GL techniques (whether it be performance or new capabilities).
