Something's Gone Wrong!

After porting, or trying to port, an “old” OpenGL application to “modern” core OpenGL, I have to ask these questions.

Is there any good reason the good old immediate-mode drawing (glBegin/glEnd) is deprecated? Why couldn’t it be kept, along with the glMatrix* stuff?

Also, is there any problem with keeping stippled line drawing and line width?

Thanks.

The “good old” glBegin/glEnd code needs to send all data to the GPU each frame, even if it doesn’t change, and incurs huge function call overhead in doing so. This was recognised as a problem even in older versions of OpenGL. Remember that VBOs are not a new feature; they date to OpenGL 1.5, whereas the underlying vertex array API dates to OpenGL 1.1 - even older.
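To make the difference concrete, here’s a minimal sketch in C (assuming a context and loader that expose the standard GL 1.5 entry points): the immediate-mode version crosses the API boundary once per vertex, every frame, while the buffer-object version uploads once and reuses the data.

```c
#include <GL/gl.h>

/* Immediate mode: every vertex is re-sent to the driver every frame. */
void draw_immediate(const float *verts, int n)
{
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < n; ++i)
        glVertex3fv(&verts[i * 3]);   /* one function call per vertex */
    glEnd();
}

/* Buffer object (OpenGL 1.5): upload once, then draw many times. */
GLuint make_vbo(const float *verts, int n)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, n * 3 * sizeof(float), verts, GL_STATIC_DRAW);
    return vbo;
}

void draw_vbo(GLuint vbo, int n)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, n);  /* one call for the whole batch */
    glDisableClientState(GL_VERTEX_ARRAY);
}
```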

No, the “good old” glBegin/glEnd code wasn’t good, and even in the old days better alternatives were available.

Wide lines and stipple mode are unlikely to be hardware-accelerated.

The “good old” glBegin/glEnd code needs to send all data to the GPU each frame, even if it doesn’t change, and incurs huge function call overhead in doing so.

Maybe this is the case for video games, but not for CAD software, where updates happen only when required and the graphics load is mainly CPU-bound. CAD graphics is more dynamic and changes on the CPU side, which makes glBegin/glEnd pairs much more convenient. BTW, the port I’ve done runs faster using “legacy” OpenGL!

Wide lines and stipple mode are unlikely to be hardware-accelerated.

But it happens that they are always accelerated! For example, I can emulate stippled wide lines using the CPU and shaders, which is what an OpenGL implementation can do when direct HW acceleration is missing. So in either case it’s accelerated. Letting the OpenGL driver do the job gives a better chance of mapping directly onto HW functionality. I bet the professional high-end workstation graphics cards support line drawing in hardware.
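For what it’s worth, here is one common shape such an emulation can take (a sketch only, not what any particular driver actually does; the `dist` input is assumed to be computed in an earlier stage, which is the genuinely fiddly part): a fragment shader that discards fragments against a 16-bit pattern, mimicking glLineStipple’s factor/pattern pair.

```c
/* Hypothetical GLSL fragment shader, stored as a C string, that
 * approximates glLineStipple(factor, pattern). "dist" is assumed to be
 * a screen-space distance along the line, produced by earlier stages. */
static const char *stipple_fs =
    "#version 150\n"
    "in float dist;        // distance along the line, in pixels\n"
    "uniform int factor;   // stipple repeat factor\n"
    "uniform int pattern;  // 16-bit stipple pattern\n"
    "out vec4 color;\n"
    "void main() {\n"
    "    int bit = int(dist / float(factor)) & 15;\n"
    "    if ((pattern & (1 << bit)) == 0)\n"
    "        discard;      // this fragment falls in a gap\n"
    "    color = vec4(1.0);\n"
    "}\n";
```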

[QUOTE=gloptus;1290911]After porting, or trying to port, an “old” OpenGL application to “modern” core OpenGL, I have to ask these questions.

Is there any good reason the good old immediate-mode drawing (glBegin/glEnd) is deprecated? Why couldn’t it be kept, along with the glMatrix* stuff?

Also, is there any problem with keeping stippled line drawing and line width?[/QUOTE]

You’ve already gotten an answer as to why some of this was deprecated.

However, if it doesn’t make cost/benefit sense to rewrite a particular feature to eliminate the deprecated functionality, you can still use that functionality in recent OpenGL versions. Just create a “compatibility” context instead of a “core” context. Theoretical purists are offended by this, but people in industry who have to deliver products for a profit know that it doesn’t always pay to rewrite something old to use the latest in-vogue methods.

Maybe this is the case for video games, but not for CAD software, where updates happen only when required and the graphics load is mainly CPU-bound. CAD graphics is more dynamic and changes on the CPU side, which makes glBegin/glEnd pairs much more convenient. BTW, the port I’ve done runs faster using “legacy” OpenGL!

As you may already have done, you can implement your own glBegin/glEnd-style wrapper which feeds data into VBOs, allowing vertex data to be reused across multiple renders of that geometry.
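A minimal sketch of such a wrapper, in C (the im* names are hypothetical and error handling is omitted): vertices are accumulated on the CPU between Begin/End, then uploaded and drawn in two calls instead of one call per vertex.

```c
#include <GL/gl.h>

#define IM_MAX_VERTS 65536

static float  im_verts[IM_MAX_VERTS * 3];
static int    im_count;
static GLenum im_mode;
static GLuint im_vbo;  /* assumed created once at startup with glGenBuffers */

void imBegin(GLenum mode) { im_mode = mode; im_count = 0; }

void imVertex3f(float x, float y, float z)
{
    if (im_count >= IM_MAX_VERTS) return;  /* sketch: drop overflow */
    float *v = &im_verts[im_count++ * 3];
    v[0] = x; v[1] = y; v[2] = z;
}

void imEnd(void)
{
    glBindBuffer(GL_ARRAY_BUFFER, im_vbo);
    /* One upload and one draw replace a function call per vertex. If the
     * geometry hasn't changed since last frame, the upload can be skipped
     * and glDrawArrays re-issued on the existing buffer contents. */
    glBufferData(GL_ARRAY_BUFFER, im_count * 3 * sizeof(float),
                 im_verts, GL_STREAM_DRAW);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
    glDrawArrays(im_mode, 0, im_count);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```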

Another option (if you’re targeting NVidia GL drivers) is that you can often wrap your glBegin()/glEnd() batches inside GL display lists (another deprecated feature) and get tremendous speedups that rival using on-GPU VBOs with NVidia’s bindless vertex data functionality.
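For example (all standard, if deprecated, GL calls): compile the batch once, then replay it each frame with a single call, which gives the driver the chance to keep the data resident on the GPU.

```c
#include <GL/gl.h>

/* Compile a glBegin/glEnd batch into a display list once... */
GLuint compile_lines(const float *verts, int n)
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    glBegin(GL_LINES);
    for (int i = 0; i < n; ++i)
        glVertex3fv(&verts[i * 3]);
    glEnd();
    glEndList();
    return list;
}

/* ...then each frame, replay it with: glCallList(list); */
```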

That said, long term you of course want to lean toward using GL functionality that’s more widely available and which has been demonstrated to perform better with less CPU and GPU overhead. That gives you more frame time to add content and value to your product.

However, if it doesn’t make cost/benefit sense to rewrite a particular feature to eliminate the deprecated functionality, you can still use that functionality in recent OpenGL versions. Just create a “compatibility” context instead of a “core” context. Theoretical purists are offended by this, but people in industry who have to deliver products for a profit know that it doesn’t always pay to rewrite something old to use the latest in-vogue methods.

“Theoretical purists”, and people who want to use 3.x+ features on Mac OSX, where you don’t have a choice of a compatibility context. Also, it seems that a fair number of open-source Linux drivers only support core profile OpenGL. So there’s another set of implementations that aren’t available to you if you want to use the compatibility context.

Faster than what? Client-side vertex arrays or VBOs? VBOs which were updated in place, or VBOs with streaming updates?

I don’t doubt that it’s possible to do a half-baked job of porting glBegin/glEnd code which ends up being even slower. However, there will always be faster alternatives to glBegin/glEnd.
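For reference, the usual streaming pattern looks like this (a sketch; “orphaning” the buffer via a NULL glBufferData is one well-known idiom, not the only one):

```c
#include <GL/gl.h>

/* Streaming update via buffer "orphaning" (standard GL 1.5 calls). */
void stream_update(GLuint vbo, const void *data, GLsizeiptr size)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* Passing NULL orphans the old storage, so the driver can hand back
     * fresh memory instead of stalling on in-flight draws that read it. */
    glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, data);
}
```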

Good point. Thanks for the correction. Some platforms do make updating harder than necessary.

As well as performance, it’s also a case of robustness. Drivers have been moving towards doing everything via buffer objects and shaders since 2002, so that’s going to be the most tested code path and the one least likely to exhibit bugs, weirdness, conformance glitches, etc. Those, I would guess, are important items for CAD people too.

I highlighted 2002 for a reason, and it’s worth re-emphasising: none of this is new functionality. A couple of times per year, it seems, we get a CAD programmer coming on here, outraged over a direction that the API has been moving in for over 15 years. If you’ve had 15 years to be aware of it, but you’re only becoming aware of it now, you really don’t have grounds for complaint.


If you’ve had 15 years to be aware of it, but you’re only becoming aware of it now, you really don’t have grounds for complaint.

No, I do have grounds for complaint. And I have been aware of the problem since version 3.0. It seems that whoever pushed for this awkward change was never aware of CAD software, and was trying to bring OpenGL to the video game industry.

The concept of a “compatibility profile” for deprecated functionality proves that the whole deprecation idea was not well planned.

And yet, it was the CAD software industry that derailed the Longs Peak initiative, which would have seen a complete rewrite of OpenGL. If the videogame industry had their way, OpenGL 3.0 would have been completely incompatible with 2.1. The CAD software industry is wholly responsible for the deprecation/removal model and the core/compatibility divide.

It’s pretty remarkable to have someone break something and then complain about it being broken.

The OpenGL ARB, who include members from the CAD industry.

Sorry, but this is a PRATT. I’ve told you before, this comes up very regularly and it’s the very same discussion and very same outcome every time. You are bringing absolutely nothing new to the table, you are saying absolutely nothing that hasn’t already been said many times over, and you’re not going to change anything by dragging up the same old arguments again.


That’s to be expected, IMHO.

Video games are developed, published, then abandoned. Application software (not limited to CAD) is often maintained over decades.

Game developers aren’t going to care if a game simply doesn’t work on a newer system; they’ve already made their money. If a driver continues to support older OpenGL versions (while only providing the core profile for newer versions), that’s more than enough.

But for a code base which is continually upgraded, needing a substantial rewrite before you can use any new features is a real problem. Particularly if the software isn’t monolithic but has an entire ecosystem of add-ons depending upon a stable API.

[QUOTE=GClements;1290945]That’s to be expected, IMHO.

Video games are developed, published, then abandoned. Application software (not limited to CAD) is often maintained over decades.

Game developers aren’t going to care if a game simply doesn’t work on a newer system; they’ve already made their money. If a driver continues to support older OpenGL versions (while only providing the core profile for newer versions), that’s more than enough.

But for a code base which is continually upgraded, needing a substantial rewrite before you can use any new features is a real problem. Particularly if the software isn’t monolithic but has an entire ecosystem of add-ons depending upon a stable API.[/QUOTE]

This isn’t 100% true either; many games nowadays are dependent on third-party engines for which a stable API over an extended period of time is important. Look at the latest Unreal engine, for example; that meets the “ecosystem of add-ons” description.

Oh, I wasn’t saying that CAD houses were not supporting their own needs. I was simply refuting gloptus’s statement that the ARB was somehow unaware of CAD houses, when they were in fact intimately involved in the process.

I think the principal difference is two-fold.

First, high-end game/engine developers are willing and able to expend enormous effort in pursuit of a goal. Whether it is performance, some visual effect, platform portability, or whatever. If supporting a particular platform requires writing a new rendering backend… so be it; just hire on a couple of programmers to get it done and maintain it.

CAD houses are not as willing and/or able to expend such an effort.

Second, even the more long-term game engines have some baseline expectations of a system. The modern Unreal engine will not support computers from 2002. Even large engine developers have the luxury of saying “your system must be at least this advanced to ride this ride.”

I don’t think CAD houses are able to be so discriminating.

[QUOTE=mhagain;1290939]The OpenGL ARB, who include members from the CAD industry.

Sorry, but this is a PRATT. I’ve told you before, this comes up very regularly and it’s the very same discussion and very same outcome every time. You are bringing absolutely nothing new to the table, you are saying absolutely nothing that hasn’t already been said many times over, and you’re not going to change anything by dragging up the same old arguments again.[/QUOTE]

And you added nothing more than a typical response to a non-ending issue.

Oh, I wasn’t saying that CAD houses were not supporting their own needs. I was simply refuting gloptus’s statement that the ARB was somehow unaware of CAD houses, when they were in fact intimately involved in the process.

And how do you know that CAD developers were involved in the “process”? :doh:

It is only a “non-ending issue” because people such as yourself will never let it end. That’s why it’s a PRATT issue: you have said literally nothing which has not been debated ad nauseam over the years.

Let’s look at this from a different perspective: was your question asked in good faith? Your question was:

Is there any good reason the good old immediate-mode drawing (glBegin/glEnd) is deprecated? Why couldn’t it be kept, along with the glMatrix* stuff?

Also, is there any problem with keeping stippled line drawing and line width?

People provided answers. You rejected those answers as not being a “good reason”. You almost certainly knew those answers before you asked the question.

This doesn’t sound like you were seeking information in good faith. It seems instead like you “asked a question” just to create an argument, or otherwise to complain about the state of things not being as you want them: you wanted to ask, get the expected response, and then rail against it as the lone sane man on the forum.

If all you want to do is argue or complain, then nothing useful will be served by this thread. If you are genuinely seeking information in good faith, then what kind of answer would you consider a “good reason”?

You’re the one making the claim; you are therefore the one who needs to provide evidence for it. What proof do you have that CAD developers were ignored or otherwise not included?

An API either meets the needs of its users or it dies. The whole point of having competition and options is that if an API fails to do what you need in the way you need it, you can just choose to switch to something else.

OpenGL, in the aftermath of version 2.1, was a dying API.

It had serious design flaws inherited from older versions; the need to maintain full backwards compatibility with the very earliest version was preventing it from moving forward; and there were too many different ways of doing the same thing, without clarity as to which was preferred.

All I’m asking is whether there’s a technical reason for removing something that’s very convenient and fast, and by fast I mean faster than reinventing the wheel using the “modern” API, in terms of both performance and coding effort. No valid technical reason, other than the common nonsense “this is not how modern hardware works”, was ever given! :doh:

It wasn’t removed. You can still use it!

Just create a compatibility context and it’s still there.

Or, even on a Mac, create a 2.1 or lower context and it will still be there too.

There’s absolutely nothing stopping you from continuing to code with glBegin, glEnd, glMatrixMode, glLineWidth or whatever; they’re all still there, and you’re getting all bent out of shape over a problem that doesn’t even exist.
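As a concrete sketch (using GLFW purely as one example of a context-creation library; WGL/GLX/SDL and friends have equivalents), request a compatibility profile explicitly and all of the above keeps working. The caveat from earlier in the thread still applies: on Mac OSX only core or legacy 2.1 contexts exist.

```c
#include <GLFW/glfw3.h>

/* Assumes glfwInit() has already been called. */
GLFWwindow *create_compat_window(void)
{
    /* Request a 3.3 compatibility-profile context: glBegin/glEnd,
     * glMatrixMode, glLineWidth etc. all remain available in it. */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
    return glfwCreateWindow(800, 600, "legacy-friendly", NULL, NULL);
}
```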