Display lists in 3.1

As far as I know, display lists are deprecated or even removed in OpenGL 3.1.

Because of this, there is one thing that stops me from moving to 3.1: I use wglUseFontOutlines extensively, which returns display lists, and there is no other way to get extruded fonts.

Or is there? Does anyone know of an alternative way to get fonts as models and not bitmaps in 3.1?

Thanks.

There are many ways to do that. The easiest is to use FTGL. Big plus: you get one step closer to cross-platform compatibility.

Second alternative: you can read the outlines of the font files directly in your code and tessellate them. Not easy, but it’s been done before.

Third alternative: check the implementation of wglUseFontOutlines in the Wine source code (but beware of its license).

FTGL looks like the best alternative to me. Can you tell me what format it converts the font to, other than display lists?

Thanks.

Read the manual?

Oh, is it against the rules to ask WHY someone suggested something?

Thanks a lot, I will keep that in mind.

I have never used FTGL, so I have no idea what formats it can convert to. However, I skimmed both the manual and its source code a couple of years ago, and it looked both simple to use and versatile.
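
From what I remember of the manual, the extruded-font path looked roughly like this; I have never actually compiled it, so treat the class and method names (FTGLExtrdFont, FaceSize, Depth, Render) as approximate and check them against the FTGL headers:

```cpp
#include <FTGL/ftgl.h>

void drawExtrudedHello()
{
    // Font path and size/depth values are placeholders.
    FTGLExtrdFont font("C:\\Windows\\Fonts\\arial.ttf");
    if (!font.Error()) {
        font.FaceSize(72);       // tessellated glyph size
        font.Depth(20.0f);       // extrusion depth along Z
        font.Render("Hello");    // emits real geometry, not a bitmap
    }
}
```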

> As far as I know, display lists are deprecated or even removed in OpenGL 3.1.

This whole idea of deprecating and removing features from OpenGL is just a really stupid idea. This is a great example of why: other APIs depend on features that have been deprecated, and it just makes OpenGL harder to use, not easier.

Speaking about rendering fonts, another “deprecated” feature is glBitmap. But that’s ok, you can recode all your bitmap font rendering code to use glyphs loaded into textures where you draw a series of textured rectangles, one per glyph, to render your bitmap font characters. You’ll be replacing a few lines of simple code, perhaps calling glutBitmapCharacter, with hundreds of lines of textured glyph rendering code. You’ll end up greatly disturbing the OpenGL state machine and perhaps introducing bugs because of that. The driver still has all the code for glBitmap in it (so older apps work) so glBitmap could just work (as it will for older apps), but deprecation says those working driver paths aren’t allowed to be exercised so the driver is obliged to hide perfectly good features from you.
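
For a taste of what that replacement code looks like, here is a heavily stripped-down sketch of textured-glyph rendering; the GlyphInfo struct and the atlas layout are invented for illustration, and the real thing still needs glyph loading, atlas packing and state save/restore on top of it:

```cpp
#include <GL/gl.h>

// Invented helper struct: texture coords and size of one glyph inside a
// pre-built font-atlas texture (building that atlas is the bulk of the work).
struct GlyphInfo { float u0, v0, u1, v1, w, h; };

void drawText(GLuint atlasTex, const GlyphInfo glyphs[256],
              const unsigned char* text, float x, float y)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, atlasTex);
    glBegin(GL_QUADS);                              // one rectangle per glyph
    for (; *text; ++text) {
        const GlyphInfo& g = glyphs[*text];
        glTexCoord2f(g.u0, g.v0); glVertex2f(x,       y);
        glTexCoord2f(g.u1, g.v0); glVertex2f(x + g.w, y);
        glTexCoord2f(g.u1, g.v1); glVertex2f(x + g.w, y + g.h);
        glTexCoord2f(g.u0, g.v1); glVertex2f(x,       y + g.h);
        x += g.w;                                   // naive advance, no kerning
    }
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```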

Of course, when you go to render all those textured rectangles, you’ll be sad to find out that another “deprecated” feature is GL_QUADS. The result is that if you now want to draw lots of textured rectangles (say, to work around the lack of glBitmap), you’ll have to send 50% more (redundant) vertex indices to use GL_TRIANGLES instead. Of course all OpenGL implementations and GPUs have efficient support for GL_QUADS. Removing GL_QUADS is totally inane, but that didn’t stop the OpenGL deprecation zealots.
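
To be concrete, the 50% figure is just the index math: 4 indices per rectangle with GL_QUADS versus 6 with GL_TRIANGLES. A toy CPU-side sketch of building the two index buffers (the function names are made up for illustration):

```cpp
#include <vector>

// Index buffer for n rectangles drawn as GL_QUADS: 4 indices each.
std::vector<unsigned int> quadIndices(unsigned n) {
    std::vector<unsigned int> idx;
    for (unsigned i = 0; i < n; ++i) {
        unsigned b = i * 4;                           // 4 vertices per rectangle
        idx.insert(idx.end(), { b, b + 1, b + 2, b + 3 });
    }
    return idx;                                       // 4 * n indices
}

// The same rectangles as GL_TRIANGLES: two triangles, 6 indices each.
std::vector<unsigned int> triIndices(unsigned n) {
    std::vector<unsigned int> idx;
    for (unsigned i = 0; i < n; ++i) {
        unsigned b = i * 4;
        idx.insert(idx.end(), { b, b + 1, b + 2,  b, b + 2, b + 3 });
    }
    return idx;                                       // 6 * n indices: +50%
}
```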

My advice: Just continue to use display lists (and glBitmap and GL_QUADS), create your OpenGL context the way you always have (so you won’t get a context that deprecates features), and be happy.
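
In code, that advice amounts to something like this minimal Windows sketch (pixel-format setup and error handling omitted; hdc is assumed to be your window’s device context):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Plain legacy context: no attribute list, no forward-compatible flag,
// so nothing gets deprecated out from under you.
GLuint setupLegacyContextAndList(HDC hdc)
{
    HGLRC rc = wglCreateContext(hdc);
    wglMakeCurrent(hdc, rc);

    // Compile static geometry and state (or wglUseFontOutlines output) once.
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    //   ... glBegin/glEnd, glBitmap, material and state calls, etc. ...
    glEndList();
    return list;
}

// Per frame: a single call replays everything, and the driver can optimize it.
void drawFrame(GLuint list) { glCallList(list); }
```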

The sad thing is that display lists remain THE fastest way to send static geometry and state changes despite their “deprecation”.

On recent NVIDIA drivers, the driver is capable of running your display list execution on another thread. This means all the overhead from state changes can be done on your “other core” thereby giving valuable CPU cycles back to your application thread. Display lists are more valuable today than they’ve ever been–making their “deprecation” rather ironic.

  • For dual-core driver operation to work, you need a) more than one core in your system, and b) to not do an excessive number of glGet* queries and such. If the driver detects your API usage is unfriendly to the dual-core optimization, the optimization automatically disables itself.
  • Mark

Is any of what was said here viable for persons wanting their code to work on non-NVIDIA implementations?

NVIDIA is well known for having a very solid display list implementation. ATI is not.

If the current Steam survey is correct, ATI only has ~25% of the market. That is still far too much to ignore.

Well no it’s not. The idea is to streamline the API and actually help you get on the fast path, while still providing the full feature set as people make the move over.

Why?

This is just nonsense!

AFAIK all geometry is converted to Triangles at the HW level anyway.
So QUADS were an artificial construct in many ways.
They also pose various problems which Triangles don’t, when it comes to the “planarness” of geometry.

And you can actually do font glyphs using exactly the same number of vertices by using Triangle_Strips. That can be done on any version of OpenGL. If you want to get really funky, you can do fonts with a single point and use geometry shaders to construct the rest of the geometry, whilst also adding other shading effects. You simply need to RTFM!!!
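
For example, one glyph rectangle as a 4-vertex GL_TRIANGLE_STRIP, which works in every GL version (the buffer-object entry points assume a loader such as GLEW is initialized; vertex-attribute setup is omitted):

```cpp
// One glyph rectangle as a triangle strip: bottom-left, bottom-right,
// top-left, top-right. Same vertex count as one GL_QUADS rectangle,
// and no extra indices needed.
static const float glyphQuad[] = {
//    x     y     u     v
    0.0f, 0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 1.0f,
    1.0f, 1.0f, 1.0f, 1.0f,
};

void drawGlyphQuad(GLuint glyphVbo)   // VBO created and attribs bound elsewhere
{
    glBindBuffer(GL_ARRAY_BUFFER, glyphVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(glyphQuad), glyphQuad, GL_STATIC_DRAW);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```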

No they are not. Proper use of the correct Buffer Objects and so on is just as fast and more flexible than DLs.

Mark, I thought I recognized your name. If you really are the same as your profile suggests, I am stunned by the stuff you are saying here! It’s almost as if you had some kind of agenda. :wink: Because the stuff you are saying is very biased and misleading. I recently moved to NVidia HW from ATI, but have always enjoyed a good relationship with ATI. As an ambassador of your company you’ve really put me off having any dealings with your corporation, which is not what I would expect at all! Frankly, your comments seem politically motivated, which would be fine if they weren’t also (from my POV at least) deliberately misleading to people who come here seeking unbiased advice.

Overall, (IMO) you are much much better served if you start to move over to OpenGL3.x and learn the best ways to do things. Using the deprecated model is of course an option, but in the long haul you are going to fall foul of significant future API changes…

If you are really the Mark Kilgard, I have to say, I’m rather shocked by your suggestions. In one of your recent postings, you said that “The Beast has now 666 entry points”. Do you really believe that an API with 666 (and growing!) entry points is easier to maintain and extend than a more lightweight one?

nVidia and ATI are maybe the most important contributors to GL3.0+. If you seriously think that removing DLs and GL_QUADS was a bad decision, why didn’t you prevent it back then?

This is a great example of why: other APIs depend on features that have been deprecated, and it just makes OpenGL harder to use, not easier.

Existing (old) APIs can use the old OpenGL features. But you should not encourage people to use these old OpenGL features in their new, yet to be created APIs and applications.

Yes, I see your point: today, getting even a single triangle on the screen is very hard from a beginner’s point of view. But the same is true of DX10… Let external libraries provide the convenience functions that beginners need. (Btw, where is that “eco system” Khronos was talking about years ago??)

The sad thing is that display lists remain THE fastest way to send static geometry and state changes despite their “deprecation”.

Then, why don’t you just re-introduce them to GL3.0+ as new extension? But in a proper way, fitted to the needs of modern OpenGL applications.

If the driver detects your API usage is unfriendly to the dual-core optimization, the optimization automatically disables itself.

When in the recent past have automated driver “guesses” been any good? I always see them fail.
Buffer-Usage hints: failed.
Special-Value optimizations for uniforms: failed.
Threaded Optimization: failed.
automated Multi-GPU exploitation: failed.

Give the API user explicit control. Instead of trying to guess what the application intends to do, let the application tell the driver.

That post really ticked me off this morning!

The more I think about it someone should see if Mark knows about this account.
He is either smoking crack, or a misguided fanboy has hijacked / created that account. I am hopeful it is the latter!

Kilgard has posted here in the past, though quite infrequently. Check his posts from his profile; it is him.

Mark Kilgard does not work for Khronos or the OpenGL ARB. He works for NVIDIA. When he thinks of OpenGL, he thinks of the NVIDIA implementation of OpenGL.

No they are not. Proper use of the correct Buffer Objects and so on is just as fast and more flexible than DLs.

I would not be too sure of that. Display lists ought to be able to achieve the maximum possible performance on a platform, but doing so requires a great deal of optimization work. NVIDIA did that work, and their display lists can be faster than VBOs. ATI has not, and their display lists show no speed improvements.

I for one agree completely with Mark’s post. It’s a practical view that makes sense.

IMO, as long as the latest GPU capabilities and fastest access methods are available to developers, and there is clear guidance on what those fast paths are and how to use them, there’s no need to stab those with working codebases in the back with a mandated rewrite. That’s the Microsoft DX mindset. They’re a monopoly. They can do that. And gamedevs just have to suck it up and live with it. Not a big deal for them as they rewrite the engine every game anyway. Real commercial apps? Ha! Hardly. They have better things to do with company profit than re-invent the wheel, just because some bigshot supplier company said so.

So long as the older GL features don’t get in the way of the fastest access methods, who really cares if they’re still there. Vendors, because they still have to maintain them, right? Well NVidia’s already clearly stated many times that they’re fine with that, not to mention are doing great things lately to further improve OpenGL performance and have very stable GL drivers anyway. So let’s just move on and stop trying to look out for the vendors. They can do that for themselves.

long as the latest GPU capabilities and fastest access methods are available to developers,

See, if all the old cruft stays forever, every new extension has to be cross-checked against every old feature. And even if the application sticks to the ‘fast path’, the driver MUST assume that you COULD make use of old features at ANY time, so this adds extra code (which can contain bugs) that should not be needed in the first place.

and there is clear guidance on what those fast paths are and how to use them

Not true. Where is this clear guidance? Truth is, there are a lot of assumptions and myths on the internet about how to achieve maximum performance with OpenGL today, but when you try them out, it often doesn’t work that way. For instance, I recently tried to replace all the matrix-stack related code in my own engine with UBOs, in a “pure” GL3.1 manner. Until recently I could not match glLoadMatrix’s performance, because the UBO buffer updates slowed me down to a crawl. None of the “praised” methods for updating a buffer object with new data worked.
I will report on that in a few days. But this example again shows that there is no clearly documented way to achieve maximum performance for this (and many other) scenarios.
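
For context, this is the kind of replacement I am talking about; the names modelView and matrixUbo are mine, a GL 3.x function loader is assumed, and the buffer-update strategy in the second function is exactly the part where the documented guidance runs out:

```cpp
// Legacy path (pre-3.1 context): one call into the driver-managed matrix stack.
void uploadModelViewLegacy(const float* m)          // m = 16 column-major floats
{
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}

// "Pure" GL 3.1 path: the same 16 floats go into a uniform buffer object.
// Whether glBufferSubData, orphaning via glBufferData(..., NULL, ...), or
// glMapBufferRange is the fast way to do this is exactly what nobody documents.
void uploadModelViewUBO(const float* m, GLuint matrixUbo)
{
    glBindBuffer(GL_UNIFORM_BUFFER, matrixUbo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, 16 * sizeof(float), m);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, matrixUbo);   // uniform block binding 0
}
```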

there’s no need to stab those with working codebases in the back

Somehow this false prejudice stays in the minds of people :mad: The deprecation model will not break existing code!!!

That’s the Microsoft DX mindset. They’re a monopoly. They can do that.

Well, the success of DX clearly shows that this mindset actually works better than sticking to the past forever.
And people often forget that just because DXn comes out, DXn-1 will not just vanish. If you are unwilling to change anything, you can stick to your favourite DX version if you want. But in reality, people are mostly looking forward to the next DX version, eager to try out the new features it brings.

Real commercial apps? Ha! Hardly. They have better things to do with company profit than re-invent the wheel, just because some bigshot supplier company said so.

It’s sometimes astonishing how little the developers of these apps actually know about modern OpenGL, because they just don’t feel the pressure to keep up with the development of technology.
Seeing a video like this:
http://www.youtube.com/watch?v=Gm1l0Rjkm1E
in the year 2009 just makes me laugh. Wow… they finally managed to move from immediate-mode rendering to VBO-based rendering 6 years after the introduction of VBOs :confused:
Switching from IM to VBOs is certainly not reinventing the wheel, even if it requires redesigning the whole renderer. And obviously it has been rewarded with much better performance. And the customers will surely like it.

So long as the older GL features don’t get in the way of the fastest access methods, who really cares if they’re still there.

I am not a driver programmer at nVidia or ATI, but my gut feeling as a programmer tells me that having to support (i.e. emulate) every old feature bloats the code, and more code means more bugs. Additionally, having to assume that the API user might trigger any of the old obscure features at any time means having to do checks for it all around the code.

nVidia claimed some time ago that they spend around 5% of driver time on GL-object-name checks and lookups. That is why Longs Peak would have introduced GL-provided handles for objects. The necessary checks/hashmaps/whatever could just go away with it, leaving faster and more stable driver code.

GL3.0 tries to get close to this by forbidding you to provide your own names for textures and any other GL objects.

But, surprise, surprise: ARB_compatibility reintroduces name-generation-on-first-bind semantics. The driver has to assume that I might create my own names, and therefore all that 5% overhead stays in the driver: a clear example of old features getting in the way of faster methods.
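
To make the difference concrete (a trivial sketch; 12345 is an arbitrary made-up name):

```cpp
void nameSemanticsExample()
{
    // GL 3.x core intent: the implementation hands out the name, so it can be
    // a direct handle into the driver's own object table.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // Bind-to-create semantics (back via ARB_compatibility): any number the
    // application invents becomes a valid texture name on first bind, so the
    // driver must keep a name-to-object lookup structure around forever.
    glBindTexture(GL_TEXTURE_2D, 12345);   // arbitrary made-up name
}
```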

The deprecation model will not break existing code!!!

It is exactly here that the flaw of the deprecation model lies.

The purpose of the Longs Peak effort was to create a modern API not bound by the cruft of decades of extensions and patches. Ergo, it must break existing code. That is, existing code would not work on Longs Peak implementations. They would have to rely on 2.x implementations for older features.

The deprecation model does not allow this. The deprecation model forces hardware makers to continue to implement the old code. Until deprecated APIs become removed, they will be available. And the longer they remain available, hardware makers will have to keep them available.

And if what Kilgard suggests is true, NVIDIA appears to want to support these APIs even if they are removed. We will never be rid of these APIs.

Longs Peak would have forced the issue and made a clean break.

And people often forget that just because DXn comes out, DXn-1 will not just vanish.

This is due to Microsoft’s unfailing need for full backwards compatibility. Microsoft assumes the responsibility for making DXn-based hardware work with DXn-1 APIs. They separate what hardware makers have to implement from what users see.

No such agency or separation exists for OpenGL on Windows.

But, surprise, surprise: ARB_compatibility reintroduces name-generation-on-first-bind semantics.

ARB_compatibility is an extension. It is supported only if an implementation chooses to support it.

Your conclusion is also incorrect. It is simply not enough to forbid the user from providing their own names. GL objects are pointers, and pointers are 64 bits in size. A 64-bit GL implementation will still need a mapping table, and the 5% lookup overhead that comes with it.

Not necessarily, it can reserve a large contiguous memory chunk and use 32-bit offsets into it.
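
I.e. something along these lines (a toy sketch of the idea, not a claim about how any real driver is written):

```cpp
#include <cstdint>

// Driver-internal object record (contents irrelevant for the sketch).
struct TextureObject { /* ... */ };

// One big slab of object storage reserved up front; the "name" handed back
// to the application is just a 32-bit index into this slab.
static TextureObject g_slab[1u << 20];   // toy fixed-size pool
static uint32_t      g_nextName = 0;

uint32_t allocName()              { return g_nextName++; }
TextureObject* lookup(uint32_t n) { return &g_slab[n]; }   // base + offset, no hashing
```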

Well there are some undeniable facts here…

  1. Mark’s comments about QUADS are completely misleading and incorrect.

  2. OpenGL is a High Speed Graphics library, and whilst some funky driver specific DL implementation may be nice, as another poster pointed out: If they are so damn good then resubmit them updated for GL3.x.

The object of OpenGL is not to hand hold people who learned the API in the 60’s, but to move forward and maintain a High Speed Platform Independent Graphics library. Mark’s comments are just as counterproductive as the comments of those that want to keep crap in OpenGL to support their aging CAD/CAM packages…

  3. This is an OpenGL forum, and in this particular area people come for advice, not information that is skewed by partisan or corporate loyalty. Fair enough, point out your opinions, and your feelings on a subject, but DO NOT mislead people deliberately. Mark’s advice was not good advice for OpenGL as it stands today, or moving forward. It was the blinkered view of a manufacturer’s rep, with all that that carries with it.

Sure, I am really happy about some of Nvidia’s short-term decisions about the deprecation model, and overall, if I am honest, I have found their OpenGL implementations to be more solid than ATI’s. But that does not excuse them, or their reps, from trollish behavior if / when they do it. And all Mark has done is make me more nostalgic about the exchanges I have had with ATI’s engineers.

I really like this forum and the people here, and enjoy doing my small part to help out, and I really can’t express how out of the blue and how annoying Mark’s comments were to me. I expect better from someone who has been working in this industry for so long, which is why I wasn’t even sure it was him.

Regardless of Mark’s viewpoint, what really ticked me off was that the information was deliberately factually incorrect, and Mark knew that as he typed it. Period.

wrt DX, I think Microsoft learned their lesson with Windows. If OpenGL followed the Windows path forever, it might end up as messy as OpenGL 3.0 was potentially looking when it was first revealed to us.

Not necessarily, it can reserve a large contiguous memory chunk and use 32-bit offsets into it.

You have just described a hash table. This is probably not far from what they already use.

The principal difference between what they have now and what you describe is that they do not have to handle the pathological case of the user submitting an object name like MAX_INT. The performance difference between the two cases would probably be negligible.

Mark’s comments about QUADS are completely misleading and incorrect.

I checked the OpenGL 3.1 specification. GL_QUADS are specifically mentioned as being deprecated. The deprecation section is the only place where QUADS are mentioned.

His statement is not completely wrong. He was correct about QUADS being deprecated.

as another poster pointed out: If they are so damn good then resubmit them updated for GL3.x.

NVIDIA is one voice in the ARB. If NVIDIA were in charge, we would have had Longs Peak a year ago.

Mark’s advice was not good advice for OpenGL as it stands today, or moving forward.

For some of his advice, I would agree that this is true. Nobody should ever use glBitmap. It is an unoptimized path on most implementations.

But on the removal of QUADS, he is correct. This is a problem that has yet to be rectified.

Har Har.
If you are going to split hairs, at least make sure you have your semantics correct. :slight_smile:
At no point did I say his statement was “completely wrong”.
QUADS are deprecated. I never disputed that.

What he said may as well have been completely wrong though because it was totally unhelpful.

What I actually said was this:

Mark’s comments about QUADS are completely misleading and incorrect.

His comments are completely misleading and are incorrect advice as they lead the reader to assume that the deprecation of QUADS puts you in a situation where you are going to have to shift more data around to achieve things that were previously possible in OpenGL. That is not the case at all!

Just like his view of using glBitmap, which you pick him up on, using QUADS for geometry is open to abuse, and is not actually the fastest way to do it either.

It is conceptually the easiest for the lazy to understand and code, but not a good use of the HW.

But then we get back to my earlier point which is that this is a High Speed Graphics API, not a nursery API, or a plotting language!

Just like his view of using glBitmap, which you pick him up on, using QUADS for geometry is open to abuse, and is not actually the fastest way to do it either.

It is conceptually the easiest for the lazy to understand and code, but not a good use of the HW.

Please provide evidence, in the form of actual hardware tests and timings, that sending quadrilateral data via 2 GL_TRIANGLES (6 vertices) per quad is faster than sending quadrilateral data via GL_QUADS.

The fact that QUADS is deprecated is not evidence that it is slow.
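
To be clear about what I am asking for, something along these lines (a sketch only: buffer setup, disabling vsync, and warm-up are omitted, a GL function loader is assumed, and timeDraw/now are names I just made up):

```cpp
#include <chrono>

static double now()                         // seconds, high-resolution clock
{
    using clk = std::chrono::steady_clock;
    return std::chrono::duration<double>(clk::now().time_since_epoch()).count();
}

// Time 'frames' indexed draws of the same quad mesh with the given primitive
// mode; glFinish forces the GPU to complete the work before the clock stops.
double timeDraw(GLenum mode, GLuint ibo, GLsizei indexCount, int frames)
{
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glFinish();
    double t0 = now();
    for (int i = 0; i < frames; ++i)
        glDrawElements(mode, indexCount, GL_UNSIGNED_INT, 0);
    glFinish();
    return now() - t0;
}

// Usage (quadIbo/triIbo and the shared vertex buffer set up beforehand):
// double tQuads = timeDraw(GL_QUADS,     quadIbo, 4 * numQuads, 1000);
// double tTris  = timeDraw(GL_TRIANGLES, triIbo,  6 * numQuads, 1000);
```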