Part of the Khronos Group
OpenGL.org

Thread: Display lists in 3.1

  1. #1
    Junior Member Regular Contributor
    Join Date
    Feb 2007
    Location
    Hungary
    Posts
    173

    Display lists in 3.1

    As far as I know, display lists are deprecated or even removed in OpenGL 3.1.

    Because of this, there is one thing that stops me from moving to 3.1. I use wglUseFontOutlines extensively and this returns display lists, and there is no other way to get extruded fonts.

    Or is there? Does anyone know of an alternative way to get fonts as models and not bitmaps in 3.1?

    Thanks.

  2. #2
    Member Regular Contributor
    Join Date
    Oct 2006
    Posts
    353

    Re: Display lists in 3.1

    There are many ways to do that. The easiest is to use FTGL. Big plus: you get one step closer to cross-platform compatibility.

    Second alternative: you can read the outlines of the font files directly in your code and tessellate them. Not easy, but it's been done before.
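
    The core of that second alternative is turning a closed outline into triangles. A minimal sketch of the simplest case only — a convex contour fanned out from its first vertex. Real glyph outlines are concave and may contain holes, so they need a proper tessellator (GLU's gluTess* routines, for example); the function name here is made up for illustration.

    ```c
    #include <stdio.h>

    /* Illustrative sketch: triangulate one CONVEX contour of n vertices
     * into a fan of n-2 triangles anchored at vertex 0.  Concave contours
     * and holes need a real tessellator; this only shows the idea of
     * turning a closed outline into triangles. */
    static int fan_triangulate(int n, unsigned tris[][3]) {
        if (n < 3) return 0;                  /* degenerate contour */
        for (int i = 0; i < n - 2; ++i) {
            tris[i][0] = 0;                   /* fan anchor */
            tris[i][1] = (unsigned)(i + 1);
            tris[i][2] = (unsigned)(i + 2);
        }
        return n - 2;                         /* triangles emitted */
    }

    int main(void) {
        unsigned tris[8][3];
        int count = fan_triangulate(5, tris); /* pentagon -> 3 triangles */
        for (int i = 0; i < count; ++i)
            printf("(%u %u %u) ", tris[i][0], tris[i][1], tris[i][2]);
        printf("\n");                         /* prints: (0 1 2) (0 2 3) (0 3 4) */
        return 0;
    }
    ```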

    Third alternative: check the implementation of wglUseFontOutlines in the Wine source code (but beware of its license).
    [The Open Toolkit library: C# OpenGL 4.4, OpenGL ES 3.1, OpenAL 1.1 for Mono/.Net]

  3. #3
    Junior Member Regular Contributor
    Join Date
    Feb 2007
    Location
    Hungary
    Posts
    173

    Re: Display lists in 3.1

    FTGL looks like the best alternative to me. Can you tell me what format it converts the font to, other than display lists?

    Thanks.

  4. #4
    Senior Member OpenGL Guru
    Join Date
    Dec 2000
    Location
    Reutlingen, Germany
    Posts
    2,042

    Re: Display lists in 3.1

    Read the manual?
    GLIM - Immediate Mode Emulation for GL3

  5. #5
    Junior Member Regular Contributor
    Join Date
    Feb 2007
    Location
    Hungary
    Posts
    173

    Re: Display lists in 3.1

    Oh, is it against the rules to ask WHY someone suggested something?

    Thanks a lot, I will keep that in mind.

  6. #6
    Member Regular Contributor
    Join Date
    Oct 2006
    Posts
    353

    Re: Display lists in 3.1

    I have never used FTGL, so I have no idea what formats it can convert to. However, I skimmed both the manual and its source code a couple of years ago, and it looked both simple to use and versatile.
    [The Open Toolkit library: C# OpenGL 4.4, OpenGL ES 3.1, OpenAL 1.1 for Mono/.Net]

  7. #7
    Intern Contributor
    Join Date
    Nov 2002
    Location
    Austin, Texas
    Posts
    50

    Re: Display lists in 3.1

    > As far as I know, display lists are deprecated or even removed in OpenGL 3.1.

    This whole idea of deprecating and removing features from OpenGL is just a really stupid idea. This is a great example of why: other APIs depend on features that have been deprecated, and it just makes OpenGL harder to use, not easier.

    Speaking about rendering fonts, another "deprecated" feature is glBitmap. But that's ok, you can recode all your bitmap font rendering code to use glyphs loaded into textures, where you draw a series of textured rectangles, one per glyph, to render your bitmap font characters. You'll be replacing a few lines of simple code, perhaps calling glutBitmapCharacter, with hundreds of lines of textured glyph rendering code. You'll end up greatly disturbing the OpenGL state machine and perhaps introducing bugs because of that. The driver still has all the code for glBitmap in it (so older apps keep working), so glBitmap could just work, but deprecation says those working driver paths aren't allowed to be exercised, so the driver is obliged to hide perfectly good features from you.
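
    To make the "one rectangle per glyph" replacement concrete, here is a sketch of the per-glyph lookup you'd need, assuming a hypothetical 16x16 ASCII glyph atlas. The layout and names are illustrative, not from any particular library.

    ```c
    #include <stdio.h>

    /* Illustrative sketch: texture coordinates for one character in an
     * assumed 16x16 ASCII glyph atlas -- the lookup done per glyph when
     * replacing glBitmap with textured rectangles. */
    typedef struct { float u0, v0, u1, v1; } TexRect;

    static TexRect glyph_rect(unsigned char c) {
        const float cell = 1.0f / 16.0f;      /* one atlas cell in texture space */
        TexRect r;
        r.u0 = (float)(c % 16) * cell;        /* column */
        r.v0 = (float)(c / 16) * cell;        /* row    */
        r.u1 = r.u0 + cell;
        r.v1 = r.v0 + cell;
        return r;
    }

    int main(void) {
        TexRect r = glyph_rect('A');          /* 'A' = 65: column 1, row 4 */
        printf("%.4f %.4f %.4f %.4f\n", r.u0, r.v0, r.u1, r.v1);
        /* prints: 0.0625 0.2500 0.1250 0.3125 */
        return 0;
    }
    ```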

    Of course, when you go to render all those textured rectangles, you'll be sad to find out that another "deprecated" feature is GL_QUADS. The result is that if you now want to draw lots of textured rectangles (say, to work around the lack of glBitmap), you'll have to send 50% more (redundant) vertex indices to use GL_TRIANGLES instead. Of course, all OpenGL implementations and GPUs have efficient support for GL_QUADS. Removing GL_QUADS is totally inane, but that didn't stop the OpenGL deprecation zealots.
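
    The 50% figure comes from index counts: one quad is 4 indices, but the same rectangle as two GL_TRIANGLES takes 6. A minimal sketch of the expansion (function name is made up for illustration):

    ```c
    #include <stdio.h>

    /* Expand one quad (v0 v1 v2 v3) into the two triangles (v0 v1 v2) and
     * (v0 v2 v3): 4 indices become 6, i.e. 50% more index data per rectangle. */
    static void quad_to_triangles(const unsigned quad[4], unsigned tris[6]) {
        tris[0] = quad[0]; tris[1] = quad[1]; tris[2] = quad[2];
        tris[3] = quad[0]; tris[4] = quad[2]; tris[5] = quad[3];
    }

    int main(void) {
        const unsigned quad[4] = {0, 1, 2, 3};
        unsigned tris[6];
        quad_to_triangles(quad, tris);
        for (int i = 0; i < 6; ++i) printf("%u ", tris[i]);
        printf("\n");                         /* prints: 0 1 2 0 2 3 */
        return 0;
    }
    ```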

    My advice: Just continue to use display lists (and glBitmap and GL_QUADS), create your OpenGL context the way you always have (so you won't get a context that deprecates features), and be happy.

    The sad thing is that display lists remain THE fastest way to send static geometry and state changes despite their "deprecation".

    On recent NVIDIA drivers, the driver is capable of running your display list execution on another thread. This means all the overhead from state changes can be done on your "other core" thereby giving valuable CPU cycles back to your application thread. Display lists are more valuable today than they've ever been--making their "deprecation" rather ironic.

    * For dual-core driver operation to work, you need a) more than one core in your system, and b) to not do an excessive number of glGet* queries and such. If the driver detects your API usage is unfriendly to the dual-core optimization, the optimization automatically disables itself.

    - Mark

  8. #8
    Senior Member OpenGL Lord
    Join Date
    May 2009
    Posts
    6,031

    Re: Display lists in 3.1

    Is any of what was said here viable for persons wanting their code to work on non-NVIDIA implementations?

    NVIDIA is well known for having a very solid display list implementation. ATI is not.

    If the current Steam survey is correct, ATI only has ~25% of the market. That is still far too much to ignore.

  9. #9
    Advanced Member Frequent Contributor scratt's Avatar
    Join Date
    May 2008
    Location
    Thailand
    Posts
    555

    Re: Display lists in 3.1

    Quote Originally Posted by Mark Kilgard
    This whole idea of deprecating and removing features from OpenGL is just a really stupid idea. This is a great example of why: other APIs depend on features that have been deprecated, and it just makes OpenGL harder to use, not easier.
    Well, no it's not. The idea is to streamline the API and actually help you get on the fast path, while still providing the full feature set as people make the move over.

    Quote Originally Posted by Mark Kilgard
    Speaking about rendering fonts, another "deprecated" feature is glBitmap. But that's ok, you can recode all your bitmap font rendering code to use glyphs .... You'll end up greatly disturbing the OpenGL state machine and perhaps introducing bugs because of that.
    Why?

    Quote Originally Posted by Mark Kilgard
    Of course, when you go to render all those textured rectangles, you'll be sad to find out that another "deprecated" feature is GL_QUADS. The result is that if you now want to draw lots of textured rectangles (say, to work around the lack of glBitmap), you'll have to send 50% more (redundant) vertex indices to use GL_TRIANGLES instead. Of course, all OpenGL implementations and GPUs have efficient support for GL_QUADS. Removing GL_QUADS is totally inane, but that didn't stop the OpenGL deprecation zealots.
    This is just nonsense!

    AFAIK all geometry is converted to triangles at the HW level anyway.
    So QUADS were an artificial construct in many ways.
    They also pose various problems which triangles don't when it comes to the planarity of geometry.

    And you can actually render font glyphs using exactly the same number of vertices with triangle strips. That can be done on any version of OpenGL. If you want to get really funky, you can do fonts with a single point and use geometry shaders to construct the rest of the geometry, whilst also adding other shading effects. You simply need to RTFM!!!
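
    The strip version of a glyph rectangle really is the same 4 vertices a quad would use; only the ordering changes. A hedged sketch — the Vertex layout and function name are made up for illustration:

    ```c
    #include <stdio.h>

    /* Sketch: the same 4 vertices a GL_QUADS glyph would use, ordered for
     * GL_TRIANGLE_STRIP (bottom-left, bottom-right, top-left, top-right).
     * The layout is illustrative, not from any specific codebase. */
    typedef struct { float x, y, u, v; } Vertex;

    static void glyph_strip(float x, float y, float w, float h,
                            float u0, float v0, float u1, float v1,
                            Vertex out[4]) {
        out[0] = (Vertex){ x,     y,     u0, v0 };   /* bottom-left  */
        out[1] = (Vertex){ x + w, y,     u1, v0 };   /* bottom-right */
        out[2] = (Vertex){ x,     y + h, u0, v1 };   /* top-left     */
        out[3] = (Vertex){ x + w, y + h, u1, v1 };   /* top-right    */
    }

    int main(void) {
        Vertex v[4];
        glyph_strip(10.0f, 20.0f, 8.0f, 16.0f, 0.0f, 0.0f, 1.0f, 1.0f, v);
        for (int i = 0; i < 4; ++i)
            printf("(%g,%g) ", v[i].x, v[i].y);
        printf("\n");     /* prints: (10,20) (18,20) (10,36) (18,36) */
        return 0;
    }
    ```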

    Quote Originally Posted by Mark Kilgard
    My advice: Just continue to use display lists (and glBitmap and GL_QUADS), create your OpenGL context the way you always have (so you won't get a context that deprecates features), and be happy.

    The sad thing is that display lists remain THE fastest way to send static geometry and state changes despite their "deprecation".
    No, they are not. Proper use of the correct buffer objects and so on is just as fast as, and more flexible than, DLs.
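
    The buffer-object replacement for a static-geometry display list boils down to uploading one interleaved vertex array once with glBufferData and describing it with glVertexAttribPointer. A sketch of just the layout arithmetic (the struct is illustrative; the GL calls themselves need a context and are omitted):

    ```c
    #include <stddef.h>
    #include <stdio.h>

    /* Sketch of an interleaved vertex layout for a static VBO: uploaded
     * once, then described to glVertexAttribPointer with the stride and
     * offsets computed below via sizeof/offsetof. */
    typedef struct {
        float position[3];
        float normal[3];
        float texcoord[2];
    } Vertex;

    int main(void) {
        /* These are the stride/offset values the attribute setup would use. */
        printf("stride   = %zu\n", sizeof(Vertex));             /* 32 */
        printf("position = %zu\n", offsetof(Vertex, position)); /* 0  */
        printf("normal   = %zu\n", offsetof(Vertex, normal));   /* 12 */
        printf("texcoord = %zu\n", offsetof(Vertex, texcoord)); /* 24 */
        return 0;
    }
    ```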

    Mark, I thought I recognized your name. If you really are who your profile suggests, I am stunned by the stuff you are saying here! It's almost as if you had some kind of agenda, because the stuff you are saying is very biased and misleading. I recently moved to NVidia HW from ATI, but have always enjoyed a good relationship with ATI. As an ambassador of your company you've really put me off having any dealings with your corporation, which is not what I would expect at all! Frankly, your comments seem politically motivated, which would be fine if they weren't also (from my POV at least) deliberately misleading to people who come here seeking *unbiased* advice.

    Overall, (IMO) you are much much better served if you start to move over to OpenGL3.x and learn the best ways to do things. Using the deprecated model is of course an option, but in the long haul you are going to fall foul of significant future API changes...

  10. #10
    Advanced Member Frequent Contributor
    Join Date
    Apr 2003
    Posts
    669

    Re: Display lists in 3.1

    If you are really _the_ Mark Kilgard, I have to say I'm rather shocked by your suggestions. In one of your recent postings, you said that "The Beast has now 666 entry points". Do you really believe that an API of 666 (and growing!) functions is easier to maintain and _extend_ than a more lightweight one?

    nVidia and ATI are maybe the most important contributors to GL3.0+. If you seriously believe that removing DLs and GL_QUADS was a bad thing, why didn't you prevent it back then?

    This is a great example of why: other APIs depend on features that have been deprecated, and it just makes OpenGL harder to use, not easier.
    Existing (old) APIs can use the old OpenGL features. But you should not encourage people to use these old OpenGL features in their _new_, yet to be created APIs and applications.

    Yes, I see your point: today, getting even a single triangle on the screen is very hard from a beginner's point of view. But so is DX10... Let external libraries provide the convenience functions that beginners need. (Btw, where is that "ecosystem" Khronos was talking about years ago??)

    The sad thing is that display lists remain THE fastest way to send static geometry and state changes despite their "deprecation".
    Then why don't you just re-introduce them to GL3.0+ as a new extension? But in a proper way, fitted to the needs of modern OpenGL applications.

    If the driver detects your API usage is unfriendly to the dual-core optimization, the optimization automatically disables itself.
    When in the recent past have automated driver "guesses" been any good? I always see them fail.
    Buffer-Usage hints: failed.
    Special-value optimizations for uniforms: failed.
    Threaded Optimization: failed.
    automated Multi-GPU exploitation: failed.

    Give the API user explicit control. Instead of trying to guess, what the application intends to do, let the application tell the driver.
