Display lists in 3.1



aronsatie
04-02-2009, 12:33 AM
As far as I know, display lists are deprecated or even removed in OpenGL 3.1.

Because of this, one thing stops me from moving to 3.1: I use wglUseFontOutlines extensively, which returns display lists, and there is no other way to get extruded fonts.

Or is there? Does anyone know of an alternative way to get fonts as models and not bitmaps in 3.1?

Thanks.

Stephen A
04-02-2009, 01:06 AM
There are many ways to do that. The easiest is to use FTGL (http://ftgl.wiki.sourceforge.net/). Big plus: you get one step closer to cross-platform compatibility.
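
For what it's worth, a minimal sketch of the extruded-text path with FTGL (classic C++ API; the header and class names vary a little between FTGL releases, so treat the names below as illustrative rather than definitive):

#include <FTGL/ftgl.h>

void drawExtrudedText(const char* text)
{
    // The font path is just an example. FTExtrudeFont is the extruded-glyph class
    // in recent FTGL; older releases call it FTGLExtrdFont.
    static FTExtrudeFont font("C:/Windows/Fonts/arial.ttf");
    if (font.Error())
        return;                 // font failed to load
    font.FaceSize(72);          // glyph size
    font.Depth(10.0f);          // extrusion depth, i.e. the "3D" part
    font.Render(text);          // tessellates the outline and draws the extruded mesh
}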

Second alternative: you can read the outlines of the font files directly in your code and tessellate them. Not easy, but it's been done before.

Third alternative: check the implementation of wglUseFontOutlines in the Wine source code (but beware of its license).

aronsatie
04-02-2009, 01:49 AM
FTGL looks like the best alternative to me. Can you tell me what format it converts the font to, other than display lists?

Thanks.

Jan
04-02-2009, 02:02 AM
Read the manual?

aronsatie
04-02-2009, 05:40 AM
Oh, is it against the rules to ask WHY someone suggested something?

Thanks a lot, I will keep that in mind.

Stephen A
04-02-2009, 08:13 AM
I have never used FTGL, so I have no idea what formats it can convert to. However, I skimmed both the manual and its source code a couple of years ago and it looked both simple to use and versatile.

Mark Kilgard
06-05-2009, 04:16 PM
> As far as I know, display lists are deprecated or even removed in OpenGL 3.1.

This whole idea of deprecating and removing features from OpenGL is just a really stupid idea. This is a great example of why: other APIs depend on features that have been deprecated and it just makes OpenGL harder to use, not easier.

Speaking about rendering fonts, another "deprecated" feature is glBitmap. But that's ok, you can recode all your bitmap font rendering code to use glyphs loaded into textures where you draw a series of textured rectangles, one per glyph, to render your bitmap font characters. You'll be replacing a few lines of simple code, perhaps calling glutBitmapCharacter, with hundreds of lines of textured glyph rendering code. You'll end up greatly disturbing the OpenGL state machine and perhaps introducing bugs because of that. The driver still has all the code for glBitmap in it (so older apps work) so glBitmap could just work (as it will for older apps), but deprecation says those working driver paths aren't allowed to be exercised so the driver is obliged to hide perfectly good features from you.

Of course when you go to render all those textured rectangles, you'll be sad to find out that another feature "deprecated" is GL_QUADS. The result is that if you now want to draw lots of textured rectangles (say, to work around the lack of glBitmap), you'll have to send 50% more (redundant) vertex indices to use GL_TRIANGLES instead. Of course all OpenGL implementations and GPUs have efficient support for GL_QUADS. Removing GL_QUADS is totally inane, but that didn't stop the OpenGL deprecation zealots.
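
To make that index arithmetic concrete, a small illustrative sketch (generic example, not from any particular codebase; shown with client-side index arrays, which assumes a pre-3.1/compatibility context):

// Same glyph rectangle over vertices {0, 1, 2, 3}:
const GLuint quadIndices[4] = { 0, 1, 2, 3 };          // GL_QUADS: 4 indices per glyph
const GLuint triIndices[6]  = { 0, 1, 2,   0, 2, 3 };  // GL_TRIANGLES: 6 indices (50% more)

glDrawElements(GL_QUADS, 4, GL_UNSIGNED_INT, quadIndices);     // the pre-3.1 path
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, triIndices);  // the same glyph with triangles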

My advice: Just continue to use display lists (and glBitmap and GL_QUADS), create your OpenGL context the way you always have (so you won't get a context that deprecates features), and be happy.

The sad thing is that display lists remain THE fastest way to send static geometry and state changes despite their "deprecation".

On recent NVIDIA drivers, the driver is capable of running your display list execution on another thread. This means all the overhead from state changes can be done on your "other core" thereby giving valuable CPU cycles back to your application thread. Display lists are more valuable today than they've ever been--making their "deprecation" rather ironic.

* For dual-core driver operation to work, you need a) more than one core in your system, and b) to not do an excessive number of glGet* queries and such. If the driver detects your API usage is unfriendly to the dual-core optimization, the optimization automatically disables itself.

- Mark

Alfonse Reinheart
06-05-2009, 09:18 PM
Is any of what was said here viable for persons wanting their code to work on non-NVIDIA implementations?

NVIDIA is well known for having a very solid display list implementation. ATI is not.

If the current Steam survey is correct, ATI only has ~25% of the market. That is still far too much to ignore.

scratt
06-05-2009, 09:19 PM
This whole idea of deprecating and removing features from OpenGL is just a really stupid idea. This is a great example of why: other APIs depend on features that have been deprecated and it just makes OpenGL harder to use, not easier.

Well, no, it's not. The idea is to streamline the API and actually help you get onto the fast path, while still providing the full feature set as people make the move over.


Speaking about rendering fonts, another "deprecated" feature is glBitmap. But that's ok, you can recode all your bitmap font rendering code to use glyphs .... You'll end up greatly disturbing the OpenGL state machine and perhaps introducing bugs because of that.

Why?


Of course when you go to render all those textured rectangles, you'll be sad to find out that another feature "deprecated" is GL_QUADS. The result is that if you now want to draw lots of textured rectangles (say, to work around the lack of glBitmap), you'll have to send 50% more (redundant) vertex indices to use GL_TRIANGLES instead. Of course all OpenGL implementations and GPUs have efficient support for GL_QUADS. Removing GL_QUADS is totally inane, but that didn't stop the OpenGL deprecation zealots.

This is just nonsense!

AFAIK all geometry is converted to Triangles at the HW level anyway.
So QUADS were an artificial construct in many ways.
They also pose various problems which Triangles don't, when it comes to the "planarness" of geometry.

And you can actually do font glyphs using exactly the same number of vertices with triangle strips. That can be done on any version of OpenGL. If you want to get really funky, you can do fonts with a single point and use geometry shaders to construct the rest of the geometry, whilst also adding other shading effects. You simply need to RTFM!!!
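
For example, one glyph quad as a strip (a minimal sketch; it assumes a GL 3.x setup where a VAO is bound and the current shader takes a 2D position at attribute location 0):

// Draw one glyph rectangle as a GL_TRIANGLE_STRIP: four vertices, the same count
// GL_QUADS would need. Strip order: bottom-left, bottom-right, top-left, top-right.
void drawGlyphQuad(GLuint vbo, float x, float y, float w, float h)
{
    const GLfloat verts[8] = {
        x,     y,        // bottom-left
        x + w, y,        // bottom-right
        x,     y + h,    // top-left
        x + w, y + h,    // top-right
    };
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STREAM_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}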


My advice: Just continue to use display lists (and glBitmap and GL_QUADS), create your OpenGL context the way you always have (so you won't get a context that deprecates features), and be happy.

The sad thing is that display lists remain THE fastest way to send static geometry and state changes despite their "deprecation".

No, they are not. Proper use of the correct buffer objects and so on is just as fast as, and more flexible than, DLs.

Mark, I thought I recognized your name. If you really are the same as your profile suggests I am stunned by the stuff you are saying here! It's almost as if you had some kind of agenda. ;) Because the stuff you are saying is very biased and misleading. I recently moved to NVidia HW from ATI, but have always enjoyed a good relationship with ATI. As an ambassador of your company you've really put me off having any dealings with your corporation, which is not what I would expect at all! Frankly, your comments seem politically motivated, which would be fine if they weren't also (from my POV at least) deliberately misleading to people who come here seeking *unbiased* advice.

Overall, (IMO) you are much much better served if you start to move over to OpenGL3.x and learn the best ways to do things. Using the deprecated model is of course an option, but in the long haul you are going to fall foul of significant future API changes...

skynet
06-06-2009, 05:41 AM
If you are really _the_ Mark Kilgard, I have to say, I'm rather shocked by your suggestions. In one of your recent postings, you said that "The Beast has now 666 entry points". Do you really believe that an API of 666 (and growing!) functions is easier to maintain and _extend_ than a more lightweight one?

nVidia and ATI are maybe the most important contributors to GL3.0+. If you seriously think that removing DLs and GL_QUADS is a bad thing, why didn't you prevent it back then?


This is a great example of why: other APIs depend on features that have been deprecated and it just makes OpenGL harder to use, not easier.
Existing (old) APIs can use the old OpenGL features. But you should not encourage people to use these old OpenGL features in their _new_, yet-to-be-created APIs and applications.

Yes, I see your point: today, getting even a single triangle on the screen is very hard from a beginner's point of view. But so is DX10... Let external libraries provide the convenience functions that beginners need. (Btw, where is that "eco system" Khronos was talking about years ago??)


The sad thing is that display lists remain THE fastest way to send static geometry and state changes despite their "deprecation".
Then, why don't you just re-introduce them to GL3.0+ as new extension? But in a proper way, fitted to the needs of modern OpenGL applications.


If the driver detects your API usage is unfriendly to the dual-core optimization, the optimization automatically disables itself.
When in the recent past have automated driver "guesses" been any good? I always see them fail.
Buffer-usage hints: failed.
Special-value optimizations for uniforms: failed.
Threaded optimization: failed.
Automated multi-GPU exploitation: failed.

Give the API user explicit control. Instead of trying to guess what the application intends to do, let the application tell the driver.

scratt
06-06-2009, 07:36 AM
That post really ticked me off this morning!

The more I think about it, the more I think someone should see if Mark knows about this account.
He is either smoking crack, or a misguided fanboy has hijacked / created that account. I am hopeful it is the latter!

Alfonse Reinheart
06-06-2009, 12:35 PM
Kilgard has posted here in the past, though quite infrequently. Check his posts from his profile; it is him.

Mark Kilgard does not work for Khronos or the OpenGL ARB. He works for NVIDIA. When he thinks of OpenGL, he thinks of the NVIDIA implementation of OpenGL.


No, they are not. Proper use of the correct buffer objects and so on is just as fast as, and more flexible than, DLs.

I would not be too sure of that. Display lists ought to be able to achieve the maximum possible performance on a platform, but doing so requires a great deal of optimization work. NVIDIA did that work, and their display lists can be faster than VBOs. ATI has not, and their display lists show no speed improvements.

Dark Photon
06-06-2009, 12:35 PM
I for one agree completely with Mark's post. It's a practical view that makes sense.

IMO, as long as the latest GPU capabilities and fastest access methods are available to developers, and there is clear guidance on what those fast paths are and how to use them, there's no need to stab those with working codebases in the back with a mandated rewrite. That's the Microsoft DX mindset. They're a monopoly. They can do that. And gamedevs just have to suck it up and live with it. Not a big deal for them as they rewrite the engine every game anyway. Real commercial apps? Ha! Hardly. They have better things to do with company profit than re-invent the wheel, just because some bigshot supplier company said so.

So long as the older GL features don't get in the way of the fastest access methods, who really cares if they're still there. Vendors, because they still have to maintain them, right? Well NVidia's already clearly stated many times that they're fine with that, not to mention are doing great things lately to further improve OpenGL performance and have very stable GL drivers anyway. So let's just move on and stop trying to look out for the vendors. They can do that for themselves.

skynet
06-06-2009, 03:24 PM
as long as the latest GPU capabilities and fastest access methods are available to developers,

See, if all the old cruft stays forever, every new extension has to be cross-checked against every old feature. And even if the application sticks to the 'fast path', the driver MUST assume that you COULD make use of old features ANY time, so this adds extra code (which can contain bugs) which should not be needed in the first place.


and there is clear guidance on what those fast paths are and how to use them
Not true. Where is this clear guidance? Truth is, there are a lot of assumptions and myths on the internet about how to achieve maximum performance with OpenGL today, but if you then try them out, it often doesn't work that way. For instance, I recently tried to replace all matrix-stack-related code in my own engine in a "pure" GL3.1 manner with UBOs. Until recently I could not match glLoadMatrix's performance, because UBO buffer updates slowed me down to a crawl. All the "praised" methods for updating a buffer object with new data didn't work.
I will report about that in a few days. But this example again shows that there is no _clear documented way_ to achieve maximum performance for this (and many other) scenarios.
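
For reference, this is roughly the update pattern in question (a minimal sketch, assuming a GL 3.1 context and a shader whose uniform block is bound at binding point 0; the orphaning call is one of the commonly recommended tricks, not a guarantee of speed):

// Upload one matrix into a UBO, replacing what glLoadMatrix used to do.
void uploadMatrix(GLuint ubo, const GLfloat matrix[16])
{
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    // "Orphan" the old storage so the driver need not wait for the GPU...
    glBufferData(GL_UNIFORM_BUFFER, 16 * sizeof(GLfloat), NULL, GL_STREAM_DRAW);
    // ...then write the new matrix into the fresh storage.
    glBufferSubData(GL_UNIFORM_BUFFER, 0, 16 * sizeof(GLfloat), matrix);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);   // attach to binding point 0
}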


there's no need to stab those with working codebases in the back
Somehow this false prejudice stays in the mind of people :mad: The deprecation model will not break existing code!!!!


That's the Microsoft DX mindset. They're a monopoly. They can do that.
Well, the success of DX clearly shows that this mindset actually works better than sticking to the past forever.
And people often forget that just because DXn comes out, DXn-1 will not just vanish. If you are unwilling to change anything, you _can_ stick to your favourite DX version if you want. But in reality, people are mostly looking forward to the next DX version, eager to try out the new features it brings.


Real commercial apps? Ha! Hardly. They have better things to do with company profit than re-invent the wheel, just because some bigshot supplier company said so.
It's sometimes astonishing how little the developers of these actually know about modern OpenGL, because they just don't feel the pressure to keep up with the development of technology.
Seeing a video like this:
http://www.youtube.com/watch?v=Gm1l0Rjkm1E
in the year 2009 just makes me laugh. Wow... they actually managed to move from immediate-mode rendering to VBO-based rendering 6 years after the introduction of VBOs :confused:
Switching from IM to VBO is certainly not reinventing the wheel... it requires redesigning the whole renderer, and obviously it has been rewarded with much better performance. And the customers will surely like it.


So long as the older GL features don't get in the way of the fastest access methods, who really cares if they're still there.
I am not a driver programmer at nVidia or ATI, but my gut feeling as a programmer tells me that having to support (i.e. emulate) every old feature bloats the code; and more code means more bugs. Additionally, having to assume that the API user might trigger any of the old obscure features at any time means having to do checks for it all around the code.

nVidia claimed some time ago that they spend around 5% of driver time on GL-object-name checks and lookups. That is why Longs Peak would have introduced GL-provided handles for objects. The necessary checks/hashmaps/whatever could just go away with it, leaving faster and more stable driver code.

GL3.0 tries to get close to this by forbidding you to provide your own names for textures and any other GL objects.

But, surprise, surprise, ARB_compatibility reintroduces name-generation-on-first-bind semantics. The driver just has to assume that I create my own names, and therefore all that 5% overhead stays in the driver: a clear example where old features get in the way of faster methods.

Alfonse Reinheart
06-06-2009, 06:56 PM
The deprecation model will not break existing code!!!!

It is here that the flaw lies with the deprecation model.

The purpose of the Longs Peak effort was to create a modern API not bound by the cruft of decades of extensions and patches. Ergo, it must break existing code. That is, existing code would not work on Longs Peak implementations. They would have to rely on 2.x implementations for older features.

The deprecation model does not allow this. The deprecation model forces hardware makers to continue to implement the old code. Until deprecated APIs are actually removed, they will be available. And the longer they remain available, the longer hardware makers will have to keep them available.

And if what Kilgard suggests is true, NVIDIA appears to want to support these APIs even if they are removed. We will never be rid of these APIs.

Longs Peak would have forced the issue and made a clean break.


And people often forget that just because DXn comes out, DXn-1 will not just vanish.

This is due to Microsoft's unfailing need for full backwards compatibility. Microsoft assumes the responsibility for making DXn-based hardware work on DXn-1 APIs. They separate what hardware makers have to implement and what users see.

No such agency or separation exists for OpenGL on Windows.


But, surprise, surprise, ARB_compatibility reintroduces name-generation-on-first-bind semantics.

ARB_compatibility is an extension. It is supported only if an implementation chooses to support it.

Your conclusion is also incorrect. It is simply not enough to forbid the user from providing their own names. GL objects are pointers. And pointers are 64-bits in size. A 64-bit GL implementation will need to have a mapping table, and the 5% lookup overhead that comes with it.

Starfox
06-06-2009, 07:31 PM
GL objects are pointers. And pointers are 64-bits in size. A 64-bit GL implementation will need to have a mapping table, and the 5% lookup overhead that comes with it.
Not necessarily, it can reserve a large contiguous memory chunk and use 32-bit offsets into it.

scratt
06-06-2009, 08:54 PM
Well there are some undeniable facts here..

1. Mark's comments about QUADS are completely misleading and incorrect.

2. OpenGL is a High Speed Graphics library, and whilst some funky driver specific DL implementation may be nice, as another poster pointed out: If they are so damn good then resubmit them updated for GL3.x.

The object of OpenGL is not to hand hold people who learned the API in the 60's, but to move forward and maintain a High Speed *Platform Independent* Graphics library. Mark's comments are just as counterproductive as the comments of those that want to keep crap in OpenGL to support their aging CAD/CAM packages...

3. This is an OpenGL forum, and in this particular area people come for advice, not information that is skewed by partisan or corporate loyalty. Fair enough, point out your opinions, and your feelings on a subject, but DO NOT mislead people deliberately. Mark's advice was not good advice for OpenGL as it stands today, or moving forward. It was the blinkered view of a manufacturer's rep, with all that that carries with it.

Sure, I am really happy about some of Nvidia's short-term decisions about the deprecation model, and overall if I am honest I have found their OpenGL implementations to be more solid than ATI's. But that does not excuse them, or their reps, from trollish behavior if/when they do it. And all Mark has done is make me more nostalgic about the exchanges I have had with ATI's engineers.

I really like this forum, and the people here, and enjoy doing my small part to help out, and I really can't express how out of the blue and how annoying Mark's comments were to me. I expect better from someone who has been working in this industry for so long, which is why I wasn't even sure it was him.

Regardless of Mark's viewpoint, what really ticked me off was that the information was deliberately factually incorrect, and Mark knew that as he typed it. Period.

With regard to DX, I think Microsoft learned their lesson with Windows. If OpenGL followed the Windows path forever, then it might become as messy as OpenGL 3.0 was potentially looking when it was first revealed to us.

Alfonse Reinheart
06-06-2009, 10:30 PM
Not necessarily, it can reserve a large contiguous memory chunk and use 32-bit offsets into it.

You have just described a hash table. This is probably not far from what they already use.

The principal difference between what they have now and what you describe is that they do not have to handle the pathological case of the user submitting an object name like MAX_INT. The performance difference between the two cases would probably be negligible.


Mark's comments about QUADS are completely misleading and incorrect.

I checked the OpenGL 3.1 specification. GL_QUADS are specifically mentioned as being deprecated. The deprecation section is the only place where QUADS are mentioned.

His statement is not completely wrong. He was correct about QUADS being deprecated.


as another poster pointed out: If they are so damn good then resubmit them updated for GL3.x.

NVIDIA is one voice in the ARB. If NVIDIA were in charge, we would have had Longs Peak a year ago.


Mark's advice was not good advice for OpenGL as it stands today, or moving forward.

For some of his advice, I would agree that this is true. Nobody should ever use glBitmap. It is an unoptimized path on most implementations.

But on the removal of QUADS, he is correct. This is a problem that has yet to be rectified.

scratt
06-07-2009, 02:53 AM
Mark's comments about QUADS are completely misleading and incorrect.

I checked the OpenGL 3.1 specification. GL_QUADS are specifically mentioned as being deprecated. The deprecation section is the only place where QUADS are mentioned.

His statement is not completely wrong. He was correct about QUADS being deprecated.


Mark's advice was not good advice for OpenGL as it stands today, or moving forward.

For some of his advice, I would agree that this is true. Nobody should ever use glBitmap. It is an unoptimized path on most implementations.

But on the removal of QUADS, he is correct. This is a problem that has yet to be rectified.

Har Har.
If you are going to split hairs at least make sure you have your semantics correct. :)
At no point did I say his statement was "completely wrong".
QUADS are deprecated. I never disputed that.

What he said may as well have been completely wrong though because it was totally unhelpful.

What I actually said was this:


Mark's comments about QUADS are completely misleading and incorrect.

His comments are completely misleading and are incorrect advice as they lead the reader to assume that the deprecation of QUADS puts you in a situation where you are going to have to shift more data around to achieve things that were previously possible in OpenGL. That is not the case at all!

Just like his view of using glBitmap, which you pick him up on, using QUADS for geometry is open to abuse, and is not actually the fastest way to do it either.

It is the most conceptually easy for the lazy to understand and code, but not a good use of HW.

But then we get back to my earlier point which is that this is a High Speed Graphics API, not a nursery API, or a plotting language!

Alfonse Reinheart
06-07-2009, 03:15 PM
Just like his view of using glBitmap, which you pick him up on, using QUADS for geometry is open to abuse, and is not actually the fastest way to do it either.

It is the most conceptually easy for the lazy to understand and code, but not a good use of HW.

Please provide evidence, in the form of actual hardware tests and timings, that sending quadrilateral data via 2 GL_TRIANGLES (6 vertices) per quad is faster than sending quadrilateral data via GL_QUADS.

The fact that QUADS is deprecated is not evidence that it is slow.

scratt
06-07-2009, 08:50 PM
Are you being deliberately dense?

If you read my posts from earlier in this very thread, or have any kind of OpenGL experience, you'll be aware of the various triangle strip, fan, etc. configurations and also indexing. Both offer faster ways to put text, or any kind of "QUAD", on screen than GL_QUADS. Particularly with long lines of text, those two methods offer a better alternative than QUADS, and always have done.

Using indexing or stripping, you can send the same 50 characters that a QUADS system would send to the screen with half (or fewer) of the vertices. On any HW that is going to be faster.

The point is you don't need to send even 4 vertices per QUAD if you use methods like Strips and Fans. So both your and Mark's understanding of these formats seems to be lacking, either deliberately or through pure ignorance.

There is a reason why in Geometry Shaders you don't have QUADS either btw.
There is also a reason why benchmarking is done using TRIs as well.
Think about it for a second.

If you delve a little deeper then you might even discover Geometry Shaders and realize that you could even do text with a single vertex. But that's obviously not available on all HW.

Getting rid of QUADS might be unpopular, but it is a good thing in the long run, as it makes you look at what the GPU is meant to work with as a raw format, and pushes you in a direction which is going to give you more efficient results.

Are you really going to take your lead from someone who recommends using the old glBitMap commands!?!?

RTFM! ;)

Alfonse Reinheart
06-07-2009, 10:11 PM
Using indexing or stripping, you can send the same 50 characters that a QUADS system would send to the screen with half (or fewer) of the vertices. On any HW that is going to be faster.

I understand now. You are speaking under the belief that all fonts are fixed-width. You assume that a run of text will result in a sequence of quadrilaterals that all share edges with one another, with no breaks in between glyphs.

This is not the case.

Any glyphs generated from variable-width fonts will not share edges. Ligatures, kerning, and other text formatting effects will see to that. Glyphs can overlap or have large gaps between them. This is designed into the font and must be respected for best visual results.

If you limit yourself to fixed-width fonts of Latin-1 languages, then your statement may be correct.


If you delve a little deeper then you might even discover Geometry Shaders and realize that you could even do text with a single vertex. But that's obviously not available on all HW.

The key question in this discussion was performance. Geometry shaders cost performance. The number of vertices would decrease only at the expense of increasing the overall render time.


Are you really going to take your lead from someone who recommends using the old glBitMap commands!?!?

This part of the discussion is not about trusting Mark Kilgard or you. It is about what is correct. Just as Mark is incorrect about using glBitmap, you are incorrect about triangle strips being more effective in text rendering.

scratt
06-07-2009, 10:22 PM
I understand now. You are speaking under the belief that all fonts are fixed-width. You assume that a run of text will result in a sequence of quadrilaterals that all share edges with one another, with no breaks in between glyphs.

Nope. Not at all.

One single instance of a QUAD drawn as a TRIANGLE_STRIP uses *exactly* the same number of vertices as a QUAD. Period.

There is a direct comparison for you. None of your qualifications.

Not a good real world example, but nonetheless the same.

If you want to do varying widths, then indexing and many other methods are available: degenerate triangles and so on. You just have to be familiar with them.
And in most, if not all, of them there are ways to solve all the problems you pose *and* save data transfer / processing power.

I have mentioned all these methods to you already in various places and you choose to latch onto the strip example as the one and only case. That is as blinkered as believing that QUADS are the panacea to all ills. They are not, and they can be easily and exactly replicated using the same number of vertices, or even fewer.

AFAIK that is exactly what the hardware / drivers have been doing behind the scenes for a long time. Only now it is exposed to us and we are asked to take some time to think about it on our end, and use more native formats for our geometry.
A good idea IMO.

Now back to your Geometry shader comment.
The whole point of sending a single vertex and then making the geometry on the GPU is exactly to limit the client-server bandwidth usage, which is exactly what your complaint and Mark's were about, since you are talking about the number of vertices sent.

As for the allocation or usage of horsepower on the CPU end or GPU end, what do you think happens on the GPU end when you submit a QUAD?

Eric
06-08-2009, 12:25 AM
Someone sent an alert to the moderating team about Mark's post, on the assumption that the poster wasn't really Mark Kilgard. Just to make things clear: as Alfonse Reinheart pointed out earlier in this topic, all the evidence shows that the post comes from the real Mark Kilgard.

Alfonse Reinheart
06-08-2009, 01:50 AM
Now, I really understand. You think that GL_QUADS is not directly supported by hardware. That every draw call with GL_QUADS causes a software routine to run that converts the data into GL_TRIANGLES or some other format.

You are mistaken.

The piece of hardware that reads from the post-T&L cache understands GL_QUADS. It therefore reads 3 vertices from the cache, outputs a triangle made of them, reads a fourth vertex, and outputs a second triangle made of vertices 1, 3, and 4. Then it advances to the next 4 vertices and repeats.

This same piece of hardware understands GL_TRIANGLES (read three, emit triangle, repeat), GL_TRIANGLE_STRIP (read 3, emit triangle, advance 1 vertex, repeat), GL_TRIANGLE_FAN, and the rest.

This hardware has existed for near on a decade. It has existed in hardware since hardware T&L came to be. Every piece of T&L-based hardware must have something that converts triangle strips and fans into a list of triangles. It is in no way difficult to add modes to this hardware that can handle quad lists.

You may ask for evidence of this. My evidence is an old extension: NV_vertex_array_range (http://www.opengl.org/registry/specs/NV/vertex_array_range.txt). This is an NVIDIA extension, an ancient, very low-level precursor to buffer objects.

Near the end of the specification are implementation notes for NVIDIA hardware of the time. It lists the appropriate vertex formats that are acceptable on their hardware. It lists things like vertex attribute sizes for specific attributes, particular vertex formats, and so forth. It lists specific limitations for specific hardware. The NV10, the GeForce 256, is specifically mentioned. It has a number of very strict vertex format limitations.

Nowhere on that comprehensive list of limitations will you find a prohibition on the use of GL_QUADS. If NVIDIA had implemented GL_QUADS as a software layer, reading the vertex data and turning it into GL_TRIANGLES or some other form, then they would have stated a prohibition against using GL_QUADS. This is because reading from VAR memory would be very performance damaging.

No such limitation is spoken of. This is because no such limitation exists. And the only reason for that is if the hardware natively supported GL_QUADS. I even found an old demo program I downloaded years ago for VAR. It uses GL_QUAD_STRIP exclusively.

Not everything that was deprecated by the ARB was unsupported in hardware.

Xmas
06-08-2009, 02:59 AM
Is there some proof that all OpenGL implementations efficiently support GL_QUADS?

GL_QUAD_STRIP is different because it almost directly maps to GL_TRIANGLE_STRIP (except in wireframe mode, and rounded down to an even number of vertices).

Jackis
06-08-2009, 03:13 AM
Look, comrades, it was a Friday night, and Mark was surely exhausted after a hard week... So he took a few bottles of beer or something else and left an emotional post. He's not a robot in any way; he has his own points of view, so what are we discussing? As for me, he could even have written "Damn, I hate GL, it suxx, use DX, it rulezz!", and it would be his point of view. It's a free forum, not a Khronos corporate mailbox.
The deprecation mechanism is here with us, and we can't undo it, so we must deal with it. Me, I'm also worried about driver efficiency with two codepaths being supported, but anyway, it's not my headache! It's the driver developers who must take care of it, and I believe the NV and ATI guys will do their best.
About quad strips vs tri strips: my tests a year ago showed me *NO* performance difference between the two. The test was a fairly good one for me: big rectangular grids of 255*255 vertices. And I saw no difference.
Tri strips are much clearer to me, because I have full control over the diagonal edge. With a quad I don't know where it's supposed to be. And a triangle is the only primitive with correct attribute interpolation; a quad is not.

dletozeun
06-08-2009, 04:25 AM
Reading all these posts debating about quads, I am wondering if all that's worth the effort, I mean, who is using quads en masse in his renderer if it is not for postprocessing things (fullscreen quads), hud or font rendering?
IMO, all this stuff is nothing in computation time terms compared to rest of the scene rendering. I think I need some more clarifications about your passion about quads. :)
(Please stay calm, I am not criticizing anybody, quads fans or not :) , just need your light).

Dark Photon
06-08-2009, 05:25 AM
Reading all these posts debating about quads, I am wondering if all that's worth the effort...
Amen to that. Unless the bandwidth overhead of the extra vertex indices is the bottleneck for rendering QUADS as TRIS for someone, this whole discussion makes no sense to me. I've never observed index bandwidth to be the bottleneck.


I mean, who is using quads en masse in his renderer if it is not for postprocessing things (fullscreen quads), hud or font rendering?
We use 'em. Impostors, light points, etc. But for convenience, not because there's some coveted perf advantage we know of.


I think I need some more clarifications about your passion about quads. :)
Yeah, I missed the whole point too. But I fear we're getting off-topic, that being a merciless flame of Mark for stating his opinion. Which I wish would just stop.

Next thread...

Jan
06-08-2009, 08:24 AM
I can see that DLs are a good idea for multithreaded drivers. However, the CURRENT DLs are a pain in the ass, always have been, and I am sooo happy that they have finally been kicked out.

HOWEVER, now that this abomination is gone, it might be a good idea to re-invent them in a way that can be supported easily by all vendors and is guaranteed to give good performance.

There has been another thread about it, and I have already given a few ideas, but now is the time for NV, ATI and the ARB to decide how to continue with this.

Oh, and thanks to nVidia and ARB_compatibility we have actually entirely "broken" GL 3 implementations today. nVidia might have the best OpenGL implementation, but they are absolutely reckless and don't care much about the future of OpenGL as long as they are "the best" (well not as bad as ATI and Intel...) today.

And I think that Mark's comment shows nVidia's view on OpenGL 3 pretty well. Even if it is his personal view, it is definitely influenced by the general opinion at nVidia.

Jan.

Brolingstanz
06-08-2009, 09:35 AM
If I'm NV I'm probably thinking

- If it ain't broke, don't fix it!
- Why rock the boat?
- Why make things easier for the competition?
- What's to be gained by streamlining the API in areas where we have a considerable investment, proven track record of stability and speed, and the customers to prove it?
- Why throw the baby out with the bathwater?
- Where's the real beef?
- Demonstrated and continued leadership in propelling the API forward virtually unfettered by concerns over, say, "QUADS".

Developers are always on the lookout for ways to make their lives easier down the road, but we all know full well that's not what makes the (fiscal) world go 'round.

Back to tendin' my biscuits...

P.S. Sorry for the OT but I can't resist playing devil's advocate. ;-)

scratt
06-08-2009, 10:44 AM
<snip>

Good Grief.

Nope I don't think that QUADS drop you off the fast path.

We could go on all day... or all weekend and into the week. Oh wait, we did!
I do appreciate your comments, and those of others.

Alfonse Reinheart
06-08-2009, 10:44 AM
Oh, and thanks to nVidia and ARB_compatibility we have actually entirely "broken" GL 3 implementations today.

In what way does ARB_compatibility break NVIDIA's GL 3 implementation?


If I'm NV I'm probably thinking

No, what NVIDIA wants is NvidiaGL.

The best example of this is the bindless graphics extension. The claim is made that bindless graphics is the best way to achieve fast performance on NVIDIA hardware.

If NVIDIA's goal is to subvert OpenGL and convert it into NvidiaGL, what better way than with bindless graphics? To use it, you have to write special shaders that are entirely incompatible with regular shaders. To use it, you have to write rendering code that is entirely incompatible with previous rendering code. To support non-bindless and bindless in a single application, you must write and maintain 2 copies of all vertex shaders.

Adding to that is the issue of Vertex Array Objects and Uniform Buffer Objects. These should provide fast, efficient ways to change state and render primitives. Yes, bindless should be faster, but it should not be 7x faster. With NVIDIA emphasizing bindless graphics, what incentive does NVIDIA have to optimize these codepaths? All they need to do is create a self-fulfilling prophecy, that bindless graphics is much faster than the alternatives.

Id Software can afford to maintain 2 shader stacks. As can Blizzard and many other large developers. What do the rest of us do? Accept the needlessly 7x slower VAO and UBO method?


Nope I don't think that QUADS drop you off the fast path.

Then how can QUADS possibly be slower than indexed triangle strips? Indexed strips send far more vertex data; they have to send indices. Font rendering does not have the vertex sharing that indexed strip rendering needs to be more efficient. Every letter would need a degenerate strip connecting it to the next. So every letter would take 4 indices for the letter, and 4 indices for the degenerate strips connecting them.

knackered
06-08-2009, 11:32 AM
This is a bit surreal.
I like the cut of Mark's jib, but I don't agree with most of what he's said. Should have been a clean break: feature-freeze old GL, introduce a new API - this hybrid nonsense is bad. I agree with the stuff he said about dlists, but only for geometry, and only as a means of letting the driver put the geometry into an optimal format. If I want that done in another thread, it should be up to me - not the driver. I'm best placed to decide how to use my CPUs; the driver should stick to optimising stuff for the GPU.

Mark Kilgard
06-08-2009, 11:59 AM
> If you are really _the_ Mark Kilgard, I have to say, I'm rather shocked by your suggestions. In one of your recent postings, you said that "The Beast has now 666 entry points". Do you really believe that an API of 666 (and growing!) functions is easier to maintain and _extend_ than a more lightweight one?

I think (scratch that), I know that the size of the API (whether 20, 666, or 2000 commands) has little to do with how easy it is to maintain and extend a 3D API. Does that shock you? It might; I've worked on OpenGL for 18 years so I approach your question with a good deal of accumulated experience and even, dare I say, expertise on the subject.

I don't think API entry point count has much, if really anything, to do with maintainability of an OpenGL implementation. It has far more to do with 1) investment in detailed regression testing, 2) hiring, retaining, and encouraging first-rate 3D architects, hardware designers, and driver engineers, 3) clean, well-written specifications, and 4) a profitable business enterprise that can sustain the prior three.

Those are the key four factors. I could probably list more if you forced me to do so, but those are really the four key factors. If you forced me to list 20 more, I'm confident API size would still not make my list.

> nVidia and ATI are maybe the most important contributors to GL3.0+. If you seriously think that removing DLs and GL_QUADS is a bad thing, why didn't you prevent it back then?

I thought it was a poor course of action then; I think it's a poor course of action now. I've done my best to prevent deprecation from hurting the OpenGL ecosystem. Deprecation exists, but I consider it to be basically a side-show.

NVIDIA doesn't remove and won't remove GL_QUADS or GL_QUAD_STRIP or display lists (or any of the so-called deprecated features). These features all *just work*. Obviously our underlying GPU hardware does (and will always) support quads, etc.

Now if YOU want to avoid these features because YOU think (or someone else has convinced you) these fully operational features are icky or non-modern, go ahead and don't use them. But nobody has to stop using them, particularly if they find them useful/fast/efficient/convenient or simply already implemented in their existing code base. NVIDIA intends to keep all these features useful, fast, efficient, convenient, and working.

The problem is that someone's judgment (be they app developer, driver implementer, or whatever) of what is good and bad in the API probably doesn't match the judgment of others. My years of experience inform me that people tend to consider features they personally don't happen to use as "non-essential" and ready fodder for deprecation. The fact that other OpenGL users may consider these same features totally essential and have built substantial chunks of their application around the particular feature you consider non-essential probably doesn't matter much to you; I assure you the person or organization or industry relying on said feature feels differently.

What you might not appreciate (though I do!) is that this unspecified "other user" may be the one that does far more than you to sustain the business model that supports OpenGL's continued development. CAD vendors used to say (this is less so now) they didn't care about texture mapping; game developers would say they don't care about line stipple or display lists.

For good reason, the marketplace doesn't really let you buy a "CAD GPU design" or "volume rendering GPU design" or "game GPU design" tailored just for CAD, volume rendering, or gaming; the same efficient, inexpensive GPU design can do ALL these things (and more!) and there's no specialized GPU design on the market that can do CAD (or volume rendering or gaming) better than the general-purpose GPU design.

That said, a particular product line (such as Quadro for CAD) can be and is tailored for the demands of high-end CAD and content creation, but the 3D API rendering feature set (what is actually supported by OpenGL) is the SAME for a GeForce intended for consumer applications and gaming. In the same way, when GeForce products are tailored for over-clocking and awesome multi-GPU configurations, that's simply tailoring the product for gaming enthusiasts. This is much the same way there's not a CPU instruction set for web browsing and different instruction set for accounting.

There's a fallacy that if somehow the GPU stopped doing texture mapping well it would run CAD applications better; or if the GPU stopped doing line stipple (or quads or display lists), the GPU would magically play games faster. In isolation, the cost of any of these features is pretty negligible, and certainly the subtraction of one feature won't improve another, different feature. There have also been repeated examples of "unexpected providence" in the OpenGL API, where a feature such as stencil testing, designed originally for CAD applications to use for constructive solid geometry and interference detection, gets used to generate shadows in a game such as Doom 3 or The Chronicles of Riddick.

Said another way, if I concentrated on *just* the features of OpenGL YOU care about, I would likely NOT have a viable technical/economic model to sustain OpenGL. It's probably also true that if I just concentrated on the features of unspecified "other user" of OpenGL, I would also NOT have a viable technical/economic model to sustain OpenGL. But in combination, the multitude of features, performance, and capacity requirements of the sum total of 3D application development create a value-creating economic environment that sustains OpenGL in a way that benefits all parties involved.

Knowing this to be true, how do you expect that "zero'ing out" features by deprecation is going to suddenly make other features better or faster? There's a knee-jerk answer: duh, well, if company Z doesn't have to work on feature A anymore, they will finally have the time/resources to properly implement feature B.

But that doesn't hold up to scrutiny. Almost all of the features listed for deprecation have been in OpenGL since OpenGL 1.0 (1992). If the features were simple enough to implement in hardware 17 years ago, and now you have over 200x more transistors for graphics than back then, was it really the complexity of some feature that has saddled company Z's OpenGL implementation for all these years? Give me a break.

Moreover, feature A and feature B are very likely completely independent features with almost nothing to do with each other. Then you can't claim feature A is making feature B hard to implement.

> Existing (old) APIs can use the old OpenGL features. But you should not encourage people to use these old OpenGL features in their _new_, yet to be created APIs and applications.

I encourage anyone using OpenGL to use any feature within the API, old or new, that meets their needs.

If you think I'm going to be going around telling NVIDIA's partners and customers (or anyone using OpenGL) what features of OpenGL they should not be using, you are sadly mistaken.

Developers are free to use old and new features of OpenGL and they should rightfully be able to expect the features to interact correctly, operate efficiently, and perform robustly. Why would I (or they) want anything less than that?

I think it is wholly unreasonable to tell developer A that in order to use new feature Z, developer A is going to have to stop using old features B, C, D, E, F, G, H, I, J, K... (the list of deprecated feature is long) that have *nothing* to do with feature Z.

This isn't to say that I want OpenGL to be stagnant. Far from it, I've worked hard to modernize OpenGL for the last decade. I wrote and implemented the first specification for highly configurable fragment shading (register combiners), specified the new texture targets for cube mapping, specified the first programmable extension for vertex processing using a textual shader representation, played an early role (and continue to do so) developing a portable, high-level C-like language (Cg) for shaders, specified and implemented support for rectangle and non-power-of-two textures, implemented the driver-side support for GLSL and OpenGL 2.0 API for NVIDIA, and more recently worked to eliminate annoying selectors from the OpenGL API with the EXT_direct_state_access extension. Before any of this, I wrote GLUT to help popularize OpenGL.

All in all, I'm pretty committed to OpenGL's success. If I thought deprecation would make OpenGL more successful, I'd be all for it (but that's entirely NOT the case). Instead, I think deprecation is on-balance bad for or, at best, irrelevant to OpenGL's future development and success.

I'm really proud of what our industry (and the participants on opengl.org specifically) have managed to create with OpenGL. Arguably, source code implementing 3D graphics is MORE portable across varied computing platforms than code to implement user interfaces, 2D graphics, or any other type of digital media processing. That's amazing.

But deprecation in OpenGL is an unfortunate side-show. It's a distraction. It gives other OpenGL implementers an excuse for foisting poorly performing and buggy OpenGL implementations on the industry; they (wrongly) get off lightly from you developers by employing a "blame the API" strategy that places the costs of deprecation wholly on YOU rather than them just properly designing, implementing, and testing good OpenGL implementations.

Deprecation asks You All (the sum total of OpenGL developers out there) to solve Their Problem which is they refuse to devote the time and engineering resources to robustly implement OpenGL properly; instead, they blame the API and hope You All will re-code All Your applications to avoid the simpler solution of Them simply properly implementing their own OpenGL implementation.

Trust me, API size is NOT at the core of why these problem implementations are poor (go back to the four factors I listed earlier...). Attempts to "blame the API" for what are clearly faults in their implementation doesn't fix any root causes.

As an OpenGL developer, rather than poorly utilizing your time trying to convert your code to avoid deprecated features, you would be better served sending a loud-and-clear message that you expect OpenGL to be implemented fully, efficiently, and robustly.

- Mark

knackered
06-08-2009, 12:30 PM
that's all very well and good, but We don't have the market share to influence support of a minority API. We either have to continue to work around Their awful implementations of an undeniably complicated API, or We move to D3D if We have the choice (which I don't). Thanks for your understanding, Mark. How did you get so much ivory to make that tower?

Alfonse Reinheart
06-08-2009, 12:58 PM
I don't think API entry point count has much, if really anything, to do with maintainability of an OpenGL implementation. It has far more to do with 1) investment in detailed regression testing, 2) hiring, retaining, and encouraging first-rate 3D architects, hardware designers, and driver engineers, 3) clean, well-written specifications, and 4) a profitable business enterprise that can sustain the prior three.

It is a truism of software engineering that the larger your codebase is, the more people and effort you need to maintain and extend it. The larger your codebase, the more investment in detailed regression testing you need. The more hiring, retaining, and encouraging of first-rate 3D architects, hardware designers, and driver engineers you need. And so on.

In short, the larger the codebase that an OpenGL implementation requires, the more money it takes to build and maintain it.

This is the reason why Intel's Direct3D drivers are pretty decent, while their OpenGL drivers are terrible. OpenGL implementations simply require more effort, and Intel does not see any profit in expending that effort.

Therefore, if OpenGL implementations required smaller codebases, then the meager effort that Intel already expends might be sufficient to create a solid GL implementation. That is the goal.

Maybe this will fail to achieve the goal. But it is certainly true that doing nothing will fail to achieve the goal, as it has already failed to do so for 10 years.


they (wrongly) get off lightly from you developers by employing a "blame the API" strategy that places the costs of deprecation wholly on YOU rather than them just properly designing, implementing, and testing good OpenGL implementations.

I disagree.

As a practical, reasonable programmer, I understand that there are tradeoffs. I understand why an implementation may not bother to optimize display list calls or glBitmap calls. I do not hold this against them. They, like the rest of us, live in the real world of limited budgets and manpower. They focus on what gets the best bang for the buck.

And as a developer, I prefer having the control that a lower-level interface provides. I do not want to have to guess at what APIs work well or not. If it means more work on my part, then I accept that.

There are two solutions to the implementation problem. One solution is to force all implementations to be complete and optimized. The other is to make what they're implementing simpler, so that the implementation can be more complete against the simpler specification.

Option one does not exist. We have tried this for 10 years, and there has been no success. Some have gotten better, this is true. But the fact remains that there are API landmines that throw you off the fast path in most implementations. These will not go away no matter what we do.

Refusing to accept deprecation and relying on ARB_compatibility will not change things. It will not give ATI or Intel added reason to spend more resources on OpenGL development.

I agree that the problems are the fault of implementers. But since they have not in 10 years fixed these problems, it is clear that they are not going to or are not able to. Therefore, it is incumbent upon us to find an alternative solution that can benefit both parties. "My way or the highway" doesn't work; compromises may.

Even if it means I have to convert GL_QUADS to GL_TRIANGLES for no real reason, I am willing to do so as my part of the compromise position. I expect better performing and better conforming implementations from those who have been deficient in return.

Zengar
06-08-2009, 01:56 PM
I have contacted Mark and can confirm that this is indeed his account. So I would like everyone to withhold any speculation about his identity in the future. Thanks ;)

dletozeun
06-08-2009, 04:00 PM
I think that Mark Kilgard's last post ends this "dialogue of the deaf" pretty well (if I may say so).

dukey
06-08-2009, 05:00 PM
The easiest way to draw font stuff is using wglUseFontBitmaps under Windows, which for OpenGL 3.1 we can no longer use... I've tried a lot of other methods; they all rely on 3rd-party software, most of which has no support for Unicode, which is a giant fail. Is it so hard to get font rendering with decent anti-aliasing built into OpenGL? :eek:

I think the font issue alone will be enough to scare CAD developers completely away from OpenGL 3.1.

Ilian Dinev
06-08-2009, 07:24 PM
I think the font issue alone will be enough to scare CAD developers completely away from OpenGL 3.1.
IMHO drawing text has become easier and more efficient. You just ask GDI (or whatever the equivalent is on Linux) to make a bitmap of selected chars or whole lines of text (preferably), put that in texture atlases and draw away. There's enough RAM and VRAM to waste nowadays.
3D extruded text is a rarely used feature, but made easy with existing libs afaik.
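
Roughly like this on the Windows side (an illustrative sketch only; sizes, the font and the GL texture id are placeholders, error handling is omitted, and it needs <windows.h>, <cstring> and the GL headers):

// Rasterize a line of text with GDI into a 32-bit DIB, then upload it as a texture.
void uploadTextToTexture(GLuint tex, const char* text)
{
    const int W = 256, H = 64;
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = W;
    bmi.bmiHeader.biHeight      = -H;              // negative = top-down pixel rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* pixels = NULL;
    HDC dc = CreateCompatibleDC(NULL);
    HBITMAP bmp = CreateDIBSection(dc, &bmi, DIB_RGB_COLORS, &pixels, NULL, 0);
    SelectObject(dc, bmp);
    memset(pixels, 0, W * H * 4);                  // clear to black first
    HFONT font = CreateFontA(32, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                             DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                             ANTIALIASED_QUALITY, DEFAULT_PITCH, "Arial");
    SelectObject(dc, font);
    SetBkMode(dc, TRANSPARENT);
    SetTextColor(dc, RGB(255, 255, 255));
    TextOutA(dc, 0, 0, text, (int)strlen(text));
    GdiFlush();                                    // make sure GDI has finished writing

    // White-on-black coverage ends up in the RGB channels; a shader can use .r as alpha.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);

    DeleteObject(font);
    DeleteObject(bmp);
    DeleteDC(dc);
}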

scratt
06-08-2009, 08:29 PM
FWIW I agree with you.

Simon Arbon
06-08-2009, 09:56 PM
<u>DISPLAY LISTS</u>
We still use these extensively, mainly to simulate the proposed OGL3 features that were supposed to replace them, but which still have not turned up (such as lpDrawArrays and program objects).
I really liked the "Enhanced display lists" described at SIGGRAPH Asia 2008 and hope NVIDIA continue developing these ideas.
We could especially use an automatic 'background' display list that resumes when SwapBuffers is executed and is suspended when the VSYNC occurs.

<u>QUADS</u>
We will be using tessellation of low-resolution meshes (made of quad patches) using AMD_vertex_shader_tessellator, with an OpenCL or GS tessellator for NVIDIA (until they release a real tessellator).
Quads are necessary for subdivision surfaces such as Catmull-Clark.

<u>DEPRECATION</u>
There is an awful lot on the deprecation list, though, that nobody should be using in any modern program.
Yes, all the old features need to be in the driver so that programs written for 3.0 and earlier will still run, and I can understand that companies with limited resources may want to add a new feature to a very old engine and you need to support this.
The only thing I would object to is if this:
1) Uses a sizable slab of extra memory (on a 32-bit machine).
2) Slows my program down, e.g. by forcing unnecessary hash-table lookups of buffer names because I can't tell it that I am always going to use GenBuffers.
3) Makes a new extension more complex because it has to work around a conflict with one of the old functions.

If it is indeed true that GL_ARB_compatibility has <u>no</u> performance impact on our program then I don't care that it's there.

However, when I specifically ask for a 3.1 context I am specifically asking for a context optimised for modern features, but a driver with GL_ARB_compatibility simply ignores what I have asked for.
A 3.0 context could have extensions for every new feature in 3.1, so if 3.1 has GL_ARB_compatibility then the 3.0 and 3.1 contexts would be identical.
Future OpenGL additions can simply be added to 3.0 (and 2.1) as extensions, leaving 3.1 for those that DON'T want compatibility.

NVIDIA adds special optimisations to its drivers for specific game engines by detecting the name of the executable, but this does not apply to the smaller developers.
All i really want is a way to tell the driver that my program is well behaved (always uses GenBuffers, uses properly structured display lists etc.) and will not use any obsolete functions.
Hence GL_ARB_Compatibiliy should only be supplied if the program <u>asks for it</u>.
OpenGL3 already contains mechanisms capable of exactly this purpose, having separate profiles for 'Compatability' and 'Performance' would allow a driver to optimise itself for each case (without <u>requiring</u> it to do so).
OR add a WGL_CONTEXT_BACKWARD_COMPATIBLE_BIT_ARB to the WGL_CONTEXT_FLAGS.

Deprecated features that are still needed, such as quads and an enhanced version of display lists could be added back to the performance profile with specific extensions.
But the really bad stuff like application generated names or display lists that are allowed to contain a glBegin with no glEnd, just has to go.
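For reference, separate core and compatibility profiles did arrive later, with OpenGL 3.2 and WGL_ARB_create_context_profile. A minimal sketch of asking for one explicitly, assuming wglext.h from the Khronos registry is available and a dummy context is already current so that wglGetProcAddress can resolve the entry point:

// Sketch only: request a 3.2 context with an explicit profile. Error handling
// beyond the null checks is omitted; the pixel format on 'hdc' is assumed set.
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   // WGL_CONTEXT_* tokens and PFNWGLCREATECONTEXTATTRIBSARBPROC

HGLRC createProfiledContext(HDC hdc, bool wantCompatibility)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
    if (!wglCreateContextAttribsARB)
        return NULL;   // driver predates WGL_ARB_create_context

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,
            wantCompatibility ? WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB
                              : WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };
    return wglCreateContextAttribsARB(hdc, NULL, attribs);
}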

scratt
06-08-2009, 10:11 PM
I think Mark Kilgard's last post is a fitting end to this "dialogue of the deaf" (if I may call it that).

Ha ha. It's certainly better than his earlier one, anyway! ;)
Not that I agree with everything he says in that one either, though.

V-man
06-09-2009, 07:41 AM
I read these posts just today.
My opinion:
OpenGL has failed in some respects while Direct3D has done things right. Direct3D aimed at games. D3D used COM, which helped to clearly demarcate version differences. D3D is a HAL. Games + Direct3D + Microsoft are ruling the 3D scene.
Tools have gravitated to D3D (3D Studio, Maya, and who knows what else) due to games.

The fact that NVIDIA has already written good code should not affect GL's design and future.
It's better to have a lightweight new API and layer GL on top of it. What the heck was wrong with Longs Peak???

zed
06-09-2009, 03:40 PM
Like I've asked before:
What's stopping nvidia/ati etc. from releasing OpenGL ES drivers for the various OSs?
I've got no interest in OpenGL 3.0, but I would perhaps switch over to OpenGL ES.

scratt
06-09-2009, 07:53 PM
Like I've asked before:
What's stopping nvidia/ati etc. from releasing OpenGL ES drivers for the various OSs?
I've got no interest in OpenGL 3.0, but I would perhaps switch over to OpenGL ES.

Interesting, and I kind of agree with V-man's comment too. But...
It seems he is basically suggesting GL 3.x with no deprecation support, and then anything a specific vendor wants to add layered over the top, right?

The problem I see with that is the problem we have now: certain vendors offering certain things, and the same crunch when you need to support more than one subset of hardware.

Also, out of interest which version of ES would you want?

I have really enjoyed the relative confinement of working with GLES1.1 these last few months (and finally was forced to grasp the finer points of FF GL_TEX_ENV, multi-texturing etc.), but have started to miss shaders a lot...
Looking forward I am really excited about working with GLES2.0.

So in that sense GLES2 as a base API is an interesting proposition.

Ilian Dinev
06-09-2009, 08:24 PM
Just a random thought: EGL will have to emerge somehow. Microsoft will probably not make that DLL, however easy it would be. Vendors generally don't provide a standalone DLL with their drivers, though cg.dll is a step in that direction. But so far they've only been providing the .so files directly.
Another thought: extra licensing fees; Khronos views GL and GLES as separate.
</naive thoughts>

zed
06-10-2009, 03:49 PM
OK, you don't want to follow the latest fads, whatever, but...

I believe OpenGL ES devices are about to take off in a huge way (there are already ~40 million iPhone/iPod OpenGL ES devices out there, plus Android etc., so it's not exactly unused).
Apple have just announced the new OpenGL ES 2.0 iPhone (so it supports shaders, unlike the previous iPhones).
The thing is, the cheapest model will be priced at $99!! (Hell, even me, who's no fan of Apple, will get one at that price.) Apple have also ordered 100 million 8 GB chips, so they believe they're going to sell a few.


Vendors generally don't provide a standalone DLL with their drivers
Explain "naked": all the hardware I have seems to come with various DLLs from the manufacturers.

Ilian Dinev
06-10-2009, 05:33 PM
Explain "naked": all the hardware I have seems to come with various DLLs from the manufacturers.
I simply haven't seen something like wsock32.dll being overwritten/supplied by hardware vendors.

That's $99 when you subscribe to a $100/month plan for 2 years. And around $600 to get it from eBay after the price hiking ends (in about 2 years, given how late that happened with the first iPhone).

"Pandora" has ES2.0 support.

GLES on iPhone is fun; on Symbian it might be, once you overcome the OS fluff and get your game playable on a tonne of screen sizes.
GLES on Android and other Java phones is a PITA (on PC we bitch when a call takes 2k+ CPU cycles; imagine what it is when it takes 650k cycles). Java unfortunately stays strong on mobile devices, even though CPUs have had MMUs for a decade already.

scratt
06-10-2009, 07:51 PM
GLES2.0 is only on the new iPhone anyway AFAIK.
*ducks swat from NDA gods*

And as has been said they are really starting from $99+++++++++++.

However, if you want to learn GLES, and / or GLES2.0 you can download the DevKit from Apple for free as long as you register. The only restriction is putting SW onto your device. The Simulator has always supported GLES and I don't see any reason why it won't have a full GLES2.0 implementation on it.

Great place to start though.

But then again... http://Beagleboard.org/

0r@ngE
06-12-2009, 01:00 AM
However, if you want to learn GLES, and / or GLES2.0 you can download the DevKit from Apple for free as long as you register. The only restriction is putting SW onto your device. The Simulator has always supported GLES and I don't see any reason why it won't have a full GLES2.0 implementation on it.

But why? You can learn GLES2.0 with AMD OpenGL ES 2.0 Emulator
http://developer.amd.com/gpu/opengl/Pages/default.aspx
* Support for core OpenGL ES 2.0 functionality
* Support for many important OpenGL ES 2.0 extensions
* Support for EGL 1.3

The OpenGL ES Emulator Control Panel enables control of many emulator options including:
* Modifying the screen size
* Modifying the available GPU memory
* Performance throttling
* Sending debugging output to files
* Viewing debugging output on the screen

Just try it!

scratt
06-12-2009, 01:15 AM
Great. Another option.

I wasn't saying people had to use the Apple Simulator. Just that it's an option.

At the end of the day you really want to see what happens on real hardware.
So getting a device of some sort that matches your target audience is going to be the best option.

A configurable simulator sounds as dangerous as the one major pitfall of Apple's Simulator: the fact that performance in your emulation environment and on your target platform is *never* the same.

Ilian Dinev
06-12-2009, 09:56 AM
Identical or even comparable performance between simulator and device has never been available anywhere, imho, unless you emulate a device with 1000+ times less powerful hardware, i.e. an NES on a Core 2 Duo. And it has never been among the major hurdles. Just having one screen size, the same input methods, and newer models that are never slower are huge, previously unseen bonuses in mobile gamedev, in my experience.

Brolingstanz
06-12-2009, 11:08 AM
Even I can see the value of fancy graphics on a phone, if say due to circumstances beyond your control you're forced to burn a few hours in a broom closet, with only the items on your person.

Jan
06-12-2009, 12:09 PM
Assuming you were in a broom closet "due to circumstances beyond your control" and had "to burn a few hours", couldn't we assume that "the items on your person" might not include a phone? Or anything at all?

Then again, I don't know what YOU usually do in a broom closet...

Brolingstanz
06-12-2009, 12:16 PM
I'd rather not say what I usually do in a broom closet, but I can assure you it's perfectly legal.

knackered
06-14-2009, 01:41 PM
and so, as the soft breeze gently nudges the sun over the horizon, another potentially interesting thread dissolves into bollocks.

scratt
06-15-2009, 10:12 AM
Just a bit. :)

handsomeforest
01-24-2010, 06:21 PM
I totally agree with Mark Kilgard regarding the OpenGL compatibility issue. There is really no reason to deprecate old and working OpenGL features for application programmers. I can understand that OpenGL driver engineers would like this because it would make their jobs easier. I can also understand why some game programmers would like this because it would be less confusing. But there is so much CAD, scientific and engineering software on the market today that uses the non-core features. This software actually makes a real difference in improving human lives, as opposed to just having fun in computer games. It is simply a ridiculous proposition to require civil engineers, mechanical engineers, aerospace engineers etc. to not use modern graphics cards in order to keep using their existing software. It is equally ridiculous to require them to purchase new graphics hardware in order to use new software written only with the new OpenGL core features.

Alfonse Reinheart
01-24-2010, 08:03 PM
It is simply a ridiculous proposition to require civil engineers, mechanical engineers, aerospace engineers etc. to not use modern graphics cards in order to keep using their existing software.

Um, exactly how would this be requiring them not to use OpenGL? What they would be required to do is change their applications. You're not losing any functionality; you're just streamlining the API.

Now, since Kilgard's position is NVIDIA's position, and NVIDIA owns enough of the GL market to be able to kill any proposal they don't like, deprecation is effectively dead as an API cleanup tool. No one will ever, ever write a purely core API. The most deprecation can be is a guide to the API paths that will actually be performance-friendly.

This is the second time OpenGL has missed an opportunity to build a better API. They won't get a third one. So you can expect fixed-function, display lists, immediate mode, and any number of other poorly thought out features to be supported in perpetuity in OpenGL.

Which is also why OpenGL will be trapped forever on desktops. Any embedded devices will be using OpenGL ES, which is much like what GL 3.0 ought to have been.

handsomeforest
01-24-2010, 08:44 PM
If a customer has existing software that uses a "deprecated" OpenGL feature and then happens to upgrade his hardware, he would expect his old software to keep working on the new hardware. Of course, he could ask the application developer to upgrade the software, but the developer could choose not to, for reasons such as economics.

Nothing in OpenGL 3.2 prevents you from writing new software using only the pure core OpenGL features, if you choose to. Bear in mind that successful software needs to reach the maximum number of customers. For some software, it is not practical to require all customers to use very new hardware (such as new game consoles) on which OpenGL 3.2 is available, unfortunately.

I think an analogy can be found in the C++ programming language. Some people prefer to use STL's vector; others may find a raw array easier to use. We cannot say that because STL's vector is better designed, we should deprecate the "new" keyword. There was once a time when some zealots thought Java should replace C++ because Java is a purely object-oriented language. But the beauty of C++ is that it does not impose a particular programming pattern on programmers. That is one of the reasons C++ is successful. A person should not look only at his own narrow application area and then try to impose his narrow views on others.

Alfonse Reinheart
01-25-2010, 01:07 AM
If a customer has existing software that uses a "deprecated" OpenGL feature and then happens to upgrade his hardware, he would expect his old software to keep working on the new hardware.

Since when? There has never been any guarantee that any old software will run on new hardware with no modifications. There are plenty of examples of old code that simply doesn't work on new hardware.

Furthermore, that's not what was ever being discussed. Even in the full API rewrite land of Longs Peak, the old GL 2.1 would still be there as a legacy API. You simply couldn't intermingle them, so if you wanted to use LP features, you had to use LP in full. If you created a Longs Peak context, it was a Longs Peak context and exposed LP functions. If you created a 2.1 context, it was a 2.1 context and exposed 2.1 functions.


I think an analogy can be found in the C++ programming language. Some people prefer to use STL's vector; others may find a raw array easier to use. We cannot say that because STL's vector is better designed, we should deprecate the "new" keyword.

This is a pretty terrible analogy, as the "new" keyword is used for things other than raw arrays.


But the beauty of C++ is that it does not impose a particular programming pattern on programmers.

Of course it does. Are functions first-class objects? No. Ergo: minimal functional programming at best. The mere fact that it is statically typed means it imposes a programming pattern on the programmer.

Ketracel White
01-29-2010, 05:24 AM
No one will ever, ever write a purely core API. The most deprecation can be is a guide to the API paths that will actually be performance-friendly.


Of course not. The entire idea of the deprecation mechanism was based on flawed thinking. You can't have full backwards compatibility and a streamlined modern API using the same interface. The only way to solve this would have been a fresh start with a modern API that got rid of all the inconvenient ballast OpenGL is saddled with.

Just removing the obsolete features and declaring the rest 'core' is not going to do it. My major issue with the core profile is that even though it got rid of a lot of useless stuff (along with some I'm sorry to see gone), it didn't do anything to make the remaining features easier to use. It's still the same old, atrocious API that has been bothering me for 10 years now. So for most programmers there's just no motivation to switch. 2.1 + extensions is mostly as good as 3.2, but has the big advantage that it's much easier to target both modern and old hardware with the same code base.

Ilian Dinev
01-29-2010, 09:35 AM
...it didn't do anything to make the remaining features easier to use. It's still the same old and atrocious API that has been bothering me for 10 years now. So ...
I have to disagree here. Put a lightweight wrapper on top, and GL 3.2 can look like DX10 if you want: vertex-attribute declaration strings that your GL 3.2 path binds to generic attribs and your 1.5-2.1 path binds to fixed-function attribs; uniforms that you compute with your favourite math library and upload in whatever fashion is optimal for 3.2 vs 2.1 vs 2.0 vs 1.5; VBOs that are available on all cards; etc. It's certainly as rosy as the DX8/9/10 transitions. A minimal sketch of what I mean follows after the quote below.
Like Alfonse wrote recently elsewhere:

Your view is more API-centric than mine. I have long since abstracted OpenGL out of my application. I have a nice API that uses objects and such, while under the hood it makes all of those OpenGL API calls.
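A minimal sketch of that kind of wrapper, under the assumption that GLEW (or similar) provides the entry points; the struct, the function name and the two-path split are purely illustrative, not from any real library:

// Sketch only: the GL3 path feeds a generic attribute declared by the shader,
// the legacy path maps the same data onto the fixed-function vertex array.
#include <GL/glew.h>

struct VertexStream {
    GLuint  vbo;      // buffer holding position data
    GLint   size;     // components per vertex, e.g. 3
    GLsizei stride;   // bytes between vertices (0 = tightly packed)
};

void bindPositions(const VertexStream& vs, bool useCoreProfilePath, GLuint genericIndex)
{
    glBindBuffer(GL_ARRAY_BUFFER, vs.vbo);
    if (useCoreProfilePath) {
        // GL 3.x path: bind to a generic attribute location.
        glEnableVertexAttribArray(genericIndex);
        glVertexAttribPointer(genericIndex, vs.size, GL_FLOAT, GL_FALSE, vs.stride, 0);
    } else {
        // GL 1.5-2.1 fallback: feed the fixed-function vertex array instead.
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(vs.size, GL_FLOAT, vs.stride, 0);
    }
}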

Ketracel White
01-29-2010, 12:37 PM
To me the fundamental flaw in OpenGL's design is that everything has to be bound to the context before being used. As a result it's not possible to write something that can reliably manipulate objects.

Therefore any abstraction layer placed on top of it will suffer some problems; especially if you have to work with third-party code you have no control over, you'll be in trouble.

Something like 'why the hell does this not use the texture I want it to use?' And then, after scratching your head for month after month wondering why your code is not working, you realize that you depend on code that doesn't play nice. Well, I was there and it's not nice.

So, bottom line: you can abstract the API all you want, but what you can't abstract away without getting inefficient is its completely outdated design.


I have to disagree here. Put a lightweight wrapper on top, and GL 3.2 can look like DX10 if you want: vertex-attribute declaration strings that your GL 3.2 path binds to generic attribs and your 1.5-2.1 path binds to fixed-function attribs; uniforms that you compute with your favourite math library and upload in whatever fashion is optimal for 3.2 vs 2.1 vs 2.0 vs 1.5; VBOs that are available on all cards; etc.


... and now take one guess what the average programmer working under tight time constraints will do. Right! He'll choose the approach where he does not need to duplicate code for everything, so he will most likely implement GL 2.1 only, with some extension checks for modern cards. Why should he go the GL 3.2 route, where he needs to do everything with shaders? It's just more work. All the nice, convenient fixed-function stuff is still there, and a significant portion of any application does not need shaders. Why waste work on them if he can code one path that fits all hardware without suffering any performance loss? From an economic standpoint, going 3.2 would be a waste of valuable time. The stuff that's really useful is all available as 2.1 extensions, with the added advantage that it can be combined with what was deprecated.

Alfonse Reinheart
01-29-2010, 01:59 PM
Therefore any abstraction layer placed on top of it will suffer some problems, especially if you have to work with third party code you have no control over you'll be in trouble.

So, what you're talking about is what happens when you call code you don't control that uses OpenGL. That code should be part of your abstraction. Anything that does rendering should be part of the abstraction.


Why should he go the GL 3.2 route where he needs to do everything with shaders?

Um, have you even been reading the thread? Deprecation is dead! If you want to use display lists or fixed function or whatever else, it is still there!

Name an implementation of GL 3.2 that does not include the compatibility profile. Just one of them.

Ketracel White
01-29-2010, 02:46 PM
Therefore any abstraction layer placed on top of it will suffer some problems; especially if you have to work with third-party code you have no control over, you'll be in trouble.

So, what you're talking about is what happens when you call code you don't control that uses OpenGL. That code should be part of your abstraction. Anything that does rendering should be part of the abstraction.


Very funny! Tell me, how am I supposed to (efficiently) abstract code that I only have in binary form and where I can't even tell what exactly it's doing? All I know is that it doesn't play by the rules, so I had to put it in a thick and very clumsy wrapper to use it.





Um, have you even been reading the thread? Deprecation is dead! If you want to use display lists or fixed function or whatever else, it is still there!


I know. But I was talking about the core profile. Since deprecation is dead, what motivation is there not to use the old features (a.k.a. GL 2.1 plus extensions)?




Name an implementation of GL 3.2 that does not include the compatibility profile. Just one of them.

Let me guess: none! Nobody can afford to drop the old stuff, which makes the core profile an exercise in pointlessness. The entire thing was so ill-conceived that it was doomed to fail from the first time it was mentioned.

Alfonse Reinheart
01-29-2010, 03:15 PM
Very funny! Tell me, how am I supposed to (efficiently) abstract code that I only have in binary form and where I can't even tell what exactly it's doing? All I know is that it doesn't play by the rules, so I had to put it in a thick and very clumsy wrapper to use it.

For "telling what exactly it's doing", GLIntercept is a reasonable solution. That's orthogonal to abstracting.

As for how to abstract it, it's simple: no code outside of your abstraction layer may call this code. See? Your abstraction would have a function like "DoThatThingTheBinaryBlobDoes()", and the details would be handled in the implementation of that function. It would set the appropriate GL state, call the actual binary library function, and restore the GL state as needed for the rest of the abstraction to work.
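A minimal sketch of that wrapper idea; the blob's entry point is hypothetical, and only the active texture unit and its 2D binding are preserved here (a real wrapper would cover whatever state the blob actually disturbs):

// Hypothetical wrapper around a third-party binary blob that renders with OpenGL.
#include <GL/glew.h>

extern void ThirdPartyBlobRender();    // assumed entry point of the binary library

void DoThatThingTheBinaryBlobDoes()
{
    GLint savedActiveUnit = 0, savedTexture2D = 0;
    glGetIntegerv(GL_ACTIVE_TEXTURE, &savedActiveUnit);
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &savedTexture2D);

    ThirdPartyBlobRender();            // the blob may bind/enable whatever it likes

    // Put things back the way the rest of the abstraction expects them.
    glActiveTexture((GLenum)savedActiveUnit);
    glBindTexture(GL_TEXTURE_2D, (GLuint)savedTexture2D);
}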


Since deprecation is dead, what motivation is there not to use the old features (a.k.a. GL 2.1 plus extensions)?

2.1 + extensions is not the same as 3.2 compatibility. Indeed, I imagine if you're on a 3.2 compatibility capable implementation, you can't get 2.1 at all unless you specifically ask for it. The implementation is free to give you 3.2 compatibility.


The entire thing was so ill-conceived that it was doomed to fail from the first time it was mentioned.

It was only doomed because NVIDIA doomed it. They decided to support compatibility profiles in perpetuity, and that's the end of it.

If NVIDIA and ATI had said, "We'll support GL 2.1, but all new stuff will be limited to 3.x core," it would have worked. ARB_compatibility and the compatibility profile are what killed it.

Ketracel White
01-30-2010, 01:55 AM
Very funny! Tell me, how am I supposed to (efficiently) abstract code that I only have in binary form and where I can't even tell what exactly it's doing? All I know is that it doesn't play by the rules, so I had to put it in a thick and very clumsy wrapper to use it.

For "telling what exactly it's doing", GLIntercept is a reasonable solution. That's orthogonal to abstracting.

As for how to abstract it, it's simple: no code outside of your abstraction layer may call this code. See? Your abstraction would have a function like "DoThatThingTheBinaryBlobDoes()", and the details would be handled in the implementation of that function. It would set the appropriate GL state, call the actual binary library function, and restore the GL state as needed for the rest of the abstraction to work.



That's what I'm doing, but that's precisely what shouldn't be needed, as it makes the code inefficient (meaning, in this particular case, that I have to restore the entire texture state for up to 8 texture units each time I call such a function). You can abstract shitty design all you want, but there are always points where the [censored] bleeds through to bite you in the ass. And you can twist it any way you want, but I call a system that maintains one global state for the entire application badly designed. I admit that in ancient times it may have seemed convenient not to carry around all those pointers, but in the end it was still a bad idea.


It was only doomed because NVIDIA doomed it. They decided to support compatibility profiles in perpetuity, and that's the end of it.


Well, you can see this from different viewpoints. It was a futile endeavour to try to bring the current OpenGL API up to date without changing the fundamentals of how it works.

I can't say who is to blame for the fact that instead of a real upgrade to something modern we got these half-assed changes.

Instead of a new API designed to work with modern feature sets, all we would have been left with is the same old and outdated system, just with fewer features. The bad decisions that were made when the modern features were first implemented were not addressed by any of this (like the stupid hint system for telling how a VBO is to be used, for example).

So essentially GL 3.0 core was just 2.1, plus making a few common extensions core, minus lots and lots of convenience. This may be something to get a few geeks excited, but to any real-world programmer such a system is not attractive; he would most likely stick to 2.1 plus the already existing extensions.

The thing is, if you need a fresh start, do a fresh start - even if it means changing fundamental design paradigms. Yes, this would have resulted in an incompatible API, but hey, does it really make a difference? 3.x core omits so much of the old functionality that it's mostly impossible to port code straight over anyway.

So instead of truly dumping the baggage, including the design flaws inherent in the API, they just decided to mark some functionality obsolete and did nothing about the other issues (like the global application state, for example). So instead of a clean and modern API, all we got is a stripped-down version of what we already have, and that hardly serves as motivation to migrate. You gain nothing from doing so.

Well, I guess that's what you get when decisions have to be made by committee. Since you can't satisfy everyone, the best course of action is to do nothing.


So to boil it down from my point of view: GL 3.x core contains all the mess OpenGL implies but none of the convenience that previously made up for it. No, thank you, I'd say.



If NVIDIA and ATI had said, "We'll support GL 2.1, but all new stuff will be limited to 3.x core," it would have worked. ARB_compatibility and the compatibility profile are what killed it.

Possibly. I doubt it. We would still have been saddled with an API that was only brought halfway into the future, not to mention that much of the new stuff would have been added to 2.1 as extensions, which would have resulted in the same situation we are in now. So in my opinion the current situation was inevitable. I knew from the moment the 3.0 specs were presented that it wouldn't work out.

Alfonse Reinheart
01-30-2010, 03:06 AM
That's what I'm doing but that's precisely what shouldn't be needed as it makes the code inefficient

First, glBindTexture is not necessarily inefficient. Especially when you're binding texture 0 (aka: unbinding textures). Binding things does not imply the desire to render with them.

Second, yes, OpenGL requires that you be responsible for the use of the API. That means that you need to be responsible for all use of the API, even usage that you have decided to cede responsibility over to a third party library. You made the choice to use a binary blob library with no source code access, one that makes no guarantee as to what state it is and is not changing. And therefore, you must take responsibility for your choices.


meaning in this particular case that I have to restore the entire texture state for up to 8 texture units each time I call such a function.

You have piqued my curiosity. What would make you think you need to do that?

It's been a long time since I did any texture environment fixed-function work, but in the land of shaders, it just doesn't matter. You bind a program, and you bind the textures that this program uses. If there are some other texture units with textures bound to them, nobody cares; it won't affect the rendering. And if those texture units did have something bound to them, it likewise does not matter, as you will be binding the needed textures for this program.

The only time I could imagine needing to clean out texture unit state would be if you bound a program, did some rendering with it, then called some function that does arbitrary unknown stuff to the texture state, and then wanted to keep rendering as if the unknown stuff had not happened. And even then, you only need to clean out the texture state that the program was actually using.
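To illustrate the point, a draw in the shader world binds only what the program actually samples; the program, texture and uniform names below are placeholders:

// Illustrative only: whatever happens to be bound on other texture units is
// irrelevant to this draw, because the program only samples unit 0.
#include <GL/glew.h>

void drawWithDiffuse(GLuint prog, GLuint diffuseTex, GLuint vao, GLsizei vertexCount)
{
    glUseProgram(prog);

    glActiveTexture(GL_TEXTURE0);                  // unit 0 is the only one sampled
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glUniform1i(glGetUniformLocation(prog, "u_diffuse"), 0);   // sampler reads unit 0

    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}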


You can abstract shitty design all you want but there's always points where the [censored] bleeds through to bite you in the ass.

Absolutely not. If you have an abstraction that allows the underlying implementation to bleed through, this is the textbook definition of a bad abstraction. The whole point of an abstraction is to have the freedom to change the implementation without affecting the interface.

BTW, I think you missed a [censor] point.


like the stupid hint system for telling how a VBO is to be used, for example.

Again, my curiosity is piqued. Exactly how would you have specified usage for buffer objects? Bear in mind that concepts like "AGP" don't last forever; even video memory itself may fall by the wayside as a location for permanent storage. Also bear in mind that buffer objects are not limited to mere vertex data.

I'm not entirely happy with the usage hints. I think they could have been a bit clearer as to when to use DYNAMIC. But overall, I think they were a pretty legitimate part of the buffer object API.
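For reference, the hints in question are the last argument of glBufferData; a small sketch of the usual intent behind each (the buffer names are placeholders, assumed already generated with glGenBuffers):

// Usage hints only describe intended access patterns; the driver is free to
// place the storage wherever it likes.
#include <GL/glew.h>

void createBuffers(GLuint staticVbo, GLuint dynamicVbo, GLuint streamVbo,
                   const void* meshData, GLsizeiptr meshBytes)
{
    // Written once, drawn many times (typical static geometry).
    glBindBuffer(GL_ARRAY_BUFFER, staticVbo);
    glBufferData(GL_ARRAY_BUFFER, meshBytes, meshData, GL_STATIC_DRAW);

    // Rewritten now and then, drawn many times between updates.
    glBindBuffer(GL_ARRAY_BUFFER, dynamicVbo);
    glBufferData(GL_ARRAY_BUFFER, meshBytes, NULL, GL_DYNAMIC_DRAW);

    // Rewritten roughly every time it is drawn (e.g. per-frame particle data).
    glBindBuffer(GL_ARRAY_BUFFER, streamVbo);
    glBufferData(GL_ARRAY_BUFFER, meshBytes, NULL, GL_STREAM_DRAW);
}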


I can't say who is to blame for the fact that instead of a real upgrade to something modern we got these half-assed changes.

The ARB. They were working on it, and they failed to get it done. So instead, they tried deprecation rather than a single big change. NVIDIA torpedoed them on deprecation, so they're stuck with the old functionality.


like the global application state for example

You keep talking about this "global application state" as though it is some abstract concept. It isn't. It's called the GPU.

You only have one context because you only have one GPU*. You are rendering to a single thing. And that single thing has certain state. And changing that state has a cost. By exposing the context, you are able to determine how much state you are changing and find ways to change less state. A purely object-based API with no context, where you call a render function with a VAO, program, texture set, and FBO render target, would make this much harder on the implementation.

The actual problem with the context isn't that it exists. It is that the context is used for more than rendering. When you bind a texture, it could be because you want to render with it on that texture unit. Or maybe you just want to upload some data to it. The implementation doesn't know. So there has to be a lot of back-end work that figures out when you are just poking around with object state, and when you really want to use the object.

* I'm aware that there are a lot of multi-GPU systems out there. But the drivers do their best to pretend that these are a single GPU.


So to boil it down from my point of view, GL 3.x core contains all the mess OpenGL implies but none of the convenience that previously made up for it. No, thank you, I'd say.

There is one good thing that comes out of this: it acts as a strong demarcation point. As new features are added to the API, they will be increasingly incompatible with legacy functionality.

It's not much, admittedly. But it's something.


not to mention that much of the new stuff would have been added to 2.1 as extensions

Did you miss the part where I said, "all new stuff will be limited to 3.x core?" That includes extensions. Implementations decide what to expose on what hardware.

peterfilm
01-31-2010, 06:48 PM
When you bind a texture, it could be because you want to render with it on that texture unit. Or maybe you just want to upload some data to it. The implementation doesn't know. So there has to be a lot of back-end work that figures out when you are just poking around with object state, and when you really want to use the object.
I thought that most implementations postponed any decision-making of this sort until a draw/read/write operation is executed? All you're doing with things like glBindTexture is setting a bit in a bitfield. The real work (i.e. setting the states GPU-side) is done when you draw/read/write some kind of resource, and at that point it *knows* what you want to do (and can even defer setting state irrelevant to that operation, such as the blend op if all you're doing is updating a texture's contents).

Alfonse Reinheart
01-31-2010, 07:53 PM
i thought that most implementations postponed any decision making of this sort until a draw/read/write operation is executed? All you're doing with things like glbindtexture is setting a bit in a bitfield.

Exactly. Imagine what OpenGL could do if it could tell the difference between bind-to-edit and bind-to-render. It could give you errors at bind time. For example, binding an incomplete texture is currently perfectly legal. If OpenGL implementations could tell the difference, they would give a GL error immediately, not some time later, thousands of lines of code away from the source of the actual problem.

Imagine FBO creation and management in such a system. An implementation could give an error when an improperly created FBO is bound to the context, rather than when you render with it.

It's much harder to tell what state happens to be incorrect when you draw than when you first set that state.

peterfilm
02-01-2010, 12:51 AM
No, I want my pipeline to be parallel! I'd rather use a sync object to check the error state... and maybe wait on the sync object if I care about the result straight away (to aid debugging, I suppose). I haven't read the whole of this thread, but you seem to be in favour of synchronising the implementation with the application thread.

Alfonse Reinheart
02-01-2010, 01:08 AM
I haven't read the whole of this thread, but you seem to be in favour of synchronising the implementation with the application thread.

I don't see what that has to do with anything being discussed.

When you call glBindTexture, the implementation must immediately fetch the object associated with that texture name. This is so that it can modify that object, or get pointers to video memory for that texture's data (if you render with it).

When you call glDraw*, the implementation must immediately get the state for that object and copy it off somewhere for rendering. It must set that the texture in question is in use, so that attempts to destroy or change the object can be delayed.

So don't think that "nothing" happens when you bind a texture and change its state. A lot has to happen. But none of it causes a synchronization of the rendering pipeline, and nothing I'm proposing would cause such a synchronization either. It simply describes more precisely what it is you want to do: binding means I want to draw with the texture, and it means only that. Whereas now there is some ambiguity.

peterfilm
02-01-2010, 01:01 PM
That certainly doesn't tally with what I've read from NVIDIA on the subject of bindless graphics. I understood that things like that immediately get posted to another thread, and subsequent lookups of object state and validation happen there. Calling glGetError stalls the application thread waiting for the driver's thread to return the error state. It doesn't flush the pipeline, but it makes your app sync with the driver thread. But you probably know more than I do. I'm always getting the wrong end of the stick.

Alfonse Reinheart
02-01-2010, 01:41 PM
that certainly doesn't tally with what I've read from nvidia on the subject of bindless graphics.

What I described is exactly why bindless was invented: so that you do not have to bind buffer objects in order to render. Buffer object binding (or rather, attaching a buffer object to the VAO state) is expensive because it must access various pieces of buffer-object state, rather than just moving a pointer around.

peterfilm
02-01-2010, 03:06 PM
i should shut up and read the thread!

aronsatie
02-05-2010, 05:31 AM
I re-read this topic with great interest just recently. It has moved way beyond the reason I opened it, but I would like to take the opportunity to go back to my main problem with ditching display lists.

Someone wrote that one does not need any wgl functions to display text on Windows, that textures could be used instead, and that hardly anyone uses extruded 3D text anyway. The problem is that I do, extensively. If I can't use wglUseFontOutlines with a >=3.0 context then it is a bigger problem for me than just making a decision about display lists (even if NVIDIA still supports them).

I know that there are several libraries available as alternatives to wglUseFontOutlines, but they seem to be made for OSs other than Windows (Linux mainly), and they require other libraries that work only if yet other libraries are present, etc. It is not even clear to me what form they convert the font glyphs into. At least wglUseFontOutlines is very simple. One display list for each character in a font is perfect, unless you need to write text in a scripting language.

Does anyone have similar needs? 3D text and the urge to move beyond OpenGL 2.1?

Ilian Dinev
02-05-2010, 05:59 AM
How about GetGlyphOutline() ? Here's an example of custom rasterization of the raw curve data it provides:
http://www.antigrain.com/tips/win_glyph/win_glyph.agdoc.html
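For the extrusion case, a rough sketch of pulling the native outline data with GetGlyphOutline; error handling and the parsing of the returned TTPOLYGONHEADER/TTPOLYCURVE records are left out, and hdc is assumed to already have the desired font selected:

// Sketch only: fetches the raw TrueType outline of one glyph so you can
// tessellate/extrude it yourself.
#include <windows.h>
#include <vector>

std::vector<char> getGlyphOutlineData(HDC hdc, wchar_t ch)
{
    GLYPHMETRICS gm = {};
    MAT2 identity = {};                  // 2x2 fixed-point identity transform
    identity.eM11.value = 1;
    identity.eM22.value = 1;

    // First call with a null buffer returns the required size in bytes.
    DWORD size = GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &gm, 0, NULL, &identity);
    if (size == GDI_ERROR || size == 0)
        return std::vector<char>();

    std::vector<char> buffer(size);
    GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &gm, size, &buffer[0], &identity);
    return buffer;                       // TTPOLYGONHEADER + TTPOLYCURVE records
}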

Alfonse Reinheart
02-05-2010, 10:47 AM
I re-read this topic with great interest just recently.

You clearly did not read it carefully enough. For example:


If I can't use wglUseFontOutlines with a >=3.0 context

As I previously said:


Um, have you even been reading the thread? Deprecation is dead! If you want to use display lists or fixed function or whatever else, it is still there!


I know that there are several libraries available as alternatives to wglUseFontOutlines but they seem to be made for OSs other than Windows (Linux mainly), and they require other libraries that work only if other libraries are present, etc.

FreeType. It creates images from fonts and gives you font metrics. It doesn't rely on anything else.
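A minimal sketch of that, with the font path and pixel size as placeholders and error checks trimmed: load a face, render one glyph to an 8-bit bitmap, and you have pixels plus metrics ready to upload as a texture:

// Sketch only: "font.ttf" and the 32 px size are placeholders.
#include <ft2build.h>
#include FT_FREETYPE_H

int renderGlyph(const char* fontPath, unsigned long charCode)
{
    FT_Library lib;
    FT_Face face;
    if (FT_Init_FreeType(&lib)) return -1;
    if (FT_New_Face(lib, fontPath, 0, &face)) return -1;

    FT_Set_Pixel_Sizes(face, 0, 32);                  // 32 px tall glyphs
    if (FT_Load_Char(face, charCode, FT_LOAD_RENDER)) return -1;

    // face->glyph->bitmap now holds an 8-bit coverage image;
    // face->glyph->advance and ->bitmap_left/_top give the metrics.
    // Upload bitmap.buffer (bitmap.width x bitmap.rows) as a texture.
    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return 0;
}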


At least wglUseFontOutlines is very simple.

Simple doesn't imply "powerful" or even "good". It is only simple to use.

Font outlines don't support kerning or antialiasing (beyond multisample, which affects more than just text).

aronsatie
02-05-2010, 12:42 PM
In fact, I read it quite carefully, and not necessarily just your posts. Dukey wrote that wglUseFontBitmaps cannot be used with a 3.1 context, and I assume the same is true for wglUseFontOutlines.

I meant libraries based on FreeType or FreeType2 or whatever, not FreeType itself. And since I wrote that I use extruded 3D text a lot, what good would an image created by FreeType from a font do me? I need vertex, normal and texcoord data, not a bitmap image.

BTW, wglUseFontOutlines does support kerning and even gives you font metrics (although the kerning pairs you get are incomplete). And it does not create images, it draws primitives, so antialiasing is not a concern, at least not at that point.

Alfonse Reinheart
02-05-2010, 12:49 PM
Dukey wrote that wglUseFontBitmaps cannot be used with a 3.1 context

And yet, he did not explain why. I don't know of anything that could possibly have changed to cause this, since wglUseFontBitmaps uses OpenGL commands, and those OpenGL commands still exist and do what they have always done.

So until he shows some proof, I'm going to consider his statement to be in doubt.

Brolingstanz
02-05-2010, 02:12 PM
A hearty second for Freetype. There’s always GLU/Mesa for triangulation routines. A search for ‘triangulation library’ turns up GTS, CGAL and a few thousand others.

One could probably whip up a bare bones, brute force CDT implementation in under 300 lines of code or so - more than adequate to the task of triangulating a few glyphs for extrusion or whatnot.
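Since GLU was mentioned: a compressed sketch of feeding one glyph contour to the GLU tessellator (the contour data and the empty callbacks are placeholders; GLU is not core GL, but it is available almost everywhere):

// Sketch only: 'contour' is a placeholder array of x,y,z triplets for one glyph
// contour. The tessellator emits triangles through the registered callbacks.
#include <GL/glu.h>
#include <vector>

#ifndef CALLBACK
#define CALLBACK
#endif

static void CALLBACK tessBegin(GLenum /*prim*/)  {}
static void CALLBACK tessVertex(void* /*data*/)  { /* collect the GLdouble* into a mesh */ }
static void CALLBACK tessEnd()                   {}

void triangulateContour(std::vector<GLdouble>& contour)
{
    GLUtesselator* tess = gluNewTess();
    gluTessCallback(tess, GLU_TESS_BEGIN,  (void (CALLBACK*)()) tessBegin);
    gluTessCallback(tess, GLU_TESS_VERTEX, (void (CALLBACK*)()) tessVertex);
    gluTessCallback(tess, GLU_TESS_END,    (void (CALLBACK*)()) tessEnd);

    gluTessBeginPolygon(tess, 0);
    gluTessBeginContour(tess);
    for (size_t i = 0; i + 2 < contour.size(); i += 3)
        gluTessVertex(tess, &contour[i], &contour[i]);   // coords and per-vertex data
    gluTessEndContour(tess);
    gluTessEndPolygon(tess);
    gluDeleteTess(tess);
}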

aronsatie
02-06-2010, 07:55 AM
I am not going to pretend that I can whip up a CDT implementation (of any force), but I would still like to know why you think FreeType is the way to go.

Brolingstanz
02-06-2010, 08:21 AM
what's a CDT? I just tossed it in there for a glint of authority.

Alfonse Reinheart
02-06-2010, 11:35 AM
I would still like to know why do you think Freetype is the way to go?

1: It works with arbitrary fonts, not just the ones that Windows has installed.

2: It has full kerning support.

3: It doesn't interface directly with OpenGL. Thus, it uses the parts of OpenGL that you choose to, not the ones it makes you.

aronsatie
02-06-2010, 02:51 PM
OK, these are advantages for sure.

But you wrote that it creates an image from fonts. How can I make it give me a set of vertex (etc.) data instead? Please don't tell me to create a CDT whatever, because I have no idea what that is.

Alfonse Reinheart
02-06-2010, 06:24 PM
How can I make it give me a set of vertex (etc.) data instead?

FreeType is a library for accessing font data (metrics, curves, etc) and rasterizing fonts. If you want a "set of vertex data" for a glyph, you will have to generate it from the glyph's actual data (the set of bezier curves and such).

In short, there's no simple function call to do what you want. You can do this with FreeType, but it will require actual effort on your part.
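To make that "actual effort" concrete: FreeType will walk a glyph's outline for you via FT_Outline_Decompose, and you supply callbacks that turn the moves/lines/curves into your own vertex data. A sketch, assuming the glyph slot was loaded without FT_LOAD_RENDER so it still contains an outline (flattening of the curve segments is omitted):

// Sketch only: the callbacks just record end points; a real implementation
// would subdivide the conic/cubic segments, then tessellate the contours.
#include <ft2build.h>
#include FT_FREETYPE_H
#include FT_OUTLINE_H
#include <vector>

struct OutlinePoints { std::vector<FT_Vector> pts; };

static int moveTo(const FT_Vector* to, void* user)
{ static_cast<OutlinePoints*>(user)->pts.push_back(*to); return 0; }

static int lineTo(const FT_Vector* to, void* user)
{ static_cast<OutlinePoints*>(user)->pts.push_back(*to); return 0; }

static int conicTo(const FT_Vector* /*ctl*/, const FT_Vector* to, void* user)
{ static_cast<OutlinePoints*>(user)->pts.push_back(*to); return 0; }   // should subdivide

static int cubicTo(const FT_Vector*, const FT_Vector*, const FT_Vector* to, void* user)
{ static_cast<OutlinePoints*>(user)->pts.push_back(*to); return 0; }   // should subdivide

void outlineToPoints(FT_Face face, OutlinePoints& out)
{
    FT_Outline_Funcs funcs = { moveTo, lineTo, conicTo, cubicTo, 0, 0 };
    FT_Outline_Decompose(&face->glyph->outline, &funcs, &out);
}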

aronsatie
02-07-2010, 05:51 AM
Thanks, now I understand. It is not what I was hoping for because my knowledge in 'CGT' or whatever is very limited. I was hoping for a library that 'converts' a font to a set of buffers I can upload to a VBO but it seems it does not exist.

BTW is moving to OpenGL >=3.0 really worth the effort? Compared to 2.1 + GLSL 1.2, what do you think are the main benefits of the new API? And I don't mean in theory, because it is clear to me that for a brand new project one should always use the latest API. But in practice, do you think it would be worth the considerable effort to convert/redo a project I have been working on for three years that now works reasonably well in 2.1/GLSL 1.2?

I would appreciate your thoughts about this.

skynet
02-07-2010, 06:26 AM
For my personal projects, I have moved to core GL 3.2. I have not regretted it yet.
What I like most about it is that it somehow "frees your mind":
1.) You no longer need to decide "do I use fixed function for this, or do I write a shader?". Now everything is done in shaders, and writing a small shader is usually easier than configuring the FFP to achieve a certain effect.
2.) Which version of an extension do I use (EXT/ARB/core)? Things like geometry shaders and FBOs are "just there".
3.) In your program, you no longer need to care about mixing various fixed-function states and their modern counterparts. You no longer need to care about glEnable(GL_TEXTURE_2D) and the like. I never need to use glUseProgram(0); shaders are always active, etc.
4.) No more wondering "do I use built-in GL state in my shaders or do I use my own? Which one is faster?". Since most of the built-in stuff is gone, the answer is clear now :-)

There are some downsides, though. Currently, there is no measurable performance difference between the core profile and the compatibility profile. But there is hope that one day the drivers will make a distinction and reward my efforts :-)
I used immediate mode for drawing debugging stuff, so I needed to implement a small "immediate mode emulation" for that purpose. Since all the matrix-stack stuff is gone, I had to implement it myself, using UBOs. Once done, I don't miss it anymore.
What I really miss a bit is glPushAttrib/glPopAttrib... I used it often to temporarily change state and then restore it when done. I have now resorted to a kind of "fixed default state": I simply assume that the state I require is always set a known way, and whenever I need to depart from it, I restore it by hand afterwards.
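For anyone curious, a compressed sketch of the UBO side of such a matrix-stack replacement; the block name "Matrices", binding point 0 and the raw float pointer are placeholders, and the matrix itself comes from whatever math library you use:

// Sketch only: uploads one 4x4 matrix (16 floats, column-major) into a uniform
// block bound at binding point 0.
#include <GL/glew.h>

GLuint createMatrixUbo()
{
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, 16 * sizeof(float), NULL, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);     // binding point 0
    return ubo;
}

void useMatrixUbo(GLuint prog, GLuint ubo, const float* mvp)
{
    // Tie the program's "Matrices" block to binding point 0 (once per program).
    GLuint blockIndex = glGetUniformBlockIndex(prog, "Matrices");
    glUniformBlockBinding(prog, blockIndex, 0);

    // Upload the current top of your CPU-side matrix stack.
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, 16 * sizeof(float), mvp);
}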

I really wonder how "foreign" OpenGL code can work together with my code, because I now have to make many assumptions about how state is set, how matrices are transported into my shaders, and the like...

On the professional side, things are a bit different. By far the biggest problem is that for GL 3.2 you need to assume recent drivers, and that can be a real pain. Customers are often reluctant to change their (certified) drivers, so you often have to deal with 2-year-old drivers :-( So GL 3.2 is a no-go yet... maybe in 5 years. If I could tell them that our software would run 2x faster with GL 3.2, they would probably get interested, though...

Stephen A
02-07-2010, 08:32 AM
I really wonder how "foreign" OpenGL code can work together with my code, because I now have to make many assumptions about how state is set, how matrices are transported into my shaders, and the like...

That's my main beef with this bind-to-edit model: third-party code wreaks havoc on your assumptions and results in an interoperability nightmare.

The only sane solution is to code defensively, assume that state is always thrashed, and take the resulting speed hit. Or you can bite the bullet, fall back to the compatibility profile, and use Push/PopAttrib liberally (and take that speed hit, which is generally smaller).

There's nothing we can do about the drivers, however. My solution is to fall back to 2.1 if WGL/GLX_create_context_attribs fails, query for the necessary extensions, and bail out with an "update your drivers" message if they are not supported. Hopefully the situation will improve in the future. Not holding my breath, though.
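A condensed sketch of that fallback strategy on the WGL side, assuming GLEW for the entry points; the required-extension check is reduced to one placeholder string, and the pixel format on hdc is assumed to be set already:

// Sketch only: try a 3.x context via wglCreateContextAttribsARB; if the entry
// point or the call fails, stay on a legacy context and verify the extensions
// we actually rely on.
#include <windows.h>
#include <GL/glew.h>
#include <GL/wglew.h>

HGLRC createBestContext(HDC hdc)
{
    // A temporary legacy context is needed before WGL extension entry points exist.
    HGLRC legacy = wglCreateContext(hdc);
    wglMakeCurrent(hdc, legacy);
    glewInit();                                    // resolves wglCreateContextAttribsARB, if present

    if (wglCreateContextAttribsARB) {
        const int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
                                WGL_CONTEXT_MINOR_VERSION_ARB, 2, 0 };
        HGLRC modern = wglCreateContextAttribsARB(hdc, NULL, attribs);
        if (modern) {
            wglMakeCurrent(hdc, modern);
            wglDeleteContext(legacy);
            return modern;                         // got a real 3.2 context
        }
    }

    // Fallback path: stay on 2.1 and insist on the extensions we depend on.
    if (!glewIsSupported("GL_ARB_framebuffer_object"))   // placeholder requirement
        MessageBoxA(NULL, "Please update your graphics drivers.", "OpenGL", MB_OK);
    return legacy;
}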

V-man
02-07-2010, 10:13 AM
Thanks, now I understand. It is not what I was hoping for because my knowledge in 'CGT' or whatever is very limited. I was hoping for a library that 'converts' a font to a set of buffers I can upload to a VBO but it seems it does not exist.

BTW is moving to OpenGL >=3.0 really worth the effort? Compared to 2.1 + GLSL 1.2, what do you think are the main benefits of the new API? And I don't mean in theory, because it is clear to me that for a brand new project one should always use the latest API. But in practice, do you think it would be worth the considerable effort to convert/redo a project I have been working on for three years that now works reasonably well in 2.1/GLSL 1.2?

I would appreciate your thoughts about this.


It depends on whether you are using glBegin/glEnd, whether you sometimes use the fixed-function pipeline, and whether you use the built-in state in your shaders.
If you are not, then converting is easy.
Sure, do it if it is for learning purposes.

aronsatie
02-07-2010, 10:18 AM
I see. So there is no real performance benefit yet. Since I do almost everything with shaders, the only question is how much better a well-written GLSL 1.50 (afaik) shader would perform than a well-written 1.20 one. (By well written I mean complex, one that needs all the resources.)

From your post I presume the difference is not that big. I understand perfectly about the driver and compatibility issues. Luckily these do not concern me; I can afford to support only NVIDIA (my choice) and the latest drivers. Still, it seems to me that I should not rush to switch from OpenGL 2.1/GLSL 1.2.

peterfilm
02-07-2010, 12:50 PM
On the professional side, things are a bit different. By far the biggest problem is that for GL 3.2 you need to assume recent drivers, and that can be a real pain. Customers are often reluctant to change their (certified) drivers, so you often have to deal with 2-year-old drivers :-( So GL 3.2 is a no-go yet... maybe in 5 years. If I could tell them that our software would run 2x faster with GL 3.2, they would probably get interested, though...
It's not just reluctance: laptop drivers are a nightmare. I have a customer with a Dell laptop who is unable to upgrade from 175.97 because NVIDIA screwed up the external monitor handling code (it remains broken in all subsequent driver versions... go figure). He does a lot of presentations, and it's totally impractical for him not to be able to pipe the display to an external device. I have no answer for this guy; I simply have to maintain a legacy code path for people like him. Laptops are becoming the main computer among our customers (desktop replacements), yet the drivers still seem to be a black art for companies like NVIDIA.

Brolingstanz
02-07-2010, 01:17 PM
#&*%^@! laptops. People who use laptops should probably be ferried off to some far away magical island where they can dance with the bears and make little origami figurines. ;-)

peterfilm
02-07-2010, 01:27 PM
:)
Well, you say that, but I'm using my laptop as a desktop replacement now. It renders faster than my desktop; the mobile GPUs are now just as capable. It's just such a shame that the IHVs don't address these driver problems. They just seem to sweep them under the carpet and blame the laptop manufacturer. Granted, the manufacturers should be shouldering 50% of the blame too, but none of that is any help to the customer. I certainly wouldn't recommend NVIDIA mobile GPUs to our customers if it wasn't for the fact that AMD's OpenGL implementation is so lousy.

aronsatie
02-07-2010, 03:38 PM
'Just as capable' is a huge exaggeration, but I agree that they are better than ever before. However, until someone makes a card with SDI output (fill & key) that I can put in a laptop, I simply cannot use them, although I would love to because I have to carry my PC around a lot.

peterfilm
02-07-2010, 03:57 PM
Huge exaggeration? I have one of these...
http://www.nvidia.com/object/product_quadro_fx_3800_m_us.html
It probably outperforms whatever you've currently got in your desktop. I say probably, because I suppose it is remotely possible you've paid out for a Quadro 4800 or, even less likely, a 5800.

aronsatie
02-09-2010, 10:23 AM
I have a 'lowly' GTX 285. In what respect do you think yours would outperform mine? The specs certainly don't suggest it, and I am not quite sure about the benefits of a Quadro over a GeForce. AFAIK the chips are the same (the Quadro 4800 and 5800 are based on the same chip as mine).

Brolingstanz
02-10-2010, 01:11 PM
NVIDIA has a spec sheet on their website detailing the differences between the GeForce and Quadro, if you're interested...

www.nvidia.com/object/quadro_geforce.html (http://www.nvidia.com/object/quadro_geforce.html)

ZbuffeR
02-10-2010, 01:41 PM
This document is only 7 years old :)

peterfilm
02-10-2010, 01:51 PM
To be honest, I'm prone to exaggeration. It certainly outperforms my desktop!
All the best.

aronsatie
02-13-2010, 09:07 AM
Well, my not knowing the difference comes from suspicions based on what friends of mine with a lot of Quadro and OpenGL experience have told me.

Even though they work with Quadros all the time, they are still unsure whether they are any faster than their GeForce counterparts. They consider them better built and more reliable, and they told me that reading data back from Quadros is faster because it is hardware accelerated. For me, the latter would be a huge advantage because I work with broadcast video, so if anyone could confirm it I would certainly start thinking about buying a Quadro instead of waiting for Fermi.

Ragnemalm
05-23-2012, 11:42 PM
For my personal projects, I have moved to core GL 3.2. I have not regretted it yet.
What I like most about it is that it somehow "frees your mind":
1.) You no longer need to decide "do I use fixed function for this, or do I write a shader?". Now everything is done in shaders, and writing a small shader is usually easier than configuring the FFP to achieve a certain effect.
2.) Which version of an extension do I use (EXT/ARB/core)? Things like geometry shaders and FBOs are "just there".
3.) In your program, you no longer need to care about mixing various fixed-function states and their modern counterparts. You no longer need to care about glEnable(GL_TEXTURE_2D) and the like. I never need to use glUseProgram(0); shaders are always active, etc.
4.) No more wondering "do I use built-in GL state in my shaders or do I use my own? Which one is faster?". Since most of the built-in stuff is gone, the answer is clear now :-)

There are some downsides, though. Currently, there is no measurable performance difference between the core profile and the compatibility profile. But there is hope that one day the drivers will make a distinction and reward my efforts :-)
I used immediate mode for drawing debugging stuff, so I needed to implement a small "immediate mode emulation" for that purpose. Since all the matrix-stack stuff is gone, I had to implement it myself, using UBOs. Once done, I don't miss it anymore.
What I really miss a bit is glPushAttrib/glPopAttrib... I used it often to temporarily change state and then restore it when done. I have now resorted to a kind of "fixed default state": I simply assume that the state I require is always set a known way, and whenever I need to depart from it, I restore it by hand afterwards.

I moved to 3.2 recently, and most of the time it is just fine. Transformations are straight math, geometry is always in arrays. No problem there. (But your point 2 doesn't make sense. EXT and ARB will always be there for new features, and that is a good thing, not a problem. That is how OpenGL stays modern between major releases.)

I do miss immediate mode a little bit (for quick and safe tests as well as first examples for beginners) but I can make a layer for that any day. It isn't hard.

But display lists are what really disturbs me. There is no way to replace them. I have written a layer between a 2D API and OpenGL (Apple's QuickDraw, a really nice API, but the original library is old), and with OpenGL 2 that is no problem; I can do pretty much everything. But with OpenGL 3.2, no. No display lists, no metafiles (Pictures/PICT in QuickDraw, in-memory EMF in Win32, PDFs in Core Graphics).

OpenGL has lost its ability to handle metafiles, and there is no simple (portable and safe) way to build that on top! Or did I miss something?

thokra
05-24-2012, 12:59 AM
I do miss immediate mode a little bit (for quick and safe tests as well as first examples for beginners) but I can make a layer for that any day.

You don't save that much time using immediate mode, and if you want to prototype anything using shaders, well, you'll have to use shaders anyway, which already puts you halfway to complying with the GL 3.1 rules. Plus, if you start prototyping something in immediate mode and later decide to rewrite the whole thing to use, say, 3.3 core, you'll do extra work. You should simply get used to going directly for the new stuff. For beginners it's not advisable to use the FFP at all! Everyone new to OpenGL since 3.1 was ratified should at least start there.



EXT and ARB will always be there for new features and that is something good, not a problem. That is how OpenGL stays modern between major releases.

No, vendor-specific extensions are there for new features, like GL_NV_bindless_texture. EXT extensions, if ever introduced, usually take long enough that the feature is pretty much standard on current hardware. ARB extensions and promotion to core usually take even longer.


But display lists are what really disturbs me. There is no way to replace them.

Although I can't point to the concrete post, I think aqnuep already established that VBOs may perform as fast as display lists if used properly. Unless you profile the difference between using a VBO and a display list on your hardware and current driver, there is no way to tell if and how VBOs are inferior to display lists.
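For completeness, the core-profile replacement being referred to is a static VAO + VBO batch along these lines; attribute location 0 and the tightly packed position layout are assumptions of the sketch:

// Sketch only: builds a static batch once and draws it with one call, which is
// the core-profile counterpart of compiling geometry into a display list.
#include <GL/glew.h>

struct StaticBatch { GLuint vao, vbo; GLsizei count; };

StaticBatch buildBatch(const float* positions, GLsizei vertexCount)   // xyz per vertex
{
    StaticBatch b = { 0, 0, vertexCount };
    glGenVertexArrays(1, &b.vao);
    glGenBuffers(1, &b.vbo);

    glBindVertexArray(b.vao);
    glBindBuffer(GL_ARRAY_BUFFER, b.vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(float), positions, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);                          // attribute 0 = position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glBindVertexArray(0);
    return b;
}

void drawBatch(const StaticBatch& b)
{
    glBindVertexArray(b.vao);
    glDrawArrays(GL_TRIANGLES, 0, b.count);
}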


OpenGL lost its ability to handle metafiles and there is no simple (portable and safe) way to create that on top! Or did I miss something?

Please correct me, but I have never read anything about direct support for graphics file formats in OpenGL in any way.

Dark Photon
05-24-2012, 05:26 AM
But display lists are what really disturbs me. There is no way to replace them.
To clarify: there is no "core" way to replace them with equivalent or better performance (AFAIK).

That said, I can get NVIDIA display list performance with non-display-list batches using their bindless extension (specifically GL_NV_vertex_buffer_unified_memory (http://www.opengl.org/registry/specs/NV/vertex_buffer_unified_memory.txt)), but that is not EXT or ARB yet, much less core. Because of this, I consider the removal of display lists from core completely premature.

Display lists are yet another reason to just use the compatibility profile. They're easy, and the performance benefit can be significant.
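For a flavour of what that extension path looks like, here is a rough sketch pieced together from the NV_vertex_buffer_unified_memory and NV_shader_buffer_load specs; treat the exact entry points as something to double-check against the specs before relying on them:

// Rough sketch, NVIDIA-only: make a VBO resident, fetch its GPU address, and
// point the vertex puller at that address instead of a bound buffer object.
#include <GL/glew.h>

void setupBindlessPositions(GLuint vbo, GLsizeiptr bytes)
{
    GLuint64EXT gpuAddress = 0;

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);             // NV_shader_buffer_load
    glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &gpuAddress);

    // Switch attribute 0 to "unified" (address-based) mode and describe its format.
    glEnableClientState(GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV);
    glEnableVertexAttribArray(0);
    glVertexAttribFormatNV(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float));
    glBufferAddressRangeNV(GL_VERTEX_ATTRIB_ARRAY_ADDRESS_NV, 0, gpuAddress, bytes);

    // Subsequent glDrawArrays calls read vertices straight from that address,
    // skipping the per-draw buffer-object binding lookups.
}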

Dark Photon
05-24-2012, 05:37 AM
I moved to 3.2 recently, and most of the time it is just fine...
By the way, why did we have to go back and resurrect a 2-year-old thread here?

I wonder if the new forums would let us put a lock on threads after 3 months or so of inactivity...

Ragnemalm
05-30-2012, 10:21 PM
You don't save that much time using immediate mode, and if you want to prototype anything using shaders, well, you'll have to use shaders anyway, which already puts you halfway to complying with the GL 3.1 rules. Plus, if you start prototyping something in immediate mode and later decide to rewrite the whole thing to use, say, 3.3 core, you'll do extra work. You should simply get used to going directly for the new stuff. For beginners it's not advisable to use the FFP at all! Everyone new to OpenGL since 3.1 was ratified should at least start there.

Immediate mode and shaders are unrelated in this context. Sure, one big change in 3.2 is that you must use shaders, but immediate mode is glBegin/glEnd and may include shaders as much as we like/need. What I see in the student labs is that the increased use of buffers gives students a lot of trouble. Many are Java-damaged and don't understand concepts like addresses and pointers.

And we do start at 3.2, exactly for that reason, but the start is slower and more complex. Of course that is a concern! I may lose students from the course at the first lab because it is too hard!


Please correct me, but I never ever read anything about direct support for graphics file formats in OpenGL in any way.
Display list support is metafile support! Not in a way that can easily be saved or loaded, but in the same way that QuickDraw, GDI and CG support in-memory metafiles.

What OpenGL would have needed for good metafile support is a convenient way to read out the contents of a display list in a form that can be converted to PDF or whatever - not to remove them. (And yes, I know that they are not totally gone, but it disturbs me to write new code with deprecated calls.)

By the way, why did we have to go back and resurrect a 2-year-old thread here?

Wonder if the new forums would let us put a lock on threads after 3 months or so of inactivity...
In many other fora, it is considered highly impolite to start a new thread about a subject that has already been discussed, so I have made it a habit to search for related discussions. You can often find past conclusions to build on.

This particular discussion has become much more topical over the past year, since it is only now that 3.2 has become universally available, and that is why I have moved my CG course to 3.2 now.

thokra
05-31-2012, 12:41 AM
Immediate mode and shaders are unrelated in this context. Sure, one big change in 3.2 is that you must use shaders, but Immediate Mode is glBegin/glEnd. What I see in the student labs is that the increased use of buffers gives students a lot of trouble. Many are Java-damaged and don't understand concepts like addresses and pointers.

And we do start at 3.2 exactly for that reason, but the start is slower and more complex. Of course that is a concern! I may lose students from the course at the first lab because it is too hard!

All I said is that there's nothing to miss about immediate mode and that you fare much better starting at 3.1+ right off the bat. And no, immediate mode and shaders aren't unrelated, since an active shader program determines what happens with vertex attributes submitted with immediate-mode commands. Using buffers may be confusing to the beginner at first, but that isn't due to Java or any other language. If one doesn't know how computer memory is organized and what addressing means, then that's a general shortcoming that will prove to be an obstacle with a lot of other stuff as well, not only OpenGL. Aside from that, there are much more complex things out there than pointers. If you lose students after the first course because using buffers is too hard ... that's not OpenGL's fault.


Display list support is metafile support! Not in a way that can easily be saved or loaded, but in the same way that QuickDraw, GDI and CG support in-memory metafiles.

What OpenGL would have needed for good metafile support is a convenient way to read out the contents of a display list in a form that can be converted to PDF or whatever - not to remove them. (And yes, I know that they are not totally gone, but it disturbs me to write new code with deprecated calls.)

Please, please explain to me how supporting display lists, which are basically collections of GL commands that can be executed together at an arbitrary point at run-time, is equal to supporting a totally unrelated, platform-dependent graphics file format. I really don't get it.

Ragnemalm
05-31-2012, 06:46 AM
Using buffers may be confusing to the beginner at first, but that isn't due to Java or any other language. If one doesn't know how computer memory is organized and what addressing means, then that's a general shortcoming that will prove to be an obstacle with a lot of other stuff as well, not only OpenGL. Aside from that, there are much more complex things out there than pointers. If you lose students after the first course because using buffers is too hard ... that's not OpenGL's fault.

Yes it is, since OpenGL has become harder to get started with. With Immediate Mode, you can write simple beginner's OpenGL programs that just can't crash. That gives the students a safe and comfortable start, and a "draft mode" to work from. That has a point. We used to start with that in the first lab, and then continue with glDrawElements in the second to move them to better-performing OpenGL.

With OpenGL 3.2, we use VAOs and shaders, and the result is all too often that the students get a blank screen, or a crash, even on the very first exercises. I have to question whether that is really the right way to teach CG.
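
For illustration only (this is an example, not the course's actual lab code): the kind of "can't crash" first exercise meant here is literally a handful of lines of legacy GL, with no buffers and no pointers, and that is what the 3.2 path has to compete with:

/* legacy immediate mode: one colored triangle, nothing to allocate or bind */
glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
glEnd();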

Still, we have taken the step: I have rewritten my lectures, labs and course book to be 100% OpenGL 3.2, and most students seem to like the move. So far so good, so I am not trying to stay with the old way. But I must evaluate what we did, and I am questioning how to get started in a really smooth and nice way. There is room for improvement.


Please, please explain to me how supporting display lists, which are basically collections of GL commands that can be executed together at an arbitrary point at run-time, is equal to supporting a totally unrelated, platform-dependent graphics file format. I really don't get it.

To me it is totally obvious, because formats like PICT, EMF and PDF are both things at once: they work exactly like display lists when in memory, but can also live on disk. They are all collections of graphics commands. Display lists are the only way I know to do the in-memory part in OpenGL, and if I could extract their contents they could (with some effort and maybe some limitations) be translated to any of these file formats.

I ran into this when translating graphics code that relies on metafiles both for in-memory recording of graphics sequences and for storage. I want to make an OpenGL layer, and with display lists I have a part of it. I suppose the whole problem has to be solved at a higher level (which sounds pretty complex), but I fear that I will lose performance.
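
One way to solve it at that higher level is to record your own command structures instead of GL calls, and then either replay them into OpenGL or walk the same array and emit PDF/EMF operators. A hypothetical sketch; every type and function name here is invented for illustration, and the GL headers are assumed to be included:

typedef enum { CMD_SET_COLOR, CMD_DRAW_MESH } CommandType;

typedef struct {
    CommandType type;
    float       color[4];     /* used by CMD_SET_COLOR */
    GLuint      vao;          /* used by CMD_DRAW_MESH */
    GLsizei     vertexCount;  /* used by CMD_DRAW_MESH */
} Command;

typedef struct {
    Command *commands;
    size_t   count;
} Metafile;

/* replay into OpenGL (assumes a shader with a vec4 color uniform is bound) */
void replay(const Metafile *m, GLint colorUniform)
{
    for (size_t i = 0; i < m->count; ++i) {
        const Command *c = &m->commands[i];
        switch (c->type) {
        case CMD_SET_COLOR:
            glUniform4fv(colorUniform, 1, c->color);
            break;
        case CMD_DRAW_MESH:
            glBindVertexArray(c->vao);
            glDrawArrays(GL_TRIANGLES, 0, c->vertexCount);
            break;
        }
    }
}
/* a second walker over the same array could emit PDF or EMF operators instead */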

Alfonse Reinheart
05-31-2012, 07:54 AM
That gives the students a safe and comfortable start, and a "draft mode" to work from.

While simultaneously giving them the illusion that they have even the slightest idea of what's going on with their code. A hefty percentage of questions on this site come from people who think they know what they're doing, thanks to immediate mode and NeHe's tutorials.

I've always been of the belief that learning things the wrong way is potentially dangerous. It may be more complicated initially, but it will be a lot smoother going once you get into the real course work (lighting, textures, etc).

If you want them to have a "draft mode" to start from, then give them working code to begin with. Having them write the boilerplate initially serves little purpose.

Ragnemalm
05-31-2012, 08:16 AM
While simultaneously giving them the illusion that they have even the slightest idea of what's going on with their code. A hefty percentage of questions on this site come from people who think they know what they're doing, thanks to immediate mode and NeHe's tutorials.

I've always been of the belief that learning things the wrong way is potentially dangerous. It may be more complicated initially, but it will be a lot smoother going once you get into the real course work (lighting, textures, etc).

If you want them to have a "draft mode" to start from, then give them working code to begin with. Having them write the boilerplate initially serves little purpose.
We are already giving them working code to start from. And that is why we left Immediate Mode in the first place: to get straight into high-performance, modern code. And that is also why we turned to glDrawElements in lab 2 even when using GL 2, to make sure they know what they are doing and to get them on the right track as early as possible.

At my very first lecture, a working example with shaders and VAOs is presented (and run, live). But then I have to hide quite a bit of code, namely the shader upload/compiling, otherwise that example is scary. And it takes time until they know what it all means. Comparing the complete minimal 3.2 program to the minimal 2.1 program is pretty horrible.
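
For reference, the code that gets hidden is roughly the following kind of helper. A bare-bones sketch: the name compileProgram is made up, and real code would also check the per-shader compile status and delete the shader objects.

#include <stdio.h>   /* GL headers / function loader assumed to be set up already */

GLuint compileProgram(const char *vsSource, const char *fsSource)
{
    /* compile the two shader stages */
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSource, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSource, NULL);
    glCompileShader(fs);

    /* link them into a program */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    GLint ok = GL_FALSE;
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetProgramInfoLog(prog, sizeof(log), NULL, log);
        fprintf(stderr, "shader link failed: %s\n", log);
    }
    return prog;
}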

We can note that as late as the 7th edition (2010), the Red Book starts with Immediate Mode (page 6). Why? Is that just laziness?

menzel
05-31-2012, 08:47 AM
We can note that as late as the 7th edition (2010), the Red Book starts with Immediate Mode (page 6).

I don't recommend the Red Book anymore because of this (and some other problems).


Why? Is that just laziness?

Either that, or they think that teaching outdated stuff first and hoping the audience will forget it later is a good idea.
If you teach OpenGL, teach 3.2 core (that way the Mac users can also work on their own machines).

Sure, writing everything from scratch is too much for beginners. Give them code to load shaders, give them the shaders, give them a drawCube() and drawSphere() function and let them set colors and matrices via uniforms until they know the basics of transformations. You don't need immediate mode for teaching anymore.

If you want them to define vertices and triangles and not care about the rest, let them fill an array and provide a drawTriangleArray( float* data, int numberOfVertices ) for them - after all, this is how GPUs work: you give them the data and let them draw everything at once, not vertex by vertex.
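
A sketch of what such a helper could look like, assuming a 3.2 core context and a bound shader with attribute 0 as the position; creating, uploading and destroying the buffer on every call is deliberately naive, which is fine for a first lab:

void drawTriangleArray(const float *data, int numberOfVertices)
{
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);

    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, numberOfVertices * 3 * sizeof(float),
                 data, GL_STREAM_DRAW);
    glEnableVertexAttribArray(0);                    /* attribute 0 = position */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);

    glDrawArrays(GL_TRIANGLES, 0, numberOfVertices);

    glBindVertexArray(0);
    glDeleteBuffers(1, &vbo);
    glDeleteVertexArrays(1, &vao);
}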

thokra
05-31-2012, 09:05 AM
We can note that as late as the 7th edition (2010), the Red Book starts with Immediate Mode (page 6). Why? Is that just laziness?

The book covers OpenGL 3.0 and 3.1. I guess the idea was that, since OpenGL 3.0 only deprecated features but didn't remove them, they deemed the immediate-mode stuff still worthy of being included.

I guess the 8th edition will fix that. No idea why that one doesn't include the changes for GL 4.2. One might think they had enough time.

Janika
05-31-2012, 10:52 AM
No idea why that one doesn't include the changes for GL 4.2. One might think they had enough time.

Maybe that's the reason for the delay in releasing the book: to include the 4.2 changes.

Alfonse Reinheart
05-31-2012, 12:25 PM
At my very first lecture, a working example with shaders and VAOs is presented (and run, live).

Pardon me for criticizing your teaching methods, but why would you be showing a code example in your first lecture? Admittedly, it has been a while since I took a course in CG, but I seem to recall that our first lectures were on graphics theory. Actual source code took a while.

thokra
05-31-2012, 12:32 PM
[..]but I seem to recall that our first lectures were on graphics theory.

Same here. We hit it hard with some pounds of Peter Shirley.

Aleksandar
05-31-2012, 01:56 PM
Hey guys, how about moving this discussion to another thread? Everything now relates to teaching OpenGL and has nothing to do with display lists.

Teaching graphics is a really interesting topic. Since 2004 I've been thinking of switching to a pure shader-based OpenGL approach as part of the undergraduate Computer Graphics course, but so far nothing has changed. I've been discouraged by my colleagues, who suggest that a shader-based approach is too complicated for that kind of course. Even with immediate-mode rendering it is pretty hard for students to do everything I give them for the lab exercises. Since this is the first and only CG course, there are a lot of concepts they have to absorb.

As you've probably noticed, I want to do everything myself, and that's the policy I impose on my students in order to make them understand what's really happening under the hood. Writing a framework for dealing with shaders is not a simple task. But if I do it for them, they'll probably copy that framework into other projects as if it were the ultimate solution, which it certainly is not. We have to give students knowledge of how to use OpenGL, not of some home-built framework.

Ragnemalm
06-03-2012, 09:10 AM
Hey guys, how about moving this discussion to another thread? Everything now relates to teaching OpenGL and has nothing to do with display lists.

Teaching graphics is a really interesting topic. Since 2004 I've been thinking of switching to a pure shader-based OpenGL approach as part of the undergraduate Computer Graphics course, but so far nothing has changed. I've been discouraged by my colleagues, who suggest that a shader-based approach is too complicated for that kind of course. Even with immediate-mode rendering it is pretty hard for students to do everything I give them for the lab exercises. Since this is the first and only CG course, there are a lot of concepts they have to absorb.

As you've probably noticed, I want to do everything myself, and that's the policy I impose on my students in order to make them understand what's really happening under the hood. Writing a framework for dealing with shaders is not a simple task. But if I do it for them, they'll probably copy that framework into other projects as if it were the ultimate solution, which it certainly is not. We have to give students knowledge of how to use OpenGL, not of some home-built framework.
You are absolutely right. My original problem when I found this thread remains: I feel a need for metafile-like support such as display lists (not least since all the other APIs have it), and I wonder how I could possibly replace it in 3.2+ (using no deprecated calls). The teaching part is not entirely relevant, but it is certainly interesting and important.