OpenGL 3 Updates

You understandably want to know where the OpenGL 3 specification is. I have good news and some bad news. First the bad news: obviously, the specification isn’t out yet. The OpenGL ARB found, after careful review, some unresolved issues that we want addressed before we feel comfortable releasing a specification. The good news is that we’ve greatly improved the specification since Siggraph 2007, added some functionality, and fleshed out a whole lot of details. None of the issues we found is earth-shattering, but we did want them discussed and resolved to make absolutely sure we are on the right path, and we are. Rest assured we are working as hard as we can on getting the specification done. The ARB has been meeting five times a week for the last two months to get this out to you as soon as possible. Getting a solid specification put together will also help us with the follow-ons to OpenGL 3: OpenGL Longs Peak Reloaded and Mount Evans. We don’t want to spend time fixing mistakes made in haste.

Here’s a list of OpenGL 3 features and changes that we decided on since Siggraph 2007:
- State objects can be partially mutable, depending on the type of the state object. These state objects can still be shared across contexts. This helps in reducing the number of state objects needed in order to control your rendering pipeline. For example, the alpha test reference value is a candidate to be mutable.
- We set a minimum bar required for texturing and rendering. This includes:
  - 16 bit floating point support is now a requirement for textures and renderbuffers. Supporting texture filtering and blending is still optional for these formats.
  - S3TC is a required texture compression format.
  - Interleaved depth/stencil is a required format for FBO rendering.
  - At least one GL3-capable visual or pixel format must be exported which supports front-buffered rendering.
- OpenGL 3 will not have support for the GL_DOUBLE token. This means it will not be possible to send double precision vertex data to OpenGL.
- A format object has to be specified per texture attachment when a Program Environment Object is created. This helps minimize the shader re-compiles the driver might have to do when it discovers that the combination of shader and texture formats isn't natively supported by the hardware.
- GL 3 will only cache one error, and that is the oldest error that occurred.
- The OpenGL pipeline will be in a valid state once a context is created. Various default objects, created as part of the context creation, will have reasonable default values. These values are such that a simple polygon will be drawn into the window system provided drawable without having to provide a vertex array object, vertex shader, or fragment shader.
- GLSL related changes:
  - GLSL 1.30 will support a #include mechanism. The actual shader source for the #include is stored in a new type of object, a "Text Buffer" object. A text buffer object also has a name property, which matches the string name specified in a #include directive.
  - Legacy gl_* GLSL state variables are accessible through a common block.

More details will follow soon in an upcoming OpenGL Pipeline newsletter.

Barthold Lichtenbelt
OpenGL ARB Working Group chair

To make it short, we shouldn’t expect the spec until about the new year, and first implementations even later… Still, it is good that they are trying to make a solid product. I guess I can wait.

Hey, it’s better late than never! And if it takes a bit longer to get something that is done correctly, then yes, let’s wait.

Why no double vertex data? Is this for speed reasons only? I see that with DX10.1 the precision requirements for float data will be tighter on that hardware… Will that be good enough for everyone, from games to scientific purposes? I am also not clear on the #include directive: is it a header file where you keep your shader code?

e.g.
#include "lightingVS.xxx"
#include "lightingFS.xxx"
#include "lightingGS.xxx"

thanks for the update!!!

Double vertex data never made sense, as there are no implementations that actually use it. Something like this should be an extension. So IMHO it is a good move.

As far as I understood, the include works just like in C (it copy-pastes text from another file), only that it works with named buffers instead of files. A very elegant solution!

Remember, OpenGL 3 will be implementable on currently shipping hardware. The vast majority of that HW does not natively support double precision vertex data. In GL 2, where you can send double vertex data to the GL, it will likely get converted to floats by the driver or HW. Future hardware might fully support double vertex data, and then support for it can be added back in a future OpenGL 3.x version.

Barthold

I’m sure you’re doing the best you can, and surely these things require time. We’ll keep using the already great feature set exposed by GL 2.1 (plus extensions, of course).
Everyone is on their toes to use the new API, so make it as good as you can!

Thanks for the update, it’s good to know what is going on with it :slight_smile:

16 bit floating point support is now a requirement for textures and renderbuffers. Supporting texture filtering and blending is still optional for these formats.

Remind me of something: do R300 (Radeon 9xxx) and NV30 (GeForce FX) support 16-bit float textures/renderbuffers? I was hoping that these cards would be the minimum GL 3.0 compatible hardware.

The #include mechanism is intended to make partitioning of shader sources easier, but in GL3 it is not going to provide a means to do callbacks or real access to the file system.

What it does let you do is submit individual buffers of text, attach names to them separately, and then reference those named text buffers from another section of text using the #include statement. However, at compile time you must be able to provide to GL3 all of the buffers that can be referenced for that compile job. This allows more than one buffer to have the same name while still letting the application disambiguate everything at compile time.

edit: why do it this way? It keeps all of the #include resolution work on the server side. It is potentially not as flexible as a callback mechanism, but as we know, callbacks do not translate well to a client/server model.
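To make that concrete, here is a rough sketch of how an application might drive it. The GL3 object API has not been published, so every type, entry point, and token below (GLtextbuffer, glCreateTextBuffer, glTextBufferName, glAttachTextBuffer, glCreateShaderFromSource, and so on) is a made-up placeholder to illustrate the flow, not real API:

/* Hypothetical GL3-style pseudocode; none of these entry points exist yet. */
const char *libSrc  =
    "vec3 applyLighting(vec3 n, vec3 l) { return max(dot(n, l), 0.0) * vec3(1.0); }\n";
const char *mainSrc =
    "#include \"lighting.glsl\"\n"
    "varying vec3 normal, lightDir;\n"
    "void main() { gl_FragColor = vec4(applyLighting(normalize(normal), normalize(lightDir)), 1.0); }\n";

/* Store the shared source in a named text buffer object (server side). */
GLtextbuffer lib = glCreateTextBuffer(libSrc);
glTextBufferName(lib, "lighting.glsl");        /* matched against the #include name */

/* At compile time, hand the compiler every buffer the source may reference. */
GLshader fs = glCreateShaderFromSource(GL_FRAGMENT_SHADER, mainSrc);
glAttachTextBuffer(fs, lib);
glCompileShader(fs);                           /* #include resolved on the server side */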

I’m 99% sure that R300 supports 16-bit float textures, but without filtering or blending. I never had an NV30, so I don’t know about that.

By the way, for the next Pipeline there is going to be a segment about "GL2 to GL3 migration". I’d like to hear about specific areas of developer curiosity, to try and cover those in some detail where possible. We could conceivably even air some of them briefly here and then cover them in more detail in the Pipeline piece.

That’d be great. I’m sure we’d all like to see some sample code and get a sneak peek at some of the more radical changes we’re in for.

Thanks for the update and the good news!

FP16:
GeForce FX, Radeon 9, Radeon X - no filtering, no blending
Radeon X1 - blending supported
GeForce 6, 7, 8, Radeon X2 - filtering and blending supported

So basically, if you implement HDR, you have to provide one implementation for GeForce 6/7/8 and Radeon X2, and a separate one for Radeon X1.

In my game I perform multiple tests at startup to determine whether FP16 blending/filtering is supported, whether it runs reasonably fast, and whether it crashes or not. Then I know which HDR implementation to use, if any.
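For reference, a minimal sketch of that kind of startup probe, assuming GL2 with ARB_texture_float and EXT_framebuffer_object (the speed and crash checks, and all error handling, are omitted):

/* Probe: can we render to an RGBA16F texture at all? */
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 64, 64, 0,
             GL_RGBA, GL_HALF_FLOAT_ARB, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

int fp16Renderable =
    (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT);

/* Filtering and blending can't be queried directly in GL2: enable GL_LINEAR
   filtering (or glEnable(GL_BLEND)), draw a known quad into the FBO, read it
   back with glReadPixels, and compare against the expected result. A wrong
   result, or an absurd frame time, means "fall back to the LDR path". */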

This has been discussed many times, so I don’t want to go off topic here. I hope life will be easier for us in future. Right now it’s “test if it really works before you use it”.

Yes, some way to ask what level of FP support is available would be vital, IMO.

Shouldn’t alpha test be removed? You can easily simulate all possible alpha test modes using texkill functionality in the fragment shader.
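For example, the GL_GREATER alpha-test mode comes down to a single line in the fragment shader (a sketch, with the GLSL given as a C string; alphaRef is a uniform the application would set in place of the old reference value):

/* GLSL fragment shader emulating glAlphaFunc(GL_GREATER, ref). */
const char *alphaTestFS =
    "uniform sampler2D tex;\n"
    "uniform float alphaRef;\n"
    "void main() {\n"
    "    vec4 color = texture2D(tex, gl_TexCoord[0].xy);\n"
    "    if (color.a <= alphaRef) discard; /* the 'texkill' */\n"
    "    gl_FragColor = color;\n"
    "}\n";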

Yes, some way to ask what level of FP support is available would be vital, IMO.

Asking questions. That’s so DirectX :wink:

In GL 3.0, you simply try to make a format object of the appropriate type. Part of the format object is a specification of what you intend to do with the images you create from it (bilinear filtering, anisotropic, etc.). If, when you create this format, GL says no, well, you can’t do it.
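Something in the spirit of the sketch below; the GL3 entry points aren’t public, so glCreateImageFormat, glFormatAttrib, glValidateFormat, and the tokens here are invented placeholders, not real API:

/* Hypothetical GL3-style pseudocode; every call and token is a placeholder. */
GLformat fmt = glCreateImageFormat();
glFormatAttrib(fmt, GL_FORMAT_BASE,   GL_RGBA16F);       /* FP16 color         */
glFormatAttrib(fmt, GL_FORMAT_USAGE,  GL_RENDER_TARGET); /* will blend into it */
glFormatAttrib(fmt, GL_FORMAT_FILTER, GL_LINEAR);        /* want filtering     */

if (glValidateFormat(fmt) != GL_TRUE) {
    /* The driver said "no" up front, so pick an RGBA8 or unfiltered fallback
       now instead of discovering a software path at draw time. */
}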

Shouldn’t alpha test be removed?

No. It’s much faster to have real alpha test than to do it in shaders. Plus, alpha test can be per render target (eventually). Plus, you can easily mix and match alpha test without having to write shaders for it.

It’s not just that.
Try writing a vertex shader that’s compatible with clip planes and works on all hardware supporting GLSL.
On GeForce you need to write to gl_ClipVertex, otherwise it won’t work. On some Radeons you can’t write to gl_ClipVertex - it causes software mode. :mad: I’m using #ifdef for that.
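A sketch of what that #ifdef workaround looks like in practice; the vendor check and the USE_CLIP_VERTEX name are my own, and the usual GL and <string.h> headers are assumed:

/* Prepend a define depending on the vendor, then compile one shared source. */
const char *vendor  = (const char *)glGetString(GL_VENDOR);
const char *defines = strstr(vendor, "NVIDIA") ? "#define USE_CLIP_VERTEX 1\n" : "";

const char *vsBody =
    "void main() {\n"
    "    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;\n"
    "#ifdef USE_CLIP_VERTEX\n"
    "    gl_ClipVertex = eyePos; /* needed on GeForce for clip planes */\n"
    "#endif\n"
    "    gl_Position = gl_ProjectionMatrix * eyePos;\n"
    "}\n";

const char *sources[2] = { defines, vsBody };
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 2, sources, NULL);   /* no define on Radeon, so gl_ClipVertex is never written */
glCompileShader(vs);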

Doh! You got me talking off-topic again :wink:

GL3 will make this easier to assess by way of the format object mechanism, as Korval noted. However, the per-GPU-type outcomes you list above aren’t likely to change, because the root issue is in the hardware.

Yes, format objects are something I await with great anticipation. I hope they will be a sufficient solution not only for the supported/unsupported problem, but also for the software/hardware fallback problem :slight_smile:

This has been discussed many times, so I don’t want to go off topic here. I hope life will be easier for us in future. Right now it’s “test if it really works before you use it”.

Even worse, sometimes EXT variants work better than ARB ones or vice versa, and there really is no way to test at runtime which implementation works reliably. Format objects FTW!

Anyway, some possible discussion topics regarding migration issues:

  1. How will a GL3 render context be created? Assuming that creation will still go through the wgl/glx/agl/cgl APIs, how will one be able to distinguish between GL2 and GL3 contexts?

  2. Regarding multisampling, will there be any changes to the (context creation/format query/context destruction/final context creation) round trip?

  3. What are the changes in VBO creation, binding and drawing?

  4. Which functions require the consumer to keep data in client memory? (For example, in GL2 vertex arrays are the responsibility of the consumer, which can be problematic in garbage-collected environments; VBOs don’t suffer from this issue.)

  5. This is possibly what I’m most interested in: Will the new .spec files be available, to ease creation of bindings for other programming languages? Any chance for the wgl/glx specs?