A Use for OpenGL Post-3.0 Profiles

The 3.0 specification defines the concept of Profiles, but it doesn’t really make use of them. It provides the ability to create a context with a particular profile, but the only profile it actually defines is the whole of 3.0.
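For concreteness, here is a minimal sketch of that 3.0-era creation path using the real WGL_ARB_create_context extension (attribute names come from wglext.h in the OpenGL registry; error handling omitted). Note that there is nothing to choose *between* profiles here: requesting 3.0 gets you the one profile the spec defines.

```cpp
// A minimal sketch, assuming a legacy GL context is already current so that
// wglGetProcAddress can resolve the extension entry point.
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

HGLRC createGL30Context(HDC dc)
{
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
        // The forward-compatible flag drops deprecated entry points, but it
        // is just a flag; there is no choice among multiple profiles here.
        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
        0  // zero-terminated attribute list
    };
    return wglCreateContextAttribsARB(dc, /*shareContext=*/0, attribs);
}
```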

There is some question as to what alternate profiles should be used for. And I have one.

One of the biggest failings of the deprecation model that 3.0 introduced is that it ties the removal of legacy functionality (along with the potential improvement in driver quality and performance) to enhancements in hardware functionality. So a version 3.2 that finally removed “bind-and-modify” and the fixed-function pipeline (using a form of DSA to modify objects) would also carry a requirement for DX11-class hardware.

Core extensions are not good enough, because you won’t get the benefit of more stable and usable drivers. A driver can only make assumptions about the state of the world once it knows the legacy stuff is gone.

But profiles are the perfect way to allow an implementation to provide (for example) a version 3.2 interface with only the 3.0 (or even 2.1) hardware-specific functionality. The profile would simply say that certain functions/enums from 3.2 are not valid (though they should still be queryable).

So on 3.1 hardware that is not 3.2-capable, you would still ask for a 3.2 GL version, but the profile string would be “3.1 features” or some such thing.
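To make the proposal concrete, here is a hypothetical sketch of what an application might do at runtime. GL_VERSION is real; GL_PROFILE_STRING and its enum value are invented purely for illustration and exist in no GL specification.

```cpp
#include <GL/gl.h>
#include <cstring>

#define GL_PROFILE_STRING 0x9FFF  // hypothetical enum, made up for this sketch

void chooseCodePath()
{
    const char* version = (const char*)glGetString(GL_VERSION);        // e.g. "3.2"
    const char* profile = (const char*)glGetString(GL_PROFILE_STRING); // e.g. "3.1 features" (hypothetical)

    if (profile && std::strstr(profile, "3.1")) {
        // The full 3.2 *interface* is present, but skip render paths
        // that need 3.2-class hardware features.
    }
}
```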

This scheme would let OpenGL stay backwards compatible at the interface level while still delivering non-backwards-compatible features. It would also be pretty easy for IHVs to implement.

Now, I recall that the GL specification states that an implementation exposing a particular version number must provide at least the full profile of that version. I would say that this should be changed, because it prevents the above from working (a 3.2 implementation on 3.1 hardware would need software fallbacks).

Great idea.
I think I’d prefer it if there was a second parameter to a profile, so you could specify a feature version and an interface version explicitly, but that ain’t gonna happen, I would imagine.
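A hypothetical sketch of what that could look like, reusing the wglCreateContextAttribsARB setup from the earlier sketch; every attribute name below is invented to make the idea concrete, none of them exist:

```cpp
// Ask for the 3.2 *interface* and the 3.1 hardware *feature set* as two
// independent parameters, instead of one combined profile.
const int attribs[] = {
    WGL_CONTEXT_INTERFACE_MAJOR_VERSION_ARB, 3,  // hypothetical
    WGL_CONTEXT_INTERFACE_MINOR_VERSION_ARB, 2,  // hypothetical
    WGL_CONTEXT_FEATURE_MAJOR_VERSION_ARB,   3,  // hypothetical
    WGL_CONTEXT_FEATURE_MINOR_VERSION_ARB,   1,  // hypothetical
    0
};
HGLRC ctx = wglCreateContextAttribsARB(dc, 0, attribs);
```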

Let’s see if I got this right:

Your 3.2 profile removes the legacy stuff and adds DX11-class hardware features. You are assuming that removing the legacy stuff improves driver quality, so now you also want to be able to create a 3.2 context on DX10 hardware.

Wouldn’t this be solved by simply having one more profile in between? Let’s say 3.15 only removes legacy stuff from 3.1 but doesn’t add anything new. 3.2 would add DX11 to 3.15.

Then, you still wouldn’t get a 3.2 context on DX10, but the streamlined 3.15 profile would work.

CatDog

I agree with Korval’s idea.
I don’t like the way DX ties the API to the hardware generation.
If someone already has renderers written for previous versions, they can just ship all of them with the product.
But when someone new wants to start using OpenGL, they want to be able to write a single renderer that works with all of the different hardware their customers have.
Yes, they still need different code paths to support different hardware features, but that is much simpler than having to change how you use the API at the same time.
If some hardware supports a shader feature and some doesn’t, we should just load a different shader, not change all of the renderer code because we are forced to use a different API.
Improved driver features like immutable objects, binary shaders or buffer subranges that are not hardware-dependent should be made available on all of the hardware generations supported by a particular driver version.
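That “swap the shader, not the renderer” approach is easy to express in code. A minimal sketch, assuming a GL 2.1-era extension query; the extension name (GL_EXT_gpu_shader4) is real, but the shader file names are made up:

```cpp
#include <GL/gl.h>
#include <cstring>

bool hasExtension(const char* name)
{
    // GL 2.1-era query; naive substring match, good enough for a sketch.
    const char* exts = (const char*)glGetString(GL_EXTENSIONS);
    return exts != nullptr && std::strstr(exts, name) != nullptr;
}

const char* pickShadowShader()
{
    // Same renderer code either way; only the shader source differs.
    return hasExtension("GL_EXT_gpu_shader4")
        ? "shadow_sm4.glsl"   // hypothetical file name
        : "shadow_sm3.glsl";  // hypothetical file name
}
```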

Let’s say 3.15 only removes legacy stuff from 3.1 but doesn’t add anything new. 3.2 would add DX11 to 3.15.

And then what happens when 3.3 comes out with object immutability and DX12-class hardware? Is there going to be a 3.17 that provides that to DX10-class hardware and a 3.25 for DX11-class hardware?

See, the entire point of this is to keep the two lines of development separate: to divorce hardware upgrades from API upgrades.

I agree with Korval’s idea.

Which is also what D3D11 seems to be finally doing.
In the GameFest presentation they talk about supporting D3D10.1, D3D10 and even D3D9 HW. The new features (like tessellation) will require D3D11 HW, but all the other API improvements will be available to old HW as well.

Yes. :)

Well, if you really want to branch the development, then I’d go with knackered’s idea instead. But as he said, that ain’t gonna happen.

CatDog

Which is also what D3D11 seems to be finally doing.
In the GameFest presentation they talk about supporting D3D10.1, D3D10 and even D3D9 HW. The new features (like tessellation) will require D3D11 HW, but all the other API improvements will be available to old HW as well.

What?
DX is software that you install; it has nothing to do with the hardware. You can have DX9 installed while you have a DX7 GPU, and you can run DX9 programs on it. So you get the benefits of the new API on old HW. Of course, your SM 2 shader will fail because the GPU doesn’t support it.

On Vista it’s the same situation: you can install and use DX11 while you have an old GPU.

I thought caps bits were gone from 10? So the hardware either supports the full 10 spec, or device creation will fail (or fall back to the reference implementation).
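For comparison, here is roughly what that all-or-nothing model looks like with the real D3D10 API: hardware device creation either succeeds with the full 10.0 feature set or fails outright, and the only fallback is the software reference rasterizer.

```cpp
#include <d3d10.h>

ID3D10Device* device = nullptr;
HRESULT hr = D3D10CreateDevice(nullptr, D3D10_DRIVER_TYPE_HARDWARE,
                               nullptr, 0, D3D10_SDK_VERSION, &device);
if (FAILED(hr)) {
    // No caps bits to probe: the GPU simply isn't D3D10-capable.
    // The only option left is the (slow, software) reference rasterizer.
    hr = D3D10CreateDevice(nullptr, D3D10_DRIVER_TYPE_REFERENCE,
                           nullptr, 0, D3D10_SDK_VERSION, &device);
}
```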

Yes. :)

That’s a lot of specs to write, when all you really need to do is provide a list of entry points and enums that are not valid to call in a particular profile.

That’s right. That ‘yes’ wasn’t meant to be realistic - I just tried to fit your suggestion into what GL3 already offers.

My guess is that even listing invalid features is too much to ask for. Otherwise there would be a more elegant way than querying for 3.2 and getting a partially inoperable quasi-3.2-but-really-3.1 context. Not allowing things like that appears to be a conscious decision by the ARB.

CatDog

Not allowing things like that appears to be a conscious decision by the ARB.

Possibly, but that doesn’t mean it shouldn’t be suggested. I know nVidia and ATi want to use the API the way Microsoft used DX10 and Vista: to get us to buy more hardware. But the rest of the ARB may be more reasonable.

Does anyone else matter on the ARB? Other than the two main IHVs in the world, I mean.

Does anyone else matter on the ARB?

As the single biggest user of OpenGL, Apple matters.

And all those CAD developers, don’t forget about the CAD developers!

Why didn’t you just say Apple then?

They are much more coarse-grained: you basically have three “cap bits” (feature levels), not the fine-grained caps that DX9 had.
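For reference, this is roughly the shape D3D11’s feature levels eventually took (real API from d3d11.h): one device-creation call takes a ranked list of hardware tiers, and the runtime reports which one you actually got, so the API version and the hardware generation are decoupled exactly as this thread is asking for.

```cpp
#include <d3d11.h>

const D3D_FEATURE_LEVEL wanted[] = {
    D3D_FEATURE_LEVEL_11_0,  // DX11-class hardware
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
    D3D_FEATURE_LEVEL_9_3,   // even DX9-class hardware gets the new API
};

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL got;

HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
    wanted, sizeof(wanted) / sizeof(wanted[0]),
    D3D11_SDK_VERSION, &device, &got, &context);
// 'got' reports the hardware tier; tessellation code paths, for example,
// would be gated on got >= D3D_FEATURE_LEVEL_11_0.
```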