Are they...dead?

Any idea how long it will be before the next GL version? I’m asking because I’ve also noticed a long gap since the last D3D update. What’s going on? Dead season for the APIs?

Indeed, the new frontier is mobile. No need for new APIs :slight_smile:

GL ES 3.0 is fairly imminent. And D3D 11.1 will be out with Win8, whenever that is.

GL 4.2 just came out last year at SIGGRAPH. Up until 4.1, we had been getting releases every six months, bouncing between SIGGRAPH and GDC. That slowed down to one release per year with 4.2.

Considering that the next GL version is having its entire specification rewritten (to be better organized), I wouldn’t hold my breath for GL 4.3 at GDC.

I’m also waiting more for ES 3.0, hopefully as a subset of 3.3 with the usual mobile-specific limitations (reduced float accuracy, fewer required attributes and fragment shader outputs, etc.; the sketch after this post shows the kind of thing I mean). But a more unified API would be great.
OpenGL 4.2 supports all major features of current GPUs, so 4.3 would only be a minor update (a more readable spec, querying frag data output names :wink:, etc.). For GL5 we will have to wait for the next generation of GPUs (ATI’s 7xxx series, which was just released, and NVIDIA’s Kepler, announced for Q1 2012). Those will support MMUs, so we will see some kind of hardware-supported mega-texturing (google ‘partially resident textures’). But first we will see extensions for those new features, and only once they prove to be stable can OpenGL 5 be written. So I wouldn’t expect GL 5 to see the light of day this year. Maybe a minor GL 4 update, and hopefully an ES update.
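By ‘mobile-specific limitations’ I mean things like the mandatory precision qualifiers that GLSL ES 1.00 (OpenGL ES 2.0) already has; whether ES 3.0 keeps exactly this form is just my assumption. A minimal fragment shader, embedded as a C string the usual way:

    /* Minimal GLSL ES 1.00 fragment shader: the precision statement is mandatory
     * on ES, and the driver may honestly give you only mediump floats. */
    static const char *es_fragment_src =
        "precision mediump float;\n"                          /* required in ES fragment shaders */
        "varying vec2 v_texcoord;\n"
        "uniform sampler2D u_texture;\n"
        "void main() {\n"
        "    gl_FragColor = texture2D(u_texture, v_texcoord);\n"
        "}\n";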

If ATI already has hardware out with new features, and NVIDIA’s is right around the corner, then we’re at least going to see a bunch of NV extensions at GDC or around Kepler’s release. The ARB seems to be fairly responsive to these sorts of things, so the odds are good of a GL 4.3 to match D3D 11.1, exposing the new features.

Alfonse: AFAIK the partially resident texture feature of the 7xxx chips is not exposed in Direct3D and will not be part of 11.1, but an OpenGL extension is planned.
I don’t know how long these things will stay as extensions before they can be adopted into core, but I think I’ve read somewhere that the ARB wanted to move to a versioning scheme where a new major number requires newer hardware, so new features of these GPU generations would require GL 5.x.
But numbering aside, I can’t wait to even see extensions for this kind of texture management, because to me a lot of questions are still open. For example: can only texture data be swapped, or arbitrary buffers as well? What do I get in case of a ‘cache miss’: a lower mipmap level, with the GPU fetching the correct part of the texture over the next frames (similar to software megatexturing), or will it stall the pipeline? Can I choose? If so, globally per texture via the API, or in a shader with different sampling functions?
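To make that last question concrete, what I imagine is a sampling function that also reports residency, so the shader itself can fall back to a coarser, always-resident mip level on a miss. This is purely hypothetical; none of these GLSL function names exist in any published spec:

    /* Hypothetical GLSL only: sparseTexture() and texelResident() are made-up
     * names for the shader-side residency query I’m wondering about. */
    static const char *hypothetical_frag_src =
        "#version 420 core\n"
        "uniform sampler2D u_virtual_tex;\n"
        "in vec2 v_uv;\n"
        "out vec4 frag_color;\n"
        "void main() {\n"
        "    vec4 texel;\n"
        "    int code = sparseTexture(u_virtual_tex, v_uv, texel);\n"  /* hypothetical */
        "    if (!texelResident(code))\n"                              /* hypothetical */
        "        texel = textureLod(u_virtual_tex, v_uv, 8.0);\n"      /* coarse, always-resident mip */
        "    frag_color = texel;\n"
        "}\n";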

AFAIK the partially resident texture feature of the 7xxx chips is not exposed in Direct3D and will not be part of 11.1, but an OpenGL extension is planned.

A proprietary extension (or, at best, an EXT). There will be no standardization of this unless NVIDIA implements it too.

Yes, I meant a proprietary extension. If we are lucky, an EXT that will be supported by NVIDIA as well; if not, two proprietary extensions with similar features. Going from one or two proprietary extensions to an EXT, to an ARB, and maybe to core can take time; that’s why I wouldn’t expect GL5 with such features this year. But maybe NVIDIA, ATI and the ARB will surprise us with an ARB extension right from the beginning (I’m a dreamer :wink: )?

#ifdef OT

But maybe NVIDIA, ATI and the ARB will surprise us with an ARB extension right from the beginning (I’m a dreamer :wink: )?

I think ELO said it best: hold on tight to your dreams. :wink: I guess the only thing that would really, really stun everyone out there is news that the ARB has decided to come up with a new API.

#endif

What’s bothering me is: why should one ask for a new core spec so soon? Is there any serious project or company out there that has had time to adopt GL4 in all its glory? I’m not saying the current spec and the features it exposes aren’t useful, quite the contrary, but it would be nice to have some overview of who is really using them in industry-strength products. For DX11 there are at least some games that explicitly provide a render path, although one cannot judge which particular features are really in use. But from what I know, there doesn’t seem to be any strong indication that many of the currently exposed features are being used in meaningful products. Anyone got some info here?

I believe there is nothing wrong with providing stable extensions first, then assembling the most promising ones into a new core spec, as long as it doesn’t take them as long as the jump from 2.1 to 3.1+.

Nothing has happened yet. Not even for D3D! I’m wondering what’s going on out there. Are they hiding something? No updates; it’s like the computer graphics industry is silently going dead.

Why does something not happening on your personal schedule mean that “the computer graphics industry is silently going dead”? Why should something have happened?

Are they hiding something?

Yes. They’re hiding the fact that graphics are good enough for now. They’re hiding the fact that, between image_load_store and OpenCL, you can pretty much do whatever you want, about as fast as the hardware allows.
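For the record, by image_load_store I mean ARB_shader_image_load_store (core since GL 4.2): arbitrary image reads, writes and atomics from any shader stage. A rough sketch; the histogram use case and all the names are just my own example:

    /* Free-form access via image_load_store: a fragment shader scattering
     * atomic increments into a histogram image (r32ui, bound to unit 0). */
    static const char *histogram_frag_src =
        "#version 420 core\n"
        "layout(binding = 0, r32ui) uniform uimage2D u_histogram;\n"
        "in vec4 v_color;\n"
        "out vec4 frag_color;\n"
        "void main() {\n"
        "    uint bin = uint(clamp(v_color.r, 0.0, 1.0) * 255.0);\n"
        "    imageAtomicAdd(u_histogram, ivec2(int(bin), 0), 1u);\n"
        "    frag_color = v_color;\n"
        "}\n";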

Well, both AMD and NVIDIA have released some extensions in the recent past, so there are things going on. Not to mention that other extensions are on the way, at least AMD’s virtual texture implementation, GL_AMD_sparse_texture, which was already announced at GDC in March.

I somewhat agree with Alfonse, though, that the more flexible the features you have (and we already have plenty), the fewer new features you might need. However, don’t forget that both AMD and NVIDIA released their new GPU generations just recently (Southern Islands in December and Kepler in March), and this may mean even more new extensions.

Yes. They’re hiding the fact that graphics are good enough for now.

Yes, that is and always has been the case; it’s relative. However, the API itself is not exactly perfect yet. That’s why I’m curious why it’s taking so long to polish some of this stuff instead of hacking around it, such as the shader calls and needlessly long function names. Or at least do something about the antiquated OpenGL32.DLL. Don’t we need a unified context manager that belongs to OpenGL itself rather than to the platforms, which know nothing about it?
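To illustrate what I mean by antiquated: OpenGL32.DLL only exports the GL 1.1 entry points, so every newer function has to be fetched at runtime through wglGetProcAddress (or via a loader such as GLEW that automates it). A minimal sketch, assuming glext.h from the OpenGL registry and a current context:

    /* OpenGL32.DLL stops at GL 1.1; newer functions come from the driver via
     * wglGetProcAddress and must be called through function pointers. */
    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/glext.h>   /* typedefs such as PFNGLCREATEPROGRAMPROC */

    static PFNGLCREATEPROGRAMPROC pglCreateProgram;

    static int load_modern_entry_points(void)   /* needs a current GL context */
    {
        pglCreateProgram = (PFNGLCREATEPROGRAMPROC)wglGetProcAddress("glCreateProgram");
        return pglCreateProgram != NULL;         /* NULL if the driver doesn’t expose it */
    }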

However, the API itself is not exactly perfect yet.

None of what you suggested is going to happen. So stop expecting it to and just accept what you have now.

None of what you suggested is going to happen.

Your confidence explains a lot. :slight_smile:

OpenGL32.DLL belongs to Microsoft. Ask MS to do something about it, not people on this board.

OpenGL32.DLL belongs to Microsoft. Ask MS to do something about it, not people on this board.

I’m not asking you or anyone on this board to do anything about it. Take it easy :wink:

Don’t even bother with OpenGL32.DLL. The whole of WGL goes away in WinRT (a.k.a. the Metro API) :slight_smile:

The whole of WGL goes away in WinRT (a.k.a. the Metro API)

How do you use OpenGL on WinRT then? What are the alternative calls?

You don’t use OpenGL on WinRT. You can only get OpenGL when using Win32. So OpenGL doesn’t exist for applications developed under WinRT.
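For anyone wondering what exactly goes away: the classic Win32/WGL setup looks roughly like this (a minimal sketch, error handling omitted; newer GL versions would additionally go through wglCreateContextAttribsARB), and none of it has a counterpart in WinRT:

    /* Classic Win32/WGL context creation for an existing window handle. */
    #include <windows.h>
    #include <GL/gl.h>

    static HGLRC create_legacy_gl_context(HWND hwnd)
    {
        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        pfd.cDepthBits = 24;

        HDC dc = GetDC(hwnd);
        SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

        HGLRC rc = wglCreateContext(dc);   /* legacy (pre-3.x style) context */
        wglMakeCurrent(dc, rc);
        return rc;
    }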