
View Full Version : OpenGL 3.2 support in new nVidia linux beta driver



Heiko
07-22-2009, 02:34 PM
Website phoronix claims that the new nvidia beta drivers for linux have support for the unreleased OpenGL 3.2 standard. The change log of nvidia doesn't say anything about the new spec, but one of the screenshots on Phoronix does indeed show that the nvidia panel says it supports version 3.2 of OpenGL.

Read it here: http://www.phoronix.com/scan.php?page=article&item=nvidia_190_opengl32&num=1

Any thoughts? I was indeed expecting something for Siggraph, as mentioned in the article, but didn't expect to hear about OpenGL 3.2 for the first time like this. According to the article no new hardware is required for the new spec (which makes sense, because OpenGL 3.0/3.1 already require DirectX 10 class hardware... besides AMD's DirectX 10.1 hardware there is nothing newer than that on the market yet).

I wonder which of the extensions will become requirements for OpenGL 3.2... perhaps they'll finally include the geometry shader, since AMD has taken its first steps towards supporting them in their drivers...

Alfonse Reinheart
07-22-2009, 04:26 PM
Well, it's not like NVIDIA doesn't sit on the ARB. They know what's coming down the pipe. And they're more proactive than ATI about OpenGL, so they're probably planning on GL 3.2 support soon after the spec hits.

Stephen A
07-22-2009, 04:36 PM
Check out the gl.spec file from the registry. It already mentions OpenGL 3.2 and lists new deprecated functions (including stuff introduced in 3.0, IIRC).

No new entry points at this time, but I'd expect new stuff at some point after September.

overlay
07-22-2009, 04:50 PM
... or maybe some announcement in two weeks at the OpenGL BOF at Siggraph :-)

http://www.khronos.org/news/events/detail/siggraph_2009_new_orleans/

Brolingstanz
07-22-2009, 05:27 PM
Let's hope they nail the lid down on SM4-4.1, as SM5 is just around the corner.

(Course I'm still stumbling around in 2.x land so what do I care. ;-))

LangFox
07-22-2009, 06:10 PM
Expecting something new and powerful.

Scribe
07-22-2009, 06:16 PM
Great news, though deprecating functions from 3.0 already is a bit annoying.

I currently have the official OpenGL Programming Guide for 3.0 and 3.1 on pre-order on Amazon, for release at the end of August. Are they going to hold off printing until the 3.2 spec is out? I'm going to be a bit miffed that I spent £35 on a new guide that contains new features that are already deprecated and doesn't include the features of the current spec.

It would be nice to have some kind of Official response on this.

Alfonse Reinheart
07-22-2009, 06:38 PM
I currently have the official OpenGL Programming Guide for 3.0 and 3.1 on pre-order on Amazon, for release at the end of August. Are they going to hold off printing until the 3.2 spec is out? I'm going to be a bit miffed that I spent £35 on a new guide that contains new features that are already deprecated and doesn't include the features of the current spec.

So, OpenGL should just stop getting new features and extensions until a book gets printed?

Three versions of GL within the same year is something that has never happened before. If GL 3.2 makes a few books out of date, so be it.

Heiko
07-23-2009, 03:41 AM
So what extensions do you realistically expect to move into the core with OpenGL 3.2?

Personally I don't see the bindless graphics extension moving into the core. It would be great if it did, I guess, but it is too drastic a move, and I don't see it happening soon. I think they might move the geometry shader into the core, which would be a logical step. On the other hand, isn't the geometry shader being superseded by newer DirectX 11 class hardware? Or does it still have a reason to exist alongside DirectX 11 capabilities?

What about creating binary blobs for GLSL shaders so they can be precompiled? I think this will enter OpenGL/GLSL at some point, but could it happen with OpenGL 3.2? Another thing I can think of is relaxing the tight coupling between vertex shaders and fragment shaders. Separate vertex programs and fragment programs would allow more flexibility when mixing programs (and avoid having to link all possible permutations of combined programs).

One last thing I'd like to see (but which I'm not expecting, tbh) is texture samplers that aren't bound to a texture. That way you would not need multiple textures which are essentially the same but use different samplers.

At this point, these are all the major changes I can think of for OpenGL 3.2. But (except for bindless graphics) we haven't seen any of these things in extensions yet, correct? And nvidia's beta driver doesn't show any new extensions that could cover these points either, I believe. It could be that there won't be any extensions for the new OpenGL spec and that all changes are just made to the core. It could also be that there are just some changes to GLSL plus some minor OpenGL changes, such as certain texture formats moving into core.

Unfortunately I don't have nvidia hardware that is capable of OpenGL 3.*, so I can't play with the new beta driver.

Stephen A
07-23-2009, 03:52 AM
I would be surprised if anisotropic filtering doesn't make it into core in 3.2. This extension has been supported by everyone for over a decade and it's too damn useful.

It's too early to tell what else will be included, but we might see shader binaries (it's supported in OpenGL ES, so yeah). Other potential features are sampler states (I hope!), some form of tessellation (EXT probably, rather than core), and maybe, just maybe, some improved support for threading (either via display lists or some other form).

I would *really* love to see DSA too, but I doubt that's possible in such a short timeframe.

Heiko
07-23-2009, 04:25 AM
I would be surprised if anisotropic filtering doesn't make it into core in 3.2. This extension has been supported by everyone for over a decade and it's too damn useful.

It's too early to tell what else will be included, but we might see shader binaries (it's supported in OpenGL ES, so yeah). Other potential features are sampler states (I hope!), some form of tessellation (EXT probably, rather than core), and maybe, just maybe, some improved support for threading (either via display lists or some other form).

I would *really* love to see DSA too, but I doubt that's possible in such a short timeframe.

AMD has had a tessellation extension available for quite a while now (unfortunately they have a 6 month delay implementing new OpenGL APIs; still no OpenGL 3.1 support...). Does nvidia hardware even support tessellation? That is part of DirectX 10.1, right? Which isn't supported by nvidia (at least not completely, but complete support is coming in September, I believe, with new hardware from nvidia).

Some form of threading would be great as well, and necessary to compete with DirectX 11 I'd say (though I'm not sure OpenGL should be aimed at competing with DirectX, as it's used in a broader range of applications).

Eosie
07-23-2009, 05:39 AM
I would definitely expect GL_EXT_separate_shader_objects (mentioned here (http://www.geeks3d.com/20090616/nvidia-forceware-190-15-brings-opengl-3-1-support-and-new-extensions/)).

Groovounet
07-23-2009, 05:40 AM
Tessellation is part of DirectX 11.
No chance for this in OpenGL 3.2.

I think we could expect GL_EXT_provoking_vertex, GL_EXT_texture_snorm, GL_EXT_vertex_array_bgra, GL_EXT_texture_swizzle and maybe something related to GL_NV_explicit_multisample. On top of that, the big new feature I expect is shader binaries.

I also expect GL_NV_copy_image to be promoted to ARB; I expect this extension to be like GL_ARB_copy_buffer but for images, for more interoperability with OpenCL. And I definitely expect nVidia's OpenCL implementation to be released to the public.

Other possible big thing: WGL_AMD_gpu_association and WGL_NV_gpu_affinity as an ARB extension... if they manage to agree!

I think the geometry shader will stay as it is. I hope for (but don't really believe in) an updated DSA extension, though not in core.

Eosie
07-23-2009, 05:50 AM
Different blend modes per render target might also appear (GL_AMD_draw_buffers_blend), as was suggested here:
http://www.opengl.org/discussion_boards/...5720#Post255720 (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Main=49668&Number=255720#Post255720)

Groovounet
07-23-2009, 06:04 AM
But this is not compatible with nVidia cards yet ...

Heiko
07-23-2009, 06:39 AM
But this is not compatible with nVidia cards yet ...


Different blend modes per render target might also appear (GL_AMD_draw_buffers_blend), as was suggested here:
http://www.opengl.org/discussion_boards/...5720#Post255720 (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Main=49668&Number=255720#Post255720)

According to the topic Eosie pointed to, it should be (unless it is a DirectX 10.1 feature; in that case you are right, but the topic mentions just DirectX 10).

Groovounet
07-23-2009, 06:45 AM
It's a Direct3D 10.1 feature; that's why I said it's not compatible with nVidia hardware yet.

Demirug
07-23-2009, 08:25 AM
Nvidia has some new OEM hardware that supports 10.1 (G210 & GT210). Besides these, the Direct3D 10.1 feature level is all or nothing. Therefore it's quite possible that a chip that doesn't support Direct3D 10.1 still supports some of its features.

Heiko
07-23-2009, 08:58 AM
Nvidia has some new OEM hardware that supports 10.1 (G210 & GT210). Besides these, the Direct3D 10.1 feature level is all or nothing. Therefore it's quite possible that a chip that doesn't support Direct3D 10.1 still supports some of its features.

As far as I know, the existing nVidia DirectX 10 hardware (not the G210 and GT210) supports some of the functions that are part of DirectX 10.1, but not all of them. I don't know which functions are supported, though.

Ilian Dinev
07-23-2009, 10:18 AM
I bet it'd be access to individual MSAA/CSAA samples. But this is something all nV cards since GF8x00 have.

Heiko
07-23-2009, 12:53 PM
With the newest AMD drivers (9.7, released yesterday) I was able to get a beta OpenGL 3.1 context. The GLSL version still reported 1.30 though, but at least it seems progress is being made on AMD's side (of course I was greedy and tried 3.2 as well... but that didn't work ;)).

zeoverlord
07-23-2009, 01:33 PM
I think we'll definitely see geometry shaders, it's long overdue. A bunch of EXT extensions will also be merged into the core, and maybe a few new ones that make life easier with GLSL 1.4, especially regarding MRT/FBO and textures.
Beyond that, maybe there will be some news on the texture object front.

glfreak
07-24-2009, 08:30 AM
Personally I would love to see five things:

Geometry shaders

Tessellation shaders

GPU/Video resources feedback queries

Proper way to enable FSAA

Strictly get rid of deprecated features, no deprecation mode enforced.

barthold
07-24-2009, 09:46 AM
All,

The ARB will have an update on the state of OpenGL 3 at the OpenGL BOF at Siggraph on Wednesday 8/5.

Hope to see you there!

Barthold
(with my ARB hat on)

glfreak
07-24-2009, 10:38 AM
Yeah :)

Rob Barris
07-24-2009, 12:46 PM
All,

The ARB will have an update on the state of OpenGL 3 at the OpenGL BOF at Siggraph on Wednesday 8/5.

Hope to see you there!

Barthold
(with my ARB hat on)

Wish I could make it this year. Have a great show!

glfreak
07-24-2009, 01:13 PM
How can I join the ARB anyway?

Stephen A
07-25-2009, 02:26 AM
By paying a lot of money. Check the Khronos website for instructions and requirements.

glfreak
07-27-2009, 10:50 AM
Direct3D made it by introducing a kick-ass new feature called the GS, though it wasn't available on every piece of HW at the time of release.

Let's have the ability to instantiate new fragments with different positions and possibly colors, based on a fragment generated by the rasterizer.

Let's call it a "Raster Shader" :)

Or make it part of the FS, by adding something like:

gl_NewFragCoord[1..MAX_FRAGS]
gl_NewFragColor[1..MAX_FRAGS]

MAX_FRAGS would be implementation-dependent, based on how many fragments can be instantiated per rasterizer fragment.

Alfonse Reinheart
07-27-2009, 11:17 AM
I think the idea the ARB is going for with GL post-3.0 is that each major version number will add features for next-generation graphics cards. So all 3.x versions will run on basically the same hardware, while all 4.x versions will run on higher-level hardware.

There will be extensions, whether ARB or whomever, that expose the next level of hardware before being incorporated into the core.

Brolingstanz
07-27-2009, 03:35 PM
Well I guess we've seen the first cycle through the deprecation gristmill with uniform buffers. Pretty smooth if you ask me, and mighty nice to see uniform blocks in the latest NV drivers.

efikkan
07-28-2009, 05:11 PM
Take a look here: http://www.opengl.org/registry/specs/ARB/geometry_shader4.txt
Notice the dates and changes in the latest revisions.

According to the link, we can expect GLSL 1.5 in OpenGL 3.2. I hope this will bring interesting new features.

Heiko
07-29-2009, 04:14 AM
This extension interacts with geometry shader support in OpenGL 3.2.

Looks like the geometry shader will hit the core.

Groovounet
07-29-2009, 06:44 AM
Nice news!!! :)

Brolingstanz
07-29-2009, 02:18 PM
Life is good. ;-)

efikkan
07-29-2009, 05:35 PM
The documentation for GL_ARB_geometry_shader4 describes which parameters to use, and for instance "GL_GEOMETRY_VERTICES_OUT" is specified as supported in OpenGL 3.2. Notice this is not an ARB or EXT enum, so it has to be part of the 3.2 specification. (The ARB and EXT enums are included in glext.h.) I think it would be strange for such enums to become part of the specification if the geometry shader were still to remain an extension.

Any comments on this?

Brolingstanz
07-29-2009, 07:48 PM
That's a very astute observation, efikkan.

Chris Lux
07-30-2009, 04:36 AM
I think extensions written against OpenGL 3.0 are all missing the _EXT or _ARB suffixes, so sadly this doesn't tell us anything.

Eosie
07-30-2009, 06:19 AM
No. Each extension that is missing the _ARB suffixes is a strict subset of OpenGL 3.x. Such extensions are commonly written against OpenGL 2.x, and their purpose is to support a subset of 3.x functionality on older hardware while maintaining 100% code compatibility.

glfreak
07-30-2009, 11:08 AM
I think from 3.1 onward, each new version will just keep adding more features to the deprecated list :) until it all becomes one extension called "ARB compatibility." :D

Well, without new features being added ahead of current consumer HW, I see no hope.

The GS is still missing.

No stable GLSL spec...

Alfonse Reinheart
07-30-2009, 12:29 PM
No stable GLSL spec...

What does that mean?

efikkan
07-30-2009, 04:32 PM
The 3.2 specification supported in this driver might just be a draft. But I wish nVidia published preliminary documentation for the new features so developers like us could test them out and maybe spot weaknesses. It would be nice if developers could start experimenting with the new features.

Alfonse Reinheart
07-30-2009, 07:38 PM
But I wish nVidia published preliminary documentation for the new features so developers like us could test them out and maybe spot weaknesses.

It is a beta driver; it isn't meant to be widely distributed to begin with. It was leaked from NVIDIA. Documenting a pre-release driver is not a good idea.

glfreak
07-30-2009, 07:40 PM
Well, we started with GLSL 1.2, then 1.3, a few months later 1.4, and now, before we even have working GL 3.1 drivers out (except from nvidia), 1.5, with every version deprecating things in its predecessor.

Maybe for good I dunno.

I wish NVIDIA would take over OpenGL and become the driving force behind its specification.

Brolingstanz
07-30-2009, 09:33 PM
We know that market force dictates control, and we know that we submit our vote with the "long green" ballot.

Anyways, I think the deprecation mechanism is going to work out better than a lot of folks expected, including myself.

glfreak
07-30-2009, 11:51 PM
It needs more than just promoting vendor-specific extensions. New core API functionality has to be introduced, such as geometry shading and direct state access, along with moving the object/bind mechanism to the deprecated stuff.

GPU feedback, better FSAA integration than having to create the context twice, and so on...

Heiko
07-31-2009, 01:20 AM
GS is still missing.

No stable GLSL spec...


As we discussed in this topic, we have reason to believe that geometry shaders will be part of the core spec of OpenGL 3.2. So if that is the case, it will not be missing.

efikkan
07-31-2009, 05:20 AM
But I wish nVidia published temporary documentation for the new features so developers like us can test it out, and maybe spot weaknesses.

It is a beta driver; it isn't meant to be widely distributed to begin with. It was leaked from NVIDIA. Documenting a pre-release driver is not a good idea.

Why isn't it a good idea to release a draft of the new specification? The driver and the upcoming OpenGL specification would be more thoroughly tested, and developers would have a chance to send their feedback.

bertgp
07-31-2009, 08:01 AM
Why isn't it a good idea to release a draft of the new specification? The driver and the upcoming OpenGL specification would be more thoroughly tested, and developers would have a chance to send their feedback.

Indeed! Test early and test often! Less chance of doing something your customer doesn't want that way, too.

Alfonse Reinheart
07-31-2009, 12:50 PM
Graphics drivers are not something to be taken lightly. Graphics driver bugs can make your computer unusable. At no time do NVIDIA or ATI want anyone to use beta drivers, drivers that are known to have bugs in them. And they certainly do not want to make these leaked drivers seem official by providing documentation and downloads for them.

Plus, the rules of Khronos seem to prevent releasing information on upcoming specifications outside of official channels.

Jan
08-01-2009, 07:31 AM
"At no time do NVIDIA or ATI want anyone to use beta drivers"

Oh really? Just yesterday nVidia gave me a beta driver for my notebook through the general user download site. I didn't check any "include beta drivers" box.

It works fine, though.

Jan.

Heiko
08-01-2009, 09:25 AM
"At no time do NVIDIA or ATI want anyone to use beta drivers"

Oh really? Just yesterday nVidia gave me a beta driver for my notebook through the general user download site. I didn't check any "include beta drivers" box.

It works fine, though.

Jan.

True. nVidia uses public betas (they probably also have non-public beta drivers, but I don't know about those). AMD uses non-public beta drivers.

New features in nVidia's beta drivers are also announced (like when OpenGL 3.1 and OpenCL entered the nvidia drivers). It is only because OpenGL 3.2 has not been officially announced that there is no documentation for it yet.

Brolingstanz
08-01-2009, 03:07 PM
No stable GLSL spec...

Just to follow up... if you notice errors or omissions in the specs you can file a bug report with a Khronos bugzilla account (even the likes of yours truly has one).

barthold
08-01-2009, 08:26 PM
Why isn't it a good idea to release a draft of the new specification? The driver and the upcoming OpenGL specification would be more thoroughly tested, and developers would have a chance to send their feedback.

Indeed! Test early and test often! Less chance of doing something that your customer doesn't want that way too.

That is an excellent question. I'll answer separately for the specification and drivers.

The OpenGL ARB is part of Khronos, as you know. Khronos has a set of intellectual property (IP) rules aimed at protecting any member who contributes ideas to the specification and any member who uses the final specification to implement OpenGL. Part of the process of releasing specifications outside of Khronos is a 30 day "ratification period". In those 30 days, all Khronos member companies are supposed to look for any IP that they own that has made it into the specification. After those 30 days, the Khronos Board of Promoters votes to ratify and publish the specification. If a member company does not say anything during the ratification period, then they have essentially automatically licensed their IP for free to anyone who uses the specification to implement OpenGL. This is called a "reciprocal license". If a member company does not want to license their IP, then the rules spell out clearly what they need to do to keep their IP protected.

The exact wording, in all its nitty gritty details, you can find here, if interested: http://www.khronos.org/files/member_agreement.pdf

What this basically means is that no document that the ARB works on, be it an update of the core specification, the Shading Language, or any ARB extension, can be shown to anyone outside of Khronos, until the 30 day ratification period has passed. As I said before, this process is there to protect everyone involved with Khronos. Without this process, Khronos would not be able to function. There are over 100, some small and some very big, companies part of Khronos. They all have IP portfolios, and they value those highly. There has to be a set of rules governing IP involved with the work that Khronos does.

In the balance, I think the Khronos process works really well. The old OpenGL ARB (before joining Khronos) did not have a good IP framework, and as a result anytime someone even hinted at a patent, the affected feature was put on hold, and not talked about again.

Feedback from others does make it into the specification, however. Your constructive input on these forums is very valuable. Many ARB members read it. One example of such input is the thread "Talk about your applications": http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=246133#Post246133 Keep it coming!

Another way that input makes it into the specification is through vendor extensions. Those are often written and implemented by a vendor because there is a real developer need. The important and successful extensions make it into the core specification eventually. Maybe not the next release, but it can be the release after that.

OK, let's talk about drivers. Drivers are released often. Bugs are found, sadly enough. If you file a bug with the driver vendor, or mention it on this forum, there's a good chance it'll be acted upon and fixed in the next driver release. AMD, S3 Graphics and NVIDIA have all provided OpenGL 3 (beta) drivers soon after the specification was released. So you, as a developer, do get access to drivers to test quickly.

Hope this helps!

Regards,
Barthold
OpenGL ARB WG Chair

Rob Barris
08-02-2009, 11:23 PM
I would be surprised if anisotropic filtering doesn't make it into core in 3.2. This extension has been supported by everyone for over a decade and it's too damn useful.


If it is supported by everyone, the only benefit to moving it into core would be tidiness, really. There is stronger value in moving something into core when some subset of vendors are dragging their heels on supporting a feature that everyone has in their silicon already (or in the other API); in those cases it creates pressure for the vendor to add support so that they can claim compliance with the latest specification. This isn't the case with aniso filtering.

i.e. - if it's supported everywhere, then use it and be happy :)

There is a corpo-political reason why it hasn't been put in the core specification; it is certainly a topic that has come up more than a few times in working group discussions. It wasn't simply forgotten or overlooked: there was a roadblock to making that change.

Jan
08-03-2009, 02:08 AM
Understandable. But maybe, for the sake of tidiness ;-), it could at least be given ARB status some time.

Anyway, I still think the LP object model is the most important API feature that needs to be introduced soon. Immutable state objects and all that stuff would be extremely helpful in reducing buggy code (on both sides, driver writers and application programmers) and ensuring best performance practices.

Also, all shader interfacing is a nightmare in more complex applications (i.e. binding vertex arrays, uploading uniforms). VAOs are, in principle, a solution to the current mess, but only performance-wise (on paper). For fast switches you need hundreds of VAOs, and so far I have found no one who says they could detect a speedup.

Jan.

Heiko
08-03-2009, 05:25 AM
I would be surprised if anisotropic filtering doesn't make it into core in 3.2. This extension has been supported by everyone for over a decade and it's too damn useful.


If it is supported by everyone, the only benefit to moving it into core would be tidiness, really. There is stronger value in moving something into core when some subset of vendors are dragging their heels on supporting a feature that everyone has in their silicon already (or in the other API); in those cases it creates pressure for the vendor to add support so that they can claim compliance with the latest specification. This isn't the case with aniso filtering.

i.e. - if it's supported everywhere, then use it and be happy :)

There is a corpo-political reason why it hasn't been put in the core specification; it is certainly a topic that has come up more than a few times in working group discussions. It wasn't simply forgotten or overlooked: there was a roadblock to making that change.



From that I read:
- no anisotropic filtering in the core
- as we already figured was probable: the geometry shader will enter the core (AMD is dragging its heels on implementing that one...)

Dan Bartlett
08-03-2009, 06:21 AM
Check out the extension registry, the 3.2 specs are up!!

mfort
08-03-2009, 06:36 AM
Thank God (read: the ARB) for the ARB_sync extension!

/marek

efikkan
08-03-2009, 07:19 AM
Great! OpenGL 3.2 and GLSL 1.50 are available, and GLSL 1.50 features geometry shaders. :)

ZbuffeR
08-03-2009, 08:14 AM
TEXTURE_CUBE_MAP_SEAMLESS sounds good, though I can't believe it took so long :-)

Can somebody explain what the uses are for the new "multisample textures"? Maybe to do custom averaging, especially on HDR or special gamma settings?

martinsm
08-03-2009, 08:23 AM
Deferred shading + proper multisampling.
http://www.humus.name/index.php?page=3D&ID=81

elFarto
08-03-2009, 09:55 AM
Holy sweet moses, they've actually added this feature at last:

DrawElementsBaseVertex,
DrawRangeElementsBaseVertex, and DrawElementsInstancedBaseVertex
also source their indices from that buffer object, adding the basevertex offset to
the appropriate vertex index as a final step before indexing into the vertex buffer;
this does not affect the calculation of the base pointer for the index array.
Regards
elFarto

Heiko
08-03-2009, 03:34 PM
I didn't expect the specs to be released today (I thought Siggraph would start a couple of days later). And because I was busy all day, I completely missed it. Now, about 30 minutes before I have to sleep, I spotted a news item about the new specs!

Anyway... from what I read here on the forums: multisample textures sound cool, now I really should get started on that deferred shading engine ;). The geometry shader is great as well. The other features I'll have to read up on before commenting.

Now bring us those drivers AMD! (soon please, soon!)

Mars_999
08-03-2009, 09:03 PM
Holy sweet moses, they've actually added this feature at last:

DrawElementsBaseVertex,
DrawRangeElementsBaseVertex, and DrawElementsInstancedBaseVertex
also source their indices from that buffer object, adding the basevertex offset to
the appropriate vertex index as a final step before indexing into the vertex buffer;
this does not affect the calculation of the base pointer for the index array.
Regards
elFarto

YES!!! About F'n time!!!!!