NVIDIA releases OpenGL 3.3 drivers



barthold
03-18-2010, 01:29 PM
NVIDIA is pleased to announce the immediate availability of drivers supporting OpenGL 3.3 for Geforce and Quadro, on Windows and Linux platforms. Download the NVIDIA OpenGL 3.3 drivers here: http://developer.nvidia.com/object/opengl_driver.html

Barthold
(with my NVIDIA hat on)

Ilian Dinev
03-18-2010, 01:46 PM
Thanks for the update!
Got my hands on occlusion_query2 immediately, and I'm already replacing my custom #include and attrib/fragdata-binding handlers.
Cheers!

Edit: whoops, the non-core include-extension is not available yet ^^".

Alfonse Reinheart
03-18-2010, 03:14 PM
Edit: whoops, the non-core include-extension is not available yet ^^".

That's not a part of 3.3, is it? ;)

Jan
03-18-2010, 03:15 PM
Nope.

Groovounet
03-18-2010, 03:48 PM
Edit: whoops, the non-core include-extension is not available yet ^^".

That's not a part of 3.3, is it? ;)

And maybe this is for the best... I'm really not sure about this extension.

----A bit of cynicism for fun! (sorry, I had to!)----

Can't wait to see nVidia releasing the GeForce 500 as well!
OK, for the first time ever my main card is not a GeForce, and I'm not sure about having an oven in my main config.
Anyway, I still have my old GeForce next to my Radeon! ;)

That's the kind of news that makes you think the best part of nVidia is the developer support!

Dark Photon
03-18-2010, 04:43 PM
That's the kind of news that makes you think the best part of nVidia is the developer support!
Definitely! And there's the GTX 480 comin' up next Friday.

Thanks for the drivers, NVidia!


>>> Edit: whoops, the non-core include-extension is not available yet ^^".
>> That's not a part of 3.3, is it? ;)
> No.
And maybe this is for the best... I'm really not sure about this extension.
Eh, it's really not bad. The driver doesn't "hit the disk" or anything. IIRC, you load the include files and give the content strings and the "filenames" to the driver, and it just uses that as a dictionary to resolve #includes. So the driver does the strcat'ing instead of you. Similar to the Cg interface IIRC. Not something you have to use if you don't like it.

Nice thing is that presumably it works with the preprocessor so you can do conditional includes and such to speed up compilation time without having to use an external preprocessor or write your own. Cleaner than #ifdefs around large blocks of inline code.
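
Roughly, usage looks like this (a sketch off the top of my head, not tested; it assumes the ARB entry points are already loaded, and the include name and its content are made up for illustration):

/* Register a named string; the driver keeps it in an in-memory
 * dictionary keyed by the virtual "filename" (it never hits the disk). */
const char *name   = "/common/lighting.glsl";
const char *source = "vec3 lambert(vec3 n, vec3 l) { return vec3(max(dot(n, l), 0.0)); }";
glNamedStringARB(GL_SHADER_INCLUDE_ARB,
                 -1, name,     /* -1: name is null-terminated */
                 -1, source);  /* -1: content is null-terminated */

/* A shader that declares
 *   #extension GL_ARB_shading_language_include : require
 * can then write: #include "/common/lighting.glsl" */
const char * const searchPaths[] = { "/common" };
glCompileShaderIncludeARB(shader, 1, searchPaths, NULL); /* 'shader' created elsewhere */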

Groovounet
03-18-2010, 04:56 PM
Ahaha, I'm really not waiting for the GeForce GTX 480 but for the following generation; that was my cynical point ;) Or maybe just to laugh and get depressed :p

Simon Arbon
03-18-2010, 05:12 PM
Great to see that they still support 2.1 hardware; this saves a lot of work on alternate code paths for new projects (see the sampler-object sketch after the list).


For OpenGL 2.1 capable hardware, these new extensions are provided:

ARB_texture_swizzle (also in core OpenGL 3.3)
ARB_sampler_objects (also in core OpenGL 3.3)
ARB_occlusion_query2 (also in core OpenGL 3.3)
ARB_timer_query (also in core OpenGL 3.3)
ARB_explicit_attrib_location (also in core OpenGL 3.3)
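
As one example of what this buys you on older hardware, ARB_sampler_objects decouples sampling state from texture objects, so the same texture can be sampled with different filter/wrap state on different units. A minimal sketch (assuming a context that exposes the extension):

/* Create a sampler object holding filter and wrap state. */
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

/* Bound to texture unit 0, it overrides the bound texture's own parameters. */
glBindSampler(0, sampler);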

Groovounet
03-18-2010, 05:40 PM
Yes, this is really great!

Dark Photon
03-18-2010, 05:59 PM
In addition to the new 3.3 extensions:

ARB_texture_swizzle
ARB_sampler_objects
ARB_occlusion_query2
ARB_timer_query
ARB_explicit_attrib_location
ARB_blend_func_extended
ARB_instanced_arrays
ARB_shader_bit_encoding
ARB_texture_rgb10_a2ui
ARB_vertex_type_2_10_10_10_rev

the driver also comes with the 4.0 extension:

ARB_transform_feedback2

and these other new extensions:

NV_texture_multisample
NVX_gpu_memory_info

Anybody seen some beef on the latter two?
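
From the name, NVX_gpu_memory_info at least looks like it should be plain glGetIntegerv queries. A guess at the usage (token names assumed from the extension registry, so treat this as unverified; values are reportedly in kilobytes):

GLint dedicated_kb = 0, total_kb = 0, available_kb = 0;
glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX, &dedicated_kb);
glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &total_kb);
glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &available_kb);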

Groovounet
03-18-2010, 06:15 PM
I think ARB_transform_feedback2 is supported by GT200 but not by G80.

Rob Barris
03-18-2010, 09:37 PM
Nice thing is that presumably it works with the preprocessor so you can do conditional includes and such to speed up compilation time without having to use an external preprocessor or write your own. Cleaner than #ifdefs around large blocks of inline code.

Bingo

barthold
03-18-2010, 10:19 PM
I think ARB_transform_feedback2 is supported by GT200 but not by G80.

Correct.

Rob A
03-19-2010, 04:21 PM
The driver download page says that the newly released drivers support GeForce 8000 and up, but when I attempt to update from my previous driver it says the device isn't supported.

I have a GeForce 8600M GT, 32-bit XP SP3.

Is this not going to work? Is the current driver release only for the desktop card models?

Groovounet
03-24-2010, 09:51 AM
They're probably desktop-only drivers. There are tricks to make them run on laptops, I think, by modifying some .inf files.

You would have to wait a little for laptop drivers, I guess, but OpenGL 3.3 will run.

Aleksandar
03-24-2010, 01:03 PM
They work perfectly fine with my 8600M GT, as can be seen at the following link :)
http://sites.google.com/site/opengltutorialsbyaks/download/extension-viewer

In fact, you don't need to wait. You can modify the .inf file yourself (the intervention is not trivial, because you have to find the closest matching device and use that driver entry for your card), or go to http://www.laptopvideo2go.com/ and download an already-modified file.

All GL 3.3 extensions are supported except GL_ARB_shading_language_include (or, to be more correct, they are all in the supported list, but I haven't tried them yet to see whether they are correctly implemented ;) ).

Groovounet
03-28-2010, 05:41 AM
It seems that this driver doesn't support the "index" qualifier from GL_ARB_blend_func_extended.

And something else, though I'm not sure where it comes from: at some point my screen, running on a GeForce, starts to blink (displaying the normal content and then black, about 1 second each). After a while (a few minutes) it went into a green-and-purple color mode... This is beyond the OpenGL drivers (which were still doing a good job). It happened 3 or 4 times.

No, it's not a problem with the screen, which works correctly plugged into a Radeon or my MacBook.

barthold
03-29-2010, 08:38 AM
> It seems that this driver doesn't support the "index" qualifier from GL_ARB_blend_func_extended.

Can you elaborate on this?

> At some point my screen running on a GeForce starts to blink

Have you seen this with any other drivers? Let's not pollute this thread with this topic. Can you send me a PM about it?

Thanks,
Barthold
(with my NVIDIA hat on)

Groovounet
03-29-2010, 09:41 AM
If I'm right, this should build:


#version 330 core
#define COLOR 0

uniform sampler2D Diffuse;

in vec2 VertTexcoord;
layout(location = COLOR, index = 0) out vec4 Color;

void main()
{
    Color = texture(Diffuse, VertTexcoord);
}

But the compiler says:
0(8): error C3008: unknown layout specifier 'index = 0'

barthold
03-30-2010, 09:47 AM
Ah yes, index=0 should be allowed (it is the default) and not generate a compiler error. Already fixed; you'll see it in the next driver update.

Barthold
(with my NVIDIA hat on)

elFarto
04-02-2010, 10:13 AM
I'm using 197.15, and I'm not sure if the following is expected or not.

I'm calling glGetActiveUniformBlockiv with GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES to get the list of active uniforms in the uniform block, then passing that list to glGetActiveUniformsiv to get each uniform's type and offset. The issue is that the glGetActiveUniformsiv calls fail with GL_INVALID_VALUE if one of the uniforms in the block isn't referenced in the shader.


layout(shared) uniform PerMaterial
{
    vec3 ambientColour;
    vec3 diffuseColour;
    vec3 specularColour;
    float specPower;
};
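
The query sequence, roughly (a trimmed sketch; 'program' is the linked program object, error checking and allocation omitted):

GLuint blockIndex = glGetUniformBlockIndex(program, "PerMaterial");

GLint numUniforms = 0;
glGetActiveUniformBlockiv(program, blockIndex,
                          GL_UNIFORM_BLOCK_ACTIVE_UNIFORMS, &numUniforms);

GLint indices[16]; /* assumes numUniforms <= 16 for brevity */
glGetActiveUniformBlockiv(program, blockIndex,
                          GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES, indices);

/* These are the calls that raise GL_INVALID_VALUE when one of the
 * block's uniforms isn't referenced by the shader: */
GLint types[16], offsets[16];
glGetActiveUniformsiv(program, numUniforms, (const GLuint *)indices,
                      GL_UNIFORM_TYPE, types);
glGetActiveUniformsiv(program, numUniforms, (const GLuint *)indices,
                      GL_UNIFORM_OFFSET, offsets);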
Is this expected behaviour?

Thanks & Regards
elFarto

Alfonse Reinheart
04-02-2010, 11:24 AM
The issue is that the glGetActiveUniformsiv calls fail with GL_INVALID_VALUE if one of the uniforms in the block isn't referenced in the shader.

That's contrary to the spec. Because you gave it a "shared" layout, the uniforms are all automatically considered active. Otherwise sharing wouldn't work.

elFarto
04-02-2010, 11:52 AM
That's contrary to the spec. Because you gave it a "shared" layout, the uniforms are all automatically considered active. Otherwise sharing wouldn't work.
Good, that's what I thought should happen.

I've had it confirmed that this works as expected on ATI cards.

Regards
elFarto

skynet
04-03-2010, 12:40 PM
This is a known bug, see here:
http://www.opengl.org/discussion_boards/...true#Post274302 (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=274302&Searchpage=1&Main=52901&Words=GL_INVALID_VALUE&Search=true#Post274302)

pbrown
04-03-2010, 02:38 PM
This is a known bug, see here:
http://www.opengl.org/discussion_boards/...true#Post274302 (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=274302&Searchpage=1&Main=52901&Words=GL_INVALID_VALUE&Search=true#Post274302)

That's correct. It has already been fixed and will appear in our next OpenGL driver release.



That's contrary to the spec. Because you gave it a "shared" layout, the uniforms are all automatically considered active. Otherwise sharing wouldn't work.

As far as I can tell, an implementation is not required to report unreferenced uniforms in "shared" uniform blocks as active. It is required to allocate buffer storage for unreferenced uniforms because the same block layout may be used by other shaders where those uniforms are active. NVIDIA's drivers currently treat all such uniforms as active, but that isn't required by the spec.

Regardless, there still was a driver bug in this case, as the driver was assigning and reporting an active uniform index for the unreferenced uniform but then rejecting that index in glGetActiveUniformsiv().

It would not be a bug if a driver handled such uniforms by returning INVALID_INDEX in glGetUniformIndices() and not enumerating them in glGetActiveUniformBlocks(..., GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES, ...).
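
In other words, portable code should tolerate both behaviours. A sketch (the member name is borrowed from elFarto's block; 'program' is assumed to be a linked program):

const GLchar * const names[] = { "specularColour" };
GLuint index = GL_INVALID_INDEX;
glGetUniformIndices(program, 1, names, &index);

if (index != GL_INVALID_INDEX) {
    GLint offset = 0;
    glGetActiveUniformsiv(program, 1, &index, GL_UNIFORM_OFFSET, &offset);
    /* ... use the offset ... */
} else {
    /* Unreferenced in this shader: query the offset via a shader that does
     * reference it, or use std140 layout and compute offsets yourself. */
}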

04-05-2010, 05:13 AM
I would really love to see it in the next OpenGL driver release.

oscarbg
04-06-2010, 10:22 AM
Any hope of having the "precise" qualifier outside of the EXT_gpu_shader5 extension? It's not a GPU feature, only a compiler feature...
I.e., not only on Fermi and Cypress GPUs; I want it on GT200, for example.
As it stands, double-precision emulation on D3D10-class GPUs using float-float approaches gets optimized away by the NVIDIA compiler!
Example code that currently gets (incorrectly) optimized:

vec2 dblsgl_add (vec2 x, vec2 y)
{
    precise vec2 z;
    float t1, t2, e;

    t1 = x.y + y.y;
    e = t1 - x.y;
    t2 = ((y.y - e) + (x.y - (t1 - e))) + x.x + y.x;
    z.y = e = t1 + t2;
    z.x = t2 - (e - t1);
    return z;
}

vec2 dblsgl_mul (vec2 x, vec2 y)
{
    precise vec2 z;
    float up, vp, u1, u2, v1, v2, mh, ml;

    up = x.y * 4097.0;
    u1 = (x.y - up) + up;
    u2 = x.y - u1;
    vp = y.y * 4097.0;
    v1 = (y.y - vp) + vp;
    v2 = y.y - v1;
    //mh = __fmul_rn(x.y, y.y);
    mh = x.y * y.y;
    ml = (((u1 * v1 - mh) + u1 * v2) + u2 * v1) + u2 * v2;
    //ml = (__fmul_rn(x.y, y.x) + __fmul_rn(x.x, y.y)) + ml;
    ml = (x.y * y.x + x.x * y.y) + ml;

    mh = mh; // no-op in GLSL
    z.y = up = mh + ml;
    z.x = (mh - up) + ml;
    return z;
}

Alfonse Reinheart
04-06-2010, 12:08 PM
Seriously, you don't need to post this everywhere. The GL suggestions forum was fine.

oscarbg
04-06-2010, 07:38 PM
ok sorry

James A.
05-01-2010, 04:35 AM
Hi,

I've started to try some stuff out in 3.3 but have fallen at the starting blocks. I can't seem to create a 3.3 context :(

I'm working on a Win7 64bit PC, with an NVIDIA GeForce 9600GT and 197.45 drivers. I can set up 3.0, 3.1 and 3.2 contexts without any problems.

After calling wglCreateContext, glGetString returns only a 3.2 version; if I try to create a 3.3 context with wglCreateContextAttribsARB, it returns NULL.

Any ideas what could be wrong?
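
For reference, the creation code boils down to this (a sketch; it assumes wglCreateContextAttribsARB was fetched via wglGetProcAddress from a temporary legacy context, and that hDC is a valid device context):

const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0 /* attribute list terminator */
};
HGLRC hRC = wglCreateContextAttribsARB(hDC, NULL, attribs);
/* hRC comes back NULL here on 197.45, while 3.0/3.1/3.2 work fine. */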

barthold
05-05-2010, 05:41 PM
James, you need driver 197.44 (not 197.45). You can get that here:

http://developer.nvidia.com/object/opengl_driver.html

Regards,
Barthold
(with my NVIDIA hat on)

James A.
05-15-2010, 06:10 AM
That seems to have done the job, thank you very much :)