NVIDIA releases OpenGL 4.0 drivers

NVIDIA is proud to announce the immediate availability of OpenGL 4 drivers for Linux as well as OpenGL 4 WHQL-certified drivers for Windows. Additionally, support for eight new extensions is provided:

  • ARB_texture_compression_bptc – provides new texture compression formats for both fixed-point and high dynamic range floating-point texels (a small allocation sketch follows this list).
  • EXT_shader_image_load_store - allows GLSL- and assembly-based shaders to load from, store to, and perform atomic read-modify-write operations to texture images.
  • EXT_vertex_attrib_64bit - provides OpenGL shading language support for vertex shader inputs with 64-bit floating-point components and OpenGL API support for specifying the value of those inputs.
  • NV_vertex_attrib_integer_64bit - provides support for specifying vertex attributes with 64-bit integer components, analogous to the 64-bit floating point support added in EXT_vertex_attrib_64bit.
  • NV_gpu_program5 - provides assembly programmability support for new hardware features provided by NVIDIA’s OpenGL 4.0-capable hardware in vertex, fragment, and geometry programs.
  • NV_tessellation_program5 - provides assembly programmability support for tessellation control and evaluation programs.
  • NV_gpu_shader5 - provides a superset of the features provided in ARB_gpu_shader5 and GLSL 4.00. This includes support for a full set of 8-, 16-, 32-, and 64-bit scalar and vector integer data types, and more. Additionally, it allows patches (as used in tessellation) to be passed on to the geometry shader, used as input to transform feedback, and rasterized as a set of control points.
  • NV_shader_buffer_store – extends the bindless graphics capabilities of the NV_shader_buffer_load extension. This extension provides the ability to store to buffer object memory, and to perform atomic read-modify-write operations, using either GLSL- or assembly-based shaders.
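
As an illustration (not part of the announcement), here is a minimal sketch of requesting one of the BPTC formats through the ordinary texture path; GL_COMPRESSED_RGBA_BPTC_UNORM_ARB comes from the ARB_texture_compression_bptc spec, while width, height and pixels are placeholder application data:

/* Ask the driver to store an RGBA image in the new BPTC format.   */
/* width, height and pixels are placeholders for application data. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_BPTC_UNORM_ARB,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
/* Pre-compressed data could be uploaded with glCompressedTexImage2D instead. */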

The drivers and extension documentation can be downloaded from http://developer.nvidia.com/object/opengl_driver.html

Happy coding!

Barthold
(with my NVIDIA hat on)

Nice, AMD and NVIDIA going strong with OpenGL driver support! Now if only Intel could catch up and we could get some OpenGL 3.x / OpenGL 4.x for Mac OS X…

But, things are improving for OpenGL, keep up the good work!
(and I hope we’ll see atomic read/modify/write back in OpenGL 4.1… I have hope :)).

NV_shader_buffer_store and EXT_shader_image_load_store look very nice.

EXT_shader_image_load_store even fulfils one of the proposals on the wiki, “write to specific samples within a shader”.

Regards
elFarto

*** Great work! ***

I’m surprised to see this fine granularity of int support while we still don’t have half support. Is half supposed to be directly supported in the hardware? Or was it just on old chips that didn’t have full support for single-precision floats?

We might have some clues here on what’s coming for OpenGL 3.4 and 4.1!

With these drivers we also fixed the issues reported with our earlier OpenGL 3.3 drivers. If you were one of the bug reporters, I’d like to know if your issue is indeed now fixed.

Thanks,
Barthold
(with my NVIDIA hat on)

On the site it’s listed as only supported on OpenGL 4.0 hardware. Is there any technical reason for not making this available on 3.3 hardware? As an owner of a GTX285, I’m immensely disappointed.

On the site it’s listed as only supported on OpenGL 4.0 hardware. Is there any technical reason for not making this available on 3.3 hardware?

Yes, these formats require decompression hardware that will generally only be found on OpenGL 4.0-capable hardware.

If you were one of the bug reporters, I’d like to know if your issue is indeed now fixed.

I confirm that with 197.44, you can retrieve the uniform offset of all uniforms in a shared-layout uniform block, even if the uniforms themselves are not referenced by the shader.
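
For reference, a rough sketch of the query being discussed (prog and the uniform name "Block.color" are placeholders; GL_UNIFORM_OFFSET is the standard ARB_uniform_buffer_object query):

/* Look up the byte offset of a block member, even if the shader never references it. */
/* prog and "Block.color" are placeholders for the application's program and uniform. */
const GLchar *name = "Block.color";
GLuint index = GL_INVALID_INDEX;
glGetUniformIndices(prog, 1, &name, &index);
GLint offset = -1;
glGetActiveUniformsiv(prog, 1, &index, GL_UNIFORM_OFFSET, &offset);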

Sampler object fixed too!

Thanks guys for confirming the bug fixes.

Barthold
(with my NVIDIA hat on)

I didn’t report this one, but it looks like 197.44 fixes a GLSL bug where breaking out of a for loop would increment the loop counter an additional time.

197.44 doesn’t expose GL_ARB_gpu_shader_fp64 on a GTX 275…
Is NVIDIA going to expose the double-precision extension GL_ARB_gpu_shader_fp64 on GTX 280 cards in the future, given that they support doubles in CUDA?
Also, forcing it on, I get:
0(5) : warning C7547: extension GL_ARB_gpu_shader_fp64 not supported in profile gp4fp

Sweet, any ETA for Cg support for NV_gpu_program5 and related?

Is EXT_direct_state_access orthogonal w.r.t. ARB_texture_multisample? I don’t see a glTextureImage2DMultisampleEXT in glew…

You can’t expect EXT_direct_state_access to be updated for every extension that comes out. It has been updated for some extensions since its initial release, but even that caused versioning problems with extension loaders.

In short, no. DSA does not have that function.
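
For completeness, the non-DSA path does exist. A minimal sketch, assuming a GL 3.2+/ARB_texture_multisample context (samples, width and height are placeholder values):

/* Allocate a multisample texture the classic bind-to-target way. */
GLuint tex;
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, tex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8,
                        width, height, GL_TRUE);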

Hi,
While reworking some of my texture binding functionality, I noticed a bug in the latest OpenGL 3.3 and 4.0 beta drivers (197.44) when binding sampler objects.

When running on a previous-generation card (FX5800 in my case) and an OpenGL 3.3 context, the following way to bind a sampler object fails with GL_INVALID_VALUE:

glBindSampler(0, _sampler_id);

This is the exact way to do it according to the spec. Doing it like with texture objects works fine:

glBindSampler(GL_TEXTURE0, _sampler_id);

However, when running the same code on a GTX 480 with the same driver, also on a 3.3 context, the first way works as expected and the second way correctly throws the GL_INVALID_VALUE error.

Regards
-chris

Yeah, there’s a thread about this. NVIDIA (and the ARB) is aware of the problem and will have a fix in their next driver revision.

OK, I did not see that thread. I think the info that it works both ways in NVIDIA drivers, depending on what hardware you run it on, is new ;).

There shouldn’t be any difference in BindSampler behavior on NVIDIA GPUs between hardware versions, GeForce/Quadro, or OpenGL context versions. The first 3.3 driver did erroneously accept TEXTURE0 as described in the thread linked by Alphonse.
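
To spell out the intended usage, here is a minimal sketch (the filter settings are just example values):

/* glBindSampler takes a zero-based texture unit index, not GL_TEXTURE0. */
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindSampler(0, sampler);              /* correct: unit index 0 */
/* glBindSampler(GL_TEXTURE0, sampler) should raise GL_INVALID_VALUE */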

My guess is that your FX5800 was actually running with the older OpenGL 3.3-only driver (197.15?). Maybe something like the following happened?

  • plug in GeForce GTX 480
  • install 197.44
  • everything works as expected
  • replace with Quadro FX 5800
  • after reboot, Windows plug-n-play ends up using the 197.15 driver, which accepts only TEXTURE0

I have seen something like this happen to me in the past, though I’m not sure how such a situation arises.

You are right, the machine I put the FX5800 in was actually running 197.15. It is not my main development machine, so I did not catch that. Thanks!