NVIDIA releases OpenGL 4.3 beta drivers

NVIDIA is proud to announce the immediate availability of OpenGL 4.3 beta drivers for Windows and Linux.

You will need one of the following Fermi or Kepler based GPUs to get access to the full OpenGL 4.3 and GLSL 4.30 functionality:
Quadro series: 6000, 600, 5000, 410, 4000, 400, 2000D, 2000
GeForce 600 series: GTX 690, GTX 680, GTX 670, GT 645, GT 640, GT 630, GT 620, GT 610, 605
GeForce 500 series: GTX 590, GTX 580, GTX 570, GTX 560 Ti, GTX 560 SE, GTX 560, GTX 555, GTX 550 Ti, GT 545, GT 530, GT 520, 510
GeForce 400 series: GTX 480, GTX 470, GTX 465, GTX 460 v2, GTX 460 SE v2, GTX 460 SE, GTX 460, GTS 450, GT 440, GT 430, GT 420, 405

For OpenGL 3 capable hardware, these new extensions are provided:
ARB_arrays_of_arrays
ARB_clear_buffer_object
ARB_copy_image
ARB_ES3_compatibility
ARB_explicit_uniform_location
ARB_fragment_layer_viewport
ARB_framebuffer_no_attachments
ARB_internalformat_query2
ARB_invalidate_subdata
ARB_program_interface_query
ARB_robust_buffer_access_behavior
ARB_stencil_texturing
ARB_texture_buffer_range
ARB_texture_query_levels
ARB_texture_storage_multisample
ARB_texture_view
ARB_vertex_attrib_binding
KHR_debug

For OpenGL 4 capable hardware, these new extensions are provided:
ARB_compute_shader
ARB_multi_draw_indirect
ARB_shader_image_size
ARB_shader_storage_buffer_object

The drivers and extension documentation can be downloaded from http://www.nvidia.com/content/devzone/opengl-driver-4.3.html

GLEW 1.9.0 has also been released which includes support for OpenGL 4.3:
http://glew.sourceforge.net/

This will make using the new API easier.
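For example, verifying that the new functionality is actually available through GLEW looks something like this - a minimal sketch, assuming a current GL context from your windowing toolkit:

#include <GL/glew.h>
#include <stdio.h>

/* Call once, right after creating the GL context. */
int check_gl43_support(void)
{
    glewExperimental = GL_TRUE; /* expose core-profile entry points */
    if (glewInit() != GLEW_OK)
        return 0;
    if (GLEW_VERSION_4_3)
        printf("OpenGL 4.3 entry points loaded\n");
    if (GLEW_ARB_compute_shader && GLEW_KHR_debug)
        printf("compute shaders and KHR_debug available\n");
    return GLEW_VERSION_4_3 != 0;
}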

Wow! That was the quickest GLEW update in history :)

Just tried out glObjectLabel(), and it doesn’t work with GL_BUFFER or GL_TEXTURE - it throws an INVALID_ENUM for both (at least the feedback is nice: “GL_INVALID_ENUM error generated. ObjectLabel: invalid <identifier> enum value”). I also tried GL_TEXTURE_2D just in case, but this didn’t work either (nor should it, from the spec).

It works with GL_FRAMEBUFFER, GL_PROGRAM and GL_SHADER, though - at least it doesn’t throw a GL error. However, a later performance message “Program/shader state performance warning: Fragment Shader is going to be recompiled because the shader key based on GL state mismatches.” doesn’t reference my shader name at all, which seems to defeat the purpose. It’d also be nice to know what part of the GL state it’s referring to.
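For reference, here’s the kind of call that’s failing - a minimal sketch per the KHR_debug spec (the buffer and label string are just illustrative):

#include <GL/glew.h>

void label_demo(void)
{
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf); /* the object must exist before it can be labeled */
    glObjectLabel(GL_BUFFER, buf, -1, "myVertexBuffer"); /* -1: label is null-terminated */
    /* this currently generates GL_INVALID_ENUM for GL_BUFFER in the beta driver */
}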

[QUOTE]It’d also be nice to know what part of the GL state it’s referring to.[/QUOTE]

It always says this for me - I get that warning for every program I create. But at least it’s an initialization-time thing.

I installed NVIDIA beta driver 304.32 on Lubuntu 12.04 Linux using the xorg-edgers PPA so that I wouldn’t have to fool with .run scripts and turning off X servers. On my laptop I have an old GeForce 8600M GT card, which is OpenGL 3.3 class HW. The NVIDIA OpenGL driver page says, “For OpenGL 3 capable hardware, these new extensions are provided:” and gives a list of 18 extensions. Unfortunately none of them are shown as available by either glxinfo or nvidia-settings.

Anyone have OpenGL 3 class HW that’s showing the new extensions, on either Windows or Linux?
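If anyone wants to compare, here’s a quick programmatic check of what the driver reports - a minimal sketch using the GL 3.0 indexed extension query (the extension names tested are just examples):

#include <GL/glew.h>
#include <stdio.h>
#include <string.h>

/* Print any of the new 4.3 extensions the driver exposes. */
void list_new_extensions(void)
{
    GLint i, n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (i = 0; i < n; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (strcmp(ext, "GL_ARB_texture_view") == 0 || strcmp(ext, "GL_KHR_debug") == 0)
            printf("found %s\n", ext);
    }
}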

Hm, perhaps I will disable that warning.

Correction to my original post - glObjectLabel(GL_FRAMEBUFFER, …) produces an invalid enum as well. I mistook that error for a GL_TEXTURE error.

It appears we missed support for GL_TEXTURE, GL_FRAMEBUFFER and GL_RENDERBUFFER. All the others are supported. This is a trivial fix and it’ll be in the next beta update next week. Sorry for the trouble and thanks for reporting the bug.

We’ll also audit our error messages and make sure they use the object label instead of the number when one exists. That fix will come later. The extension is still useful as-is because middleware tools can query objects for the names the app sets. But clearly having the names in the debug_output strings makes sense too.
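The query side, for reference - a minimal sketch of how a tool might read a label back (the 256-byte buffer size is arbitrary):

#include <GL/glew.h>
#include <stdio.h>

/* prog is assumed to be an existing, labeled program object. */
void print_program_label(GLuint prog)
{
    char label[256];
    GLsizei len = 0;
    glGetObjectLabel(GL_PROGRAM, prog, sizeof(label), &len, label);
    printf("program %u is labeled \"%s\"\n", prog, label);
}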

Thanks! I also wouldn’t mind something like “object name (object id)” in the debug output.

What is the best way to file a bug report to NVIDIA for their beta drivers? I used their generic “Submit feedback on an NVIDIA product” form, but the release notes for the driver direct one to an NVIDIA developer site. That site seems to be isolated from the main NVIDIA site; the developer area reached from the main site is different. I used to have an NVIDIA developer account eons ago, but it seems to have lapsed. I don’t see any obvious way to create an account for the nvdeveloper.nvidia.com site, and it doesn’t accept the username and password I have for the main site. So I have the feeling that the left hand doesn’t know what the right hand is doing, and nobody’s actually going to read the feedback I just filed. What’s the best way?

Please send Piers or me a private message so we can take care of this and file a bug.

(side note: our developer forums were recently compromised by a third party, which might explain why your account is no longer working)

[QUOTE=Brandon J. Van Every;1241294]I installed NVIDIA beta driver 304.32 on Lubuntu 12.04 Linux using the xorg-edgers PPA so that I wouldn’t have to fool with .run scripts and turning off X servers. On my laptop I have an old GeForce 8600M GT card, which is OpenGL 3.3 class HW. The NVIDIA OpenGL driver page says, “For OpenGL 3 capable hardware, these new extensions are provided:” and gives a list of 18 extensions. Unfortunately none of them are shown as available by either glxinfo or nvidia-settings.

Anyone have OpenGL 3 class HW that’s showing the new extensions, on either Windows or Linux?[/QUOTE]

I think if you installed a driver with version 304.32 you probably didn’t use the OpenGL 4.3 beta driver, which is version 304.15.00.02 for Linux. See here for the driver location:
http://www.nvidia.com/content/devzone/opengl-driver-4.3.html

Found another small one. I’m not sure when this crept in, but I was previously using 295.49 without a hitch. I have a vertex shader with the following outputs:

flat out ivec4 pickID;
     out float pickZ;

When I call glGetProgramiv( pid, GL_TRANSFORM_FEEDBACK_VARYING_MAX_LENGTH, &max_len) on the parent program of the vertex shader (no other shader stages), it sets max_len = 6. According to the GL spec it should return the length of the longest varying name including the null terminator, in this case 7. When I then pass max_len to glGetTransformFeedbackVarying() as the bufSize parameter, ‘pickID’ is cut off to ‘pickI’ (which is expected if bufSize==6), and this messes up further rendering.

In the meantime I’ve accounted for the null terminator by adding one to max_len. I can afford the extra byte :)
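In code, the workaround amounts to this - a sketch, where pid and the varying index follow my setup above:

#include <GL/glew.h>
#include <stdlib.h>

void get_varying_name(GLuint pid)
{
    GLint max_len = 0;
    GLsizei length = 0, size = 0;
    GLenum type = 0;
    glGetProgramiv(pid, GL_TRANSFORM_FEEDBACK_VARYING_MAX_LENGTH, &max_len);
    max_len += 1; /* driver currently reports the length without the null terminator */
    char *name = (char *)malloc(max_len);
    glGetTransformFeedbackVarying(pid, 0, max_len, &length, &size, &type, name);
    /* name now holds "pickID" in full instead of being truncated to "pickI" */
    free(name);
}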

[QUOTE=malexander;1241428]
When I call glGetProgramiv( pid, GL_TRANSFORM_FEEDBACK_VARYING_MAX_LENGTH, &max_len) on the parent program of the vertex shader (no other shader stages), it sets max_len = 6. According to the GL spec it should return the length of the longest varying name including the null terminator, in this case 7. [/QUOTE]

I’ve confirmed that this is a recent regression, and might only be in the OpenGL 4.3 beta driver. I’ll try to get this into the next beta driver.

You’re also welcome to send me bug reports. My NVIDIA email ID is the same as my user ID on these forums.

pbrown:

I have an issue with a compute shader, a buffer, and the usage hint of the shader storage buffer:

I’m pretty sure I’m doing something ugly, but the behavior of the code changes completely depending on the buffer’s usage hint.
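For context, the setup looks roughly like this - a sketch, where the size, data and binding index are placeholders (and per the spec the hint is only advisory):

#include <GL/glew.h>

void make_ssbo(void)
{
    GLuint ssbo;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    /* swapping GL_DYNAMIC_COPY for, say, GL_STATIC_DRAW completely changes the results */
    glBufferData(GL_SHADER_STORAGE_BUFFER, 1024, NULL, GL_DYNAMIC_COPY);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo); /* binding = 0 in the shader */
}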

Should I send you an email?

[QUOTE=Piers Daniell;1241407]I think if you installed a driver with version 304.32 you probably didn’t use the OpenGL 4.3 beta driver, which is version 304.15.00.02 for Linux. See here for the driver location:
http://www.nvidia.com/content/devzone/opengl-driver-4.3.html[/QUOTE]

When I installed the exact driver version 304.15.00.02, I did get the 18 new extensions for my 3.x class HW. Unfortunately it also sent me up a driver-installation and package-update learning curve that destabilized my system. I tried to use Duplicity to restore it and that totally failed, although I didn’t lose any personal data. I don’t recommend the xorg-edgers stuff: they don’t actually have the beta driver, and all their extra package junk is pretty much what destabilized my system. Hopefully I’ll have better luck working from mainstream Lubuntu 12.04. I’ve got a fresh system now. Once I’ve found a more reliable system backup program, I’ll try the NVIDIA .run install again, and then hopefully have a viable development system.

[QUOTE=guibou;1241454]pbrown:

I have an issue with a compute shader, a buffer, and the usage hint of the shader storage buffer:

Should I send you an email?[/QUOTE]

Someone already spotted that, and Piers root-caused and fixed the bug. Thanks for the helpful report and reproducer. If you have more issues that you suspect are likely driver things, feel free to shoot me an email.

Pat

EDIT: Something I thought was related to this driver turned out to be present in an earlier driver as well; making a separate post.

First: is NVIDIA monitoring the OpenGL 4.3.0.0 sample pack by g-truc, and aware that three of the samples are not working?

I’m concentrating on the two samples I’m most interested in. First, in the 430-image_store sample I get what I think is an error in the generated assembly:

line 22, column 5: error: supported only on load, store, and atomic instructions

which refers to:

IMQ.COH R0.xy, images[R0.x], 2D;

This is related to the new function

ivec2 Size = imageSize(Diffuse);

in the shader, and I can fix it by removing coherent from the image declaration, changing

layout(binding = 0, rgba8) coherent uniform image2D Diffuse; // doesn't work

to:

layout(binding = 0, rgba8) uniform image2D Diffuse; // works!

Full shader, with the failing declaration noted and the patch applied:

#version 420 core
#extension GL_ARB_shader_image_size : require

#define FRAG_COLOR        0
#define DIFFUSE            0

in vec4 gl_FragCoord;
//layout(binding = 0, rgba8) coherent uniform image2D Diffuse; // doesn't work
layout(binding = 0, rgba8) uniform image2D Diffuse; // works!


layout(location = FRAG_COLOR, index = 0) out vec4 Color;

const int Border = 8;

void main()
{
    ivec2 Size = imageSize(Diffuse);

    if(gl_FragCoord.x < Border)
        Color = vec4(1.0, 0.0, 0.0, 1.0);
    if(gl_FragCoord.x > Size.x - Border)
        Color = vec4(0.0, 1.0, 0.0, 1.0);
    if(gl_FragCoord.y < Border)
        Color = vec4(1.0, 1.0, 0.0, 1.0);
    if(gl_FragCoord.y > Size.y - Border)
        Color = vec4(0.0, 0.0, 1.0, 1.0);
    else
        Color = imageLoad(Diffuse, ivec2(gl_FragCoord.xy));
}

Fragment info

Internal error: assembly compile error for fragment shader at offset 611:
– error message –
line 22, column 5: error: supported only on load, store, and atomic instructions
– internal assembly text –

!!NVfp5.0
OPTION ARB_shader_image_size;
OPTION NV_shader_atomic_float;
# cgc version 3.1.0001, build date Aug  8 2012
# command line args: 
#vendor NVIDIA Corporation
#version 3.1.0.1
#profile gp5fp
#program main
#semantic Diffuse : IMAGE[0]
#var float4 Color : $vout.COL0 : COL0[0] : -1 : 1
#var int Diffuse.__remap :  : c[0] : -1 : 1
#var float4 gl_FragCoord : $vin.WPOS : WPOS : -1 : 1
PARAM c[1] = { program.local[0] };
TEMP R0;
TEMP RC, HC;
IMAGE images[] = { image[0..7] };
OUTPUT result_color0 = result.color;
MOV.S R0.x, c[0];
SLT.F R0.z, fragment.position.x, {8, 0, 0, 0}.x;
TRUNC.U.CC HC.x, R0.z;
IMQ.COH R0.xy, images[R0.x], 2D;
IF    NE.x;
MOV.F result_color0, {1, 0, 0, 0}.xyyx;
ENDIF;
ADD.S R0.x, R0, -{8, 0, 0, 0};
I2F.S R0.x, R0;
SGT.F R0.x, fragment.position, R0;
TRUNC.U.CC HC.x, R0;
IF    NE.x;
MOV.F result_color0, {0, 1, 0, 0}.xyxy;
ENDIF;
SLT.F R0.x, fragment.position.y, {8, 0, 0, 0};
TRUNC.U.CC HC.x, R0;
IF    NE.x;
MOV.F result_color0, {1, 0, 0, 0}.xxyx;
ENDIF;
ADD.S R0.x, R0.y, -{8, 0, 0, 0};
I2F.S R0.x, R0;
SGT.F R0.x, fragment.position.y, R0;
TRUNC.U.CC HC.x, R0;
IF    NE.x;
MOV.F result_color0, {0, 1, 0, 0}.xxyy;
ELSE;
TRUNC.S R0.xy, fragment.position;
MOV.S R0.z, c[0].x;
LOADIM.U32.COH R0.x, R0, images[R0.z], 2D;
UP4UB.F result_color0, R0.x;
ENDIF;
END
# 31 instructions, 1 R-regs
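For anyone reproducing this, the compile log above is retrieved with the standard info-log query - a minimal sketch, where fs is assumed to be the failing fragment shader object:

#include <GL/glew.h>
#include <stdio.h>
#include <stdlib.h>

void dump_shader_log(GLuint fs)
{
    GLint len = 0;
    glGetShaderiv(fs, GL_INFO_LOG_LENGTH, &len);
    if (len > 1) {
        char *log = (char *)malloc(len);
        glGetShaderInfoLog(fs, len, NULL, log);
        fprintf(stderr, "%s\n", log);
        free(log);
    }
}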