NVIDIA releases OpenGL 4.2 drivers

NVIDIA is proud to announce the immediate availability of OpenGL 4.2 drivers for Windows and Linux.

You will need one of the following Fermi-based GPUs to get access to the full OpenGL 4.2 and GLSL 4.20 functionality:

- Quadro Plex 7000, Quadro 6000, Quadro 5000, Quadro 4000, Quadro 2000, Quadro 600
- GeForce 500 series (GTX 590, GTX 580, GTX 570, GTX 560 Ti, GTX 560, GTX 550 Ti, GT 545, GT 530, GT 520)
- GeForce 400 series (GTX 480, GTX 470, GTX 465, GTX 460 SE v2, GTX 460 SE, GTX 460, GTS 450, GT 440, GT 430, GT 420, 405)

For OpenGL 2 capable hardware, these new extensions are provided:

- ARB_compressed_texture_pixel_storage (also in core OpenGL 4.2)
- ARB_conservative_depth (also in core OpenGL 4.2)
- ARB_internalformat_query (also in core OpenGL 4.2)
- ARB_map_buffer_alignment (also in core OpenGL 4.2)
- ARB_shading_language_420pack (also in core OpenGL 4.2)
- ARB_texture_storage (also in core OpenGL 4.2)

For OpenGL 3 capable hardware, these new extensions are provided:

- ARB_base_instance (also in core OpenGL 4.2)
- ARB_shading_language_packing (also in core OpenGL 4.2)
- ARB_transform_feedback_instanced (also in core OpenGL 4.2)

For OpenGL 4 capable hardware, these new extensions are provided:

- ARB_shader_atomic_counters (also in core OpenGL 4.2)
- ARB_shader_image_load_store (also in core OpenGL 4.2)
- ARB_texture_compression_bptc (also in core OpenGL 4.2)

The drivers and extension documentation can be downloaded from http://developer.nvidia.com/object/opengl_driver.html

Does this driver support OpenCL 1.1?
(edit: yes it does)

The site seems to be down. Is this a server hitch or something else?

It appears to be working for me now. Is it okay for you now?

Yeah, it’s back up now. Thanks!

GJ Nvidia!
Unfortunately, you released the mobile 400 series too late for me :frowning:

But keep going!

Well, my program stopped working. I found out that the problem is in this call:

glVertexAttribPointer(0, 2, GL_FLOAT, false, 24, (void*)0);//GL_INVALID_OPERATION

And I checked to make sure I have a buffer object bound to GL_ARRAY_BUFFER before making the call.
I’m using a core 3.2 profile.

Make sure you have a valid VAO bound. For core profiles the spec requires a VAO to be bound; otherwise glVertexAttribPointer and other vertex functions generate GL_INVALID_OPERATION. We used to not check for this, but now we do, to be more spec-compliant. This doesn’t apply to the compatibility profile.

You’re right, the spec requires it. The driver never complained before… :confused:

When using the binding layout qualifier for images as follows:


#version 420 core

layout(rgba16ui, binding = 0) writeonly uniform uimage2D _stuff;
...

the following GLSL error is generated:


error C1315: can't apply layout to global variable '_stuff'

This is legal according to the GLSL 4.20 spec, but the binding qualifier is not listed in the ARB_shader_image_load_store extension.

It was a known and famous NVIDIA driver bug. It’s good to see it fixed, even though VAOs are nothing but annoying most of the time.

A workaround and quick fix for your application is to create and bind a VAO at the beginning (right after context creation); you can then forget about it.

Not the ultimate fix, but it gets your application running again with no other change.
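A sketch of that workaround (assumes a current core-profile context and a GL 3.0+ function loader):

```cpp
// Create one vertex array object right after context creation and
// leave it bound; in a core profile a bound VAO is required before
// glVertexAttribPointer and friends, otherwise they generate
// GL_INVALID_OPERATION.
GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
```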

A little bit of fun: the following image shows the Fermi rasterizer pattern, captured using an atomic counter:

I have the following problem with immutable textures: I create a mipmapped 2D texture using glTexStorage2D, requesting the full number of mip levels. After this I upload the data of the individual mip levels using glTexSubImage2D. The problem is that the resulting texture object does not have any mip maps. When I replace the glTexStorage2D call with glTexImage2D calls for all mip levels, the texture is correctly initialized with all mip maps filled.

NOT working:


glTexStorage2D(object_target(),
               init_mip_levels,
               util::gl_internal_format(in_desc._format),
               in_desc._size.x, in_desc._size.y);
// no error reported
for (unsigned i = 0; i < init_mip_levels; ++i) {
    math::vec2ui lev_size      = util::mip_level_dimensions(in_desc._size, i);
    const void*  init_lev_data = in_initial_mip_level_data[i];
    glTexSubImage2D(object_target(),
                    i,
                    0, 0,
                    lev_size.x, lev_size.y,
                    gl_base_format,
                    gl_base_type,
                    init_lev_data);
}
// still no errors reported

working:


for (unsigned i = 0; i < init_mip_levels; ++i) {
    math::vec2ui lev_size      = util::mip_level_dimensions(in_desc._size, i);
    glTexImage2D(object_target(),
                 i,
                 util::gl_internal_format(in_desc._format),
                 lev_size.x, lev_size.y,
                 0,
                 gl_base_format,
                 gl_base_type,
                 0);
}

// no error reported
for (unsigned i = 0; i < init_mip_levels; ++i) {
    math::vec2ui lev_size      = util::mip_level_dimensions(in_desc._size, i);
    const void*  init_lev_data = in_initial_mip_level_data[i];
    glTexSubImage2D(object_target(),
                    i,
                    0, 0,
                    lev_size.x, lev_size.y,
                    gl_base_format,
                    gl_base_type,
                    init_lev_data);
}
// still no errors reported
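As a side note, the mip chain bookkeeping both loops rely on follows the usual halving rule. Here is a minimal, self-contained sketch of what helpers like `util::mip_level_dimensions` and the full level count presumably compute (the names and types below are stand-ins, not the original `util`/`math` code):

```cpp
#include <algorithm>

// Stand-in for math::vec2ui from the snippets above.
struct vec2ui { unsigned x, y; };

// Number of levels in a full mip chain for a w x h base level:
// floor(log2(max(w, h))) + 1.
unsigned full_mip_levels(unsigned w, unsigned h) {
    unsigned levels = 1;
    while (w > 1 || h > 1) {
        w = std::max(1u, w / 2);
        h = std::max(1u, h / 2);
        ++levels;
    }
    return levels;
}

// Dimensions of a given mip level: each level halves the previous
// one, clamped at 1, matching GL's level-size rule.
vec2ui mip_level_dimensions(vec2ui base, unsigned level) {
    return { std::max(1u, base.x >> level),
             std::max(1u, base.y >> level) };
}
```

Requesting fewer than the full number of levels from glTexStorage2D is also legal; storage is then allocated only for the requested levels.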

I have the following problem with immutable …

Well, it works in my code. I’ve implemented it today.

The only roadblock was an incompatible internal format: it requires a sized format, such as GL_RGBA8 instead of GL_RGBA. The documentation states that correctly (my fault).
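For illustration, the distinction looks like this (glTexStorage2D accepts only sized internal formats; an unsized one should generate an error):

```cpp
// Sized internal format: valid for immutable storage.
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, width, height);

// Unsized internal format: invalid for glTexStorage2D and
// should generate GL_INVALID_ENUM.
// glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA, width, height);
```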

I am going to test the performance changes …

Could you check that you really see trilinear or anisotropic filtering in your tests? I always see the base level, as if I had a single-level texture, in every test I did using sampler objects and plain texture parameters.

It works well for me with the following code, but I haven’t used a sampler object with it yet:

	gli::texture2D Image = gli::load(TEXTURE_DIFFUSE);

	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

	glGenTextures(1, &TextureName);
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, TextureName);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_R, GL_RED);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_GREEN);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_BLUE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_ALPHA);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 1000);
	glTexStorage2D(GL_TEXTURE_2D, GLint(Image.levels()), GL_RGBA8, GLsizei(Image[0].dimensions().x), GLsizei(Image[0].dimensions().y));

	for(std::size_t Level = 0; Level < Image.levels(); ++Level)
	{
		glTexSubImage2D(
			GL_TEXTURE_2D, 
			GLint(Level), 
			0, 0, 
			GLsizei(Image[Level].dimensions().x), 
			GLsizei(Image[Level].dimensions().y), 
			GL_BGRA, GL_UNSIGNED_BYTE, 
			Image[Level].data());
	}
	
	glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
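Note that the snippet above never sets GL_TEXTURE_MIN_FILTER, so it relies on the default (GL_NEAREST_MIPMAP_LINEAR). To explicitly request trilinear filtering, either on the texture or through a sampler object, a sketch like this would do:

```cpp
// Trilinear filtering via texture parameters:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// ...or the same through a sampler object bound to texture unit 0:
GLuint Sampler = 0;
glGenSamplers(1, &Sampler);
glSamplerParameteri(Sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(Sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindSampler(0, Sampler);
```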

OK, I triple-checked, using sampler objects and without sampler objects, with essentially the code by Groovounet. I still do not get trilinear filtering (checked with colored mip levels); I can clearly see massive texture aliasing and only see the data from level 0. When switching to the glTexImage2D loop from my original post, everything works as expected.

I am on Windows 7 x64 using the 280.28 driver. The context is a 4.2 core profile context (also checked with compatibility).

It requires sized format.

It does? Oh thank God. It’s about time they shoved those unsized formats out the door.

OK, Groovounet my friend ;), you have the exact same problem… you did not test what I was describing:

  1. your DDS image did not contain mipmaps!
  2. you didn’t even try to enable trilinear filtering!

You can find attached a modified sample and a sample image which clearly show the problem I described.

The fun part is this:


#if 1
    glTexStorage2D(GL_TEXTURE_2D, GLint(Image.levels()), GL_RGBA8, GLsizei(Image[0].dimensions().x), GLsizei(Image[0].dimensions().y));
#else
    for(std::size_t Level = 0; Level < Image.levels(); ++Level)
    {
        glTexImage2D(
            GL_TEXTURE_2D, 
            GLint(Level), 
            GL_RGBA8, 
            GLsizei(Image[Level].dimensions().x), 
            GLsizei(Image[Level].dimensions().y),
            0, 
            GL_BGRA, GL_UNSIGNED_BYTE, 
            0);
    }
#endif

LOL, I made a couple of tests and I was about to write that I had the same problem, but you beat me to it! :stuck_out_tongue: