Official NVIDIA OpenGL 3.0 beta driver thread

We’ve posted OpenGL 3.0 beta drivers here:

http://developer.nvidia.com/object/opengl_3_driver.html

Note that you’ll also need nvemulate to turn on OpenGL 3.0:

http://developer.nvidia.com/object/nvemulate.html

This will run on G80 and up only, since OpenGL 3.0 requires G80 class hardware.

Please use this thread to discuss driver-specific issues only. Make a new thread, or use an existing one, for general OpenGL 3.0 questions. Finally, a friendly request: please leave out any non-constructive comments; they really aren't going to help anything.

Thanks, and happy coding!

Barthold
With my NVIDIA hat on

Any idea on when we can expect a Linux OpenGL 3.0 driver?

-rw

Thanks Nvidia for the quick release.

For any newbies wanting to play with GL 3, don’t forget to do something like this to get a GL 3 context:

int attribs[3];
attribs[0] = WGL_CONTEXT_MAJOR_VERSION_ARB;
attribs[1] = 3;
attribs[2] = 0; // 0 terminates the attribute list

WGL_CONTEXT_MINOR_VERSION_ARB defaults to 0 anyway, so there's no need to specify it.

wglCreateContextAttribsARB(hdc,hglrc,attribs);

Happy coding.

ATI if you are reading, hurry up and stop shafting the GL community.

If you’re running under Vista make sure you run the nvemulate tool as administrator.

Will those drivers work with a mobile 9500GS card? My Asus laptop has a 9500GS, and I sold my desktop to get ready to build my new Intel i7 quad core desktop powerhouse!! :wink:

They should work with a 9500gs according to NVidia.

Found 2 possible bugs with the 3.0 VAOs in the 177.89 drivers:

The main one is that calling glDisableVertexAttribArray(1) with no VAO bound breaks any existing VAO that had attrib 1 enabled with a pointer set.

Small test case, using GLFW 2.6 modified to create a GL3 context:


#include <stdio.h>
#include <GL/glfw.h>
#include "glext.h"

PFNGLGENBUFFERSPROC glGenBuffers = 0;
PFNGLBINDBUFFERPROC glBindBuffer = 0;
PFNGLBUFFERDATAPROC glBufferData = 0;
PFNGLGENVERTEXARRAYSAPPLEPROC glGenVertexArrays = 0;
PFNGLBINDVERTEXARRAYAPPLEPROC glBindVertexArray = 0;
PFNGLDISABLEVERTEXATTRIBARRAYPROC glDisableVertexAttribArray = 0;
PFNGLENABLEVERTEXATTRIBARRAYPROC glEnableVertexAttribArray = 0;
PFNGLVERTEXATTRIBPOINTERPROC glVertexAttribPointer = 0;

PFNGLCREATEPROGRAMPROC glCreateProgram = 0;
PFNGLCREATESHADERPROC glCreateShader = 0;
PFNGLSHADERSOURCEPROC glShaderSource = 0;
PFNGLCOMPILESHADERPROC glCompileShader = 0;
PFNGLATTACHSHADERPROC glAttachShader = 0;
PFNGLBINDATTRIBLOCATIONPROC glBindAttribLocation = 0;
PFNGLLINKPROGRAMPROC glLinkProgram = 0;
PFNGLDELETESHADERPROC glDeleteShader = 0;
PFNGLUSEPROGRAMPROC glUseProgram = 0;

void init_extensions() {
  /* glfwGetProcAddress returns a generic pointer, so cast each
     result to the matching typed entry point. */
  glGenBuffers = (PFNGLGENBUFFERSPROC) glfwGetProcAddress("glGenBuffers");
  glBindBuffer = (PFNGLBINDBUFFERPROC) glfwGetProcAddress("glBindBuffer");
  glBufferData = (PFNGLBUFFERDATAPROC) glfwGetProcAddress("glBufferData");
  glGenVertexArrays = (PFNGLGENVERTEXARRAYSAPPLEPROC) glfwGetProcAddress("glGenVertexArrays");
  glBindVertexArray = (PFNGLBINDVERTEXARRAYAPPLEPROC) glfwGetProcAddress("glBindVertexArray");
  glDisableVertexAttribArray = (PFNGLDISABLEVERTEXATTRIBARRAYPROC) glfwGetProcAddress("glDisableVertexAttribArray");
  glEnableVertexAttribArray = (PFNGLENABLEVERTEXATTRIBARRAYPROC) glfwGetProcAddress("glEnableVertexAttribArray");
  glVertexAttribPointer = (PFNGLVERTEXATTRIBPOINTERPROC) glfwGetProcAddress("glVertexAttribPointer");

  glCreateProgram = (PFNGLCREATEPROGRAMPROC) glfwGetProcAddress("glCreateProgram");
  glCreateShader = (PFNGLCREATESHADERPROC) glfwGetProcAddress("glCreateShader");
  glShaderSource = (PFNGLSHADERSOURCEPROC) glfwGetProcAddress("glShaderSource");
  glCompileShader = (PFNGLCOMPILESHADERPROC) glfwGetProcAddress("glCompileShader");
  glAttachShader = (PFNGLATTACHSHADERPROC) glfwGetProcAddress("glAttachShader");
  glBindAttribLocation = (PFNGLBINDATTRIBLOCATIONPROC) glfwGetProcAddress("glBindAttribLocation");
  glLinkProgram = (PFNGLLINKPROGRAMPROC) glfwGetProcAddress("glLinkProgram");
  glDeleteShader = (PFNGLDELETESHADERPROC) glfwGetProcAddress("glDeleteShader");
  glUseProgram = (PFNGLUSEPROGRAMPROC) glfwGetProcAddress("glUseProgram");
}

const float cube[] = {-1.0, -1.0, -1.0,
                      -1.0, -1.0,  1.0,
                      -1.0,  1.0, -1.0,
                      -1.0,  1.0,  1.0,
                       1.0, -1.0, -1.0,
                       1.0, -1.0,  1.0,
                       1.0,  1.0, -1.0,
                       1.0,  1.0,  1.0, };

const int cubei[] = {0,  6,  4,
                     0,  2,  6,
                     0,  3,  2,
                     0,  1,  3,
                     2,  7,  6,
                     2,  3,  7,
                     4,  6,  7,
                     4,  7,  5,
                     0,  4,  5,
                     0,  5,  1,
                     1,  5,  7,
                     1,  7,  3,};

unsigned int vao = 0;
unsigned int program = 0;

// attrib indexes other than 1 don't trigger crash
#define ATTRIB 1

void fill_buffers() {
  unsigned int i=0,v=0;

  glGenBuffers(1, &v);
  glGenBuffers(1, &i);

  glBindBuffer(GL_ARRAY_BUFFER, v);
  glBufferData(GL_ARRAY_BUFFER, 24*sizeof(float), cube, GL_STATIC_DRAW);

  glEnableClientState(GL_VERTEX_ARRAY);
  glVertexPointer(3, GL_FLOAT, 0, 0);

  glEnableVertexAttribArray( ATTRIB );
  glVertexAttribPointer(ATTRIB, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), 0);

  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, i);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, 36*sizeof(int), cubei, GL_STATIC_DRAW);
}


int main( void )
{
  int width, height;

  glfwInit();

  glfwOpenWindow( 640, 480, 0,0,0,0, 0,0, GLFW_WINDOW );

  init_extensions();

  glfwGetWindowSize( &width, &height );
  glViewport( 0, 0, width, height );

  glGenVertexArrays(1, &vao);
  glBindVertexArray(vao);
  fill_buffers();

  //*** DrawElements below crashes if both of these lines are here :
  glBindVertexArray(0);                //***
  glDisableVertexAttribArray( ATTRIB ); //***

  glBindVertexArray(vao);
  glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0); // just the first triangle of the cube

  glfwSwapBuffers();

  // crashes on exit if we leave vao bound
  glBindVertexArray(0);

  glfwTerminate();
  return 0;
}

Only attrib 1 seems to be a problem, no crashes with any others.

The other possible bug is that the above code crashes if the VAO is still bound when the program exits.

-b-

Hi 3B,

Coincidentally, I found the same crash bug yesterday internally at NVIDIA. It’ll be fixed in an upcoming release. I haven’t been able to reproduce the crash at exit you mentioned. If you can provide a win32 executable that reproduces it, I’ll give it a spin. Thanks.

Will the GL 3.0 driver support the 8600M GT? I tried to install the driver under WinXP on an Acer 5929g with the above-mentioned card, but the installation refused to continue.

Thanks

Mobile support is almost always missing from the driver installer, but the driver itself does support the hardware. You just have to edit the nv4_disp.inf file of the driver setup to add support for your card/chip. Look at the driver INF files at laptopvideo2go.com for the additional lines (PCI ID etc.), but I recommend not using the INFs from there, as they modify too much and make the drivers completely useless in some regards.

Would it also be possible to use this driver with an Apple MacBook Pro with an 8600M GT running Vista32 (with the INF mod mentioned above)?

Why shouldn’t it be possible?

OK, http://www.3bb.cc/tmp/vao-exit-crash.zip is a binary of the previous program, chopped down to just the part that exits with a VAO bound:

#include <stdio.h>
#include <GL/glfw.h>
#include "glext.h"

PFNGLGENVERTEXARRAYSAPPLEPROC glGenVertexArrays = 0;
PFNGLBINDVERTEXARRAYAPPLEPROC glBindVertexArray = 0;

int main( int argc, char ** argv )
{
  int width, height;          // glfwGetWindowSize takes int*
  unsigned int vao;

  glfwInit();

  glfwOpenWindow( 640, 480, 0,0,0,0, 0,0, GLFW_WINDOW );

  glGenVertexArrays = (PFNGLGENVERTEXARRAYSAPPLEPROC) glfwGetProcAddress("glGenVertexArrays");
  glBindVertexArray = (PFNGLBINDVERTEXARRAYAPPLEPROC) glfwGetProcAddress("glBindVertexArray");

  glfwGetWindowSize( &width, &height );
  glViewport( 0, 0, width, height );

  glGenVertexArrays(1, &vao);
  glBindVertexArray(vao);

  if ( argc > 1 )  glBindVertexArray(0);

  return 0;
}

compiled with gcc version “(GCC) 4.2.1-sjlj (mingw32-2)”.

When run with an argument it binds VAO 0 before exiting, and doesn't crash.
With no argument it leaves the VAO bound, and crashes.

Thanks 3B, I’ve reproduced the crash and will get it fixed for the next release.

Any word on when we’ll have Linux drivers? I’d love to help beat on the GL3 support with our multinode NVidia systems, but without Linux support, not much I can do…

I have a 7600gs (driver ver. 177.89, XP SP3). VAO entry points can only be retrieved using the “ARB” suffix in their names. According to issue #6 of the extension specification, this is incorrect: function names in this extension should not have the ARB suffix.

Yep, same problem with a GeForce 8800 GTS (Windows XP 32).

I’m not sure if this is a bug, since I have found no trace of the GL_ARB_framebuffer_object extension specification. However, glGenerateMipmapEXT was part of GL_EXT_framebuffer_object, and since glGenerateMipmap is part of the OpenGL 3.0 spec and is not deprecated, I assume there should be a glGenerateMipmap(ARB) entry point in the presence of GL_ARB_framebuffer_object. There is no such entry point with or without the “ARB” suffix.

7600gs, 177.89, xp sp3 32

ARB_fbo is still in the middle of its 30-day review period by the Khronos promoter companies. Assuming all goes well I would anticipate seeing it posted to the registry within a week or two.

Any word on when we will be getting a driver with EXT_direct_state_access?

Regards
elFarto