"NVIDIA releases OpenGL 3.2 beta drivers"



Groovounet
08-05-2009, 09:59 AM
I'm posting this because I'm pretty sure some nVidia guys will read it, and it is actually a suggestion for future releases!

I'm so, so fed up with some of nVidia's bad communication practices, such as this news. I actually think I'm even more fed up with all the websites forwarding this news, and it's quite a shame to read it even on OpenGL.org.

Drivers 190.56 are NOT OpenGL 3.2 drivers. They are OpenGL 3.1 drivers with some OpenGL 3.2 features, just like ATI's drivers are ... We got the same kind of news with the OpenGL 3.0 and OpenGL 3.1 releases.

I love a lot of nVidia's stuff: cards, docs, projects. I just suggest that nVidia stop this fallacious communication. Sooner or later, I believe, it is going to hurt the company's reputation.

rsp91
08-05-2009, 01:48 PM
Actually, all that is left is a context-creation-flag-related thing and moving geometry shader support from the ARB extension into core.
This is according to http://developer.nvidia.com/object/opengl_3_driver.html

Doesn't seem like such a far-fetched OpenGL 3.2, does it? Of course, reporting GL_VERSION as OpenGL 3.1 would be more appropriate.

Alfonse Reinheart
08-05-2009, 03:40 PM
I would also point out that it is a beta driver. That means it may not do everything it promises to do correctly.

AlexN
08-06-2009, 08:45 AM
GLSL 1.50 is not completely exposed in this driver. Trying to use the new "interface" varyings in particular gives an error that I need to enable the extension GL_NV_gpu_shader5, which is not exposed.

(I understand these are beta drivers, I'm just trying to make the issue known)

mfort
08-07-2009, 12:59 AM
I was about to try the new ARB_sync.
To my surprise this extension is not supported :-(
The driver is far from OpenGL 3.2 spec.

(win32/xp/GF8800)

Gedolo
08-07-2009, 04:48 AM
Hmm, beta

Groovounet
08-07-2009, 05:51 AM
Hmm, alpha?

Most of the features were only approved on the 3rd and the 24th of July ... could we actually expect a full implementation that soon?

No, and that's fine. Development takes time; having features as soon as possible is great, and that's all we can expect.

I hope pointing this out will change the balance in the debate about ATI supposedly being 6 months behind nVidia on drivers ... Well, that's not the case, even if I agree nVidia is more advanced on that topic.

barthold
08-07-2009, 01:53 PM
I was about to try the new ARB_sync.
To my surprise this extension is not supported :-(
The driver is far from OpenGL 3.2 spec.

(win32/xp/GF8800)


ARB_sync is part of core OpenGL 3.2. Therefore, you do not need to look for the extension. The entry points are there.
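
For example, a minimal sketch of exercising the core sync entry points once a 3.2 context is current (just an illustration, not tied to any particular app):


// Minimal sketch: fence the commands issued so far and wait for the GPU to
// reach the fence, using the GL 3.2 core sync entry points directly.
GLsync fence = glFenceSync( GL_SYNC_GPU_COMMANDS_COMPLETE, 0 );

// ... issue more GL commands here if desired ...

GLenum status = glClientWaitSync( fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                  16 * 1000 * 1000 );  // timeout in nanoseconds
if ( status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED )
    printf( "Fence signaled\n" );
else
    printf( "Fence timed out or failed (0x%x)\n", status );

glDeleteSync( fence );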

Regards,
Barthold
(With my NVIDIA hat on)

mfort
08-08-2009, 02:26 AM
ARB_sync is part of core OpenGL 3.2. Therefore, you do not need to look for the extension. The entry points are there.


I am sorry, shame on me. I was misled by OpenGL Extension Viewer, which reports OpenGL version 3.1.0 and no ARB_sync extension in the list, so I gave up too early.

I wrote a small test app. GL_VERSION actually returns 3.2.0 and the whole ARB_sync API is there. My test shows that ARB_sync works fine, so I can start coding.

Thanks!

/marek

Dark Photon
08-11-2009, 11:28 AM
Current 3.2 beta drivers have an issue where a 3.2 context (without FORWARD_COMPATIBLE_BIT) does not honor compatibility behavior, whereas 3.1 and 3.0 contexts do. Hopefully an easy fix-up.

Symptoms are things like glGetString( GL_EXTENSIONS ) returning NULL and GL errors being thrown. Flipping back to a 3.1 context remedies that.

(NVidia 190.18.03 drivers)

barthold
08-12-2009, 05:47 PM
Current 3.2 beta drivers have an issue where a 3.2 context (without FORWARD_COMPATIBLE_BIT) does not honor compatibility behavior, whereas 3.1 and 3.0 contexts do. Hopefully an easy fix-up.

Symptoms are things like glGetString( GL_EXTENSIONS ) returning NULL and GL errors being thrown. Flipping back to a 3.1 context remedies that.

(NVidia 190.18.03 drivers)

Are you creating a Core profile or Compatibility profile? glGetString(GL_EXTENSIONS) is deprecated, and hence not available in the Core profile.
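
(For completeness: in the Core profile the extension list is queried per index instead. A minimal sketch using the standard GL 3.0+ calls, just as an illustration:)


// Minimal sketch: enumerating extensions in a Core profile, where
// glGetString(GL_EXTENSIONS) is no longer available.
GLint num_ext = 0;
glGetIntegerv( GL_NUM_EXTENSIONS, &num_ext );
for ( GLint i = 0; i < num_ext; i++ )
    printf( "%s\n", glGetStringi( GL_EXTENSIONS, i ) );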

Thanks!
Barthold
(with my NVIDIA hat on)

glfreak
08-13-2009, 08:27 AM
Talking about drivers, here's an interesting link:

http://www.mcadforums.com/forums/files/autodesk_inventor_opengl_to_directx_evolution.pdf

In a nutshell: OpenGL is better off as a software renderer only.

Dark Photon
08-13-2009, 09:13 AM
Are you creating a Core profile or Compatibility profile? glGetString(GL_EXTENSIONS) is deprecated, and hence not available in the Core profile.
Sorry, I think I'm at fault here for not understanding something.

I confess I didn't realize the distinction between compatibility and forward-compatible. I assumed that by not specifying forward-compatible that I got backward compatibility (as was the case <= GL 3.1).

Now I think I see the distinction:

* PROFILE_MASK = COMPATIBILITY -> Backward compatibility included
* PROFILE_MASK = CORE -> Backward compatibility killed
* FLAGS = 0 -> Deprecated but still supported APIs included
* FLAGS = FORWARD_COMPAT -> Deprecated but still supported APIs killed

So for the most lax profile (include all the old stuff), you want PROFILE_MASK = COMPATIBILITY and no FLAGS = FORWARD_COMPAT. And for the most strict profile, you want PROFILE_MASK = CORE and FLAGS = FORWARD_COMPAT.

(updated) I was using the default options of PROFILE_MASK = CORE and no FLAGS = FORWARD_COMPAT, which kills backward compatibility.

Groovounet
08-13-2009, 09:39 AM
Very insightful link, even if I wonder why it is posted here.

"We use OpenGL SW (GDI Generic) in all our QA because at least we get the same consistent result."

Back at my previous job we also used the OpenGL software driver, to be sure that things at least ran ... well, so slowly that it could not really be used ...

I would say that for this kind of software, every OpenGL spec release must be seen as a nightmare.

Autodesk is building long-term software on really old code (probably older than 15 years in places), which must be quite horrible to work with. That was exactly the case at my previous job, so adopting new features, even shaders, was painfully slow.

What such software needs is stability. They are not going to update their "rendering engine (mess)" for each release.

However, I believe the main fault is on their side, just as it was for my previous company: "OpenGL is just an API", "Our software's goal is not OpenGL / Direct3D rendering, it's raytracing, modeling tools". So basically, if I had a look at the code I would expect to see no OpenGL engine at all, just OpenGL calls scattered everywhere across 20,000,000 lines of code.

Well, I exaggerate a bit, but that's the basic idea. Very complicated code to maintain leads to nothing good. I use 3DS Max 2010 from time to time and I quite like it, but the OpenGL renderer is not a stable option, the Direct3D renderer is "ok" but doesn't look good, the few shader effects look like hacks in the code, and the overall rendering is painfully slow.

Finally, I think OpenGL is a valid option for this kind of software, for compatibility, using the fixed pipeline. I agree that Direct3D 10 would be a great platform for advanced rendering because its constraints are really strict.

glfreak
08-13-2009, 10:25 AM
I totally agree. But it seems the new GL specs did not help much with driver quality and stability. One thing that drew my attention in that paper is that GL lacks QA: it's the job of the API users to make sure things work, and they are left with a big question whether a bug is on their side or the driver's.

I'm not criticizing the API itself here, nor the IHVs who implement it. Some party should take charge of a solid GL SDK and leave the IHVs with a minimal driver implementation... like in D3D :D

mfort
08-13-2009, 10:52 AM
I'd rather see Khronos investing time/money in developing a conformance test suite for OpenGL.

I am always nervous when installing new drivers; usually something gets broken. (using NV)
We keep a list of HW/driver combinations that work well. When a user calls the support line, the first question we ask is which HW/driver they have.

glfreak
08-13-2009, 11:29 AM
Conformance test! Excellent!

Jan
08-13-2009, 12:00 PM
Excellent idea indeed. It's just that this idea has been around for ages, and the ARB does not have the resources to implement such a thing. That is why we don't have a conformance test now, and I am pretty sure we won't get one in the future.

It's sad, but the fact is that D3D IS a better option if you can afford to only support Windows (Vista).

Jan.

Dark Photon
08-13-2009, 12:48 PM
Are you creating a Core profile or Compatibility profile? glGetString(GL_EXTENSIONS) is deprecated, and hence not available in the Core profile.

Thanks!
Barthold
(with my NVIDIA hat on)
Ok, thanks to your tip, I see now I was trying to create a "core" context (since that is the default for glXCreateContextAttribs for a 3.2 context).

For now, I want a compatibility context, which I gather I would normally create (per-spec) via:


static int Context_attribs[] =
{
GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
GLX_CONTEXT_MINOR_VERSION_ARB, 2,
GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
None
};
Context = glXCreateContextAttribsARB( getXDisplay(), FBConfig, 0,
True, Context_attribs );

but per the NVidia driver release page, I need to use the old glXCreateContext() call for now since glXCreateContextAttribs doesn't yet support PROFILE_MASK (attempting to do so generates a BadValue X error from GLX).

Thanks for the tip.

Rob Barris
08-13-2009, 12:58 PM
I totally agree. But it seems the new GL specs did not help much with driver quality and stability. One thing that drew my attention in that paper is that GL lacks QA: it's the job of the API users to make sure things work, and they are left with a big question whether a bug is on their side or the driver's.


Specification writing doesn't fix bugs, only the IHV-developer feedback loop and IHV-invested engineering time can do that. Forum posts about bugs don't count.

glfreak
08-13-2009, 01:19 PM
It's sad, but the fact is that D3D IS a better option if you can afford to only support Windows (Vista).


Are there any real platforms other than Windows?

Stephen A
08-13-2009, 03:34 PM
You'd be surprised.

glfreak
08-13-2009, 03:37 PM
I meant graphics platforms, well, other than Apple; they have their own "API."

barthold
08-13-2009, 03:37 PM
Are you creating a Core profile or Compatibility profile? glGetString(GL_EXTENSIONS) is deprecated, and hence not available in the Core profile.
Sorry, I think I'm at fault here for not understanding something.

I confess I didn't realize the distinction between compatibility and forward-compatible. I assumed that by not specifying forward-compatible that I got backward compatibility


Ah, I see the confusion. Indeed, that assumption is incorrect.



(as was the case <= GL 3.1).


Not entirely correct either. If you create an OpenGL 3.1 context, you get backwards compatibility only if the ARB_compatibility extension is also supported.



Now I think I see the distinction:

* PROFILE_MASK = COMPATIBILITY -> Backward compatibility included
* PROFILE_MASK = CORE -> Backward compatibility killed
* FLAGS = 0 -> Deprecated but still supported APIs included


With FLAGS = 0 you mean no parameters at all passed to CreateContextAttribsARB? If so, you get the highest supported version of OpenGL that is backwards compatible. If you mean with FLAGS = 0 that you leave out a PROFILE_MASK, then you get a Core profile (and you need to ask for GL 3.2 as well).


* FLAGS = FORWARD_COMPAT -> Deprecated but still supported APIs killed

So for the most lax profile (include all the old stuff), you want PROFILE_MASK = COMPATIBILITY and no FLAGS = FORWARD_COMPAT. And for the most strict profile, you want PROFILE_MASK = CORE and FLAGS = FORWARD_COMPAT.


Yes. Although I am not sure it makes sense to ship an application with the forward compatible flag set. It is useful during development if you do not want to use deprecated features, and want to get warned (or get errors) if you accidentally do so. But that is up to you.

Barthold
(with my ARB hat on)

Rob Barris
08-13-2009, 05:27 PM
I meant graphics platforms, well, other than Apple; they have their own "API."

What do you mean? The Apple platforms use OpenGL and OpenGL ES.

Heiko
08-13-2009, 11:50 PM
It's sad, but the fact is that D3D IS a better option if you can afford to only support Windows (Vista).


Are there any real platforms other than Windows?

What do you think about Linux? Universities all over the world use Linux for scientific visualization, for example, and their only option is OpenGL. Besides, I think the open source community is still growing, as it keeps making huge progress in usability.

Nevertheless, I think it would be great if some kind of OpenGL driver certification program existed, because it would probably increase the quality of the drivers.

HenriH
08-14-2009, 12:42 AM
It's sad, but the fact is that D3D IS a better option if you can afford to only support Windows (Vista).


Are there any real platforms other than Windows?

Mac, iPhone, Android, Symbian, Maemo, Linux, game consoles.

Tom Flynn
08-14-2009, 10:39 AM
It's sad, but the fact is that D3D IS a better option if you can afford to only support Windows (Vista).


Are there any real platforms other than Windows?

Mac, iPhone, Android, Symbian, Maemo, Linux, game consoles.


Plasma TVs, Blu-Ray players, and other consumer electronic devices. Many of which are running some version of an embedded Linux with OpenGLES.

Dark Photon
08-14-2009, 09:48 PM
Now I think I see the distinction:

* PROFILE_MASK = COMPATIBILITY -> Backward compatibility included
* PROFILE_MASK = CORE -> Backward compatibility killed
* FLAGS = 0 -> Deprecated but still supported APIs included


With FLAGS = 0 you mean no parameters at all passed to CreateContextAttribsARB?
I mean calling it like this:



static int Context_attribs[] =
{
GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
GLX_CONTEXT_MINOR_VERSION_ARB, 2,
None
};
Context = glXCreateContextAttribsARB( display, FBConfig, 0,
True, Context_attribs );


with no GLX_CONTEXT_FLAGS_ARB attrib.


If so, you get the highest supported version of OpenGL that is backwards compatible. If you mean with FLAGS = 0 that you leave out a PROFILE_MASK, then you get a Core profile (and you need to ask for GL 3.2 as well).

Ok, that part about defaulting to the highest version available implicitly is good to know.

Thanks, Barthold.

Dark Photon
08-14-2009, 10:01 PM
I have another NVidia OpenGL 3.2-related question that came up yesterday. Some internal GeForce 7 users reported that running our app (now using the new context creation API with 3.x-capable drivers) results in a fatal Xlib error when the GL context is allocated (but only on pre-GeForce 8 cards!).

I've attached full source for a test prog below, along with the working (GeForce 8) and broken (GeForce 7) output.

In short, running on 190.18.03 drivers (Linux) on a GeForce 8+ advertises GL 3.2.0 (per GL_VERSION), whereas on a GeForce 7 it advertises GL 2.1.2 (per GL_VERSION). On both cards:

1) ARB_create_context is supported,
2) glXGetProcAddress( "glXCreateContextAttribsARB" ) returns a valid pointer, and
3) glXCreateContextAttribsARB "appears" to create a 3.0 context,

however, on the GeForce 7, glXIsDirect returns "False", and a glXMakeCurrent triggers a BadAlloc Xlib error, killing the app.

So my question is: what is the correct way to allocate a 3.x context on a GeForce 8+ and a 2.x context on a GeForce 7 or earlier, without resorting to GL_RENDERER or GL_VERSION parsing hackery? The app is coded to fall back to the old context create if the 3.x context create fails, but the 3.x create isn't returning failure; it's killing the app with an Xlib error.

The source code is below, but first, the GeForce 8 output:



...
Creating dummy old-style context
GL_VERSION = 3.2.0 NVIDIA 190.18.03
Deleting dummy old-style context
Creating context
Created GL 3.0 context
Verifying that context is direct
Making context current


and the broken output on GeForce 7:


Creating dummy old-style context
GL_VERSION = 2.1.2 NVIDIA 190.18.03
Deleting dummy old-style context
Creating context
Created GL 3.0 context
Verifying that context is direct
WARNING: Indirect GLX rendering context obtained
Making context current
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 144 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 38
Current serial number in output stream: 39
libxcb: WARNING! Program tries to lock an already locked connection,
which indicates a programming error.
There will be no further warnings about this issue.


and finally, here's the full program. Compile with:

g++ -o gl3_geforce7_broke gl3_geforce7_broke.cxx -lGL -lX11



#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#define GL_GLEXT_PROTOTYPES 1
#define GLX_GLXEXT_PROTOTYPES 1
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/gl.h>
#include <GL/glx.h>

#define GLX_CONTEXT_MAJOR_VERSION_ARB 0x2091
#define GLX_CONTEXT_MINOR_VERSION_ARB 0x2092
typedef GLXContext (*glXCreateContextAttribsARBProc)(Display*, GLXFBConfig, GLXContext, Bool, const int*);

int main (int argc, char ** argv)
{
Display *display = XOpenDisplay(0);

if ( !display )
{
printf( "Failed to open X display\n" );
exit(1);
}

// Query/print GLX version
int major, minor;

if ( !glXQueryVersion( display, &major, &minor ) )
{
printf( "glXQueryVersion failed\n" );
exit(1);
}
printf( "GLX version = %d.%d\n", major, minor );

if ( major < 1 || ( major == 1 && minor < 3 ) )
{
printf( "GLX version is too old; must be >= 1.3\n" );
exit(1);
}

// Print GLX extensions
//const char *extensions = glXQueryExtensionsString( display,
// DefaultScreen( display ) );
//printf( "%s\n", extensions );

// Get a matching FB config
static int visual_attribs[] =
{
GLX_X_RENDERABLE , True,
GLX_DRAWABLE_TYPE , GLX_WINDOW_BIT,
GLX_RENDER_TYPE , GLX_RGBA_BIT,
GLX_X_VISUAL_TYPE , GLX_TRUE_COLOR,
GLX_RED_SIZE , 8,
GLX_GREEN_SIZE , 8,
GLX_BLUE_SIZE , 8,
GLX_ALPHA_SIZE , 8,
GLX_DEPTH_SIZE , 24,
GLX_STENCIL_SIZE , 8,
GLX_DOUBLEBUFFER , True,
//GLX_SAMPLE_BUFFERS , 1,
//GLX_SAMPLES , 4,
None
};

printf( "Getting framebuffer configs\n" );
int fbcount;
GLXFBConfig *fbc = glXChooseFBConfig( display, DefaultScreen( display ),
visual_attribs, &fbcount );
if ( !fbc )
{
printf( "Failed to retrieve a framebuffer config\n" );
exit(1);
}
printf( "Found %d matching FB configs.\n", fbcount );

// Pick the FB config/visual with the most samples per pixel
printf( "Getting XVisualInfos\n" );
int best_fbc = -1, worst_fbc = -1, best_num_samp = -1, worst_num_samp = 999;

for ( int i = 0; i < fbcount; i++ )
{
XVisualInfo *vi = glXGetVisualFromFBConfig( display, fbc[i] );
if ( vi )
{
int samp_buf, samples;
glXGetFBConfigAttrib( display, fbc[i], GLX_SAMPLE_BUFFERS, &samp_buf );
glXGetFBConfigAttrib( display, fbc[i], GLX_SAMPLES , &samples );

printf( " Matching fbconfig %d, visual ID 0x%2x: SAMPLE_BUFFERS = %d,"
" SAMPLES = %d\n",
i, vi -> visualid, samp_buf, samples );

if ( best_fbc < 0 || samp_buf && samples > best_num_samp )
best_fbc = i, best_num_samp = samples;
if ( worst_fbc < 0 || !samp_buf || samples < worst_num_samp )
worst_fbc = i, worst_num_samp = samples;
}
XFree( vi );
}

// Get a visual
int fbc_id = best_fbc;
//int fbc_id = worst_fbc;

XVisualInfo *vi = glXGetVisualFromFBConfig( display, fbc[ fbc_id ] );
printf( "Chosen visual ID = 0x%x\n", vi->visualid );

printf( "Creating colormap\n" );
XSetWindowAttributes swa;
swa.colormap = XCreateColormap( display, RootWindow( display, vi->screen ),
vi->visual, AllocNone );
swa.background_pixmap = None ;
swa.border_pixel = 0;
swa.event_mask = StructureNotifyMask;

printf( "Creating window\n" );
Window win = XCreateWindow( display, RootWindow( display, vi->screen ),
0, 0, 100, 100, 0, vi->depth, InputOutput,
vi->visual,
CWBorderPixel|CWColormap|CWEventMask, &swa );
if ( !win )
{
printf( "Failed to create window.\n" );
exit(1);
}

XStoreName( display, win, "GL 3.0 Window");

printf( "Mapping window\n" );
XMapWindow( display, win );

// See if GL driver supports glXCreateContextAttribsARB()
// Create an old-style GLX context first, to get the correct function ptr.
glXCreateContextAttribsARBProc glXCreateContextAttribsARB = 0;

printf( "Creating dummy old-style context\n" );

GLXContext ctx_old = glXCreateContext( display, vi, 0, True );

glXMakeCurrent( display, win, ctx_old );

printf( "GL_VERSION = %s\n", glGetString( GL_VERSION ) );

glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc)
glXGetProcAddress( (const GLubyte *) "glXCreateContextAttribsARB" );

GLXContext ctx = 0;

// If it doesn't, just use the old-style 2.x GLX context
if ( !glXCreateContextAttribsARB )
{
printf( "glXCreateContextAttribsARB() not found"
" ... using old-style GLX context\n" );
ctx = ctx_old;
}

// If it "does", try to get a GL 3.0 context!
else
{
printf( "Deleting dummy old-style context\n" );

glXMakeCurrent( display, None, 0 );
glXDestroyContext( display, ctx_old );

static int context_attribs[] =
{
GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
GLX_CONTEXT_MINOR_VERSION_ARB, 0,
//GLX_CONTEXT_FLAGS_ARB , GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
None
};

printf( "Creating context\n" );
ctx = glXCreateContextAttribsARB( display, fbc[ fbc_id ], 0,
True, context_attribs );
if ( ctx )
printf( "Created GL 3.0 context\n" );
else
{
// Couldn't create GL 3.0 context. Fall back to old-style 2.x context.
printf( "Failed to create GL 3.0 context"
" ... using old-style GLX context\n" );
ctx = glXCreateContext( display, vi, 0, True );
}
}

XFree( fbc );

// Verifying that context is a direct context
printf( "Verifying that context is direct\n" );
if ( ! glXIsDirect ( display, ctx ) )
{
printf( "WARNING: Indirect GLX rendering context obtained\n" );
//exit(1);
}

printf( "Making context current\n" );
glXMakeCurrent( display, win, ctx );

glClearColor ( 0, 0.5, 1, 1 );
glClear ( GL_COLOR_BUFFER_BIT );
glXSwapBuffers ( display, win );

sleep( 1 );

glClearColor ( 1, 0.5, 0, 1 );
glClear ( GL_COLOR_BUFFER_BIT );
glXSwapBuffers ( display, win );

sleep( 1 );

ctx = glXGetCurrentContext( );
glXMakeCurrent( display, 0, 0 );
glXDestroyContext( display, ctx );
}

barthold
08-14-2009, 10:50 PM
Ok, that part about defaulting to the highest version available implicitly is good to know.

Thanks, Barthold.


It is all spelled out in the WGL/GLX ARB_create_context spec. See http://www.opengl.org/registry/specs/ARB/wgl_create_context.txt

To quote from the spec:

The default values for WGL_CONTEXT_MAJOR_VERSION_ARB and WGL_CONTEXT_MINOR_VERSION_ARB are 1 and 0 respectively. In this case, implementations will typically return the most recent version of OpenGL they support which is backwards compatible with OpenGL 1.0 (e.g. 3.0, 3.1 + GL_ARB_compatibility, or 3.2 compatibility profile)

Barthold
(with my ARB hat on)

barthold
08-14-2009, 10:56 PM
I have another NVidia OpenGL 3.2-related question that came up yesterday. Some GeForce 7 internal users were reporting that running our app (now using the new context creation API with 3.x capable drivers) was resulting in a fatal Xlib error when the GL context is being allocated (but only on pre-GeForce 8 cards!)


Yes, that does not look right. Asking for a 3.0 context on a Geforce 7 should fail. The create context spec says

"On failure glXCreateContextAttribsARB returns NULL and generates an X error with extended error information."

Thanks for the source code. We'll take a look asap.

Barthold
(with my NVIDIA hat on)

Dark Photon
08-17-2009, 06:03 AM
Yes, that does not look right. Asking for a 3.0 context on a Geforce 7 should fail. ... We'll take a look asap.

Thanks, Barthold.

Dark Photon
08-25-2009, 09:10 AM
Also just hit a similar (probably the same) bug with another user, this one on G80+ hardware (3.x capable).

* Requesting a 3.1 context from 3.0-capable (but not 3.1-capable) drivers on 3.x-capable hardware (G80+) returns a bogus non-null context pointer that results in an X error when bound.

This is similar to the previously-reported problem where:

* Requesting a 3.x context from 3.x-capable drivers on 2.1-only-capable hardware (pre-G80) returns a bogus non-null context pointer that results in an X error when bound...

So it seems the general bug is: on an OpenGL 3.x-capable driver, if you request a GL context with a GL version greater than the version the driver supports on that hardware, it returns a bogus context that terminates the app with an X error rather than returning a NULL context pointer.

Implemented an ugly GL_VERSION check hack to work-around this. Now we never ask for a version greater than what the driver reports.
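
The hack is roughly this kind of thing (a sketch of the idea only, not the exact code from our app):


// Sketch only: parse GL_VERSION from the temporary old-style context and
// clamp the version we request from glXCreateContextAttribsARB to whatever
// the driver reports.
int drv_major = 1, drv_minor = 0;
const char *version = (const char *) glGetString( GL_VERSION );
if ( !version || sscanf( version, "%d.%d", &drv_major, &drv_minor ) != 2 )
    drv_major = 1, drv_minor = 0;

int req_major = 3, req_minor = 2;
if ( drv_major < req_major ||
     ( drv_major == req_major && drv_minor < req_minor ) )
{
    // The driver doesn't report the version we want, so only ask for what
    // it does report.
    req_major = drv_major;
    req_minor = drv_minor;
}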

barthold
08-25-2009, 01:12 PM
Dark Photon,

First of all, we fixed this (hopefully) in the new drivers we just posted:

http://developer.nvidia.com/object/opengl_3_driver.html

glXCreateContextAttribsARB() should return a NULL pointer now if it cannot create a context.

More details:

Up to now, GLX context creation was implemented by NVIDIA as an asynchronous X request (i.e., the X request is sent to the X server, and the client implementation returns to the application before the X server processes the request). If creation of the context failed, an X error was generated and making current to it would fail.

> Implemented an ugly GL_VERSION check hack to work-around this.

A possibly better workaround would be to use XSetErrorHandler(3) to register for the context-creation X error and then check whether one occurred, as follows:

static Bool errorOccurred = False;

static int ErrorHandler(Display *dpy, XErrorEvent *ev) {
// Ignore the error, but make note of it.
errorOccurred = True;

return 0;
}

int main(int argc, char *argv[])
{
Display *dpy;
GLXFBConfig *fbConfigs;
GLXContext ctx;
int (*oldHandler)(Display *, XErrorEvent *);

// Skipping not relevant code

errorOccurred = False;
oldHandler = XSetErrorHandler(&ErrorHandler);

if (!oldHandler) {
XFree(fbConfigs);
printf("Failed to install X error event handler\n");
return -1;
}

ctx = glXCreateContextAttribsARB(dpy,
fbConfigs[0],
NULL,
True,
contextAttribs);

XSync(dpy, False);
XSetErrorHandler(oldHandler);

XFree(fbConfigs);

if (!ctx || errorOccurred) {
printf("Context creation failed\n");
return -1;
}

}


Barthold
(with my NVIDIA hat on)

Dark Photon
08-26-2009, 09:58 AM
First of all, we fixed this (hopefully) in the new drivers we just posted:

http://developer.nvidia.com/object/opengl_3_driver.html

glXCreateContextAttribsARB() should return a NULL pointer now if it cannot create a context.
Thanks, Barthold. Strangely, on a GeForce 7900 GTX, I still get an X error when I request glXCreateContextAttribs to create a 3.0 or 3.1 context on the 190.18.04 driver:


X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 144 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 44
Current serial number in output stream: 45
A 2.1 context created via glXCreateContextAttribs on a 7900 works well as you'd expect.


Up to now, GLX context creation was implemented by NVIDIA as an asynchronous X request
Interesting. Thanks for the background.


> Implemented an ugly GL_VERSION check hack to work-around this.

A possible better workaround would be using XSetErrorHandler(3) to register for the context creation X error, and check for an error, as follows
Looks like I may need to do something like this for now anyway.

Am I correct in assuming that the application's X and GLX state is not corrupted in any way if I bang on glXCreateContextAttribs a few times until it yields no X error after XSync, and then go on with the resulting GLX context? If so, I'll definitely code up that technique. It's more robust, and that way I don't need to tell users and developers to avoid early 3.x drivers.

barthold
08-27-2009, 03:42 PM
> Strangely, on a GeForce 7900 GTX, I still get an X error when I request glXCreateContextAttribs to create a 3.0 or 3.1 context

You should still expect an X error. That is the behavior defined by the spec. However, in the latest driver glXCreateContextAttribsARB should now return NULL as well, also as defined by the spec. Are you getting NULL returned?

Dark Photon
08-28-2009, 11:01 AM
> Strangely, on a GeForce 7900 GTX, I still get an X error when I request glXCreateContextAttribs to create a 3.0 or 3.1 context
You should still expect an X error. That is the behavior defined by the spec. However, in the latest driver glXCreateContextAttribsARB should now return NULL as well, also as defined by the spec. Are you getting NULL returned?
Yes (I just popped in a GeForce 7 and checked).

It's unfortunate the extension is written the way it is, since the driver has to take an async error and force it synchronous, but then re-throw the X error to the app, which must catch it asynchronously and ignore it just to be able to use the NULL return of the context-create call. A NULL return alone would have been sufficient.

At any rate, thanks for all the help! Our app is coded to treat either NULL or the X error as failure, and it repeatedly backs off with a lower version request.
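
In case it helps anyone else, the retry logic is roughly like this (a sketch built on Barthold's error-handler example above, not our exact code; "versions_to_try" is just an illustrative name):


// Sketch only: try progressively older context versions until one is created
// without an X error. Relies on the ErrorHandler/errorOccurred pattern from
// Barthold's example above; display, fbc and fbc_id are as in my test program.
static const int versions_to_try[][2] = { {3,2}, {3,1}, {3,0}, {2,1} };
GLXContext ctx = 0;

for ( int i = 0; i < 4 && !ctx; i++ )
{
    int attribs[] =
    {
        GLX_CONTEXT_MAJOR_VERSION_ARB, versions_to_try[i][0],
        GLX_CONTEXT_MINOR_VERSION_ARB, versions_to_try[i][1],
        None
    };

    errorOccurred = False;
    ctx = glXCreateContextAttribsARB( display, fbc[ fbc_id ], 0,
                                      True, attribs );
    XSync( display, False );   // force any async context-creation error now

    if ( errorOccurred && ctx )
    {
        // Don't trust a context that generated an X error; destroy and retry.
        glXDestroyContext( display, ctx );
        ctx = 0;
    }
}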

barthold
10-05-2009, 11:35 AM
FYI, the new drivers we just posted complete the implementation of GLX context creation. Our implementation of glXCreateContextAttribsARB as of 190.18.05 now also supports GLX_CONTEXT_PROFILE_MASK_ARB as an attribute.

http://developer.nvidia.com/object/opengl_3_driver.html

Regards,
Barthold
(with my NVIDIA hat on)