
Vertex shader integer input broken



Dragon
12-28-2014, 01:43 PM
Banged my head for hours against the wall trying to figure out why a shader didn't work, only to conclude that GLSL apparently cannot deal with integer vertex input attributes, although the spec clearly states it should. Take this code:


glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 16, pointer );
glVertexAttribPointer( 1, 1, GL_INT, GL_FALSE, 16, pointer+12 );

This defines a vec3 and int input attribute. The shader looks like this:


#version 140

uniform samplerBuffer texWeightMatrices;

in vec3 inPosition;
in int inWeight;

out vec3 outPosition;

void main( void ){
    vec4 row1 = texelFetch( texWeightMatrices, inWeight*3 );
    vec4 row2 = texelFetch( texWeightMatrices, inWeight*3 + 1 );
    vec4 row3 = texelFetch( texWeightMatrices, inWeight*3 + 2 );
    outPosition = vec4( inPosition, 1.0 ) * mat3x4( row1, row2, row3 );
    gl_Position = vec4( 0.0, 0.0, 0.0, 1.0 );
}

This fails completely, resulting in wrong values written to the result VBO. Doing this, on the other hand:


// same as above
in float inWeight;
// same as above
int weight = int( inWeight ) * 3; // and now using weight instead of inWeight*3


This works correctly. According to the GLSL spec, section "4.3.4 Inputs", though, the integer version should also be correct:

Vertex shader inputs can only be float, floating-point vectors, matrices, signed and unsigned integers and integer vectors. They cannot be arrays or structures.

What's going on here? Why is it impossible to use "in int" even though the spec clearly allows it?

Alfonse Reinheart
12-28-2014, 05:19 PM
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 16, pointer );
glVertexAttribPointer( 1, 1, GL_INT, GL_FALSE, 16, pointer+12 );

This defines a vec3 and int input attribute.

No, it does not. It defines two floating-point vertex shader inputs, one of which is stored in the buffer object as 3 floats, and the other of which is stored as 1 non-normalized 32-bit signed integer.

If you want to specify a vertex attribute (https://www.opengl.org/wiki/Vertex_Specification#Vertex_format) that connects to an actual integer in GLSL, you must use glVertexAttribIPointer. Note the "I" present before "Pointer". That means you're specifying data that will be retrieved in GLSL by an integer variable (signed or unsigned).

If you're wondering why OpenGL makes you specify redundant information, it doesn't. What this does is let you specify that a floating-point input in GLSL comes from normalized or non-normalized integer data in your VBO. It's a way of doing compression. For example, it allows you to store the 4 components of a color as normalized, unsigned bytes, using only 4 bytes to feed an entire "vec4" in GLSL.
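For your case, the attribute setup might look roughly like this (a sketch, assuming the same interleaved 16-byte layout and that "pointer" is a byte pointer as in your snippet; "inColor" and "colorOffset" are made-up names for illustration):


glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 16, pointer );    // 3 floats -> "in vec3 inPosition"
glVertexAttribIPointer( 1, 1, GL_INT, 16, pointer + 12 );          // 1 signed int -> "in int inWeight" (note: no "normalized" parameter)

// hypothetical compression example: 4 normalized unsigned bytes feeding an "in vec4 inColor"
// glVertexAttribPointer( 2, 4, GL_UNSIGNED_BYTE, GL_TRUE, 16, pointer + colorOffset );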

Dragon
12-29-2014, 12:49 PM
That does indeed fix the problem. I didn't see this function since it is not part of any extension; I work only with extensions, no core features at all. But in the glext.h from the OpenGL website, the function you mentioned is only listed under core 3.0, not under any extension. That's very unfortunate; people can easily miss this function if they only look at extensions.

arekkusu
12-29-2014, 01:23 PM
It was introduced in EXT_gpu_shader4 (https://www.opengl.org/registry/specs/EXT/gpu_shader4.txt), way back in 2006.

Alfonse Reinheart
12-29-2014, 02:03 PM
That does indeed fix the problem. I didn't see this function since it is not part of any extension; I work only with extensions, no core features at all.

You should really stop doing that. There are a lot of features that have been added directly into core OpenGL, without a matching extension.

Extensions should be the exception in your code, not the rule. Especially since you seem to be using modern OpenGL 3.x+ style (in/out variables, etc).

Dragon
12-29-2014, 04:48 PM
You should really stop doing that. There are a lot of features that have been added directly into core OpenGL, without a matching extension.

Extensions should be the exception in your code, not the rule. Especially since you seem to be using modern OpenGL 3.x+ style (in/out variables, etc).
I can't support this thesis one bit, though. I've seen various GPU drivers which, for example, did not implement GL 4.x but did expose extensions from it. If I requested a certain core level I would miss out on functionality. This is why I use only extensions and check whether functions exist. Nothing else has been portable enough to be useful. OpenGL is great, but sticking to core has never worked so far... especially on Windows, with the most broken OpenGL implementation in existence :/

Alfonse Reinheart
12-29-2014, 07:16 PM
I can't support this thesis one bit, though.

But you do, whether you believe you do or not.

Your shader uses in and out. There is no OpenGL extension that specifies these keywords. The closest you get is the terrible ARB_geometry_shader4 (defining "varying in" and "varying out"), which is so dissimilar from the actual core GL 3.2 geometry shader feature that I don't even link to them on the Geometry Shader wiki article (https://www.opengl.org/wiki/Geometry_Shader), for fear of confusing people.

If your shader uses those keywords on global variables, then it depends on at least GLSL 1.30 (aka: GL 3.0). Your shader code (and therefore your OpenGL code) is not written against some extension (or at least, not just an extension); it is written against GL 3.0. So you're already making a minimum assumption of GL 3.0.

And speaking of geometry shaders, as I stated above, the core GS functionality is exposed in a very different way from the ARB_geometry_shader4 extension. I'm not sure how compatible the extension functionality will be with more advanced stuff (the interaction between ARB_tessellation_shader and ARB_geometry_shader4 is not specified, for example). Furthermore, Mac OSX does not implement the ARB_geometry_shader4 extension (https://developer.apple.com/opengl/capabilities/). Not even on the older 3.2 implementations. (https://developer.apple.com/opengl/capabilities/GLInfo_1085_Core.html)

So if you're doing this "extension only" coding for compatibility, you're not helping yourself. Granted, GS functionality isn't the most useful thing in the world, but that's not the only stuff that was incorporated with significant changes (ARB_fbo has some wording changes compared to the core FBO functionality, for example).

At the very least, you can't just ignore the version number and rely on backwards-compatibility extensions, because there have been some useful bits and pieces that never saw an extension form. You need to treat the OpenGL version as another extension, another variable which could expose functionality.

Dragon
12-30-2014, 03:00 AM
But you do, whether you believe you do or not.

Your shader uses in and out. There is no OpenGL extension that specifies these keywords. The closest you get is the terrible ARB_geometry_shader4 (defining "varying in" and "varying out"), which is so dissimilar from the actual core GL 3.2 geometry shader feature that I don't even link to them on the Geometry Shader wiki article (https://www.opengl.org/wiki/Geometry_Shader), for fear of confusing people.

If your shader uses those keywords on global variables, then it depends on at least GLSL 1.30 (aka: GL 3.0). Your shader code (and therefore your OpenGL code) is not written against some extension (or at least, not just an extension); it is written against GL 3.0. So you're already making a minimum assumption of GL 3.0.

And speaking of geometry shaders, as I stated above, the core GS functionality is exposed in a very different way from the ARB_geometry_shader4 extension. I'm not sure how compatible the extension functionality will be with more advanced stuff (the interaction between ARB_tessellation_shader and ARB_geometry_shader4 is not specified, for example). Furthermore, Mac OSX does not implement the ARB_geometry_shader4 extension (https://developer.apple.com/opengl/capabilities/). Not even on the older 3.2 implementations. (https://developer.apple.com/opengl/capabilities/GLInfo_1085_Core.html)

So if you're doing this "extension only" coding for compatibility, you're not helping yourself. Granted, GS functionality isn't the most useful thing in the world, but that's not the only stuff that was incorporated with significant changes (ARB_fbo has some wording changes compared to the core FBO functionality, for example).

At the very least, you can't just ignore the version number and rely on backwards-compatibility extensions, because there have been some useful bits and pieces that never saw an extension form. You need to treat the OpenGL version as another extension, another variable which could expose functionality.
This doesn't make sense. When I started using #version 130, #version 140 and even #version 150 (when required), I worked with GPU drivers that did not yet expose the required core profiles, yet I could use these GLSL versions and the extensions linked to them. If I had used core I would not have been able to do what I've done up to now.

Granted, my current GPU is high-end so it has >4.3, but I cannot rely on that. I even have to bundle glext.h since Windows still has a pathetic GL implementation. If I used the core profile there I would be stuck at something like 1.2, or maybe 2.x if I'm lucky, without even having the entry points.

If you know a "sane" and "portable" way to get the required functionality from an installed GPU driver without relying on extensions, you're kindly invited to tell me. But so far I know of no such way.

mhagain
12-30-2014, 05:07 AM
I can't support this thesis one bit, though. I've seen various GPU drivers which, for example, did not implement GL 4.x but did expose extensions from it. If I requested a certain core level I would miss out on functionality. This is why I use only extensions and check whether functions exist. Nothing else has been portable enough to be useful. OpenGL is great, but sticking to core has never worked so far... especially on Windows, with the most broken OpenGL implementation in existence :/

I really hope that this doesn't mean that you're using GL_ARB_shader_objects...

Dragon
12-30-2014, 07:05 AM
I really hope that this doesn't mean that you're using GL_ARB_shader_objects...
You want to tell me that stuff like glCompileShader is no longer supported by core GL > 3.0?

GClements
12-30-2014, 08:06 AM
You want to tell me that stuff like glCompileShader is no longer supported by core GL > 3.0?
glCompileShader() still exists. glGetObjectParameterivARB() has been replaced with glGetShaderiv() and glGetProgramiv(), and several other "object" functions have similarly been split into distinct shader and program versions.
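For example, a compile/link status check looks roughly like this under the two APIs (a sketch; "shaderObj", "shader" and "program" stand for valid handles you created earlier):


GLint status = GL_FALSE;

// old ARB_shader_objects style: one query function for every object type
glGetObjectParameterivARB( shaderObj, GL_OBJECT_COMPILE_STATUS_ARB, &status );

// core GL 2.0+ style: separate query functions for shaders and programs
glGetShaderiv( shader, GL_COMPILE_STATUS, &status );
glGetProgramiv( program, GL_LINK_STATUS, &status );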

In any case, if you're relying upon the existence of that extension, there's no actual reason for modern implementations to provide it when they provide equivalent functionality through the core API.

The extension need not be returned by glGetStringi(GL_EXTENSIONS, index) (note that glGetString(GL_EXTENSIONS) may fail with GL_INVALID_ENUM in recent versions), and wglGetProcAddress("glCompileShaderARB") may fail even if wglGetProcAddress("glCompileShader") works. If glCompileShaderARB() exists, it doesn't necessarily support the same GLSL versions as glCompileShader() (i.e. up to the version indicated by glGetString(GL_SHADING_LANGUAGE_VERSION)).
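For reference, enumerating extensions in a 3.0+ context looks roughly like this (a sketch):


GLint numExtensions = 0;
glGetIntegerv( GL_NUM_EXTENSIONS, &numExtensions );
for( GLint i = 0; i < numExtensions; ++i ){
    // returns one extension name per index instead of a single huge string
    const GLubyte *name = glGetStringi( GL_EXTENSIONS, i );
    // compare "name" against the extension you're interested in
}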

If compiling GLSL 1.30 (i.e. GL 3.0 level) code with glCompileShaderARB() works on your system, good for you. But you're not providing any "compatibility" by using the extension instead of the core function.

Also:

since Windows still has a pathetic GL implementation
Windows' OpenGL version is whatever the driver provides. The DLL only exports OpenGL 1.1 functions; anything added in later versions must be obtained via wglGetProcAddress(). This doesn't mean that such functions are extensions, though.

Alfonse Reinheart
12-30-2014, 08:50 AM
This doesn't make sense. When I started using #version 130, #version 140 and even #version 150 (when required), I worked with GPU drivers that did not yet expose the required core profiles, yet I could use these GLSL versions and the extensions linked to them. If I had used core I would not have been able to do what I've done up to now.

Granted, my current GPU is high-end so it has >4.3, but I cannot rely on that. I even have to bundle glext.h since Windows still has a pathetic GL implementation. If I used the core profile there I would be stuck at something like 1.2, or maybe 2.x if I'm lucky, without even having the entry points.

If you know a "sane" and "portable" way to get the required functionality from an installed GPU driver without relying on extensions, you're kindly invited to tell me. But so far I know of no such way.

I understand the problem now. You seem to be confusing "extension" with "loading function pointers." They're not the same thing.

The Windows OpenGL32.dll only directly exposes functions in OpenGL 1.1. To access core functions from versions higher than this, you must retrieve the function pointers with wglGetProcAddress. (https://www.opengl.org/wiki/Load_OpenGL_Functions) Or use a library to do it. This is the same mechanism you use for loading extension function pointers.
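For example, a minimal sketch on Windows, assuming a current context and the PFNGLVERTEXATTRIBIPOINTERPROC typedef from glext.h:


// core GL 3.0 entry point, loaded through the same mechanism as extension functions
PFNGLVERTEXATTRIBIPOINTERPROC pglVertexAttribIPointer =
    ( PFNGLVERTEXATTRIBIPOINTERPROC )wglGetProcAddress( "glVertexAttribIPointer" );

if( ! pglVertexAttribIPointer ){
    // the current context does not actually provide GL 3.0
}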

That doesn't make version 3.0 an extension to OpenGL. The version 3.0 functions are loaded manually, but they're still core functions, not extensions. You just access them through the same mechanism.

So when I was talking about sticking to core OpenGL, I didn't mean that you wouldn't have to load any function pointers. Only that you wouldn't be using extensions.

mhagain
12-30-2014, 09:59 AM
You want to tell me that stuff like glCompileShader is no longer supported by core GL > 3.0?

glCompileShaderARB(GLhandleARB shaderObj) is the one I mean; that's from the old GL_ARB_shader_objects (https://www.opengl.org/registry/specs/ARB/shader_objects.txt) extension. The reason I ask is that this is the only expression of GLSL objects as an extension; glCompileShader (i.e. without the -ARB suffix) and friends were never available via an extension but were only ever part of core GL 2.0 (so right away you're using core GL and not extensions, if they're what you're using).

The reason this is important is that you should never rely on newer functionality interoperating with the GL_ARB_shader_objects extension (unless it in turn is explicitly specified to do so). If you're relying on extensions you should check the "Dependencies" section of their specifications, check which GL version(s) (or other extensions) are required to support the new extension, and check which GL version the extension specification is written against. If you see that an extension requires GL 2.0 and/or is written against the GL 2.0 spec, for example, then you should never rely on that extension working with GL_ARB_shader_objects. It might on some hardware/drivers, but it might explode on others, and even if it does work, vendors are under no obligation to ensure that it continues to do so.

All this is academic of course since (as Alfonse commented) you do seem to be confusing "use the extension loading mechanism" with "use extensions".

Dragon
12-30-2014, 10:31 AM
Basically, I check whether extensions are listed to see if I can expect certain function pointers to be present or not. I've tried relying on the OpenGL version, but (1) you can't retrieve the version prior to creating a context, and (2) I've had more than one GPU driver claim to support a core profile while retrieving certain functions from it returned NULL. That's why I safeguard by checking whether an extension is listed before trying to query its functions. I also don't rely on functions existing just because the driver claims to support a certain core version, since that has failed more than once so far. Furthermore, I never query a function by a single name. So if I want glCompileShader I do something like this:


pglCompileShader = GetProcAddress("glCompileShader");
if( ! pglCompileShader ){
    pglCompileShader = GetProcAddress("glCompileShaderARB");
    if( ! pglCompileShader ){
        pglCompileShader = GetProcAddress("glCompileShaderExt");
        if( ! pglCompileShader ){
            fail
        }
    }
}

This has been the only solution that has worked so far with these requirements:
- no failing incorrectly if a function is not supported (optional functionality should not make the game bail out)
- failing if driver claims core but fails to deliver a function (for optional functionality treat it as extension not existing)
- retrieving core function over extension function if existing (extension function as fallback if core not found).

Do you see a better way to deal with problematic GPU drivers and the core/extension muddle?

Alfonse Reinheart
12-30-2014, 12:21 PM
I've tried relying on the OpenGL version, but (1) you can't retrieve the version prior to creating a context, and

You can't retrieve function pointers or check the extension string without a context either. Not on Win32.


(2) I've had more than one GPU driver claim to support a core profile while retrieving certain functions from it returned NULL.

This sometimes happens with extensions too.


So if I want glCompileShader I do something like this:

You realize that those are two different functions with two different behaviors, yes?

For example, you cannot assume that glCompileShaderARB can be fed shaders from GLSL versions other than 1.10. Oh sure, it might work. And it might not. It might work today and break tomorrow. It might work for most of your post-1.10 shaders, then break for something new you pass. Or whatever. The spec says that it only handles GLSL version 1.10, period.

More important is the fact that the two functions do not have the same interface. glCompileShader takes a GLuint, while glCompileShaderARB takes a GLhandleARB, which is usually a 32-bit signed integer, but on MacOSX (for a reason that escapes me) is a pointer. Your code will fail in a 64-bit MacOSX build, since a GLuint is always 32 bits in size.

Only you won't get a compilation error. Since you cast the function pointer to a function that takes a GLuint, C will dutifully obey you. And thus, the error you get is that you'll call a function that expects a parameter of a different size. Which leads to undefined behavior in C, and God only knows what kind of error you'll get at runtime. It could be somewhere in the driver, long after the function call returned. It could be stack trashing that only shows up a few million lines of code away. It could be nothing... until an unfortunate memory allocation or rare combination of code causes it to appear. And every time you attempt to debug it or change your code to find it, it will appear in a completely different location.

In short, what you're doing is both dangerous and non-portable.

Also, there is no glCompileShaderEXT and there never was. You cannot blindly assume that every core function has an ARB and EXT equivalent.


- no failing incorrectly if a function is not supported (optional functionality should not make the game bail out)
- failing if driver claims core but fails to deliver a function (for optional functionality treat it as extension not existing)
- retrieving core function over extension function if existing (extension function as fallback if core not found).

Do you see a better way to deal with problematic GPU drivers and the core/extension muddle?

It all depends on how you define "better". I prize de-facto correctness (ie: writing towards the specification), because at least then if my program doesn't do what it's supposed to, I know who to blame. Also, if I need to do implementation-specific fixes, I have a spec-correct baseline to start from.

Thus, if an implementation claims to provide version X.Y, but not all of the X.Y functions are actually exposed, then I would say that the implementation is too broken for the program to continue or recover. Even if the functions are for optional features of my program. An implementation that is so broken that it can't even expose broken versions of functions that it claims to support should not be trusted. I certainly would be disinclined to trust that the functions such an implementation actually provides do what they say that they do.

Extensions should only be used as "fallbacks" to core functions if those are so-called "backwards-compatibility" extensions. That is, if they lack an extension suffix. Otherwise, your code should be doing the actual work of using the extension feature.

You should never assume that, for example, "glBindBufferRangeEXT" (from EXT_transform_feedback) does the exact same thing as "glBindBufferRange". For example, there is nothing that says that glBindBufferRangeEXT can ever be used with the GL_UNIFORM_BUFFER binding point. Whereas ARB_uniform_buffer_object does state that it can be used with glBindBufferRange.

To be sure, it will probably work. But it might not. And you certainly have no foundation to stand on when submitting a bug report to an IHV if you want to make it work. The various specifications have no guarantees on the matter.

Yes, this means that if you can't get some functionality through core or a BC-extension, then you're going to have to put different branches for different extensions into your actual rendering code, rather than just swapping function pointers and pretending that there's no difference. Yes, this means more work for you. But it also means that your actual rendering code will be better able to handle oddball failures, and it won't try to demand the implementation do things that it doesn't have to.

You don't want to accidentally rely on implementation-defined behavior.

Dragon
12-30-2014, 02:14 PM
I doubt handling missing functions would be more work than it is right now. But that's not the main problem. I've heard here that you can't compile and link shaders beyond 1.10 anymore. But how are you supposed to work with shaders if you can't use those functions anymore? I'm just asking since, for example, core 3.3 lists glCompileShader. Your statements just leave me confused and don't give any answers.

Alfonse Reinheart
12-30-2014, 05:05 PM
I doubt handling missing functions would be more work than it is right now. But that's not the main problem. I've heard here that you can't compile and link shaders beyond 1.10 anymore.

No, you didn't. You heard:


I really hope that this doesn't mean that you're using GL_ARB_shader_objects...

And then you misinterpreted it as:


You want to tell me that stuff like glCompileShader is no longer supported by core GL > 3.0?

For the last time, glCompileShader does not come from ARB_shader_objects! glCompileShaderARB and glCompileShader are not the same function. They do not have identical behavior; they do not have identical interfaces; and trying to pretend that they do can only end in tears.

Mhagain's point, which he clarified, was simply this: you can't use functions from ARB_shader_objects with any GLSL version > 1.10. And your code that tries to pretend that these two very different functions are the same is wrong.

mhagain
12-30-2014, 05:23 PM
I doubt handling missing functions would be more work than it is right now. But that's not the main problem. I've heard here that you can't compile and link shaders beyond 1.10 anymore. But how are you supposed to work with shaders if you can't use those functions anymore? I'm just asking since, for example, core 3.3 lists glCompileShader. Your statements just leave me confused and don't give any answers.

You're still a bit mixed-up here it seems.

You can't use glCompileShaderARB to compile shaders beyond 1.10. Well, it might work on some drivers, but it's not guaranteed to, so you shouldn't.

You can use glCompileShader to compile shaders beyond 1.10.

glCompileShaderARB is the extension version, glCompileShader isn't. If a wglGetProcAddress call fails to return a valid function pointer for glCompileShader, you can't just try to use glCompileShaderARB instead, because they're not necessarily the same function. They might be with some drivers, but they're not guaranteed to be: the functions just aren't interchangeable like that, so it's not safe to assume that you can fall back to trying -ARB if a core entry point fails. Some day you're going to ship a program that will blow up in a user's face if you keep doing that.

Dragon
12-31-2014, 05:02 AM
This is all nice and fine, but it doesn't answer the problem. I'm more or less using core 3.3 with some extensions here and there, should they exist (to optimize some things if they are found). With an extension you can see whether it is listed and which functions should exist, and if one doesn't exist you know the driver is broken and you ignore the broken extension. But if you stick only to core, without extensions... how can you know which functions "have" to exist? How do you find out the "highest" core context you can create? In some cases an extension does not even export a function, only new symbols (seamless cube-map, for example). For extensions with functions I could just try to get every function and call the extension present if they are not NULL (which, as I figured out, does fail on some drivers). But for function-less extensions you can't do this. So how is this supposed to work in a portable way if you need to rely only on core?

Alfonse Reinheart
12-31-2014, 08:26 AM
But if you stick only to core, without extensions... how can you know which functions "have" to exist?

Generally speaking, if you're using some form of OpenGL loading library (https://www.opengl.org/wiki/OpenGL_Loading_Library), you don't need to, since that's its job. If you're writing your own, then you need to use the OpenGL specification for that version as your guide to what ought to exist. Alternatively, you can iterate through the XML file in the registry (https://www.opengl.org/registry/) to query exactly which functions are in each version of OpenGL. It's kept in Subversion version control, so you'll need that to download it. But the XML format is pretty readable, for both machines and people.


How do you find out the "highest" core context you can create?

That's a much easier question: try. wgl/glXCreateContextAttribsARB takes a major and minor version number. This represents the minimum version that you're looking for. If the implementation cannot supply that version number, then context creation will fail. And you can either try again with a lower version number or error out if that was the absolute minimum you support. Your context creation code will generally need to loop anyway, in case some of the other attributes can't be used on the context. So just add versioning to that loop as well.

Once you've created your actual context, it's simply a matter of getting the major and minor version numbers from the context.
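A rough WGL sketch of that loop (assuming "hDC" is your device context and wglCreateContextAttribsARB was already loaded through a dummy context; error handling omitted):


static const int versions[][ 2 ] = { { 4, 5 }, { 4, 4 }, { 4, 3 }, { 4, 2 }, { 4, 1 }, { 4, 0 }, { 3, 3 } };
HGLRC context = NULL;

for( int i = 0; i < ( int )( sizeof( versions ) / sizeof( versions[ 0 ] ) ) && ! context; i++ ){
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, versions[ i ][ 0 ],
        WGL_CONTEXT_MINOR_VERSION_ARB, versions[ i ][ 1 ],
        WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };
    context = wglCreateContextAttribsARB( hDC, NULL, attribs );
}

// after making the context current, query what you actually got
GLint major = 0, minor = 0;
glGetIntegerv( GL_MAJOR_VERSION, &major );
glGetIntegerv( GL_MINOR_VERSION, &minor );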

GClements
12-31-2014, 10:26 AM
But if you stick only to core, without extensions... how can you know which functions "have" to exist?
Refer to the documentation (https://www.opengl.org/sdk/docs/man3/). If a function is listed there but isn't present in 3.0, the documentation will state the minimum required version. Similarly, if the function itself is present in 3.0 but specific parameter combinations require a later version, the documentation will state that.

OpenGL versions are much like extensions except that they're identified by a number rather than name. Each new version typically adds some new functions as well as extending the behaviour of some existing functions. The presence of a function can be determined by whether a query for its pointer returns null (at least on Windows), but extensions to the behaviour of existing functions can only be determined by checking the version.

On platforms other than Windows, being able to get a pointer to a function doesn't mean that the function "exists" in the sense that it can be used. On X11, glXGetProcAddress() will return a non-null pointer if the function is present within the library. But the library function just sends a request to an X server, which may or may not implement the function. In order to actually use that function on a given context, you need to check that the context supports it (it's entirely possible for a single application to have connections to multiple X servers, each of which may support a different version, or may not support OpenGL at all).

Dragon
12-31-2014, 02:42 PM
On platforms other than Windows, being able to get a pointer to a function doesn't mean that the function "exists" in the sense that it can be used. On X11, glXGetProcAddress() will return a non-null pointer if the function is present within the library. But the library function just sends a request to an X server, which may or may not implement the function. In order to actually use that function on a given context, you need to check that the context supports it (it's entirely possible for a single application to have connections to multiple X servers, each of which may support a different version, or may not support OpenGL at all).
That one is quite tricky. I don't want to segfault in such a situation. So you mean there is no sane way to check for the existence of a function?


try. wgl/glXCreateContextAttribsARB takes a major and minor version number. This represents the minimum version that you're looking for. If the implementation cannot supply that version number, then context creation will fail. And you can either try again with a lower version number or error out if that was the absolute minimum you support. Your context creation code will generally need to loop anyway, in case some of the other attributes can't be used on the context. So just add versioning to that loop as well.

Once you've created your actual context, it's simply a matter of getting the major and minor version numbers from the context.
So let's say I want to check what version exists, starting at 4.5 and going down the version numbers trying to create contexts. If I, say, get a context at 4.0, does this not require the version to be "core 4.0" and not some random nonsense? I use only core, so no compatibility bit or debug bit set.

GClements
12-31-2014, 03:50 PM
That one is quite tricky. I don't want to segfault in such a situation. So you mean there is no sane way to check for the existence of a function?
Whether the function exists and whether the corresponding functionality is supported are two different things.

glXGetProcAddress() will return NULL if the library doesn't provide the function. But just because the library provides the function, that doesn't mean that the X server will understand the request which the function generates (if it doesn't, it will respond with a BadRequest error).

So if you want to use e.g. glCompileShader(), you need to do two things: check that glXGetProcAddress("glCompileShader") returns a non-NULL pointer, and check that the context supports OpenGL 2.0 or later (e.g. via glGetString(GL_VERSION)). If you have connections to multiple displays, the latter check needs to be performed separately for each display (but the former doesn't; unlike on Windows, function pointers can't be different for different contexts, as the functions are part of the client library rather than the driver).
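In code, those two checks might look roughly like this (a sketch; the version parsing is deliberately naive):


PFNGLCOMPILESHADERPROC pglCompileShader =
    ( PFNGLCOMPILESHADERPROC )glXGetProcAddress( ( const GLubyte * )"glCompileShader" );

// requires a current context; reports what this particular context supports
const char *version = ( const char * )glGetString( GL_VERSION );
int major = 0, minor = 0;
sscanf( version, "%d.%d", &major, &minor );

if( pglCompileShader && major >= 2 ){
    // glCompileShader can be used with this context
}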



So let's say I want to check what version exists, starting at 4.5 and going down the version numbers trying to create contexts. If I, say, get a context at 4.0, does this not require the version to be "core 4.0" and not some random nonsense? I use only core, so no compatibility bit or debug bit set.
It depends upon the interface. The latest GLX interface (glXCreateContextAttribsARB) can return a context which implements a later version, subject to some restrictions (which boil down to: all functionality which is present in the requested version must still be present in the provided version).

Regardless of how you create the context, once created and bound, glGetString(GL_VERSION) will report the actual version in use. Any functionality specified by that version will be available.

mbentrup
01-01-2015, 08:38 AM
glXGetProcAddress() will return NULL if the library doesn't provide the function.

At least on Linux glXGetProcAddress() will never return NULL. If the requested function is unknown to the GL driver, it will return a stub that just throws an error on every call.


The latest GLX interface (glXCreateContextAttribsARB) can return a context which implements a later version, subject to some restrictions (which boil down to: all functionality which is present in the requested version must still be present in the provided version).

Regardless of how you create the context, once created and bound, glGetString(GL_VERSION) will report the actual version in use. Any functionality specified by that version will be available.

Returning the highest (compatible) version GL context has actually been the way GLX works since version 1.0, the new feature in GLX_ARB_create_context is that you can request a minimum version. The old GLX way always requested an OpenGL 1.1 context, and so you wouldn't have been able to use any higher OpenGL version until GLX_ARB_create_context was added.

Also, this means it makes no sense to create contexts starting at version 4.5 and working downward until you get a version that works. Instead you should just specify the minimum version that your code requires, and if that works, check the GL_VERSION string to see if you got something better.

Dragon
01-01-2015, 05:50 PM
At least on Linux glXGetProcAddress() will never return NULL. If the requested function is unknown to the GL driver, it will return a stub that just throws an error on every call.
I do get NULL on Linux with different drivers. Why should it return a stub if you ask for a function whose calling signature the driver does not even know? If you have the wrong calling signature you're going to seriously mess up the stack.


Returning the highest (compatible) version GL context has actually been the way GLX works since version 1.0, the new feature in GLX_ARB_create_context is that you can request a minimum version. The old GLX way always requested an OpenGL 1.1 context, and so you wouldn't have been able to use any higher OpenGL version until GLX_ARB_create_context was added.

Also, this means it makes no sense to create contexts starting at version 4.5 and working downward until you get a version that works. Instead you should just specify the minimum version that your code requires, and if that works, check the GL_VERSION string to see if you got something better.
I don't see this behavior. I tried asking for 3.3 and got "3.3 core profile". If I put in something like 1.4 I get a "4.3 compat profile".

Alfonse Reinheart
01-01-2015, 07:07 PM
Why should it return a stub if you ask for a function whose calling signature the driver does not even know?

There's a reason for that, actually.

In Windows land, wglGetProcAddress requires that there be a valid OpenGL rendering context current. The functions it returns can still be called after destroying said context, but you must have created a context before you can query function pointers.

In GLX land this is not true. It is legal to call glXGetProcAddress without having a context open. And this is not unreasonable. It allows you to query GLX extensions without having to create a context (unlike Win32, where you have to create a dummy context, get your WGL functions, then make a real context, and get function pointers for that).

And the function pointer you get back needs to be a valid pointer to something. So for OpenGL functions (as opposed to GLX functions), many implementations of GLX will simply return a stub. The stub will query the current context; if there isn't one, it will error out. If there is one, but the driver won't return a function pointer for that function's name, it will error out.


I don't see this behavior. I tried asking for 3.3 and got "3.3 core profile". If I put in something like 1.4 I get a "4.3 compat profile".

Both of those are valid behavior, in accord with the create_context_attribs specification(s). It is legal for the implementation to return GL 3.3 if that's what you asked for, even if it could support 4.3. It is also legal for it to return 4.3 if you asked for 3.3.

It returned 4.3 compatibility when you asked for 1.4 because it already had to bump the version up, so it just returned everything.

I'm curious: on what hardware and drivers are you seeing this? I've tested similar code on both NVIDIA and ATI, and neither seemed to have this behavior. I would ask for, for example, GL 3.2 core and get 3.3 from the context.

Dark Photon
01-01-2015, 07:10 PM
At least on Linux glXGetProcAddress() will never return NULL. If the requested function is unknown to the GL driver, it will return a stub that just throws an error on every call.

If true, that appears to violate the GLX spec. Which vendor's GL driver are you seeing this on?



A return value of NULL indicates that the specified function does not exist for the implementation. A non-NULL return value for glXGetProcAddress does not guarantee that an extension function is actually supported at runtime. The client must also query glGetString(GL_EXTENSIONS ) or glXQueryExtensionsString to determine if an extension is supported by a particular context.

mbentrup
01-02-2015, 01:56 AM
If true, that appears to violate the GLX spec. Which vendor's GL driver are you seeing this on?

MESA does it, though their stub segfaults instead of returning an OpenGL error: http://cgit.freedesktop.org/mesa/mesa/tree/src/mapi/glapi/glapi_getproc.c#n502

This was discussed extensively when the proposal for a new Linux OpenGL ABI was made: drivers are allowed to return non-NULL pointers for unknown functions, and they even considered making this mandatory for the next ABI.

Dragon
01-02-2015, 05:18 AM
I'm curious: on what hardware and drivers are you seeing this? I've tested similar code on both NVIDIA and ATI, and neither seemed to have this behavior. I would ask for, for example, GL 3.2 core and get 3.3 from the context.
amd-catalyst-14-4-rev2-linux-amd64-may6 on an HD7970. Yields "4.3.12874 Core Profile Context 8.723" or "4.4.12874 Compatibility Profile Context 8.723" with glxinfo.

GClements
01-02-2015, 05:31 AM
I do get NULL on Linux with different drivers. Why should it return a stub if you ask for a function whose calling signature the driver does not even know? If you have the wrong calling signature you're going to seriously mess up the stack.
Not with the "cdecl" calling convention, which is the only one which Linux uses. This would be an issue for Windows, where the functions queried by wglGetProcAddress() use stdcall.

The biggest difference between glXGetProcAddress() and wglGetProcAddress() is that the functions returned by the latter are part of the driver, which means that they can vary between contexts (which is why you need a context to use that function). The glX version is little more than a wrapper around dlsym() (which is what everyone used before glXGetProcAddress() was added), so it doesn't require a context or even a connection to an X server (it's one of the few glX functions which doesn't take a Display* as an argument).



I don't see this behavior. I tried asking for 3.3 and got "3.3 core profile". If I put in something like 1.4 I get a "4.3 compat profile".
If you request version 3.2 or later but don't request a specific profile with GLX_CONTEXT_PROFILE_MASK_ARB, the default is a core profile. Versions prior to 3.2 don't have profiles, so you get a compatibility profile.

It appears that your implementation opts to just give you everything (i.e. latest version, compatibility profile) if you request a compatibility profile but the requested version if you request a core profile. That would satisfy the requirements of the specification without needing additional logic to determine which versions provide all of the functionality present in the requested version (currently, that's trivial, as nothing has actually been removed from the core profile since its introduction, but that may change in the future).

The exact wording from the extension specification (https://www.opengl.org/registry/specs/ARB/glx_create_context.txt) is


If a version less than or equal to 3.0 is requested, the context
returned may implement any of the following versions:

* Any version no less than that requested and no greater than 3.0.
* Version 3.1, if the GL_ARB_compatibility extension is also
implemented.
* The compatibility profile of version 3.2 or greater.

If version 3.1 is requested, the context returned may implement
any of the following versions:

* Version 3.1. The GL_ARB_compatibility extension may or may not
be implemented, as determined by the implementation.
* The core profile of version 3.2 or greater.

If version 3.2 or greater is requested, the context returned may
implement any of the following versions:

* The requested profile of the requested version.
* The requested profile of any later version, so long as no
features have been removed from that later version and profile.

Dragon
01-02-2015, 01:18 PM
Interesting to know is now if you miss out on functionality. So assuming you request a 3.3 core context and you get one, will you miss out on features/functionality/optimizations in contrary to another driver giving you 4.3 compatibility for example?

Alfonse Reinheart
01-02-2015, 01:46 PM
Please note the difference between what you see happening (in a particular implementation) and what is valid, legal behavior.

What you see happening in that implementation is legal OpenGL behavior. However, it is not the only legal behavior. The driver could also have given you core 4.0. Or core 4.3. Or whatever else it wanted.


So assuming you request a 3.3 core context and get one, will you miss out on features/functionality/optimizations compared to another driver giving you 4.3 compatibility, for example?

The implementation could do that; or it could not. The implementation is not required to give you the highest compatibility version if you asked for 1.4 either. It could have given you just 3.0, and that would be legal OpenGL behavior.

In short: there is no way to say, "give me your highest OpenGL version of the core profile." Nor is there a way to say "give me your highest OpenGL version of the compatibility profile." If a particular implementation does it in one case and not another, that's legal OpenGL behavior, but it's not required to do so.

But it is not something that any specification guarantees, so you should not rely on it. If you want to be able to reliably get the highest version, up to a particular point, you'll have to try creating every version 4.5 on down until one works. This is true for both core and compatibility.

GClements
01-02-2015, 06:13 PM
What would be interesting to know now is whether you miss out on functionality. So assuming you request a 3.3 core context and get one, will you miss out on features/functionality/optimizations compared to another driver giving you 4.3 compatibility, for example?
There's no point in getting a version beyond that which you actually use. If your code doesn't use any features beyond those which 3.3 provides, it doesn't matter whether you get 3.3 or 4.3.

If there's a difference in performance, you would expect the lower version to do better, as not having to allow for features which only exist in the higher version means having fewer constraints.

Also, it's entirely possible that any non-forward-compatible context always implements the highest supported version, and the driver just adjusts the result of glGetString(GL_VERSION) to report the requested version.

Alfonse Reinheart
01-03-2015, 10:12 AM
There's no point in getting a version beyond that which you actually use. If your code doesn't use any features beyond those which 3.3 provides, it doesn't matter whether you get 3.3 or 4.3.

What he's trying to do is use (for example) 3.3 as a required minimum, but to use any higher features if the option is available.

Remember how the thread started: glVertexAttribIPointer, which does not have a backwards-compatibility extension (or even just an ARB one). So he couldn't just check an extension; he had to rely on the version being there.

And while the point is moot for this specific case (since it's 3.0 and retains the deprecated features, so it's pretty much always going to be returned if it is available), it is a valid issue in general.


Also, it's entirely possible that any non-forward-compatible context always implements the highest supported version, and the driver just adjusts the result of glGetString(GL_VERSION) to report the requested version.

Possible, but that would rely on A) using the results of wgl/glXGetProcAddress as the deciding factor on whether something is available for use (bad for the aforementioned reasons), and B) relying on highly implementation-defined behavior.

It's better to just do the thing that you know will work. And that means starting from the highest version you care about and working your way down.

GClements
01-04-2015, 04:13 PM
What he's trying to do is use (for example) 3.3 as a required minimum, but to use any higher features if the option is available.
Right. The point is that if you ask for 3.3, there's no reason to assume that you'll get anything higher even if it's available.


Possible, but that would rely on A) using the results of wgl/glXGetProcAddress as the deciding factor on whether something is available for use (bad for the aforementioned reasons), and B) relying on highly implementation-defined behavior.
I said that it's possible, not that it should be relied upon. If the reported version is 3.3, you can assume that you have all the functionality in 3.3 and nothing more. If you request 3.3, you can assume that either you'll get a context which provides everything in 3.3 (maybe more, maybe not) or you'll get nothing (i.e. failure).


And that means starting from the highest version you care about and working your way down.
It may also be worth giving the user the option to limit the requested version. Maybe the newest version hasn't been as thoroughly debugged and/or optimised as an older version. If the program only gains minimal benefit from the new features, newest may not be best.

mbentrup
01-06-2015, 01:24 AM
It may also be worth giving the user the option to limit the requested version. Maybe the newest version hasn't been as thoroughly debugged and/or optimised as an older version. If the program only gains minimal benefit from the new features, newest may not be best.

The changes of OpenGL 3.3 and later versions have been released separately as ARB extensions too, so restricting the GL version wouldn't necessarily disable any features.