Vertex shader integer input broken

Banged my head against the wall for hours trying to figure out why a shader didn’t work, only to find that GLSL apparently cannot deal with integer vertex input attributes, although the specs clearly state it should. Take this code:

glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 16, pointer );
glVertexAttribPointer( 1, 1, GL_INT, GL_FALSE, 16, pointer+12 );

This defines a vec3 and int input attribute. The shader looks like this:

#version 140

uniform samplerBuffer texWeightMatrices;

in vec3 inPosition;
in int inWeight;

out vec3 outPosition;

void main( void ){
   vec4 row1 = texelFetch( texWeightMatrices, inWeight*3 );
   vec4 row2 = texelFetch( texWeightMatrices, inWeight*3 + 1 );
   vec4 row3 = texelFetch( texWeightMatrices, inWeight*3 + 2 );
   outPosition = vec4( inPosition, 1.0 ) * mat3x4( row1, row2, row3 );
   gl_Position = vec4( 0.0, 0.0, 0.0, 1.0 );
}

This totally fails, resulting in wrong values written to the result VBO. Doing this, on the other hand:

// same as above
in float inWeight;
// same as above
   int weight = int( inWeight ) * 3; // and now using weight instead of inWeight*3

This works correctly. According to the GLSL spec, section “4.3.4 Inputs”, the first version should be correct too:

Vertex shader inputs can only be float, floating-point vectors, matrices, signed and unsigned integers and integer vectors. They cannot be arrays or structures.

What’s going on here? Why is it impossible to use “in int” although the specs clearly allow it?

[QUOTE=Dragon;1263327]

glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 16, pointer );
glVertexAttribPointer( 1, 1, GL_INT, GL_FALSE, 16, pointer+12 );

This defines a vec3 and int input attribute.[/QUOTE]

No, it does not. It defines two floating-point vertex shader inputs, one of which is stored in the buffer object as 3 floats, and the other of which is stored as 1 non-normalized 32-bit signed integer.

If you want to specify a vertex attribute that connects to an actual integer in GLSL, you must use glVertexAttribIPointer. Note the “I” present before “Pointer”. That means you’re specifying data that will be retrieved in GLSL by an integer variable (signed or unsigned).
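
For the layout in the original post, that would look something like this (note that glVertexAttribIPointer has no “normalized” parameter):

glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, 16, pointer );  // -> in vec3 inPosition
glVertexAttribIPointer( 1, 1, GL_INT, 16, pointer+12 );          // -> in int inWeight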

If you’re wondering why OpenGL makes you specify redundant information, it doesn’t. What this does is let you specify that a floating-point input in GLSL comes from normalized or non-normalized integer data in your VBO. It’s a way of doing compression. For example, it allows you to store the 4 components of a color as normalized, unsigned bytes, using only 4 bytes that feeds an entire “vec4” in GLSL.
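
A quick sketch of that (the attribute index, stride and colorOffset here are made up for illustration):

// 4 normalized unsigned bytes in the buffer feed a vec4 color in [0.0, 1.0] in GLSL
glVertexAttribPointer( 2, 4, GL_UNSIGNED_BYTE, GL_TRUE, stride, colorOffset );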

That indeed fixes the problem. I didn’t see this function since it is not part of any extension; I work only with extensions, no core features at all. But in the glext.h from the OpenGL website the function you mentioned is only listed under core 3.0, not under any extension. That’s very unfortunate. People can easily miss this function if they only look at extensions.

It was introduced in EXT_gpu_shader4, way back in 2006.

You should really stop doing that. There are a lot of features that have been added directly into core OpenGL, without a matching extension.

Extensions should be the exception in your code, not the rule. Especially since you seem to be using modern OpenGL 3.x+ style (in/out variables, etc).

[QUOTE=Alfonse Reinheart;1263375]You should really stop doing that. There are a lot of features that have been added directly into core OpenGL, without a matching extension.

Extensions should be the exception in your code, not the rule. Especially since you seem to be using modern OpenGL 3.x+ style (in/out variables, etc).[/QUOTE]
I can’t support this thesis one bit, though. I’ve seen GPU drivers which, for example, did not implement GL 4.x but exposed extensions from it. If I requested a certain core level I would miss out on functionality. This is why I use only extensions and check whether functions exist. Everything else just hasn’t been portable enough to be useful. OpenGL is great, but sticking to core has never worked for me so far… especially on Windows with the most broken OpenGL implementation in existence :confused:

I can’t support this thesis one bit, though.

But you do, whether you believe you do or not.

Your shader uses [var]in[/var] and [var]out[/var]. There is no OpenGL extension that specifies these keywords. The closest you get is the terrible ARB_geometry_shader4 (defining “varying in” and “varying out”), which is so dissimilar from the actual core GL 3.2 geometry shader feature that I don’t even link to them on the Geometry Shader wiki article, for fear of confusing people.

If your shader uses those keywords on global variables, then it depends on at least GLSL 1.30 (aka: GL 3.0). Your shader code (and therefore your OpenGL code) is not written against some extension (or at least, not just an extension); it is written against GL 3.0. So you’re already making a minimum assumption of GL 3.0.

And speaking of geometry shaders, as I stated above, the core GS functionality is exposed in a very different way from the ARB_geometry_shader4 extension. I’m not sure how compatible the extension functionality will be with more advanced stuff (the interaction between ARB_tessellation_shader and ARB_geometry_shader4 is not specified, for example). Furthermore, Mac OSX does not implement the ARB_geometry_shader4 extension. Not even on the older 3.2 implementations.

So if you’re doing this “extension only” coding for compatibility, you’re not helping yourself. Granted, GS functionality isn’t the most useful thing in the world, but that’s not the only stuff that was incorporated with significant changes (ARB_fbo has some wording changes compared to the core FBO functionality, for example).

At the very least, you can’t just ignore the version number and rely on backwards-compatibility extensions, because there have been some useful bits and pieces that never saw an extension form. You need to treat the OpenGL version as another extension, another variable which could expose functionality.

[QUOTE=Alfonse Reinheart;1263381]But you do, whether you believe you do or not.

Your shader uses [var]in[/var] and [var]out[/var]. There is no OpenGL extension that specifies these keywords. The closest you get is the terrible ARB_geometry_shader4 (defining “varying in” and “varying out”), which is so dissimilar from the actual core GL 3.2 geometry shader feature that I don’t even link to them on the Geometry Shader wiki article, for fear of confusing people.

If your shader uses those keywords on global variables, then it depends on at least GLSL 1.30 (aka: GL 3.0). Your shader code (and therefore your OpenGL code) is not written against some extension (or at least, not just an extension); it is written against GL 3.0. So you’re already making a minimum assumption of GL 3.0.

And speaking of geometry shaders, as I stated above, the core GS functionality is exposed in a very different way from the ARB_geometry_shader4 extension. I’m not sure how compatible the extension functionality will be with more advanced stuff (the interaction between ARB_tessellation_shader and ARB_geometry_shader4 is not specified, for example). Furthermore, Mac OSX does not implement the ARB_geometry_shader4 extension. Not even on the older 3.2 implementations.

So if you’re doing this “extension only” coding for compatibility, you’re not helping yourself. Granted, GS functionality isn’t the most useful thing in the world, but that’s not the only stuff that was incorporated with significant changes (ARB_fbo has some wording changes compared to the core FBO functionality, for example).

At the very least, you can’t just ignore the version number and rely on backwards-compatibility extensions, because there have been some useful bits and pieces that never saw an extension form. You need to treat the OpenGL version as another extension, another variable which could expose functionality.[/QUOTE]
This doesn’t make sense. When I started using #version 130, #version 140 and even #version 150 (when required), I was working with GPU drivers that did not expose the required core profiles, yet I could still use these GLSL versions and the extensions linked to them. If I had relied on core I would not have been able to do what I’ve done up to now.

Granted, my current GPU is high-end, so it supports >4.3, but I cannot rely on that. I even have to bundle glext.h since Windows still has a pathetic GL implementation. If I used the core profile there I’d be stuck at something like 1.2, or maybe 2.x if I’m lucky, without even having the entry points.

If you know a “sane” and “portable” way to get the required functionality from an installed GPU driver without relying on extensions, you’re kindly invited to tell me. But so far I know of no such way.

I really hope that this doesn’t mean that you’re using GL_ARB_shader_objects…

You want to tell me that stuff like glCompileShader is no longer supported by core GL > 3.0?

glCompileShader() still exists. glGetObjectParameterivARB() has been replaced with glGetShaderiv() and glGetProgramiv(), and several other “object” functions have similarly been split into distinct shader and program versions.
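
For reference, a minimal sketch of the core-API way to check compile status (assuming a GL 2.0+ context and that “source” holds the shader text):

GLuint shader = glCreateShader( GL_VERTEX_SHADER );
glShaderSource( shader, 1, &source, NULL );
glCompileShader( shader );

GLint status = GL_FALSE;
glGetShaderiv( shader, GL_COMPILE_STATUS, &status ); // replaces glGetObjectParameterivARB
if( status != GL_TRUE ){
   char log[ 1024 ];
   glGetShaderInfoLog( shader, sizeof( log ), NULL, log );
   // 'log' now contains the compiler output
}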

In any case, if you’re relying upon the existence of that extension, there’s no actual reason for modern implementations to provide it when they provide equivalent functionality through the core API.

The extension need not be returned by glGetStringi(GL_EXTENSIONS, index) (note that glGetString(GL_EXTENSIONS) may fail with GL_INVALID_ENUM in recent versions), and wglGetProcAddress(“glCompileShaderARB”) may fail even if wglGetProcAddress(“glCompileShader”) works. If glCompileShaderARB() exists, it doesn’t necessarily support the same GLSL versions as glCompileShader() (i.e. up to the version indicated by glGetString(GL_SHADING_LANGUAGE_VERSION)).
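
For what it’s worth, the GL 3.0+ way of enumerating extensions looks like this (a sketch; on Windows glGetStringi itself has to be loaded through wglGetProcAddress):

GLint numExtensions = 0;
glGetIntegerv( GL_NUM_EXTENSIONS, &numExtensions );

for( GLint i = 0; i < numExtensions; i++ ){
   const char *name = ( const char* )glGetStringi( GL_EXTENSIONS, i );
   // compare 'name' against the extensions you actually care about
}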

If GL 3.0-level GLSL code works with glCompileShaderARB() on your system, good for you. But you’re not providing any “compatibility” by using the extension instead of the core function.

Also:

Windows’ OpenGL version is whatever the driver provides. The DLL only exports OpenGL 1.1 functions; anything added in later versions must be obtained via wglGetProcAddress(). This doesn’t mean that such functions are extensions, though.

[QUOTE=Dragon;1263401]This doesn’t make sense. When I started using #version 130, #version 140 and even #version 150 (when required), I was working with GPU drivers that did not expose the required core profiles, yet I could still use these GLSL versions and the extensions linked to them. If I had relied on core I would not have been able to do what I’ve done up to now.

Granted, my current GPU is high-end, so it supports >4.3, but I cannot rely on that. I even have to bundle glext.h since Windows still has a pathetic GL implementation. If I used the core profile there I’d be stuck at something like 1.2, or maybe 2.x if I’m lucky, without even having the entry points.

If you know a “sane” and “portable” way to get the required functionality from an installed GPU driver without relying on extensions, you’re kindly invited to tell me. But so far I know of no such way.[/QUOTE]

I understand the problem now. You seem to be confusing “extension” with “loading function pointers.” They’re not the same thing.

The Windows OpenGL32.dll only directly exposes functions in OpenGL 1.1. To access core functions from versions higher than this, you must retrieve the function pointers with wglGetProcAddress. Or use a library to do it. This is the same mechanism you use for loading extension function pointers.

That doesn’t make version 3.0 an extension to OpenGL. The version 3.0 functions are loaded manually, but they’re still core functions, not extensions. You just access them through the same mechanism.

So when I was talking about sticking to core OpenGL, I didn’t mean that you wouldn’t have to load any function pointers. Only that you wouldn’t be using extensions.
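
For illustration only, a minimal sketch of loading a core entry point on Windows (assuming a current context and a bundled glext.h for the PFNGLCOMPILESHADERPROC typedef):

#include <windows.h>
#include <GL/gl.h>
#include "glext.h"   // provides PFNGLCOMPILESHADERPROC

PFNGLCOMPILESHADERPROC pglCompileShader = NULL;

void loadCoreFunctions( void ){
   // glCompileShader is core since GL 2.0; it is not an extension, it simply has to be
   // fetched through the same loading mechanism as extension functions.
   pglCompileShader = ( PFNGLCOMPILESHADERPROC )wglGetProcAddress( "glCompileShader" );
   if( ! pglCompileShader ){
      // the context does not provide GL 2.0 core functionality
   }
}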

glCompileShaderARB(GLhandleARB shaderObj) is the one I mean; that’s from the old GL_ARB_shader_objects extension. The reason why I ask is that this is the only expression of GLSL objects as an extension: glCompileShader (i.e. without the -ARB suffix) and friends were never available via an extension but were only ever part of core GL 2.0. So if they’re what you’re using, then right away you’re using core GL and not extensions.

The reason why this is important is because you should never rely on newer functionality interoperating with the GL_ARB_shader_objects extension (unless it in turn is explicitly specified to do so). If you’re relying on extensions you should check the “Dependencies” section of their own specifications, check which GL version(s) (or other extensions) are required for supporting the new extension, and check which GL version the extension specification is written against. If you see that an extension requires GL2.0 and/or is written against the GL2.0 spec, for example, then you should never rely on that extension working with GL_ARB_shader_objects. It might on some hardware/drivers, but it might explode on others, and even if it does work, vendors are under no obligation to ensure that it continues to do so.

All this is academic of course since (as Alfonse commented) you do seem to be confusing “use the extension loading mechanism” with “use extensions”.

Basically I check whether extensions are listed to see if I can expect certain function pointers to be present or not. I’ve tried relying on the OpenGL version, but (1) you can’t retrieve the version prior to creating a context, and (2) I’ve had more than one GPU driver claim to support a core profile while retrieving certain functions from it returned NULL. That’s why I safeguard by checking if an extension is listed before trying to query its functions. I also don’t rely on functions existing just because the driver claims to support a certain core version, since this has failed more than once so far. Furthermore, I never query a function by a single name. So if I want glCompileShader I do something like this:

pglCompileShader = GetProcAddress("glCompileShader");           // core name first
if( ! pglCompileShader ){
   pglCompileShader = GetProcAddress("glCompileShaderARB");     // ARB-suffixed fallback
   if( ! pglCompileShader ){
      pglCompileShader = GetProcAddress("glCompileShaderEXT");  // EXT-suffixed fallback
      if( ! pglCompileShader ){
         // fail
      }
   }
}

This has been the only solution that has worked so far, given these requirements:

  • no failing incorrectly if a function is not supported (optional functionality should not make the game bail out)
  • failing if driver claims core but fails to deliver a function (for optional functionality treat it as extension not existing)
  • retrieving core function over extension function if existing (extension function as fallback if core not found).

Do you see a better way to deal with problematic GPU drivers and the core/extension muddle?

I’ve tried relying on the OpenGL version, but (1) you can’t retrieve the version prior to creating a context, and

You can’t retrieve function pointers or check the extension string without a context either. Not on Win32.

(2) I’ve had more than one GPU driver claim to support a core profile while retrieving certain functions from it returned NULL.

This sometimes happens with extensions too.

So if I want glCompileShader I do something like this:

You realize that those are two different functions with two different behaviors, yes?

For example, you cannot assume that glCompileShaderARB can be fed shaders from GLSL versions other than 1.10. Oh sure, it might work. And it might not. It might work today and break tomorrow. It might work for most of your post-1.10 shaders, then break for something new you pass. Or whatever. The spec says that it only handles GLSL version 1.10, period.

More important is the fact that the two functions do not have the same interface. glCompileShader takes a GLuint, while glCompileShaderARB takes a GLhandleARB, which is usually a 32-bit unsigned integer, but on MacOSX (for a reason that escapes me) is a pointer. Your code will fail in a 64-bit MacOSX build, since GLuints are always 32 bits in size.
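
For reference, the relevant pieces look roughly like this in a typical glext.h (paraphrased, not copied from any particular header):

#ifdef __APPLE__
typedef void *GLhandleARB;          // a pointer: 64 bits in a 64-bit OSX build
#else
typedef unsigned int GLhandleARB;   // a 32-bit integer everywhere else
#endif

void glCompileShader( GLuint shader );             // core, GL 2.0+
void glCompileShaderARB( GLhandleARB shaderObj );  // ARB_shader_objects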

Only you won’t get a compilation error. Since you cast the function pointer to a function that takes a GLuint, C will dutifully obey you. And thus, the error you get is that you’ll call a function that expects a parameter of a different size. Which leads to undefined behavior in C, and God only knows what kind of error you’ll get at runtime. It could be somewhere in the driver, long after the function call returned. It could be stack trashing that only shows up a few million lines of code away. It could be nothing… until an unfortunate memory allocation or rare combination of code causes it to appear. And every time you attempt to debug it or change your code to find it, it will appear in a completely different location.

In short, what you’re doing is both dangerous and non-portable.

Also, there is no glCompileShaderEXT and there never was. You cannot blindly assume that every core function has an ARB and EXT equivalent.

  • no failing incorrectly if a function is not supported (optional functionality should not make the game bail out)
  • failing if driver claims core but fails to deliver a function (for optional functionality treat it as extension not existing)
  • retrieving core function over extension function if existing (extension function as fallback if core not found).

Do you see a better way to deal with problematic GPU drivers and the core/extension muddle?

It all depends on how you define “better”. I prize de-facto correctness (ie: writing towards the specification), because at least then if my program doesn’t do what it’s supposed to, I know who to blame. Also, if I need to do implementation-specific fixes, I have a spec-correct baseline to start from.

Thus, if an implementation claims to provide version X.Y, but not all of the X.Y functions are actually exposed, then I would say that the implementation is too broken for the program to continue or recover. Even if the functions are for optional features of my program. An implementation that is so broken that it can’t even expose broken versions of functions that it claims to support should not be trusted. I certainly would be disinclined to trust that the functions such an implementation actually provides do what they say that they do.

Extensions should only be used as “fallbacks” to core functions if those are so-called “backwards-compatibility” extensions. That is, if they lack an extension suffix. Otherwise, your code should be doing the actual work of using the extension feature.

You should never assume that, for example, “glBindBufferRangeEXT” (from EXT_transform_feedback) does the exact same thing as “glBindBufferRange”. There is nothing, for instance, that says glBindBufferRangeEXT can ever be used with the GL_UNIFORM_BUFFER binding point, whereas ARB_uniform_buffer_object does state that glBindBufferRange can be.

To be sure, it will probably work. But it might not. And you certainly have no foundation to stand on when submitting a bug report to an IHV if you want to make it work. The various specifications have no guarantees on the matter.

Yes, this means that if you can’t get some functionality through core or a BC-extension, then you’re going to have to put different branches for different extensions into your actual rendering code, rather than just swapping function pointers and pretending that there’s no difference. Yes, this means more work for you. But it also means that your actual rendering code will be better able to handle oddball failures, and it won’t try to demand the implementation do things that it doesn’t have to.
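
As a purely hypothetical sketch of what such branching might look like (the “caps” structure and its flags are made up; the EXT path has to be written against EXT_framebuffer_object’s own rules, which differ in places from core FBOs):

GLuint fbo = 0;

if( caps.hasCoreFBO ){                  // GL 3.0+ / ARB_framebuffer_object
   glGenFramebuffers( 1, &fbo );
   glBindFramebuffer( GL_FRAMEBUFFER, fbo );
}else if( caps.hasExtFBO ){             // EXT_framebuffer_object fallback
   glGenFramebuffersEXT( 1, &fbo );
   glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, fbo );
   // ...and the rest of this code path follows the EXT spec, not the core spec
}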

You don’t want to accidentally rely on implementation-defined behavior.

I doubt there is more work in handling missing functions than there is right now. But that’s not the main problem. I’ve heard here that you can’t compile and link shaders beyond 1.10 anymore. But how are you supposed to work with shaders if you can’t use those functions anymore? I’m just asking, since core 3.3, for example, lists glCompileShader. Your statements just leave me confused and don’t give any answers.

No, you didn’t. You heard that glCompileShaderARB (the ARB_shader_objects entry point) cannot be relied upon to compile shaders from GLSL versions beyond 1.10.

And then you misinterpreted that as meaning you can’t compile and link shaders beyond 1.10 at all.

For the last time, glCompileShader does not come from ARB_shader_objects! glCompileShaderARB and glCompileShader are not the same function. They do not have identical behavior; they do not have identical interfaces; and trying to pretend that they do can only end in tears.

Mhagain’s point, which he clarified, was simply that you can’t use functions from ARB_shader_objects with any GLSL version > 1.10. And your code that tries to pretend that these two very different functions are the same is wrong.

You’re still a bit mixed-up here it seems.

You can’t use glCompileShaderARB to compile shaders beyond 1.10. Well, it might work on some drivers, but it’s not guaranteed to, so you shouldn’t.

You can use glCompileShader to compile shaders beyond 1.10.

glCompileShaderARB is the extension version, glCompileShader isn’t. If a wglGetProcAddress call fails to return a valid function pointer for glCompileShader, you can’t just try to use glCompileShaderARB instead, because they’re not necessarily the same function. They might be with some drivers, but they’re not guaranteed to be: the functions just aren’t interchangeable like that, so it’s not safe to assume that you can fall back to trying -ARB if a core entry point fails. Some day you’re going to ship a program that will blow up in a user’s face if you keep doing that.

This is all nice and fine, but it doesn’t answer the problem. I’m more or less using core 3.3 with some extensions here and there, should they exist (to optimize some stuff if they are found). With an extension you can see whether it is listed and therefore which functions should exist; if one of them doesn’t, you know the driver is broken and you ignore the broken extension. But if you stick only to core, without extensions… how can you know which functions have to exist? How do you find out the “highest” core context you can create? In some cases an extension does not even export functions but only new tokens (seamless cube maps, for example). With extensions that have functions I could go through and try to get every function of the extension, and call the extension present if none of them are NULL (which does fail on some drivers, as I found out). But for function-less extensions you can’t do this. So how is this supposed to work in a portable way if you need to rely only on core?

But if you stick only to core, without extensions… how can you know which functions have to exist?

Generally speaking, if you’re using some form of OpenGL loading library, you don’t need to, since that’s its job. If you’re writing your own, then you need to use the OpenGL specification for that version as your guide to what ought to exist. Alternatively, you can iterate through the XML file in the registry to query exactly which functions are in each version of OpenGL. The registry is kept in Subversion, so you’ll need an SVN client to download it, but the XML format is pretty readable, for both machines and people.
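
As a rough illustration (a hypothetical sketch, not from any real loader): you could keep a table of core entry points and the version that introduced them, derived from the spec or gl.xml, and verify it against what the driver actually hands back:

#include <windows.h>

typedef struct { const char *name; int major, minor; } CoreFunc;

// a tiny illustrative subset; a real table would be generated from gl.xml
static const CoreFunc coreFuncs[] = {
   { "glCompileShader",        2, 0 },  // core since GL 2.0
   { "glGetStringi",           3, 0 },  // core since GL 3.0
   { "glVertexAttribIPointer", 3, 0 }   // core since GL 3.0
};

// returns 1 if every function required by the claimed context version resolves
static int verifyCoreFunctions( int ctxMajor, int ctxMinor ){
   int i;
   for( i = 0; i < ( int )( sizeof( coreFuncs ) / sizeof( coreFuncs[ 0 ] ) ); i++ ){
      const CoreFunc *f = &coreFuncs[ i ];
      if( f->major > ctxMajor || ( f->major == ctxMajor && f->minor > ctxMinor ) ){
         continue;  // not required at this context version
      }
      if( ! wglGetProcAddress( f->name ) ){
         return 0;  // driver claims the version but does not deliver the function
      }
   }
   return 1;
}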

How do you find out the “highest” core context you can create?

That’s a much easier question: try. wgl/glXCreateContextAttribsARB takes a major and minor version number. This represents the minimum version that you’re looking for. If the implementation cannot supply that version number, then context creation will fail. And you can either try again with a lower version number or error out if that was the absolute minimum you support. Your context creation code will generally need to loop anyway, in case some of the other attributes can’t be used on the context. So just add versioning to that loop as well.

Once you’ve created your actual context, it’s simply a matter of getting the major and minor version numbers from the context.
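
Putting that together, a minimal Win32 sketch might look like this (not from the thread: hdc and the version list are placeholders, the WGL_CONTEXT_* tokens come from wglext.h, and wglCreateContextAttribsARB must itself be obtained via wglGetProcAddress using a dummy context first):

// try descending versions until context creation succeeds
static const int versions[][ 2 ] = { { 4, 5 }, { 4, 3 }, { 3, 3 }, { 3, 0 } };
HGLRC context = NULL;
int i;

for( i = 0; i < 4 && ! context; i++ ){
   const int attribs[] = {
      WGL_CONTEXT_MAJOR_VERSION_ARB, versions[ i ][ 0 ],
      WGL_CONTEXT_MINOR_VERSION_ARB, versions[ i ][ 1 ],
      0
   };
   context = wglCreateContextAttribsARB( hdc, NULL, attribs );
}
if( ! context ){
   // below the absolute minimum we support: bail out
}

// once the context is current, query what was actually created (GL 3.0+)
GLint major = 0, minor = 0;
glGetIntegerv( GL_MAJOR_VERSION, &major );
glGetIntegerv( GL_MINOR_VERSION, &minor );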