Supported OpenGL internalformats

I have a problem creating my 3D texture with glTexImage3D: when I choose one of the documented single-channel 16-bit internal formats, like GL_R16UI or GL_R16I, the command fails silently. I can only catch the error with gDebugger; for some reason glGetError doesn't report anything here. The error is GL_INVALID_OPERATION, if that matters. If I use GL_R16 instead (which is not mentioned in the list of supported formats), it works fine and appears to be stored internally as GL_R16, i.e. 16 bits, with no mention of signed or unsigned.

Since I am not a big fan of gambling, I would like to know which formats are actually supported, so that this cannot fail on other machines. The OpenGL 3 specification also doesn't say how GL_R16 is interpreted in the shader. I would have assumed signed, but I upload the data from an array of unsigned shorts and declare it as such, so I can't be sure whether those 16 unsigned bits get compressed to 8 bits, reinterpreted as a 16-bit signed format, or handled correctly internally.
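For reference, the calls in question look roughly like this (dimensions, the pointer name and the format/type parameters are placeholders, not copied from my actual code):

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);

    /* This variant fails silently for me (GL_R16I behaves the same): */
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R16UI, 64, 64, 64, 0,
                 GL_RED, GL_UNSIGNED_SHORT, data);

    /* This one works and seems to end up stored as GL_R16: */
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R16, 64, 64, 64, 0,
                 GL_RED, GL_UNSIGNED_SHORT, data);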

Additionally, I don't see how I could query the supported internal formats: there is no get function for it, and I couldn't find an overview of internal formats in gDebugger or similar tools. Is it not possible to query the available internal formats? How does one handle this in general? Just use the most general format, like GL_R, to be safe?

P.S.: I use the OpenGL 3.3 core profile.

GL_R16 is just like its counterparts GL_RGBA8, GL_RGB8, GL_LUMINANCE8, GL_LUMINANCE16 and the many others.
All of these formats store unsigned normalized integers.
You must use the ordinary samplers in your shader, such as sampler1D, sampler2D, sampler3D or samplerCube.

The values returned in your shader are floating point, in the range 0.0 to 1.0, because these are "normalized integer formats".

For GL_R16UI and GL_R16I, you must declare the sampler as a usampler* or isampler* type in your shader.
Also, the values returned in your shader are the raw integers: 0 to 65535 for GL_R16UI, and -32768 to +32767 for GL_R16I.
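A minimal GLSL sketch of the difference (embedded as a C string here; the uniform names, the input and the output are made up for illustration):

    static const char *frag_src =
        "#version 330 core\n"
        "uniform sampler3D  tex_norm; // GL_R16:   normalized floats in [0, 1]\n"
        "uniform usampler3D tex_uint; // GL_R16UI: raw uints in [0, 65535]\n"
        "uniform isampler3D tex_int;  // GL_R16I:  raw ints in [-32768, 32767]\n"
        "in vec3 uvw;\n"
        "out vec4 color;\n"
        "void main() {\n"
        "    float a = texture(tex_norm, uvw).r;\n"
        "    uint  b = texture(tex_uint, uvw).r;\n"
        "    int   c = texture(tex_int,  uvw).r;\n"
        "    color = vec4(a, float(b) / 65535.0, float(c) / 32767.0, 1.0);\n"
        "}\n";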

If the GPU can create a GL 3 context, then it must support GL_R16UI and GL_R16I. It sounds like you have some crappy drivers and/or GPU.

Thanks, I did not realize there was such a thing as isamplers and usamplers, and that those internal formats are meant to be used with them.
I did not read the full OpenGL 3 spec, I must have missed this important part.

I guess my drivers do not support those formats then. Is there no way to generally query supported formats? It seems odd that this can’t be queried to implement a fallback technique for such cases.

As for my issue, GL_R16 is the optimal solution then, thank you!

GL_R16I (and GL_R16UI) is supported by GL 3.3 - end of story. There is no need to query for it, since the driver has to support it if it's GL 3.3.

If it doesn't work for you, then either you have a problem elsewhere in your code (for example, you could have passed a format that is not *_INTEGER to glTexImage3D, or a format that is incompatible with integers), or you have found a driver bug. In the latter case you should report it where appropriate (if you care), detect the vendor / driver version with the problem, and provide a workaround for it (or drop support).

Four things are required for R16UI to work (see the sketch after this list):

  • declare the sampler as usampler3D (this requires GLSL #version 130, iirc)
  • pass a *_INTEGER format (e.g. GL_RED_INTEGER) when calling glTexImage3D()
  • set the texture filtering to GL_NEAREST
  • no mipmaps, iirc
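Something like this, putting the points together (sizes, names and the mipmap handling are illustrative only):

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);

    /* Integer internal format plus a *_INTEGER client format: */
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R16UI, 64, 64, 64, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_SHORT, data);

    /* Integer textures cannot be linearly filtered: */
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* Base level only, instead of uploading a full mipmap chain: */
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, 0);

    /* In GLSL (#version 130 or later):
         uniform usampler3D volume;
         uint v = texture(volume, uvw).r;  */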

Ilian is correct, except that mipmaps are supported and the minification filter can also be GL_NEAREST_MIPMAP_NEAREST besides GL_NEAREST.

Yeah, I didn't want to check the spec, so I just wrote "iirc" ^^".
Anyway, the glGetError() oddity you see is actually the way GL is designed to work. That GL_INVALID_OPERATION is something gDebugger retrieved from the driver by... calling glGetError() after every GL call (except for calls made between glBegin/glEnd).
When you call glTexImage3D() with proper arguments and data, the internal GL error flag won't be set, so glGetError() will return GL_NO_ERROR. That's because you might still want to upload more mip levels, tune the filtering, and so on - basically perform many more steps to complete the texture.
The actual validation (and, usually, the upload to VRAM) of the texture happens on the next draw call that uses it. If the texture is not set up perfectly, the call to glDrawArrays() will set the error flag to GL_INVALID_OPERATION. gDebugger calls glGetError() after the draw call and reports the GL_INVALID_OPERATION. In your case, I bet you didn’t set the filtering to GL_NEAREST - which by spec should be an error.

So, in general, uploading resources to GL will not immediately raise an error unless the parameters for their creation are blatantly wrong. After draw calls (and readbacks and FBO operations), you'll be told whether the resources are incomplete or some state is wrong.
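In code, the pattern looks roughly like this (the texture setup and the draw call are placeholders):

    glTexImage3D(GL_TEXTURE_3D, 0, GL_R16UI, 64, 64, 64, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_SHORT, data);
    printf("after glTexImage3D: 0x%04X\n", glGetError());  /* GL_NO_ERROR, even if
                                                               the texture isn't usable yet */

    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
    printf("after glDrawArrays: 0x%04X\n", glGetError());  /* GL_INVALID_OPERATION if some
                                                               state needed by the draw is bad */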

Btw, if you're targeting GL 3.x, the spec is very strict about which formats must be supported (R16UI etc.); otherwise the implementation isn't GL 3.x at all. So there is no need to worry about whether things are supported - just check the spec. :slight_smile:

Oh, the joy of mutable objects in OpenGL. Some time ago I (with a colleague's aid) realized that through a sequence of valid commands (i.e. no error anywhere) you can end up with an FBO depth attachment that has a color internal format…

I bet you didn’t set the filtering to GL_NEAREST - which by spec should be an error.

That should only cause sampling operations to return black - such a texture is incomplete.
For an error, it would have to be something like a uniform type mismatch, I guess.