How to use GL_RGB10_A2UI in OpenGL 3.3 or later?

Dear all,
I want to use the glTexImage2D() function with the GL_RGB10_A2UI internal format, but it reports an invalid operation and shows a white texture on the target.

Step 1:
Just declare a global texture buffer array:

#define checkImageWidth 64
#define checkImageHeight 64
static GLuint checkImage[checkImageHeight][checkImageWidth];
static GLuint texName[1];   /* texture name used in step 2 */
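The array is then filled with packed texels, roughly like this (only a sketch; the checker pattern and the makeCheckImage() helper are just an example of how 10/10/10/2 data can be packed, not code from the original post):

static void makeCheckImage(void)
{
    int i, j;
    for (i = 0; i < checkImageHeight; i++) {
        for (j = 0; j < checkImageWidth; j++) {
            /* 10-bit maximum (1023) for white squares, 0 for black ones */
            GLuint c = ((i & 8) ^ (j & 8)) ? 1023u : 0u;
            /* GL_UNSIGNED_INT_10_10_10_2 layout: R in bits 31-22, G in 21-12,
               B in 11-2, A in the lowest 2 bits */
            checkImage[i][j] = (c << 22) | (c << 12) | (c << 2) | 3u;
        }
    }
}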

Step 2:

glGenTextures(1, texName);
glBindTexture(GL_TEXTURE_2D, texName[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_RGB10_A2UI,
             checkImageWidth,
             checkImageHeight,
             0,
             GL_RGBA,
             GL_UNSIGNED_INT_10_10_10_2,
             checkImage);

Does anyone know what's wrong with my understanding of the GL_RGB10_A2UI parameter?
Thank you very much!

As far as I can tell, an INVALID_OPERATION should not be generated by glTexImage2D as all conditions are met for the call to be successful.

What hardware? Latest drivers?

The API ref at http://www.opengl.org/sdk/docs/man3/xhtml/glTexImage2D.xml doesn’t list GL_RGB10_A2UI as a valid constant for the internal format. However, the spec (4.2 core) does, and mentions that GL_RGB10_A2UI was introduced with GL 3.3. This, however, is probably a shortcoming of the reference. Even if the constant weren’t defined for 3.3-capable hardware, the GL would probably report an INVALID_VALUE.
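To narrow this down, it may help to drain the error queue before the call and check it right afterwards, so it is clear that this particular glTexImage2D is the one raising GL_INVALID_OPERATION. A minimal sketch, reusing the call from your post (the fprintf needs <stdio.h>):

while (glGetError() != GL_NO_ERROR)
    ;   /* clear any errors left over from earlier calls */

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2UI,
             checkImageWidth, checkImageHeight, 0,
             GL_RGBA, GL_UNSIGNED_INT_10_10_10_2, checkImage);

GLenum err = glGetError();
if (err != GL_NO_ERROR)
    fprintf(stderr, "glTexImage2D error: 0x%04x\n", err);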

You have to use GL_RGBA_INTEGER or GL_BGRA_INTEGER.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2UI, 64, 64,
0, GL_RGBA_INTEGER, GL_UNSIGNED_INT_10_10_10_2, checkImage);

[QUOTE=thokra;1237918]As far as I can tell, an INVALID_OPERATION should not be generated by glTexImage2D as all conditions are met for the call to be successful.

What hardware? Latest drivers?

The API ref at http://www.opengl.org/sdk/docs/man3/xhtml/glTexImage2D.xml doesn’t list GL_RGB10_A2UI as a valid constant for the internal format. However, the spec (4.2 core) does, and mentions that GL_RGB10_A2UI was introduced with GL 3.3. This, however, is probably a shortcoming of the reference. Even if the constant weren’t defined for 3.3-capable hardware, the GL would probably report an INVALID_VALUE.[/QUOTE]
Hi thokra,
My hardware is an NVIDIA 8400M and my OS is Ubuntu 11.10.
I used the glxinfo command to print the information as follows:

season@ubuntu:~$ glxinfo | grep GL_ARB_texture_rgb10_a2ui
    GL_ARB_texture_rectangle, GL_ARB_texture_rg, GL_ARB_texture_rgb10_a2ui,

season@ubuntu:~$ glxinfo | grep 3.3
OpenGL version string: 3.3.0 NVIDIA 280.13
OpenGL shading language version string: 3.30 NVIDIA via Cg compiler
0x023 32 tc 0 32 0 r y . 8 8 8 0 . s 4 24 8 16 16 16 16 0 0 None
0x063 32 tc 0 32 0 r . . 8 8 8 0 . s 4 0 0 16 16 16 16 0 0 None
0x073 32 tc 0 32 0 r . . 8 8 8 0 . s 4 24 8 16 16 16 16 4 1 Ncon
0x0b3 32 tc 0 32 0 r . . 8 8 8 0 . s 4 24 0 16 16 16 16 0 0 None
0x0c3 32 tc 0 32 0 r y . 8 8 8 0 . s 4 24 8 16 16 16 16 4 1 Ncon

[QUOTE=V-man;1237930]You have to use GL_RGBA_INTEGER or GL_BGRA_INTEGER.

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2UI, 64, 64,
0, GL_RGBA_INTEGER, GL_UNSIGNED_INT_10_10_10_2, checkImage);[/QUOTE]
Thank you very much, V-man.
I used GL_RGBA_INTEGER instead of GL_RGBA, and glGetError() now reports no error; it seems to be OK now.
I read the OpenGL Red Book; it says the format can be ‘GL_RGBA or GL_RGBA_INTEGER’. Does that mean GL_RGBA == GL_RGBA_INTEGER?
From the above code, that looks contradictory.

I read the OpenGL Red Book; it says the format can be ‘GL_RGBA or GL_RGBA_INTEGER’. Does that mean GL_RGBA == GL_RGBA_INTEGER?

No, it means you can use one or the other; which one you use depends on what you’re trying to do. Just as you can use GL_RGBA8 or GL_RGB10_A2UI for the internal format. However, if you actually want 10/10/10/2 unsigned integer image data, you have to use GL_RGB10_A2UI, not GL_RGBA8.
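For example (a sketch only; byteData and packedData are hypothetical client-side arrays of GLubyte and GLuint, respectively, not variables from this thread):

/* Normalized 8-bit RGBA texture, fed from plain unsigned bytes: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, byteData);

/* Unsigned integer 10/10/10/2 texture, fed from packed 32-bit words;
   the _INTEGER external format is required here: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2UI, 64, 64, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_INT_10_10_10_2, packedData);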

Also, please use proper [ code ] tags, not font sizes and such for code.

Why does an implementation need GL_RGBA_INTEGER? Why doesn’t the implementation simply deduce it from the internal format, which clearly states UI, or from the type, which even more clearly states UNSIGNED_INT? Aside from that, neither the man pages for GL3 nor those for GL4 mention RGBA_INTEGER.

Well, if you use GL_RGBA + GL_UNSIGNED_INT, it means that you have a four-component, 32-bit-per-component normalized source format (one that’s mapped to floating-point values in the range [0,1]), whereas when you specify GL_RGBA_INTEGER + GL_UNSIGNED_INT, you explicitly state that the source format is unnormalized integer.

The implementation cannot deduce this from the internal format, because the internal format and the external format have to be completely independent; they never rely on each other, and there are only restrictions on which combinations are valid to use.

I agree it’s confusing, but it makes sense if you think about it this way.
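To illustrate with the same GL_UNSIGNED_INT type and a hypothetical GLuint array (call it uintData; it is not from this thread), only the external format changes how the data is interpreted. A sketch:

/* GL_RGBA: each 32-bit value is treated as normalized data and converted
   to [0,1] before being stored in the normalized GL_RGBA8 internal format: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
             GL_RGBA, GL_UNSIGNED_INT, uintData);

/* GL_RGBA_INTEGER: the values stay raw unsigned integers, which is what an
   unnormalized internal format such as GL_RGBA32UI expects: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, 64, 64, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_INT, uintData);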

Aside from that, neither the man pages for GL3 nor those for GL4 mention RGBA_INTEGER.

Thank you for pointing that out so that I can correct it on the Wiki’s 4.x documentation.

It’s fixed now.

[QUOTE=Alfonse Reinheart;1237938]Also, please use proper [ code ] tags, not font sizes and such for code.[/QUOTE]
OK, I got it! Thanks.

You’re welcome. I actually didn’t know there was a separate API ref in the wiki.

Pretty cool though. Thanks for that Alfonse!