Problem with ARB_texture_rg on NV

Hello,

I wanted to kindly ask if this is a driver bug or not.

This code:


        if (!GLEW_ARB_texture_rg)
        {
            printf( "# ARB_texture_rg is not supported on this hardware.\n" );
            return;
        }
        glGenTextures( 1, &textureId );
        glBindTexture( GL_TEXTURE_2D, textureId );
        assert( glGetError() == GL_NO_ERROR );
        glTexImage2D( GL_TEXTURE_2D, 0, GL_R32UI, 256, 4, 0, GL_RED, GL_UNSIGNED_INT, NULL );
        printf( "%s\n", glGetString( GL_VENDOR ) );
        printf( "%s\n", glGetString( GL_RENDERER ) );
        printf( "%s\n", glGetString( GL_VERSION ) );
        printf( "%s\n", glGetString( GL_SHADING_LANGUAGE_VERSION ) );
        printf( "glGetError() returns 0x%x\n", glGetError() );

Outputs:

NVIDIA Corporation
GeForce GTS 450/PCI/SSE2
4.1.0
4.10 NVIDIA via Cg compiler
glGetError() returns 0x502

where 0x502 is GL_INVALID_OPERATION.

I think I've got the latest drivers installed on Windows 7 64-bit…

Thank you for any help,

Yes, it should error; NVIDIA’s correct.

GL_R32UI is an integral texture internal format. It must always be paired with a pixel transfer format that ends in _INTEGER, so here it needs to be GL_RED_INTEGER rather than GL_RED.

I know that, since you pass NULL, you won’t be doing any pixel transfers, so it’s harmless. The OpenGL spec doesn’t care; it says that if you use an integral internal format, you must use a pixel transfer format that ends with _INTEGER or you get GL_INVALID_OPERATION. And you do.
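
In other words, only the format argument of the allocation needs to change; on a conformant driver this should then return GL_NO_ERROR:

        // GL_R32UI is integral, so the transfer format must be GL_RED_INTEGER,
        // even though no data is uploaded (the data pointer stays NULL).
        glTexImage2D( GL_TEXTURE_2D, 0, GL_R32UI, 256, 4, 0,
                      GL_RED_INTEGER, GL_UNSIGNED_INT, NULL );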

Ok, thank you very much for helping to clarify this :slight_smile:

Well, sorry to bump this one again, but I've gotten stuck on a further issue. To clarify a bit, I'm trying to implement a radix sort on the GPU using OpenGL and a good bunch of extensions, which is why I need these unsigned integer texture formats so much. :wink:

I know there might not be much benefit in doing so, but I'd like to get the sorted data directly on the GPU (sorting particles and transparent triangles).

I now have a compiling shader using EXT_shader_image_load_store and EXT_gpu_shader4, with a couple of R32UI images bound: one read-only, and one write-only that stores statistics about the first. The read-only image is actually a texture buffer backed by a VBO holding the keys to sort.
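
For context, the host-side setup looks roughly like this; keysTex, keysVbo, statsTex and the image unit indices are placeholder names for illustration, not the exact code:

        // Sketch only: back the read-only keys image with a buffer texture...
        glBindTexture( GL_TEXTURE_BUFFER, keysTex );
        glTexBuffer( GL_TEXTURE_BUFFER, GL_R32UI, keysVbo );
        glBindImageTextureEXT( 0, keysTex, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32UI );
        // ...and bind the 2D statistics texture as a write-only image.
        glBindImageTextureEXT( 1, statsTex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32UI );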

I first tried the approach of using an FBO to render into the unsigned luminance integer texture, but the result was always a texture filled with zeros. This is why I am now trying to use only shader_image_load_store, as there seem to be conflicts when it is combined with FBOs.

I should first report a bug I stumbled upon: when glMemoryBarrierEXT is called before any draw call has been issued, it causes the application process to exit without any explicit error message. When it's called afterwards there are no problems, so I assume it works properly.
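
For what it's worth, the ordering that does work looks roughly like this (the point draw and keyCount are just placeholders for my actual dispatch):

        // Issue the draw that performs the image stores first...
        glDrawArrays( GL_POINTS, 0, keyCount );
        // ...then make those stores visible to later reads. Calling the barrier
        // before any draw call is what terminates the process here.
        glMemoryBarrierEXT( GL_SHADER_IMAGE_ACCESS_BARRIER_BIT_EXT );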

When it comes to using uint in GLSL code, I was surprised how restrictive it is with constants: it forces you to explicitly cast every integer constant to uint because implicit conversions from int are forbidden, so you end up with code like this:

uint test = uint(0);
uint somecalculation = (i<<uint(2))+j;

Also, when it comes to EXT_shader_image_load_store, I noticed that qualifiers from the specification such as const, restrict and coherent are not accepted when declaring images. This seems to be the only declaration that works in my case:

layout(size1x32) volatile uniform uimage2D statistics;

I'm running into some really odd side effects because I need some cache coherency, but

layout(size1x32) coherent uniform uimage2D statistics;

triggers an error:

0(7) : error C0000: syntax error, unexpected identifier, expecting “::” at token “coherent”

I hope my feedback (and mistakes) can help others learn from this…
Regards,

When it comes to using uint in GLSL code, I was surprised how restrictive it is with constants: it forces you to explicitly cast every integer constant to uint because implicit conversions from int are forbidden, so you end up with code like this:

That’s because you’re using GLSL version 1.20 (which is why you need to use EXT_gpu_shader4). If you used, for example, 3.30, it would happen naturally.

Also, you can just use “2u” for unsigned integer literals. No need to typecast.
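
For instance, assuming i and j are already uints as in the snippet above, the earlier code can simply read:

uint test = 0u;
uint somecalculation = (i << 2u) + j;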

Thank you, it does indeed work when I specify

#version 410 compatibility

I was previously using 150… with 330 compatibility I still get the error.

But I notice that integer constants do cast implicitly to float from GLSL 330. :slight_smile:
