Bitwise?

I have a data cube of packed 64-bit values that I need to transfer to the GPU to do bitwise operations on.

Would this be correct texture creation?

glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D( GL_TEXTURE_3D, 0, GL_RGBA16UI_EXT, BW, BH, BD, 0, GL_RGBA, GL_UNSIGNED_INT, data );

What I don’t quite understand is what happens if I do this in GLSL?

uint alpha = texture3D( 3dtexture, vec3( 0.1, 0.1, 0.1 ) ).a;

What is the size of ‘alpha’? Is it a 16-bit unsigned int? That doesn’t make sense. Does OpenGL convert the data used when creating the texture to a 32-bit uint? If so, how can I get the last 10 bits?

Thanks!

You specified every pixel as 4 unsigned shorts (GL_RGBA16UI_EXT), so “.a” will be an integer in 0…65535. The GPU has 32-bit integer registers, so it’ll be stored in your “uint alpha” as a 32-bit uint with values in 0…65535.
What is problematic with your code is the GL_UNSIGNED_INT. It should be GL_UNSIGNED_SHORT, otherwise the driver will be converting your values in software while uploading the texture data to VRAM.
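
For illustration, a minimal C sketch of what the client-side data could look like so a GL_UNSIGNED_SHORT upload needs no conversion: each packed 64-bit voxel value is split into four 16-bit channel values, one per component. The helper name packVoxel and the choice of putting the least-significant bits in the first channel are just assumptions for this example (the component ordering is discussed further down).

#include <stdint.h>

/* Hypothetical helper: split one packed 64-bit voxel value into the four
   16-bit channels of an RGBA16UI texel (dst points at 4 unsigned shorts). */
void packVoxel( uint64_t value, unsigned short *dst )
{
    dst[0] = (unsigned short)(  value         & 0xFFFF );  /* R: bits  0..15 */
    dst[1] = (unsigned short)( (value >> 16)  & 0xFFFF );  /* G: bits 16..31 */
    dst[2] = (unsigned short)( (value >> 32)  & 0xFFFF );  /* B: bits 32..47 */
    dst[3] = (unsigned short)( (value >> 48)  & 0xFFFF );  /* A: bits 48..63 */
}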

I see. But you said there are only 32-bit uints, so it will be converted anyway?
And how would I go about getting the last 10 bits of the 64-bit uint?

My take is that the alpha value will then correspond to the last 16 bits, right? And since it gets converted into a 32-bit uint, something like this:

uint alpha = texture3D( 3dtexture, vec3( 0.1, 0.1, 0.1 ) ).a;
uint mask = 0u;
mask = ~mask;
mask = mask >> 22;
uint new = alpha & mask;

Can I do this? In fact, I think I’ve tried it and it just gave me zeros.

The LSB will be in the x value.
xyzw
rgba <-- GL_RGBA16UI_EXT
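
Tying that together, a small GLSL 1.30-style sketch (the function names are made up for illustration). If the least-significant 16 bits of the packed 64-bit value were stored in the first channel, then the lowest 10 bits come out of .r, and the topmost 10 bits out of .a:

// 'texel' is one voxel as four 16-bit channels: r = bits 0..15, ..., a = bits 48..63
uint lowest10Bits( uvec4 texel )
{
    return texel.r & 0x3FFu;   // keep only the 10 least-significant bits
}

uint highest10Bits( uvec4 texel )
{
    return texel.a >> 6u;      // drop the low 6 bits of the 16-bit alpha channel
}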

You ought to be getting INVALID_OPERATION, and no texture, because you specified an integer internalformat (GL_RGBA16UI_EXT) but a non-integer client format (GL_RGBA).

See the spec: http://www.opengl.org/registry/specs/EXT/texture_integer.txt

Use RGBA_INTEGER if you want GL to treat your client data as integers.

Any type combination is valid: you can feed in 32-bit ints and ask GL to store them in 16 bits. The data will just be truncated (if the driver really chooses a 16-bit internal format; you can verify that with glGetTexLevelParameter(…INTERNAL_FORMAT…)).
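
Putting those two corrections together, a hedged sketch of the texture setup (BW, BH, BD and data as in the original post; it assumes EXT_texture_integer is available, and error handling is omitted):

GLuint tex;
glGenTextures( 1, &tex );
glBindTexture( GL_TEXTURE_3D, tex );
glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );  /* integer textures must not be filtered */
glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexImage3D( GL_TEXTURE_3D, 0, GL_RGBA16UI_EXT, BW, BH, BD, 0,
              GL_RGBA_INTEGER_EXT,   /* integer client format, so the data is not normalized */
              GL_UNSIGNED_SHORT,     /* matches the 16-bit channels, no conversion on upload */
              data );

/* Check which internal format the driver actually chose. */
GLint internalFormat = 0;
glGetTexLevelParameteriv( GL_TEXTURE_3D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat );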

OK, this is not working and I don’t know why. I filled nodePool with ones:

glTexImage3D( GL_TEXTURE_3D, 0, GL_LUMINANCE_ALPHA32UI_EXT, 2, 2, 2, 0, GL_LUMINANCE_ALPHA_INTEGER_EXT, GL_UNSIGNED_INT, nodePool );

and I’m doing this in the frag shader:

uint l = texture3D( nodePool, vec3( 0.9, 0.9, 0.9 ) ).a;
float ccc = float( l );
gl_FragColor = vec4( ccc, ccc, ccc, 1.0 );

And it’s giving me black.

OK, apparently there is a usampler3D that I didn’t know about. Thanks, guys :)
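
For reference, a minimal sketch of the fragment shader with the integer sampler (assuming GLSL 1.30, where usampler3D and the generic texture() call are core; with older GLSL you would need GL_EXT_gpu_shader4 instead):

#version 130

uniform usampler3D nodePool;   // integer sampler, to match GL_LUMINANCE_ALPHA32UI_EXT

void main()
{
    uint  l = texture( nodePool, vec3( 0.9, 0.9, 0.9 ) ).a;  // uvec4 fetch, alpha channel
    float c = float( l );                                    // constructor-style conversion
    gl_FragColor = vec4( c, c, c, 1.0 );
}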