GL_LUMINANCE_ALPHA texture help

Can anyone help me set up a GL_LUMINANCE_ALPHA texture with the GL_LUMINANCE16_ALPHA16 internal format?
Currently, when setting up my texels, I'm using GLubyte and assigning 4 bytes to each texel:

texels = (GLubyte *)malloc(texW * texH * texD * texBytes);

/* TEXEL3(s, t, r) computes the index of the first byte of texel (s, t, r) */
texels[TEXEL3(s, t, r)]     = 0x00;
texels[TEXEL3(s, t, r) + 1] = 0x00;
texels[TEXEL3(s, t, r) + 2] = 0x00;
texels[TEXEL3(s, t, r) + 3] = 0x00;

glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16_ALPHA16, texH, texW, texD, 1, GL_LUMINANCE_ALPHA, GL_FLOAT, texels);

Creating the texture causes an error. Can anyone help me fix this?

The way you pass the image data to glTexImage3D doesn't make much sense to me. You tell glTexImage3D that texels stores GLfloat data, but in reality it stores GLubyte. You said you assign 4 bytes to each texel, but with that call each texel is 8 bytes: a GLfloat is 4 bytes, and GL_LUMINANCE_ALPHA requires two components (luminance and alpha), which makes a total of 8 bytes per texel. So unless texBytes is at least 8, glTexImage3D is reading way past the end of the memory block.
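
Just to illustrate the sizes (only a sketch; ftexels is a throwaway name and the 0.0f values are placeholders): if the data really were GL_FLOAT, the buffer would have to be allocated and filled like this:

/* two GLfloat components (luminance, alpha) per texel = 8 bytes per texel */
GLfloat *ftexels = (GLfloat *)malloc((size_t)texW * texH * texD * 2 * sizeof(GLfloat));
for (size_t i = 0; i < (size_t)texW * texH * texD; ++i) {
    ftexels[2 * i]     = 0.0f;   /* luminance */
    ftexels[2 * i + 1] = 0.0f;   /* alpha */
}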

Do you know how I would assign a 32-bit value to a luminance_alpha texture? What type should I use instead of GLfloat? I'm trying to store 32 bits in each texel, where the top 16 bits represent an unsigned value between 0 and 2^16-1 and the lower 16 bits represent another unsigned value in the same range.

I would try this:

glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16_ALPHA16, texH, texW, texD, 1, GL_LUMINANCE_ALPHA, GL_UNSIGNED_SHORT, texels);

The layout of the data in texels is actually specified by the two parameters right before it in the argument list: the format (GL_LUMINANCE_ALPHA) and the type (GL_UNSIGNED_SHORT). With GL_UNSIGNED_SHORT, each texel is two 16-bit unsigned values, which is exactly what you described wanting to store.
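
Putting it together, here's a minimal sketch of the whole setup under that suggestion. Everything in it is placeholder: stexels is a throwaway name, the 0xFFFF/0x0000 fill values stand in for your real data, I've put texW in the width slot (adjust if your naming differs), and I'm passing 0 for border since the data contains no border texels:

/* assumes <GL/gl.h> (or your extension loader) and <stdlib.h> are included */
GLushort *stexels = (GLushort *)malloc((size_t)texW * texH * texD * 2 * sizeof(GLushort));

for (size_t i = 0; i < (size_t)texW * texH * texD; ++i) {
    stexels[2 * i]     = 0xFFFF;   /* luminance: 16-bit unsigned value, 0..65535 */
    stexels[2 * i + 1] = 0x0000;   /* alpha: 16-bit unsigned value, 0..65535 */
}

/* signature: target, level, internalFormat, width, height, depth, border, format, type, data */
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16_ALPHA16,
             texW, texH, texD, 0,
             GL_LUMINANCE_ALPHA, GL_UNSIGNED_SHORT, stexels);

That way each texel carries two 16-bit unsigned values, which maps directly onto GL_LUMINANCE16_ALPHA16 without any float conversion.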