glTexImage2D vs glTexImage1D

Hey,
I have a texture of size 285x32 (maxBasis x basis.size())
of type GL_UNSIGNED_BYTE (GLubyte).

If I load the texture using:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI,
             maxBasis, basis.size(),
             0, GL_RED_INTEGER_EXT, GL_UNSIGNED_BYTE, baseRefF);

The values on the GPU are a mess: some look good, others are changed, and some are just wrong.
It almost looks like it is reading from invalid memory locations.

However, after several attempts to understand what is happening (I changed the data type, the size, the texture, etc., always with the same result…), I noticed that if I upload using:

glTexImage1D(GL_TEXTURE_1D, 0, GL_R8UI,
             maxBasis basis.size * (),
             0, GL_RED_INTEGER_EXT, GL_UNSIGNED_BYTE, baseRefF);

The values are all OK.

With the other textures I use, I don’t run into any similar problem.
Does anyone know what is going on?

I have an ATI Radeon HD5650 that supports OpenGL 4.1.

I’m not sure your call to glTexImage1D passes the correct size.

I think it’s supposed to be maxBasis * basis.size(). Could be a typo, though. Could you post the whole code that sets up the texture object (wrap it in [code] … [/code] tags)?

Yeah, that’s a typo; it’s maxBasis * basis.size() as you said.
Here’s the code that generates the data. It basically uses
int getRefFuncId(…)
to extract a number in the range [1…7] (so it can be encoded in a single byte) from another data structure.



GLubyte *baseRefF = (GLubyte *) malloc(sizeof(GLubyte) * basis.size() * maxBasis);

// initialize everything to 0
for (int i = 0; i < basis.size() * maxBasis; i++)
	baseRefF[i] = GLubyte(0);

// fill in with the function ids, one row per basis entry
for (int i = 0; i < basis.size(); i++)
{
	for (int j = 0; j < basis[i].size(); j++)
	{
		baseRefF[i * maxBasis + j] = GLubyte(getRefFuncId(basis[i][j].p));
	}
}





Hmm, I think you’re indexing the array the wrong way. Above you gave the example of a 285x32 texture, i.e. maxBasis x basis.size().

Assuming maxBasis is the x- and basis.size() is the y-coordinate, the assignment should look like this:



for(int i = 0; i < basis.size() ; i++)
{
	for(int j = 0; j < maxBasis ; j++)
	{
	  baseRefF[i * basis.size() + j] = GLubyte(getRefFuncId(basis[i][j].p));
	}
}


This will correctly fill maxBasis columns of basis.size() rows. However, I’m not sure if getRefFuncId(basis[i][j].p) will yield the correct value when the indices change like that.

No, that would go out of range on the basis[i][j] structure.

The way you indexed baseRefF[i * basis.size() + j] is also wrong: because the stride basis.size() is smaller than the row width maxBasis, consecutive rows overlap, and several positions in the array will be written more than once.
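
Just to make the layout explicit: in a row-major image the stride between rows has to be the row width, which is maxBasis here, so my original indexing is correct. A minimal sketch (the pixelIndex helper is only for illustration):

// row-major addressing: pixel (x = j, y = i) of an image `width` pixels
// wide lives at index y * width + x; here width == maxBasis
size_t pixelIndex(size_t x, size_t y, size_t width)
{
	return y * width + x;	// so baseRefF[i * maxBasis + j] is the right spot
}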

You could use gDEBugger (http://www.gremedy.com/) to inspect the texture (it doesn’t do anything you can’t do yourself, it just makes it nice and easy with a GUI). It lets you inspect the texture values, their actual stored format (which can be different from what you requested), etc.

I’m using gDEBugger; that’s how I noticed the texture is wrong.
A professor of mine said it could be the fact that the dimensions are not a power of 2. Even though OpenGL specifies support for non-power-of-2 textures, they can still have problems… :(

Have you tried

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

because if each row of your texture is 285 bytes wide, it isn’t 4-byte aligned (GL defaults to assuming rows start on 4-byte boundaries).
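
Something like this before the upload should do it (just a sketch, reusing the glTexImage2D call from your first post):

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);	// rows may start on any byte boundary
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI,
             maxBasis, basis.size(),
             0, GL_RED_INTEGER_EXT, GL_UNSIGNED_BYTE, baseRefF);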

You, sir, are awesome!
Thanks, that did the trick.

I didn’t even know something like that would be necessary.

I have some other textures that are powers of 2, but they might not be once I generalize the code (it is working for just a specific problem for now). I also have some 3D textures of dimensions 256x256x11 that don’t have any problems yet. Should I consider some sort of precondition testing when uploading textures? Do you have any advice on this?

The 256x256x11 texture works just fine, as each row of data is 4-byte aligned (even in the case of a single 8-bit component), but if you used, let’s say, 255x255x11 with an R8 format, I assure you it wouldn’t work either.

Use the command advised by Dan Bartlett and you’ll have no issues with odd-sized textures; there’s nothing else you have to do besides that.
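
If you still want a precondition check once you generalize the code, a minimal sketch could pick the largest alignment GL accepts (1, 2, 4 or 8) that divides the row size in bytes. rowSizeInBytes here is just a stand-in for width * bytes per pixel, e.g. 285 * 1 for your texture:

// sketch: choose the largest legal unpack alignment for this row size
GLint align = 8;
while (align > 1 && (rowSizeInBytes % align) != 0)
	align /= 2;
glPixelStorei(GL_UNPACK_ALIGNMENT, align);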

Alright, cool.

Is there any drawback to using

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

for power-of-2 sized textures?

I can imagine a performance reduction; I just hope it’s not too much.

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

It only has an effect on uploading the texture.
If you do it once, it isn’t a problem. If you are going to do it continuously at runtime, it might be.
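
If you ever worry about it affecting other uploads, a sketch of scoping it to the one upload and restoring the previous value afterwards:

GLint prevAlign;
glGetIntegerv(GL_UNPACK_ALIGNMENT, &prevAlign);	// default is 4
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// … the glTexImage2D upload goes here …
glPixelStorei(GL_UNPACK_ALIGNMENT, prevAlign);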

Nah, I just upload them once.

Thank you all!