glTexImage2D seg faulting

Perhaps I am missing something, but I am getting segfaults when I call glTexImage2D with certain texture dimensions.

example:

	int w = 400, h = 20, Bpp = 3;
	int size = w*h*Bpp;
	unsigned char buffer[size];
	memset( buffer, 150, size );	// gray
	
	GLuint idTexture;
	glGenTextures( 1, &idTexture );
	glBindTexture( GL_TEXTURE_2D, idTexture );
	
	glTexImage2D( 
			GL_TEXTURE_2D,
			0,
			3, 
			w, h, 
			0, 
			GL_RGB,
			GL_UNSIGNED_BYTE, buffer );	

Will throw a segmentation fault about 50% of the time. However, if I do something as simple as changing the height to ‘25’, then it doesn’t segfault at all.

This is just a simple snippet; I get the same error with my loaded textures, but I used a flat gray block here for demonstration.

Width must be 2^m + 2(border) for some integer m.
Height must be 2^n + 2(border) for some integer n.

Does it solve your problem?
-Ehsan-

I think that does help me quite a bit. Thank you muchly!

Just a side note: You shouldn’t be using the constant 3 for the internal format, it is deprecated. Use GL_RGB or GL_RGB8 instead.

Since this thread already exists I might as well ask here. I have a strange problem with glTexImage2D that also creates some crashes on occasion.

I am calling

glTexImage2D(GL_TEXTURE_2D, 0, texformat, rw, rh, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

texformat may be either GL_RGBA8 or GL_ALPHA8. rw and rh are the correct texture dimensions (normally powers of 2, except when non-power-of-2 textures are supported).

When I allocate buffer to the exact size of rw*rh I get regular crashes inside the OpenGL code. When I allocate one line more than needed it works on NVidia cards but on some ATIs it still crashes on occasion.

Originally posted by Ketracel White:
[b]When I allocate buffer to the exact size of rw*rh I get regular crashes inside the OpenGL code. When I allocate one line more than needed it works on NVidia cards but on some ATIs it still crashes on occasion.[/b]
There has to be some error in your code, but from this one line it’s not clear where it is. Maybe you should show more (especially the memory allocation of your buffer).

Originally posted by Ketracel White:
[b]When I allocate buffer to the exact size of rw*rh I get regular crashes inside the OpenGL code. When I allocate one line more than needed it works on NVidia cards but on some ATIs it still crashes on occasion.[/b]
Are you making sure that you always allocate rw*rh*4 bytes for your buffer? You are specifying GL_RGBA for your client-side format.

Do you really think it would be that simple?
The buffer gets allocated in various locations and all calls look like this one:

		buffer=(unsigned char *)calloc(4,rw * (rh+1));

When I take out the ‘+1’ it crashes on occasion, and even with it I got reports of ATI cards crashing.

On NVidia I could verify with the debugger that it tried to read past the end of the buffer.

You mention that the buffer is allocated in various locations. Are you sure that the values of rw and rh do not change between the allocation and the glTexImage2D call?

Are you sure your pixel unpacking parameters are set correctly? See the docs for glPixelStorei.

If you’re working with GL_RGB / GLubyte data you typically want to set GL_UNPACK_ALIGNMENT to 1 (it defaults to 4).

The problem is not the border, there is no border requested nor sent to teximage in that call.

Make sure you have a valid current OpenGL context when doing this.