
glTexImage2D seg faulting



indigo_child
09-30-2005, 07:55 PM
Perhaps I am missing something, but I am getting seg faults when I call glTexImage2D with certain texture dimensions.

example:



int w = 400, h = 20, Bpp = 3;
int size = w*h*Bpp;
unsigned char buffer[size];
memset( buffer, 150, size ); // gray

GLuint idTexture;
glGenTextures( 1, &idTexture );
glBindTexture( GL_TEXTURE_2D, idTexture );

glTexImage2D(
    GL_TEXTURE_2D,
    0,
    3,
    w, h,
    0,
    GL_RGB,
    GL_UNSIGNED_BYTE, buffer );

This will throw a segmentation fault about 50% of the time. However, if I do something as simple as changing the height to '25', then it doesn't segfault at all.

This is just a simple snippet; I get the same error with my loaded textures, but I used a gray block here for demonstration.

Ehsan Kamrani
09-30-2005, 09:26 PM
width must be 2^m + 2*(border) for some integer m.
height must be 2^n + 2*(border) for some integer n.

Does it solve your problem?
-Ehsan-

indigo_child
10-01-2005, 12:53 AM
I think that does help me quite a bit. Thank you muchly!

Overmind
10-01-2005, 02:50 AM
Just a side note: you shouldn't use the constant 3 for the internal format; it is deprecated. Use GL_RGB or GL_RGB8 instead.

Ketracel White
10-02-2005, 05:35 AM
Since this thread already exists I might as well ask here. I have a strange problem with glTexImage2D that also causes crashes on occasion.

I am calling


glTexImage2D(GL_TEXTURE_2D, 0, texformat, rw, rh, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

texformat may be either GL_RGBA8 or GL_ALPHA8. rw and rh are the correct dimensions of the texture (powers of 2, except when non-power-of-2 textures are supported).

When I allocate buffer to the exact size of rw*rh I get regular crashes inside the OpenGL code. When I allocate one line more than needed it works on NVidia cards but on some ATIs it still crashes on occasion.

Trahern
10-02-2005, 07:43 AM
Originally posted by Ketracel White:
Since this thread already exists I might as well ask here. I have a strange problem with glTexImage2D that also causes crashes on occasion.

I am calling


glTexImage2D(GL_TEXTURE_2D, 0, texformat, rw, rh, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

texformat may be either GL_RGBA8 or GL_ALPHA8. rw and rh are the correct dimensions of the texture (powers of 2, except when non-power-of-2 textures are supported).

When I allocate buffer to the exact size of rw*rh I get regular crashes inside the OpenGL code. When I allocate one line more than needed it works on NVidia cards, but on some ATIs it still crashes on occasion.

There has to be some error in your code, but from this one line it's not clear where it is. Maybe you should show more (especially the memory allocation of your buffer).

Omaha
10-02-2005, 10:59 AM
Originally posted by Ketracel White:
Since this thread already exists I might as well ask here. I have a strange problem with glTexImage2D that also causes crashes on occasion.

I am calling


glTexImage2D(GL_TEXTURE_2D, 0, texformat, rw, rh, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

texformat may be either GL_RGBA8 or GL_ALPHA8. rw and rh are the correct dimensions of the texture (powers of 2, except when non-power-of-2 textures are supported).

When I allocate buffer to the exact size of rw*rh I get regular crashes inside the OpenGL code. When I allocate one line more than needed it works on NVidia cards, but on some ATIs it still crashes on occasion.

Are you making sure that you always allocate rw*rh*4 bytes for your buffer? You are specifying GL_RGBA as your client-side format.

Ketracel White
10-02-2005, 12:23 PM
Do you really think it would be that simple?
The buffer gets allocated in various locations and all calls look like this one:


buffer = (unsigned char *)calloc(4, rw * (rh+1));

When I take out the '+1' it will crash on occasion, and even with this I got reports of ATi cards crashing.

On NVidia I could verify with the debugger that it tried to read past the end of the buffer.

Omaha
10-02-2005, 04:27 PM
You mention that the buffer is allocated in "various locations"; are you sure that when it is allocated prior to a glTexImage2D call, the values of rw and rh do not change?

Brian Paul
10-03-2005, 08:56 AM
Are you sure your pixel unpacking parameters are set correctly? See the docs for glPixelStore.

If you're working with GL_RGB/GLubyte data you typically want to set GL_UNPACK_ALIGNMENT to 1 (it defaults to 4).

dorbie
10-03-2005, 10:12 AM
The problem is not the border: no border is requested or sent to glTexImage2D in that call.

dorbie
10-03-2005, 10:22 AM
Make sure you have a valid current OpenGL context when doing this.