Crash while creating texture



shultays
01-07-2018, 10:00 AM
glTexImage2D fails with GL_INVALID_VALUE on some computers. I am not sure what the problem is.

The parameters I am sending are:

GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0, data_fmt, GL_UNSIGNED_BYTE, surf->pixels

data_fmt is GL_RGBA
width and height are 872 and 639

Here are the documented reasons it can generate GL_INVALID_VALUE:

GL_INVALID_VALUE is generated if width is less than 0 or greater than GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if target is not GL_TEXTURE_1D_ARRAY or GL_PROXY_TEXTURE_1D_ARRAY and height is less than 0 or greater than GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if target is GL_TEXTURE_1D_ARRAY or GL_PROXY_TEXTURE_1D_ARRAY and height is less than 0 or greater than GL_MAX_ARRAY_TEXTURE_LAYERS.
GL_INVALID_VALUE is generated if level is less than 0.
GL_INVALID_VALUE may be generated if level is greater than log2(max), where max is the returned value of GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if internalFormat is not one of the accepted resolution and format symbolic constants.
GL_INVALID_VALUE is generated if width or height is less than 0 or greater than GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if border is not 0.
GL_INVALID_VALUE is generated if target is GL_TEXTURE_RECTANGLE or GL_PROXY_TEXTURE_RECTANGLE and level is not 0.

It is not a big texture, so the size should be under GL_MAX_TEXTURE_SIZE? I don't know that value on the affected machines, though. The internalFormat condition is also a possibility, but I doubt GL_RGBA is not supported.
Any ideas? Or can someone help me debug this further?
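For reference, I suppose I could query the limit and put it in the crash logs; a minimal sketch (assuming a current GL context and my out log stream):

// Sketch: log GL_MAX_TEXTURE_SIZE so crash reports show the actual limit.
GLint maxTextureSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
out << "GL_MAX_TEXTURE_SIZE: " << maxTextureSize << "\n";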

Here is the full code:

out << "loading " << fileName << "\n";
dimensions.setZero();
name = fileName;
this->repeat = repeat;

SDL_Surface* surf = IMG_Load(fileName.c_str());
if (surf == NULL)
{
out << "surface_error: " << fileName << " " << SDL_GetError() << "\n";
hasError = true;
return;
}

GLenum data_fmt;
if (surf->format->BytesPerPixel == 4)
{
data_fmt = GL_RGBA;
}
else if (surf->format->BytesPerPixel == 3)
{
data_fmt = GL_RGB;
}
else if (surf->format->BytesPerPixel == 1)
{
data_fmt = GL_RED;
}
else
{
assert(false);
}
GL_CALL(glGenTextures(1, &gTexture));
GL_CALL(glBindTexture(GL_TEXTURE_2D, gTexture));
GL_CALL(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0, data_fmt, GL_UNSIGNED_BYTE, surf->pixels));
if (hadGLError)
{
out << data_fmt << " " << surf->w << " " << surf->h << "\n";
}
GL_CALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
GL_CALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
dimensions = IntVec2(surf->w, surf->h);

glGenerateMipmap(GL_TEXTURE_2D);
GL_CALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, repeat ? GL_REPEAT : GL_CLAMP_TO_EDGE));
GL_CALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, repeat ? GL_REPEAT : GL_CLAMP_TO_EDGE));
GL_CALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
GL_CALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR));

SDL_FreeSurface(surf);


glGetError returns 0x501 (GL_INVALID_VALUE) after glTexImage2D, and it then crashes on the glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, ...) call.

It gives me
gl error glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0, data_fmt, GL_UNSIGNED_BYTE, surf->pixels) (source\cTexture.cpp:49) 0x501
and after that it prints
6408 872 639 (which are GL_RGBA and the width/height of the texture)

surf->format->BytesPerPixel is 4, so it picks GL_RGBA.

I can't really debug things further because this comes from a crash report.
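For reference, GL_CALL is a macro that runs the call and checks glGetError; roughly like this (simplified, not the exact version from my code):

// Rough sketch of the GL_CALL macro: stringify the call, log any GL error
// with file/line, and set hadGLError for the check above.
#define GL_CALL(call) \
    do \
    { \
        call; \
        GLenum err = glGetError(); \
        if (err != GL_NO_ERROR) \
        { \
            out << "gl error " << #call << " (" << __FILE__ << ":" << __LINE__ << ") 0x" << std::hex << err << std::dec << "\n"; \
            hadGLError = true; \
        } \
    } while (0)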

OceanJeff40
01-07-2018, 11:08 AM
If you are troubleshooting on a friend's computer, try loading a bunch of different textures (all sizes, formats, etc.) and see what works and what doesn't.

Jeff

mhagain
01-07-2018, 11:23 AM
On modern core OpenGL contexts you must use a sized internalFormat; in other words, use GL_RGBA8 rather than GL_RGBA.
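Roughly like this (a sketch based on the data_fmt selection in your snippet; GL_RGBA8, GL_RGB8 and GL_R8 are the sized equivalents):

// Sketch: pick a sized internalFormat alongside the client data format.
GLenum data_fmt;
GLint internal_fmt;
if (surf->format->BytesPerPixel == 4) { data_fmt = GL_RGBA; internal_fmt = GL_RGBA8; }
else if (surf->format->BytesPerPixel == 3) { data_fmt = GL_RGB; internal_fmt = GL_RGB8; }
else { data_fmt = GL_RED; internal_fmt = GL_R8; }

GL_CALL(glTexImage2D(GL_TEXTURE_2D, 0, internal_fmt, surf->w, surf->h, 0, data_fmt, GL_UNSIGNED_BYTE, surf->pixels));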

GClements
01-07-2018, 03:55 PM
glTexImage2D fails with GL_INVALID_VALUE on some computers. I am not sure what the problem is.

The parameters I am sending are:

GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0, data_fmt, GL_UNSIGNED_BYTE, surf->pixels

data_fmt is GL_RGBA
width and height are 872 and 639

If you're using an ancient version of OpenGL (e.g. Windows' software fallback is only OpenGL 1.1), that will require power-of-two texture sizes (and may have issues with the size; OpenGL versions prior to 2.0 only required support for 64x64 textures, and 256x256 was a common limit on older hardware).

As mhagain says, OpenGL 3.2+ core profile contexts won't accept GL_RGBA, but you shouldn't be getting a core profile context by accident (if that's happening, your initialisation is quite badly broken).
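You can confirm which context a given machine actually ends up with by logging the version and renderer strings once the context exists; a quick sketch (using the out stream from your snippet):

// Sketch: on Windows, a "GDI Generic" renderer string identifies the 1.1 software fallback.
const GLubyte* version = glGetString(GL_VERSION);
const GLubyte* renderer = glGetString(GL_RENDERER);
out << "GL_VERSION: " << (version ? (const char*)version : "null") << "\n";
out << "GL_RENDERER: " << (renderer ? (const char*)renderer : "null") << "\n";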

shultays
01-08-2018, 03:53 PM
I think the problem was that I was doing something wrong in my OpenGL initialization. I fixed a couple of things and I am no longer getting such reports.

Another small question. I am now setting the OpenGL version to 3.2 using SDL_GL_SetAttribute, with SDL_GL_CONTEXT_PROFILE_MASK set to SDL_GL_CONTEXT_PROFILE_CORE.

Should I do this for development only and disable it in releases, letting the user's machine pick whatever version suits it? There are some reports where I think the OpenGL context is not being created (I think that is what is happening; I really should add more error logging at init :( I have added some now and will know for sure tomorrow). Could the cause be that I request a specific version?

And should I use GL_RGBA8 if I am targeting 3.2?

GClements
01-08-2018, 04:28 PM
Another small question. I am now setting the OpenGL version to 3.2 using SDL_GL_SetAttribute, with SDL_GL_CONTEXT_PROFILE_MASK set to SDL_GL_CONTEXT_PROFILE_CORE.

Should I do this for development only and disable it in releases, letting the user's machine pick whatever version suits it? There are some reports where I think the OpenGL context is not being created (I think that is what is happening; I really should add more error logging at init :( I have added some now and will know for sure tomorrow). Could the cause be that I request a specific version?

If your application requires OpenGL 3.2, then request it. If your desired OpenGL version is 3.2 or later and the application doesn't require the compatibility profile, then request a core profile (note that MacOS doesn't support the compatibility profile, so if you want features which aren't in OpenGL 2.1 and you want your code to run on a Mac, you can't use "legacy" OpenGL).
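With SDL, that request plus the init logging you mention might look something like this (a sketch; the window title and size are placeholders):

// Sketch: request a 3.2 core context and log failures via SDL_GetError.
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

SDL_Window* window = SDL_CreateWindow("game", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1280, 720, SDL_WINDOW_OPENGL);
if (window == NULL)
{
    out << "window_error: " << SDL_GetError() << "\n";
    return;
}
SDL_GLContext context = SDL_GL_CreateContext(window);
if (context == NULL)
{
    // A failure here is exactly the "context not created" case from your reports.
    out << "context_error: " << SDL_GetError() << "\n";
    return;
}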


And should I use GL_RGBA8 if I am targeting 3.2?
If you want to support the core profile, you should use GL_RGBA8, as unsized internal formats aren't supported.