glTexImage2D failing on Windows/Qt ???!!!

This sounds elementary but it is a pernicious showstopper for a critical project. Please give any help you can.

I’m now using Qt 4.7 from the Qt SDK 1.1, building with either MSVC or MinGW, but I have the same problem with 4.6 Open Source using MinGW. The OpenGL environment is created as a QGLWidget, and I am sourcing textures from QImages. My development platform is Windows Vista 32-bit SP2 on an HP/AMD64 box with nVidia GeForce 6150SE nForce 430 graphics, driver 270.61 (latest).

In brief, glTexImage2D() is either crashing or failing to load a usable texture, no matter what I do in the way of setup, plain or fancy. The simplest possible program fails just like my big app, but I have no clue why, or how to fix it.

void MinTexIMg::paintGL()
{
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glEnable( GL_TEXTURE_2D );
    glPixelStorei( GL_UNPACK_ALIGNMENT, 4 );    // QImage to GPU
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA,
                  pim->width(), pim->height(), 0,
                  GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                  pim->bits() );
    glBegin( GL_QUADS );
    glTexCoord2f( 0, 1 ); glVertex2f( -1, -1 );
    glTexCoord2f( 1, 1 ); glVertex2f(  1, -1 );
    glTexCoord2f( 1, 0 ); glVertex2f(  1,  1 );
    glTexCoord2f( 0, 0 ); glVertex2f( -1,  1 );
    glEnd();
}

pim points to a QImage with 32-bit BGRA pixels. The line

GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,

will only compile if I include glee.h (which I use in the app). If I do that, the call executes without an OpenGL error, but the screen is white. The alternative
GL_BGRA_EXT, GL_UNSIGNED_INT,
is accepted with just the default Windows headers, but then the call segfaults in NtWaitForMultipleObjects on a secondary thread.
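
For reference: the stock Windows gl.h stops at OpenGL 1.1, so those tokens have to come from glee.h, from <GL/glext.h>, or from hand declarations. A sketch of the latter, with the standard token values:

#ifndef GL_BGRA
#define GL_BGRA                     0x80E1
#endif
#ifndef GL_UNSIGNED_INT_8_8_8_8_REV
#define GL_UNSIGNED_INT_8_8_8_8_REV 0x8367
#endif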

The identical code works perfectly in an older version of the app built with Qt 4.4 OpenSource.

Any ideas???

3 more facts:

  1. This little program fails (white screen) also on Linux/Qt 4.5, on the same box. So it ain’t Bill’s fault, or Qt 4.6/4.7’s.
  2. glDrawPixels(pim->width(), pim->height(), GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pim->bits());
    works as expected on both OSs. So it ain’t the data specifications.
  3. I’ve tried all the obvious enables, disables, and parameter settings, and plenty of non-obvious ones. But ask me about them anyhow, because it’s still possible I’m overlooking something basic.
    HELP! HELP!! HELP!!! HELP!!!

I don’t see a call to glGenTextures() followed by a glBindTexture().
Have you run your program through an OpenGL debugger (e.g. gDEbugger, bugle, glIntercept)? Do you check for OpenGL errors with glGetError()?
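
For example, a tiny checker you can drop after each suspect call (checkGL is a hypothetical helper, just a sketch):

#include <cstdio>   // fprintf

static void checkGL( const char *where )
{
    // report every queued OpenGL error with a label
    for( GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError() )
        fprintf( stderr, "GL error 0x%04X at %s\n", err, where );
}

// usage:
//   glTexImage2D( ... );
//   checkGL( "glTexImage2D" );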

Not strictly necessary, as OpenGL will just use the default texture (object 0) without them, but the OP should probably confirm whether that is the desired behaviour.

Things to check:

  1. The values of pim->width() and pim->height().
  2. Whether or not the image actually is 32-bit (I suspect that’s OK, as glDrawPixels works).

As carsten says, have you made these calls after glEnable(GL_TEXTURE_2D), like this:


glEnable(GL_TEXTURE_2D);
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID );

Moreover, since you are not specifying the texture minification filter, the default filter uses mipmaps; and since you are not generating mipmaps either, the texture is incomplete and sampling it gives you nothing. To get around this, set a minification filter (linear or nearest) before the glTexImage2D call:


glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
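
Putting the two snippets together, a minimal complete setup would look something like this (texID is a placeholder; pim is your QImage pointer):

GLuint texID = 0;
glEnable( GL_TEXTURE_2D );
glGenTextures( 1, &texID );
glBindTexture( GL_TEXTURE_2D, texID );
// no mipmaps are uploaded, so the min filter must not be a mipmap mode
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA,
              pim->width(), pim->height(), 0,
              GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pim->bits() );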

THANK YOU mobeen!!
Setting the filter parameter fixes it. I thought I was (still) doing that in the main app, but that bit of code must have gotten lost. However, I did not know it was that important.

The app does use texture objects, as you all suggest. I just left that out for the toy example program.

gratefully, Tom

Not quite there yet :-<
The app does indeed set those filter params, but it still displays neither 2D nor cube textures. What else could be stopping it? Here’s the texture-object setup code:

/** set up 2D and Cube texture objects
**/
glGenTextures(nTIDS, TIDS );
#define tex2D TIDS[0]
#define texCube TIDS[1]
// 2D texture mapping parameters…
glBindTexture( GL_TEXTURE_2D, tex2D );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR ); ///GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR ); ///GL_NEAREST);
// border color for 2D textures
float bord[4] = { 0, 0, 0, 1 };
/// float bord[4] = { 1, 1, 0, 1 }; //yellow for debug
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, bord);
// horizontally wrapped textures require clamp to edge…
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);

// cube texture mapping parameters…
glBindTexture( GL_TEXTURE_CUBE_MAP, texCube );

glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR ); ///GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR ); ///GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

// copy texels
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
// no texcoord generation
glDisable( GL_TEXTURE_GEN_S );
glDisable( GL_TEXTURE_GEN_T );
glDisable( GL_TEXTURE_GEN_R );
// but if you do…
glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );

All textures are loaded with glTexImage2D after enabling the 2D or cube texture target and binding it to tex2D or texCube.
The app renders a curved mesh to an FBO. The mesh comes out black if I choose the cube texture, chocolate brown if I choose the 2D texture. But displaying the 2D texture with code just like the example gives a white screen.

I’m assuming those texture parameters persist in the objects. Or do I need to set them per frame?

helphelphelp!!! – Tom

I suggest you remove the cubemaps for the time being and make it work without them. Later, when you have sorted this out, you can add the cubemaps back in. The last bound texture will remain bound to the texture target unless you issue:


glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);

Retraction: I can display the loaded 2D texture directly now (I was retrieving its ID wrong). But I’m still getting a black mesh.

The app has separate modes for 2D and cubic source images; both are projected the same way using a vertex shader. Here is the relevant part of my rendering code.

// use the pure gPP vertex shader
glUseProgram( vsID );
float m[16];
SSMat.put( m );
glUniformMatrix4fv( MVMloc, 1, false, m );
if( upmode == um_pano ) panVMat.put( m );
else srcVMat.put( m );
glUniformMatrix4fv( PRMloc, 1, false, m );
glUniform4f( parm0loc, cylHcos, cylHsin, coneSlope, upscale );
glEnableVertexAttribArray( vertloc );
// use fixed-function frag shading...
glMatrixMode( GL_TEXTURE );
glLoadIdentity();
if( texmode == tm_2D ){
    glDisable( GL_TEXTURE_CUBE_MAP );
    glEnable( GL_TEXTURE_2D );
    glBindTexture( GL_TEXTURE_2D, tex2D );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR ); ///GL_NEAREST);
#if 1
    glBindBuffer( GL_ARRAY_BUFFER, tcsVBO );
    glTexCoordPointer( 3, GL_FLOAT, 0, 0 );
#else
    glEnableClientState( GL_TEXTURE_COORD_ARRAY );
    glBindBuffer( GL_ARRAY_BUFFER, 0 );
    glTexCoordPointer( 3, GL_FLOAT, 0, psurf->texCoords() );
#endif
} else {
    glEnable( GL_TEXTURE_CUBE_MAP );
    glBindTexture( GL_TEXTURE_CUBE_MAP, texCube );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR ); ///GL_NEAREST);
#if 1
    glBindBuffer( GL_ARRAY_BUFFER, vtsVBO );
    glTexCoordPointer( 3, GL_FLOAT, 0, 0 );
#else
    glBindBuffer( GL_ARRAY_BUFFER, 0 );
    glEnableClientState( GL_TEXTURE_COORD_ARRAY );
    glTexCoordPointer( 3, GL_FLOAT, 0, psurf->vertices() );
#endif
}

if( wireLineWidth > 0 ) {
    glLineWidth( wireLineWidth );
    glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, wirEBO );
    glDrawElements( GL_LINES, nlids, GL_UNSIGNED_INT, 0 );
} else {
    glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, triEBO );
    glDrawElements( GL_TRIANGLE_STRIP, nsids, GL_UNSIGNED_INT, 0 );
}

The #if blocks are there because I wasn’t sure whether the problem is how I feed in the texture coordinates; but neither alternative works. Of course my TCs could be wrong, but I can say for certain that the cube TCs are 3D unit vectors properly placed on the sphere, and the 2D TCs are all in the range 0 to 1. Moreover, the program used to display 2D textures OK using these same TCs (fed directly from the client side, not from a buffer).
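
One detail I should flag in the #if 1 paths: glEnableClientState( GL_TEXTURE_COORD_ARRAY ) only appears in the #else branches, so the VBO path may be drawing with the texcoord array disabled unless it gets enabled elsewhere. The VBO variant would presumably need it too, something like (a sketch using my tcsVBO):

glEnableClientState( GL_TEXTURE_COORD_ARRAY );  // required for the VBO path as well
glBindBuffer( GL_ARRAY_BUFFER, tcsVBO );
glTexCoordPointer( 3, GL_FLOAT, 0, 0 );         // offset 0 into the bound buffer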

So I think my problem is something I did while upgrading the code to be ‘version 3 ready’. But everything I have added or changed is supported by OpenGL 2.1/GLSL 1.30, which is the level of my dev platform. Mainly I have substituted buffer objects for client-side arrays and custom shader variables for fixed-function OpenGL objects. The code above uses only my vertex shader, but I also have code that uses vertex+fragment shaders, without any fixed function; and that gives exactly the same results.
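
For what it’s worth, the hand-off I keep eyeballing in the mixed path is the texture coordinates: with a custom vertex shader feeding fixed-function fragment shading, the shader itself has to forward them. A minimal sketch (not my actual shader; compatibility-profile built-ins assumed):

// pass-through vertex shader for fixed-function fragment texturing
const char *vsSrc =
    "void main() {\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";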

RESOLVED (with open questions)
I went back to the full-shader version of the app code and, behold, it now displays both 2D and cube textures [hare Krishna!!]. The mixed code I showed you still does not, but I am on the air and don’t care. Thanks much for giving me the confidence to get here.
–Tom