Cubemap Issue (Please let it be something obvious)

For quite some time now I have had dynamic cubemaps in the game I have been working on, and they've been working great. Recently I made some changes to my model loading code, my map generation code, etc., and around that time I noticed that after loading a number of maps (i.e. switching between maps) I was getting a crash in NVOGLNT.dll (memory could not be read).

After hunting and pulling out my hair (because naturally the call stack wouldn't tell me where the problem was), I believe I have found the code that is causing the problem (see below).

If I comment out these two blocks of code, leaving the normal rendering code in (i.e. I disable the use of my cubemaps), my app works fine. Put the code back in and it starts crashing again.

Can someone please have a look and see if there is something really simple that I'm missing in this code? (For example, should I unbind the cubemap after I'm finished with it - could that be the problem?)

It might be that there's nothing wrong with the code below and the problem is some change in my system (I've reinstalled and upgraded my drivers with no luck), or maybe there's some really obscure bug that I cannot find - and believe me, I've looked. But it does seem weird that this code is the difference between it working and not working.

FYI - I run a GeForce 3 Ti200 with Detonator 43.45 (I was on 40.72 when this bug first started showing, but I had been running those drivers for months).

if (materials[0].reflectivity != 0.0f && utilGlobal.bMultitexture &&
    utilGlobal.bCubeMap && unCubeMapID && !utilGlobal.bDisableCubeMaps &&
    utilConsole.variables[GL_DYNAMICCUBEMAPS].bValue)
{
    // Setup the reflection cube map.
    glClientActiveTextureARB (GL_TEXTURE1_ARB);
    glBindTexture (GL_TEXTURE_CUBE_MAP_ARB, unCubeMapID);

    // If a handle to a pbuffer is supplied then use that…
    if (pbuf && utilConsole.variables[GL_RENDERTOTEXTURE].bValue)
    {
        wglBindTexImageARB ( pbuf, WGL_FRONT_LEFT_ARB );
        utilGlobal.HandleGLError( "Binding the pbuffer to a texture object." );
    }

    glTexGenf (GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
    glTexGenf (GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
    glTexGenf (GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);

    // Adjust the generated coords by the camera orientation.
    glMatrixMode(GL_TEXTURE);
    glPushMatrix();
    glLoadIdentity();
    glRotatef(utilGlobal.activecamera->rotation.z, 0.0f, 0.0f, 1.0f);
    glRotatef(utilGlobal.activecamera->rotation.y, 0.0f, 1.0f, 0.0f);
    glRotatef(utilGlobal.activecamera->rotation.x, 1.0f, 0.0f, 0.0f);
    glMatrixMode(GL_MODELVIEW);

    // Combiner setup. RGB = cubemap * env colour + previous * (1 - env colour),
    // so the env colour set below controls how reflective the surface looks.
    glTexEnvf (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);

    glTexEnvf (GL_TEXTURE_ENV, GL_COMBINE_ALPHA_EXT, GL_REPLACE);
    glTexEnvf (GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_INTERPOLATE_EXT);

    glTexEnvf (GL_TEXTURE_ENV, GL_SOURCE2_RGB_EXT, GL_CONSTANT_EXT);
    glTexEnvf (GL_TEXTURE_ENV, GL_SOURCE1_RGB_EXT, GL_PREVIOUS_EXT);
    glTexEnvf (GL_TEXTURE_ENV, GL_SOURCE0_RGB_EXT, GL_TEXTURE);
    glTexEnvf (GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_EXT, GL_PREVIOUS_EXT);

    glTexEnvf (GL_TEXTURE_ENV, GL_OPERAND0_RGB_EXT, GL_SRC_COLOR);
    glTexEnvf (GL_TEXTURE_ENV, GL_OPERAND0_ALPHA_EXT, GL_SRC_ALPHA);
    glTexEnvf (GL_TEXTURE_ENV, GL_OPERAND1_RGB_EXT, GL_SRC_COLOR);
    glTexEnvf (GL_TEXTURE_ENV, GL_OPERAND1_ALPHA_EXT, GL_SRC_ALPHA);
    glTexEnvf (GL_TEXTURE_ENV, GL_OPERAND2_RGB_EXT, GL_SRC_COLOR);
    glTexEnvf (GL_TEXTURE_ENV, GL_OPERAND2_ALPHA_EXT, GL_SRC_ALPHA);

    if (utilGlobal.bFullChrome)
    {
        // Debug mode: 100% reflective.
        fCol[0] = fCol[1] = fCol[2] = fCol[3] = 1.0f;
        glTexEnvfv (GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, fCol);
    }
    else
    {
        fCol[0] = fCol[1] = fCol[2] = fCol[3] = materials[0].reflectivity;
        glTexEnvfv (GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, fCol);
    }

    // Enable the texture coord generation.
    glEnable(GL_TEXTURE_CUBE_MAP_ARB);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_R);
    glEnable(GL_TEXTURE_GEN_T);
}

/*****
*** Rendering code in here…
*****/

// Now we reset all the materials etc.
if (nmaterials > 0 && materials[0].reflectivity != 0.0f &&
    utilGlobal.bMultitexture && utilGlobal.bCubeMap && unCubeMapID && !utilGlobal.bDisableCubeMaps &&
    utilConsole.variables[GL_DYNAMICCUBEMAPS].bValue)
{
    glMatrixMode(GL_TEXTURE);   // Restore the texture matrix.
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    // If a handle to a pbuffer is supplied then use that…
    if (pbuf && utilConsole.variables[GL_RENDERTOTEXTURE].bValue)
        wglReleaseTexImageARB ( pbuf, WGL_FRONT_LEFT_ARB );

    glClientActiveTextureARB(GL_TEXTURE1_ARB);
    fCol[0] = fCol[1] = fCol[2] = fCol[3] = 1.0f;
    glTexEnvfv (GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, fCol);

    glDisable(GL_TEXTURE_CUBE_MAP_ARB);
    glDisable(GL_TEXTURE_GEN_S);
    glDisable(GL_TEXTURE_GEN_R);
    glDisable(GL_TEXTURE_GEN_T);
    glClientActiveTextureARB(GL_TEXTURE0_ARB);
    glTexEnvf (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
}

[This message has been edited by rgpc (edited 08-15-2003).]

I find it suspicious that the if-conditions before and after the rendering code don’t match.

I concur with Relic. This piece of code is screaming for a function that looks something like:

inline bool UseCubeMaps() const
{
    /// …
}
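
Something along these lines, with the conditions lifted straight from your two blocks (I'm assuming it lives on whatever class owns materials, unCubeMapID and friends):

inline bool UseCubeMaps() const
{
    // One home for the checks that are currently duplicated before and
    // after the render call. nmaterials > 0 guards the materials[0] access.
    return nmaterials > 0 &&
           materials[0].reflectivity != 0.0f &&
           utilGlobal.bMultitexture &&
           utilGlobal.bCubeMap &&
           unCubeMapID != 0 &&
           !utilGlobal.bDisableCubeMaps &&
           utilConsole.variables[GL_DYNAMICCUBEMAPS].bValue;
}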

The difference between the two is handled by an if that surrounds the top block of code. In fact all meshes have materials (nmaterials > 0), but not all have cube maps - so in the cube map case I have to do something other than just bind the texture and set the material properties.

The plot thickens…

The single line of code that is causing the problem is…

glClientActiveTextureARB (GL_TEXTURE1_ARB);

Which I call just before I bind the cube map. If I comment out this line of code, my app works perfectly (although I don't have any textured objects that use cube maps - which is probably why it doesn't look weird).

This has got me completely stuffed, particularly since I hadn't had a problem until recently. I might have to put out a beta version to see if the problem occurs on other people's hardware (I'm getting a 5900 next week, so it should be a good litmus test).

I don't know what the rest of your code looks like, but the problem might be unrelated to that line of code. I experienced similar problems when using M$ VC6's STL implementation; I believe it's broken… Also, have you tried anything as simple as a basic Rebuild All?

[Edit] Are you 100% sure that the extensions are initialized correctly? Sounds like some DLL address error…

[This message has been edited by dbugger (edited 08-15-2003).]

Check if glClientActiveTextureARB is initialized (on Win32, extension function pointers are NULL until you fetch them).
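
e.g. something like (PFNGLCLIENTACTIVETEXTUREARBPROC is the typedef from glext.h):

glClientActiveTextureARB = (PFNGLCLIENTACTIVETEXTUREARBPROC)
    wglGetProcAddress ("glClientActiveTextureARB");

if (glClientActiveTextureARB == NULL)
{
    // Entry point missing - calling through a NULL pointer
    // is an access violation.
}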

Check if the cubemap is 100% perfectly created. I once had a problem with cubemaps which I discovered when I tried to bind the cubemap to a different TU than the one I used when I created it.

Jan.

Originally posted by Jan2000:
Check if the cubemap is 100% perfectly created. I once had a problem with cubemaps which I discovered when I tried to bind the cubemap to a different TU than the one I used when I created it.

Jan.

There might be something in what you’ve said here. I’ll have to go through my cube map creation code and see if there’s anything odd in it. The extension itself is fine - and I even checked to see if I was doing something that might clobber the pointer.

I did find another line of code that I can turn off to stop the problem. If I reduce the number of textures I load by one, the problem doesn't seem to occur. Yet there is nothing different about the extra texture (I even changed it to use one of the existing textures to see if that helped - to no avail).

It looks to me like a driver bug, but maybe there’s just something I’m not seeing. I might have to send my code to nVidia to see if they can find anything - but I’ll get some testing on other hardware first…

Maybe it’s your cubemap generation code. Try to fill each texel with (1.0,1.0,1.0).

Hmm, why do you set GL_TEXTURE_ENV_COLOR in both the setup step and the reset step? I think I'm missing something.

That kind of behaviour does not sound like a driver bug to me. Get BoundsChecker or something similar and debug your code before bothering the poor nVidia driver writers.

if (utilGlobal.bFullChrome)
{
    fCol[0] = fCol[1] = fCol[2] = fCol[3] = 1.0f;
    glTexEnvfv (GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, fCol);
}
else
{
    fCol[0] = fCol[1] = fCol[2] = fCol[3] = materials[0].reflectivity;
    glTexEnvfv (GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, fCol);
}

I do this to control the amount of reflectivity of the surface. I also have a flag that I can turn on and off to make things 100% reflective, which I use to check the cube maps are working OK. That’s the bFullChrome setting (it’ll be removed later on).

Something that's twigged as worth looking into is that I deviate in two ways from the code in one of the nVidia SDK samples (Simple Cube Map?). I use gluBuild2DMipmaps(), whereas the nVidia code I looked at builds each level separately. And I use GL_RGBA instead of GL_RGB8.

I’ll switch these and see if they make any difference.
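
For reference, the two variants per face look roughly like this (a sketch - pixels, width and height stand in for my actual texture data):

// What I do now: GLU derives and uploads the whole mipmap chain.
gluBuild2DMipmaps (GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB, GL_RGBA,
                   width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Closer to the nVidia sample: sized GL_RGB8 internal format, with an
// explicit glTexImage2D call per mip level (only level 0 shown here).
glTexImage2D (GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB, 0, GL_RGB8,
              width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);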

As for BoundsChecker: if I were overrunning memory, I'd expect to get the access violation whether or not I turned off the line of code in question. And I've been right through all the code checking that everything looks OK.

There were two things I found. I could have been losing some of my textures because I was memcpy'ing my materials without incrementing the usage counter for the texture. The other was discovering that if you do a "new VERTEX[0]" you actually get a pointer back - which I find very odd and can't see a use for, but it was something I didn't know. I expected NULL back from such a call.
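
For what it's worth, that last one turns out to be standard C++ rather than a bug:

VERTEX *v = new VERTEX[0];   // legal: returns a unique, non-NULL pointer
// v must never be dereferenced, but it does have to be freed:
delete [] v;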

[This message has been edited by rgpc (edited 08-15-2003).]

OK. I've looked at my code all weekend and haven't found the cause of the crash I'm experiencing. The closest I've come is finding that if I loaded one less texture (and it was a specific texture), the crash didn't occur.

I tried substituting that texture for another, and I still got the crash. I've looked at all my mallocs, callocs, reallocs, memcpys, memmoves, news and deletes, plus all of my operator overloads - and I still haven't found anything that might be causing it.

I now need to know if maybe my drivers are screwy. I've reinstalled the drivers (and upgraded twice - i.e. removed the existing drivers, then installed the new version) and I'm still getting the problem. I had a friend test it on his GeForce256 and he didn't get the crash.

My app can be downloaded from here

To replicate the problem, run the app, start a game, and skip through the levels by pressing INSERT. You may have to go through all the levels more than once, or you may find it crashes on the first level.

If I could get some feedback on whether other people get the access violation, that'd be very helpful. It'd be especially useful to hear from anyone with a Radeon - at least then I could rule out drivers altogether. I'm at my wits' end trying to find this.

Thanks…

On my GF 4200 with the latest drivers it does crash.

Maybe you have to enable GL_TEXTURE_CUBE_MAP_ARB first, before you call glBindTexture (instead of first binding the texture and then enabling it). It could be that the driver doesn't know you bound that texture object if the TU was not enabled when you bound the texture. Not sure, just try it out.
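
i.e. swap the order to something like:

glEnable (GL_TEXTURE_CUBE_MAP_ARB);                    // enable first…
glBindTexture (GL_TEXTURE_CUBE_MAP_ARB, unCubeMapID);  // …then bind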

Jan.

Thanks Jan (and thanks for your e-mail too - it gave me a few more things to check out). Unfortunately I've already tried your suggestion: I enable cube mapping and switch to the TU that I use for the cube map, and you've seen the results. It's got me well and truly stumped (at least now I know it isn't a dodgy driver or Windows install).

If I can get some Radeon results then I might have to go and bother nVidia…

I've seen this work on my 3DLabs card and fail on nVidia's. Anyway, I always activate the texture unit, then bind the texture to it. I suppose you could set TexEnv modes before binding the texture to the unit, as GL knows which unit the settings apply to.

Have you tried using glActiveTextureARB(…) instead? I think that client-state thingy is used with vertex arrays, etc.

I think I tried it before (when I first implemented cubemaps), but I'll give it a go and see what happens…

I debugged a problem like this a few days ago in our code. It was crashing in the NVOGLNT driver, and I tracked it down using breakpoints to find that it was occurring in a glDrawElements() call. It was crashing on Nvidia cards, but not on ATI cards.

The problem was that someone had previously enabled the texcoord array on the 2nd tex unit but hadn't disabled it. Later on, we started rendering models with that array still enabled, and by then the texcoord data was invalid. This caused the driver to access an invalid memory location.

We fixed this by disabling the texcoord array before rendering the models:

glClientActiveTextureARB(GL_TEXTURE1_ARB);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);

Sounds similar to the problem you're having. Maybe you should check that you haven't accidentally left a texcoord array enabled on some tex unit.

Hope this helps,
Stephen

[This message has been edited by Stephen_H (edited 08-17-2003).]

Originally posted by JD:
Have you tried using glActiveTextureARB(…) instead? I think that client-state thingy is used with vertex arrays, etc.

Right you are, JD. Fan-bloody-tastic. Waste a week trying to find something, pull all your hair out, and it ends up being something stupid like that. Ah well, life would be boring if it wasn't for dumb extensions that give us two functions for the one process… (not to mention untraceable access violations).
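
For anyone who finds this thread later, the split (as I now understand it - corrections welcome):

// glActiveTextureARB selects the unit for server-side state:
// glBindTexture, glTexEnv*, glTexGen*, glEnable(GL_TEXTURE_...) etc.
glActiveTextureARB (GL_TEXTURE1_ARB);
glBindTexture (GL_TEXTURE_CUBE_MAP_ARB, unCubeMapID);

// glClientActiveTextureARB only selects the unit for client-side
// (vertex array) state: glTexCoordPointer and
// glEnableClientState(GL_TEXTURE_COORD_ARRAY).
glClientActiveTextureARB (GL_TEXTURE1_ARB);
glEnableClientState (GL_TEXTURE_COORD_ARRAY);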

The weird thing is that it was working fine right up until I added one extra texture - then it hit the proverbial fan…

Thanks to all for their help - I’m suddenly more enthusiastic about getting my 5900.

[EDIT] Wrong write… I mean right…

[This message has been edited by rgpc (edited 08-18-2003).]