gluBuild2DMipmaps causing UNDERFLOW Exceptions

I have recently discovered that when I use gluBuild2DMipmaps with the GL_COMPRESSED_RGB_ARB parameter, underflow exceptions are raised when images are larger than 256x256. Underflow exceptions can be detected by using _control87() to enable the EM_UNDERFLOW trap.

All the code is correct, and the images are ordinary power-of-two 24-bit bitmaps. When the underflow exception is not enabled there is no visible problem: the images appear normal in any 3D view, are compressed, and display properly using their correct mipmaps.

The HARDWARE ACCELERATOR is an NVIDIA GEFORCE4 MX420, using the latest NVIDIA drivers (I capitalize this because I reckon the problem is card- or driver-related).

Any ideas? Is there any source code available for the function gluBuild2DMipmaps?

VS

Mesa has the source code for all the glu functions.

Just a wild guess, but it could be the video drivers if you are indeed getting an FPU stack underflow.
I don’t see how GLU could be at fault here.

Just a thought:

Do you also get underflows when using automatic mipmap generation through SGIS_generate_mipmap?

regards,

GLU, by and large, does not call the card or driver at all. It’s basically a collection of software routines providing functionality that’s “commonly expected” even though it’s not in hardware.

However, it turns out that GLU is pretty slow for most things, so if you find yourself relying on it, you might want to look into writing your own replacement. Specifically, the performance of its MIP map generation code is pretty sad by today’s standards (in my opinion, of course). You’re better off writing your own, or turning on GENERATE_MIPMAP.
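To sketch the write-your-own route (the function names here are mine, not from any real library): a 2x2 box filter halves each dimension per level, and a square power-of-two texture of side N has 1 + log2(N) levels:

```c
/* Number of mip levels for a square power-of-two texture (e.g. 256 -> 9). */
int mip_level_count(int size)
{
    int levels = 1;
    while (size > 1) {
        size >>= 1;
        ++levels;
    }
    return levels;
}

/* Downsample a w x h RGB8 image to (w/2) x (h/2) by averaging each 2x2
   block of source pixels.  w and h must be even; dst must hold
   (w/2) * (h/2) * 3 bytes. */
void downsample_rgb8(const unsigned char *src, int w, int h, unsigned char *dst)
{
    for (int y = 0; y < h / 2; ++y) {
        for (int x = 0; x < w / 2; ++x) {
            for (int c = 0; c < 3; ++c) {
                int sum = src[((2 * y)     * w + 2 * x)     * 3 + c]
                        + src[((2 * y)     * w + 2 * x + 1) * 3 + c]
                        + src[((2 * y + 1) * w + 2 * x)     * 3 + c]
                        + src[((2 * y + 1) * w + 2 * x + 1) * 3 + c];
                dst[(y * (w / 2) + x) * 3 + c] = (unsigned char)(sum / 4);
            }
        }
    }
}
```

Calling this repeatedly and uploading each result with glTexImage2D at the matching level gives you the equivalent of gluBuild2DMipmaps, all in integer arithmetic, with no floating-point underflow to worry about.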

Yes, the glu routines are slow for scaling; that’s why I created glhlib. It has equivalents of gluScaleImage, gluBuild2DMipmaps and many others.
I have even put in SSE-capable versions of gluProject and gluUnProject.

Maybe one day I will put in the equivalent of DX effects.

So GLU is never hardware-accelerated? Are you sure? :eek:
But doesn’t Microsoft’s OpenGL driver forward calls to the card driver’s functions, including the glu ones? :confused:

I have now tried the auto mipmapping. I didn’t notice any performance improvement, though. Maybe NVIDIA does it the glu way.

I have noticed that the LOD/bias and other settings give a different result, though.

With the glu command the texture diminishes in size smoothly over any distance, gradually fading out. There are no swirls or moiré patterns.

With the automatic command the texture map seems to peter out too early, fading away in half the distance.

What causes that?

I have tried setting some of the LOD/bias settings, with very limited success. Some set correctly; others refuse (checked with glGetFloatv, set with glTexEnvf).
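For reference, here is roughly what I mean. EXT_texture_lod_bias clamps whatever you set to the implementation’s GL_MAX_TEXTURE_LOD_BIAS_EXT, which could make an out-of-range value appear to “refuse” to set (the clamp helper below is my own illustration, not a GL function):

```c
/* In GL code the bias is set and queried like this (EXT_texture_lod_bias):
 *
 *   glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, bias);
 *   glGetFloatv(GL_MAX_TEXTURE_LOD_BIAS_EXT, &max_bias);
 *
 * The driver clamps the bias to [-max_bias, +max_bias].  In pure C the
 * behavior looks like this: */
float clamp_lod_bias(float requested, float max_bias)
{
    if (requested >  max_bias) return  max_bias;
    if (requested < -max_bias) return -max_bias;
    return requested;
}
```

So it is worth reading GL_MAX_TEXTURE_LOD_BIAS_EXT first and comparing it against the values that refuse to stick.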

Any ideas?

I didn’t notice any performance improvement though. Maybe NVIDIA does it the glu way.
That is probably not the case; I think there is dedicated hardware, otherwise render-to-texture would need a round trip to the CPU, and the performance would not be there.

With the automatic command the texture map seems to peter out too early, fading away in half the distance.
Surprising. Maybe try playing with GL_LINEAR and the other mipmap and anisotropic filter settings?