So slow

Hi,
I’m having a spot of bother with texturing. For some reason using

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );

kills the performance of my app. It eats up 100% of the CPU (as far as I’m aware texture filtering is a hardware process, so it shouldn’t be touching the CPU that much?) and I’m lucky to get a handful of frames in a minute. Yet

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );

works fine and hardly touches the CPU. Now, I know GL_LINEAR should be a little slower, but not that much slower, particularly when I can get much better performance doing my own texture filtering in a shader (something I want to avoid in this case, as I’m already using a hell of a lot of instructions). I’m running it on an NVIDIA GF6800 with Forceware 81.98 on M$ Windows XP.

Any ideas why this is so slow? :confused:

Thanks

Originally posted by Megalomaniac:
Hi,
Any ideas why this is so slow? :confused:

You are probably using a texture format for which the hardware does not have filtering support, so it falls back to software emulation.
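
I can’t tell from your snippets which format that is, but the internalformat you pass to glTexImage2D is what decides whether GL_LINEAR can be done in hardware for that texture. Just as an illustration (the names below are placeholders, not your actual code):

// the driver checks internalFormat when deciding whether GL_LINEAR
// can run in hardware or has to fall back to software
glBindTexture( GL_TEXTURE_2D, tex );
glTexImage2D( GL_TEXTURE_2D, 0, internalFormat, width, height, 0, GL_RGBA, GL_FLOAT, pixels );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );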

Originally posted by Komat:
You are probably using a texture format for which the hardware does not have filtering support, so it falls back to software emulation.
Thanks. Where could I find out what the 6800 supports?

Originally posted by Megalomaniac:
[quote]Originally posted by Komat:
You are probably using a texture format for which the hardware does not have filtering support, so it falls back to software emulation.
Thanks. Where could I find out what the 6800 supports?
[/QUOTE]Look for the GPU Programming Guide on the NVIDIA pages. The most likely format you could be using that is not supported with filtering is a 32-bit floating-point texture.
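
For example, assuming you are using the ARB_texture_float formats (w, h and data below are placeholders), the difference would be:

// 32-bit float per channel: the GF6800 cannot filter this, so GL_LINEAR drops to software
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, w, h, 0, GL_RGBA, GL_FLOAT, data );

// 16-bit half float per channel: the GF6800 can do GL_LINEAR on this in hardware
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, w, h, 0, GL_RGBA, GL_FLOAT, data );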

This thread talks about how GL_LINEAR is not supported for 32-bit float textures.

I thought it might be something like that. I can’t really afford to do the filtering in the shader at the moment, but I might be able to get away with 16-bit, hmmmm, time for some maths.
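
(If it helps the maths: assuming the standard s10e5 half-float format used by GL_RGBA16F_ARB, 10 mantissa bits give a relative precision of roughly 2^-11, about 0.05%, and the largest representable value is 65504, so it mostly comes down to the dynamic range my data actually needs.)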

Thanks