NPOT FBO on NVidia

We’re using viewport-sized textures bound to FBOs for post-processing effects. Viewport-sized of course means they’re non-power-of-two textures. This works fine on ATI cards but crashes at the first call to glRenderbufferStorageEXT on NVIDIA cards. If we use a power-of-two texture instead, the crash goes away, but that does us no good since we need viewport-sized textures.

NPOT textures are part of OpenGL 2.0, right? Is it still necessary to go through all of that texture_rectangle silliness on NVIDIA cards when rendering to non-power-of-two FBOs?

Thanks for any answers/help!

EDIT: Forgot to mention that YES, these textures have no mipmaps, no borders, and GL_CLAMP_TO_EDGE wrapping.

As far as I remember, everything worked just fine…

Read this document on NPOT support
http://developer.nvidia.com/object/nv_ogl2_support.html
Do you have a GeForce 5 class board?
Don’t expect that to be fast with NPOT textures, but it shouldn’t crash.
Send a reproducer to NVIDIA.

Yeah, the GeForce FX (GF5) doesn’t support NPOT textures. ARB_texture_rectangle works fine with FBOs, though. A couple of things to watch out for: if you attach both a color buffer and a depth buffer, the depth buffer needs to be 24-bit.
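One practical difference worth remembering when switching to ARB_texture_rectangle: rectangle textures are addressed with unnormalized texel coordinates (0..width, 0..height) rather than the usual 0..1 range, so existing texture coordinates have to be scaled. A minimal sketch of that conversion (the helper name and struct are illustrative, not part of any GL API):

```c
/* ARB_texture_rectangle samples with unnormalized coordinates
   (0..width, 0..height) instead of the normalized 0..1 range used
   by GL_TEXTURE_2D. Hypothetical helper to convert existing coords. */
typedef struct { float s, t; } TexCoord;

TexCoord rect_coord(float s_norm, float t_norm, int width, int height)
{
    TexCoord tc;
    tc.s = s_norm * (float)width;   /* e.g. 1.0 -> 1024.0 for a 1024-wide texture */
    tc.t = t_norm * (float)height;  /* e.g. 1.0 -> 768.0 for a 768-high texture  */
    return tc;
}
```

So a full-screen quad that used (0,0)..(1,1) with GL_TEXTURE_2D would use (0,0)..(1024,768) with a 1024x768 rectangle texture.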

I ran into a similar problem lately. NPOT textures were supported, but they were horribly slow. So what I did was create a POT texture at the nearest power of two large enough that the entire viewport fits in it, then rescale the texture coordinates appropriately so that the extra portion never makes it to the screen. It does cost some extra memory, but it’s fast, easy to use, and fits within the normal pipeline of POT textures without the need for NPOT textures. Here is an example for a viewport of 1024x768 size:

Texture Dim (nearest POT): 1024x1024
Texture coordinates: s = 1024/1024, t = 768/1024

Hope this helps.
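The padding trick above can be sketched in a few lines of C. This is just an illustration of the arithmetic, assuming the helper names (`next_pot`, `viewport_coords`) are made up for this example:

```c
/* Round a dimension up to the next power of two, then derive the
   maximum texture coordinates that crop the padded POT texture
   back down to the viewport. */
unsigned next_pot(unsigned x)
{
    unsigned p = 1;
    while (p < x)
        p <<= 1;
    return p;
}

void viewport_coords(unsigned vw, unsigned vh, float *s_max, float *t_max)
{
    unsigned tw = next_pot(vw);      /* 1024 for a 1024-wide viewport */
    unsigned th = next_pot(vh);      /* 1024 for a 768-high viewport  */
    *s_max = (float)vw / (float)tw;  /* 1024/1024 = 1.0  */
    *t_max = (float)vh / (float)th;  /* 768/1024  = 0.75 */
}
```

Render to the 1024x1024 texture with glViewport(0, 0, 1024, 768), then draw the full-screen quad with texture coordinates running from (0, 0) to (s_max, t_max) so the unused padding is never sampled.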

Originally posted by Zulfiqar Malik:
[b]I ran into a similar problem lately. Although NPOT textures were supported but they were horribly slow. So what i did was to make a POT texture with the nearest power of two so that the entire viewport fits in it and then rescale the texture coordinates appropriately so that the extra portion never makes it to the screen. It does cost some extra memory but its fast and easy to use and fits within the normal pipeline of POT textures without the need of NPOT textures. Here is an example for a viewport of 1024x768 size

Texture Dim (nearest POT): 1024x1024
Texture coordinates: s = 1024/1024, t = 768/1024

Hope this helps.[/b]
This is basically what Doom 3 does with post-processing shaders like the heat-haze shader.

There’s an explanation of exactly how Doom 3 does it on iddevnet.com, if anyone wants more info on that.

-SirKnight

GeForce FX GPUs support the ARB_texture_rectangle extension for non power of two textures. GeForce 6 and GeForce 7 GPUs added support for the ARB_texture_non_power_of_two extension.
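Since support differs between GPU generations, it’s worth checking the extension string at runtime and choosing the texture target accordingly. A sketch of a substring-safe check against the string returned by glGetString(GL_EXTENSIONS); the extension names are real, but the helper itself is just an illustration:

```c
#include <string.h>

/* Return 1 if 'name' appears as a whole, space-delimited token in
   the extensions string (a plain strstr can match a prefix of a
   longer extension name, so we check the delimiters). */
int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        if ((p == extensions || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}
```

With a helper like this you could prefer GL_ARB_texture_non_power_of_two (GeForce 6/7, Radeon) and fall back to GL_ARB_texture_rectangle where it’s absent (GeForce FX).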

On either card, you shouldn’t be seeing a crash in glRenderbufferStorageEXT. Can you reproduce this crash with the latest 81.85 beta drivers?

http://nzone.com/object/nzone_downloads_winxp_2k_32bit_81.85.html

If so, please email me a repro app and I’ll make sure a bug is filed.

This is on an FX. So the bottom line is that in order for this to work on an FX GPU, we would have to use ARB_texture_rectangle, but on the 6 and 7 series GPUs, we could simply use an NPOT texture and it would work fine (as it already does on our ATI Radeon test systems). Is that correct?

Originally posted by Claytonious:

This is on a FX. Therefore, the bottom line is that in order for this to work on a FX GPU, we would have to use ARB_texture_rectangle, but on the 6 and 7 GPU’s, we could simply use a NPOT texture and it would supposedly work fine (as it is already doing on our ATI Radeon test systems). Is that correct?

Yeah, NPOT textures would work fine on NV4x and G7x series cards. However, as far as I can remember, ATI does not support ARB_texture_rectangle. At least that was the case 3–4 months back; I don’t know the situation right now. Do confirm it before committing to ARB_texture_rectangle.