frame drop when binding & rendering large texture

Hello!

I am experiencing a frame drop when binding and rendering textures for the first frame. I can remedy this by rendering the texture once at start-up, but it's not ideal to have to pre-render all the textures just to avoid the sudden frame drop. It seems the texture becomes cached once it has been rendered and doesn't cause any further slowdowns after that.

Can this issue be fixed without pre-rendering the texture somehow? The texture is 2048x2048 and I'm using an ATI Radeon 4850 graphics card, which I know should be able to handle it.

No shaders involved, just fixed function and two textures.

I’ve made a small test program. If anyone is willing to try it out I would be grateful.

www.oddgames.com/temp/texturetest.zip

It's a big quad moving back and forth across the screen. When space is pressed, the quad changes texture. The first time this happens there is a brief frame drop. You might need to run the test several times to notice it.

I have noticed this bug on:
ATI Radeon 4850
Intel mobile graphics card (don't know the exact model)

The drop didn’t occur on:
GeForce GTX 285
GeForce 8400 M G

Simple question: Do you bind the tex object and upload data to video memory at init time or right before doing some rendering?

Thanks for the reply! Both textures are loaded right after window creation. The program's main loop only binds the requested texture and renders it.

This used to be a common problem in all games: the driver stores the texture in RAM but never sends it to the video card until it is used.

It is possible that AMD and NVIDIA have different texture policies. It isn't something that can be controlled via GL calls.

[QUOTE]This used to be a common problem in all games: the driver stores the texture in RAM but never sends it to the video card until it is used.[/QUOTE]
Right. O-san, I think the solution you propose is exactly what you need. AFAIK, there isn’t a glUploadTextureToTheGPUNow( tex_handle ). You need to invoke an action that requires the texture to be GPU-resident, and the upload is done implicitly…
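To illustrate the "invoke an action that requires the texture" idea, here is a minimal warm-up sketch in fixed-function GL. It assumes you already have a valid context and texture handles; the function name and the tiny-quad trick are mine, not from O-san's test program, and the exact behavior still depends on the driver:

```c
#include <GL/gl.h>

/* Hypothetical warm-up pass: call once per texture at load time,
   before the main loop. Drawing anything with the texture forces
   the driver to upload it to video memory now, so the cost is not
   paid on the first "real" frame. */
void warm_up_texture(GLuint tex)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* A near-degenerate quad; visually negligible, but it makes the
       texture part of an actual draw call. */
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.000f, 0.000f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(0.001f, 0.000f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(0.001f, 0.001f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.000f, 0.001f);
    glEnd();

    /* glFinish blocks until the GL has fully processed the commands,
       so any deferred upload happens here rather than mid-game. */
    glFinish();
}
```

You can clear the framebuffer afterwards (or draw during a loading screen) so the warm-up quads are never visible.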

…of course, it may be “undone” too, if you have insufficient room on your GPU for your full working set of textures. Texture residency is virtualized. There is glPrioritizeTextures, but I don’t know which vendors honor that, if any.
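For completeness, a sketch of the glPrioritizeTextures call mentioned above, paired with glAreTexturesResident to check whether the textures actually ended up in video memory. The handles `tex_a` and `tex_b` are placeholders, and as noted, drivers are free to ignore the priority hint entirely:

```c
#include <GL/gl.h>

/* tex_a and tex_b are hypothetical texture object names created
   earlier with glGenTextures / glTexImage2D. */
GLuint textures[2] = { tex_a, tex_b };
GLclampf priorities[2] = { 1.0f, 1.0f };  /* 1.0 = prefer to keep resident */
glPrioritizeTextures(2, textures, priorities);

/* Query residency; resident[i] is GL_TRUE if textures[i] currently
   lives in video memory. This is a hint/report, not a guarantee. */
GLboolean resident[2];
glAreTexturesResident(2, textures, resident);
```

Note that both calls belong to the old fixed-function texture-residency API and were later deprecated, so this only applies to legacy contexts like the one in this thread.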

Ah, thank you for the insight. I'll try pre-rendering my most needed textures, then, and see if that helps. I also thought of splitting my large texture into smaller ones; shrinking the texture did seem to help somewhat. But now that I've read your posts I'm having second thoughts about that approach.