Hello,
I’m working on an application that needs to load a large amount of texture data. I’m talking about 40–50 images of 4256x2848 RGB in a typical worst-case scenario.
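For context, here is a quick back-of-the-envelope calculation of the raw footprint of that data set (a sketch only; it assumes uncompressed 8-bit RGB and ignores driver padding, alignment and mipmap chains):

```python
# Rough footprint of the raw image data alone,
# ignoring driver padding, alignment and mipmaps.
width, height, bytes_per_pixel = 4256, 2848, 3  # 8-bit RGB

per_image = width * height * bytes_per_pixel      # 36,363,264 bytes
total = 50 * per_image                            # worst case: 50 images

print(f"per image: {per_image / 2**20:.1f} MiB")  # ~34.7 MiB
print(f"50 images: {total / 2**30:.2f} GiB")      # ~1.69 GiB
```

So even before any padding, the worst case is well over three times the 512 MB on the card.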
Well, I’ve always worked under the assumption that GL drivers were smart enough that we didn’t need to track video memory usage in our programs. I expected texture swapping between video memory and disk/system memory when an application exceeds the available VRAM, causing a slowdown. That behaviour was fine with me until now, because it matches the GL spec: the driver tries to keep texture objects resident in VRAM, and if it can’t, it keeps them in main memory and swaps them into VRAM when needed, even if that process is slow.
With my current application (many 2D textures, tiled into 3D textures to avoid the maximum 2D texture size limitation), strange things start to happen once I load around 350 MB of texture data (on an 8500GT with 512 MB). Let me describe the scenario: suppose I load 16 textures and draw them in the canvas as screen-aligned quads arranged in a 4x4 grid. I expected that exceeding the available video memory would only cause a horrible slowdown, but what actually happens is that textures jump from one position of the grid to another, and some positions become invalid. It’s as if the driver goes crazy and the internal structure that maps texture names to texture images gets completely messed up.
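One thing worth noting about the tiling scheme: packing each image into fixed-size slices of a 3D texture inflates the allocation, since the last row and column of tiles are padded. A minimal sketch of that arithmetic, assuming a hypothetical 1024-texel slice size (the actual limit is whatever `GL_MAX_3D_TEXTURE_SIZE` reports on the card):

```python
import math

# Hypothetical tiling of one 4256x2848 RGB image into fixed-size tiles,
# each tile stored as one slice of a 3D texture.
img_w, img_h = 4256, 2848
tile = 1024  # assumed slice size; must not exceed GL_MAX_3D_TEXTURE_SIZE

cols = math.ceil(img_w / tile)          # 5 tiles across
rows = math.ceil(img_h / tile)          # 3 tiles down
slices = cols * rows                    # 15 slices per image

padded = (cols * tile) * (rows * tile) * 3  # bytes actually allocated (RGB)
raw = img_w * img_h * 3
print(f"{slices} slices, overhead: {padded / raw:.2f}x")  # ~1.30x
```

If your real tile size is similar, the ~350 MB you measure from the raw image data could be closer to 450 MB of actual allocations, which would put you much nearer the 512 MB limit than it looks.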
The problem appears when I reach the ‘magic limit’ of ~350 MB of texture memory; below that, everything works fine.
I’m sure I could write a repro application and file a bug report with NVIDIA, but first I’d like to hear your opinions and experience on this matter (using huge amounts of texture memory).
I can work around the problem by loading low-resolution, grayscale versions of the images to stay below the magic limit, but I don’t want to ship the application to production with this issue.
The other problem I’m having is this one.
If I load many textures without reaching the magic limit, performance drops a lot (to about 0.5 fps). Well, that’s not so strange. The strange thing is that if I zoom the scene way out (it’s currently a 2D scene; this part of the application is a kind of photo editor) and then restore the zoom to 1, performance improves enormously, to about 30 fps. I suspect this could be cache related, but I’m not sure, because I can’t work out the real reason for this behaviour.
There is another scenario with the same problem. Photos have many markers placed over them, say around 60–100. The markers are drawn correctly without performance drops (thanks to textured point sprites and vertex shaders), but if I enable the flag that turns on the markers’ labels, performance drops again until a few seconds pass (around 20) or I zoom the scene out and back in. Then the fps becomes usable again.
I checked the drawing loop to see whether some expensive operation runs in the first frames of the slowdown, and nothing strange happens. I also tested texture residency (for the font texture and the photo textures), and the driver always reports the textures as resident, so I’m really lost with these two problems. Any help will be welcome.
Thanks,
Jacobo.