I have a little video texture application which basically consists of:
- creating the texture once using glTexImage2D:
  glTexImage2D( GL_TEXTURE_2D, /*level*/ 0, /*internal_format*/ GL_RGBA, 2048, 2048, /*border*/ 0, /*format*/ GL_RGBA, GL_UNSIGNED_BYTE, NULL );
- filling the texture at every frame using glTexSubImage2D:
  glTexSubImage2D( GL_TEXTURE_2D, /*level*/ 0, /*xoffset*/ 0, /*yoffset*/ 0, 2048, 1556, /*format*/ GL_RGBA, GL_UNSIGNED_BYTE, MyVideoTexture );
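In case the exact sequence matters, here is a stripped-down sketch of the whole path (tex is just a placeholder name for the texture object, MyVideoTexture is the RGBA frame data in system memory, and the GL_LINEAR filters are part of the simplified sketch so that only level 0 is ever needed):

    /* one-time setup: allocate a 2048x2048 RGBA texture with no initial data */
    GLuint tex;
    glGenTextures( 1, &tex );
    glBindTexture( GL_TEXTURE_2D, tex );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );  /* no mipmaps, only level 0 */
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, 2048, 2048, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );

    /* every frame: overwrite the 2048x1556 video area inside the 2048x2048 texture */
    glBindTexture( GL_TEXTURE_2D, tex );
    glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, 2048, 1556, GL_RGBA, GL_UNSIGNED_BYTE, MyVideoTexture );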
This is on a Quadro FX 3400 (PCI Express) card, which is supposed to be very fast.
But I only get 8 fps on this card, while I get 40 fps on a comparable ATI card (also PCI Express).
Worse, on an older Nvidia card (AGP) I get 20 fps (granted, it's in another machine, but an older one!).
I just changed the code to use GL_TEXTURE_RECTANGLE instead of GL_TEXTURE_2D, and now things are really fast, but I hate that extension because it forces me to change all my texture coordinates. I found that the ARB_texture_non_power_of_two extension is advertised on this machine, but I am not sure how to activate it. Is it automatic?
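For what it's worth, the only way I know to test for it is to scan the extension string (this uses strstr from <string.h>); my assumption is that when it is advertised there is nothing to enable explicitly:

    /* check whether ARB_texture_non_power_of_two is advertised */
    const char *ext = (const char *) glGetString( GL_EXTENSIONS );
    if ( ext != NULL && strstr( ext, "GL_ARB_texture_non_power_of_two" ) != NULL )
    {
        /* if it is listed, my understanding is that plain GL_TEXTURE_2D should
           accept non-power-of-two sizes directly, no special enable call */
    }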
I also tried different internal formats (4, GL_BGRA, etc.), but no luck.
I have also tried several driver versions (61.77, 61.82, 67.20, 70.41), again with no luck.
I am sending a bug report to Nvidia, but I was wondering if anyone has a hint on how to get this working on Nvidia hardware at a decent rate?