Terrible ATI opengl performance



dukey
06-08-2009, 05:16 PM
I wrote my own media centre software .. for my living room :eek:
It has an OpenGL front-end renderer for video. It takes decoded frames from DirectShow and simply passes a pointer to OpenGL for rendering, like so:

glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, videoWidth, videoHeight, 0, pixelType, GL_UNSIGNED_BYTE, imageData);

Note the GL_TEXTURE_RECTANGLE_ARB

My PC has a low end ATI card, an X300 or X550. It is a new card and supports shaders etc, just fanless and low end (perfect for an HTPC). Anyway, under XP this works perfectly and performance is great. On Vista 32 and Vista 64, however, performance is terrible, to the point where I can't watch anything at all. I can also see that CPU usage is very high, 2x that of XP when trying to render video, although never quite 100%. The video renders fullscreen, so under Vista there isn't the extra copy that would be required in windowed mode.

Has anyone else encountered this problem with ATI drivers under Vista? :eek: The PC I am using is hardly low end: 2 gig of RAM, 3.4 GHz P4.

I was running identical software during my tests, on the same hardware; the only thing different was the OS (latest drivers etc).

overlay
06-08-2009, 06:46 PM
Do you use GDI on top of OpenGL?

If so, see the section "What All This Means for the OpenGL Developer",
sub-section "GDI compatibility notes":
http://www.opengl.org/pipeline/article/vol003_7/

Also see the sub-section "Performance notes and other recommended practices":

"Calling synchronization routines like glFlush, glFinish, SwapBuffers, or glReadPixels (or any command buffer submission in general) now incurs a kernel transition, so use them wisely and sparingly."

skynet
06-09-2009, 12:42 AM
Better to use glTexSubImage2D for updating the texture. Also make sure you use GL_CLAMP_TO_EDGE as the texture coordinate wrapping mode. Additionally, make sure your source data type and format are not something weird (for instance, GL_RGB with GL_UNSIGNED_BYTE). You might also try GL_RGBA as the internal format instead.
My 2c :-)

Stephen A
06-09-2009, 01:40 AM
You are doing it wrong (tm) :)

Don't call glTexImage2D repeatedly. Call it once on startup and use glTexSubImage2D to upload data.

Don't use GL_RGB as the internal format, as it is not supported natively. Use GL_RGBA.

For optimal performance, make sure that pixelType is something nice, such as GL_BGRA with GL_UNSIGNED_BYTE. Almost anything else will have to be swizzled by the driver.

The last item is the killer in OpenGL video, as decompressed frames tend to be in YUV or RGB or something ugly like that. BGRA is the way forward :)

For extra performance, you can use PBOs and upload data from a secondary thread. If your target hardware doesn't support that, you might be able to create more than one OpenGL context and upload data to multiple textures in parallel. You then render each texture in round-robin fashion, thus hiding upload latency.
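The swizzling point above can be illustrated in plain C. If the decoder hands back tightly packed 24-bit RGB frames, repacking them once on the CPU into the 32-bit BGRA layout the driver prefers avoids a per-pixel conversion inside the upload call. A minimal sketch (the helper name and the tightly-packed-RGB assumption are mine, not from the original program):

```c
#include <stdint.h>

/* Repack a tightly packed 24-bit RGB frame into 32-bit BGRA, the byte
 * order that Windows drivers can typically upload without swizzling.
 * Hypothetical helper for illustration only. */
void rgb_to_bgra(const uint8_t *rgb, uint8_t *bgra, int pixels)
{
    for (int i = 0; i < pixels; i++) {
        bgra[4 * i + 0] = rgb[3 * i + 2]; /* B */
        bgra[4 * i + 1] = rgb[3 * i + 1]; /* G */
        bgra[4 * i + 2] = rgb[3 * i + 0]; /* R */
        bgra[4 * i + 3] = 0xFF;           /* A: fully opaque */
    }
}
```

The resulting buffer could then be handed to glTexSubImage2D with format GL_BGRA and type GL_UNSIGNED_BYTE. In practice you would want the decoder itself to emit BGRA (or use a PBO) rather than pay for this copy every frame.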

dukey
06-09-2009, 05:08 AM
Do you use GDI on top of OpenGL?

Nope :eek:
No glFlush etc either.

glTexSubImage2D, I'll definitely try this one. Looks like I should have been using this already.

As for swizzling: I seem to recall with NVIDIA that if you specify the internal format as RGB, it'll actually store it as BGR to match the Windows pixel format. Is that assumption correct?

I am not sure either whether RGBA would really be that much faster, since the data is always packed to mod 4 bytes anyway.

All good suggestions though :)
thx

Dark Photon
06-09-2009, 06:05 AM
glTexSubImage2D, I'll definitely try this one. Looks like I should have been using this already.

Yeah, just remember:

glTexImage* - Allocate and fill
glTexSubImage* - Fill only

You only need to allocate once.
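Put together, the pattern looks roughly like the fragment below. This is only a sketch and needs a current GL context; the GL_RGBA8 internal format, the BGRA upload format, and the GL_CLAMP_TO_EDGE wrap mode follow the earlier suggestions in this thread rather than the original program:

```c
/* Startup: allocate texture storage once. The data pointer may be
 * NULL, since glTexSubImage2D will fill it later. */
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA8,
             videoWidth, videoHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

/* Per frame: fill only, no reallocation. */
glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0,
                videoWidth, videoHeight,
                GL_BGRA, GL_UNSIGNED_BYTE, imageData);
```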

dukey
06-10-2009, 07:47 AM
I changed to glTexSubImage2D
and reinstalled Vista 64, and the issue is solved!

Stephen A
06-10-2009, 09:32 AM
I am not sure either whether RGBA would really be that much faster, since the data is always packed to mod 4 bytes anyway.

All good suggestions though :)
thx
Just FYI, modern hardware does not support RGB internal formats natively (it uses RGBA anyway). It probably doesn't make any difference as long as you are uploading the data in a mod-4-bytes format, but sooner or later you'll probably encounter some picky driver that falls back to software rendering or something. Better safe than sorry :)

Also note that an RGB/RGBA internal format typically means BGR/BGRA in memory. That means you'll get a nice performance boost if you upload data as BGRA, since then the driver won't have to perform the conversion for you.