How fast should this be (glDrawPixels)?

Okay, I know all about the speed issues with glDrawPixels(), but I'm hoping I can use it with acceptable results. I have an application in which I need to read, modify, and draw some pixel data. I know that OpenGL isn't the fastest at this, but I'm using OpenGL for all of my 3D stuff as well.

So here is the worst-case scenario: given a 450 MHz Pentium II running the MS software driver, how long would you expect it to take to draw a 675x500 image using glDrawPixels()?

On the particular system I'm testing, it takes 6 seconds to draw this image, which seems far too long. I'm pretty sure I've disabled everything that slows down glDrawPixels(), but I'm starting to wonder.
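For reference, here's roughly what I mean by "turned off" (just a sketch of my setup, not my exact code; "pixels" stands in for my image data):

/* turn off per-fragment operations that can slow down glDrawPixels */
glDisable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glDisable(GL_ALPHA_TEST);
glDisable(GL_DITHER);
glDisable(GL_FOG);
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);

/* no pixel transfer ops or pixel zoom */
glPixelTransferf(GL_RED_SCALE, 1.0f);
glPixelTransferf(GL_RED_BIAS, 0.0f);
glPixelZoom(1.0f, 1.0f);

/* tightly packed source rows */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

glRasterPos2i(0, 0);
glDrawPixels(675, 500, GL_RGB, GL_UNSIGNED_BYTE, pixels);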

Thanks in advance for your help.
-Keith

I'm not sure about the software implementation, but the PRIMARY thing to do when using glDrawPixels() on consumer cards is to make sure that your source format matches the frame-buffer format. The implementation will do the conversion for you, but that is usually really slow.

E.g., if you requested a 32-bit RGBA color buffer, you should also submit 32-bit RGBA data to OpenGL.

For example, on a recent Radeon ICD, writing 640x480 16-bit pixels to an RGBA_8888 frame buffer takes 120 ms on my machine. Doing the conversion myself and writing RGBA_8888 reduces this to 15 ms.
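Roughly what I mean by doing the conversion myself (just a sketch; your source layout may differ, and this ignores replication of the low bits):

/* expand 16-bit 5_6_5 source pixels to 8 bits per channel RGBA */
void expand_565_to_8888(const GLushort *src565, GLubyte *dst8888, int count)
{
    int i;
    for (i = 0; i < count; ++i) {
        GLushort s = src565[i];
        dst8888[i * 4 + 0] = (GLubyte)(((s >> 11) & 0x1F) << 3); /* R */
        dst8888[i * 4 + 1] = (GLubyte)(((s >>  5) & 0x3F) << 2); /* G */
        dst8888[i * 4 + 2] = (GLubyte)(( s        & 0x1F) << 3); /* B */
        dst8888[i * 4 + 3] = 255;                                /* A */
    }
}
/* then submit the expanded buffer with a 32-bit format that matches the color buffer */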

So in this case, you set the data type to GL_UNSIGNED_BYTE to maximize the performance – correct?

I thought in general the GL_UNSIGNED_BYTE was the fastest data type for reading & writing pixel data.

“So in this case, you set the data type to GL_UNSIGNED_BYTE to maximize the performance – correct?”

Almost

something like glDrawPixels(…, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8);
(Note that this might require _EXT suffixes on pre-1.2 headers; EXT_packed_pixels was integrated into core OpenGL 1.2.)
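To be concrete about the suffix thing (a sketch; the fallback define assumes your header at least has the EXT token):

/* older gl.h headers may only define the _EXT name from EXT_packed_pixels */
#ifndef GL_UNSIGNED_INT_8_8_8_8
#define GL_UNSIGNED_INT_8_8_8_8 GL_UNSIGNED_INT_8_8_8_8_EXT
#endif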

“I thought in general the GL_UNSIGNED_BYTE was the fastest data type for reading & writing pixel data.”

Not necessarily. If you end up with a 16-bit color buffer, you are generally better off writing with
glDrawPixels(…, GL_RGB, GL_UNSIGNED_SHORT_5_6_5);

You can check your color buffer bit depth with DescribePixelFormat().
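Something like this (a sketch; assumes you already have the window's HDC, and "pixels16"/"pixels32" are placeholders for your image data in the two layouts):

PIXELFORMATDESCRIPTOR pfd;
int pf = GetPixelFormat(hdc);
DescribePixelFormat(hdc, pf, sizeof(pfd), &pfd);

if (pfd.cColorBits == 16) {
    /* 5_6_5 color buffer: submit matching 16-bit data */
    glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels16);
} else {
    /* 24/32-bit color buffer: submit 32-bit RGBA data */
    glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, pixels32);
}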

Thanks for the follow-up. I already knew about the format for 16-bit color buffers, but I thought I read that someone switched from GL_UNSIGNED_INT_8_8_8_8 to GL_UNSIGNED_BYTE and got better performance on their system.

The GL_UNSIGNED_INT_8_8_8_8 format is new to me, what is the advantage of this format?

“The GL_UNSIGNED_INT_8_8_8_8 format is new to me, what is the advantage of this format?”

I don’t know, it just sounded flashy

In all seriousness, I don't know whether that one or GL_UNSIGNED_BYTE is faster. I haven't benchmarked the two against each other yet. AFAIK they should both do exactly the same thing for me, because I use destination alpha.

If you have a simple RGB buffer, GL_UNSIGNED_BYTE will probably be faster because the driver doesn’t have to throw away the alpha channel and pack.
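i.e. something like (just a sketch):

glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, rgb_pixels); /* 3 bytes per pixel, no alpha for the driver to strip */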

If you want to read more on _8_8_8_8 and similar stuff, I suggest you have a look at the EXT_packed_pixels spec or the GL 1.3 spec, document page (not Acrobat Reader page) 98 and onwards.

Thanks zeckensack for the help.

Now, back to my original question – can anyone confirm my time for drawing a pixel image as listed in the first message?

Check out my recent thread testing a very similar application. The top speed we were able to get was 8-9 fps, or roughly 120 milliseconds per frame. http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/005569.html
I’ve been told that much better is possible, but I haven’t seen it.
Joe

JoeMac – thanks for the notification. I’ve been following your thread as well – but there have been some new posts since I last read it.