offscreen software rendering with alpha

I’ve been rendering using an on-card pbuffer, but noticed that the transfer time is about 8ms. We’re not rendering very much, so I was wondering about doing it in software to avoid the glReadPixels call. I set up a rendering context using PFD_DRAW_TO_BITMAP in my pfd, but the Windows docs claim that even 32-bit (biBitCount = 32) bitmaps don’t support alpha, and indeed the high byte is left blank no matter what I draw. Are there any other solutions for rendering offscreen in software with a full byte of alpha channel?
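For reference, the pixel format I’m requesting looks roughly like this, from memory (the exact bit depths are incidental and PFD_SUPPORT_GDI may or may not be needed; the PFD_DRAW_TO_BITMAP flag is the important part):

    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cAlphaBits = 8;   /* requested, but the alpha byte never gets written */
    pfd.cDepthBits = 16;
    pfd.iLayerType = PFD_MAIN_PLANE;

    int format = ChoosePixelFormat(hdc, &pfd);   /* hdc = the memory DC the bitmap is selected into */
    SetPixelFormat(hdc, format, &pfd);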

I experimented with PFD_DRAW_TO_BITMAP and PFD_SUPPORT_GDI for antialiased line and quad-strip drawing, using software rendering, and it worked fine (slow, but accurate). Since antialiasing is done by alpha blending, there must be an alpha channel present somewhere. I would like to hear other opinions on this.

Interesting… maybe you set up your rendering context differently, so that you weren’t actually rendering to a Windows bitmap? I set mine up basically following the example in Wright’s SuperBible: create a Windows bitmap (fill out the BITMAPINFO, etc.), associate it with the device context via SelectObject(), and then set the pixel format as usual. From then on, rendering goes directly into the bits pointer handed back when the bitmap is initially created. I verified that the buffer was receiving the rendered data, except that every 4th byte (the alpha channel) was never touched. Is there another way to do it? Could you verify that you were indeed using the alpha channel of the bitmap? Perhaps the antialiasing you performed used some temporary frame buffer?
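Concretely, my setup looks something like this (error handling omitted; width and height stand in for whatever surface size you need):

    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = height;
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;      /* BGRA, one byte per channel */
    bmi.bmiHeader.biCompression = BI_RGB;

    void *bits = NULL;
    HDC memDC = CreateCompatibleDC(NULL);
    HBITMAP hBitmap = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    SelectObject(memDC, hBitmap);
    /* ...then SetPixelFormat on memDC as usual, wglCreateContext, wglMakeCurrent... */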

Originally posted by Alon:
…noticed the transfer time is about 8ms…

8ms is about 125fps… that looks quite fast to me. Consider that you will lose tons of features when going SW, so if you need to emulate them it will take much more time.

Example: something multitextured, lit, alpha-blended, with specular highlights over the texture -> SW rendering will take a long time. It’s probably faster to do it in HW and read it back.

Something simpler may go faster if you are able to do it in SW, but I really don’t think so.

BTW, I am still speculating here; I have never really tried SW rendering. However, I don’t think you will be able to avoid the ReadPixels call… Please, can you tell me if avoiding it is really possible, and how? I am just curious.

I’m primarily working on a video application, which means I have about 30ms (one SDI video frame time) to do everything I need to do (which includes lots of processing!) in a mostly linear process. Taking nearly one-third of that time just to transfer the graphics buffer seems like a huge waste. If I can render in SW in 5 or 6ms instead of 2ms on the card, I still come out ahead once you account for the 8ms transfer time.
It is important to note that I’m not using any special GL features like specular highlights or multitexturing. I simply want to render an RGBA texture at certain coordinates; I’m guessing this should be fast even in SW (especially since we’re using very powerful machines in terms of processor and memory). Perhaps there are other ways (GDI or D3D?), but preferably OpenGL, since we have other applications that would share this code (and they’re already based on OpenGL).
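To give an idea of how trivial the per-frame GL work is, it’s essentially just this (x, y, w, h and tex are placeholders for our layout coordinates and texture handle):

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
    glEnd();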
glReadPixels is avoidable right now by using the PFD_DRAW_TO_BITMAP setting in the pfd and creating a bitmap as I described earlier: the rendered bits are readily available through the pointer with which you created the bitmap. Unfortunately, I still have the problem that the alpha channel is not supported in the bitmap (help?!)
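To be clear about how the readback disappears, the idea is just this (bits is the pointer CreateDIBSection filled in earlier):

    glFinish();   /* let the software renderer finish the frame */
    unsigned char *px = (unsigned char *)bits;
    /* px now holds the frame in BGRA order, except that px[3], px[7], ...
       (the alpha bytes) are never written, which is exactly my problem */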