My OpenGL Odyssey: Layered Windows & Alpha Blending

I had written a small Windows application and decided to use OpenGL to draw the graphics and produce non-rectangular, alpha-blended windows on my desktop. Create a window without the frame, draw the graphic, and I'm done - or so I thought.

  1. My first attempt was to use Embarcadero (Borland/CodeGear) C++ Builder 9 and a panel object called OpenGLPanel. OpenGLPanel lets you draw on a panel using OpenGL, and C++ Builder's TForm gives you alpha blending and transparency for the window and its panels.

Unfortunately, after playing with it I discovered that BCB's form implements transparency and alpha blending using Microsoft's SetLayeredWindowAttributes API. This dramatically slows the application. Drawing a simple 100x100 OpenGL scene once per second consumed 20% of my CPU time! But it does work - sort of. Besides the CPU consumption, some gotchas are that the alpha blending is per window, not per pixel, and since you have to specify a transparency colour, it tends to leave artifacts of that colour around the result after blending an antialiased image. With that CPU performance, I had to find a new way to produce my image and get it onto the screen.

  2. It became obvious to me that if I used the UpdateLayeredWindow API instead, I could get the performance I needed plus per-pixel alpha blending (and transparency). All UpdateLayeredWindow requires is the HDC of the resulting OpenGL image whenever I change it. So the problem became: how can I get a resulting image out of OpenGL?

After about two days of browsing, I found mention of about four ways to get the image out of OpenGL:

a) I could get the image from a 'hidden' OpenGL window
b) I could get the image from a bitmap generated by OpenGL
c) I could get the image from a framebuffer object
d) I could get it from a pbuffer

I've never really played with the internals of Windows bitmaps or OpenGL image buffers, so I had to decide without knowing much of the terminology or the costs/benefits.

I knew how to create a 'hidden' window - or, as someone on the web put it, you just 'don't show the window - OpenGL still works'. This seemed like the simplest and most appropriate route. Besides, drawing in OpenGL was hardware accelerated - that would be better than drawing to a bitmap.

2a) So I changed my code to create a new window, and added OpenGL code to draw to it. To make sure it worked, I decided to Show() the 'hidden' window first. Once it worked, I figured I could then take its HDC and give it to UpdateLayeredWindow. I ran it and voila - of course my generated image was there, visible on my not-so-'hidden' window. When I passed the HDC to UpdateLayeredWindow - success, a beautiful per-pixel alpha-blended transparent window!

Then the problems started.

I decided not to Show() that hidden window, since it was shown only for my debugging purposes. UpdateLayeredWindow didn't work any more! But I thought they said you could draw to a hidden window. Oh well, maybe not! Maybe if I didn't swap buffers, or maybe if I drew to a bitmap - after hours of black rectangles I decided I didn't understand what people meant by a 'hidden' window. All I wanted was what OpenGL had created, on my screen, but without showing it in my 'hidden' window first! How do I do that? I guess I didn't understand.

Looking at my remaining options... I read that (2d) pbuffers are sort of being deprecated in favour of (2c) framebuffer objects - whatever those were. But (2c) framebuffer objects are an extension to the spec.

With all the acronyms, this took a while to figure out.

So, since my app has an audience of average XP users, I can't assume they have the latest graphics cards. I can't use 2d or 2c without some sort of fallback. Maybe I'll try option 2c or 2d later, when I have more time to figure out what people mean. But unfortunately, if I go with (2b) I'll lose hardware acceleration. Oh well - them's the breaks.

2b) Drawing to a bitmap didn't seem that difficult. I'll just switch the pixel format flag from PFD_DRAW_TO_WINDOW to PFD_DRAW_TO_BITMAP, then plug the HDC into UpdateLayeredWindow like I did before.

It didn't work.

That HDC doesn't contain the bitmap! Then where is it? (I'm still not sure!)

More research. It turns out the PFD_DRAW_TO_BITMAP route wants the target bitmap to be a DIB section selected into a memory DC. Oh no! Now I have to learn DIB sections.

So this is where I am now: trying to learn DIB sections to get at a bitmap that was fine but so-so in phase 1, and perfect but too (or two) visible in phase 2a. Isn't there an easier way? It was perfect right there in my not-so-'hidden' window. I'm starting to think that OpenGL is designed to ensure you learn every possible bitmap format, when I seemed very close to a solution only a few days ago.

Is proceeding down path (2b) my only way to get the HDC and image in a format that I can plug into UpdateLayeredWindow? If you know, you may save me (and others) a lot of time!

Any help would be appreciated, and I’ll keep you posted!

In the meantime - where should I go to read about the DIB sections that OpenGL renders into when you specify bitmaps - to make sure they can be passed to UpdateLayeredWindow? Now I think I have to learn about compatible device contexts (CreateCompatibleDC) too.

And why can’t I get at the bitmap (in 2a) before SwapBuffers shows it? It looked perfect.

Thanks

2a) It has nothing to do with SwapBuffers or not. It does not work because "hidden parts of a window" fail the pixel ownership test - see "14.070 Why don't I get valid pixel data for an overlapped area when I call glReadPixels() where part of the window is overlapped by another window?" here:
http://www.opengl.org/resources/faq/technical/rasterization.htm
2b) can work, but PFD_DRAW_TO_BITMAP means software rendering, GL 1.1 only.
2c) FBOs are indeed to be favoured over 2d) pbuffers, but you can fall back to pbuffers with old drivers, for example. FBOs are simpler and cleaner to use. Both are hardware accelerated, and both are "extensions".