
View Full Version : Rendering to memory bitmaps



towsim
03-07-2014, 08:39 AM
Dear OGL specialists, after a desperate week of trial and error I am still stuck. I am working on a project to generate topographical maps from terrain profile data.

Task:
The task is to create a BMP file from an OGL frame. The picture shall not be a screen copy; it shall be rendered separately at a user-defined pixel resolution (maximum 16384 x 9498, RGB 32 bit).

Solution:
To render the frame separately, I use a memory bitmap as the drawing surface. The bitmap is created with 'CreateDIBSection', with the screen device context (HDC) as an input parameter. The only difference from the screen pixel format is the PFD_DRAW_TO_BITMAP flag set in the PIXELFORMATDESCRIPTOR. Since the entire frame is made of colored polygons, I need polygon antialiasing. To avoid the annoying stitches between polygons, the blend function is set to 'glBlendFunc(GL_SRC_ALPHA, GL_ONE)'.
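For reference, a pixel format of this kind typically looks like the following fragment (a sketch only, not the poster's actual code; everything except the PFD_DRAW_TO_BITMAP flag is an assumed, commonly used value):

```c
/* Sketch of a pixel format for rendering into a DIB section.
   PFD_DRAW_TO_BITMAP replaces PFD_DRAW_TO_WINDOW, and there is no
   PFD_DOUBLEBUFFER; note that bitmap formats are normally served by
   the generic software renderer, not the graphics card. */
PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR),
    1,                                        /* version */
    PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL,  /* flags */
    PFD_TYPE_RGBA,
    32,                                       /* color buffer bits */
    0, 0, 0, 0, 0, 0, 0, 0,                   /* RGBA bits/shifts */
    0, 0, 0, 0, 0,                            /* accumulation buffer */
    24,                                       /* depth buffer bits */
    0,                                        /* stencil bits */
    0,                                        /* aux buffers */
    PFD_MAIN_PLANE,
    0, 0, 0, 0
};
/* int fmt = ChoosePixelFormat(hdcMem, &pfd);
   SetPixelFormat(hdcMem, fmt, &pfd);
   HGLRC rc = wglCreateContext(hdcMem); */
```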

Problem:
With the described configuration, rendering to the screen works perfectly. When I render to the memory bitmap, I cannot find any configuration that avoids the stitches and still gives good polygon antialiasing.

The following combinations were tested:

glBlendFunc: GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA
GL_BLEND: enabled
Result:
Stitches between polygons, antialiasing perfect.

glBlendFunc: GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA
GL_BLEND: disabled
Result:
No stitches, perfect colors, ugly antialiasing.

glBlendFunc: GL_SRC_ALPHA,GL_ONE
GL_BLEND: enabled
Result:
Nothing is drawn but the background color.

glBlendFunc: GL_SRC_ALPHA,GL_ONE
GL_BLEND: disabled
Result:
No stitches, perfect colors, ugly antialiasing.

Question:

Is there any blending combination for memory bitmaps which works without the stitches and with GL_BLEND enabled?
Is there any possibility to share display lists between the screen render context and the memory render context? (I could not get that to work.)

What is the sense behind the polygon stitches, which are completely useless and have been observed over decades in every OGL version? The stitches are not hardware dependent: they are even observed when rendering directly to memory bitmaps or to printer DCs, where no graphics card is involved.


Regards
Mike

arekkusu
03-07-2014, 04:15 PM
Blending by itself doesn't antialias anything. How are you antialiasing, with glEnable(GL_POLYGON_SMOOTH)?

I'll suggest:
* verify which renderer is being used for the on-screen and bitmap contexts, with glGetString(GL_RENDERER). Perhaps your bitmap renderer is software?
* Instead of using a bitmap surface, glReadPixels from the HW context into your BMP allocation. (Use an FBO if your OS doesn't guarantee that on-screen results covered by other windows are always rendered. Stitch together multiple renders if your final image is too big for MAX_VIEWPORT_DIMS.)

towsim
03-08-2014, 02:12 AM
Hi arekkusu,
thanks for the reply. I had the same idea yesterday: assemble the large bitmap from different views of the main screen and copy the content into the bitmap. This would have the advantage that all display lists remain available, which makes the difference between 15 minutes and 1 minute of render time for the biggest BMP file. The solution is not that elegant, but it seems to be the shortest way to the destination. I will try it out today and post the result.
Thanks and regards,
Mike

towsim
03-09-2014, 04:24 AM
Got it to work! The bitmap is assembled from tiles taken from the screen. The only disadvantage is that the screen content has to be scaled and positioned so that it covers exactly one tile; this causes a wild flicker of pictures until the last tile is copied to the bitmap. The big advantage is that the entire procedure takes less than 10 seconds instead of 15 minutes until the 608 MB BMP file is placed on the disk.

arekkusu
03-10-2014, 12:22 PM
Leave the screen alone, and use an FBO.

towsim
03-11-2014, 02:07 AM
Thank you, I will give it a try. I never used FBOs before.