MS OpenGL implementation

Hi All,

As you probably know, the most straightforward way to create a bitmap of an OpenGL scene is via a memory DC. However, this approach uses the Microsoft software implementation of OpenGL, and the results are often questionable.

We are fighting with the following issue:

Do you see the saw-tooth artifacts on the face edges?

Is there any way to remove it? Any workaround?

We see it only in the BMP generated via the MemoryDC; on screen the picture is perfect!!!

We also implemented the FBO approach, but it does not work correctly on a lot of hardware under Windows Vista.

How can we get a good image of our OpenGL scene? :confused:

Thanks,

Alberto

Use ReadPixels
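For reference, a minimal sketch of that read-back in C (the function name and the RGB/bottom-up assumptions are only illustrative; it assumes a current OpenGL context):

  /* Read the current color buffer into a tightly packed RGB buffer.
     The caller must free() the result. */
  #include <stdlib.h>
  #include <GL/gl.h>

  GLubyte *ReadBackRGB(int width, int height)
  {
      GLubyte *pixels = (GLubyte *)malloc((size_t)width * height * 3);
      if (!pixels)
          return NULL;

      glPixelStorei(GL_PACK_ALIGNMENT, 1);    /* rows tightly packed */
      glReadBuffer(GL_BACK);                  /* or GL_FRONT, depending on the setup */
      glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
      return pixels;                          /* rows come back bottom-up, as BMP expects */
  }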

Hi Zengar,

Thanks for your help but:

We need to create a BMP that is 2, 4 or even 8 times bigger than the OpenGL viewport on screen.

Is that possible using ReadPixels?

Thanks again,

Alberto

It seems that when creating the bitmap with a MemoryDC, the z-buffer's bit depth is lower.

With FBOs you can render to textures that are much bigger than the viewport. The maximum size is the graphics card's texture-size limit, usually 4K x 4K.

So, yes, you can use ReadPixels with a hardware-accelerated DC, even for such big images.
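For example, a rough sketch of the render-to-texture setup with EXT_framebuffer_object (the 2048x2048 size is only an example; extension loading and error handling are omitted):

  GLuint fbo, colorTex, depthRb;

  /* Color attachment: a large texture. */
  glGenTextures(1, &colorTex);
  glBindTexture(GL_TEXTURE_2D, colorTex);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2048, 2048, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

  /* Depth attachment: a renderbuffer with 24-bit depth. */
  glGenRenderbuffersEXT(1, &depthRb);
  glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
  glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 2048, 2048);

  glGenFramebuffersEXT(1, &fbo);
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
  glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                            GL_TEXTURE_2D, colorTex, 0);
  glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                               GL_RENDERBUFFER_EXT, depthRb);

  if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT)
  {
      glViewport(0, 0, 2048, 2048);
      /* ... draw the scene, then read it back with glReadPixels ... */
  }
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   /* back to the window's framebuffer */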

Jan.

Hi Jan,

My question was: is it possible to temporarily resize the viewport to 8x its size and use ReadPixels to get the BMP image?

As I said before, we are experiencing a lot of issues with FBOs and Windows Vista…

Thanks,

Alberto

You can’t make an on-screen window bigger than the screen.

However, for a power-of-2 multiplier like 8, couldn't you just use ReadPixels to get a smaller image, then copy each pixel to an 8x8 block of a larger one on the CPU?

What sort of FBO problems are you having? First I’ve heard of there being Vista issues there.
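A rough sketch of that pixel-replication step on the CPU (names are illustrative, RGB data assumed):

  /* Blow up a small RGB image by an integer factor by copying each source
     pixel into a factor x factor block of the destination buffer.
     src is srcW x srcH; dst must hold (srcW*factor) x (srcH*factor) pixels. */
  void ReplicatePixels(const unsigned char *src, int srcW, int srcH,
                       unsigned char *dst, int factor)
  {
      int dstW = srcW * factor;
      for (int y = 0; y < srcH; ++y)
          for (int x = 0; x < srcW; ++x)
          {
              const unsigned char *p = src + ((size_t)y * srcW + x) * 3;
              for (int dy = 0; dy < factor; ++dy)
                  for (int dx = 0; dx < factor; ++dx)
                  {
                      unsigned char *q = dst + (((size_t)y * factor + dy) * dstW
                                                + (size_t)x * factor + dx) * 3;
                      q[0] = p[0]; q[1] = p[1]; q[2] = p[2];
                  }
          }
  }

Of course this only scales the existing pixels up; it doesn't add any detail to the image.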

Ah, sorry, I misread that part of your post. But what Lindley suggests is a reasonable idea that will work, though it requires you to set up your projection matrix carefully. Actually, it is the same as taking a very high-resolution screenshot by rendering it in parts, reading each part back, and composing the result on the CPU. There was a discussion about that a few weeks ago.

Jan.

Lindley,

We are using the color renderbuffer and not render-to-texture. The same code works perfectly on XP but not on Vista.

Jan,

Maybe I got it. Do you mean using gluPickMatrix & glReadPixels to get many small tiles of the bigger image?

This sounds like a good idea.

Thanks again,

Alberto

Yes, that was the basic idea. However, I have never done that myself, so I cannot tell you exactly how it works. You may want to search the forum for posts about it.

Jan.

Originally posted by devdept:
Lindley,

We are using the color renderbuffer and not render-to-texture. The same code works perfectly on XP but not on Vista.

Ah, so it’s a problem with Renderbuffer objects? You could always try using textures instead; not like it makes a whole lot of difference for the most part.

Also check for driver updates, naturally. An error that glaring would hopefully be fixed pretty fast.

Answering the original issue

Is there any way to remove it? Any workaround?
The artifacts at the face edges are due to lack of z-buffer precision as Jan already explained.

To fix this you need to specify a depth buffer with higher resolution than what you currently have selected.
Check glGetIntegerv(GL_DEPTH_BITS, &i) in your program and you'll probably get 16 bits while the image above is rendered, and 24 bits when using the hardware.

Microsoft’s SW implementation supports 16-, 24- and 32-bit depth in both onscreen and memory DC pixel formats. Change your pixel format selection code to make sure you get a 24-bit depth buffer on the memory DC (matching what most hardware can do).

ChoosePixelFormat() is pretty dumb; better to enumerate the formats yourself and pick one manually.
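A sketch of that manual enumeration in C (hMemDC is assumed to be a memory DC with a DIB section already selected into it):

  /* Enumerate all pixel formats on the memory DC by hand and pick the first
     RGBA format that can draw to a bitmap with OpenGL and has at least a
     24-bit depth buffer, instead of trusting ChoosePixelFormat(). */
  PIXELFORMATDESCRIPTOR pfd;
  int count = DescribePixelFormat(hMemDC, 1, sizeof(pfd), &pfd);  /* number of formats */
  int chosen = 0;

  for (int i = 1; i <= count; ++i)
  {
      DescribePixelFormat(hMemDC, i, sizeof(pfd), &pfd);

      if (!(pfd.dwFlags & PFD_DRAW_TO_BITMAP))  continue;   /* must render to a bitmap */
      if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL))  continue;
      if (pfd.iPixelType != PFD_TYPE_RGBA)      continue;
      if (pfd.cColorBits < 24)                  continue;
      if (pfd.cDepthBits < 24)                  continue;    /* the important part here */

      chosen = i;
      break;
  }

  if (chosen)
      SetPixelFormat(hMemDC, chosen, &pfd);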

If that’s not it: depth buffer precision increases as the ratio zFar/zNear gets smaller. That is, push zNear out as far as you can without clipping your objects and pull zFar in, also without clipping your objects, and you get the most out of your depth buffer bits.

BTW, the artifacts on the dark sides of the objects in your image above could also be eliminated by enabling face culling, if you don’t need to draw the back faces for two-sided lighting or for models with inconsistent (bad) winding. That wouldn’t eliminate the problem on the intersecting objects, though.

Hi Relic,

Thanks so much for your detailed answer.

Honestly, for a professional app that needs to create poster-sized bitmaps, would you follow the glu.PickMatrix/gl.ReadPixel tiling approach, or stick with the MS OpenGL implementation with the required 24-bit depth buffer?

Creating a huge Memory DC is also very cumbersome.

Thanks again to you all,

Alberto

I would first follow Relic’s advice, since you can fix your current issues in a very short time.

If you want to extend your application and need poster-sized screenshots, I would go with the tiled rendering approach. The advantage is that you get hardware acceleration and, above all, more than OpenGL 1.1, which is where the MS implementation is stuck. That means you can use shaders and all the fancy stuff, and it will be fast.

Jan.

Honestly, for a professional app that needs to create poster-sized bitmaps, would you follow the glu.PickMatrix/gl.ReadPixel tiling approach, or stick with the MS OpenGL implementation with the required 24-bit depth buffer?
Easy: if “poster sized” is above the GL_MAX_VIEWPORT_DIMS of your underlying OpenGL implementation, you must use tiles anyway.
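(That limit can be queried like this; it returns a width/height pair:)

  GLint maxDims[2];
  glGetIntegerv(GL_MAX_VIEWPORT_DIMS, maxDims);   /* maxDims[0] = max width, maxDims[1] = max height */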

If you are talking about really poster-sized bitmaps: I have done a similar kind of application earlier, where we wanted to see the model at 2x/4x/8x large-format sizes. For this we used to tile the camera along x and y, capture each tile's buffer and stitch the images together. This worked out very well for us.

If you want to generate something large without this approach… sorry, I have no idea :confused:
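For what it's worth, a rough sketch of the tile loop Kumar describes, using gluPickMatrix to magnify one sub-region of the view per pass and glReadPixels to grab it (SetupProjection, DrawScene and the buffer handling are illustrative placeholders):

  #include <stdlib.h>
  #include <string.h>
  #include <GL/gl.h>
  #include <GL/glu.h>

  void SetupProjection(void);   /* the app's usual gluPerspective/glFrustum call;
                                   it must NOT call glLoadIdentity itself */
  void DrawScene(void);         /* the app's scene drawing */

  /* Render a (factor*W) x (factor*H) poster as factor x factor tiles through a
     W x H viewport. 'poster' must hold factor*W * factor*H RGB pixels. */
  void RenderPoster(int W, int H, int factor, unsigned char *poster)
  {
      GLint viewport[4] = { 0, 0, W, H };
      unsigned char *tile = (unsigned char *)malloc((size_t)W * H * 3);

      glPixelStorei(GL_PACK_ALIGNMENT, 1);          /* tightly packed read-back */

      for (int ty = 0; ty < factor; ++ty)
          for (int tx = 0; tx < factor; ++tx)
          {
              glMatrixMode(GL_PROJECTION);
              glLoadIdentity();
              /* Map one (W/factor) x (H/factor) window region onto the whole
                 viewport, i.e. magnify that tile of the view by 'factor'. */
              gluPickMatrix((tx + 0.5) * W / (double)factor,
                            (ty + 0.5) * H / (double)factor,
                            W / (double)factor, H / (double)factor, viewport);
              SetupProjection();
              glMatrixMode(GL_MODELVIEW);

              DrawScene();
              glReadPixels(0, 0, W, H, GL_RGB, GL_UNSIGNED_BYTE, tile);

              /* Stitch this tile into the big image (both stored bottom-up). */
              for (int row = 0; row < H; ++row)
                  memcpy(poster + (((size_t)ty * H + row) * (factor * W) + (size_t)tx * W) * 3,
                         tile + (size_t)row * W * 3,
                         (size_t)W * 3);
          }
      free(tile);
  }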

Hi Relic,

Thanks again, this really makes sense!

Hi Kumar,

To learn about other methods, you should look into FBOs and their ability to render pixels to a renderbuffer/texture.

Thanks,

Alberto

I strongly recommend the tiled approach. You can render at whatever resolution you want and fully exploit hardware acceleration.

The following pseudo code will construct a perspective matrix that can be panned in post-projection space with ShiftX, ShiftY (pixels).

  
  // Half-height and half-width of the near plane for a vertical field of view FoV.
  v = near * Tan(FoV / 2);        // same as near / (Cos(FoV/2) / Sin(FoV/2))
  h = v * Aspect;

  // Convert the pixel shift into a shift of the frustum window on the near plane.
  sx = -(ShiftX * (2.0 * h) / Width);
  sy = -(ShiftY * (2.0 * v) / Height);

  left   = -h + sx;
  right  =  h + sx;
  bottom = -v + sy;
  top    =  v + sy;

  // Standard off-center (glFrustum-style) perspective matrix, indexed m[column, row].
  m = Identity;
  m[0,0] = (2 * near) / (right - left);
  m[2,0] = (right + left) / (right - left);
  m[1,1] = (2 * near) / (top - bottom);
  m[2,1] = (top + bottom) / (top - bottom);
  m[2,2] = -(far + near) / (far - near);
  m[3,2] = -(2 * far * near) / (far - near);
  m[2,3] = -1;
  m[3,3] = 0;
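For what it's worth, assuming m is stored as a column-major float[16] (element [col,row] at index col*4 + row, which is the layout OpenGL expects), a minimal usage sketch would be:

  glMatrixMode(GL_PROJECTION);
  glLoadMatrixf(m);             /* glLoadMatrixf takes a column-major float[16] */
  glMatrixMode(GL_MODELVIEW);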

Also, you may find that many OpenGL implementations will render fine even when the window is occluded if you clear the buffer manually with geometry rather than with glClear.

Hi Madoc,

Yes, your code is exactly what glu.PickMatrix does.

I already did some tests and this approach works perfectly!

Thanks so much guys!

Alberto

Huh, didn’t know that. I’ve never even looked at GLU in the 10+ years I’ve been using OpenGL!

Hi Guys,

We are doing many tests on RenderToBitmap, based on the glReadPixels / glu.PickMatrix technique, to render a poster-sized image 2/4/8 times bigger than the OpenGL viewport.

Now the problem is:

On some machines, and for sure on the Microsoft OpenGL implementation, we get a size mismatch between the tile bitmap and the glReadPixels contents.

More specifically:

The procedure works perfectly on our machines, but on some customer machines a grid appears on the final image.

Doing some debugging, we realised that glReadPixels, instead of reading a 100x100 tile, seems to read a 98x98 tile, and once the tiles are drawn with the correct step on the poster image a small gap appears between them.

As I mentioned before, this does not happen on every graphics card.

Any idea on where to start?

Thanks so much in advance,

Alberto