Framebuffer with multisampling AND stencil buffer

I am having success creating a framebuffer for offscreen rendering either with multisampling or with a stencil buffer attached, but I fail when trying to have both.

Trying to create a multisampled, stencil-buffered framebuffer fails with GL_FRAMEBUFFER_INCOMPLETE_MULTISAMPLE_EXT. The following code works as expected without multisampling, but fails when multisampling is enabled.


static GLuint createOffScreenFrameBuffer(int width, int height, int numSamples)
{
    GLuint fboID, colorBufferID, depth_stencil_texture;
    GLint maxSamples;

    glGetIntegerv(GL_MAX_SAMPLES_EXT, &maxSamples);
    if (maxSamples < numSamples)
        throw gcnew System::Exception("Requested number of multisamples not supported by video format");

    glGenFramebuffersEXT(1, &fboID);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboID);
    glGenRenderbuffersEXT(1, &colorBufferID);

    glGenTextures(1, &depth_stencil_texture);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, depth_stencil_texture);
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_DEPTH24_STENCIL8_EXT, width, height, 0, GL_DEPTH_STENCIL_EXT, GL_UNSIGNED_INT_24_8_EXT, NULL);
    glTexParameterf(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    if (numSamples > 1) // one sample per pixel is the same as no multisampling
    {
        glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, colorBufferID);
        glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, numSamples, GL_RGBA8, width, height);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, colorBufferID);

        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_RECTANGLE_ARB, depth_stencil_texture, 0);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_RECTANGLE_ARB, depth_stencil_texture, 0);
    }
    else
    {
        glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, colorBufferID);
        glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, width, height);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, colorBufferID);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_RECTANGLE_ARB, depth_stencil_texture, 0);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_RECTANGLE_ARB, depth_stencil_texture, 0);
    }

    GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);

    if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
        throw gcnew System::Exception("Failed to create off screen framebuffer");

    return fboID;
}

The texture you want to attach as the depth/stencil buffer is not multisampled. For multisampling to work, ALL attachments have to be multisampled (and use the same number of samples).

In order to make those textures multisampled, too, see here:
http://www.opengl.org/registry/specs/ARB/texture_multisample.txt

As an alternative, you may want to consider rendering into multisampled renderbuffers first and then downsampling (i.e. blitting) them into non-multisampled textures.
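
For reference, a minimal sketch of that alternative using the same EXT entry points as the code above (untested; msFbo, msColor, msDepthStencil and resolveFbo are placeholder names, and resolveFbo is assumed to be an ordinary single-sampled FBO created beforehand, e.g. with a texture color attachment):

    // Sketch: fully multisampled FBO built from renderbuffers only,
    // resolved into a single-sampled FBO with glBlitFramebufferEXT (EXT_framebuffer_blit).
    GLuint msFbo, msColor, msDepthStencil;
    glGenFramebuffersEXT(1, &msFbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, msFbo);

    glGenRenderbuffersEXT(1, &msColor);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msColor);
    glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, numSamples, GL_RGBA8, width, height);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, msColor);

    glGenRenderbuffersEXT(1, &msDepthStencil);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msDepthStencil);
    glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, numSamples, GL_DEPTH24_STENCIL8_EXT, width, height);
    // the same renderbuffer is attached as both depth and stencil
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, msDepthStencil);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, msDepthStencil);

    // ... render the scene into msFbo ...

    // resolve: blit the multisampled color buffer into the single-sampled resolveFbo
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msFbo);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolveFbo);
    glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);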

That makes sense. However, I am unable to use the extension. I am using GLee to load extensions, and for some reason, though the extension is supported by the driver, it is not included in the GLee header.
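
In case it helps, missing entry points can also be fetched by hand when the loader's header lags behind the driver. A minimal sketch, assuming a Windows context and a reasonably recent glext.h (the PFNGL... typedef comes from glext.h; GLee is not involved):

    // Sketch (Windows-only assumption): fetch an ARB_texture_multisample entry point
    // directly from the driver instead of relying on the GLee header.
    #include <windows.h>
    #include <GL/gl.h>
    #include "glext.h"   // provides PFNGLTEXIMAGE2DMULTISAMPLEPROC and the related enums

    PFNGLTEXIMAGE2DMULTISAMPLEPROC pglTexImage2DMultisample =
        (PFNGLTEXIMAGE2DMULTISAMPLEPROC)wglGetProcAddress("glTexImage2DMultisample");
    if (pglTexImage2DMultisample == NULL)
    {
        // the driver does not actually export the function
    }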

Anyway, I have no special need for a texture. I initially did try this with a multisampled renderbuffer, but dropped it since it would let any stencil test pass even though no errors were given during framebuffer construction (stencil testing failed silently). I thought I HAD to use textures then. After your reply indicated otherwise, I tried renderbuffers again, and this time I didn't neglect to attach it as both the depth and the stencil buffer, and it worked fine. Thanks for that little pointer!

Now my problem is actually one of too much multisampling. I use stenciling to render graphics interlaced: I render every other scan line, advance my time, and render the remaining scan lines. With multisampling, the scan lines bleed into each other. In other words, if I render only every other scan line, then the intermediate scan lines should be blank, but with multisampling, the rendered lines bleed into the blank lines.

I need multisampling (though really only horizontally), so my framebuffer must be multisampled, but I would rather not have the stencil be multisampled.

Is there any solution to this problem, or are my requirements too strange?

Even if you correctly place the coordinates at integer line positions, some multisampling systems with non-aligned grids will indeed break your interlacing.

In fact, this is a difficult problem.
Possible options:

  • work on multisampled full frames at 60/50 Hz, then interlace, discarding every other line of each pair of progressive frames. Problem: half the rendered fragments are wasted.
  • do oversampling by hand by working at twice the height and twice the width, so you can be certain to be axis-aligned, then average 4 pixels into one (see the sketch below). Problem: it does not benefit from fancy multisampling modes.
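
A rough sketch of the second option, assuming EXT_framebuffer_blit is available and that a GL_LINEAR blit over an exact 2:1 downscale averages each 2x2 block (common in practice, though not something the spec promises); bigFbo and smallFbo are placeholder names:

    // Sketch: render into bigFbo at 2*width x 2*height (no multisampling),
    // then average down into smallFbo at width x height.
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, bigFbo);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, smallFbo);
    glBlitFramebufferEXT(0, 0, 2 * width, 2 * height,
                         0, 0, width, height,
                         GL_COLOR_BUFFER_BIT, GL_LINEAR);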

I was actually considering rendering two full frames and then interlacing. In an earlier version, with DX, I did this in a shader: I first rendered the two fields as full frames, each into its own texture, and then output the result from a shader sampling the two textures.
I have a generally quite simple scene and am in no way limited by my render speed (but rather by a lot of pixel copying elsewhere), so it could be a usable solution again.

Can this be done in OpenGL without writing a shader, or should I try to dust off the old code?

It should be doable simply, like this (a sketch in GL calls follows the list):

  • glClear depth, color, stencil
  • draw frame 0
  • enable stencil write
  • disable depth test
  • draw every odd (or even) line (can be done with a single call, using a fullscreen quad with an alpha-tested line texture)
  • enable stencil test
  • glClear depth (and color too, if needed)
  • draw frame 1
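
A rough sketch of those steps as GL calls; drawScene(t) and drawOddScanlines() are placeholders for your own drawing code:

    // Sketch of the sequence above (placeholder drawing functions).
    glClearStencil(0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    drawScene(t0);                                       // frame 0, drawn everywhere

    glEnable(GL_STENCIL_TEST);                           // write 1 wherever the odd lines are
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
    glDisable(GL_DEPTH_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // keep frame 0's color intact
    drawOddScanlines();                                  // e.g. fullscreen quad with an alpha-tested line texture
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glEnable(GL_DEPTH_TEST);

    glStencilFunc(GL_EQUAL, 1, 0xFF);                    // from now on only the odd lines pass
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glClear(GL_DEPTH_BUFFER_BIT);                        // clear depth, keep frame 0's color

    drawScene(t1);                                       // frame 1, written only on the odd lines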

However, a shader may be more efficient.
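
For the shader route, a minimal sketch of the fragment shader source as it might sit in the C++ code, assuming both fields were rendered into GL_TEXTURE_RECTANGLE_ARB textures; all names here are placeholders:

    // Sketch: pick the output per scanline from one of two full-frame field textures.
    static const char* interlaceFragSrc =
        "#extension GL_ARB_texture_rectangle : enable\n"
        "uniform sampler2DRect field0;                   // full frame at time t0\n"
        "uniform sampler2DRect field1;                   // full frame at time t1\n"
        "void main()\n"
        "{\n"
        "    vec4 c0 = texture2DRect(field0, gl_FragCoord.xy);\n"
        "    vec4 c1 = texture2DRect(field1, gl_FragCoord.xy);\n"
        "    float odd = mod(floor(gl_FragCoord.y), 2.0); // 0 on even lines, 1 on odd lines\n"
        "    gl_FragColor = mix(c0, c1, odd);             // even lines from field0, odd lines from field1\n"
        "}\n";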

EDIT: I realized that this multisampling behavior was actually required for proper interlacing, odd as it may seem. If the graphics is a horizontal line one pixel thick, then non-multisampled rendering on an interlaced TV display would render the line in one field and not in the next. The line would flicker. This is not desired. Instead it should be somewhat present in both fields. A solid square would show the same flickering on its top and bottom edges. Sorry about wasting your time in this thread :-/


Before edit

That is pretty much what I am doing now. I render every other line into the stencil buffer, then I test for stencil==0 and render one field, and then test for stencil==1 and render the other.

The result without multisampling can be seen at
www.greenleaf.dk/no multisampling.png
The same procedure with x16 multisampling can be seen at
www.greenleaf.dk/x16 multisampling.png

At the fast-moving edges, where there is a large difference between the two fields, the first image shows isolated lines, while the multisampled one shows these lines blurred up and down to form a semi-transparent surface.

I am new to OpenGL and may be missing an important thing here and there. I now think that perhaps my problem has nothing to do with a multisampled stencil buffer (if there even is such a thing as blurred stencil values??), but is simply due to the blurring of pixels rendered in the individual fields.

As an example, if the stencil at row 100 is 1 and it is 0 at rows 99 and 101, and I render testing for stencil equal to 1, then row 100 gets graphics rendered and rows 99 and 101 do not… as expected. But does the multisampling in fact result in color being written into rows 99 and 101? Is this right?

The question is then how I make multisampling only sample left and right and not up and down… or perhaps I should avoid asking specifics and simply ask "how do I make my interlaced rendering, as seen in the previously mentioned images, stop blurring the fields into each other?" :)

The multisampling image above looks very wrong, unless you want to deinterlace with a kind of linear blending between the 2 frames…

I agree… it does look wrong. That was the reason I started writing about multisampling and stencils etc.
That is, however, how they want it in the broadcasting business, to avoid the flickering-line issue. Any sharp-edged object will otherwise have flickering pixels on its upper and lower edges.

In essence, I create graphics with two samples in time embedded in a single frame. They then broadcast those two samples at different times again, and this gives aliasing if the two samples (fields) are not slightly blurred in time.

My primary area of expertise is actually neither TV video signals nor OpenGL, so even though it makes sense to me now, it also seems slightly confusing. It is what they want anyway, so considering this is what the software generates, we are all happy ;)

If I were a user of your system I would be quite pissed off with this result.

I will try to find some time to code a better solution. If you can post your current multisampled demo as a simple .c GLUT program, it will help me a lot.

I do not quite understand your point here. This is not just "good enough"; it is actually what is required. Consider it temporal smoothing. Without some blending of the fields, temporal aliasing is seen. I have now seen the output with and without antialiasing on a video monitor, and the edges do seem to jitter up and down when the fields are not smoothed into each other.

I would like to see how to go about making clean-cut interlacing with no smoothing in time, as I originally requested, but I am not really able to cut out the code doing this. It is a very large application with a lot of classes supporting each other. The primary code is a WinForms C# application using various C# class libraries and calling unmanaged C++ code, so it would require a full rewrite and not just a cut-out function :-/

Of course it depends on the usage, and this kind of temporal smoothing is very nice for static artificial drawings with a lot of perfectly horizontal lines.
But it is pretty bad for dynamic scenes.

About the jitter up and down, the worst case is a 1-pixel-wide horizontal line that appears only in one of the fields.
However, if it is shifted half a pixel vertically, it belongs 50% to each field, which feels much better to the eye.

The best deinterlacing solutions use a motion-detection system which analyses previous frame(s) (and sometimes future frames) and gives a steady result on static parts, and un-blurred, high-quality motion on fast-moving parts.

Yes, it depends on the usage. My use is primarily just that: static images with axis-aligned edges. There are other things too, but that is the primary use.

Shifting the line ½ pixel up or down gives the same result as the antialiasing, as far as I can see: two lines with 50% transparency each.

I didn't know how deinterlacing was generally implemented. It seems natural to deal with the data depending on how it moves, but that certainly seems too complex for my use.

I read a paper about multisampling from NVIDIA somewhere. I can't find it again, but I seem to recall that it talked about ways to manually choose the multisampling locations. If I remember correctly, that could possibly be used to only multisample horizontally?

Yes, but it will also work properly with moving stuff, unlike your current method :)
I heard that professional TV graphics are slightly pre-blurred vertically to further reduce flicker, while live video is untouched, as it is less prone to having perfectly horizontal lines.
This kind of pre-blur should be done separately for each field, to avoid ghosting with animated graphics.
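
A minimal sketch of such a per-field vertical pre-blur as a fragment shader string, assuming each field is first rendered as a full progressive frame into its own rectangle texture; the 0.25/0.5/0.25 weights are only an example of a "slight" blur:

    // Sketch: slight 3-tap vertical blur applied to one field before it is interlaced.
    static const char* fieldBlurFragSrc =
        "#extension GL_ARB_texture_rectangle : enable\n"
        "uniform sampler2DRect field;\n"
        "void main()\n"
        "{\n"
        "    vec2 p  = gl_FragCoord.xy;\n"
        "    vec4 up = texture2DRect(field, p + vec2(0.0,  1.0));\n"
        "    vec4 c  = texture2DRect(field, p);\n"
        "    vec4 dn = texture2DRect(field, p + vec2(0.0, -1.0));\n"
        "    gl_FragColor = 0.25 * up + 0.5 * c + 0.25 * dn;\n"
        "}\n";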

I read a paper about multisampling from NVIDIA somewhere. I can't find it again, but I seem to recall that it talked about ways to manually choose the multisampling locations. If I remember correctly, that could possibly be used to only multisample horizontally?

Not quite the same, but this is the nearest I could find:
http://www.opengl.org/registry/specs/NV/explicit_multisample.txt
However, it would go back to ‘rendering stuff that will be discarded’ (and it needs quite recent hardware).
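
If you do look at it, the extension at least lets you query where the samples sit inside a pixel, which tells you whether the grid is axis-aligned or rotated. A sketch, assuming the NV entry points are loaded:

    // Sketch (requires NV_explicit_multisample): inspect the sample positions.
    GLint samples = 0;
    glGetIntegerv(GL_SAMPLES, &samples);
    for (GLint i = 0; i < samples; ++i)
    {
        GLfloat pos[2];   // x/y offsets in [0,1] within the pixel
        glGetMultisamplefvNV(GL_SAMPLE_POSITION_NV, i, pos);
        // pos[1] shows how far each sample strays vertically from the scanline centre
    }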

You talk about pre-blurring on a per-field basis, but unless the blur has a range of more than one pixel, no blur would ever span beyond the line being blurred and its neighbouring lines (from the other field)…? This would then result in the only-horizontal blur that I initially sought, but no longer want.

The extension you linked to looks interesting. I will take a closer look in the morning. New hardware is not really a problem; the application is not supposed to run on a wide range of different systems. Rendering things twice is also not a problem.

Thanks for the replies you have given in this thread :)