Reading a texture from multisample back-buffer

Hi,

I have a multisample OpenGL viewport (when creating the viewport I request a multisample pixel format via wglChoosePixelFormatARB) and my geometry is displayed antialiased.

Now, I read the back buffer into a texture that I later use for a quick repaint of my scene in some situations.

The texture is created with NEAREST min and mag filters and has the same size as the viewport, so no magnification or minification is involved.

On ATI cards the texture is exactly equal to the scene on screen (so when I use it for the repaint I don’t notice any change), but on NVidia cards they are not the same: the line colors become a bit more intense in the texture.

What could be the problem and how could I fix it?

If I use a pixel format without multisampling there’s no difference from the image on screen on either ATI or NVidia.

Perhaps some screen images might help others see your problem?

This is the screen capture of the rendering of the lines:

and this is the screen capture of the texture generated by reading the back buffer:

I have noticed this. It seems the NVidia driver isn’t using the same downsample filter for the ReadPixels path as for the SwapBuffers path.

You could try doing a manual resolve/downsample via glBlitFramebuffer to a single sample texture (which should ideally happen on the GPU), and read back the texels from that. That might produce the desired results.
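In outline (a sketch only, in pseudo-code using a thin gl wrapper; fboResolve stands for a hypothetical single-sample FBO created beforehand):

    // Resolve the multisampled window into a single-sample FBO,
    // then read the texels from that FBO instead of the window.
    gl.BindFramebufferEXT(gl.READ_FRAMEBUFFER_EXT, 0);          // window (MSAA)
    gl.BindFramebufferEXT(gl.DRAW_FRAMEBUFFER_EXT, fboResolve); // single-sample
    gl.BlitFramebufferEXT(0, 0, width, height, 0, 0, width, height,
                          gl.COLOR_BUFFER_BIT, gl.NEAREST);     // GPU downsample happens here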

It looks like only a difference in gamma-correction.

I’m currently using glCopyTexSubImage2D to copy the back buffer to the texture.

I’ll give it a try using glBlitFramebuffer…

Hi,

I tried using glBlitFramebuffer, but I still get the difference between the texture and the rendered image on NVidia cards.

Maybe I’m not using it the right way?

Here’s my code.

    // Draw the scene on the back buffer

    // Generate the FBO: a single-sample color and depth renderbuffer
    rb = gl.GenRenderbuffersEXT();
    gl.BindRenderbufferEXT(gl.RENDERBUFFER_EXT, rb);
    gl.RenderbufferStorageEXT(gl.RENDERBUFFER_EXT, gl.RGB, width, height);

    rbDepth = gl.GenRenderbuffersEXT();
    gl.BindRenderbufferEXT(gl.RENDERBUFFER_EXT, rbDepth);
    gl.RenderbufferStorageEXT(gl.RENDERBUFFER_EXT, gl.DEPTH_COMPONENT, width, height);

    fbo = gl.GenFramebuffersEXT();
    gl.BindFramebufferEXT(gl.FRAMEBUFFER_EXT, fbo);
    gl.FramebufferRenderbufferEXT(gl.FRAMEBUFFER_EXT, gl.COLOR_ATTACHMENT0_EXT, gl.RENDERBUFFER_EXT, rb);
    gl.FramebufferRenderbufferEXT(gl.FRAMEBUFFER_EXT, gl.DEPTH_ATTACHMENT_EXT, gl.RENDERBUFFER_EXT, rbDepth);

    int status = gl.CheckFramebufferStatusEXT(gl.FRAMEBUFFER_EXT);
    CheckFboStatus(status);

    // Copy the back buffer to the FBO
    gl.BindFramebufferEXT(gl.READ_FRAMEBUFFER_EXT, 0);  // IS IT RIGHT???????????????
    gl.BindFramebufferEXT(gl.DRAW_FRAMEBUFFER_EXT, fbo);
    gl.BlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, gl.COLOR_BUFFER_BIT, gl.NEAREST);

    // Read the FBO image into 9 textures (3x3 grid)
    for (int j = 0; j < 3; j++)
    {
        for (int i = 0; i < 3; i++)
        {
            gl.BindTexture(gl.TEXTURE_2D, texName[i + j * 3]);
            // Debug.WriteLine("Grabbing tex " + texName[i + j * 3] + " at " + (texSize * i) + " , " + (texSize * j));
            gl.CopyTexSubImage2D(gl.TEXTURE_2D, 0, 0, 0, texSize * i, texSize * j, texSize, texSize);
        }
    }

Could anybody help, please?

Just a thought:
why use RenderbufferStorage? Render to an offscreen texture buffer that you can blit to the back buffer any time you like. Now even if there is a difference between AMD and nVidia versions, at least it is consistent on a given card whether it is the first render or the “fake” blit.

Please use [code] blocks to mark code.

glCopyTexSubImage2D reads from the READ_BUFFER of the READ_FRAMEBUFFER. It doesn’t look like you’re reading from the framebuffer you intended to (see glBindFramebuffer() and glReadBuffer()). It looks like you’re still copying from the MSAA framebuffer here, just as before.
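A minimal sketch of what the read path would need to look like (assuming the same gl wrapper as your snippets): bind the single-sample FBO as the READ framebuffer and select its color attachment before the copy:

    // After the resolve blit into 'fbo' (single-sample), read from it,
    // not from the window-system framebuffer:
    gl.BindFramebufferEXT(gl.READ_FRAMEBUFFER_EXT, fbo);
    gl.ReadBuffer(gl.COLOR_ATTACHMENT0_EXT);
    gl.BindTexture(gl.TEXTURE_2D, texName[i + j * 3]);
    gl.CopyTexSubImage2D(gl.TEXTURE_2D, 0, 0, 0, texSize * i, texSize * j, texSize, texSize);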

And I agree with [b]tonyo_au[/b]: just blit to an FBO bound to a texture. Then you don’t need the CopyTexSubImage to get it into another texture.
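For example (a sketch, same assumed wrapper; texResolve and fboResolve are hypothetical names): attach the texture directly as the FBO’s color buffer, blit into it, and the texture is immediately ready to use:

    // Single-sample FBO whose color buffer IS the texture
    texResolve = gl.GenTextures();
    gl.BindTexture(gl.TEXTURE_2D, texResolve);
    gl.TexImage2D(gl.TEXTURE_2D, 0, gl.RGB, width, height, 0, gl.RGB, gl.UNSIGNED_BYTE, null);
    gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

    fboResolve = gl.GenFramebuffersEXT();
    gl.BindFramebufferEXT(gl.FRAMEBUFFER_EXT, fboResolve);
    gl.FramebufferTexture2DEXT(gl.FRAMEBUFFER_EXT, gl.COLOR_ATTACHMENT0_EXT, gl.TEXTURE_2D, texResolve, 0);

    // Resolve blit straight into the texture; no CopyTexSubImage needed
    gl.BindFramebufferEXT(gl.READ_FRAMEBUFFER_EXT, 0);
    gl.BindFramebufferEXT(gl.DRAW_FRAMEBUFFER_EXT, fboResolve);
    gl.BlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, gl.COLOR_BUFFER_BIT, gl.NEAREST);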

Also yes, binding FBO 0 reverts to the system framebuffer. I’m presuming that’s the one you’ve got that’s multisampled.

If you still have issues, you could see if the same happens when rendering to an off-screen MSAA FBO. There may be something about your system framebuffer that’s “special”.

I’m assuming you’re not using sRGB framebuffers.
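If in doubt, one quick check (a sketch, same assumed wrapper; requires GL 3.0 / ARB_framebuffer_sRGB):

    // If sRGB writes are enabled and the pixel format is sRGB-capable,
    // color writes are gamma-converted, which would cause an intensity shift.
    bool srgbWrites = gl.IsEnabled(gl.FRAMEBUFFER_SRGB);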

Hi,

thank you all for the replies.
Drawing directly to an FBO bound to a texture works and gives the same image!

Thank you very much.

However, drawing directly on the FBO with a texture attachment, I get a texture without antialiasing.

I will try to use a multisample FBO and see what happens, but I’m losing hope…

You need to create multisample textures with glTexImage2DMultisample, not normal textures, and enable multisampling.
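For reference, a sketch of what that looks like (requires GL 3.2 / ARB_texture_multisample; same gl wrapper style as the snippets above, texMS is a hypothetical name):

    // A multisample texture instead of a multisample renderbuffer
    texMS = gl.GenTextures();
    gl.BindTexture(gl.TEXTURE_2D_MULTISAMPLE, texMS);
    gl.TexImage2DMultisample(gl.TEXTURE_2D_MULTISAMPLE, nSamples, gl.RGBA8, width, height, true);

    // Attach it as the FBO's color buffer
    gl.BindFramebufferEXT(gl.FRAMEBUFFER_EXT, fboMS);
    gl.FramebufferTexture2DEXT(gl.FRAMEBUFFER_EXT, gl.COLOR_ATTACHMENT0_EXT, gl.TEXTURE_2D_MULTISAMPLE, texMS, 0);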

I tried drawing on a multisample FBO with the same number of samples as my pixel format (that is, 2) and then copying it to my back buffer.

The situation improved, but it’s still not pixel perfect: near-horizontal and near-vertical lines are the same, oblique lines are not…


    int nSamples = 2;

    // Create a multisample color renderbuffer
    rbColorMS = gl.GenRenderbuffersEXT();
    gl.BindRenderbufferEXT(gl.RENDERBUFFER_EXT, rbColorMS);
    gl.RenderbufferStorageMultisampleEXT(gl.RENDERBUFFER_EXT, nSamples, gl.RGBA8, fboTexSize, fboTexSize);
    gl.BindRenderbufferEXT(gl.RENDERBUFFER_EXT, 0);

    // Create a multisample depth renderbuffer
    rbDepthMS = gl.GenRenderbuffersEXT();
    gl.BindRenderbufferEXT(gl.RENDERBUFFER_EXT, rbDepthMS);
    gl.RenderbufferStorageMultisampleEXT(gl.RENDERBUFFER_EXT, nSamples, gl.DEPTH_COMPONENT, fboTexSize, fboTexSize);
    gl.BindRenderbufferEXT(gl.RENDERBUFFER_EXT, 0);

    // Create the FBO and attach the two renderbuffers
    fboMS = gl.GenFramebuffersEXT();
    gl.BindFramebufferEXT(gl.FRAMEBUFFER_EXT, fboMS);
    gl.FramebufferRenderbufferEXT(gl.FRAMEBUFFER_EXT, gl.COLOR_ATTACHMENT0_EXT, gl.RENDERBUFFER_EXT, rbColorMS);
    gl.FramebufferRenderbufferEXT(gl.FRAMEBUFFER_EXT, gl.DEPTH_ATTACHMENT_EXT, gl.RENDERBUFFER_EXT, rbDepthMS);

    status = gl.CheckFramebufferStatusEXT(gl.FRAMEBUFFER_EXT);
    CheckFboStatus(status);
    gl.BindFramebufferEXT(gl.FRAMEBUFFER_EXT, 0);

    // Enable the multisample FBO and draw
    gl.BindFramebufferEXT(gl.FRAMEBUFFER_EXT, fboMS);
    gl.Clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    // DRAW THE SCENE on the multisample FBO....

...

    // Later on, copy the multisample FBO to the back buffer
    gl.BindFramebufferEXT(gl.READ_FRAMEBUFFER_EXT, fboMS);
    gl.BindFramebufferEXT(gl.DRAW_FRAMEBUFFER_EXT, 0);
    gl.BlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, gl.COLOR_BUFFER_BIT, gl.NEAREST);
    gl.BindFramebufferEXT(gl.READ_FRAMEBUFFER_EXT, 0);
    gl.BindFramebufferEXT(gl.DRAW_FRAMEBUFFER_EXT, 0);
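One thing I can at least verify (a sketch, same wrapper style as above) is whether the FBO really ended up with the same sample count as the window, since the driver is free to round the requested count up. Even with equal counts, the sample positions may differ between the window and the FBO, which could explain why only oblique lines differ:

    gl.BindFramebufferEXT(gl.FRAMEBUFFER_EXT, fboMS);
    int fboSamples;
    gl.GetIntegerv(gl.SAMPLES, out fboSamples);   // sample count of the bound FBO

    gl.BindFramebufferEXT(gl.FRAMEBUFFER_EXT, 0);
    int winSamples;
    gl.GetIntegerv(gl.SAMPLES, out winSamples);   // sample count of the window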


So is there no way to get a pixel-perfect capture of a multisampled framebuffer on NVidia cards?

If I were doing the antialiasing myself (through the accumulation buffer), would I still get this problem? (I’m afraid of the penalty of having to draw the scene multiple times, though…)

I could also always draw into the multisampled FBO and copy it to the back buffer at every frame (instead of drawing directly to the back buffer for standard frames and using the draw-to-FBO + copy-to-back-buffer path only for the quick repaints at the end of a movement of the scene).
This way I would always pay the cost of the copy from FBO to back buffer (is it much?), but the frames would be consistent because I would always be drawing them the same way.
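In sketch form, the per-frame flow I have in mind (reusing fboMS from the snippet above; DrawScene and SwapBuffers are stand-ins for my actual draw and present calls):

    // Per-frame flow: always render into the MSAA FBO, then resolve to the window
    gl.BindFramebufferEXT(gl.FRAMEBUFFER_EXT, fboMS);
    gl.Clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    DrawScene();                                            // hypothetical scene draw

    gl.BindFramebufferEXT(gl.READ_FRAMEBUFFER_EXT, fboMS);
    gl.BindFramebufferEXT(gl.DRAW_FRAMEBUFFER_EXT, 0);
    gl.BlitFramebufferEXT(0, 0, width, height, 0, 0, width, height,
                          gl.COLOR_BUFFER_BIT, gl.NEAREST); // the one and only downsample
    SwapBuffers();                                          // hypothetical present call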

Thanks for your post. I tried drawing into a multisampled FBO too, but I can’t get pixel-perfect results on NVidia cards.

To your question: obtaining exactly the same output relies on either 1) replicating exactly the downsample filter that NVidia is using, or 2) just using NVidia’s downsampling. #2 is probably the safest, since #1 is potentially vendor-specific magic.

As far as obtaining the actual framebuffer samples, you can read back the individual subsamples from an MSAA FBO (possibly with the help of a tiny frag shader). You can also read back the downsampled pixel values, of course. So getting the actual data pre- and post-resolve isn’t a problem. You can also query sample positions, IIRC. But you still come back to needing to replicate the downsample filter being used (or just using the one that’s built in).
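For instance (a sketch, assuming a GL 3.2 / ARB_texture_multisample context; wrapper style follows the earlier snippets), the sample positions can be queried with glGetMultisamplefv, and individual subsamples can be fetched in a frag shader via texelFetch on a sampler2DMS:

    int samples;
    gl.GetIntegerv(gl.SAMPLES, out samples);
    float[] pos = new float[2];
    for (int s = 0; s < samples; s++)
    {
        // Position of sample s within the pixel, in [0,1] window coordinates
        gl.GetMultisamplefv(gl.SAMPLE_POSITION, s, pos);
    }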

I could also always draw into the multisampled FBO and copy it to the back buffer at every frame … This way I would always pay the cost of the copy from FBO to back buffer (is it much?), but the frames would be consistent because I would always be drawing them the same way.

Should be fairly cheap. I mean, the GPU has to downsample the image to render it anyway, and if you do timing in an MSAA video mode you’ll see that this is rolled into your SwapBuffers time. So forcing your own downsample by blitting from an MSAA FBO to a single-sample system framebuffer via glBlitFramebuffer should just be making this downsample explicit in your code.

Thank you very much.

I went down the route of always drawing my scene into the multisample FBO and copying it to the single-sample back buffer every frame, and it works nicely.