inconsistent alpha values in multisampled buffer

I am creating a Windows bitmap from an OpenGL multisampled pbuffer with a transparent background. I am using the alpha channel to control transparency in the resulting bitmap. The problem is that the alpha component of the resolved RGBA values is inconsistent with the color components.

For example, I have a 2x FSAA pbuffer. I clear it to RGBA (0,0,0,0) and then draw a filled white triangle. A pixel near the edge of the triangle will have RGBA values like (188,188,188,127). I would have expected the RGB components to be the same as the alpha component. As it is, when I draw the bitmap into my device context, some of the blended pixels have color components that overflow (a component value greater than 255, which produces incorrect colors). I am using the Win32 API AlphaBlend to draw the bitmap. It uses the following formula (shown here for the red component) on normalized [0, 1] values:

Dst.Red = Src.Red + (1 - Src.Alpha) * Dst.Red

So if I have a white destination pixel, and I substitute in the unnormalized values, I get this:

Dst.Red = 188 + (255 - 127) * 255 / 255 = 316 (Overflow!)

In my example, if the RGB components were the same as the alpha value, then there would never be an overflow. So my question is, for white fragments, why are the resulting RGB components not the same as the alpha component?
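
To make the overflow concrete, here is that same per-channel arithmetic as a small standalone check (just an illustrative sketch; blendChannel is a made-up helper, not part of my application):

#include <cstdio>

// AlphaBlend per-channel formula with a premultiplied source, in 8-bit values:
// dst = src + (255 - srcAlpha) * dst / 255
static int blendChannel (int srcChannel, int srcAlpha, int dstChannel)
{
  return srcChannel + (255 - srcAlpha) * dstChannel / 255;
}

int main ()
{
  // the pixel read back from the 2x pbuffer: RGB = 188, alpha = 127
  printf ("%d\n", blendChannel (188, 127, 255)); // 316 -> overflows an 8-bit channel
  // what I expected: RGB equal to alpha
  printf ("%d\n", blendChannel (127, 127, 255)); // 255 -> no overflow
  return 0;
}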

Here’s my code (MFC dialog application):


class CAlphaBlendDlg : public CDialogEx
{
  ...
  HGLRC       m_hBitmapRC;
  HPBUFFERARB m_hBitmapPbuffer;
  HDC         m_hBitmapPbufferDC;
  ...
};

BOOL CAlphaBlendDlg::OnInitDialog()
{
  CDialogEx::OnInitDialog();

  CClientDC dc (this);
  CRect clientRect;
  GetClientRect(clientRect);

  // create an off-screen buffer for bitmap rendering
  int numMultiSamples = 2;
  const int attributes[] =
  {
    // need OpenGL supported
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    // enable render to pbuffer
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    // at least 24 bits for depth
    WGL_DEPTH_BITS_ARB, 24,
    // need RGBA colors
    WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB,
    // at least 24 bits for color
    WGL_COLOR_BITS_ARB, 24,
    // need alpha channel for transparent background
    WGL_ALPHA_BITS_ARB, 8,
    // we don't need double buffering
    WGL_DOUBLE_BUFFER_ARB, GL_FALSE,
    // enable FSAA
    WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
    WGL_SAMPLES_ARB, numMultiSamples,
    // end with a NULL terminator
    NULL
  };
  int newPixelFormat = 0;
  UINT numFormats = 0;
  // wglChoosePixelFormat, wglCreatePbuffer and wglGetPbufferDC below are the
  // WGL_ARB_pixel_format / WGL_ARB_pbuffer entry points (see the loading sketch below)
  wglChoosePixelFormat (dc.m_hDC, attributes, NULL, 1, &newPixelFormat, &numFormats);
  // try to use OpenGL pbuffers to render to an off-screen buffer
  m_hBitmapPbuffer = wglCreatePbuffer (dc.m_hDC, newPixelFormat, clientRect.Width(), clientRect.Height(), NULL);
  m_hBitmapPbufferDC = wglGetPbufferDC (m_hBitmapPbuffer);
  /* No need to set the pixel format because the pbuffer is already
  created with the pixel format specified. */
  m_hBitmapRC = ::wglCreateContext (m_hBitmapPbufferDC);

  return TRUE;
}
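
For completeness: the pixel-format and pbuffer calls above are the WGL_ARB_pixel_format / WGL_ARB_pbuffer extension entry points. They are obtained roughly like this (a sketch only; the typedefs come from <wglext.h>, and in my code the wrappers simply drop the ARB suffix):

// done once with a GL context current, before the code above runs
PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
  (PFNWGLCHOOSEPIXELFORMATARBPROC) wglGetProcAddress ("wglChoosePixelFormatARB");
PFNWGLCREATEPBUFFERARBPROC wglCreatePbufferARB =
  (PFNWGLCREATEPBUFFERARBPROC) wglGetProcAddress ("wglCreatePbufferARB");
PFNWGLGETPBUFFERDCARBPROC wglGetPbufferDCARB =
  (PFNWGLGETPBUFFERDCARBPROC) wglGetProcAddress ("wglGetPbufferDCARB");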

void CAlphaBlendDlg::OnPaint()
{
  CPaintDC dc(this); // device context for painting
  CRect clientRect;
  GetClientRect (clientRect);

  CDC *pBitmapDC = new CDC();
  pBitmapDC->CreateCompatibleDC (&dc);

  BITMAPINFO bitMapInfo;
  memset (&bitMapInfo, 0, sizeof(bitMapInfo));
  bitMapInfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
  bitMapInfo.bmiHeader.biWidth = clientRect.right;
  bitMapInfo.bmiHeader.biHeight = clientRect.bottom;
  bitMapInfo.bmiHeader.biPlanes = 1;
  bitMapInfo.bmiHeader.biBitCount = 32;
  bitMapInfo.bmiHeader.biCompression = BI_RGB;

  void *pBitmapBits = NULL;
  HBITMAP hBitmapDIB = ::CreateDIBSection (pBitmapDC->m_hDC, &bitMapInfo, DIB_RGB_COLORS, &pBitmapBits, NULL, 0);
  pBitmapDC->SelectObject (hBitmapDIB);

  VERIFY(wglMakeCurrent (m_hBitmapPbufferDC, m_hBitmapRC) != FALSE);

  // set up orthogonal projection so we can specify window coordinates for vertices
  glMatrixMode (GL_PROJECTION);
  glLoadIdentity();
  gluOrtho2D (0, clientRect.right, clientRect.bottom, 0);
  glMatrixMode (GL_MODELVIEW);
  glLoadIdentity();
  glViewport (0, 0, clientRect.Width(), clientRect.Height());
  glDisable(GL_DEPTH_TEST);
  glDrawBuffer (GL_FRONT);
  glClearColor(0,0,0,0);
  glClear(GL_COLOR_BUFFER_BIT);

  // draw a bunch of filled white triangles
  glColor4ub (255,255,255,255);
  static const int numTriangles = 100;
  static const double triangleSizeFactor = 0.1;
  const int triangleSize = (int) (clientRect.Width() * triangleSizeFactor);
  glBegin(GL_TRIANGLES);
  for (int index = 0; index < numTriangles; index++)
  {
    int x = rand() % (clientRect.Width() - triangleSize) + triangleSize / 2;
    int y = rand() % (clientRect.Height() - triangleSize) + triangleSize / 2;
    for (int vertexIndex = 0; vertexIndex < 3; vertexIndex++)
    {
      int vx = x + (rand() % triangleSize) - triangleSize / 2;
      int vy = y + (rand() % triangleSize) - triangleSize / 2;
      glVertex3i (vx,vy,0);
    }
  }
  glEnd();

  // read the pixels
  glReadBuffer(GL_FRONT);
  glReadPixels (0, 0, clientRect.Width(), clientRect.Height(), GL_BGRA_EXT, GL_UNSIGNED_BYTE, pBitmapBits);
  VERIFY(wglMakeCurrent (NULL, NULL) != FALSE);

  // use pBitmapBits to draw onto DC using AlphaBlend
  // ...

  // cleanup
  delete pBitmapDC;
  pBitmapDC = NULL;
  ::DeleteObject (hBitmapDIB);
  hBitmapDIB = NULL;
}
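
For reference, the elided AlphaBlend step looks roughly like this (a sketch rather than my exact code; AlphaBlend requires linking Msimg32.lib, and with AC_SRC_ALPHA it expects premultiplied source pixels):

// draw the 32-bit DIB onto the dialog's DC using per-pixel alpha
BLENDFUNCTION blend;
blend.BlendOp             = AC_SRC_OVER;
blend.BlendFlags          = 0;
blend.SourceConstantAlpha = 255;           // use per-pixel alpha only
blend.AlphaFormat         = AC_SRC_ALPHA;  // source bitmap carries (premultiplied) alpha
::AlphaBlend (dc.m_hDC, 0, 0, clientRect.Width(), clientRect.Height(),
              pBitmapDC->m_hDC, 0, 0, clientRect.Width(), clientRect.Height(),
              blend);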

To visualize what is happening, I created a transparent bitmap from a 32x FSAA image with a bunch of white triangles. The bitmap is drawn onto a device context with a white background. The entire image should be white, but you can see the triangle outlines. This is because of the color component overflow caused by the alpha value not matching the color component values:

[attached image: the bitmap drawn onto a white background; the triangle outlines are visible along the edges]

Is there an OpenGL setting that will make the multisampled alpha values be computed the same way the RGB components are computed? Or is this a bug in the Nvidia OpenGL driver?

More information regarding the source of this problem: I tested my sample code on an Intel HD Graphics P4000 integrated video adapter and there is no problem. That is, for the edges of white triangles, the alpha value is always the same as the RGB component values. For example, if each RGB component is 127, the alpha value is also 127. Maybe I need to contact Nvidia about this issue…

The driver might use sRGB color space for resolving color samples.

What happens if you disable blending and leave MSAA enabled?
What happens if you disable MSAA and leave blending enabled?

A couple of thoughts on possible causes: blending, MSAA, CSAA, coverage, filtering.

First, coverage + MSAA. If you aren’t passing exactly the same endpoints for a shared edge in adjacent triangles, you have no reason to expect that the edge will be rasterized identically for both, and thus that it covers all of the samples in the boundary pixels (and even with identical endpoints, I don’t know for sure that position-invariance is guaranteed – check the spec). That could lead to problems: some subsamples are affected while others aren’t. At a glance, it appears this may apply to your code.

Also, you’re not doing anything with alpha-to-coverage, are you? And I don’t see where you’re actually enabling blending and setting your blend function.
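
If the triangles are meant to be blended, that would need something like this before the draw calls (a generic sketch, since I don’t see it in the posted code):

// generic non-premultiplied alpha blending setup (not in the posted code)
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);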

Second, blending. If you have the previous problem, this is just going to aggravate it, since some subsamples will get the blend while others don’t. Then the downsample mixes everything together and makes it even harder to figure out what happened.

Third, CSAA. Make sure you’re using an honest-to-goodness pure MSAA mode, not CSAA or an MSAA/CSAA mix, which seems to be common in the NVidia drivers nowadays. It could be a CSAA color quantization issue. For instance, here in the Linux NVidia drivers, for standard system framebuffer formats, I see:


    Valid 'FSAA' Values
      value - description
        0   -   Off
        1   -   2x (2xMS)
        5   -   4x (4xMS)
        7   -   8x (4xMS, 4xCS)
        8   -   16x (4xMS, 12xCS)
        9   -   8x (4xSS, 2xMS)
       10   -   8x (8xMS)
       11   -   16x (4xSS, 4xMS)
       12   -   16x (8xMS, 8xCS)
       14   -   32x (8xMS, 24xCS)

Fourth, filtering. If the driver is using a kernel filter which pulls in samples from adjacent pixels (à la Quincunx from the olden days), that will give you some bleed-over between samples in different pixels. This won’t be the root cause, but it will make it harder to infer what is going on with the individual subsamples.

You might render this to an MSAA texture instead and then use texelFetch to grab the individual colors assigned to the subsamples to see exactly what the values are pre-downsample.
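
Something along these lines (a rough sketch assuming a GL 3.2+ context; the texture/FBO setup and the name msTex are made up for illustration):

// create a 2-sample RGBA texture, attach it to an FBO, and render the
// triangles into that FBO instead of the pbuffer front buffer
GLuint msTex = 0;
glGenTextures (1, &msTex);
glBindTexture (GL_TEXTURE_2D_MULTISAMPLE, msTex);
glTexImage2DMultisample (GL_TEXTURE_2D_MULTISAMPLE, 2, GL_RGBA8, width, height, GL_TRUE);

// fragment shader used to dump one subsample at a time into a single-sample target:
//   uniform sampler2DMS msTex;
//   uniform int sampleIndex;
//   out vec4 color;
//   void main () { color = texelFetch (msTex, ivec2 (gl_FragCoord.xy), sampleIndex); }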

You might also simplify your test case to just two triangles so it’s easier to debug and reproduce, put it in a standalone GLUT test program, and post it so others can easily run it for cross-comparison.

Framebuffer sRGB was also mentioned – good point and should be checked.

Doesn’t happen often with NVidia but could be a driver bug.

If you convert the value 0.5 from linear to sRGB, you get 0.735360635, which is 187.51 / 255, so I think sRGB blending is probably the issue. See whether

glDisable(GL_FRAMEBUFFER_SRGB);

fixes it.
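
(For reference, this is the standard linear-to-sRGB encoding used to get that number; just a standalone check:)

#include <cmath>
#include <cstdio>

// standard linear -> sRGB encoding
static double linearToSrgb (double c)
{
  return (c <= 0.0031308) ? 12.92 * c : 1.055 * pow (c, 1.0 / 2.4) - 0.055;
}

int main ()
{
  double s = linearToSrgb (0.5);
  printf ("%f -> %f (%.2f / 255)\n", 0.5, s, s * 255.0); // ~0.7354 -> ~187.5
  return 0;
}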

I tried using glDisable(GL_FRAMEBUFFER_SRGB) but it didn’t fix the problem. Note that blending is not enabled. The problem happens when the edge of a triangle is downsampled from the 2x multisample buffer.

I registered as an Nvidia nvdeveloper member and filed a question/bug report with them. I’ll see what they have to say.