A simple question about MSAA with magnification

This program is a version of the FBO example from the book OpenGL Programming Guide (the red book), fourth edition, modified slightly to add multisampling and magnification rather than being a straight copy-and-paste. Because of some fault in it, it runs on neither ATI nor NVIDIA graphics cards when multisampling is turned on.


//fbo.c
#include "stdio.h"
#include "stdlib.h"
#include "string.h"
#include <windows.h>
#include "gl\glew.h"

#define GLUT_DISABLE_ATEXIT_HACK

#include "glut.h"

enum {Color, Depth, NumRenderbuffers};

GLuint framebuffer, renderbuffer[NumRenderbuffers];

void drawTiangle()
{
    glColor3f(0.0f,1.0f,0.0f);
    glBegin(GL_TRIANGLES);
        glVertex3f(-1.0f,1.0f,0.0f);
        glVertex3f(1.0f,0.0f,0.0f);
        glVertex3f(-1.0f,-1.0f,0.0f);
    glEnd();
}

void init()
{
    GLenum status;
    GLint samples;
    GLint bufs;
    GLenum err;
    char ErrorMessage[1024];
    int value;
    glEnable(GL_MULTISAMPLE_EXT);
    //glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_NICEST); 
    glGetIntegerv (GL_SAMPLE_BUFFERS, &bufs);
    glGetIntegerv(GL_MAX_SAMPLES_EXT, &samples);
    printf("MSAA: buffers = %d samples = %d\n", bufs, samples);
    glGenRenderbuffersEXT(NumRenderbuffers, renderbuffer);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT,renderbuffer[Color]);
    //glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,GL_RGBA8, 256,256);
    glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT,4,GL_RGBA8, 256,256);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT,renderbuffer[Depth]);
    //glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,GL_DEPTH_COMPONENT, 256,256);
    glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT,4,GL_DEPTH_COMPONENT24, 256,256);
    glGenFramebuffersEXT(1, & framebuffer);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT,framebuffer);
    glFramebufferRenderbufferEXT(GL_DRAW_FRAMEBUFFER_EXT,GL_COLOR_ATTACHMENT0_EXT,GL_RENDERBUFFER_EXT,renderbuffer[Color]);
    glFramebufferRenderbufferEXT(GL_DRAW_FRAMEBUFFER_EXT,GL_DEPTH_ATTACHMENT_EXT,GL_RENDERBUFFER_EXT,renderbuffer[Depth]);
    status=glCheckFramebufferStatusEXT(GL_DRAW_FRAMEBUFFER_EXT);
    glEnable(GL_DEPTH_TEST);
}


void display()
{
    GLenum status;
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT,framebuffer);
    status=glCheckFramebufferStatusEXT(GL_DRAW_FRAMEBUFFER_EXT);
    glViewport(0,0,256,256);
    //red
    glClearColor(1.0,0.0,0.0,1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glClearDepth(0.0f);
    drawTiangle();
    glViewport(0,0,512,512);
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT,framebuffer);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT,0);
    status=glCheckFramebufferStatusEXT(GL_DRAW_FRAMEBUFFER_EXT);

    //blue
    //glClearColor(0.0,0.0,1.0,1.0);
    //glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    //glReadBuffer(GL_COLOR_ATTACHMENT0);
    glBlitFramebufferEXT(0,0,255,255,0,0,511,511,GL_COLOR_BUFFER_BIT,GL_NEAREST);
    status=glCheckFramebufferStatusEXT(GL_DRAW_FRAMEBUFFER_EXT);
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
   glutInit(&argc, argv);
   glutInitDisplayMode(GLUT_DOUBLE| GLUT_RGBA | GLUT_DEPTH | GLUT_MULTISAMPLE);
   glutInitWindowSize(512, 512);
   glutInitWindowPosition(100, 100);
   glutCreateWindow("Frame buffer object");

   //Initialize the glew library.
   glewInit();

   init();
   glutDisplayFunc(display);
   glutMainLoop();
   return 0;
}

Please don’t post multiple copies of a single message. Also, use code tags to mark code blocks so the formatting is preserved. Fixed that for you.

If I hack the Windows-isms out of your #includes as follows:


//fbo.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#ifdef WIN32
# include <windows.h>
#endif
#include <GL/glew.h>
 
#define GLUT_DISABLE_ATEXIT_HACK
 
#include <GL/glut.h>
...

It builds and runs just fine on NVidia/Linux. Creates a GL window visual 0x31 (4x MSAA), and it successfully creates a renderbuffer FBO with 4 MSAA samples.

In your display() function though, your glBlitFramebuffer fails with “invalid operation” for this reason:

GL_INVALID_OPERATION is generated if [i]GL_SAMPLE_BUFFERS[/i] for both read and draw
buffers is greater than zero and the dimensions of the source and destination
rectangles are not identical.

Change the target to 0,0,255,255 and it’ll work. You also need to check for GL errors.

Chances are you don’t want GLUT to create a MSAA system framebuffer. So remove GLUT_MULTISAMPLE.
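
For the “check for GL errors” advice, a minimal helper along these lines works. This is a sketch, not code from the original post: the function name is mine, and it assumes a current GL context.

```c
// Drain and report every pending GL error; call it after suspect GL calls,
// e.g. checkGLError("after glBlitFramebufferEXT").
// Requires a current GL context (glGetError is meaningless without one).
static void checkGLError(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        printf("GL error 0x%04X %s\n", err, where);
}
```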

[QUOTE=Dark Photon;1245140]In your display() function your glBlitFramebuffer fails with “invalid operation” because the dimensions of the source and destination rectangles are not identical. Change the target to 0,0,255,255 and it’ll work.

Chances are you don’t want GLUT to create a MSAA system framebuffer. So remove GLUT_MULTISAMPLE.[/QUOTE]

The error is caused because the dimensions of the source and destination rectangles are not identical. But the goal of this program is to first render an image using multisampling and then map it to the screen magnified, as a static image.
So can you provide a solution for how this can be done as fast as possible?

Thanks in advance and God bless you!

[QUOTE=newbiecow;1245168]The error is caused because the dimensions of the source and destination rectangles are not identical. But the goal of this program is to first render an image using multisampling and then map it to the screen magnified, as a static image.
So can you provide a solution for how this can be done as fast as possible?[/QUOTE]

You can’t resize a multisample buffer (with glBlitFramebuffer). Downsample first. Then resize. (two blit calls)

If you write your own shader for this, you could of course downsample and resize all-in-one-go.

[QUOTE=Dark Photon;1245217]You can’t resize a multisample buffer (with glBlitFramebuffer). Downsample first. Then resize. (two blit calls)

If you write your own shader for this, you could of course downsample and resize all-in-one-go.[/QUOTE]

So can you please paste these two blit calls here?

They’re both glBlitFramebuffer. In the first, call it to blit between an MSAA target and a single-sample target (same resolution) – this does the downsample. In the second, call it to blit between that single-sample target and another single-sample target (different resolutions) – this does the resize.
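
To make the two calls concrete, here is a minimal sketch of the downsample-then-resize sequence described above. The names msaaFBO and resolveFBO, and the 256→512 sizes, are assumptions carried over from the code earlier in the thread, not part of the original reply:

```c
// Sketch: downsample, then resize, with two glBlitFramebuffer calls.
// Assumes msaaFBO is the 256x256 multisample FBO from the code above,
// resolveFBO is a hypothetical 256x256 single-sample FBO with a color
// attachment, and the window (framebuffer 0) is 512x512.

// Pass 1: resolve (downsample) MSAA -> single-sample. The source and
// destination rectangles must have identical dimensions here.
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msaaFBO);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolveFBO);
glBlitFramebufferEXT(0, 0, 256, 256,
                     0, 0, 256, 256,
                     GL_COLOR_BUFFER_BIT, GL_NEAREST);

// Pass 2: resize single-sample -> window. Both buffers are single-sample
// now, so different rectangle dimensions are allowed (the magnification).
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, resolveFBO);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
glBlitFramebufferEXT(0, 0, 256, 256,
                     0, 0, 512, 512,
                     GL_COLOR_BUFFER_BIT, GL_LINEAR);
```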

Thanks a lot! I have done these operations. But after my trials, the most important thing to notice is that between the two glBlitFramebuffer calls, a
glDisable(GL_MULTISAMPLE);
call is needed. Otherwise a GL_INVALID_OPERATION error is reported because the sample counts of the read and draw buffers are not the same.

Thanks for all your help, Dark Photon!

Best Regards,

newbiecow

Actually, whether the buffer has multisample rasterization enabled or not is a separate issue. The problem comes when you try to blit between two MSAA buffers which have different numbers of samples per pixel.

[QUOTE=Dark Photon;1245217]If you write your own shader for this, you could of course downsample and resize all-in-one-go.[/QUOTE]

You mean that if I write a shader, I can both downsample and resize? Then can you tell me: if I write a shader, can I use it to sample the stencil buffer in my own way?

Best Regards,

newbiecow

Yes, if your GPU/driver supports OpenGL 4.3 and/or ARB_stencil_texturing.

If not, probably some PBO copy games you can play to “retype” a stencil or depth/stencil texture so you can read it in the shader.

[QUOTE=Dark Photon;1245671]Yes, if your GPU/driver supports OpenGL 4.3 and/or ARB_stencil_texturing.

If not, probably some PBO copy games you can play to “retype” a stencil or depth/stencil texture so you can read it in the shader.[/QUOTE]

Then can you tell me: if my hardware supports OpenGL 4.3 and ARB_stencil_texturing, can I use a shader program to control the stencil buffer sampling in my own way?

If so, is such a program part of a vertex shader, part of a fragment shader, or an independent shader program separate from the former two?

And can you tell me what software can test whether a piece of hardware supports OpenGL 4.3 and ARB_stencil_texturing? I have tried AIDA64, but it seems the newest OpenGL version it can test is only 4.2.

Best Regards,

newbiecow

Query GL_RENDERER and GL_VERSION and print them out. You can also query for extensions by name.

It’s also possible to use a different GL context create call (CreateContextAttribs) and request or force a context of a specific GL version. Check your GLUT implementation as it might support this. Or you can call it yourself, though that means ditching GLUT.

[QUOTE=Dark Photon;1245687]Query GL_RENDERER and GL_VERSION and print them out. You can also query for extensions by name.[/QUOTE]

Dear Dark Photon,

Can you please also answer my former two questions?

Best Regards,

newbiecow

I was answering your first question: how to tell whether your GPU/driver has support.

Now that I re-read it though, if I infer the grammar I think you actually meant, then the answer is yes. However, you can very likely do this on pre-GL4.3 hardware as well using PBO copies.

As to your 2nd question, you just read the texture in the shader like normal, so read it in the shader of your choice.

As to the 3rd, I don’t know.