PDA

View Full Version : Is it possible to implement antialiasing in OpenGL 2.1 with an integrated Intel card?



danildushistov
05-24-2016, 08:08 AM
Hello!

I want to make a few simple examples with 3D graphics, but I can't get anti-aliasing working in my programs. I have a laptop with Arch Linux and an integrated Intel graphics card. Unfortunately, my hardware supports only OpenGL 2.1, so I can't follow most modern tutorials, which require at least OpenGL 3.

I tried to use multisampling as described at the bottom of this article (https://www.opengl.org/wiki/Multisampling), but it doesn't work on my laptop (I checked it on another Linux machine with a Radeon video card, and there it works well). However, I was able to get anti-aliasing working using the accumulation buffer, but it reduces the frame rate drastically (I now get about 4 fps with a fullscreen scene).

Can I implement a CPU-based anti-aliasing algorithm instead? For example, is it possible to get the full scene after rendering as an array of pixels, perform the anti-aliasing "by hand", and then output the post-processed image to the screen? I found some tutorials that answer these questions (such as adding super-sampling anti-aliasing or image post-processing), but they require at least OpenGL 3, while I need to implement anti-aliasing on version 2.1.
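What I have in mind for the CPU step is something like the sketch below: render the scene at twice the window size, read it back, average each 2x2 block into one output pixel (a box filter), and draw the result. The downsample itself is plain C++; the glReadPixels/glDrawPixels calls that would surround it are only sketched in the comments, and I'd expect the readback round trip to dominate the frame time:

```cpp
#include <cassert>
#include <vector>

// Average each 2x2 block of a supersampled RGB image down to one output
// pixel (a simple box filter). 'src' holds w*h tightly packed RGB pixels,
// with w and h even; the result is (w/2)*(h/2) pixels. In the real program
// 'src' would come from
//   glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, src.data());
// on a scene rendered at twice the target size, and the result would be
// written back with glDrawPixels.
std::vector<unsigned char> downsample2x(const std::vector<unsigned char> &src,
                                        int w, int h)
{
    std::vector<unsigned char> dst((w / 2) * (h / 2) * 3);
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x)
            for (int c = 0; c < 3; ++c)
            {
                // sum the four source samples covering this output pixel
                int sum = src[((2 * y)     * w + 2 * x)     * 3 + c]
                        + src[((2 * y)     * w + 2 * x + 1) * 3 + c]
                        + src[((2 * y + 1) * w + 2 * x)     * 3 + c]
                        + src[((2 * y + 1) * w + 2 * x + 1) * 3 + c];
                dst[(y * (w / 2) + x) * 3 + c] = (unsigned char)(sum / 4);
            }
    return dst;
}
```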

Interestingly, glxinfo and glinfo do show GL_ARB_multisample among my OpenGL extensions:



$ glxinfo
...
GLX version: 1.4
GLX extensions:
...
GLX_ARB_multisample,
GLX_SGIS_multisample,
...
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Intel Open Source Technology Center (0x8086)
Device: Mesa DRI Intel(R) Ironlake Mobile (0x46)
Version: 11.2.1
Accelerated: yes
Video memory: 1536MB
Unified memory: yes
Preferred profile: compat (0x2)
Max core profile version: 0.0
Max compat profile version: 2.1
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 2.0
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ironlake Mobile
OpenGL version string: 2.1 Mesa 11.2.1
OpenGL shading language version string: 1.20
OpenGL extensions:
...
GL_ARB_multisample,
...

$ glinfo
GL_VERSION: 2.1 Mesa 11.2.1
GL_RENDERER: Mesa DRI Intel(R) Ironlake Mobile
GL_VENDOR: Intel Open Source Technology Center
GL_EXTENSIONS: GL_ARB_multisample ...
GL_SHADING_LANGUAGE_VERSION = 1.20
GLU_VERSION: 1.3
GLU_EXTENSIONS: GLU_EXT_nurbs_tessellator GLU_EXT_object_space_tess
GLUT_API_VERSION: 4
GLUT_XLIB_IMPLEMENTATION: 13


Here is my current code using the accumulation buffer:



#include <iostream>
#include <stdio.h>
#include <stdlib.h> // for malloc()
#include <math.h>

#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>

#include "jitter.h"

using namespace std;

#define PI_ 3.14159265358979323846
#define ACSIZE 8

// variables for OpenGL
GLuint textureId; // the id of the texture
GLUquadric *quad;
float rotate1 = 0, rotate2 = 0;

struct BMPImage
{
    int width;
    int height;
    char *data;
};

bool antialiasing;

void getBitmapImageData(const char *pFileName, BMPImage *pImage)
{
    FILE *pFile = NULL;
    unsigned short nNumPlanes;
    unsigned short nNumBPP;
    int i;

    if ((pFile = fopen(pFileName, "rb")) == NULL)
    {
        cout << "ERROR: getBitmapImageData - " << pFileName << " not found." << endl;
        return; // don't use a NULL FILE* below
    }

    // seek forward to width and height info
    fseek(pFile, 18, SEEK_CUR);

    if ((i = fread(&pImage->width, 4, 1, pFile)) != 1)
        cout << "ERROR: getBitmapImageData - Couldn't read width from " << pFileName << "." << endl;

    if ((i = fread(&pImage->height, 4, 1, pFile)) != 1)
        cout << "ERROR: getBitmapImageData - Couldn't read height from " << pFileName << "." << endl;

    if ((fread(&nNumPlanes, 2, 1, pFile)) != 1)
        cout << "ERROR: getBitmapImageData - Couldn't read plane count from " << pFileName << "." << endl;

    if (nNumPlanes != 1)
        cout << "ERROR: getBitmapImageData - Plane count from " << pFileName << " is not 1: " << nNumPlanes << "." << endl;

    if ((i = fread(&nNumBPP, 2, 1, pFile)) != 1)
        cout << "ERROR: getBitmapImageData - Couldn't read BPP from " << pFileName << "." << endl;

    if (nNumBPP != 24)
        cout << "ERROR: getBitmapImageData - BPP from " << pFileName << " is not 24: " << nNumBPP << "." << endl;

    // seek forward to image data
    fseek(pFile, 24, SEEK_CUR);

    // calculate the image's total size (3 bytes per pixel for 24-bit color BMP)
    int nTotalImagesize = (pImage->width * pImage->height) * 3;

    pImage->data = (char*)malloc(nTotalImagesize);

    if ((i = fread(pImage->data, nTotalImagesize, 1, pFile)) != 1)
        cout << "ERROR: getBitmapImageData - Couldn't read image data from " << pFileName << "." << endl;

    // finally, rearrange BGR to RGB
    char charTemp;
    for (i = 0; i < nTotalImagesize; i += 3)
    {
        charTemp = pImage->data[i];
        pImage->data[i] = pImage->data[i+2];
        pImage->data[i+2] = charTemp;
    }

    fclose(pFile);
}

void keyboard(unsigned char key, int x, int y) {
    switch (key)
    {
    case 27: // escape key
        exit(0);
        break;
    }
}

void special(int key, int x, int y) {
    switch (key)
    {
    case GLUT_KEY_LEFT:
        rotate1 += 2.0f;
        if (rotate1 > 360)
        {
            rotate1 = -360;
        }
        glutPostRedisplay();
        break;
    case GLUT_KEY_RIGHT:
        rotate1 -= 2.0f;
        if (rotate1 < -360)
        {
            rotate1 = 360;
        }
        glutPostRedisplay();
        break;
    case GLUT_KEY_DOWN:
        rotate2 += 2.0f;
        if (rotate2 > 360)
        {
            rotate2 = -360;
        }
        glutPostRedisplay();
        break;
    case GLUT_KEY_UP:
        rotate2 -= 2.0f;
        if (rotate2 < -360)
        {
            rotate2 = 360;
        }
        glutPostRedisplay();
        break;
    }
}

void load_texture() {
    glGenTextures(1, &textureId); // make room for our texture
    glBindTexture(GL_TEXTURE_2D, textureId); // tell OpenGL which texture to edit

    BMPImage textureImage;

    getBitmapImageData("mars.bmp", &textureImage);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, textureImage.width, textureImage.height,
                 0, GL_RGB, GL_UNSIGNED_BYTE, textureImage.data);
}

void reshape(int w, int h) {
    if (h == 0) h = 1; // avoid dividing by zero in the aspect ratio
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (float)w / (float)h, 0.1, 100.0);
}

void display_sphere()
{
    glTranslatef(0.0f, 0.0f, -10.0f);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);

    glRotatef(90, 1.0f, 0.0f, 0.0f);
    glRotatef(rotate1, 0.0f, 0.0f, 1.0f);
    glRotatef(rotate2, 1.0f, 0.0f, 0.0f);
    gluQuadricTexture(quad, 1);

    gluSphere(quad, 3, 80, 50);
}

void acc_frustum(GLdouble left, GLdouble right, GLdouble bottom,
                 GLdouble top, GLdouble nnear, GLdouble ffar, GLdouble pixdx,
                 GLdouble pixdy, GLdouble eyedx, GLdouble eyedy, GLdouble focus)
{
    GLdouble xwsize, ywsize;
    GLdouble dx, dy;
    GLint viewport[4];

    glGetIntegerv(GL_VIEWPORT, viewport);

    xwsize = right - left;
    ywsize = top - bottom;

    dx = -(pixdx * xwsize / (GLdouble)viewport[2] + eyedx * nnear / focus);
    dy = -(pixdy * ywsize / (GLdouble)viewport[3] + eyedy * nnear / focus);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(left + dx, right + dx, bottom + dy, top + dy, nnear, ffar);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-eyedx, -eyedy, 0.0);
}

void acc_perspective(GLdouble fovy, GLdouble aspect, GLdouble nnear,
                     GLdouble ffar, GLdouble pixdx, GLdouble pixdy,
                     GLdouble eyedx, GLdouble eyedy, GLdouble focus)
{
    GLdouble fov2, left, right, bottom, top;

    fov2 = ((fovy * PI_) / 180.0) / 2.0;

    top = nnear / (cos(fov2) / sin(fov2)); // i.e. nnear * tan(fov2)
    bottom = -top;

    right = top * aspect;
    left = -right;

    acc_frustum(left, right, bottom, top, nnear, ffar,
                pixdx, pixdy, eyedx, eyedy, focus);
}

void display() {
    if (antialiasing)
    {
        GLint viewport[4];
        int jitter;

        glGetIntegerv(GL_VIEWPORT, viewport);

        glClear(GL_ACCUM_BUFFER_BIT);
        for (jitter = 0; jitter < ACSIZE; jitter++) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            acc_perspective(50.0, (GLdouble)viewport[2] / (GLdouble)viewport[3],
                            1.0, 15.0, j8[jitter].x, j8[jitter].y, 0.0, 0.0, 1.0);
            display_sphere();
            glAccum(GL_ACCUM, 1.0 / ACSIZE);
        }
        glAccum(GL_RETURN, 1.0);
        glFlush();
    }
    else
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        display_sphere();

        glutSwapBuffers();
    }
}

void init() {
    glEnable(GL_DEPTH_TEST);
    //glEnable(GL_LIGHTING);
    //glEnable(GL_LIGHT0);
    glEnable(GL_NORMALIZE);
    glEnable(GL_COLOR_MATERIAL);
    quad = gluNewQuadric();

    load_texture();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    //glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_ACCUM | GLUT_DEPTH);

    //glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE);
    //glEnable(GL_BLEND);
    //glEnable(GL_POLYGON_SMOOTH);

    //glEnable(GL_MULTISAMPLE_ARB);

    glutInitWindowSize(500, 500);

    glutCreateWindow("Planet Test");

    antialiasing = true;

    init();

    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    glutSpecialFunc(special);
    glutReshapeFunc(reshape);

    glutMainLoop();

    return 0;
}

GClements
05-24-2016, 10:35 AM
I want to make a few simple examples with 3D graphics, but I can't get anti-aliasing working in my programs. I have a laptop with Arch Linux and an integrated Intel graphics card. Unfortunately, my hardware supports only OpenGL 2.1, so I can't follow most modern tutorials, which require at least OpenGL 3.

The most significant feature which was added in OpenGL 3 was framebuffer objects, which allow rendering into a texture. Check whether your implementation supports the ARB_framebuffer_object extension.
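A sketch of such a check (under GL 2.1 the extension list is the single space-separated string returned by glGetString(GL_EXTENSIONS); a tokenised comparison avoids false positives when one extension name is a prefix of another):

```cpp
#include <cassert>
#include <cstring>

// Return true if 'name' appears as a whole space-delimited token in
// 'ext_list'. A plain strstr() alone is not enough, because e.g.
// "GL_ARB_framebuffer_object" would also match inside a longer name,
// so the match is accepted only on token boundaries.
bool has_extension(const char *ext_list, const char *name)
{
    size_t len = std::strlen(name);
    const char *p = ext_list;
    while ((p = std::strstr(p, name)) != NULL)
    {
        bool starts = (p == ext_list || p[-1] == ' ');
        bool ends   = (p[len] == '\0' || p[len] == ' ');
        if (starts && ends)
            return true;
        p += len; // substring match only; keep scanning
    }
    return false;
}

// In the program, after the context exists:
//   if (has_extension((const char *)glGetString(GL_EXTENSIONS),
//                     "GL_ARB_framebuffer_object")) { /* use FBOs */ }
```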



Can I implement CPU-based anti-aliasing algorithm? For example, is it possible to get the full scene after rendering as an array of pixels and perform anti-aliasing "by hand", and then output the post-processed image to screen?

It's possible, but it may well be too slow to be of any use. Apart from the memory bandwidth for reading the framebuffer contents and the CPU usage, reading from the framebuffer will block until any pending commands which render to the framebuffer have finished.



It's interesting, but glxinfo and glinfo show GL_ARB_multisample in my OpenGL extensions:

Even if the extension is supported, it's possible that the only supported framebuffer formats have a single sample (glxinfo will confirm or refute this).