Opinions on possible buggy behaviour...

Given the following code, I get bizarre behaviour on OSX using ATI graphics cards. NVidia cards work fine, but ATI ones do not. (You’ll need to #define BUGGY_BEHAVIOUR to see the problem).

// Compile with:
//   g++ drawpixels.cpp -o drawpixels -framework GLUT -framework OpenGL -lobjc
 
#include <stdlib.h>
#ifdef __APPLE__
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
  
  
static unsigned int imageWidth = 720, imageHeight = 576;
static unsigned char *imageData = NULL;
  
  
static void initialise(void)
{
  glClearColor(0.0, 0.0, 0.0, 1.0);
  glDisable(GL_DITHER);
  
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
#ifdef BUGGY_BEHAVIOUR
  gluOrtho2D(-0.5, imageWidth + 0.5,
             -0.5, imageHeight + 0.5);
#else
  gluOrtho2D(0.0, imageWidth,
             0.0, imageHeight);
#endif // BUGGY_BEHAVIOUR
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
}
  
  
static void redraw(void)
{
  glClear(GL_COLOR_BUFFER_BIT);
  
  glRasterPos2f(0.0, 0.0);
  glDrawPixels(imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
  
  glutSwapBuffers();
}
  
  
  
static void keyPress(unsigned char key, int x, int y)
{
  switch (key)
  {
  case 27:
    exit(0);
    break;
  
  default:
    break;
  }
}
  
  
  
int main(int argc, char *argv[])
{
  // Set up the RGBA image buffer (a white grid on black)
  
  imageData = new unsigned char[imageWidth * imageHeight * 4];
  for (unsigned int y = 0; y < imageHeight; y++)
  {
    unsigned char *rowPtr = imageData + y*imageWidth*4;
    for (unsigned int x = 0; x < imageWidth; x++)
    {
      // White grid line on every 4th row/column, black elsewhere
      unsigned char v = (((x & 3) == 0) || ((y & 3) == 0)) ? 255 : 0;
      *rowPtr++ = v;  // R
      *rowPtr++ = v;  // G
      *rowPtr++ = v;  // B
      *rowPtr++ = 0;  // A
    }
  }
  
  glutInit(&argc, argv);
  glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
  glutInitWindowSize(imageWidth, imageHeight);
  glutCreateWindow("test");
  
  initialise();
  glutKeyboardFunc(keyPress);
  glutDisplayFunc(redraw);
  glutMainLoop();
  
  return 0;
}

The only difference is the gluOrtho2D() call, which, to my mind, should not cause garbage output from glDrawPixels().

Is this kind of behaviour implementation dependent, or is it a buggy OpenGL driver implementation?

It looks to me as though the window coordinate calculation (the current raster position plus each pixel's offset within glDrawPixels) is suffering from rounding errors. I didn't expect glDrawPixels to produce this kind of output regardless of the current raster position.

Anyone have any thoughts?

You didn’t describe what “garbage” means. However, you made the ortho space a pixel bigger in the “buggy” case: you need to subtract 0.5 from all four parameters to shift the coordinates onto pixel centers. The parameters are not (x, y, w, h); they are (left, right, bottom, top).
BUT: you want that shift for integer coordinates when drawing lines and points. Never do it for pixel graphics, which work best with coordinates running from the bottom-left corner to the top-right corner of the pixels, so that their areas are hit exactly.
Your setup is walking the razor’s edge of the pixel centers.
gluOrtho2D(0.0, imageWidth, 0.0, imageHeight) together with glRasterPos2i(0, 0) is what works and what you really want.
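
In other words (reusing the variable names from your sample):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, imageWidth, 0.0, imageHeight);  // one unit per pixel, edges on pixel corners
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glRasterPos2i(0, 0);  // exactly the bottom-left pixel corner
glDrawPixels(imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, imageData);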

You didn’t describe what “garbage” means

The image is distorted/corrupted… there’s not really an easier way to describe it. It’s only corrupted on OSX with an ATI graphics card. Linux, IRIX, Windows and OSX+Nvidia all work fine.

However, you made the ortho space a pixel bigger in the “buggy” case: you need to subtract 0.5 from all four parameters to shift the coordinates onto pixel centers. The parameters are not (x, y, w, h); they are (left, right, bottom, top).
That’s probably a typo on my part, but it’s kind of irrelevant; see below.

BUT: you want that shift for integer coordinates when drawing lines and points. Never do it for pixel graphics, which work best with coordinates running from the bottom-left corner to the top-right corner of the pixels, so that their areas are hit exactly.
Your setup is walking the razor’s edge of the pixel centers.
gluOrtho2D(0.0, imageWidth, 0.0, imageHeight) together with glRasterPos2i(0, 0) is what works and what you really want.
But surely, once you have set the origin of the pixel image via glRasterPos2f(), the matrices play no further part? Only the glRasterPos2f() coordinate itself is transformed through the matrices.

Each pixel rendered via glDrawPixels() is then mapped directly to the window without going through the matrix transformations. That’s the way both glBitmap and glDrawPixels work, assuming glPixelZoom(1.0, 1.0).

So once you have set the origin, and thus the lower-left pixel position, every other pixel should be at an exact pixel offset from that initial position.
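
One way to check where the origin actually lands is to read back the transformed raster position (a small sketch using the standard GL queries; call it right after the glRasterPos2f() in redraw()):

#include <stdio.h>  // for printf

static void dumpRasterPos(void)
{
  GLfloat pos[4];
  GLboolean valid = GL_FALSE;
  glGetFloatv(GL_CURRENT_RASTER_POSITION, pos);            // window coordinates
  glGetBooleanv(GL_CURRENT_RASTER_POSITION_VALID, &valid); // clipped or not
  printf("raster pos = (%g, %g), valid = %d\n", pos[0], pos[1], (int)valid);
}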

All right, but maybe OSX implements it using a grid of GL_POINTS, and rounding errors screwed it up; ask Apple.
What can I say? If it doesn’t work, don’t do it. :)

I’m waiting for a response from Apple; they say it’s implementation-defined behaviour. I think it’s a buggy driver… hence my question here. Just wanted to check that I’m not being completely daft ;)

As to “don’t use it”… unfortunately, it’s the fastest way of drawing pixel images on the Mac. Textures are about 30-50% slower (even with Apple’s extensions to tell it to use AGP, client_storage, etc.).
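
For reference, the texture path amounts to roughly this (a sketch from memory rather than our exact code; it assumes the GL_APPLE_client_storage, GL_APPLE_texture_range and GL_EXT_texture_rectangle extensions are present):

// Sketch of the Apple streaming-texture hints. With client storage enabled,
// GL may read straight from imageData, so the buffer must stay valid and
// unmodified until the texture is re-specified.
GLuint texId;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texId);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_STORAGE_HINT_APPLE,
                GL_STORAGE_SHARED_APPLE);  // AGP-style shared memory
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, imageWidth, imageHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, imageData);

Even with those hints, glDrawPixels still wins by that 30-50% margin for us.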

How about a glGet on the projection matrix after this, to see what is in there? You could also try a direct glOrtho call.
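
i.e. something along these lines (standard GL; the matrix comes back in column-major order, and the printf needs <stdio.h>):

GLdouble proj[16];
glGetDoublev(GL_PROJECTION_MATRIX, proj);
// column-major: proj[0]/proj[5] are the x/y scales, proj[12]/proj[13] the translations
printf("scale (%g, %g), translate (%g, %g)\n", proj[0], proj[5], proj[12], proj[13]);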

You’ve been given good advice already and are ignoring it. You really need to describe what you mean by “distorted” and “garbage”. If you’re talking about subpixel aliasing, or possibly image filtering due to subpixel shifts (some hardware will do this), then you have your answer; but for all we know, “garbage” could be minor filtering and subpixel issues.

How about a glGet on the projection matrix after this, to see what is in there? You could also try a direct glOrtho call.
I’ve not tried that. The above sample is a small, cut-down piece of code that reproduces the problems we’re experiencing in our main application.

The main application gives the user a choice of how to render (with textures, glDrawPixels, etc.). Since we allow zooming in, zooming out and panning of the images, we modify the ortho matrix so that the textures render appropriately, based on pixel coordinates, when drawing. Similarly, we overlay graphics on top of the images using the same ortho matrices.

Given that you could zoom in to 200% (i.e. double), pan left by 0.5 pixels, then zoom back out to 100%, you’re effectively going to end up with an ortho which is offset by 0.5 pixels anyway. So it’s not as simple as “get rid of the 0.5 offset”.
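
To illustrate with a simplified sketch (zoom, panX and panY are hypothetical state variables, not our actual code):

// Simplified zoom/pan ortho
double zoom = 1.0, panX = 0.0, panY = 0.0;
// ...user zooms to 200%, pans left half a pixel, zooms back to 100%...
panX = -0.5;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(panX, panX + imageWidth / zoom,
           panY, panY + imageHeight / zoom);
// the ortho is now (-0.5, imageWidth - 0.5): a half-pixel offset whether we like it or not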

You’ve been given good advice already and are ignoring it.
I wouldn’t say ignoring it… just trying to understand why I see what I see, and whether it really is a bad driver or just expected behaviour.

You really need to describe what you mean by “distorted” and “garbage”. If you’re talking about subpixel aliasing, or possibly image filtering due to subpixel shifts (some hardware will do this), then you have your answer; but for all we know, “garbage” could be minor filtering and subpixel issues.
See the image here: http://www.miramar.uklinux.net/ati_bad_pixels.tiff

I have always been under the impression that once the raster position has been set, glDrawPixels() will render to exact window pixels. If that’s not the case, then…

Yes, the locations are fixed; however, the subpixel location of the samples is significant in this case (off-topic, but it can affect drawing images with multisample AA on, for example), and it is the subpixel sample location that is biting you here, I think.

With the screenshot, it looks like this is sample-location rounding sitting right on the boundary condition for all samples. Your projection is placing framebuffer pixel centers right on the boundary of your image pixels (as another poster tried to explain). This causes the aliasing artifacts you are seeing, since it’s a crapshoot as to which pixel lands where (they all land exactly halfway between samples). It’s probably not the best implementation by ATI, but you’re left to live with the consequences. By the looks of things, they may even be using a texturing operation with a nearest filter to draw the pixels to the framebuffer.
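
To put some numbers on it (my arithmetic, from your sample’s dimensions): the “buggy” ortho maps imageWidth + 1 = 721 units onto a 720-pixel viewport, so glRasterPos2f(0.0, 0.0) lands at window x = 720 × 0.5 / 721 ≈ 0.4993. Every image pixel boundary then sits within a thousandth of a pixel of a framebuffer sample center (the centers are at 0.5, 1.5, 2.5, …), and pure round-off decides where each image pixel lands.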

So presumably, an alternative solution which should work with both lines and images would be:

gluOrtho2D(-0.5, imageWidth - 0.5, -0.5, imageHeight - 0.5);
glRasterPos2f(-0.5, -0.5);
glDrawPixels(...)

That way, the raster position is set half a pixel below and to the left of the first pixel centre, i.e. on its lower-left corner. And then when drawing lines, we can use the integer positions, which match the centres of the pixels?
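
Put together, that would look something like this (a sketch):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-0.5, imageWidth - 0.5, -0.5, imageHeight - 0.5);  // integer coords = pixel centres
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// image: raster origin on the lower-left pixel *corner*
glRasterPos2f(-0.5, -0.5);
glDrawPixels(imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

// overlay lines: integer endpoints now hit pixel centres exactly
glBegin(GL_LINES);
glVertex2i(0, 0);
glVertex2i(imageWidth - 1, imageHeight - 1);
glEnd();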

I expect so, try it & see.

Originally posted by wprice99:
As to “don’t use it”… unfortunately, it’s the fastest way of drawing pixel images on the Mac. Textures are about 30-50% slower (even with Apple’s extensions to tell it to use AGP, client_storage, etc.).
If this is a static image, then a texture will definitely be faster. If it’s animated, then streaming may still be faster.

glDrawPixels is always iffy. On an older ATI driver on Windows, doing what you did there would not render anything on screen, since the raster position at (0, 0) would get clipped.
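
The usual workaround for that clipping, incidentally, is to set a raster position safely inside the view volume and then nudge it with a zero-size glBitmap call; the xmove/ymove offsets are in window coordinates and are applied without being clipped:

glRasterPos2i(0, 0);                             // guaranteed inside the view volume
glBitmap(0, 0, 0.0f, 0.0f, -0.5f, -0.5f, NULL);  // moves the raster position; draws nothing
glDrawPixels(imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, imageData);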

Interesting behavior you have there, nonetheless.