Hi. I am just starting in OpenGL programming with a background in C/C++, Java, and other languages - so I suspect that I am missing a very simple protocol or setting that results in my issue. Any help would be appreciated.
The application: draw a set of vertical bars in an OpenGL window, then set them in motion horizontally across the window. The motion is implemented with an animation thread that periodically renders the scene. Rendering simply clears the buffer and then redraws the vertical bars at an offset determined by the given traversal rate, so the entire display is cleared and redrawn on every frame (at least, I think that's what I'm doing). I have turned off dithering, blending, and all anti-aliasing (point, line, polygon), and have even set up a clear of the accumulation buffer (performed together with, and with the same data as, the color buffer clear). I have also tried a version of the program that uses integer (rather than float) values for the screen positioning data, with no effect.
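To make the traversal bookkeeping concrete outside of the GL code, here is the offset logic on its own as plain Java (a simplified sketch: stripWidth and shiftDistance stand in for the accessors on my config object, they are not my real API):

```java
// Simplified sketch of the per-frame offset bookkeeping, pulled out of the
// display() method so it can be inspected in isolation. stripWidth and
// shiftDistance are placeholders for con.getStripWidth() /
// con.getShiftDistance().
public class OffsetTracker {
    private final double stripWidth;    // width of one bar period, in GL units
    private final double shiftDistance; // horizontal shift applied per frame
    private double offset;              // current phase, kept in (-stripWidth, 0]

    public OffsetTracker(double stripWidth, double shiftDistance) {
        this.stripWidth = stripWidth;
        this.shiftDistance = shiftDistance;
    }

    // Advance one frame and wrap, exactly as the display() code does.
    public double advance() {
        offset += shiftDistance;
        if (offset > 0) offset = -stripWidth + offset % stripWidth;
        return offset;
    }
}
```

Because the offset only ever moves by one shiftDistance per frame and wraps back by a whole strip width, the pattern is periodic and the redraw position stays bounded.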
The problem: when the traversal rate is 0 (the bars are not moving), the edges of the bars are well defined. As soon as the bars are set in motion, the edges begin to fuzz or blur (on most hardware), and the extent of the fuzz seems to increase roughly in proportion to the traversal rate.
The hardware: I tested on a number of different platforms. Some are a couple of years old, which might be part of the issue, but the problem also occurs on a MacBook Pro (Xcode OpenGL framework) that is less than a year old, as well as on several Windows machines (Java JOGL). Interestingly, an NVIDIA GeForce4 Ti 4200 (AGP 8X) under Windows does not exhibit the problem: edges remain well defined at all traversal rates. Yes, I made sure I had up-to-date video drivers and software on all tested platforms.
Is there some control (maybe something to do with motion blur?) that I am missing that could eliminate this effect?
Also, I see some jumpy behavior when running the app on my MacBook, which I suspect is due to OS scheduling. Is there a way to eliminate that as well?
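For reference, my animation thread essentially just requests a render at a fixed period. In plain Java the idea looks like this (a simplified sketch only: the real app would pass a Runnable that calls the drawable's repaint, or use a JOGL Animator; AnimationDriver and its names are my own placeholders):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the animation driver: tick at a fixed period and request a
// render on each tick. The Runnable is a stand-in for the actual
// render-request call so the scheduling part can be looked at on its own.
public class AnimationDriver {
    private final ScheduledExecutorService exec =
            Executors.newSingleThreadScheduledExecutor();

    public void start(Runnable renderRequest, long periodMillis) {
        // scheduleAtFixedRate keeps the long-run tick rate fixed even when an
        // individual tick is delayed by OS scheduling, instead of letting the
        // delays accumulate the way a plain sleep loop would.
        exec.scheduleAtFixedRate(renderRequest, 0, periodMillis,
                TimeUnit.MILLISECONDS);
    }

    public void stop() {
        exec.shutdownNow();
    }
}
```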
Some Java code follows. (I know I could improve efficiency by issuing the vertex definitions in a loop inside a single glBegin/glEnd pair ;) but that is not the issue.)
... init:
public void init(GLAutoDrawable drawable) {
    mygl = drawable.getGL();
    mygl.setSwapInterval(1);
    mygl.glMatrixMode(GL.GL_MODELVIEW);
    mygl.glLoadIdentity();
    mygl.glDisable(GL.GL_BLEND);
    mygl.glDisable(GL.GL_DITHER);
    mygl.glDisable(GL.GL_POINT_SMOOTH);
    mygl.glDisable(GL.GL_LINE_SMOOTH);
    mygl.glDisable(GL.GL_POLYGON_SMOOTH);
    colorSetup();
}
... display:
public void display(GLAutoDrawable drawable) {
    if (con.isColorChanged()) { colorSetup(); }
    mygl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_ACCUM_BUFFER_BIT);
    // redraw the full set of strips, shifted by the current offset
    for (double x = offset - 1.0 - con.getStripWidth(); x < 1.0; x += con.getStripWidth()) {
        drawStrip(mygl, x);
    }
    // advance and wrap the offset for the next frame
    offset += con.getShiftDistance();
    if (offset > 0) offset = -con.getStripWidth() + offset % con.getStripWidth();
}
void drawStrip(GL gl, double position) {
    gl.glBegin(GL.GL_QUADS);
    gl.glVertex2d(position, 1.0);
    gl.glVertex2d(position, -1.0);
    gl.glVertex2d(position + con.getLineWidth(), -1.0);
    gl.glVertex2d(position + con.getLineWidth(), 1.0);
    gl.glEnd();
}
Thanks in advance for any assistance! Please let me know if more information is required and I will do my best to provide it.