Decreasing precision with increasing vertex ranges

Hello,

Basically, I have a simple application where I draw 2D time series, such as sound data, with glVertexPointer & glDrawArrays.

When the sample frequency is 44 kHz, the x-increment scaled in seconds is about 22 microseconds.

If I plot a snippet of data, e.g. from 500 seconds to 502 seconds, the data behind the pointer looks like:

data[0].x = 500.000000
data[0].y = somedata
data[1].x = 500.000022
data[1].y = somedata

The resulting data points begin to jitter (a well-defined sine wave gets visibly distorted).

I have clearly verified that the error comes from OpenGL: when I plot the same data from, e.g., 0 to 2 seconds, there is no error.

I am assuming it has to do with the relation of the plotted range to the data increment.

Any suggestions for solving this problem?

Thanks a lot…

Don’t know for sure (need more detail), but it sounds like you may be running up against the limits of 32-bit (single-precision) floating point, which is what GPUs/OpenGL use natively and what you typically talk to OpenGL with. The problem could be in the coords you’re pumping to OpenGL, in the MODELVIEW matrix computation, or both.

Think about it this way: you get about 7 significant decimal digits from a 32-bit float. You want X values accurate to ~0.000022 sec, so about 5-6 of those digits are already eaten to the right of the decimal point. That leaves maybe 2, if you’re lucky, to the left before you start seeing precision errors. The bigger the number, the greater the error.
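You can see this with a trivial test (my own snippet, not from your app): three consecutive 22 µs samples near x = 500 snap onto the coarser float grid, so their apparent spacing becomes 30.5 µs, then 0.

#include <stdio.h>

int main(void)
{
    /* three samples 22 microseconds apart, stored as 32-bit floats */
    float x0 = (float)500.000000;
    float x1 = (float)500.000022;
    float x2 = (float)500.000044;
    printf("%.9f\n%.9f\n%.9f\n", x0, x1, x2);
    /* prints 500.000000000, 500.000030518, 500.000030518:
       x1 and x2 have collapsed onto the same value */
    return 0;
}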

Vertex positions: So if you’re feeding raw seconds in for the X value of your GL draw call, your input vertex positions themselves might be the problem: you’re running out of precision. Redefine the origin (and maybe the step size) based on what you can see in your window to maximize precision, and feed OpenGL offset positions in the positions vertex attribute array. Note that you can compensate for this offset with a corresponding offset in your MODELING matrix.
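A minimal sketch of that rebasing idea (variable names are mine; here I simply rebase the projection instead of compensating in the MODELING matrix):

/* given: data[i].x (double seconds), data[i].y; verts[] is the
   float array later handed to glVertexPointer */
const double x0 = 500.0;                  /* left edge of the visible window */
for (int i = 0; i < n; ++i) {
    verts[i].x = (float)(data[i].x - x0); /* now in [0, 2): full precision */
    verts[i].y = (float)data[i].y;
}
gluOrtho2D(0.0, 2.0, bottom, top);        /* rebase the projection to match */
/* draw as before; only the axis labels need x0 added back */

Which brings me to…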

The MODELVIEW transform: Also, when your time coords are out in the ~500 range, write out what your MODELING and VIEWING matrix translations are (not the whole MODELVIEW translation, but the component MODELING and VIEWING translations). Why? Because it’s very possible you’re running up against float precision computing your translations too. A possible solution? Compute your MODELVIEW matrix in double precision, and only thunk down to 32-bit float when you toss the matrix to OpenGL. This allows you to have “big” translates in MODELING and VIEWING which might otherwise be a problem. Of course, if you’ve got a resulting big translate in the composed MODELVIEW, you’ve still got a problem.
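A sketch of that double-precision route, with hypothetical helper names (make_translate_d and mult_matrix_d are mine, not GL calls): compose the MODELING and VIEWING translations entirely in double on the CPU, so large opposing translates cancel before anything is narrowed to float.

double modeling[16], viewing[16], modelview[16];
make_translate_d(modeling, +500.0, 0.0, 0.0); /* hypothetical helper */
make_translate_d(viewing,  -500.0, 0.0, 0.0); /* hypothetical helper */
mult_matrix_d(modelview, viewing, modeling);  /* 4x4 multiply in double */

glMatrixMode(GL_MODELVIEW);
glLoadMatrixd(modelview); /* net translate is ~0, so GL's internal
                             float conversion loses nothing */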

So look at the magnitude of your vertex positions, MODELING, and VIEWING matrix transforms and determine if/where you have a problem.

It all comes down to floating-point representation. When your points are in the [256…512) range, the constant step between representable values is 256/2^23 = 0.000030517578125, which is greater than your 22 µs step. So in single precision your samples snap onto that coarser grid: some adjacent values collapse onto the same float while others jump by a full step. That is the jitter/banding you see.
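You can verify that step directly (my own snippet): nextafterf() from math.h returns the adjacent representable float.

#include <math.h>
#include <stdio.h>

int main(void)
{
    float x = 500.0f;
    /* gap between 500.0f and the next float above it */
    printf("%.13f\n", nextafterf(x, 1000.0f) - x); /* 0.0000305175781 */
    return 0;
}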

Thanks for the fast reply,

Vertex positions: …and feed OpenGL offset positions in the positions vertex attribute array->

It’s not practical for me because I am using a ringbuffer (in our example, 2 seconds at 48 kHz = 96000 vertex2 values).
The ringbuffer holds several channels and can be bigger, so adding a constant offset to every vertex could eat performance.
An offset/reset mechanism would also have to be implemented :-(.
(The buffer is filled continuously with data for forward and backward scrolling.)

The MODELVIEW transform:

write out what your MODELING and VIEWING matrix translations are…

glMatrixMode(GL_MODELVIEW);

glViewport( viewport_ptr->left,   // the size of the
            viewport_ptr->bottom, // panel with the GL context
            viewport_ptr->right,
            viewport_ptr->top);

// set current projection...
glMatrixMode(GL_PROJECTION);

glLoadIdentity();

// adjust to the limits of the current ringbuffer state
// in our example:
//   left = 500, right = 502 - 1/48000
// parallel projection of 2 seconds of data after 500 sec
gluOrtho2D( sensor_view_properties_ptr->limits.left,
            sensor_view_properties_ptr->limits.right,
            sensor_view_properties_ptr->limits.bottom,
            sensor_view_properties_ptr->limits.top);

// back to MODELVIEW so the per-channel translate below does not
// end up in the projection matrix
glMatrixMode(GL_MODELVIEW);

glPushMatrix();

// no effect on my problem
glTranslatef(0.0, channel_dependent_offset, 0.0);

glEnableClientState(GL_VERTEX_ARRAY);

glVertexPointer(2, GL_DOUBLE, 0, ringbuffer_ptr);

// indices of the values from 500 to 502 sec (simplified)
glDrawArrays(GL_LINE_STRIP, first, count);

glDisableClientState(GL_VERTEX_ARRAY);

glPopMatrix();
//------------------------------------------------------------------------

I also tried keeping the coords for gluOrtho2D at 0…2 sec
and translating the vector with the x-values 500-502 by -500:
same effect :-( (presumably because the GL_DOUBLE array is converted to single precision before the translate is applied, so the precision is already gone).

Does it seem like I can solve the problem without removing the timing information?

There are ways to work around this issue. For example, you could render-to-texture using the 0-2 offset and translate the whole texture by +500. The translated quad’s corners land at x = 500.0 and x = 502.0, which are exactly representable as floats, so only the render-to-texture pass needs the fine X resolution.
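In case it helps, here is roughly what that looks like with the EXT framebuffer object extension (all names and sizes are illustrative, not code from this thread):

GLuint fbo, tex;
int width = 1024, height = 256; /* size of the view panel, illustrative */

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

/* pass 1: draw the snippet with X rebased to [0, 2) into the texture */
gluOrtho2D(0.0, 2.0, -1.0, 1.0);
/* ... glDrawArrays with rebased vertices ... */

/* pass 2: back to the window; draw one textured quad whose corners sit
   at x = 500.0 and x = 502.0 - both exactly representable floats */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);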

You could also decouple the timing information from the actual data: save the data as 2-second blocks + offsets, use the offsets to select the correct blocks and always render around 0-2 (or whatever fits your screen).
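A sketch of that block scheme under my own naming: each block stores its samples with X rebased to [0, 2) plus a double start time, and the projection is rebased per block, so nothing large ever reaches the float pipeline.

typedef struct {
    double t0;     /* true start time of the block, in seconds */
    int    count;  /* number of (x, y) pairs, e.g. 96000       */
    float *xy;     /* interleaved pairs, x rebased to [0, 2)   */
} Block;

void draw_block(const Block *b, double view_left, double view_right)
{
    /* rebase the projection limits too, so they stay small */
    gluOrtho2D(view_left - b->t0, view_right - b->t0, -1.0, 1.0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, b->xy);
    glDrawArrays(GL_LINE_STRIP, 0, b->count);
    glDisableClientState(GL_VERTEX_ARRAY);
}

Only the axis labels ever see the large absolute time t0.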

Hi,

I tested your suggestion again: shifting the whole texture while setting glOrtho from 0 to 2 seconds.
The error is the same.

The second suggestion is approximately what my final workaround will be.

If anybody knows how to decouple the vertex2 array into separate x and y arrays for glVertexPointer(…), that would help me again.

Thanks all…

andy

You could also use a vertex shader as described here.

Basically, you would use two float values to represent your time coordinate, which gives you more bits of precision. In your case, since you don’t use the z value, you could store time in the x and z components of the vertex. The vertex shader would then use the camera position, also encoded as two floats per component (an x component and a z component), to compute the position along your time axis.

In the link I provided, I describe converting a double value into two floats to get approximately 1 cm precision. You would use a different scaling factor to get microsecond precision. If you needed even more precision, you could encode the double as three floats.
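The split itself is simple; a minimal sketch (my own names, not necessarily the code from the article):

#include <stdio.h>

/* split a double so that high + low recovers (nearly) all of its
   precision; each part fits comfortably in a float */
static void double_to_two_floats(double value, float *high, float *low)
{
    *high = (float)value;           /* coarse part, float-quantized   */
    *low  = (float)(value - *high); /* small residual, still accurate */
}

int main(void)
{
    float hi, lo;
    double_to_two_floats(500.000022, &hi, &lo);
    /* the shader later computes (hi - camHi) + (lo - camLo), which
       stays near zero and therefore keeps microsecond precision */
    printf("high = %.9f  low = %.9g\n", hi, lo);
    return 0;
}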

While this method increases the size of your vertex buffer, once you place the data into the vertex buffer, you will not have to edit that data per frame as you move around, which you mention as desirable. For my work, this resulted in an overall performance improvement.

Thanks, wSpace, very interesting reading!

Hi wSpace,

it sounds like I can solve my problem even over a very large timescale.

Thank you very much!

So I have to learn to define VBOs (actually, I am not very advanced in OpenGL)? Maybe you can give me a hint on how the x and z axes get combined into one double-precision axis for OpenGL after using the ‘casting’ function CDoubleToTwoFloats from your article?

While you could use normal vertex arrays as you are doing, VBOs would be better and more in keeping with today’s technology. What is required is that you use GLSL. Once you have familiarized yourself with GLSL (if you haven’t already), feel free to private message me, and I can offer you help with the vertex shader.
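For reference, the shader end of the two-float trick might look something like this (GLSL 1.20-era, my own sketch with assumed uniform names, not wSpace’s actual code):

const char *vs_src =
    "uniform float camHigh; /* camera time, high part     */\n"
    "uniform float camLow;  /* camera time, low residual  */\n"
    "void main()                                           \n"
    "{                                                     \n"
    "    /* subtract high and low parts separately: both   \n"
    "       differences stay small, so no precision loss */\n"
    "    float t = (gl_Vertex.x - camHigh)                 \n"
    "            + (gl_Vertex.z - camLow);                 \n"
    "    vec4 p = vec4(t, gl_Vertex.y, 0.0, 1.0);          \n"
    "    gl_Position = gl_ModelViewProjectionMatrix * p;   \n"
    "}                                                     \n";

Compile it with glCreateShader(GL_VERTEX_SHADER) / glShaderSource / glCompileShader as usual, and set camHigh/camLow each frame from the double camera time, split the same way as the vertex data.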