Pbuffer & Numerical Precision

I have to write an application that makes intensive use of pbuffers for an accurate physics simulation.

But if I simply put a float value into the pbuffer and then immediately read it back, the number changes from 0.5 to 0.5019675.

WHY???

I correctly create an RGBA pixel buffer with 32 bits per color channel, and the pixel format I obtain is actually consistent.

Thanks…

Because your pbuffer is 8 integer bits per channel.

128/255 ~= 0.50196

You want a floating point pbuffer with 16 or 32 bits of resolution per color channel, not total.
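
If you want to convince yourself this is plain 8-bit quantization and not a GL bug, the arithmetic is easy to reproduce on the CPU (a standalone sketch, nothing to do with the pbuffer code itself):

#include <cmath>
#include <cstdio>

int main()
{
    // 0.5 written to an 8-bit unsigned normalized channel:
    // the driver rounds 0.5 * 255 = 127.5 up to 128, and reading
    // back returns 128 / 255.
    float written   = 0.5f;
    long  quantized = std::lround(written * 255.0f);  // 128
    float readBack  = quantized / 255.0f;             // ~0.501961
    std::printf("%f -> %ld/255 -> %f\n", written, quantized, readBack);
    return 0;
}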

Off the top of my head, I think the relevant extension is called ATI_buffer_float (may have been promoted to EXT or ARB).

edit:
Nope. But try this. Or this.
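
To see which float-pbuffer extension your driver actually exposes, check the WGL extension string (a sketch; it assumes a current GL context, hDC being your window's device context, and the usual windows.h/wglext.h headers):

typedef const char* (WINAPI *GETEXTSTRPROC)(HDC);
GETEXTSTRPROC getWglExtString =
    (GETEXTSTRPROC)wglGetProcAddress("wglGetExtensionsStringARB");

const char* wglExt = getWglExtString ? getWglExtString(hDC) : "";
bool hasAtiFloat = strstr(wglExt, "WGL_ATI_pixel_format_float") != NULL;  // ATI
bool hasNvFloat  = strstr(wglExt, "WGL_NV_float_buffer") != NULL;         // NVIDIA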

These are the parameters for my pbuffer:

As you can see, I already specify 32 bits per color channel and the ATI token for the float pixel format.

  
int piAttribIList[20] = {
    WGL_DRAW_TO_PBUFFER_ARB, 1,
    WGL_RED_BITS_ARB,        32,
    WGL_GREEN_BITS_ARB,      32,
    WGL_BLUE_BITS_ARB,       32,
    WGL_ALPHA_BITS_ARB,      32,
    WGL_DEPTH_BITS_ARB,      24,
    WGL_STENCIL_BITS_ARB,    0,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_FLOAT_ATI,
    0, 0
};
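
For context, this is roughly how an attribute list like that is used to create the pbuffer (a sketch, not my exact code; it assumes hDC is the window's device context and that the WGL_ARB_pixel_format / WGL_ARB_pbuffer entry points were already fetched with wglGetProcAddress):

int  pixelFormat = 0;
UINT numFormats  = 0;
if (!wglChoosePixelFormatARB(hDC, piAttribIList, NULL, 1, &pixelFormat, &numFormats)
    || numFormats == 0)
{
    // no matching float pixel format available on this driver
}

int pbAttribs[] = { 0 };  // no special pbuffer attributes
HPBUFFERARB pbuffer   = wglCreatePbufferARB(hDC, pixelFormat, width, height, pbAttribs);
HDC         pbufferDC = wglGetPbufferDCARB(pbuffer);
HGLRC       pbufferRC = wglCreateContext(pbufferDC);
// then wglMakeCurrent(pbufferDC, pbufferRC) before rendering into it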

To read the pixels back into an array I call:

glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, array);
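
The destination holds width * height * 4 GLfloats (with GL_FLOAT the rows are always 4-byte aligned, so a pack alignment of 4 adds no padding), roughly:

std::vector<GLfloat> array(width * height * 4);  // RGBA, one float per channel (<vector>)
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, &array[0]);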

If I print the pixel format using:

 
glGetIntegerv(GL_RED_BITS,     &(bits[0]));
glGetIntegerv(GL_GREEN_BITS,   &(bits[1]));
glGetIntegerv(GL_BLUE_BITS,    &(bits[2]));
glGetIntegerv(GL_ALPHA_BITS,   &(bits[3]));
glGetIntegerv(GL_DEPTH_BITS,   &(bits[4]));
glGetIntegerv(GL_STENCIL_BITS, &(bits[5]));

the pixel format is actually the one I requested (i.e. 32, 32, 32, 32, 24, 0), but when I use the pbuffer, 0.50000 magically becomes 0.50196.

This looks correct.

Anyway, somewhere during processing you're dropping down to 8 integer bits per channel.

If it’s not the pbuffer format, it might be your method of “putting a float value in the pbuffer”. Can you elaborate, or post the code?

If you’re using integer textures, they will limit your precision. There’s an extension for floating point textures: ATI_texture_float (supported on current NVIDIA drivers, too).
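
With ATI_texture_float the important bit is the internal format: you have to ask for one of the float internal formats (e.g. GL_RGBA_FLOAT32_ATI); a plain GL_RGBA or a component count gives you 8-bit integer storage. Roughly (data being a pointer to your float pixels):

#define GL_RGBA_FLOAT32_ATI 0x8814  // from glext.h, in case your headers lack it

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI,
             width, height, 0, GL_RGBA, GL_FLOAT, data);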

Interpolated vertex colors can also have rather limited precision. As a workaround, you can pass the color to GL “disguised” as a texture coordinate, and write it to result.color in a fragment program (or shader).
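
A minimal ARB_fragment_program for that workaround could look like this (a sketch: it copies texture coordinate set 0, carrying the "disguised" color, straight to the output so it never goes through the low-precision color interpolators):

static const char fp[] =
    "!!ARBfp1.0\n"
    "MOV result.color, fragment.texcoord[0];\n"
    "END\n";

GLuint progId;
glGenProgramsARB(1, &progId);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, progId);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(fp), fp);
glEnable(GL_FRAGMENT_PROGRAM_ARB);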

To pass the texture holding the floating-point values to the pipeline, I use this function:

inline void toOpenGL(const GLenum format) const
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, Ext<T>::byteDim());
    glTexImage2D(GL_TEXTURE_2D, 0, colorchannel,
                 MatrixTraits<T,height,width,colorchannel>::nColumns(_text),
                 MatrixTraits<T,height,width,colorchannel>::nRows(_text),
                 0, format, Ext<T>::token(), _text);
}

Texture is a template class Texture<typename T, int colorchannel, int width, int height>; internally it holds a 3-dimensional array.
In this case T is GLfloat, Ext<T>::byteDim() = 4, Ext<T>::token() = GL_FLOAT, colorchannel is 4, and format stands for GL_RGBA. _text is the 3D array.
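
In other words, with those template parameters the call boils down to roughly this (cols and rows just being the texture dimensions):

glTexImage2D(GL_TEXTURE_2D, 0, /* internalformat = colorchannel = */ 4,
             cols, rows, 0, GL_RGBA, GL_FLOAT, _text);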

Is there some error in my code?!?

Thanks for helping me :)

Thanks zeckensack!!!
With your suggestion about ATI_texture_float, my code finally works…
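
For anyone else hitting the same problem, the change is essentially the internalformat argument of the upload (sketched, not my exact code):

// before: glTexImage2D(GL_TEXTURE_2D, 0, colorchannel /* = 4, i.e. 8-bit RGBA */, ...);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI,
             cols, rows, 0, GL_RGBA, GL_FLOAT, _text);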

I'm going to elevate your rating ;)