
View Full Version : glPixelTransferf on textures



jimmycox
01-22-2010, 09:11 AM
Hi,

I'm loading a bunch of scientific data (around 100 MB, with quite a high dynamic range) into the alpha channel of a texture...

So, if I understood correctly, the actual pixel value displayed is controlled by

glPixelTransferf(GL_ALPHA_SCALE, gain);

The problem is that whenever I want to change the gain (which is often), I have to call glTexSubImage2D again to get the new pixel-transfer settings applied, and that can take a significant amount of time...

I thought I could overcome that with shaders...
I transfer the pixels with a low gain (to prevent clipping of my data) and then tweak the gain in the shader.

Something like this (fragment shader excerpt):


vec4 lkup = texture2D(texture, uv.xy);

float pente1 = tan(pisur4 + scaling); // slope; pisur4 = pi/4

// line through (0.5, 0.5) with slope pente1
float intensity = pente1 * lkup.a + (0.5 - pente1 * 0.5);

gl_FragColor = texture1D(lut, clamp(intensity, 0.0, 1.0)) * gl_Color;


By playing with scaling I can remap the value from my data-texture lookup before using it to look up a color in my colormap texture...

This seems to work at first sight, but when I ramp up the gain, precision issues quickly mess everything up... (you end up sampling only a few texels of your colormap)

Is there any way around this?
Thanks for any thoughts!

Pierre Boudier
01-22-2010, 09:47 AM
Did you make sure to use linear filtering for both your textures?

jimmycox
01-22-2010, 09:59 AM
Did you make sure to use linear filtering for both your textures?

Yep, on both the data texture and the colormap texture...

mfort
01-22-2010, 11:15 AM
What texture internal pixel format are you using?
What pixel format are you using during the pixel transfer from host to OpenGL?

Use more than 8 bits per component; 16-bit integers or floats are a good choice for this kind of application.

jimmycox
01-25-2010, 03:35 PM
What texture internal pixel format are you using?
What pixel format are you using during the pixel transfer from host to OpenGL?

Use more than 8 bits per component; 16-bit integers or floats are a good choice for this kind of application.


Yes, I checked that out... I'm using more than 8 bits...
I think I'm simply hitting the precision barrier...

Being able to adjust the gain to correctly see values around 0.1 as well as values in the 1000s is pushing the limits of what you can do without redoing a glPixelTransferf...

Edit: I'm using GL_ALPHA16 and GL_RGBA16...
Anything higher?