Where is my assembler program going wrong?

I am trying to light a scene by recovering radiance values from a cubemap. So
in my OpenGL app I just set the textures and pass everything to the
vertex and fragment programs. Now the program should:

  1. extract the luminance value from the cubemap
  2. set up the correct exposure (using the formula 2^(exposure + 2.47393))
  3. gamma correct the lit color value
  4. map the result so that the 0.18 middle_gray is 3.5 f-stops below the
    full 1.0 white (the two constants are worked out just below this list).
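
For reference, the two magic numbers seem to fall straight out of those requirements; here is my own back-of-the-envelope check (assuming a display gamma of 2.2, i.e. one_over_gamma = 0.4545):

  2^2.47393 = 5.556 = 1 / 0.18                  (so at exposure 0, middle gray scales up to 1.0)
  (2^-3.5)^(1/2.2) = 0.0884^0.4545 = 0.332      (gamma-corrected value of "3.5 stops below white")

  so, at exposure 0:  0.18 -> 0.18 * 5.556 = 1.0 -> 1.0^0.4545 = 1.0 -> 1.0 * 0.332 = 0.332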

my fragment program is:

PARAM exposure = program.local[0];
PARAM one_over_gamma = {0.4545,0,0,0};
PARAM middle_gray = {0.332,0.332,0.332,1.0};

TEMP texelColor, luminance, m_exp, t_exp, temp_inColor, temp_outColor, temp_inColor2;
# retrieve the luminance value from the cubemap
TEX luminance, inTexCoord2, texture[1], CUBE;
# get the unexposed lit color (it may be outside the [0,1] range)
MUL temp_inColor, inColor, luminance;
# exposure stuff:
# t_exp should be set to exposure + 2.47393
ADD t_exp, exposure.xxxx, 2.47393;
# m_exp should be 2^t_exp = 2^(exposure + 2.47393)
EX2 m_exp, t_exp.x;
# compute the exposed lit pixel value (not gamma corrected, nor clamped to [0,1])
MUL temp_inColor2, temp_inColor, m_exp;
# gamma correct the pixel using the formula
# gamma_corr_pix = pixel^one_over_gamma
POW temp_outColor.x, temp_inColor2.x, one_over_gamma.x;
POW temp_outColor.y, temp_inColor2.y, one_over_gamma.x;
POW temp_outColor.z, temp_inColor2.z, one_over_gamma.x;
# map the pixel color so that the 0.18 middle gray is mapped to 0.332
MUL outColor, temp_outColor, middle_gray;

(all the pixel processing is taken from a paper explaining how to compute
correct exposure settings for EXR pictures)

The problem is that it doesn’t work, so I am afraid I am doing something
syntactically wrong. I can tell it isn’t working because computing the
same thing by hand I get:

using exposure=0, color=1,1,1 and luminance=2:

  1. temp_inColor = inColor * luminance = 2,2,2
  2. t_exp = exposure + 2.47 = 2.47
  3. m_exp = 2^t_exp = 5.55
  4. temp_inColor2 = temp_inColor * m_exp = 11.1
  5. temp_outColor = temp_inColor2^0.4545 = 2.98
  6. outColor = temp_outColor * 0.332 = 0.99

so my output color should be a full white… instead OpenGL displays an
almost black sphere, and even if I pass different luminance values
(ranging from 1 to 5) the sphere stays the same constant dark gray
(just as if the luminance weren’t changing at all)…

Where is my program failing? Am I the last human being on this planet who uses assembler programs?

Thanks guys, the gunslinger

using exposure=0, color=1,1,1 and luminance=2:

  1. temp_inColor = inColor * luminance = 2,2,2
  2. t_exp = exposure + 2.47 = 2.47
  3. m_exp = 2^t_exp = 5.55
  4. temp_inColor2 = temp_inColor * m_exp = 11.1
  5. temp_outColor = temp_inColor2^0.4545 = 2.98
  6. outColor = temp_outColor * 0.332 = 0.99

Are you sure luminance is 2? If it’s not a float cubemap, then its max is 1.

and from my calc (taking luminance = 1), I get

outColor=0.72

If you need to debug, then try to output each register’s value as outColor.
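
For example (just a sketch; I’m assuming outColor is your alias for result.color and that temp_inColor2 is the register you want to look at):

# replace the final MUL with a straight copy of the register under inspection
MOV outColor, temp_inColor2;
# or scale it down first if you expect values above 1, e.g. by 1/8
MUL outColor, temp_inColor2, 0.125;

If the sphere suddenly changes when you swap registers, you at least know which instruction is producing the unexpected value.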

Hi vman,
after a lot of struggling I realized that it actually IS clamping all values to one (if they exceed that threshold).

How can I pack values > 1 into a cubemap? That cubemap is defined as:

glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB+cubeface, 0, GL_LUMINANCE16, 32, 32, 0, GL_LUMINANCE, GL_FLOAT, data);

Shouldn’t it store 16-bit floats?

By the way, how can I dump registers’ values to the output? Right now I am debugging just by looking at the rendered scene, and it isn’t really useful (nor too smart, actually).
Thanks for the help,
G.

Aren’t floats 32-bit? Doesn’t the GPU use the same ‘float’ type as C?

Cubemap textures can handle only fixed-point (normalized) data. I think floating-point cubemaps are possible with ATI but not with NVIDIA.
16-bit for luminance simply means that a 16-bit fixed-point number is used to store the information.

Workarounds: if you need only one floating-point value, you can use RGBA and code it as:

pack = {2^0, 2^8, 2^16, 2^24};
unpack = {2^0, 2^-8, 2^-16, 2^-24};

packed_value.rgba = value*pack.rgba;
unpacked_value = dot4(packed_value, unpack);

If you need an RGB floating-point value, use the RGBE format (google it - I’ve never used it).
The idea is to use E (the A channel) as a shared exponent, thus:
floatR = R * 2^E, and so on.
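
In an ARB fragment program the decode could look roughly like this (untested sketch: I’m assuming the exponent is stored biased by +128 in the alpha channel, Radiance-.hdr style, ignoring the small mantissa offset that format technically uses, and reusing the texture/coordinate names from your earlier program):

TEMP rgbe, scale;
# fetch the packed RGBE texel from the cubemap
TEX rgbe, inTexCoord2, texture[1], CUBE;
# alpha arrives as E/255, so rebuild the exponent and remove the +128 bias
MAD scale.x, rgbe.w, 255.0, -128.0;
# scale = 2^(E - 128)
EX2 scale.x, scale.x;
# reconstruct the HDR value: RGB * 2^(E - 128)
MUL luminance.xyz, rgbe, scale.x;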

Well, there is an intermediate format called “half” that uses 16-bit floating-point values. I know for sure because I am using it with the EXR image format…
Not sure if it is the same thing, though…

Anyway, the clamping problem has been “outflanked” for now by dividing the luminance values by 100 before passing them to the texture… kinda brutal, but good enough for my purposes at the moment…
Now I’d like to ask you a little question:

If I don’t use any program, my texture looks drawn clockwise, with the correct aspect ratio… if I enable my programs, it is drawn counterclockwise and the texture looks “zoomed”… I really can’t understand why, because I am not scaling anything and my tex coords are just the vertex normals.

Anyone got ideas?

thx the G.

pack = {2^0, 2^8, 2^16, 2^24};
unpack = {2^0, 2^-8, 2^-16, 2^-24};

packed_value.rgba = value*pack.rgba;
unpacked_value = dot4(packed_value, unpack);


I don’t see how the above is supposed to work.
If you can preprocess the texture on the CPU and pack it, and then unpack it in the GPU program, it would work, except that you need integer register support and a direct copy to a float register, and that’s not possible with ARB.
Maybe NV gives that possibility? I haven’t tried it.

>>>If I don’t use any program, my texture looks drawn clockwise, with the correct aspect ratio… if I enable my programs, it is drawn counterclockwise and the texture looks “zoomed”… I really can’t understand why, because I am not scaling anything and my tex coords are just the vertex normals.<<<

I guess your VP is not doing the same thing as the fixed pipeline.

Yep, it seems like my VP isn’t doing it all right.
A friend of mine told me that when I pass texcoords from the VP to the FP I should first DP4 them with the inverse modelview matrix.
I added the DP4s:

DP4 outTexCoord.x, mvinv[0], inTexCoord;
DP4 outTexCoord.y, mvinv[1], inTexCoord;
DP4 outTexCoord.z, mvinv[2], inTexCoord;
DP4 outTexCoord.w, mvinv[3], inTexCoord;

but now when I rotate the camera, the texture doesn’t change anymore… (I guess the inverse modelview product compensates for the camera rotation). So I am not getting very far…
What should I do?
Why the DP4ing with the inverse modelview? It is like transforming the tex coords from world coords to object coords, isn’t it?
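
Just so it’s clear what I’m comparing against, this is the kind of vertex program I think my friend means (only a sketch on my part: I’m guessing mvit would be bound to state.matrix.modelview.invtrans, outTexCoord to result.texcoord[1], and that the cubemap is meant to be looked up with the eye-space normal):

!!ARBvp1.0
PARAM mvp[4]  = { state.matrix.mvp };
PARAM mvit[4] = { state.matrix.modelview.invtrans };
ATTRIB inPos    = vertex.position;
ATTRIB inNormal = vertex.normal;
OUTPUT outPos      = result.position;
OUTPUT outTexCoord = result.texcoord[1];
# usual clip-space transform of the vertex
DP4 outPos.x, mvp[0], inPos;
DP4 outPos.y, mvp[1], inPos;
DP4 outPos.z, mvp[2], inPos;
DP4 outPos.w, mvp[3], inPos;
# rotate the normal with the inverse-transpose modelview (DP3, so the
# translation part is ignored) and use it as the cubemap lookup direction
DP3 outTexCoord.x, mvit[0], inNormal;
DP3 outTexCoord.y, mvit[1], inNormal;
DP3 outTexCoord.z, mvit[2], inNormal;
MOV outTexCoord.w, 1.0;
END

Is the inverse-transpose (rather than the plain inverse) what is actually meant here, or does the right space just depend on how the cubemap was generated?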

thx the G.
