
View Full Version : laplacian...help....



guyinhell
03-08-2005, 11:02 PM
hi everybody:

I am using a 3x3 Laplacian mask to sharpen my gray-level image: take 9 times one pixel's intensity and subtract the intensities of all 8 neighboring pixels. With my 8-bit image everything works perfectly, but when I use the same code for my 12-bit image (the texture is uploaded as LUMINANCE16, so the sampled intensity must be multiplied by 16), I get a weird image. Only the first neighboring pixel's intensity is subtracted; all the others are not. As a result the output image is very bright (most pixels are white). If I subtract each neighboring pixel separately, each individual result is correct. How come I can subtract only one pixel?
thanks
alex

Relic
03-09-2005, 12:17 AM
With only the 12 least significant of the 16 bits used, you have a value range from 0.0 to 1/16. OK, there is your factor of 16; only the final result gets the scaling.
You used a filter which is

-1 -1 -1
-1  9 -1
-1 -1 -1

Let's assume you have a full-intensity pixel in the center and 0 on the eight neighbours. This is scaled by your algorithm by 9 and then by 16.
This is 9 times brighter than white.
You need to include a bias in your calculations which brings the grayscales around a center value.

Actually, to preserve the best possible precision you shouldn't multiply the pixels by 16 inside the algorithm (shader?); you should do that before you download the 12-bit data into the LUMINANCE16 image. (Sounds like I'm repeating myself.)
Color values from textures are in the range 0.0 to 1.0 if you're not working with float textures.
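Relic's suggestion, doing the x16 once on the CPU before upload instead of per tap in the shader, could be sketched like this (the function name is made up; replicating the top bits maps 4095 to 65535 exactly):

#include <stddef.h>
#include <stdint.h>

/* Expand 12-bit samples (0..4095) to the full 16-bit range (0..65535)
 * before uploading as GL_LUMINANCE16, so no *16.0 is needed in the
 * shader. Shifting left by 4 and replicating the top 4 bits into the
 * low bits maps 0x0FFF to 0xFFFF exactly. A sketch, not from the thread. */
static void expand_12_to_16(uint16_t *pixels, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        uint16_t v = pixels[i] & 0x0FFF;          /* the 12 significant bits */
        pixels[i] = (uint16_t)((v << 4) | (v >> 8));
    }
}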

guyinhell
03-09-2005, 03:08 AM
thank you for your reply, Relic.
I checked my program again; actually the problem occurs at the blending stage.
I have two textures: one is the image I am processing, the other is an overlay image. Some parts of the overlay are transparent, so the processed image can be displayed in the transparent areas. Before the blending, everything is fine; I have the enhanced image I want. But if I blend these two images, I get the result I described. The overlay image is 8 bits. The program works perfectly for 8-bit image processing. With 12-bit image processing, if I blend the original (unprocessed) image and the overlay image, the result is also correct. Do you have any ideas? many thanks
alex

def
03-09-2005, 04:17 AM
What blending mode do you use? A GL_LUMINANCE16 texture will give you alpha values of 1.0. Depending on the blending mode, this could be causing your problems.

guyinhell
03-09-2005, 04:21 AM
the alpha value of the processed image is 1.0 for all pixels. On the overlay image, some areas have an alpha value of 0.0.
I used the mix function in the fragment shader. For the original (unprocessed) image it works as I expect, but here it gets on my nerves. :(
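For reference, GLSL's mix(x, y, a) is just the linear blend x*(1 - a) + y*a, applied componentwise. A minimal CPU-side model for one channel (the helper name mixf is made up):

/* Componentwise model of GLSL mix(x, y, a) for a single channel:
 * a = 0.0 returns x, a = 1.0 returns y. Hypothetical helper. */
static float mixf(float x, float y, float a)
{
    return x * (1.0f - a) + y * a;
}

One consequence worth noting: if the first input holds unclamped values above 1.0, a partial blend can still come out above 1.0 (for example mixf(2.0f, 0.5f, 0.5f) is 1.25), so over-bright intermediate values survive the blend.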

guyinhell
03-10-2005, 12:42 AM
the code is as follows:

#define OFFSET 1.0 / 1024.0

uniform sampler2D image12Bits;
uniform sampler2D overlayImage;

void main (void)
{
    vec4 colorPixel = 16.0 * 9.0 * texture2D(image12Bits, vec2(gl_TexCoord[0]));

    vec4 colorLeft  = 16.0 * texture2D(image12Bits, vec2(gl_TexCoord[0]) + vec2(-OFFSET, 0.0));
    vec4 colorRight = 16.0 * texture2D(image12Bits, vec2(gl_TexCoord[0]) + vec2( OFFSET, 0.0));
    vec4 colorUp    = 16.0 * texture2D(image12Bits, vec2(gl_TexCoord[0]) + vec2(0.0,  OFFSET));
    vec4 colorDown  = 16.0 * texture2D(image12Bits, vec2(gl_TexCoord[0]) + vec2(0.0, -OFFSET));

    vec4 colorUpLeft    = 16.0 * texture2D(image12Bits, vec2(gl_TexCoord[0]) + vec2(-OFFSET,  OFFSET));
    vec4 colorUpRight   = 16.0 * texture2D(image12Bits, vec2(gl_TexCoord[0]) + vec2( OFFSET,  OFFSET));
    vec4 colorDownLeft  = 16.0 * texture2D(image12Bits, vec2(gl_TexCoord[0]) + vec2(-OFFSET, -OFFSET));
    vec4 colorDownRight = 16.0 * texture2D(image12Bits, vec2(gl_TexCoord[0]) + vec2( OFFSET, -OFFSET));

    vec4 colorImage = colorPixel - colorLeft - colorRight - colorDown - colorUp
                    - colorUpLeft - colorUpRight - colorDownLeft - colorDownRight;

    vec4 colorOverlay = texture2D(overlayImage, vec2(gl_TexCoord[7]));

    // vec4 color = mix(colorImage, colorOverlay, colorOverlay.a);

    vec4 color = colorImage * (1.0 - colorOverlay.a) + colorOverlay * colorOverlay.a;

    gl_FragColor = vec4(vec3(color), 1.0);
}

If I just display colorImage (the processed image) by changing the last line to

    gl_FragColor = vec4(vec3(colorImage), 1.0);

the result is as expected. But when I mix with the overlay image, the result is too bright.
please help me...
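As a sanity check away from the GPU, the shader's per-pixel arithmetic can be modeled on the CPU. This is a sketch under the assumptions from the thread: samples normalized to [0, 1/16] (12 bits stored in LUMINANCE16) and every tap scaled by 16.0 exactly as the shader does; the helper name is made up.

/* CPU-side model of the shader: 9*center minus the 8 neighbours,
 * with the shader's 16.0 scaling applied to every tap.
 * img[1][1] is the center sample. Hypothetical helper. */
static float laplacian_tap(float img[3][3])
{
    float sum = 16.0f * 9.0f * img[1][1];       /* colorPixel       */
    for (int y = 0; y < 3; ++y)
        for (int x = 0; x < 3; ++x)
            if (!(x == 1 && y == 1))
                sum -= 16.0f * img[y][x];       /* the 8 neighbours */
    return sum;
}

In a flat region the neighbour taps cancel all but one scaled center weight, so mid-gray stays mid-gray. But an isolated full-intensity 12-bit pixel (sample 1/16) evaluates to 9.0, far above the displayable 1.0, matching Relic's point about needing a bias or doing the scaling once before upload.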

def
03-10-2005, 02:34 AM
have you tried this:

vec4 color = mix(colorImage, colorOverlay, colorOverlay.aaaa);

Maybe colorOverlay.a is interpreted by the mix function as {0.0, 0.0, 0.0, 1.0} and not as a float?
Seems strange though...

guyinhell
03-10-2005, 03:45 AM
thanks def, I just tried it. It didn't work :( :confused:

yooyo
03-10-2005, 11:30 AM
Can you post your texture uploading code? I suspect that you misuse short (which is signed) together with GL_UNSIGNED_SHORT as the texture data type.

yooyo

guyinhell
03-10-2005, 10:39 PM
here you go yooyo, my code for uploading the texture:

glTexImage2D( GL_TEXTURE_2D,
              0,
              GL_LUMINANCE16,
              1024,
              1024,
              0,
              GL_LUMINANCE,
              GL_UNSIGNED_SHORT,
              m_image12Bits );

I am pretty sure this is unsigned. I already did some processing on this texture before (not in this case), something like color inversion (in other shaders). Everything works, except in this case.
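On yooyo's signed-vs-unsigned concern: GL reinterprets the client memory according to the type parameter, and 12-bit samples (0..4095) sit entirely in the non-negative range of a signed short, so the bit patterns are identical either way. A quick check of that reasoning (helper name made up):

#include <stdint.h>

/* Reinterpret a signed 16-bit value's bits as unsigned, the way GL
 * would read a signed buffer passed with GL_UNSIGNED_SHORT. For
 * 12-bit samples the value is unchanged; only values with the sign
 * bit set (impossible for 0..4095) would differ. Hypothetical helper. */
static uint16_t reinterpret_as_unsigned(int16_t v)
{
    union { int16_t s; uint16_t u; } bits;
    bits.s = v;
    return bits.u;
}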