PDA

View Full Version : Trying to put 16bit Data in 1 channel



Ftored
05-22-2008, 12:54 AM
Hello,

Didn't know where to post this, but I thought this might be a good place. I am working on a GPGPU application, but the problem I have is, I think, more general than that. I'm trying to put 16-bit data into the red channel of a texture. I use this:


glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
//glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
//glPixelStorei(GL_PACK_ALIGNMENT, 1);

//Moving data to Texture

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F_ARB, size, size, 0, GL_RED, GL_UNSIGNED_SHORT, texPtr);

When I use GL_UNSIGNED_SHORT I get an invalid enumerant error. Does that mean my card does not support it?
I tried the formats GL_LUMINANCE16 and GL_INTENSITY16, and the internal format GL_FLOAT_R32_NV, but again nothing.

At the moment I use GL_UNSIGNED_BYTE, but I can see some difference between my results and the ones I should have.

Can anyone tell me why I get these errors with GL_UNSIGNED_SHORT?
P.S. My card is a 7600GT.

-NiCo-
05-22-2008, 01:02 AM
First of all, check the documentation of the image reader to find out what this 16-bit data represents. There are numerous 16-bit formats such as:

GL_UNSIGNED_SHORT_5_6_5
GL_UNSIGNED_SHORT_5_6_5_REV
GL_UNSIGNED_SHORT_4_4_4_4
GL_UNSIGNED_SHORT_4_4_4_4_REV
GL_UNSIGNED_SHORT_5_5_5_1
GL_UNSIGNED_SHORT_1_5_5_5_REV

Ftored
05-22-2008, 01:07 AM
It is 16-bit grayscale. Well, that's all it says. :$ From what I've seen in the actual code: if the image is RGBA, it converts it to grayscale by adding all three channels and dividing by 3, and stores the result in an unsigned char. If it is already grayscale, it just reads it and returns it as unsigned char.

-NiCo-
05-22-2008, 01:16 AM
Well, that looks like bad code to me. Weighting all three color channels equally when converting to grayscale is not a good idea. Anyway, are you sure it's 16-bit if it returns unsigned char*? I mean, does it perform an explicit typecast or something?

Ftored
05-22-2008, 01:25 AM
Well, this is the code I found there:


const unsigned OUT_BYTES = LodePNG_InfoColor_getBpp(infoOut) / 8; /*bytes per pixel in the output image*/
const unsigned OUT_ALPHA = LodePNG_InfoColor_isAlphaType(infoOut); /*use 8-bit alpha channel*/


if(OUT_ALPHA) out[OUT_BYTES * i + 3] = 255;
out[OUT_BYTES * i + 0] = out[OUT_BYTES * i + 1] = out[OUT_BYTES * i + 2] = in[2 * i];
if(OUT_ALPHA && infoIn->key_defined && 256U * in[i] + in[i + 1] == infoIn->key_r) out[OUT_BYTES * i + 3] = 0;

I left out a lot of code that I think is irrelevant. As I see it, this returns just 8 bits and puts the value in all three R, G, B channels; in alpha it puts 255.

But anyway, if it is 8-bit, would it mind being uploaded as GL_UNSIGNED_SHORT, which is bigger?

-NiCo-
05-22-2008, 01:45 AM
Ehm, it's exactly the opposite: they're creating an RGBA image from a grayscale image, and as far as I can tell it's 8-bit unsigned char. I don't know where you get the 16 bit from. You want to upload the "out" array, right?

PS. It looks like this code is going to generate strange results for OUT_BYTES==1 and OUT_ALPHA!=0.

Ftored
05-22-2008, 02:03 AM
Ehm, it's exactly the opposite: they're creating an RGBA image from a grayscale image, and as far as I can tell it's 8-bit unsigned char. I don't know where you get the 16 bit from. You want to upload the "out" array, right?

PS. It looks like this code is going to generate strange results for OUT_BYTES==1 and OUT_ALPHA!=0.

Basically I hoped this reader was 16-bit, because my image is 16-bit. It's the "out" array I get back.
Hm, I think I posted the wrong code earlier. I'll post the whole thing for a 16-bit image:


else if(infoIn->bitDepth == 16)
{
    switch(infoIn->colorType)
    {
    case 0: /*greyscale color*/
        for(i = 0; i < numpixels; i++)
        {
            if(OUT_ALPHA) out[OUT_BYTES * i + 3] = 255;
            out[OUT_BYTES * i + 0] = out[OUT_BYTES * i + 1] = out[OUT_BYTES * i + 2] = in[2 * i];
            if(OUT_ALPHA && infoIn->key_defined && 256U * in[i] + in[i + 1] == infoIn->key_r) out[OUT_BYTES * i + 3] = 0;
        }
        break;
    case 2: /*RGB color*/
        for(i = 0; i < numpixels; i++)
        {
            if(OUT_ALPHA) out[OUT_BYTES * i + 3] = 255;
            for(c = 0; c < 3; c++) out[OUT_BYTES * i + c] = in[6 * i + 2 * c];
            if(OUT_ALPHA && infoIn->key_defined
               && 256U * in[6 * i + 0] + in[6 * i + 1] == infoIn->key_r
               && 256U * in[6 * i + 2] + in[6 * i + 3] == infoIn->key_g
               && 256U * in[6 * i + 4] + in[6 * i + 5] == infoIn->key_b) out[OUT_BYTES * i + 3] = 0;
        }
        break;
    case 4: /*greyscale with alpha*/
        for(i = 0; i < numpixels; i++)
        {
            out[OUT_BYTES * i + 0] = out[OUT_BYTES * i + 1] = out[OUT_BYTES * i + 2] = in[4 * i]; /*most significant byte*/
            if(OUT_ALPHA) out[OUT_BYTES * i + 3] = in[4 * i + 2];
        }
        break;
    case 6: /*RGB with alpha*/
        for(i = 0; i < numpixels; i++)
        {
            for(c = 0; c < OUT_BYTES; c++) out[OUT_BYTES * i + c] = in[8 * i + 2 * c];
        }
        break;
    default: break;
    }
}

I really needed to read the image as 16-bit. If it wasn't 16-bit, would it appear like the one I have in 16-bit?

-NiCo-
05-22-2008, 02:11 AM
It appears that the original data is 16-bit and the reader always converts it to 8-bit RGBA, so you should use the GL_RGBA8 internal format with the GL_UNSIGNED_BYTE data type if you want to use the "out" array. It will look like the 16-bit array, but from a computational point of view you will have lost some accuracy. If you want to keep the accuracy, you should work with the "in" array and GL_UNSIGNED_SHORT.

Ftored
05-22-2008, 04:06 AM
I can't really figure it out. Does anyone know how to read 16-bit PNG images or DICOM images in C? Any libraries? Or how to convert DICOM images to some recognisable form in C?

-NiCo-
05-22-2008, 04:09 AM
But you've already read it in 16-bit. The "in" array contains 16-bit values but the "out" array contains 8-bit values. That can't be too hard to understand?

Ftored
05-22-2008, 04:22 AM
But you've already read it in 16-bit. The "in" array contains 16-bit values but the "out" array contains 8-bit values. That can't be too hard to understand?

The hard part is: why is "in" 16-bit and "out" 8-bit? I found that the code executed for my images is this:


for(i = 0; i < numpixels; i++)
{
    if(OUT_ALPHA) out[OUT_BYTES * i + 1] = 255;
    out[OUT_BYTES * i] = in[2 * i];
    if(OUT_ALPHA && infoIn->key_defined && 256U * in[i] + in[i + 1] == infoIn->key_r) out[OUT_BYTES * i + 1] = 0;
}

Do I need to change this:

out[OUT_BYTES * i] = in[2 * i];

to this?

out[OUT_BYTES/2 * i] = in[i];
out[OUT_BYTES/2 * i + 1] = in[i + 1];

I just tried to make the "out" pointer point at the "in" pointer, but the compiler says they are not the same type. :p

-NiCo-
05-22-2008, 04:40 AM
Can you provide the full image reader code and an example image? Otherwise this is going to take ages.

Ftored
05-22-2008, 04:48 AM
:D Okay, I'll attach the image and the reader. You don't know how much I appreciate what you're doing. The PNG library is LodePNG. The image is a bit big, about 1.2 MB. I'll put both of them in a rar and PM you the link. :)

-NiCo-
05-22-2008, 06:08 AM
Okay, so first of all, download the new C++ header/source from this (http://members.gamedev.net/lode/projects/LodePNG/) site.

Then use this code:



std::vector<unsigned char> buffer, image;
LodePNG::loadFile(buffer, "1.dcm.png");
LodePNG::Decoder decoder;
//do not convert to color but keep 16bit intensity
decoder.getSettings().color_convert = 0;
decoder.decode(image, buffer.empty() ? 0 : &buffer[0], (unsigned)buffer.size());

glGenTextures(1, &dcim);
glBindTexture(GL_TEXTURE_2D, dcim);
//set swap bytes to true to account for different endian-ness, it's possible you have to remove this depending on your architecture
glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, decoder.getWidth(), decoder.getHeight(), 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, &image[0]);
//restore swap bytes
glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);

Ftored
05-22-2008, 06:36 AM
Well, now I don't get any overflows. But how do I display the luminance on screen? I mean in the shader. Should I use the last channel of gl_FragColor? Say, gl_FragColor = vec4(0.0, 0.0, 0.0, 0.5)?

Also, my framebuffer now says incomplete attachment. I used the same glTexImage2D call you posted for the output texture as well, just with NULL as the data.

-NiCo-
05-22-2008, 07:42 AM
You can't use luminance/intensity textures as an FBO attachment on the GF7 series. You can sample any of the R, G, B components to get the luminance value, but you need an RGBA texture (or a single-channel float texture rectangle) as the attachment to write to the FBO.

Ftored
05-22-2008, 07:49 AM
Ahh, I didn't know this. Going to try it right now and post my results. :D

-NiCo-
05-22-2008, 07:52 AM
Are you sure you need the 16-bit data? The input image looks rather noisy to me, so I doubt you'll be able to get any more information out of the 16-bit values than the 8-bit ones.

Ftored
05-22-2008, 07:57 AM
Well, now the FBO is OK. I return the texture.a value as red and I get a full red rectangle. :D

Well, it looked a bit noisy to me too. Maybe the DICOM-to-PNG converter doesn't work as well as it should? :$ Basically I want to compare the losses from doing image processing on the GPGPU, so I'll compare my results with my original image.

Ftored
05-22-2008, 08:37 AM
I was checking something and I noticed that the image reader reads 2x the number of pixels that are on the screen. I mean, if I print how many grey pixels it found, it says 2097152 instead of 1024*1024, which is half of that.

Why is that? Is that how it makes up the 16 bits of the image? And when I read back the results, they are all 0 until about position 500000, where they become -431682000.

-NiCo-
05-22-2008, 08:46 AM
?? And how exactly do you query the amount of grey pixels? And how do you read back the results? And what results?

Ftored
05-22-2008, 08:59 AM
For the pixel count I used the size() method of the vector. OK, that was very stupid of me; I understand the mistake now. :P

I mean I can't render the texture on screen like before, and I print 0 with the method I used before to print the values of the image. That method was like this:


void *pixels = (void *)malloc(sizeof(float)*size*size);
float *data = (float *)pixels;

glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glReadPixels(0, 0, size, size, GL_RED, GL_UNSIGNED_SHORT, data);

and then just print data as a normal float array

Or:

unsigned char *pixelsRed = NULL;
pixelsRed = new unsigned char[size*size];
memset(pixelsRed, 0, 2*sizeof(unsigned char)*size*size);

glReadPixels(0, 0, size, size, GL_RED, GL_UNSIGNED_SHORT, data);

and printed the data like this in the second case:

for (int i = 0; i < size*size; i++) {
    itoa(data[i], text, 10); // using itoa with base = 10
    printf("%d: %s\n", i, text);
}


This way, the first code gave me unclamped results and the second clamped ones. But this was when I used GL_UNSIGNED_BYTE earlier. I even reconstructed the picture in MATLAB and it looked OK.

I suspect that maybe something is wrong with the whole binding of the textures.

Ftored
05-22-2008, 09:02 AM
<-Stupid inside. :mad: I just forgot to put:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

where I was creating the texture. :D

Question is: what is the best way to save the output?

Question is which is the best way to save the output?

-NiCo-
05-22-2008, 09:59 AM
You really should try to understand the different data types and the conversions. For example, you allocate a float array (1 float = 4 bytes = 32 bits) but then call glReadPixels with GL_UNSIGNED_SHORT, which means you allocated twice the necessary amount, because an unsigned short is 2 bytes, not 4. And then you print the array as float, which means you print a 32-bit value that is really a combination of two unsigned short values. Try reading the spec on how data types are converted between internal and external formats.

Ftored
05-22-2008, 10:13 AM
Any articles I could look at?

Problem is, I am under stress and can't really think straight.
Now it's OK. I just read GL_FLOAT back from the FBO and everything works. :) Thanks for all the help, -NiCo-. I couldn't have done it without you.