Trying to put 16-bit data in one channel

Hello,

Didn't know where to post this and I thought this might be a good place. I am working on a GPGPU application, but the problem I have is, I think, more general than that. I am trying to put 16-bit data into the red channel of a texture. I use this:

glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
//glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
//glPixelStorei(GL_PACK_ALIGNMENT, 1);

// Moving data to the texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F_ARB, size, size, 0, GL_RED, GL_UNSIGNED_SHORT, texPtr);

When I use GL_UNSIGNED_SHORT I get an "invalid enumerant" error. Does that mean my card does not support it?
I tried the LUMINANCE16 and INTENSITY16 formats, and I set the internal format to GL_FLOAT_R32_NV, but again nothing.

At the moment I use GL_UNSIGNED_BYTE, but I can see some differences between my results and the ones I should have.

Can anyone help me figure out why I get these errors with GL_UNSIGNED_SHORT?
P.S. My card is a 7600GT.

First of all, check the documentation of the image reader to find out what this 16-bit data represents. There are numerous 16-bit formats such as:

GL_UNSIGNED_SHORT_5_6_5
GL_UNSIGNED_SHORT_5_6_5_REV
GL_UNSIGNED_SHORT_4_4_4_4
GL_UNSIGNED_SHORT_4_4_4_4_REV
GL_UNSIGNED_SHORT_5_5_5_1
GL_UNSIGNED_SHORT_1_5_5_5_REV
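
For example, if the data turned out to be one of those packed 16-bit RGB layouts, the upload would look roughly like this (a minimal sketch only; the image size and the pixel buffer are placeholders, not taken from the code above):

  /* Sketch: uploading packed 16-bit RGB (5-6-5) data. The *type* token
     describes how the 16 bits are packed per pixel, and the *format*
     must then be GL_RGB so the component count matches the packing. */
  const int width = 256, height = 256;   /* assumed image size */
  static GLushort pixels[256 * 256];     /* filled elsewhere with 5-6-5 values */
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
               GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels);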

It is 16-bit grayscale… well, that's all it says. :$ And from what I've seen in the actual code: if the image is RGBA it converts it to grayscale by adding all 3 channels and dividing by 3, and stores it in an unsigned char. If it is already grayscale it just reads it and returns it as unsigned char.

Well, that looks like bad code to me. Weighting all 3 color channels equally when converting to grayscale is not a good idea. Anyway, are you sure it’s 16 bit if it returns unsigned char*, I mean, does it perform an explicit typecast or something?
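
For comparison, a perceptual grayscale conversion usually weights the channels unevenly; a small sketch using the common Rec. 601 luma weights (the function and parameter names are just illustrative):

  /* Sketch: perceptual RGB -> grayscale using Rec. 601 luma weights,
     instead of the equal-weight average the reader uses. */
  unsigned char rgb_to_gray(unsigned char r, unsigned char g, unsigned char b)
  {
      return (unsigned char)(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
  }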

Well, this is the code I found there:

  const unsigned OUT_BYTES = LodePNG_InfoColor_getBpp(infoOut) / 8; /*bytes per pixel in the output image*/
  const unsigned OUT_ALPHA = LodePNG_InfoColor_isAlphaType(infoOut); /*use 8-bit alpha channel*/


  if(OUT_ALPHA) out[OUT_BYTES * i + 3] = 255;
  out[OUT_BYTES * i + 0] = out[OUT_BYTES * i + 1] = out[OUT_BYTES * i + 2] = in[2 * i];
  if(OUT_ALPHA && infoIn->key_defined && 256U * in[i] + in[i + 1] == infoIn->key_r) out[OUT_BYTES * i + 3] = 0;

I left out a lot of code that I think is irrelevant. As I see it, this returns just 8 bits and puts the same value into all three R, G, B channels; into alpha it puts 255.

But anyway, if it's 8 bit, would it matter if I uploaded it as GL_UNSIGNED_SHORT, which is bigger?

Ehm, that's exactly the opposite: they're creating an RGBA image from a grayscale image, and as far as I can tell it's 8-bit unsigned char; I don't know where you get the 16 bit from. You want to upload the "out" array, right?

PS: it looks like this code is going to generate strange results for OUT_BYTES == 1 and OUT_ALPHA != 0.

Basically I hoped that this reader was 16 bit, because my image is 16 bit. It's the "out" array I get back.
Hm… I think I posted the wrong code earlier. I'll post the whole thing for 16-bit images:

 else if(infoIn->bitDepth == 16)
    {
      switch(infoIn->colorType)
      {
        case 0: /*greyscale color*/
          for(i = 0; i < numpixels; i++)
          {
            if(OUT_ALPHA) out[OUT_BYTES * i + 3] = 255;
            out[OUT_BYTES * i + 0] = out[OUT_BYTES * i + 1] = out[OUT_BYTES * i + 2] = in[2 * i];
            if(OUT_ALPHA && infoIn->key_defined && 256U * in[i] + in[i + 1] == infoIn->key_r) out[OUT_BYTES * i + 3] = 0;
          }
        break;
        case 2: /*RGB color*/
          for(i = 0; i < numpixels; i++)
          {
            if(OUT_ALPHA) out[OUT_BYTES * i + 3] = 255;
            for(c = 0; c < 3; c++) out[OUT_BYTES * i + c] = in[6 * i + 2 * c];
            if(OUT_ALPHA && infoIn->key_defined && 256U * in[6 * i + 0] + in[6 * i + 1] == infoIn->key_r && 256U * in[6 * i + 2] + in[6 * i + 3] == infoIn->key_g && 256U * in[6 * i + 4] + in[6 * i + 5] == infoIn->key_b) out[OUT_BYTES * i + 3] = 0;
          }
        break;
        case 4: /*greyscale with alpha*/
          for(i = 0; i < numpixels; i++)
          {
            out[OUT_BYTES * i + 0] = out[OUT_BYTES * i + 1] = out[OUT_BYTES * i + 2] = in[4 * i]; /*most significant byte*/
            if(OUT_ALPHA) out[OUT_BYTES * i + 3] = in[4 * i + 2];
          }
        break;
        case 6: /*RGB with alpha*/
          for(i = 0; i < numpixels; i++)
          {
            for(c = 0; c < OUT_BYTES; c++) out[OUT_BYTES * i + c] = in[8 * i + 2 * c];
          }
          break;
        default: break;
      }
    }

I really need to read the image as 16 bit. If it weren't 16 bit, would it appear like the one I have in 16 bit?

It appears that the original data is 16 bit and the reader always converts it to 8-bit RGBA, so you should use the GL_RGBA8 internal format with the GL_UNSIGNED_BYTE data type if you want to use the "out" array. It will look like the 16-bit array, but from a computational point of view you will have lost some accuracy. If you want to keep the accuracy you should work with the "in" array and GL_UNSIGNED_SHORT.
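
In code the two options would look roughly like this (a sketch; w, h, out and in stand for the reader's buffers, and the 16-bit path may also need byte swapping since PNG stores samples big-endian):

  /* Option 1: the converted 8-bit RGBA "out" buffer (loses precision). */
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, out);

  /* Option 2: the raw 16-bit grayscale "in" buffer (keeps precision).
     PNG stores 16-bit samples big-endian, so byte swapping may be needed. */
  glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_TRUE);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, w, h, 0,
               GL_LUMINANCE, GL_UNSIGNED_SHORT, in);
  glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);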

I can't really figure it out. Does anyone know how to read 16-bit PNG images or DICOM images in C? Any libraries? Or how to convert DICOM images into some recognisable form in C?

But you've already read it in 16 bit. The "in" array contains 16-bit values and the "out" array contains 8-bit values. That can't be too hard to understand?

The hard part is: why is "in" 16 bit and "out" 8 bit? I found that the code that gets executed for my images is this:

for(i = 0; i < numpixels; i++)
{
  if(OUT_ALPHA) out[OUT_BYTES * i + 1] = 255;
  out[OUT_BYTES * i] = in[2 * i];
  if(OUT_ALPHA && infoIn->key_defined && 256U * in[i] + in[i + 1] == infoIn->key_r) out[OUT_BYTES * i + 1] = 0;
}

Do I need to change this:

out[OUT_BYTES * i] = in[2 * i];

to this?

out[OUT_BYTES/2 * i] = in[i];
out[OUT_BYTES/2 * i + 1] = in[i + 1];

I just tried to make the "out" pointer point at the "in" pointer… but it says they are not the same type. :p
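
For reference, one way to keep the full 16 bits per pixel would be to combine the two bytes of each sample yourself (a sketch only; PNG stores 16-bit samples most-significant byte first, and the buffer name gray16 is just illustrative):

  /* Sketch: rebuild the 16-bit grayscale samples from the byte stream.
     PNG stores 16-bit values big-endian: high byte first, then low byte. */
  unsigned short *gray16 = (unsigned short*)malloc(numpixels * sizeof(unsigned short));
  size_t i;
  for(i = 0; i < numpixels; i++)
      gray16[i] = (unsigned short)((in[2 * i] << 8) | in[2 * i + 1]);
  /* gray16 could then be uploaded with GL_UNSIGNED_SHORT, no byte swapping needed. */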

Can you provide the full image reader code and an example image? Otherwise this is going to take ages.

:D… OK, I'll attach the image and the reader. You don't know how much I appreciate what you're doing. The library for PNG is LodePNG. The image is a bit big, about 1.2 MB. I'll put both of them in a rar and send you the link in a PM. :)

Okay, so first of all, download the new C++ header/source from this site.

Then use this code:


std::vector<unsigned char> buffer, image;
LodePNG::loadFile(buffer, "1.dcm.png");
LodePNG::Decoder decoder;
//do not convert to color but keep 16bit intensity
decoder.getSettings().color_convert = 0;
decoder.decode(image, buffer.empty() ? 0 : &buffer[0], (unsigned)buffer.size());

glGenTextures(1, &dcim);
glBindTexture(GL_TEXTURE_2D, dcim);
//set swap bytes to true to account for different endian-ness, it's possible you have to remove this depending on your architecture
glPixelStorei(GL_UNPACK_SWAP_BYTES,GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, decoder.getWidth(), decoder.getHeight(), 0, GL_LUMINANCE, GL_UNSIGNED_SHORT,&image[0]);
//restore swap bytes
glPixelStorei(GL_UNPACK_SWAP_BYTES,GL_FALSE);
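
One thing the snippet above leaves out (my own addition, mirroring your earlier setup): without an explicit minification filter the default expects mipmaps and the texture stays incomplete when sampled, so something like this after the glTexImage2D call is advisable:

  // Suggested addition (not in the snippet above): for GPGPU use you
  // normally want unfiltered, clamped access, and a non-mipmap
  // minification filter so the texture is complete without mipmaps.
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);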

Well, now I don't get any overflows. But how do I display the luminance on screen? I mean in the shader. Should I use the last channel of gl_FragColor? Let's say gl_FragColor(0.0, 0.0, 0.0, 0.5)?

Also, my framebuffer now reports an incomplete attachment. I used the same glTexImage2D call as the one you posted for the output texture as well, just with NULL as data.

You can't use luminance/intensity textures as an FBO attachment on the GF7 series. You can sample any of the r, g, b components to get the luminance value, but you need an RGBA texture (or a single-channel float texture rectangle) as the FBO attachment to write to.
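
Roughly like this (a sketch assuming the EXT_framebuffer_object path available on that hardware; fbo and outTex are illustrative names, not from your code):

  // Sketch: a render target the GF7 can attach; the 16-bit luminance
  // texture stays as the *input* sampler only.
  GLuint fbo, outTex;
  glGenTextures(1, &outTex);
  glBindTexture(GL_TEXTURE_2D, outTex);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, decoder.getWidth(), decoder.getHeight(),
               0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

  glGenFramebuffersEXT(1, &fbo);
  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
  glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                            GL_TEXTURE_2D, outTex, 0);

  // In the fragment shader, just read one channel of the luminance texture,
  // e.g.: gl_FragColor = vec4(texture2D(tex, uv).r);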

Ahh… I didn't know this. I'm going to try it right now and post my results. :D

Are you sure you need the 16-bit data? The input image looks rather noisy to me, so I doubt you'll be able to get any more information out of the 16-bit values than the 8-bit ones.

Well, now the FBO is OK… I return the texture's alpha value as red and I get a fully red rectangle… :D

Well, it looked a bit noisy to me too. Maybe the DICOM-to-PNG converter doesn't work as well as it should? :$ Basically I want to compare the losses when doing image processing on the GPGPU, so I'll compare my results with my original image.

I was checking something and I noticed that the image reader reads twice as many pixels as are on the screen. I mean, if I print how many grey pixels it found, it says 2097152 instead of 1024*1024, which is half of what I get.

Why is that? Is that how it builds the 16 bits of the image? And when I read back the results, they are all 0 until about position 500000, where they become -431682000.