"Drawing" 32 bit directions

I want to make a 32 bpp texture where each pixel will represent a 32 bit direction instead of a color value. I need to render the scene from a certain point of view using that texture on a quad (for example), and then use glReadPixels to read the frame buffer and find out which directions are visible from that point. But, of course, I don’t want OpenGL to modify the original 32 bit values; I mean, I don’t want any kind of pixel color adjustment or interpolation. What do I have to do?

Originally posted by dslprog:
I want to make a 32 bpp texture where each pixel will represent a 32 bit direction instead of a color value. I need to render the scene from a certain point of view using that texture on a quad (for example), and then use glReadPixels to read the frame buffer and find out which directions are visible from that point. But, of course, I don’t want OpenGL to modify the original 32 bit values; I mean, I don’t want any kind of pixel color adjustment or interpolation. What do I have to do?
First encode your direction vector into an RGB color texture (map each vector component into the range 0–255). A direction normally consists of three scalar values, one per color channel (RGB), so you only have 8 bits of precision for each component. If that’s not enough, you should use a float texture extension (I’ve never used one, so I don’t know much about it) to get 24 bit on ATI or 16/32 bit on nVidia hardware, IIRC.
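The encoding step could look like this (a minimal sketch; the function names and the assumption that the components lie in [-1, 1] are mine):

static unsigned char component_to_byte(float c)
{
    /* map [-1, 1] to [0, 255], clamping against rounding overflow */
    float v = (c * 0.5f + 0.5f) * 255.0f;
    if (v < 0.0f)   v = 0.0f;
    if (v > 255.0f) v = 255.0f;
    return (unsigned char)(v + 0.5f);
}

static void encode_direction(const float dir[3], unsigned char rgb[3])
{
    rgb[0] = component_to_byte(dir[0]);
    rgb[1] = component_to_byte(dir[1]);
    rgb[2] = component_to_byte(dir[2]);
}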
Then:

  • Use point sampling texture filtering (GL_NEAREST).
  • Don’t use mip maps.
  • glReadPixels is generally slow, so don’t expect any good performance.
  • Turn off blending and anything else that could modify the color values in the frame buffer (see the sketch below).
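Roughly, assuming a plain GL 1.x context and an already-created texture object tex (the extra glDisable calls are my guess at the “such things”):

glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  /* point sampling, no mipmaps */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glDisable(GL_BLEND);     /* nothing may touch the written values */
glDisable(GL_DITHER);
glDisable(GL_LIGHTING);
glDisable(GL_FOG);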

Hope this helps

If I hear you right, you want to use texture operations to warp a 32-bit vector field.

You’ll need a 32-bit buffer to draw into, and NEAREST filtering on the texture to prevent the texels from being blended. There’s a bit more that I don’t know off the top of my head.

Sorry for my English; when I said 32 bit directions I meant 32 bit memory addresses, so I need a texture composed of memory addresses instead of colours.

Make sure you do the following:

Use nearest filtering and no mip maps (mip maps are “on” by default).

Request an internal format for the texture that has a sufficient number of bits (e.g. RGBA8).

Upload the texture correctly, i.e. try using a test image first so you know the pixel-to-texel mapping and the upload code are correct.
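For example (a sketch; Width, Height and Data stand in for your own variables, and the unpack alignment line is only needed if your rows aren’t 4-byte aligned):

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, Width, Height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, Data);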

If I understand you right, you want to render an index map with arbitrary 32 bit indices. Encode your 32 bit value as RGBA (one channel for each byte), use neither mipmaps nor linear filtering, and make sure your framebuffer has destination alpha. Then you can glReadPixels() the original 32-bit RGBA value and convert it back to your 32 bit index.
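The packing/unpacking could look like this (a sketch; the function names are made up, and x/y are the window coordinates you want to query):

static void index_to_rgba(unsigned int index, unsigned char rgba[4])
{
    rgba[0] = (unsigned char)( index        & 0xFF);
    rgba[1] = (unsigned char)((index >>  8) & 0xFF);
    rgba[2] = (unsigned char)((index >> 16) & 0xFF);
    rgba[3] = (unsigned char)((index >> 24) & 0xFF);
}

static unsigned int rgba_to_index(const unsigned char rgba[4])
{
    return  (unsigned int)rgba[0]
         | ((unsigned int)rgba[1] <<  8)
         | ((unsigned int)rgba[2] << 16)
         | ((unsigned int)rgba[3] << 24);
}

/* after rendering: */
unsigned char pixel[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
unsigned int index = rgba_to_index(pixel);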

I hope this makes sense :slight_smile:

BTW, do you need a texture at all, or would it suffice to render each object with a solid color? (For instance, if you want to render the pointer to your scene object as its color, which would become obsolete anyway with AMD-64 :smiley: :smiley: ).
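That variant could look roughly like this (a sketch; object_id is illustrative, and index_to_rgba is the same byte-splitting as above):

unsigned char rgba[4];
index_to_rgba(object_id, rgba);
glDisable(GL_TEXTURE_2D);                       /* plain untextured geometry */
glColor4ub(rgba[0], rgba[1], rgba[2], rgba[3]); /* one constant ID color per object */
/* ...draw the object's geometry... */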

I’m almost sure the frame buffer is RGBA (32 bpp) and my texture is also RGBA (32 bpp), but when I write an RGBA color like (0,0,0,0) to the frame buffer, what I read back is the color (0,0,0,255). Does OpenGL set every alpha component to 255? I’ve set the pixel format to PFD_TYPE_RGBA and 32 bpp, and I’ve done the same thing with the texture:
glTexImage2D(GL_TEXTURE_2D,0,4,Width,Height,0,GL_RGBA,GL_UNSIGNED_BYTE,Data);
There are no mipmaps, the texture filter is GL_NEAREST, and there’s no blending at all. What am I doing wrong?

You don’t have destination alpha. Use this to check your framebuffer layout:

int red_bits, green_bits, blue_bits, alpha_bits;
glGetIntegerv(GL_RED_BITS, &red_bits);
glGetIntegerv(GL_GREEN_BITS, &green_bits);
glGetIntegerv(GL_BLUE_BITS, &blue_bits);
glGetIntegerv(GL_ALPHA_BITS, &alpha_bits);
printf("R%d G%d B%d A%d\n", red_bits, green_bits, blue_bits, alpha_bits);

When I query the color bit planes, OpenGL gives me this:
8 bits for red, green and blue, and 0 bits for alpha.
Now if I set the pfd.cAlphaBits field to 8, I do get 8 alpha bits, but it still doesn’t work.
Microsoft’s OpenGL documents say that alpha bit planes are not supported.

Microsoft’s SOFTWARE OpenGL (1.1) implementation doesn’t support alpha bitplanes, but most hardware drivers do. What does “it still doesn’t work” mean? Same result as before, alpha is 255 when read back?

Hell!

Maybe glCopyTexImage2D() the framebuffer into another texture then, and glGetTexImage() that… ?
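In other words, something like this (a sketch; tex, width, height and buffer are placeholders, and the texture must be big enough to hold the copied region):

glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 0, 0, width, height, 0);  /* framebuffer -> texture */
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);    /* texture -> client memory */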

Now it works! I had to set cRedBits, cGreenBits, cBlueBits and cAlphaBits to 8 in the pixel format descriptor. I also had to change the texture environment to GL_MODULATE. At first I used GL_DECAL because I thought that way OpenGL wouldn’t multiply the texels by the current color, but now, with GL_MODULATE, I need to set the current color to (1.0f,1.0f,1.0f,1.0f) so the texels keep their original values.
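For reference, the pixel format part could look roughly like this (a sketch based on the description above; the dwFlags and cDepthBits values are my assumptions, and hdc is an existing device context):

PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cRedBits   = 8;
pfd.cGreenBits = 8;
pfd.cBlueBits  = 8;
pfd.cAlphaBits = 8;    /* the crucial part: request destination alpha */
pfd.cDepthBits = 24;
int format = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, format, &pfd);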

Thank you very much.

GL_REPLACE uses the texture without alteration
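That is (with GL_REPLACE the texel color and alpha are used as-is, independent of the current color):

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);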

Originally posted by harsman:
Microsoft’s SOFTWARE OpenGL (1.1) implementation doesn’t support alpha bitplanes, but most hardware drivers do. What does “it still doesn’t work” mean? Same result as before, alpha is 255 when read back?
I’m certain that part of the documentation is out of date or just wrong. I can get pixel formats with non-zero alpha bits from the generic implementation:

PixelFormat 23
DRAW_TO_WINDOW | DRAW_TO_BITMAP | SUPPORT_GDI | SUPPORT_OPENGL | GENERIC_FORMAT
Color bits(shift): 32 8(16) 8(8) 8(0)
Alpha bits(shift): 8(0)
Depth bits: 32
Stencil bits: 8
Accum bits: 64 - 16 16 16 16
Renderer: GDI Generic
GL bitdepth (RGBA): 8 8 8 8

(The pixel format number will vary on each machine, as software pixel formats get shifted by the number of hardware pixel formats.)
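A listing like that can be produced by enumerating the formats with DescribePixelFormat, roughly like this (a sketch; hdc is an existing device context, and I’m only printing formats that expose destination alpha):

PIXELFORMATDESCRIPTOR pfd;
int i, count;

count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);  /* returns the highest format index */
for (i = 1; i <= count; ++i)
{
    DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
    if ((pfd.dwFlags & PFD_SUPPORT_OPENGL) && pfd.cAlphaBits > 0)
        printf("format %d: %d color bits, %d alpha bits\n",
               i, pfd.cColorBits, pfd.cAlphaBits);
}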