Outputting FragColor as a ubyte

Hi

Great, but I’m doing GPGPU and I don’t care about floats. My texture type is GL_UNSIGNED_BYTE, and I would like to be able to write:

gl_FragColor.x = 255;

All I can do right now is cast my int to a float, divide it by 255, and let the hardware re-multiply it by 255 so that it fits in the GL_UNSIGNED_BYTE texture.
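In GLSL that looks like this (myValue is just a placeholder name for my integer):

gl_FragColor.x = float(myValue) / 255.0; // the fixed-point write scales it back to 0..255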

Is there any way to write the integer directly?

Thanks

What possibility do you have in mind? Do you mean an ivec4 gl_FragColor?

I don’t think I’ve ever seen an ivec4 gl_FragColor. As far as I know, ints are converted to floats anyway; even a bool ends up as either 0.0f or 1.0f.

From the EXT_gpu_shader4 specification:

The binding of a user-defined varying out variable to a fragment color number can be specified explicitly. The command

void BindFragDataLocation(uint program, uint colorNumber, const char *name);

specifies that the varying out variable name in program should be bound to fragment color colorNumber when the program is next linked.

Using this, you can declare a custom integer output varying and bind it to color output 0.
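A rough sketch of how that could look (the output name fragValue is made up here; the actual GL entry point carries the EXT suffix, and the color attachment needs an integer internal format such as GL_RGBA8UI_EXT from EXT_texture_integer, otherwise the values are still normalized):

// fragment shader
#extension GL_EXT_gpu_shader4 : enable
varying out uvec4 fragValue; // user-defined integer output instead of gl_FragColor

void main()
{
    fragValue = uvec4(255u, 0u, 0u, 255u); // written as raw integers, no [0,1] normalization
}

// C side, before linking the program:
glBindFragDataLocationEXT(program, 0, "fragValue");
glLinkProgram(program);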

How about GeForce’s pack/unpack instructions? They are present in NV40, and are possibly macros on G80:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=239846#Post239846

> All I can do right now is cast my int to a float, divide it by 255, and let the hardware re-multiply it by 255 so that it fits in the GL_UNSIGNED_BYTE texture.
>
> Is there any way to write the integer directly?

Not without GL 3.0. There, you can declare arbitrary shader outputs and map them to particular FBO attachment points, so you can create a uvec4 output that goes to a texture with an unsigned integer internal format.
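Something along these lines (an untested sketch; the names are illustrative):

// GL 3.0 / GLSL 1.30 fragment shader
#version 130
out uvec4 outValue;

void main()
{
    outValue = uvec4(255u); // stored as-is in the integer texture
}

// C side: unsigned integer texture as the render target
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, NULL);
// map the shader output to color number 0, then (re)link
glBindFragDataLocation(program, 0, "outValue");
glLinkProgram(program);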

Anyway, the division by 255 (multiplication by 1.0/255.0) is the fastest way, so I suggest you stick with it.

OK, thanks for your answers! I’ll see what I can do :slight_smile: