Rendering to integer texture?



GCT
09-22-2010, 04:16 PM
I've got a GPGPU-style program I'm writing; it basically accumulates some sine/cosine information in a buffer using the OpenGL API.

Currently I'm doing this with a pair of floating-point textures attached to a framebuffer: I read from one, accumulate into the other in my fragment shader, and swap them after each frame.

However, I have to target an embedded platform that doesn't support writing to floating-point textures, so I've got to get by with 16-bit integer textures.

What I'm curious about is what's different about reading from/rendering to an integer texture? Is there any scaling I need to do when outputting from my fragment shader? I also want to make sure I'm setting up my textures correctly for 16-bit integer:


glBindTexture(GL_TEXTURE_RECTANGLE, ctx->rtt_h[ii]);
/* GL_RGB16 is an unsigned normalized format, so pass GL_UNSIGNED_SHORT
 * (not GL_SHORT) as the client data type for the zero fill. */
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGB16, ctx->width, ctx->height,
             0, GL_RGB, GL_UNSIGNED_SHORT, zero_buff);

Any guidance given is appreciated.

Alfonse Reinheart
09-22-2010, 04:53 PM
What I'm curious about is what's different about reading from/rendering to an integer texture?

This is not an "integer texture." It is a texture that uses normalized unsigned integers to store floating-point values on the range [0, 1]. It does not store integer values on the range [0, 65535].

When rendering to a normalized texture (which you have been doing all along), the standard rules apply: you output floating-point values, which will then be clamped to [0, 1]. OpenGL then converts them automatically to normalized unsigned integers.
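The write/read round trip can be sketched in a few lines. This is just a model of the conversion a 16-bit unsigned normalized channel applies (hypothetical helper names; 65535 is the maximum value of a 16-bit unsigned channel), not actual GL code:

```python
def float_to_unorm16(x):
    """Mimic storing a fragment shader output in a 16-bit
    unsigned normalized channel: clamp to [0, 1], then scale."""
    clamped = min(max(x, 0.0), 1.0)
    return int(round(clamped * 65535))

def unorm16_to_float(n):
    """Mimic the value a texture fetch returns for that channel."""
    return n / 65535.0
```

Anything you write outside [0, 1] is silently clamped before storage, which is why out-of-range accumulation shows up as clipped output rather than an error.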

Reading from it works just like any color texture. You get values on the range [0, 1].

If you want to use a GL_RGB16 texture to store values outside of that range, you will have to scale them down into [0, 1] before writing, then scale them back up when reading.
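For example, if the accumulated values are known to stay within some bound [-R, R] (R here is an assumed bound your accumulation would have to guarantee), the remapping is a single linear transform in each direction, sketched below in Python rather than GLSL:

```python
R = 4.0  # hypothetical known bound on the accumulated values

def encode(v):
    """Map [-R, R] onto [0, 1] before writing from the fragment shader."""
    return (v / R) * 0.5 + 0.5

def decode(t):
    """Map a [0, 1] texture fetch back onto [-R, R] after reading."""
    return (t - 0.5) * 2.0 * R
```

In GLSL each direction is one multiply-add; the same constants must be used on both the write and read sides, or the accumulation will drift. Note also that each stored step now represents 2R/65535 of your value range, so the quantization error grows with R.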

GCT
09-22-2010, 06:31 PM
Ah OK, I knew I got floating-point values out of the texture read, but I wasn't sure whether I should write [0, 1] floating-point values out of the fragment shader or something else.

When I run it with the 16-bit textures the output looks clamped somehow, even though I'm normalizing my output on each pass, so I shouldn't be overflowing. Thanks for the tips, though; that'll at least help me as I debug it.