Interpreting texture data in a shader

When we have a floating-point internal format and GL_UNSIGNED_BYTE data, does OpenGL clamp the data to [0, 1]? What if the image format is GL_FLOAT? Since floating-point values are not normalized, is there any case where the values will be clamped to [0, 1]?

In a shader, will a floating-point sampler clamp the data to [0, 1], or will it just return the floating-point data? If the data is not normalized, how is the intensity of the color determined?

You seem to be confusing your terminology a bit, which makes it hard to tell what you’re talking about.

The internal format of an image is the format of the data that is stored by OpenGL. If you’re using a function like glTexImage2D, that would be the third parameter.

The pixel transfer format and type (the latter being the only place where “GL_UNSIGNED_BYTE” would be valid) are about something else entirely. These parameters define how a pixel transfer operation takes place. Specifically, these values describe the format of the data you are passing to OpenGL.

So if you have a glTexImage2D command like:


glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, ..., GL_RGBA, GL_FLOAT, ptr);

What you are saying is this:

→ I want OpenGL to allocate a texture, where each texel contains 4 channels, and each channel is a 32-bit floating point value [GL_RGBA32F]. I then want to initialize this data from a pointer that contains 4 channels [GL_RGBA], and each channel is a 32-bit floating point value [GL_FLOAT].

If you did this:


glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, ..., GL_RGBA, GL_UNSIGNED_BYTE, ptr);

Here is what you’re saying:

→ I want OpenGL to allocate a texture, where each texel contains 4 channels, and each channel is a 32-bit floating point value [GL_RGBA32F]. I then want to initialize this data from a pointer that contains 4 channels [GL_RGBA], and each channel is an 8-bit unsigned normalized byte [GL_UNSIGNED_BYTE].

This means that OpenGL will have to convert each pixel from your 4-channel unsigned normalized byte to a 4-channel floating-point value. Unsigned normalized bytes always range from [0, 1], so that is the range of values that you initialize the texture with.
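
Conceptually, the per-channel conversion OpenGL performs here is just a divide by the type's maximum value. A sketch of the rule (not actual GL code):

float unorm_byte_to_float(unsigned char v)
{
    /* Unsigned normalized conversion: 0 -> 0.0, 255 -> 1.0. */
    return v / 255.0f;
}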

Also, this will be exceedingly slow, so please don’t do this…

Floating point data is floating point data. It will therefore return exactly what you put into it. If the only values you loaded were on the range [0, 1], then that’s what values you get out.

The “intensity of the color” is entirely up to how you interpret it. The data doesn’t even have to be a color; it’s just a floating-point value. It means exactly and only what you want it to mean.
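
For example (a minimal sketch; the uniform and varying names are made up), a fragment shader sampling a GL_RGBA32F texture gets back exactly the stored values, so a stored 5.0 arrives as 5.0:

/* Fragment shader source (GLSL embedded in a C string). The sampled
 * value is returned as-is; nothing forces it into [0, 1]. */
const char *fs_src =
    "#version 330 core\n"
    "uniform sampler2D tex;\n"
    "in vec2 uv;\n"
    "out vec4 color;\n"
    "void main() { color = texture(tex, uv); }\n";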

If the texture needs to be a floating-point format because of subsequent usage, but the data you have to initialise it with comes as normalised bytes, letting OpenGL perform the conversion will probably be faster (or at least no slower) than performing the conversion yourself.

But if you have a free choice over either the internal or external format, the transfer will be faster if the two match.
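
For example (a sketch; w, h and the data pointers are placeholders), both of these uploads avoid per-pixel conversion because the client data matches the internal format:

/* 8-bit data into an 8-bit normalised internal format: no conversion. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, byte_pixels);

/* Float data into a float internal format: no conversion. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0,
             GL_RGBA, GL_FLOAT, float_pixels);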

Yes.

If format is an integer format, integer values (signed or unsigned byte, short, or int) are treated as integers. But it’s an error if internalformat is an integer format but format isn’t, or vice-versa.
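
For example (a sketch), an unsigned-integer texture must be uploaded with the matching _INTEGER transfer format; in GLSL it is then read through a usampler2D and yields raw integers:

/* Integer internal format paired with an integer transfer format. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, w, h, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels);
/* GL_RGBA8UI with plain GL_RGBA (or GL_RGBA8 with GL_RGBA_INTEGER)
 * would generate GL_INVALID_OPERATION. */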

When format isn’t an integer format, if type is an integer type then the values are treated as normalised, i.e. the largest representable value will be mapped to 1.0 and the smallest representable value to -1.0 for a signed type or 0.0 for an unsigned type.
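
As a sketch of the signed case under the current (GL 4.2+) conversion rule, where both -128 and -127 end up at -1.0:

float snorm_byte_to_float(signed char v)
{
    /* Signed normalised conversion: 127 -> 1.0, -127 -> -1.0,
     * and -128 is clamped so it also maps to -1.0. */
    float f = v / 127.0f;
    return f < -1.0f ? -1.0f : f;
}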

If internalformat is a (signed or unsigned) normalised format (either no suffix or “_SNORM” suffix), it can only hold values between -1.0 and 1.0 (signed) or 0.0 and 1.0 (unsigned), so uploading floating-point data will result in the values being clamped.

If type is an integer type, and format isn’t an integer format, the values are treated as normalised values so they’re forced to be between -1.0 and 1.0 (signed) or 0.0 and 1.0 (unsigned). If you supply signed normalised values (e.g. GL_BYTE) but internalformat is an unsigned normalised format, negative values will be clamped to 0.0.

If internalformat is a floating-point format (with an “F” suffix) and type is a floating-point type (GL_FLOAT, GL_HALF_FLOAT, GL_UNSIGNED_INT_10F_11F_11F_REV or GL_UNSIGNED_INT_5_9_9_9_REV), no clamping to [-1.0, 1.0] is performed. Values larger than those representable by the internal format will, however, be clamped to the representable range.

No clamping will be performed when sampling textures within a shader. Clamping may be performed when a fragment shader writes values outside of the range supported by the corresponding framebuffer attachment. But reading values from an RGBA32F texture and writing them to an RGBA32F framebuffer attachment won't result in clamping (unless the shader does so explicitly).
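
A sketch of that last case (names are placeholders): with a floating-point colour attachment, whatever the fragment shader writes is stored as-is:

/* Create a GL_RGBA32F texture and attach it to an FBO; fragment
 * shader outputs written to it are not clamped to [0, 1]. */
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0,
             GL_RGBA, GL_FLOAT, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);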

However you like. E.g. for high dynamic range (HDR) rendering, the intermediate colours may represent physical intensity (e.g. in W/m²), with the mapping to the range of the physical framebuffer performed at the end of post-processing.
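
As a sketch of that final mapping step (a simple Reinhard operator, one choice among many; all names here are made up):

/* Post-processing fragment shader (GLSL in a C string): maps HDR
 * intensities down to the [0, 1] range of the physical framebuffer. */
const char *tonemap_src =
    "#version 330 core\n"
    "uniform sampler2D hdr;\n"
    "in vec2 uv;\n"
    "out vec4 color;\n"
    "void main() {\n"
    "    vec3 c = texture(hdr, uv).rgb;\n"
    "    color = vec4(c / (c + vec3(1.0)), 1.0); // Reinhard\n"
    "}\n";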

Thanks for the clarification :slight_smile: