glTexImage2D format parameters of _INTEGER flavour

They are presented in table 3.3 of the OpenGL 4.1 core specification (although I suppose they were introduced in 3.0).
What’s their point exactly? The “internalformat” parameter already tells GL whether the texture’s pixel components should be integral, and the “type” parameter tells GL what type the pixel components have in client memory. So I see no rationale for the _INTEGER formats, and the duplication of information only leads to confusion.

[quote]What’s their point exactly? The “internalformat” parameter already tells GL whether the texture’s pixel components should be integral, and the “type” parameter tells GL what type the pixel components have in client memory.[/quote]

It tells the pixel transfer whether the incoming integer pixel data is a normalized format or not. The pixel transfer happens in two stages: first, the incoming data is converted from its stored form into an intermediate format (based on the format parameter); then it is converted from that intermediate format to the internal format of the image.

Note that the actual implementation doesn’t have to do it that way in all cases. If the incoming format matches the internal format, then it can just copy the data directly. But because the pixel transfer is specified like this, the first step needs to be self-contained. So it has to know up front whether the data is in an integral format or not.
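To make the distinction concrete, here is a minimal sketch (assuming a current GL 3.0+ context; the texture names and the 16×16 single-channel byte buffer “pixels” are hypothetical) showing the same client data uploaded once as normalized and once as integer texels:

[code]
// Sketch only: assumes a current GL 3.0+ context and the GL headers.
// "pixels" stands in for a 16x16 single-channel image of unsigned bytes.
GLuint tex[2];
GLubyte pixels[16 * 16] = { 0 };
glGenTextures(2, tex);

// GL_RED: the bytes are treated as normalized data, so a byte value of 128
// comes out as roughly 0.5 when sampled through a sampler2D.
glBindTexture(GL_TEXTURE_2D, tex[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 16, 16, 0, GL_RED, GL_UNSIGNED_BYTE, pixels);

// GL_RED_INTEGER: the bytes are kept as integers, so the same byte value of 128
// stays 128u when sampled through a usampler2D (GL_R8UI is an integer format).
glBindTexture(GL_TEXTURE_2D, tex[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, 16, 16, 0, GL_RED_INTEGER, GL_UNSIGNED_BYTE, pixels);
[/code]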

[quote=Alfonse Reinheart]

What’s their point exactly?
It tells the pixel transfer whether the incoming integer pixel data is a normalized format or not.
[/quote]

You mean it tells GL whether to normalize the incoming integer pixel data to 0…1 or to leave it as integers? So the format parameter’s responsibility is not only to tell GL how to interpret the pixel data in client memory (which channels the pixel is composed of) but also to trigger or skip the conversion to 0…1 in the intermediate format. In my opinion that’s a bad design decision, because one parameter carries two unrelated pieces of information.
What seems worse, though, is that the conversion-to-intermediate-format step is exposed to the programmer at all. Format and internal format must match anyway; otherwise INVALID_OPERATION is generated.

[quote]What seems worse, though, is that the conversion-to-intermediate-format step is exposed to the programmer at all. Format and internal format must match anyway; otherwise INVALID_OPERATION is generated.[/quote]

Where does the spec say that they have to match? Indeed, the whole pixel unpacking and conversion thing exists for the sole purpose of making it so that the given format and the internal format don’t have to match.

And this isn’t entirely useless. The way pixel transfers are defined is that the value is first converted to floating-point, then it is converted to the internal format. So you can use RED_INTEGER to pass integers directly as floats; an input value of 128 becomes a float value of 128.0. Not the most useful thing, to be sure. And probably frighteningly slow. But it does have a purpose.

3.8.3 Texture Image Specification
Textures with integer internal formats (see table 3.12) require integer data.
The error INVALID_OPERATION is generated if the internal format is integer and format is not one of the integer formats listed in table 3.3; if the internal format is not integer and format is an integer format; or if format is an integer format and type is FLOAT, HALF_FLOAT, UNSIGNED_INT_10F_11F_11F_REV, or UNSIGNED_INT_5_9_9_9_REV.
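
For illustration, a hedged sketch of what that rule forbids and allows (again assuming a current GL context, a bound GL_TEXTURE_2D texture object, and a hypothetical 4×4 RGBA byte buffer “data”):

[code]
// Sketch only: "data" stands in for a 4x4 RGBA image of unsigned bytes.
GLubyte data[4 * 4 * 4] = { 0 };

// Integer internal format (GL_RGBA8UI) combined with the normalized client
// format GL_RGBA: per the rule quoted above this generates INVALID_OPERATION
// and no texture image is specified.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 4, 4, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
// glGetError() returns GL_INVALID_OPERATION here.

// The same internal format with the matching _INTEGER client format is accepted.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 4, 4, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, data);
// glGetError() returns GL_NO_ERROR here.
[/code]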