Image Format


An Image Format describes the way that the images in Textures and renderbuffers store their data; it defines the meaning of that data.

There are three basic kinds of image formats: color, depth, and depth/stencil. Unless otherwise specified, all formats can be used for textures and renderbuffers equally. Also, unless otherwise specified, all formats can be multisampled equally.

Color formats

Colors in OpenGL are stored in RGBA format. That is, each color has a Red, Green, Blue, and Alpha component. The Alpha value has no intrinsic meaning; it means only whatever the shader that uses it wants it to mean. Usually, Alpha is used as a translucency value, but that depends entirely on what the shader does with it.

Note: Technically, any of the 4 color values can take on whatever meaning you give them in a shader. Shaders are arbitrary programs; they can consider a color value to represent a texture coordinate, a Fresnel index, or anything else they so desire.

Color formats can be stored in one of 3 ways: normalized integers, floating-point, or integral. Both normalized integer and floating-point formats will resolve, in the shader, to a vector of floating-point values, whereas integral formats will resolve to a vector of integers.

Normalized integer formats themselves are broken down into 2 kinds: unsigned normalized and signed normalized. Unsigned normalized formats store floating-point values between 0 and 1 by converting them into integers on the range [0, MAX_INT], where MAX_INT is the largest integer for that bitdepth. For example, say you have an unsigned normalized integer color format that stores each component in 8 bits. If the value of a component is the integer 128, then the value it returns is 128/255, or 0.502.

Signed normalized integer formats store the values [-1, 1] by mapping signed integers on the range [MIN_INT, MAX_INT], where MIN_INT is the most negative integer for the bitdepth in 2's complement and MAX_INT is the largest positive integer for that bitdepth.
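As a minimal C sketch of these two conversions for 8-bit components (the function names are purely illustrative; the real conversion happens in hardware):

  #include <stdint.h>

  /* Unsigned normalized: map [0, 255] onto [0.0, 1.0]. */
  float unorm8_to_float(uint8_t v)
  {
      return v / 255.0f;               /* e.g. 128 -> 0.502 */
  }

  /* Signed normalized: map [-128, 127] onto [-1.0, 1.0],
     following the mapping described above. */
  float snorm8_to_float(int8_t v)
  {
      return (2.0f * v + 1.0f) / 255.0f;
  }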

Image formats do not have to store each component. When the shader samples such a texture, it will still resolve to a 4-value RGBA vector. The components not stored by the image format are filled in automatically. Zeros are used if R, G, or B is missing, while a missing Alpha always resolves to 1.

Note: Texture swizzling can change what the missing values are.

OpenGL has a particular syntax for writing its color format enumerants. It looks like this:

  GL_[components][size][type]
The components field is the list of components that the format stores. OpenGL only allows "R", "RG", "RGB", or "RGBA"; other combinations are not allowed as internal image formats. The size is the bitdepth for each component. The type indicates which of the five types described above the format uses. No type suffix at all means unsigned normalized integers. For the other types, the following suffixes are used:

  • "F": Floating-point. Thus, GL_RGBA32F is a floating-point format where each component is a 32-bit IEEE floating-point value.
  • "I": Signed integral format. Thus GL_RGBA8I gives a signed integer format where each of the four components is an integer on the range [-128, 127].
  • "UI": Unsigned integral format. The values go from [0, MAX_INT] for the integer size.
  • "_SNORM": Signed normalized integer format.

If you want a 3-component unsigned integral format, with 8 bits per component, you use GL_RGB8UI. A 1-component floating-point format that uses 16-bits per component is GL_R16F.
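As a sketch, allocating texture storage with two of these formats might look like the following (width, height, and the pixel pointers are assumed to be defined elsewhere, and a texture is assumed to be bound to GL_TEXTURE_2D):

  /* An 8-bit-per-channel unsigned normalized RGBA texture. */
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, pixels);

  /* A one-channel, 16-bit floating-point texture. */
  glTexImage2D(GL_TEXTURE_2D, 0, GL_R16F, width, height, 0,
               GL_RED, GL_FLOAT, floatPixels);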

For each type of color format, there is a limit on the available bitdepths per component:

Format type                       Bitdepths per component
unsigned normalized (no suffix)   2*, 4*, 5*, 8, 16
signed normalized (_SNORM)        8, 16
unsigned integral (UI)            8, 16, 32
signed integral (I)               8, 16, 32
floating-point (F)                16, 32

* These bitdepths are restricted to "RGB" and "RGBA" formats only; you cannot say "GL_RG4". A bitdepth of 2 is further restricted to "RGBA" only (GL_RGBA2).

16-bit per-channel floating-point is also called "half-float". There is an article on the specifics of these formats.

The bitdepth can also be omitted, but only with unsigned normalized formats. Doing so gives OpenGL the freedom to pick a bitdepth, but it is generally best to select one yourself.

Special color formats

There are a number of color formats that exist outside of the normal syntax described above.

  • GL_R3_G3_B2: Normalized integer, with 3 bits for R and G, but only 2 for B.
  • GL_RGB5_A1: 5 bits each for RGB, 1 for Alpha. This format is generally trumped by compressed formats (see below), which give better than 16-bit quality in far fewer than 16 bits per pixel.
  • GL_RGB10_A2: 10 bits each for RGB, 2 for Alpha. This can be a useful format for framebuffers, if you do not need a high-precision destination alpha value. It carries more color depth, thus preserving subtle gradations. It can also be used for normals, though there is no signed-normalized version, so you have to do the conversion manually (see the sketch after this list). It is also a required format (see below), so you can count on it being present.
  • GL_R11F_G11F_B10F: This uses special 11 and 10-bit floating-point values. An 11-bit float has no sign-bit; it has 6 bits of mantissa and 5 bits of exponent. A 10-bit float has no sign-bit, 5 bits of mantissa, and 5 bits of exponent. This is very economical for floating-point values (using only 32 bits per pixel), so long as your floating-point data fits within the given range and you can live without destination alpha.
  • GL_RGB9_E5: This one is complicated. It is an RGB format of type floating-point. The 3 color values have 9 bits of precision, and they share a single exponent. The computation for these values is not as simple as for GL_R11F_G11F_B10F, and they aren't appropriate for everything. But they can provide better results than that format if most of the colors in the image have approximately the same exponent, or are too small to be significant. This is a required format, but it is not required for renderbuffers; do not expect to be able to render to these.
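Since GL_RGB10_A2 has no signed-normalized counterpart, normals must be range-compressed by hand before storage. A minimal sketch of the idea (the function name is hypothetical; a shader would invert it with n * 2.0 - 1.0):

  /* Pack one component of a unit normal from [-1, 1] into [0, 1],
     suitable for storage in an unsigned normalized format. */
  float pack_normal_component(float n)
  {
      return n * 0.5f + 0.5f;
  }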

sRGB colorspace

Normally, color values are assumed to be in a linear colorspace. However, it is often useful to provide color values in non-linear colorspaces. OpenGL provides support for the sRGB colorspace with two formats:

  • GL_SRGB8: sRGB image with no alpha.
  • GL_SRGB8_ALPHA8: sRGB image with a linear Alpha.

These are normalized integer formats.

When used as a render target, OpenGL will automatically convert the output colors into the sRGB colorspace if, and only if, GL_FRAMEBUFFER_SRGB is enabled. The alpha will be written as given.
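Enabling that conversion is a single piece of state (a sketch; the sRGB framebuffer setup itself is assumed):

  /* Convert linear shader outputs to the sRGB colorspace on write. */
  glEnable(GL_FRAMEBUFFER_SRGB);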

Note that there are compressed forms of sRGB image formats; see below for details.

Compressed formats

Texture compression is a valuable memory-saving tool, one that you should use whenever it is applicable. There are two kinds of compressed formats in OpenGL: generic and specific.

Generic formats don't have any particular internal representation. OpenGL implementations are free to do whatever they want with the data, including using a regular uncompressed format. You cannot precompute compressed data in generic formats and upload it with the glCompressedTexSubImage* functions; instead, these formats rely on the driver to compress the data for you. Because of this uncertainty, it is suggested that you avoid generic formats in favor of formats with a specific compression scheme.

The generic formats use the following form:

  GL_COMPRESSED_[type]
Where type can be "RED", "RG", "RGB", "RGBA", "SRGB" or "SRGB_ALPHA". The last two represent generic colors in the sRGB colorspace.

The specific compressed formats required by OpenGL are of the form:

  GL_COMPRESSED_[type]_RGTC1 (one channel)
  GL_COMPRESSED_[type]_RGTC2 (two channels)
RGTC is a special compressed format described in Red Green Texture Compression. The valid type values are "RED" and "SIGNED_RED" for the one-channel RGTC1 forms, and "RG" and "SIGNED_RG" for the two-channel RGTC2 forms. These are all normalized formats, so the difference between signed and unsigned is simply the difference between signed normalized and unsigned normalized.

Despite being color formats, compressed images are not color-renderable, for obvious reasons. Therefore, attaching a compressed image to a framebuffer object will cause that FBO to be incomplete and thus unusable. For similar reasons, no compressed formats can be used as the internal format of renderbuffers.


The extension GL_EXT_texture_compression_s3tc covers the popular DXT formats. It is not technically a core feature, but virtually every implementation of OpenGL written in the last 10 years supports it. It is thus a ubiquitous extension.

This extension provides 4 specific compressed formats. It implements what DirectX calls DXT1, 3, and 5. It has two versions of DXT1: one with a single-bit alpha, and one without.

The formats are: GL_COMPRESSED_RGB_S3TC_DXT1_EXT, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT, GL_COMPRESSED_RGBA_S3TC_DXT3_EXT, and GL_COMPRESSED_RGBA_S3TC_DXT5_EXT. Texture compression can be combined with colors in the sRGB colorspace via the EXT_texture_sRGB extension. This defines sRGB versions of the above formats: GL_COMPRESSED_SRGB_S3TC_DXT1_EXT, GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT1_EXT, GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT3_EXT, and GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT.
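Uploading precompressed data for one of these formats might look like the following sketch (compressedData and the image dimensions are assumed to come from an offline compressor or a DDS file):

  /* DXT1 stores each 4x4 block of texels in 8 bytes. */
  GLsizei imageSize = ((width + 3) / 4) * ((height + 3) / 4) * 8;
  glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                         GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                         width, height, 0, imageSize, compressedData);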

Depth formats

These image formats store depth information. There are two kinds of depth formats: normalized integer and floating-point. The normalized integer versions (GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, and GL_DEPTH_COMPONENT32) work similarly to normalized integers for color formats; they map the integer range onto the depth values [0, 1]. The floating-point version (GL_DEPTH_COMPONENT32F) can store any 32-bit floating-point value.

What makes depth texture formats particularly interesting is that they can be used with the so-called "shadow" texture lookup functions. Color formats cannot be used with these texture functions.
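For instance, a depth texture set up for shadow-style lookups might be created like this (a sketch; the texture binding and the shader-side shadow sampler are assumed):

  /* Allocate a 32-bit floating-point depth texture. */
  glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
               GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

  /* Enable depth comparison so the shadow lookup functions apply. */
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);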


Depth stencil formats

These image formats are combined depth/stencil formats. They allow you to allocate a stencil buffer along with a depth buffer.

This does not mean that you can access stencil values in a shader. Sampling from a depth/stencil texture works exactly as though it were a depth only texture. The stencil buffer is only there as part of the storage.

There are only 2 depth/stencil formats, each providing 8 stencil bits: GL_DEPTH24_STENCIL8 and GL_DEPTH32F_STENCIL8.

Note: OpenGL does provide stencil-only image formats, in the form of GL_STENCIL_INDEX8 and so forth. Never use these. No drivers ever supported them, and you will get GL_FRAMEBUFFER_UNSUPPORTED errors if you try. Just use the packed depth/stencil formats.
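Allocating a packed depth/stencil renderbuffer and attaching it to a framebuffer object might look like this sketch (rb is assumed to have been created with glGenRenderbuffers, and an FBO is assumed bound):

  /* Allocate a 24-bit depth / 8-bit stencil renderbuffer. */
  glBindRenderbuffer(GL_RENDERBUFFER, rb);
  glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

  /* Attach it to both the depth and stencil attachment points. */
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                            GL_RENDERBUFFER, rb);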

Required formats

The OpenGL specification is fairly lenient about what image formats OpenGL implementations provide. It allows implementations to fall back to other formats transparently, even when doing so degrades the visual quality of the image by using a lower bitdepth.

However, the specification also provides a list of formats that must be supported exactly as is. That is, the implementation must support the number of components, and it must support the bitdepth in question, or a larger one. The implementation is forbidden to lose information from these formats. So, while an implementation may choose to turn GL_RGB4 into GL_R3_G3_B2, it is not permitted to turn GL_RGB8 into GL_RGB4 internally.

These formats should be regarded as perfectly safe for use.
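If you want to check what an implementation actually allocated for a non-required format, you can query the texture (a sketch; assumes a texture is bound to GL_TEXTURE_2D):

  GLint actualFormat = 0;
  glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT,
                           &actualFormat);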

Texture and Renderbuffer

These formats are required for both textures and renderbuffers. Any of the combinations presented in each row is a required format.

Base format      Data type              Bitdepths per component
RGBA, RG, RED    unsigned normalized    8, 16
RGBA, RG, RED    floating-point         16, 32
RGBA, RG, RED    signed integral        8, 16, 32
RGBA, RG, RED    unsigned integral      8, 16, 32

Also, the following other formats must be supported for both textures and renderbuffers:

  • GL_RGB10_A2
  • GL_R11F_G11F_B10F

Texture only

These formats must be supported for textures. They may be supported for renderbuffers, but the OpenGL specification does not require it.

Base format           Data type                        Bitdepths per component
RGB                   unsigned normalized              8, 16
RGBA, RGB, RG, RED    signed normalized                8, 16
RGB                   floating-point                   16, 32
RGB                   signed integral                  8, 16, 32
RGB                   unsigned integral                8, 16, 32
RG, RED               unsigned or signed normalized    compressed with RGTC

These additional formats are required:

  • GL_SRGB8
  • GL_RGB9_E5

Legacy Image Formats

As with other deprecated functionality, it is advised that you not rely on these features.

Luminance and intensity formats are color formats. They are one or two channel formats like RED or RG, but they specify particular behavior.

When a GL_RED format is sampled in a shader, the resulting vec4 is (Red, 0, 0, 1). When a GL_INTENSITY format is sampled, the resulting vec4 is (I, I, I, I). The single intensity value is read into all four components. For GL_LUMINANCE, the result is (L, L, L, 1). There is also a two-channel GL_LUMINANCE_ALPHA format, which gives (L, L, L, A).

Intensity comes in 8 and 16-bit flavors (GL_INTENSITY8, GL_INTENSITY16). Similarly, luminance and luminance/alpha formats come in 8 and 16-bit flavors (GL_LUMINANCE8, GL_LUMINANCE16, GL_LUMINANCE8_ALPHA8, GL_LUMINANCE16_ALPHA16).

This was more useful in the pre-shader days, when converting a single-channel image into a multi-channel image was harder than doing a swizzle mask in GLSL like:

  texture(tex, uv).rrrr;

Luminance and intensity are not considered color-renderable. Therefore, you cannot bind textures of this format to a FBO.

Texture objects can have swizzle masks set on them that allow you to replicate this functionality in a more generic way.
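For example, making a GL_RED texture sample like the old GL_LUMINANCE might look like this sketch (assumes the texture is bound and that ARB_texture_swizzle or OpenGL 3.3 is available):

  /* Replicate the red channel into G and B, and force alpha to 1. */
  GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
  glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);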
