Texture

From OpenGL.org
Revision as of 17:41, 5 January 2013

A texture is an OpenGL Object that contains one or more images that all have the same image format. A texture can be used in two ways. It can be the source of a texture access from a Shader, or it can be used as a render target.


For the purpose of this discussion, an image is defined as a single array of pixels of a certain dimensionality (1D, 2D, or 3D), with a particular size, and in a specific format.

A texture is a container of one or more images. But textures do not store arbitrary images; a texture has specific constraints on the images it can contain. There are three defining characteristics of a texture, each defining part of those constraints: the texture type, the texture size, and the image format used for images in the texture. The texture type defines the arrangement of images within the texture. The size defines the size of the images in the texture. And the image format defines the format that all of these images share.

There are a number of different types of textures. These are:

  • GL_TEXTURE_1D: Images in this texture all are 1-dimensional. They have width, but no height or depth.
  • GL_TEXTURE_2D: Images in this texture all are 2-dimensional. They have width and height, but no depth.
  • GL_TEXTURE_3D: Images in this texture all are 3-dimensional. They have width, height, and depth.
  • GL_TEXTURE_RECTANGLE: The image in this texture (only one image; no mipmapping) is 2-dimensional. Texture coordinates used for these textures are not normalized.
  • GL_TEXTURE_BUFFER: The image in this texture (only one image; no mipmapping) is 1-dimensional. The storage for this data comes from a Buffer Object.
  • GL_TEXTURE_CUBE_MAP: There are exactly 6 distinct sets of 2D images, all of the same size. They act as 6 faces of a cube.
  • GL_TEXTURE_1D_ARRAY: Images in this texture all are 1-dimensional. However, it contains multiple sets of 1-dimensional images, all within one texture. The array length is part of the texture's size.
  • GL_TEXTURE_2D_ARRAY: Images in this texture all are 2-dimensional. However, it contains multiple sets of 2-dimensional images, all within one texture. The array length is part of the texture's size.
  • GL_TEXTURE_CUBE_MAP_ARRAY: Images in this texture are all cube maps. It contains multiple sets of cube maps, all within one texture. The array length * 6 (number of cube faces) is part of the texture size.
  • GL_TEXTURE_2D_MULTISAMPLE: The image in this texture (only one image; no mipmapping) is 2-dimensional. Each pixel in these images contains multiple samples instead of just one value.
  • GL_TEXTURE_2D_MULTISAMPLE_ARRAY: Combines 2D array and 2D multisample types. No mipmapping.

Texture sizes have a limit based on the GL implementation. For 1D and 2D textures (and any texture types that use similar dimensionality, like cubemaps) the max size of either dimension is GL_MAX_TEXTURE_SIZE. For array textures, the maximum array length is GL_MAX_ARRAY_TEXTURE_LAYERS. For 3D textures, no dimension can be greater than GL_MAX_3D_TEXTURE_SIZE in size.

Within these limits, the size of a texture can be any value. It is advised, however, that you stick to powers of two for texture sizes, unless you have a significant need to use arbitrary sizes.

Mipmaps

When a texture is directly applied to a surface, how many pixels of that texture (commonly called "texels") are used depends on the angle at which that surface is rendered. A texture mapped to a plane that is almost edge-on with the camera will only use a fraction of the pixels of the texture. Similarly, looking directly down on the texture from far away will show fewer texels than an up-close version.

The problem is with animation. When you slowly zoom out on a texture, you start to see aliasing artifacts appear. These are caused by sampling fewer than all of the texels; the choice of which texels are sampled changes between different frames of the animation. Even with linear filtering (see below), artifacts will appear as the camera zooms out.

To solve this problem, we employ mipmaps. These are pre-shrunk versions of the full-sized image. Each mipmap is half the size of the previous one in the chain; the length of the chain is determined by the largest dimension of the image. So a 64x16 2D texture can have 6 mipmaps: 32x8, 16x4, 8x2, 4x1, 2x1, and 1x1. OpenGL does not require that the entire mipmap chain is complete; you can specify what range of mipmaps in a texture are available.

Some texture types have multiple independent sets of mipmaps. Each face of a cubemap has its own set of mipmaps, as does each entry in an array texture. However, the texture as a whole has only one setting for which mipmaps are present. So if the texture is set up such that only the top 4 mipmap levels are present, you must have them for all mipmap chains in the texture.

When sampling a texture (see below), the implementation will automatically select which mipmap to use based on the viewing angle, size of texture, and various other factors.

When using texture sizes that are not powers of two, each successive mipmap size is the previous size halved and rounded down. So a 63x63 texture has 31x31 as its next lowest mipmap level, and so on.

The base level of a mipmap chain is the largest one. It is also the one that defines the full size of the texture. OpenGL numbers this mipmap level as 0; the next largest mipmap level is 1, and so on.

The base level of a texture does not have to be loaded. As long as you specify the range of mipmaps correctly, you can leave out any mipmap levels you want.

Texture Objects

Anatomy of a Texture
Diagram of the contents of a texture object

Textures in OpenGL are OpenGL Objects, and they follow the standard conventions of such. So they have the standard glGenTextures, glBindTexture, and glDeleteTextures functions, as you would expect.

The target parameter of glBindTexture corresponds to the texture's type. So when you use a freshly generated texture name, the first bind helps define the type of the texture. It is not legal to bind an object to a different target than the one it was previously bound with. So if you generate a texture and bind it as GL_TEXTURE_1D, then you must continue to bind it as such.

As with any other kind of OpenGL object, it is legal to bind multiple objects to different targets. So you can have a texture bound to GL_TEXTURE_1D while another is bound to GL_TEXTURE_2D_ARRAY.


Texture objects come in three parts: storage, sampling parameters, and texture parameters. There are numerous functions to create a texture's storage; so many that the subject needs its own page to describe them all.


Texture objects have parameters. These parameters control many aspects of how the texture functions.

Texture parameters are set with the following functions:

  void glTexParameter[if]( GLenum target, GLenum pname, T param );
  void glTexParameter[if]v( GLenum target, GLenum pname, T *params );
  void glTexParameterI[i ui]v( GLenum target, GLenum pname, T *params );

These functions set the parameter value param or values params for the particular parameter pname in the texture bound to target.

The anatomy-of-a-texture image above shows three pieces of data: texture storage, texture parameters, and sampling parameters. It is important to understand that the last two kinds of data are both set by the same functions for textures. Certain parameters are about the texture itself, and some are about sampling from it.

This section will describe the texture parameters only.

Mipmap range

The parameters GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL (integer values) define the closed range of mipmap levels that are considered available in this texture. Nothing can cause the sampling of mipmap levels less than GL_TEXTURE_BASE_LEVEL, and nothing can cause the sampling of mipmap levels greater than GL_TEXTURE_MAX_LEVEL. This even filters into GLSL: the texture size functions will retrieve the size of GL_TEXTURE_BASE_LEVEL, rather than the size of mipmap level 0.

Note that immutable storage textures will already have these values set to the mipmap range of the storage. You can set them to be smaller, but it is an error to go outside of the available mipmap range for the immutable storage.
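For example, to restrict sampling to the top four mipmap levels, the calls might look like this (a sketch, not a complete program; it assumes a texture that actually has these levels is currently bound to GL_TEXTURE_2D):

```c
/* Limit sampling to mipmap levels 0..3 of the currently bound 2D texture. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 3);
```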

Swizzle mask

Texture swizzling has been core functionality since OpenGL 3.3 (and remains core in 4.5). Before that, it was available through the ARB_texture_swizzle and EXT_texture_swizzle extensions.

While GLSL shaders are perfectly capable of reordering the vec4 value returned by a texture function, it is often more convenient to control the ordering of the values from code. This is done through swizzle parameters.

Texture objects (and only texture objects, not sampler objects) can have swizzling parameters. This only works for textures with color image formats. Each of the four output components, RGBA, can be set to come from a particular color channel.

To set the output for a component, you would set the GL_TEXTURE_SWIZZLE_C texture parameter, where C is R, G, B, or A. These parameters can be set to the following values:

  • GL_RED: The value for this component comes from the red channel of the image. All color formats have at least a red channel.
  • GL_GREEN: The value for this component comes from the green channel of the image, or 0 if it has no green channel.
  • GL_BLUE: The value for this component comes from the blue channel of the image, or 0 if it has no blue channel.
  • GL_ALPHA: The value for this component comes from the alpha channel of the image, or 1 if it has no alpha channel.
  • GL_ZERO: The value for this component is always 0.
  • GL_ONE: The value for this component is always 1.

You can also use the GL_TEXTURE_SWIZZLE_RGBA parameter to set all four at once. This one takes an array of four values. For example:

// Assumes the texture in question is currently bound to GL_TEXTURE_2D.
GLint swizzleMask[] = {GL_ZERO, GL_ZERO, GL_ZERO, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);

This will effectively map the red channel in the image to the alpha channel when the shader accesses it, while the red, green, and blue components will read as 0.

Stencil texturing

Stencil texturing has been core functionality since OpenGL 4.3 (and remains core in 4.5). Before that, it was available through the ARB_stencil_texturing extension.

A texture with a depth image format is normally considered a depth component texture. This means that accesses which do not use depth comparison will return a single floating-point value (as depth components are either normalized integers or floats). Depth textures in this way can be considered a special form of single-channel floating-point color textures.

However, if the texture uses a packed depth/stencil image format, it is possible to access the stencil component instead of the depth component. This is controlled by the parameter GL_DEPTH_STENCIL_TEXTURE_MODE.

When the parameter is set to GL_DEPTH_COMPONENT, then accessing it from the shader will access the depth component as a single float, as normal. But when the parameter is set to GL_STENCIL_COMPONENT, the shader can access the stencil component.

This parameter changes the very nature of the texture access. The stencil component is an unsigned integer value, so you must use an unsigned integer sampler when accessing it. When accessing the stencil component of a 2D depth/stencil texture, you must use usampler2D.

Note: Though this parameter affects sampling, it is not a sampling parameter. As such, you cannot bind the same texture object to two image units and use two different samplers to fetch the depth and stencil components. However, you can create a view of the texture (texture views, like stencil texturing, are a GL 4.3 feature), and set different texture parameters on the different views: one view for the depth, one view for the stencil.

Sampling parameters

Sampling is the process of fetching a value from a texture at a given position. GLSL controls much of the process of sampling, but there are many parameters that affect this as well.

These parameters are shared with Sampler Objects, in that both texture objects and sampler objects have them.

Texture image units

Binding textures for use in OpenGL is a little weird. There are two reasons to bind a texture object to the context: to change the object (modifying its storage or its parameters), or to render something with it.

Changing the texture's stored state can be done with the above simple glBindTexture call. However, actually rendering with a texture is a bit more complicated.

A texture can be bound to one or more locations. These locations are called texture image units. OpenGL contexts have a maximum number of texture image units, queryable from the constant GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS.

What image unit a glBindTexture call binds the texture to depends on the current active texture image unit. This value is set by calling:

void glActiveTexture( GLenum texture );

The value of texture is GL_TEXTURE0 + i, where i is a number on the half-open range [0, GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS). This will cause texture image unit i to be the current active image unit.

Each texture image unit supports bindings to all targets. So a 2D texture and an array texture can be bound to the same image unit, and different 2D textures can be bound to two different image units without affecting each other. So which texture gets used when rendering? In GLSL, this depends on the type of sampler that uses this texture image unit.

Note: This sounds suspiciously like you can use the same texture image unit for different samplers, as long as they have different texture types. Do not do this. The spec explicitly disallows it; if two different GLSL samplers have different texture types, but are associated with the same texture image unit, then rendering will fail. Give each sampler a different texture image unit.

The glActiveTexture function selects the texture image unit that any function taking a texture target as a parameter will operate on.

GLSL binding

Programs are one of the two users of textures. In order to use textures with a program, the program itself must use certain syntax to expose texture binding points.


A sampler in GLSL is a uniform variable that represents an accessible texture. It cannot be set from within a program; it can only be set by the user of the program. Sampler types correspond to OpenGL texture types.

Samplers are used with GLSL texture access functions.

The process of using textures with program samplers involves two halves. Texture objects are not directly associated with or attached to program objects. Instead, program samplers reference texture image unit indices, and whatever textures are bound to those image units at the time of rendering are used by the program.

So the first step is to set the uniform value for the program samplers. For each sampler uniform, set its uniform value to an integer on the range [0, GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS). When the time comes to render with the program, simply use glActiveTexture and glBindTexture to bind the textures of interest to these image units.
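Putting the two halves together might look like this (a sketch, not a complete program: the program object prog and texture name tex are assumed to already exist, and "diffuseTex" is a hypothetical sampler2D uniform name):

```c
/* Half 1: tell the sampler uniform which texture image unit to read from. */
glUseProgram(prog);
GLint loc = glGetUniformLocation(prog, "diffuseTex");
glUniform1i(loc, 2);                 /* the sampler reads from unit 2 */

/* Half 2: bind the texture to that unit before rendering. */
glActiveTexture(GL_TEXTURE0 + 2);
glBindTexture(GL_TEXTURE_2D, tex);
```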

The textures bound to the image units set in the sampler uniforms must match the samplers' types. So a sampler1D will look to the GL_TEXTURE_1D binding in the image unit it is set to.

If a Sampler Object is bound to the same texture image unit as a texture, then the sampler object's parameters will replace the sampling parameters from that texture object.


Images within a texture can be used for arbitrary image load/store operations. This is done via image variables, which are declared as uniforms. Image uniforms are associated with an image unit (different from a texture image unit). The association works similarly to sampler uniforms, though the number of image units per shader stage is different from the number of texture image units per shader stage.

Images are bound to image units with glBindImageTexture.
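A binding call might look like this (a sketch; the texture name tex is assumed to exist and to have an image format compatible with image load/store):

```c
/* Bind mipmap level 0 of tex to image unit 0, for read/write access,
 * interpreted with the GL_RGBA8 format. GL_FALSE/0 mean a single layer
 * rather than a layered binding. */
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);
```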

Render targets

Through the use of a framebuffer object, individual images within a texture can be the destination for rendering.