Common Mistakes: Deprecated

From OpenGL.org

This page describes mistakes that are commonly made when using features deprecated in more recent versions of OpenGL.

glEnableClientState(GL_INDEX_ARRAY)

What's wrong with this code?

glBindBuffer(GL_ARRAY_BUFFER, vboid);
glVertexPointer(3, GL_FLOAT, sizeof(vertex_format), 0);
glNormalPointer(GL_FLOAT, sizeof(vertex_format), 20);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_INDEX_ARRAY);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboid);
glDrawRangeElements(....);

The problem is that GL_INDEX_ARRAY does not mean what this programmer thinks it does. It has nothing to do with the indices used by glDrawRangeElements; it enables color-index arrays, a relic of color-index (palette) rendering modes.

Never use these. Just use a color array, as follows.

glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(vertex_format), X);
glEnableClientState(GL_COLOR_ARRAY);

glInterleavedArrays

This call is for automatic interleaving of vertex data. Never do this. It is always preferable to use manual interleaving (by setting the stride parameter on the gl*Pointer calls appropriately). An example of proper stride usage:

struct MyVertex
{
float x, y, z; //Vertex
float nx, ny, nz; //Normal
float s0, t0; //Texcoord0
float s1, t1; //Texcoord1
};
 
//-----------------
 
glVertexPointer(3, GL_FLOAT, sizeof(MyVertex), offset);
glNormalPointer(GL_FLOAT, sizeof(MyVertex), offset+sizeof(float)*3);
glClientActiveTexture(GL_TEXTURE0);
glTexCoordPointer(2, GL_FLOAT, sizeof(MyVertex), offset+sizeof(float)*6);
glClientActiveTexture(GL_TEXTURE1);
glTexCoordPointer(2, GL_FLOAT, sizeof(MyVertex), offset+sizeof(float)*8);

Misaligned vertex formats

During vertex specification, it is generally best if all of the components of a vertex format are aligned to 4 bytes. So if you do something like this:

  glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(vertex_format), X);

The next component of the format should be padded out from the end of the color by 1 byte. Thus, you should have:

  glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(vertex_format), 0);
  glVertexPointer(3, GL_FLOAT, sizeof(vertex_format), 4);

Instead of this:

  glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(vertex_format), 0);
  glVertexPointer(3, GL_FLOAT, sizeof(vertex_format), 3);

The first version wastes one byte per vertex, but that is preferable to reading misaligned float values.

glTexEnvi

Since a lot of tutorials call glTexEnvi when they create a texture, quite a few people end up thinking that the texture environment state is part of the texture object.

glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

States such as GL_TEXTURE_WRAP_S, GL_TEXTURE_WRAP_T, GL_TEXTURE_MAG_FILTER, GL_TEXTURE_MIN_FILTER are part of the texture object.

glTexEnv is part of the texture image unit (TIU), not the texture object.

When you set it, it affects whatever texture is bound to that TIU, and it only takes effect during rendering.

You can select a TIU with glActiveTexture(GL_TEXTURE0+i).

Also keep in mind that glTexEnvi has no effect when a fragment shader is bound.

And in the end, clean up:

  glDeleteTextures(1, &textureID);

glAreTexturesResident and Video Memory

glAreTexturesResident doesn't necessarily return the value you might expect. On some implementations it always returns GL_TRUE; on others it returns GL_TRUE only once the texture has been loaded into video memory. A modern OpenGL program should not use this function.

If you need to find out how much video memory your video card has, you need to ask the OS. GL doesn't provide a function since GL is intended to be multiplatform and on some systems, there is no such thing as dedicated video memory.

Even if your OS tells you how much VRAM there is, it's difficult for an application to predict what it should do with that number. It is better to offer the user a "quality" setting in your program that they can control.

ATI/AMD created GL_ATI_meminfo. This extension is very easy to use: you basically call glGetIntegerv with the appropriate token values. NVIDIA exposes similar queries through GL_NVX_gpu_memory_info.

http://www.opengl.org/registry/specs/ATI/meminfo.txt

http://developer.download.nvidia.com/opengl/specs/GL_NVX_gpu_memory_info.txt