Texture Coordinate Convention

I recently read the specification for the ARB_texture_rectangle extension. I was surprised to find that it uses the range [0…dim] for indexing into the texture image. That struck me as odd, because an image with a dimension of 512 is indexed from [0…511] when you access its data, so I expected the extension to use [0…dim-1] instead of [0…dim]. It seems there is an implicit convention about whether a texel coordinate refers to the center of an image pixel or to its edge. Can anybody shed some light on this topic? It is starting to make my brain hurt.

-Thanks

For a 512-wide texture the range is [0…512], in the same sense that it is [0…1] in normalized coordinates. With normalized coords, 0 and 1 return the same value assuming GL_REPEAT, and similarly 0 and 512 would return the same value for rectangle textures if GL_REPEAT were supported for them. So the two conventions are consistent.
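
To make that concrete, here is a tiny sketch in plain C of how GL_REPEAT makes 0 and 1 address the same texel column under nearest filtering. The helper names are illustrative, not GL calls:

#include <math.h>

/* Keep only the fractional part of s, which is what GL_REPEAT does. */
static float wrap_repeat(float s)
{
    return s - floorf(s);
}

/* Normalized coordinate -> integer texel column, nearest filtering. */
static int texel_index(float s, int dim)
{
    return (int)(wrap_repeat(s) * dim) % dim;
}

/* texel_index(0.0f, 512) == 0 and texel_index(1.0f, 512) == 0, which is
   the sense in which 0 and 1 "return the same value". */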

The easiest way to think of it is to draw, say, a 4x4 texture like this:

+-+-+-+-+
|o|o|o|o|
+-+-+-+-+
|o|o|o|o|
+-+-+-+-+
|o|o|o|o|
+-+-+-+-+
|o|o|o|o|
+-+-+-+-+

“o” marks the texel centers and “+” marks the texel corners. Going from left to right, you have texel corners at 0, 1, 2, 3 and 4, so the entire width of the texture is [0…4]. The texel centers, however, are at 0.5, 1.5, 2.5 and 3.5.
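
If it helps, here is the same picture in code, one hypothetical helper per coordinate convention:

/* Center of texel i in unnormalized (ARB_texture_rectangle) coords. */
float texel_center_rect(int i)
{
    return (float)i + 0.5f;                  /* 0.5, 1.5, 2.5, 3.5 for a 4-wide texture */
}

/* Center of texel i in normalized (GL_TEXTURE_2D) coords. */
float texel_center_normalized(int i, int dim)
{
    return ((float)i + 0.5f) / (float)dim;   /* 0.125 for i=0, dim=4 */
}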

Thanks for the ASCII art. That is what I expected. So if I wanted to draw that image on the screen, I would map texture coordinates 0…1 (normalized) or 0…512 (rectangle) across a polygon with one vertex at 0 and another at 512. This would line my vertices up with the image corners.

Correct. Thanks to texture coordinate interpolation in the fragment domain and OpenGL’s rasterization rules, the texels are sampled exactly at their centers in a 1:1 texel-to-pixel mapping like yours.
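
As a sketch of that 1:1 setup in immediate mode, assuming an orthographic projection where one unit equals one pixel (e.g. gluOrtho2D(0, w, 0, h)) and a 512x512 rectangle texture already created and bound:

glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBegin(GL_QUADS);
    glTexCoord2f(  0.0f,   0.0f); glVertex2f(  0.0f,   0.0f);
    glTexCoord2f(512.0f,   0.0f); glVertex2f(512.0f,   0.0f);
    glTexCoord2f(512.0f, 512.0f); glVertex2f(512.0f, 512.0f);
    glTexCoord2f(  0.0f, 512.0f); glVertex2f(  0.0f, 512.0f);
glEnd();

/* Each fragment center lands at x+0.5, y+0.5, and the interpolated
   texture coordinate there is exactly the matching texel center, so no
   half-texel fudging is needed. */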

An even simpler example would have been a 1x1 texture: one texel, center at 0.5, full width [0…1].

DX9 has a different story. Ouch!
http://msdn2.microsoft.com/en-us/library/bb219690.aspx
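
If I remember the article right, the issue is that D3D9 puts pixel centers at integer screen coordinates while texel centers stay at half-integers, so a screen-aligned quad needs its pre-transformed positions nudged by -0.5. A rough C sketch; the struct and helper are illustrative, not D3D API:

/* Typical XYZRHW-style pre-transformed vertex layout. */
struct ScreenVertex {
    float x, y, z, rhw;   /* screen-space position */
    float u, v;           /* texture coordinates */
};

/* Shift positions by half a pixel so texels line up with pixels. */
void offset_for_d3d9(struct ScreenVertex *v, int count)
{
    for (int i = 0; i < count; ++i) {
        v[i].x -= 0.5f;   /* without this, sampling lands between texels */
        v[i].y -= 0.5f;
    }
}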