Slightly "short" texture coords

I inherited a DX7 engine to port to OpenGL and needed to get text drawing working. For some reason unknown to me, the engine subtracted 1/2 texel in the x-direction from each glyph coordinate in the font textures.

I tested this on all the hardware I have available, and even on Mesa, and no matter what I tried I could not get anything but <specific vendor> to display “good” results for font drawing with such texture coordinates. Everything else looked … well, basically like ****.

While I haven’t studied the spec too carefully in this area: when drawing a textured triangle in ortho mode where the S coordinate lies slightly to the left of the pixel it’s supposed to fill exactly in the framebuffer, is it undefined or implementation-defined whether an implementation truncates, and if so whether it truncates towards zero, towards infinity, or to the nearest integer? Or could the difference come from implementations being allowed to vary in other GL state?

(I have fixed the problem now, but having encountered this discrepancy I really, really want to understand it fully: why does it look OK on <vendor> hardware+driver, but not on anything else?)
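
For reference, this is roughly what the texture coordinate setup looked like. It is a hypothetical reconstruction, not the actual engine code, and the struct and field names are mine:

typedef struct { float s0, t0, s1, t1; } GlyphUV;

/* Hypothetical sketch of the inherited setup: the glyph's S range is
 * pulled back by half a texel in the x-direction, while T is left alone. */
GlyphUV glyph_uv_with_dx_bias(int gx, int gy, int gw, int gh,
                              int tex_w, int tex_h)
{
    GlyphUV uv;
    float half_texel_s = 0.5f / (float)tex_w;   /* 1/2 texel in x */

    uv.s0 = (float)gx        / (float)tex_w - half_texel_s;
    uv.s1 = (float)(gx + gw) / (float)tex_w - half_texel_s;
    uv.t0 = (float)gy        / (float)tex_h;    /* no bias in y */
    uv.t1 = (float)(gy + gh) / (float)tex_h;
    return uv;
}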

Originally posted by tamlin:
I inherited a DX7 engine to port to OpenGL and needed to get text drawing working. For some reason unknown to me, the engine subtracted 1/2 texel in the x-direction from each glyph coordinate in the font textures.

DX uses different texture sampling rules than OpenGL, and the half-texel offset is a result of that.

In OpenGL the texture coordinate [0,0] corresponds to the center of the texel, while in DX it is the top-left corner of the texel. In both APIs the integer screen coordinates correspond to the centers of the pixels. Without any bias in the text drawing code, the DX sampling would be done at coordinates that lie precisely between four texels, and the sampling result would depend on the hardware implementation and the enabled filtering modes. This results in bad text quality, and that is why your DX application applies the bias.

This is described in a paper on the nVidia pages and also in the DX9 documentation, in the chapter “Directly Mapping Texels to Pixels”.

Obviously, if you use the DX bias in an OpenGL application, you will get the very problem that the bias solves in DX.
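
As a rough illustration of the two conventions (plain arithmetic, no API calls; the texture width N and pixel column i are just assumed example values), this shows where the interpolated S coordinate lands at a pixel center for a quad that maps an N-texel-wide texture 1:1 onto N pixels:

#include <stdio.h>

int main(void)
{
    const float N = 256.0f;   /* texture width in texels = quad width in pixels */
    const int   i = 10;       /* some pixel column inside the quad              */

    /* D3D9: pixel i's center is at window x = i. With no bias the quad
     * parameter there is i/N, i.e. the *edge* between texels i-1 and i.  */
    float dx_unbiased = (float)i / N;

    /* The documented fix: shift the quad by -0.5 pixels (or the texcoords
     * by +0.5 texels), so the pixel center samples the center of texel i. */
    float dx_biased = ((float)i + 0.5f) / N;

    /* OpenGL: pixel i's center is at window x = i + 0.5, so an unbiased
     * quad from 0..N with texcoords 0..1 already samples texel centers.  */
    float gl_unbiased = ((float)i + 0.5f) / N;

    printf("D3D9 without bias: s = %f (texel edge)\n",   dx_unbiased);
    printf("D3D9 with bias   : s = %f (texel center)\n", dx_biased);
    printf("OpenGL, no bias  : s = %f (texel center)\n", gl_unbiased);
    return 0;
}

The unbiased DX value sits on a texel edge, which is why bilinear filtering there averages neighbouring texels and the result becomes implementation- and filter-dependent.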

In OpenGL the texture coordinate [0,0] corresponds to the center of the texel
No. Texture coordinates (0,0) and (1,1) are exactly on the bottom-left and top-right corners, respectively, of a 2D texture image.
If your texture were 1x1 in size and you mapped it onto one pixel on the screen, the texture coordinate interpolation in the rasterizer would give (0.5, 0.5) at the pixel’s center; in this 1x1 case that is also the texel’s center, and that is where OpenGL samples it.
DX9 screwed this up because of the top-left texture sampling.
There is another thread running here about HDR downsampling which explains it.
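
To make that concrete, here is a minimal sketch of pixel-exact glyph drawing in legacy fixed-function GL. The function and parameter names are mine, and it assumes the caller has enabled texturing, bound the font texture, and set up a pixel-matched projection with glOrtho(0, window_width, 0, window_height, -1, 1). With that in place, texture coordinates on the texel edges and vertices on integer pixel positions map each texel onto exactly one pixel, with no half-texel bias:

#include <GL/gl.h>

/* Sketch: draw one glyph from a font texture so that each texel covers
 * exactly one framebuffer pixel. Assumes a glOrtho projection matching
 * the window in pixels and the font texture already bound. */
void draw_glyph(int px, int py,                  /* destination pixel position */
                int gx, int gy, int gw, int gh,  /* glyph rectangle in texels  */
                int tex_w, int tex_h)            /* font texture size          */
{
    /* Texture coordinates on the texel edges of the glyph rectangle. */
    float s0 = (float)gx        / (float)tex_w;
    float s1 = (float)(gx + gw) / (float)tex_w;
    float t0 = (float)gy        / (float)tex_h;
    float t1 = (float)(gy + gh) / (float)tex_h;

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glBegin(GL_QUADS);
    glTexCoord2f(s0, t0); glVertex2i(px,      py);
    glTexCoord2f(s1, t0); glVertex2i(px + gw, py);
    glTexCoord2f(s1, t1); glVertex2i(px + gw, py + gh);
    glTexCoord2f(s0, t1); glVertex2i(px,      py + gh);
    glEnd();
}

With this mapping each pixel center samples the corresponding texel center, so GL_LINEAR collapses to the single texel value anyway, barring precision issues; GL_NEAREST just makes the intent explicit.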