half-float texture values problem

Hello,

I’ve been trying to use LUMINANCE16F & RGB16F to store index data for use in a shader. The problem is that sometimes, only occasionally (that’s the worst), my data comes back wrong without my changing it. Some elements end up with nearby values, so they break my algorithm.

For example, some adjacent values in the texture look like this:

47 47 48 48 49 49

and sometimes, with the same texcoords, they become like this:

47 47 48 48 48 49

With the 32F types, the problems vanish. Since my indices are actually integers, stored in half-float form, there shouldn’t be any precision problems ( I guess ).

Does anybody know what could be the cause of this?
Or any other suggestions for storing index data?
I use a 7800GS, so no SM4 for me :frowning:

Many thanks,
babis.

How do you read out the data?
I.e. what texcoords and matrix setup are you using to access the texels?
If you’re hitting a razor-edge condition, for example accessing texels with nearest filtering exactly on the corner or edge of a texel, the tiniest floating-point rounding difference will result in a different texel being accessed.
You should sample at the center of the texels.

Since my indices are actually integers, stored in half-float form, there shouldn’t be any precision problems ( I guess ).

Integers are 32-bit precise; 16F has only a 10-bit mantissa, so large integers can no longer be stored exactly.
You say the problem vanishes with 32F texels, so what’s your range of indices?
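
To see where half precision runs out, here’s a little quantization sketch (a minimal model of the half format, not a full IEEE-754 converter; it ignores overflow, subnormals and rounding ties):

#include <stdio.h>
#include <math.h>

/* Round a value to the nearest number representable as a 16-bit half
   (1 sign, 5 exponent, 10 mantissa bits, i.e. an 11-bit significand). */
static double round_to_half(double v)
{
    int e;
    if (v == 0.0) return 0.0;
    frexp(v, &e);                    /* v = m * 2^e with 0.5 <= |m| < 1 */
    double ulp = ldexp(1.0, e - 11); /* spacing of half values near v   */
    return ulp * floor(v / ulp + 0.5);
}

int main(void)
{
    /* Integers survive exactly up to 2048; above that only every
       second integer is representable, so adjacent indices collide. */
    for (int i = 2045; i <= 2052; ++i)
        printf("%4d stored as half -> %.0f\n", i, round_to_half(i));
    return 0;
}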

Last week, I had a similar problem uploading 16-bit integers. I made the mistake of assuming the _16F formats could hold the data, and obviously they can’t, as Relic already pointed out. I solved the problem by using LUMINANCE16 as the internal texture format. If your indices are already integers, couldn’t you just upload them directly (external type GL_SHORT) without converting them to half-float?
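
Something along these lines (a sketch; the GL_TEXTURE_2D target, the nearest filtering and GL_UNSIGNED_SHORT for non-negative indices are my assumptions, substitute your own setup):

#include <GL/gl.h>

/* Upload 16-bit integer indices directly; the driver converts them
   into the 16-bit normalized internal format. */
static GLuint create_index_texture(const GLushort *indices,
                                   GLsizei width, GLsizei height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_LUMINANCE16,    /* internal format: 16-bit normalized */
                 width, height, 0,
                 GL_LUMINANCE,      /* external format                    */
                 GL_UNSIGNED_SHORT, /* external type: the raw shorts      */
                 indices);
    return tex;
}

In the shader the sampled value comes back normalized to [0,1], so you recover the index by multiplying with 65535.0.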

@Relic

I render a quad with the same texcoord range, same viewport, same ortho projection & same vertex values. Why should I sample in the center? I’ve actually made a rounding function so I get exactly 0.0, 1.0, etc. as texcoords (I use texture rectangles), with filtering set to nearest. The problem texels are not necessarily on the edge; they can be internal (for example, 10 consecutive problematic ones).

Mmm, I think I forgot to do the maths… Sometimes my indices get larger than 1024, so that explains the problem, I guess…

@Nicolai

I was actually reluctant to use integer textures, as I’ve heard of problems with their use, at least on older cards (bad excuse, I know). If unsigned-short LUMINANCE & RGBA are normally supported, then it’s perfect for my use; I guess I’ll try & see what happens.

Thank you for your answers!

When using texture rectangles you should still sample at
0.5, 1.5, 2.5, …
in order to hit the texel centers.

You misunderstood; I didn’t say texture edges, it’s about the individual texels inside your texture.
For algorithms which use a texture to store a grid of 2D values, you need to make sure you know which texel you pick. Sampling exactly between two texels can flip to either one with the tiniest floating-point error.

Easiest example of how OpenGL decides where to fetch a texel when rasterizing:
Let’s render a 1x1 texture onto a 1x1 viewport.
Let the projection matrix be a 2D ortho over (0, 1, 0, 1) and the modelview matrix the identity.
Render a quad which matches that, with texture coordinates covering the whole single-texel texture (i.e. a 1-to-1 mapping of pixels to texels):
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
glEnd();

Now, where do you think the interpolated texture coordinate inside the fragment program will sample your texture?

Answer: at texcoord (0.5, 0.5), because OpenGL’s sample point for that pixel is at the pixel center, and the interpolation of the texture coordinates results in (0.5, 0.5).
(BTW, this is different in DX9: http://msdn2.microsoft.com/en-us/library/bb219690(VS.85).aspx; DX10 changed that.)

So if you don’t generate your texture coordinates by interpolating them in a 1-to-1 fashion, but instead use them to directly index into a grid of values stored inside a texture with nearest filtering, you should sample as near to the texel centers as possible to be sure you pick the correct texel.
For GPGPU algorithms this is imperative.
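
For reference, the usual 1-to-1 setup for a W x H texture rectangle looks like this (a minimal sketch; W and H are placeholders for your texture size):

glViewport(0, 0, W, H);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, W, 0.0, H, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f);         glVertex2f(0.0f, 0.0f);
    glTexCoord2f((float)W, 0.0f);     glVertex2f((float)W, 0.0f);
    glTexCoord2f((float)W, (float)H); glVertex2f((float)W, (float)H);
    glTexCoord2f(0.0f, (float)H);     glVertex2f(0.0f, (float)H);
glEnd();

Each fragment is rasterized at a pixel center (x + 0.5, y + 0.5), and with this setup the interpolated texcoord comes out as exactly the same values, i.e. the texel centers of the rectangle texture.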

But in your case it’s probably just the range. :wink:

Thank you guys for your answers; I hope it’s the last time I get confused by this sampling. No problem with my mapping, it’s 1:1, but a bit … off by 0.5 :slight_smile: And the worst thing is that, although it samples wrong, it works.

As I feared, using the *16 formats with UNSIGNED_SHORT makes the shaders run at snail speed.
Should I conclude that the *16 internal formats with integer datatypes are not so well supported in hardware?

Any other solution, besides EXT_texture_integer (which my hardware doesn’t support), that’s better than 32F?

That is surprising on a GF7 card, which is fairly new (it does not do that on a GF8). Snail speed means a fallback to software rendering (if you’re on Windows, maybe this tool can help identify the problem: http://developer.nvidia.com/object/nvemulate.html). The actual datatype used when uploading the texture data should not matter; the GL driver performs the (necessary) conversion on the data.

Have you tried using GL_LUMINANCE8_ALPHA8? The data would then have to be unpacked to 16 bit in the shader. AFAIR the sampler returns normalized floats in [0…1], so maybe with some shader code like (sample.r * 255.0) * 256.0 + (sample.a * 255.0)?

EDIT: Then you would use the UNSIGNED_BYTE external type and pay attention to byte ordering :slight_smile:
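
The host-side packing could look like this (a sketch; splitting the bytes by hand keeps the layout independent of CPU endianness, and the names are placeholders):

#include <stdlib.h>
#include <GL/gl.h>

/* Pack each 16-bit index into luminance (high byte) and alpha (low byte);
   the shader reconstructs it as (L * 255.0) * 256.0 + (A * 255.0). */
static void upload_indices_la8(const GLushort *indices,
                               GLsizei width, GLsizei height)
{
    size_t count = (size_t)width * (size_t)height;
    GLubyte *packed = malloc(count * 2);
    for (size_t i = 0; i < count; ++i) {
        packed[2 * i + 0] = (GLubyte)(indices[i] >> 8);   /* L: high byte */
        packed[2 * i + 1] = (GLubyte)(indices[i] & 0xFF); /* A: low byte  */
    }
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8_ALPHA8,
                 width, height, 0,
                 GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, packed);
    free(packed);
}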

Surprising indeed, since the GeForce FX (== GeForce 5) series already had native support for 16-bit textures.

Sorry for the misinformation; the only ‘surprising’ thing was that I was running in debug mode.

Anyway, I’ve been trying to switch to UNSIGNED_SHORT with the RGBA16 & LUMINANCE16_ALPHA16 formats. The second one works OK, but the first one doesn’t.

In my texture creation code I check the size in bits of each component with glGetTexLevelParameter. For the luminance-alpha format it correctly returns 16 & 16. For the RGBA format I get 8 bits each. I’ve tried 4, GL_RGBA and GL_RGBA16 as internal formats, but nothing; they all return 8 bits.
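
The check looks roughly like this (a sketch; GL_TEXTURE_2D stands in for whatever target I actually bind, and error handling is omitted):

GLint r, g, b, a;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE,   &r);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_GREEN_SIZE, &g);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_BLUE_SIZE,  &b);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_ALPHA_SIZE, &a);
/* For RGBA16 all four come back as 8 instead of 16. */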

What could be wrong here?

Thanks,
babis.

Here’s a list where you can see whether precision substitution is applied or not. It hasn’t been updated in a while though…

Thanks for the link! Going back to floats for the RGBA then…