This is a problem I encountered several years ago, but since it could be ignored at the time, I put it aside.
Yesterday it appeared once again, and now it has to be solved. The problem is that I cannot use GL_LINEAR (minification/magnification) filtering on textures with the GL_R16UI internal format. It seems like GL_NEAREST is used instead. Is this common and expected behavior, or do I have some other problem in my application? Has anyone had a similar issue?
That’s by design. Pure integer formats don’t allow any form of filtering (they are treated as “incomplete” if the current sampler state would require filtering).
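In other words, to keep an integer texture complete, the sampler state has to avoid filtering entirely. A minimal sketch of the required parameters (illustrative, not from the thread):

```c
/* Integer-format textures are only "complete" with non-filtering
 * sampler state; GL_LINEAR here would make the texture incomplete
 * and sampling it returns undefined results. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
```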
If you want to filter them, do it manually in a shader (you get to decide what happens with the low bit.)
That’s exactly what I was afraid of. I chose the GL_R16UI format because it gives me the best precision/size ratio: GL_R32F is twice the size of GL_R16UI, while GL_R16F doesn’t have enough precision. I’ll try filtering in the shader (VS), but I expect the performance won’t be great: multiple texture reads, extra uniforms to precisely define the extent and boundaries of the texels (for each layer of the array), plus the math for the offset calculation and the linear interpolation itself. Texture filtering is something we normally get for free.
:doh:
Unfortunately, GL_ARB_texture_gather is not supported in GL 3.x, or to be more precise, in the NV Cg gp4vp profile.
That significantly limits its usefulness, but I’ll certainly try it.
Thanks for the suggestion, but glTexImage3D() simply refuses to accept it as an internal format if I provide integer source data.
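For what it’s worth, one well-known pitfall here is the pixel-transfer format: a normalized internal format such as GL_R16 rejects the *_INTEGER transfer formats with GL_INVALID_OPERATION. An illustrative call that uploads raw 16-bit unsigned data into a filterable normalized texture might look like this (a sketch assuming GL_R16 was the suggested format, and that `width`, `height`, `layers`, and `data` are the application's own variables):

```c
/* Sketch only, not the poster's code segment. GL_R16 is normalized
 * and filterable; the 16-bit source values are divided by 65535 on
 * upload, so GL_LINEAR works afterwards. Passing GL_RED_INTEGER here
 * instead of GL_RED would raise GL_INVALID_OPERATION. */
glTexImage3D(GL_TEXTURE_2D_ARRAY,
             0,                  /* mip level                     */
             GL_R16,             /* normalized internal format    */
             width, height, layers,
             0,                  /* border (must be 0)            */
             GL_RED,             /* NOT GL_RED_INTEGER            */
             GL_UNSIGNED_SHORT,  /* type of the source data       */
             data);
```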
Here is the corresponding code segment: