Detect 64bpp/128bpp texture filtering support

Any idea how to detect whether the graphics card supports filtering/mipmapping for a specific texture format? I’m especially interested in the R16G16, A16R16G16B16, A16R16G16B16F and A32R32G32B32F formats.

I know this can be done in DX9 using the IDirect3DDevice9::CheckDeviceFormat() method with the D3DUSAGE_QUERY_FILTER|D3DUSAGE_QUERY_WRAPANDMIP flags… just need an OpenGL equivalent…

Another related question… if a graphics card supports the ARB floating-point extension, is it forced to expose full mipmapping/filtering for all the FP texture formats involved?

thx!

If a graphics card supports the ARB floating-point extension, is it forced to expose full mipmapping/filtering for all the FP texture formats involved?
Essentially, yes. However, it may do so with software rendering.

As to the initial question, no, there is no way to tell if a particular filtering parameter will work on a particular texture format.

The only 100% sure way is to test it yourself.
Try rendering a small polygon offscreen, see if you get a filtered texture, and compare performance against an unfiltered texture. Add a try/catch just in case the test crashes.
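Something along these lines, as a rough sketch: upload a tiny FP16 texture, draw a quad that samples exactly between two texels with GL_LINEAR, and read the result back. It assumes a current GL context and GL_ARB_texture_float with default fixed-function state; the function name and the 0.1 tolerance are just illustrative.

#include <GL/gl.h>
#include <math.h>

#ifndef GL_RGBA16F_ARB
#define GL_RGBA16F_ARB 0x881A   /* from GL_ARB_texture_float */
#endif

/* Upload a 2x1 FP16 texture holding black and white, draw a quad that
   samples exactly between the two texels with GL_LINEAR, and read the
   result back.  ~0.5 means the lookup was really filtered; 0.0 or 1.0
   means the driver fell back to nearest sampling.  State save/restore
   is left out to keep the sketch short. */
static int fp16_filtering_seems_supported(void)
{
    GLuint tex;
    const float texels[8] = { 0.0f, 0.0f, 0.0f, 1.0f,     /* black texel */
                              1.0f, 1.0f, 1.0f, 1.0f };   /* white texel */
    float pixel[4];

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 2, 1, 0,
                 GL_RGBA, GL_FLOAT, texels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glEnable(GL_TEXTURE_2D);

    glViewport(0, 0, 4, 4);
    glClear(GL_COLOR_BUFFER_BIT);

    /* s = 0.5 is halfway between the two texel centres of a 2x1 texture. */
    glBegin(GL_QUADS);
    glTexCoord2f(0.5f, 0.5f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(0.5f, 0.5f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(0.5f, 0.5f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.5f, 0.5f); glVertex2f(-1.0f,  1.0f);
    glEnd();

    glReadPixels(1, 1, 1, 1, GL_RGBA, GL_FLOAT, pixel);
    glDeleteTextures(1, &tex);

    return fabsf(pixel[0] - 0.5f) < 0.1f;
}

This only tells you whether the result is actually interpolated; as said above, you’d still want to time it against an unfiltered texture to catch a software fallback.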

If you want a simpler approach, then for FP16 textures check for GL_NV_fragment_program_3: on NVIDIA this extension has been supported since the GeForce 6, and that GPU can filter/blend FP16 textures.

On ATI you need to check for GL_ATI_shader_texture_lod; that covers FP16 blending.
As for filtering, it wasn’t supported on the Radeon X1k and below. The X2k should support it, so check for geometry shaders or some other SM4.0 feature.
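The extension check itself can be as simple as a substring search in the extension string. A minimal sketch, assuming a current GL context; the extension names are just the ones mentioned in this thread:

#include <GL/gl.h>
#include <string.h>

/* Returns non-zero if 'name' appears in the extension string.
   A plain strstr() is good enough here because none of the names we
   care about is a prefix of another extension. */
static int has_extension(const char *name)
{
    const char *ext = (const char *) glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

/* e.g.
   if (has_extension("GL_NV_fragment_program_3"))  ...  NVIDIA FP16 filter/blend
   if (has_extension("GL_ATI_shader_texture_lod")) ...  ATI FP16 blending        */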

I personally stick to testing it myself. That’s mainly because when I wrote my HDR support there was no ATI GPU that could do FP16 filtering, so I didn’t know which extension to check for. At least I have a real test now: I don’t get information like “this should work”, just “this works”.

A manual test is to enable the filter and use something like this in the fragment shader (where tex is the sampler for the texture under test):

gl_FragColor = fract(64.0 * texture2D(tex, gl_TexCoord[0].xy));

If there are only 4 bands, the filter runs at 8 bits.

Another question: is it possible to force 16-bit filtering per channel if the texture is only 8-bit? That would be nice for removing color banding in some algorithms…

Is it possible to force 16-bit filtering per channel if the texture is only 8-bit?
No, but it wouldn’t be a bad idea if someone came up with something. Since the results are going to be shoved into glslang registers anyway, there’s little point in making someone use 64bpp colors when they just need the higher-res filtering.
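One workaround you can do today (not something suggested above, just a common trick) is to keep the 8-bit texture, sample it with GL_NEAREST, and do the bilinear blend yourself in the fragment shader, so the lerp runs at float precision and the filtering-induced banding disappears. A rough sketch, with the uniform names (tex, texSize) made up for the example:

/* Manual bilinear filtering of an 8-bit texture at float precision.
   'tex' must have its filters set to GL_NEAREST, 'texSize' is the
   texture size in texels. */
static const char *manual_bilinear_fs =
    "uniform sampler2D tex;\n"
    "uniform vec2 texSize;\n"
    "void main()\n"
    "{\n"
    "    vec2 st   = gl_TexCoord[0].xy * texSize - 0.5;\n"
    "    vec2 base = (floor(st) + 0.5) / texSize;   /* lower-left texel centre */\n"
    "    vec2 f    = fract(st);                     /* blend weights */\n"
    "    vec4 t00  = texture2D(tex, base);\n"
    "    vec4 t10  = texture2D(tex, base + vec2(1.0, 0.0) / texSize);\n"
    "    vec4 t01  = texture2D(tex, base + vec2(0.0, 1.0) / texSize);\n"
    "    vec4 t11  = texture2D(tex, base + vec2(1.0, 1.0) / texSize);\n"
    "    gl_FragColor = mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);\n"
    "}\n";

It costs four texture lookups per fragment instead of one, so it only makes sense where the banding is actually visible.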

ok thx for the answers!