Oh, which GPUs do that then? I would have thought the Intel ones at most, since their vertex shaders run in software anyway.
According to delphi3d.net, GeForce FX returns a non-zero value for GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS.
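For reference, the query in question is just a glGetIntegerv call (a minimal sketch; the unsuffixed GL 2.0 constant name is assumed here, older headers expose GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB):

    GLint vtfUnits = 0;
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vtfUnits);
    // GeForce FX reportedly returns a non-zero value here even though
    // vertex texture fetch only runs in software on that GPU.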
Another reason to test features yourself:
columns:
16fpf - 16-bit floating point texture filtering
32fpf - 32-bit floating point texture filtering
16fpb - 16-bit floating point blending
32fpb - 32-bit floating point blending
sp - best shader precision
VTF - vertex texture fetch
gl_FF - gl_FrontFacing (in GLSL)
values:
-  - not supported
+  - runs in hardware
S  - runs in software
GPU         16fpf  32fpf  16fpb  32fpb  sp  VTF  gl_FF
GeForce FX  S      S      S      S      16  S    S
GeForce 6   +      S      +      S      32  +    +
Radeon X    -      -      -      -      32  -    S
Radeon X1   -      -      +      -      32  -    S
Now which extension or variable would tell you that?
When I implemented my HDR support I had to add two code paths: one for GeForce 6 and one for Radeon X1 (no fp16 filtering!). When I wanted to release my application I knew the next Radeon would very likely support fp16 filtering and could run the NVIDIA code path, which was faster. How do you test for a GPU that doesn't even exist yet, but that you know is coming?
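For illustration, one way to probe fp16 filtering directly at start-up, instead of probing the GPU name, could look roughly like this (a sketch only, not necessarily how my own test works; it assumes a current GL context, ARB_texture_float and an extension loader, and omits state save/restore and error handling; note that a software fallback would still pass this check and would need a timing probe on top):

    bool ProbeFP16Filtering()
    {
        // 2x1 half-float texture: left texel = 0, right texel = 1.
        const GLfloat texels[] = { 0.0f, 0.0f, 0.0f, 0.0f,
                                   1.0f, 1.0f, 1.0f, 1.0f };
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 2, 1, 0,
                     GL_RGBA, GL_FLOAT, texels);

        // Draw a single pixel that samples exactly between the two texels.
        glEnable(GL_TEXTURE_2D);
        glViewport(0, 0, 1, 1);
        glBegin(GL_QUADS);
            glTexCoord2f(0.5f, 0.5f); glVertex2f(-1.0f, -1.0f);
            glTexCoord2f(0.5f, 0.5f); glVertex2f( 1.0f, -1.0f);
            glTexCoord2f(0.5f, 0.5f); glVertex2f( 1.0f,  1.0f);
            glTexCoord2f(0.5f, 0.5f); glVertex2f(-1.0f,  1.0f);
        glEnd();

        GLfloat result[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
        glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, result);
        glDeleteTextures(1, &tex);

        // Filtering hardware returns roughly 0.5; hardware that silently
        // drops linear filtering on fp16 textures returns 0.0 or 1.0.
        return result[0] > 0.25f && result[0] < 0.75f;
    }

A test like this keeps working on a future Radeon that adds fp16 filtering, which is exactly the case the extension string cannot answer in advance.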
Another example: 16-bit vs. 32-bit GLSL fragment shaders. On GeForce FX 32-bit shaders run slowly, so you are better off with 16-bit shaders. On GeForce 6 / Radeon 9800 32-bit shaders run fast, so you don't want to waste precision.
How would you ask the GPU which shader precision you should use?
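You can't query it, but you can measure it. A crude sketch of such a measurement (DrawFullscreenQuad, program16bit and program32bit are hypothetical names; glFinish plus a wall-clock timer is the bluntest possible timing method, but enough to pick a shader variant once at start-up):

    #include <chrono>

    // Hypothetical helper: time how long the GPU needs for a fill-heavy
    // pass with the given program, using glFinish + a wall-clock timer.
    double TimeShaderPass(GLuint program, int iterations)
    {
        glUseProgram(program);
        glFinish();                                // drain pending work first
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < iterations; ++i)
            DrawFullscreenQuad();                  // hypothetical fill-heavy draw
        glFinish();                                // wait until the GPU is done
        std::chrono::duration<double> t = std::chrono::steady_clock::now() - start;
        return t.count();
    }

    // Usage sketch: only drop to 16-bit precision when it is a clear win here.
    // bool prefer16bit =
    //     TimeShaderPass(program16bit, 64) * 1.1 < TimeShaderPass(program32bit, 64);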
Another one: NPOT textures on Radeon X. They work (with some limitations), but the driver does not report them.
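Again, a run-time probe is straightforward. A minimal sketch (assuming a current GL context; a stricter version would also draw with the texture and read the result back, since a driver may accept the upload and still sample garbage):

    #include <cstring>

    bool ProbeNPOTTextures()
    {
        while (glGetError() != GL_NO_ERROR) {}     // clear any stale errors

        GLubyte texels[3 * 3 * 4];
        memset(texels, 255, sizeof(texels));       // 3x3 solid white RGBA data

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 3, 3, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, texels);   // 3x3 is not a power of two

        bool ok = (glGetError() == GL_NO_ERROR);
        glDeleteTextures(1, &tex);
        return ok;
    }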
We could go on. In some cases, asking the driver about a certain feature is not enough. That's why we see posts here like "How to detect GeForce FX", "How to detect Radeon HD", or "How to detect if a feature is running in software", which means someone already ran into a problem with some GPU.
The common answer is "Check the GL_RENDERER string, and if it's a GeForce FX, don't use that feature", and then someone replies "Don't do that, the GL_RENDERER string may change." My solution is just an answer to that problem.
I meant a broad range of graphics cards for validating whether the tests work consistently
Do you mean that if we don't add such automatic tests to the application, then we don't have to test our application on various GPUs at all?
I think you’re missing the point here:
This is the situation:
- you have an application that uses feature X
- you find out that this feature is not working on GPU A
- you fix your application and test it on GPU A again
Now the question is: how would you fix your application?
Possible solutions:
- tell the user that your application does not support GPU A
- tell the application not to use feature X on GPU A, and test whether your application works on GPU A. A few days later someone says: "this feature is not working on GPU B"...
- tell your application that feature X may fail on SOME GPUs and that it should check whether the feature is actually working, then test your application on GPU A
So the point is: you don't add a bunch of tests at the start of your application just like that, to give yourself more trouble. What you do is develop your application normally, checking for extensions and GPU capabilities as you normally would.
Only when you run into a problem where asking the driver about a feature is not enough do you add a test to your application. At that point you are already in a situation where you have to test this feature on that GPU to see if you fixed it correctly.
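To make that concrete, here is a rough sketch of how such probes can sit next to the ordinary extension checks (all names are made up; the strstr check on the extension string is the usual crude test of that era, and ProbeFP16Filtering / ProbeNPOTTextures are the hypothetical probes sketched earlier):

    #include <cstring>

    struct GpuCaps
    {
        bool hasFloatTextures;    // ordinary extension check
        bool fp16FilteringWorks;  // run-time probe, added after the Radeon X1 problem
        bool npotWorks;           // run-time probe, added after the Radeon X problem
    };

    GpuCaps DetectCaps()
    {
        GpuCaps caps;
        const char* ext = (const char*)glGetString(GL_EXTENSIONS);
        caps.hasFloatTextures = strstr(ext, "GL_ARB_texture_float") != 0;

        // Probes only exist for features where asking the driver proved insufficient.
        caps.fp16FilteringWorks = caps.hasFloatTextures && ProbeFP16Filtering();
        caps.npotWorks          = ProbeNPOTTextures();
        return caps;
    }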