To me, all this info says that this machine can fully support a shader program, while the real machine makes us think we need to disable shader support on it.
Why? The shader code follows below.
Thanks,
Alberto
// size of kernel for this execution
const int KernelSize = %len%;
// array of offsets for accessing the base image
uniform float Offset[KernelSize];
// value for each location in the convolution kernel
uniform float KernelValue[KernelSize];
// image to be convolved
uniform sampler2D BaseImage;

void main()
{
    int i;
    vec4 sum = vec4(0.0);
    for (i = 0; i < KernelSize; i++)
    {
        vec4 tmp = texture2D(BaseImage, gl_TexCoord[0].st + vec2(Offset[i], 0.0));
        sum += tmp * KernelValue[i];
    }
    gl_FragColor = sum;
}
To me, “supported” and “usable” are somewhat orthogonal. A small micro-benchmark at runtime, during an “auto-detect settings” phase, allows a better decision on whether to use a feature or not.
The user should always be able to force the use of any supported feature, even if it does not reach a “usable” framerate, but the default settings should really take the real performance into account.
Maybe on this platform a separable convolution would be a better fit for your algorithm. It would reduce the number of texture lookups at the expense of an intermediate texture write.
The X300/X500 cards are very, very slow. A 19-tap kernel will basically destroy them.
Unfortunately, version numbers don’t tell a whole lot about performance. You either have to measure at runtime, as suggested, or build a list of video cards beforehand.
OK, the only viable solution is to test speed at runtime, so we can check and disable blurring.
In general, what is the recovery approach if the timing later becomes acceptable? (I know that in the shader case it’s impossible to get better results.) You set a flag to false and never do the computation again, but what if the card model changes and blurring can be done?
You mean, when the user upgrades their video card?
A big fat button labelled “re-detect graphic settings”.
Or you can do that silently at each startup (it should be fast).
Or check the GL_VENDOR GL_RENDERER GL_VERSION strings, if any one changes, redo the auto-detect.
The best solution will depend on your application.
I still have some trouble understanding what you sell exactly: is it a low-level graphics engine, a scenegraph, … ?
I mean in general: suppose you have a very complex scene and you decide to turn off some features to keep navigation fast enough, and then the scene becomes simpler. What is the best approach to re-activate the complex/slow features? If you continuously test how fast the complex features are, you will end up with a slow fps.
Do you remember programs that draw boxes instead of objects to allow smooth navigation on slow machines? Perfect: suppose the scene becomes simpler while navigating, how do you re-activate the accurate object representation?
Or maybe there is no way, and the user always needs to press a button to change the LOD and get different performance?
We develop a small software component that allows 3D model visualization.