Now, if I pass inc as a uniform variable, the fragment shader takes 681 ms to complete 1000 iterations (the shader is executed 1000 times); if I instead replace inc with a numeric literal (0.1, for example), it takes 699 ms.
I would think that code using a constant numeric value should be faster than the same code with a uniform variable, so why does this happen?
You are aware that such things depend entirely on the shader compiler, in other words, on the driver? So the behaviour will differ based on your graphics card and driver version. The values you are measuring are very close to each other (they are practically equal); I suppose the shader compiler simply inlines the uniform variable as a constant (Nvidia drivers do that on older cards, since uniforms have to be hard-coded in the shader anyway).
You don’t know what my GL/GLSL code does, nor the screen/texture size; how can you say that?
If you are talking about the execution of a shader, then “screen/texture size” is entirely irrelevant. However, if you’re measuring the time it takes to execute a particular rendering command, then it is entirely relevant.
So the question stands: what are you measuring here?
A better way to determine shader performance is to use an occlusion query together with a timer query (Nvidia only). With both values it’s easy to calculate the fill rate or the average shader run time.
glFinish();                     // make sure all previously issued GL work is done
Uint32 t1 = SDL_GetTicks();
for (int i = 0; i < num_it; i++)
{
    ...                         // rendering commands under test
}
glFinish();                     // wait for the GPU to finish before stopping the clock
Uint32 t2 = SDL_GetTicks();     // elapsed time: t2 - t1, in milliseconds
The time I wrote is the time needed for 1000 iterations on an 800x800 RGB texture.
I didn’t give you more details because I’m not complaining about the execution time; I was just wondering why a shader that uses a uniform variable is faster than the same one that uses a numeric literal.