I want to create a glow effect in my project. I came across some code from the ATI SDK, where they use shaders to perform a gaussian blur on a texture.
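Simplified, the idea is something like this (my own rough sketch of the pattern, not the exact ATI listing; texelSize is a made-up uniform holding 1.0 / texture resolution):

// vertex shader
uniform vec2 texelSize;

void main()
{
    gl_Position = ftransform();
    // compute the blur tap coordinates once per vertex; the rasterizer
    // interpolates them so the fragment shader gets them per pixel
    gl_TexCoord[0] = gl_MultiTexCoord0 + vec4(-texelSize.x, 0.0, 0.0, 0.0);
    gl_TexCoord[1] = gl_MultiTexCoord0;
    gl_TexCoord[2] = gl_MultiTexCoord0 + vec4( texelSize.x, 0.0, 0.0, 0.0);
}

// fragment shader
uniform sampler2D Texture;

void main()
{
    // weighted sum of the three taps: a 3-tap horizontal gaussian pass
    gl_FragColor = 0.25 * texture2D(Texture, gl_TexCoord[0].xy)
                 + 0.50 * texture2D(Texture, gl_TexCoord[1].xy)
                 + 0.25 * texture2D(Texture, gl_TexCoord[2].xy);
}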
My question is: why do they compute the texture coordinates in the vertex shader and not in the fragment shader?
Don't they retrieve texels around the vertices only?
What about the rest of the texture? It seems they would use the same texcoords across most of the texture, wouldn't they?
I think it's because texture coordinates are attached to vertices, not to fragments (pixels). In a fragment shader you have pixels and texels, not vertices and texture coordinates; the fragments are the result of both the modelview and projection transformations.
Forgive me if I'm wrong, I've never written any shaders at all yet!
The vertices here are in screen space; the texture coordinates are in texture space.
Doing the calculation in the vertex program means it runs four times for a single full-window quad.
The fragment program will interpolate the attributes for you, so the right texture coordinates are available at each pixel.
If you did the offset calculation at the fragment level, you would do it once per screen pixel, e.g. 1600*1200 = 1,920,000 times.
What do you think is faster?
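For comparison, here is a sketch of the same three-tap blur with the offset math moved into the fragment shader (texelSize is again a made-up uniform, and the vertex shader is assumed to just pass gl_MultiTexCoord0 through to gl_TexCoord[0]):

// fragment shader doing the offset math itself
uniform sampler2D Texture;
uniform vec2 texelSize;

void main()
{
    vec2 base = gl_TexCoord[0].xy;
    // these offset additions now execute once per pixel,
    // i.e. 1,920,000 times for a 1600x1200 window
    gl_FragColor = 0.25 * texture2D(Texture, base + vec2(-texelSize.x, 0.0))
                 + 0.50 * texture2D(Texture, base)
                 + 0.25 * texture2D(Texture, base + vec2(texelSize.x, 0.0));
}

Same image, more work per frame.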
Originally posted by enmaniac: I just didn’t know the fragment program interpolates the coords
To nitpick a little, it's not the fragment shader that interpolates the coords. It's done as part of rasterization, and the interpolated coordinates are passed to the fragment shader.
Originally posted by yooyo: ATI has limited dependent texture reads. If you modify the texcoord in the fragment shader you can do that only 4 times in a shader.
There's no limit on the number of dependent texture reads, but there is a limit on the number of indirections, i.e. on how long a chain of dependent texture reads can be. This means you can do something like this and it will be fine:
vec4 sum = vec4(0.0);
// the offsets don't depend on any texture fetch, so no read feeds another:
// zero indirections, no matter how many reads
for (int i = 0; i < 10; i++){
    sum += texture2D(Texture, texCoord + offset[i]);
}
On the other hand, this will run in software:
vec2 coord = texCoord;
// each fetch feeds the coordinate of the next one: a chain of
// 10 dependent reads, far past the hardware's indirection limit
for (int i = 0; i < 10; i++){
    coord += texture2D(Texture, coord).xy;
}
Originally posted by enmaniac: In the shaders above, the texCoords calculated in the vertex shader are passed into the fragment shader. This is easy in GLSL, but how do I do it using Cg?
It’s just as easy. Take a look at the Cg User’s Manual that came with the Cg download zip from NVIDIA. The User’s Manual also has a good number of example vertex/fragment shaders to gaze upon.
Isn't there any other way to pass data acquired in the vertex shader into the fragment shader, like in the GLSL example, using some varying parameters rather than texcoords?
Whether you use texcoords or invent your own name for the varying, it all ends up the same way: they are interpolated using a limited set of interpolator resources.
It’s more a question of readability. Anyone disagree?
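For what it's worth, in GLSL a custom-named varying looks like this (a minimal sketch; blurCoord is a made-up name):

// vertex shader
varying vec2 blurCoord;   // written once per vertex

void main()
{
    gl_Position = ftransform();
    blurCoord = gl_MultiTexCoord0.xy;
}

// fragment shader
varying vec2 blurCoord;   // arrives already interpolated per fragment
uniform sampler2D Texture;

void main()
{
    gl_FragColor = texture2D(Texture, blurCoord);
}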
Cg probably doesn't support this GLSL feature. We wouldn't want two languages that are 100% identical.