Shader issue

Hi!

I want to create a glow effect in my project. I came across some code from the ATI SDK that uses shaders to perform a Gaussian blur on a texture. This is the code:

[Vertex shader]

uniform vec2 invSize; // presumably one texel: (1/width, 1/height) of the texture

varying vec2 texCoordM;
varying vec2 texCoordB0;
varying vec2 texCoordF0;
varying vec2 texCoordB1;
varying vec2 texCoordF1;
varying vec2 texCoordB2;
varying vec2 texCoordF2;

void main(){
	gl_Position = gl_Vertex; // the full-window quad's vertices are already in clip space
	vec2 texCoord = gl_MultiTexCoord0.xy + 0.25 * invSize;

	texCoordM = texCoord;
	texCoordB0 = texCoord - invSize;
	texCoordF0 = texCoord + invSize;
	texCoordB1 = texCoord - invSize * 2.0;
	texCoordF1 = texCoord + invSize * 2.0;
	texCoordB2 = texCoord - invSize * 3.0;
	texCoordF2 = texCoord + invSize * 3.0;
}


[Fragment shader]

uniform sampler2D Image;

varying vec2 texCoordM;
varying vec2 texCoordB0;
varying vec2 texCoordF0;
varying vec2 texCoordB1;
varying vec2 texCoordF1;
varying vec2 texCoordB2;
varying vec2 texCoordF2;

void main(){
	vec4 sampleM  = texture2D(Image, texCoordM);
	vec4 sampleB0 = texture2D(Image, texCoordB0);
	vec4 sampleF0 = texture2D(Image, texCoordF0);
	vec4 sampleB1 = texture2D(Image, texCoordB1);
	vec4 sampleF1 = texture2D(Image, texCoordF1);
	vec4 sampleB2 = texture2D(Image, texCoordB2);
	vec4 sampleF2 = texture2D(Image, texCoordF2);

	gl_FragColor = 0.1752 * sampleM
	             + 0.1658 * (sampleB0 + sampleF0)
	             + 0.1403 * (sampleB1 + sampleF1)
	             + 0.1063 * (sampleB2 + sampleF2);
}
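
(As a sanity check: the seven weights form a normalized kernel, since 0.1752 + 2 * (0.1658 + 0.1403 + 0.1063) = 0.1752 + 0.8248 = 1.0, so the blur doesn’t change overall brightness.)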
  

My question is: why do they compute the texture coords in the vertex shader and not in the fragment shader?
Don’t they retrieve texels around the vertices only?
What about the rest of the texture? It seems they use the same texcoords for most of the texture, don’t they?

Hmm… I am confused! Please help!

Thx in advance

I think it’s because texture coordinates are applied to vertices, not fragments (pixels). In a fragment shader you have pixels and texels, not vertices and texture coordinates; the fragments are the result of both the modelview and projection transformations.

Forgive me if I’m wrong, I’ve never done any shaders at all yet!

Hope that’s right :)

The vertices here are in screen space; the texture coordinates are in texture space.
Doing the calculation in the vertex program does it only four times for a single full-window quad.
The fragment program will interpolate the attributes for you, so the right texture coordinates are available at each pixel.
If you did the offset calculation at the fragment level, you would do it once per screen pixel, e.g. 1600*1200 times.
What do you think is faster? ;)
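
For comparison, here is a sketch (mine, not from the ATI sample) of the same blur with the offsets computed per fragment instead:

uniform sampler2D Image;
uniform vec2 invSize;

varying vec2 texCoord; // only the base coordinate is interpolated now

void main(){
	// all of the offset math now runs once per fragment,
	// e.g. 1600*1200 times for a full-screen quad
	vec4 color = 0.1752 * texture2D(Image, texCoord);
	color += 0.1658 * (texture2D(Image, texCoord - invSize) + texture2D(Image, texCoord + invSize));
	color += 0.1403 * (texture2D(Image, texCoord - invSize * 2.0) + texture2D(Image, texCoord + invSize * 2.0));
	color += 0.1063 * (texture2D(Image, texCoord - invSize * 3.0) + texture2D(Image, texCoord + invSize * 3.0));
	gl_FragColor = color;
}

Same output, but the coordinate arithmetic is repeated for every pixel instead of being interpolated for free. On some older hardware, reads from coordinates computed in the fragment shader are also treated as dependent reads, which is what the posts below get into.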

Thx A LOT both of you :)

Now it is clear :) I just didn’t know the fragment program interpolates the coords ;)

ATI has a limit on dependent texture reads. If you modify a texcoord in the fragment shader, you can only do that 4 times in the shader.

If I’m wrong, somebody should correct me.

yooyo

Originally posted by enmaniac:
I just didn’t know the fragment program interpolates the coords ;)
To nitpick a little, it’s not the fragment shader that interpolates the coords. It’s done as part of the rasterization, and the interpolated coordinates are passed to the fragment shader.

Originally posted by yooyo:
ATI has a limit on dependent texture reads. If you modify a texcoord in the fragment shader, you can only do that 4 times in the shader.
There’s no limit on the number of dependent texture reads, but there is a limit on the number of indirections, that is, on how long a chain of dependent texture reads can be. This means you can do something like this and it will be fine:

uniform vec2 offset[10]; // tap offsets, known before any texture read

vec4 sum = vec4(0.0);
for (int i = 0; i < 10; i++){
   sum += texture2D(Texture, texCoord + offset[i]); // no coord depends on a previous read
}

On the other hand, this will run in software, because each iteration’s coordinate depends on the result of the previous texture read, so the chain of indirections grows with every pass through the loop:

vec2 coord = texCoord;
for (int i = 0; i < 10; i++){
   coord += texture2D(Texture, coord).xy; // each read feeds the next coordinate
}

Thx a lot ppl!

But I have another question.

In the shaders above, the texcoords calculated in the vertex shader are passed into the fragment shader. This is easy in GLSL, but how do I do it using Cg?

Originally posted by enmaniac:
In the shaders above, the texcoords calculated in the vertex shader are passed into the fragment shader. This is easy in GLSL, but how do I do it using Cg?
It’s just as easy. :D Take a look at the Cg User’s Manual that came with the Cg download zip from NVIDIA. The User’s Manual also has a good number of example vertex/fragment shaders to gaze upon.

But basically it’s like this:

vertex shader (Cg):

struct vertOut
{
    float4 hpos : POSITION;
    float4 tex0 : TEXCOORD0;
};

vertOut main( float4 pos : POSITION,
              uniform float4x4 mvp
              // etc...
)
{
   vertOut vout;

   vout.hpos = mul( mvp, pos );
   vout.tex0 = whatever; // compute your texcoords here

   return vout;
}

fragment shader (Cg):

float4 main( float4 texCoord0 : TEXCOORD0,
             uniform sampler2D texture : TEXUNIT0
             // etc...
) : COLOR
{
    float4 x = tex2D( texture, texCoord0 );

    return x;
}

-SirKnight

OK… so if I am going to calculate 7 different texcoords in the vertex shader, should I pass them into the fragment shader in different texcoord sets, like this:

vertex shader (Cg):
struct vertOut
{
  float4 hpos  : POSITION;
  float2 texM  : TEXCOORD0;
  float2 texB0 : TEXCOORD1;
  float2 texF0 : TEXCOORD2;
  float2 texB1 : TEXCOORD3;
  float2 texF1 : TEXCOORD4;
  float2 texB2 : TEXCOORD5;
  float2 texF2 : TEXCOORD6;
};

vertOut main( float4 pos : POSITION,
              float2 uv : TEXCOORD0,
              uniform float2 invSize,
              uniform float4x4 mvp
              // etc...
)
{
   vertOut vout;
   vout.hpos = mul( mvp, pos );  
 
   vout.texM  = uv;
   vout.texB0 = uv - invSize;
   vout.texF0 = uv + invSize;
   vout.texB1 = uv - invSize * 2.0;
   vout.texF1 = uv + invSize * 2.0;
   vout.texB2 = uv - invSize * 3.0;
   vout.texF2 = uv + invSize * 3.0;

   return vout;
}

fragment shader (Cg):
float4 main( vertOut vin,
             uniform sampler2D texture : TEXUNIT0
             // etc...
) : COLOR
{
  float4 a = tex2D( texture, vin.texM );
  float4 b = tex2D( texture, vin.texB0 );
  float4 c = tex2D( texture, vin.texF0 );
  float4 d = tex2D( texture, vin.texB1 );
  float4 e = tex2D( texture, vin.texF1 );
  float4 f = tex2D( texture, vin.texB2 );
  float4 g = tex2D( texture, vin.texF2 );

  // e.g. the weighted sum from the GLSL version above:
  return 0.1752 * a + 0.1658 * (b + c) + 0.1403 * (d + e) + 0.1063 * (f + g);
}
  

Isn’t there any other way to pass data acquired in the vertex shader into the fragment shader, like the varying parameters in the GLSL example, rather than using texcoords?

Whether you use texcoords or invent your own name for the varying, it all ends up the same way: they are interpolated using the same limited pool of interpolator resources.
It’s more a question of readability. Anyone disagree?
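
For reference, the GLSL way of “inventing your own name” is just the pattern from the first post, declaring the same varying in both stages; a minimal sketch:

[Vertex shader]

varying vec2 myCoord; // any name you like

void main(){
	gl_Position = gl_Vertex;
	myCoord = gl_MultiTexCoord0.xy;
}

[Fragment shader]

uniform sampler2D Image;
varying vec2 myCoord; // same declaration; filled in by interpolation

void main(){
	gl_FragColor = texture2D(Image, myCoord);
}

With the Cg profiles discussed above, the same data travels through the TEXCOORDn semantics instead.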

Cg probably doesn’t support this GLSL feature. We wouldn’t want two languages that are 100% identical :)