GLSL to CG

Hi,

I have been using GLSL in my engine for a long time, and I always have problems and bugs with ATI cards; it gets to the point of working correctly on some ATI cards and not on others.
I know ATI's GLSL support is very weak, so I was going to try converting my shaders to Cg.
I'm OK with fragment shaders, but vertex shaders work differently in Cg and GLSL.
So, if anyone knows both APIs, could you please convert this example to Cg, so I can use it as a starting point?
I think it's just a syntax difference…
This shader calculates the vertex position, computes the fog coordinate, and gets the projected texture coordinate used to project a shadow.
Thanks for any help.

  
varying vec4 projCoord;
varying vec4 projCoord2;

void main()
{

     vec4 realPos = gl_ModelViewMatrix * gl_Vertex;
     projCoord = gl_TextureMatrix[0] * realPos;
     gl_FogFragCoord = abs(realPos.z/realPos.w);
     projCoord2 = gl_MultiTexCoord1;
     gl_Position =  ftransform();

}

You're probably on a PC. Here is my Mac experience.

That's funny. I got frustrated with Cg and went to GLSL on the Mac. I experienced the inverse situation.

I did months of research and came to the conclusion that Cg on the Mac cannot do shadows on my Mac's ATI Radeon 9800 card. The closest I got was a texture projection, but the depth values were wrong and contradicted the NVIDIA documentation. It simply did not work. So I junked Cg. I'm not going to have different shader code for different cards.

Yet, when I use GLSL I'm getting correct depth values and projections for my shadow maps on my Radeon 9800. It works as documented. And that is exactly how I'll leave it. If it does not work on NVIDIA, oh farking well.

Furthermore, the Cg is going to get compiled into ARB assembly, likely the lowest common denominator on the Mac. So on ATI you might not enjoy better or faster feature sets.

I think it's a mistake to go to Cg. I'd stick with the standard.

I hate to say it but my impression of shaders at this point is not that good. Seems like a real tangled ball of yarn. Seems slow and hacky. Incompatibilities everywhere. I’ll just have to tell users “oh f*&cking well, call nvidia and ati. Not my problem and read my software license.” Make the crappy hardware and drivers look like crap.

No wonder people want to develop on consoles.

Had to rant. Over and out. :cool:

I'm on a PC, yes, so my experience may be different from yours on a Mac.
I think GLSL doesn't get much attention from the driver vendors at all, and ATI in this respect is much, much worse than NVIDIA.
What is happening on ATI cards (on some of them, at least) is that the depth values are ignored; no comparison is made between depth values, and so I get everything black in the game. I really can't maintain a GLSL path if every time they (ATI) update the drivers, something fails or doesn't work as it should.
That's why I'll change to Cg, to give it a try. Cg is more widely used than GLSL, so it may behave correctly on ATI; we shall see.
Anyway, any chance you know how to convert those shader lines?

thanks,
Bruno

Originally posted by Bruno:

What is happening on ATI cards (on some of them, at least) is that the depth values are ignored; no comparison is made between depth values, and so I get everything black in the game.

What depth format do you use for your shadow maps? Some ATI cards support only the 16-bit one (DEPTH_COMPONENT16).

Do you have a match between the depth compare mode (COMPARE_R_TO_TEXTURE_ARB) and the type of sampler uniform used for the shadow map (sampler2DShadow)?
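
For reference, a typical shadow map texture setup that satisfies both might look something like this (the texture name and the 512x512 size are just placeholders):

GLuint shadowTex ;
glGenTextures( 1, &shadowTex ) ;
glBindTexture( GL_TEXTURE_2D, shadowTex ) ;
// 16-bit depth format, the safest choice on ATI
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, 512, 512, 0,
              GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL ) ;
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR ) ;
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR ) ;
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE ) ;
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE ) ;
// this compare state is what a sampler2DShadow uniform expects
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB ) ;
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL ) ;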

Quote:

What depth format do you use for your shadow maps? Some ATI cards support only the 16-bit one (DEPTH_COMPONENT16).

Do you have a match between the depth compare mode (COMPARE_R_TO_TEXTURE_ARB) and the type of sampler uniform used for the shadow map (sampler2DShadow)?
Thanks…
I know the ATI limitations on depth format, so yes, I'm using DEPTH_COMPONENT16; in the shader I'm also using sampler2DShadow, and the texture's compare mode is COMPARE_R_TO_TEXTURE_ARB.
In any case, if any of these were wrong (except the depth format), it would fail on NVIDIA cards too.

Originally posted by Bruno:

In any case, if any of these were wrong (except the depth format), it would fail on NVIDIA cards too.

If I remember correctly, the results of a mismatched compare state and sampler are undefined, so it might appear to work on some hardware.

How do you render into the shadow map? If you are using an FBO, are you using the setup without a color buffer?

The Cg code that should do the same thing as your GLSL code is something like:

struct Input {
   float4 position : POSITION ;
   float4 texture1 : TEXCOORD1 ;
} ;

struct Output {
   float4 position : POSITION ;
   float4 projCoord : TEXCOORD0 ;
   float4 projCoord2 : TEXCOORD1 ;
   float fog : FOGC ;
} ;

Output main(
   Input input,
   uniform float4x4 model_view,
   uniform float4x4 texture_matrix,
   uniform float4x4 model_view_projection
)
{
   Output output ;
 
   float4 realPos = mul( model_view, input.position ) ;
   output.projCoord = mul( texture_matrix, realPos ) ;
   output.fog = abs( realPos.z / realPos.w ) ;
   output.projCoord2 = input.texture1 ;
   output.position = mul( model_view_projection, input.position ) ;

   return output ;
}

where you need to use the Cg runtime to set up matrix tracking or upload the corresponding matrix uniforms manually.
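
For example, the matrix setup with the runtime could look roughly like this (just a sketch; vertexProgram stands for your compiled CGprogram handle, and it should be done each frame after the GL matrices are set up, since the values are captured at the time of the call):

CGparameter mv  = cgGetNamedParameter( vertexProgram, "model_view" ) ;
CGparameter tex = cgGetNamedParameter( vertexProgram, "texture_matrix" ) ;
CGparameter mvp = cgGetNamedParameter( vertexProgram, "model_view_projection" ) ;

// grab the current GL matrices for the uniforms declared above;
// CG_GL_TEXTURE_MATRIX uses the active texture unit, so keep unit 0
// active to match gl_TextureMatrix[0] from the GLSL version
cgGLSetStateMatrixParameter( mv,  CG_GL_MODELVIEW_MATRIX,            CG_GL_MATRIX_IDENTITY ) ;
cgGLSetStateMatrixParameter( tex, CG_GL_TEXTURE_MATRIX,              CG_GL_MATRIX_IDENTITY ) ;
cgGLSetStateMatrixParameter( mvp, CG_GL_MODELVIEW_PROJECTION_MATRIX, CG_GL_MATRIX_IDENTITY ) ;

Alternatively, cgGLSetMatrixParameterfc() can be used to upload a matrix you computed yourself.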

ADDED: For position invariance it might be necessary to use a compiler option or binding semantics for the model_view_projection uniform, depending on the profile.

Additionally, if you intend to compile into the arbvp1 profile only, it is possible to access tracked OpenGL state (matrices, materials & lights) using a special "structure" similar to: state.matrix.mvp

Hi again,

Thanks for the code, I'll see if I can switch from GLSL to Cg smoothly (I hope). It's really a shame that GLSL is not better supported; it is so much cleaner and more "beautiful", code-wise, than Cg, but oh well…
I'm rendering with an FBO, yes, with the color buffer disabled, like this:

glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, ShadowB, 0 ) ;
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, 0, 0 );
glDrawBuffer( GL_NONE ) ;
glReadBuffer( GL_NONE ) ;
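
A quick completeness check right after that setup would at least rule out FBO trouble in the ATI drivers, something like:

GLenum status = glCheckFramebufferStatusEXT( GL_FRAMEBUFFER_EXT ) ;
if ( status != GL_FRAMEBUFFER_COMPLETE_EXT )
     printf( "FBO incomplete: 0x%x\n", status ) ;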

thanks again,
Bruno