Converting a vertex program to GLSL

I’ve been at this for quite some time, and lately I’ve had trouble finding other OpenGL users to add their two cents. I have a vertex program that was originally written for GL_NV_vertex_program, and I’m trying to port it to GLSL. I’ve managed to convert it to GL_ARB_vertex_program successfully, but the GLSL version doesn’t work properly (I’m getting garbage all over the screen).

I’ve isolated the problem to a bad translation to GLSL. Which instructions are the offenders? I don’t know yet, but I have a feeling it involves the ones with the more complex and less obvious swizzling. Since I didn’t write this code myself, there’s only so much I can decipher at face value; it was originally a submission to an old version of the NVSDK from early 2001.

Before I go further, I’ll show you the ARB version and my GLSL version so you can see what I’m talking about. I hate to just dump some code on you guys, but I’m starting to get desperate.

ARB (works):


!!ARBvp1.0

# Ported from !!VP1.0 to !!ARBvp1.0 
# by blueshogun96

ATTRIB iPos = vertex.position;
ATTRIB iCol = vertex.color;
ATTRIB iTex = vertex.texcoord;
ATTRIB iNor = vertex.normal;

PARAM mvp[4] = {state.matrix.mvp};
PARAM itmv[4] = {state.matrix.modelview.invtrans};

#PARAM c0 = program.local[0];
#PARAM c1 = program.local[1];
#PARAM c2 = program.local[2];
#PARAM c3 = program.local[3];
#PARAM c4 = program.local[4];
#PARAM c5 = program.local[5];
#PARAM c6 = program.local[6];

PARAM c8 = program.local[8];
PARAM c9 = program.local[9];
PARAM c10 = program.local[10];
PARAM c11 = program.local[11];
PARAM c12 = program.local[12];
PARAM c13 = program.local[13];
PARAM c14 = program.local[14];
PARAM c15 = program.local[15];
PARAM c16 = program.local[16];
PARAM c17 = program.local[17];

OUTPUT oPos = result.position;
OUTPUT oCol = result.color;
OUTPUT oTex = result.texcoord;
#OUTPUT oNor = result.normal;

TEMP R0;
TEMP R1;
TEMP R2;
TEMP R3;
TEMP R4;
TEMP R5;
TEMP R6;

# R0 = (y coordinate of vertex * k) + (w * t)
MAD R0.x, iPos.y, c16.x, c16.z;

# R1 = cos(R0), R0 = any floating point number
MUL R0.x, c11.w, R0.x; 
EXP R0.y, R0.x; 
SLT R2.x, R0.y, c11; 
SGE R2.yz, R0.y, c11; 
DP3 R2.y, R2, c14.zwzw;   
ADD R5.xyz, -R0.y, c10; 
MUL R5, R5, R5;
MAD R0, c12.xyxy, R5, c12.zwzw; 
MAD R0, R0, R5, c13.xyxy;      
MAD R0, R0, R5, c13.zwzw;      
MAD R0, R0, R5, c14.xyxy;      
MAD R0, R0, R5, c14.zwzw;      
DP3 R1.x, R0, -R2; 

# R2 = R1 * c[10] + 1.0
MAD R2.x, R1.x, c10.w, c10.z;

# R3 = perturbed vertex
# R4 = perturbed normal
MOV R2.yw, c16;
MUL R3, iPos, R2.xyxw;
MUL R4, iNor, R2.yxyw;

# Transform vertices into clip space via modelview-projection matrix
DP4 oPos.x, R3, mvp[0];
DP4 oPos.y, R3, mvp[1];
DP4 oPos.z, R3, mvp[2];
DP4 oPos.w, R3, mvp[3];

# Transform normals via inverse transpose modelview matrix & normalize
# R5 = transformed normal
DP3 R5.x, R4, itmv[0];
DP3 R5.y, R4, itmv[1];
DP3 R5.z, R4, itmv[2];
DP3 R5.w, R5, R5;
RSQ R5.w, R5.w;
MUL R5, R5, R5.w;

# Get unit length eye vector to vertex....this is how we texture map
# R6 = vector
ADD R6, c8, iPos;
DP3 R6.w, R6, R6;
RSQ R6.w, R6.w;
MUL R6, R6, R6.w;

# Multiply by 0.5 for texture wrapping
MUL R6, R6, c10.y;
	    
# Texture coord is dot product of normal and vector from eye to vertex
DP3 oTex.x, R5, R6;

# Pass color through
MOV oCol, iCol;

END

GLSL (doesn’t work, lots of garbage):

/* General purpose vectors */
vec4 R0, R1, R2, R3, R4, R5, R6;

/* Constants */
uniform vec4 c8;
//uniform vec4 c9;
uniform vec4 c10;
uniform vec4 c11;
uniform vec4 c12;
uniform vec4 c13;
uniform vec4 c14;
//uniform vec4 c15;
uniform vec4 c16;
//uniform vec4 c17;

/* Matrices */
mat4 mvp = gl_ModelViewProjectionMatrix;
mat3 itmm = gl_NormalMatrix; //transpose(gl_ModelViewInverseMatrix);

void main()
{
    R0.x = ( gl_Vertex.y * c16.x ) + c16.z;
    
    R0.x *= c11.w;
    R0.y = exp( R0.x );
    R2.x = ( R0.y < c11.y ) ? 1.0 : 0.0;
    R2.y = R2.z = ( R0.y >= c11.y ) ? 1.0 : 0.0;
    R2.y = dot( R2, c14.zwzw );
    R5/*.xyz*/ = -R0.y + c10;
    R5 = R5 * R5;
    R0 = ( c12.xyxy * R5 ) + c12.zwzw;
    R0 = ( R0 * R5 ) + c13.xyxy;
    R0 = ( R0 * R5 ) + c13.zwzw;
    R0 = ( R0 * R5 ) + c14.xyxy;
    R0 = ( R0 * R5 ) + c14.zwzw;
    R1.x = dot( R0, -R2 );
    
    R2.x = ( R1.x * c10.w ) + c10.z;
    
    R2.yw = c16.yw;
    R3 = gl_Vertex * R2.xyxw;
    R4.xyz = gl_Normal * R2.yxy;
    
    gl_Position = R3 * mvp;
    
    R5.xyz = R4.xyz * itmm;
//    R5 = R4 * itm;
    /*R5.w = dot( R5, R5 );
    R5.w = inversesqrt( R5.w );
    R5 = R5 * R5.w;*/
    R5 = normalize(R5);

    /*R6 = c8 + v_pos;
    R6.w = dot( R6, R6 );
    R6.w = inversesqrt( R6.w );
    R6 = R6 * R6.w;*/
    R6 = normalize( c8 + gl_Vertex );
    
    R6 = R6 * c10.y;
    
    gl_TexCoord[0].s = dot( R5, R6 );
    
    gl_FrontColor = gl_Color;
}

My theory is that there are four instructions I didn’t translate properly:

  1. SLT R2.x, R0.y, c11;
  2. SGE R2.yz, R0.y, c11;
  3. ADD R5.xyz, -R0.y, c10;
  4. MOV R2.yw, c16;

This is where I’m kinda confused. I understand what these instructions do, but I don’t understand how the swizzling affects the output of each operation.

1 & 2: I understand that x and then yz are the values being written to, but is the value in R0.y being compared to c11.y or the first component, c11.x? Or all of them?
3. This is legal in GL_NV/ARB_vertex_program, but not in GLSL, so I don’t understand how ARB handles this either. Is R0.y being subtracted from each component in c10, and input into R5.xyz?
4. This also confuses me. So, am I moving c16.xy only into R2.yw? Or am I moving c16.yw only?
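
From what I can tell so far, a single-component swizzle like R0.y replicates that component across all four, an operand with no swizzle (like c11) means .xyzw, and the destination write mask only selects which of the per-component results actually get stored. If that reading is right (I haven’t verified it yet), those four instructions would expand in GLSL to something like this:

R2.x = ( R0.y < c11.x ) ? 1.0 : 0.0;   // SLT: mask .x keeps only the comparison against c11.x
R2.y = ( R0.y >= c11.y ) ? 1.0 : 0.0;  // SGE: mask .yz keeps the comparisons
R2.z = ( R0.y >= c11.z ) ? 1.0 : 0.0;  //      against c11.y and c11.z
R5.xyz = vec3( -R0.y ) + c10.xyz;      // ADD: -R0.y replicated and added per component; w untouched
R2.y = c16.y;                          // MOV: mask .yw copies only the matching components
R2.w = c16.w;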

Right now, I’m going through the ARB_vertex_program documentation trying to get a better understanding of how the swizzling works, because I think I’ve been misunderstanding it the whole time. The official documentation isn’t very straightforward either, IMO. Any ideas? Thanks.

Shogun

You might have more success if you just try to program from the comments rather than trying to convert each statement literally.

for example


// R0 = (y coordinate of vertex * k) + (w * t)
float R0 = Position.y * C16.x + C16.z;
// R1 = cos(R0), R0 = any floating point number
float R1 = cos(R0);
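
Continuing that idea, a very rough sketch of the whole shader body might look something like this. It is untested and assumes the built-ins and uniform names from your GLSL version, that c16 holds (k, hfactor, w*t, 1) as in the original demo, and that c10.y = 0.5 and c10.z = 1.0 as the comments suggest:

float s = gl_Vertex.y * c16.x + c16.z;                // (y * k) + (w * t)
float r = cos( s ) * c10.w + c10.z;                   // ripple factor, "R1 * c[10] + 1.0"

vec4 pos = gl_Vertex * vec4( r, c16.y, r, 1.0 );      // perturbed vertex
vec3 nrm = gl_Normal * vec3( c16.y, r, c16.y );       // perturbed normal

gl_Position = gl_ModelViewProjectionMatrix * pos;     // clip-space position

vec3 n = normalize( gl_NormalMatrix * nrm );          // transformed, normalized normal
vec3 e = normalize( c8.xyz + gl_Vertex.xyz ) * c10.y; // eye-to-vertex vector, scaled by 0.5

gl_TexCoord[0].s = dot( n, e );                       // 1D texture coordinate
gl_FrontColor = gl_Color;                             // pass color through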

Wow, I didn’t realize anyone had responded to this thread. Thanks, but I tried that, and it broke everything.

At this point, I’ve almost got it working. The vertices still appear warped (as if the perspective were wrong), and I don’t know why that is. My new shader looks like this:

/* Vertex attributes */
attribute vec4 v_pos;
attribute vec4 v_normal;
varying float v_tex;

/* Texture sampler */
uniform sampler1D texture;

/* General purpose vectors */
vec4 R0, R1, R2, R3, R4, R5, R6;

/* Constants */
uniform vec4 c8;
uniform vec4 c9;
uniform vec4 c10;
uniform vec4 c11;
uniform vec4 c12;
uniform vec4 c13;
uniform vec4 c14;
uniform vec4 c15;
uniform vec4 c16;
uniform vec4 c17;

/* Matrices */
mat4 mvp = gl_ModelViewProjectionMatrix;
mat3 itmm = gl_NormalMatrix;


void main()
{
    R0 = R1 = R2 = R3 = R4 = R5 = R6 = vec4( 0, 0, 0, 1 );
    R0.x = ( v_pos.y * c16.x ) + c16.z;

    R0.x *= c11.w;
    R0.y = exp( R0.x );
    R2.x = ( R0.y < c11.y ) ? 1.0 : 0.0;
    R2.y = ( R0.y >= c11.y ) ? 1.0 : 0.0;
    R2.z = ( R0.y >= c11.y ) ? 1.0 : 0.0;
    R2.y = dot( R2, c14.zwzw );
    R5 = -R0.y + c10;
    R5 = R5 * R5;
    R0 = ( c12.xyxy * R5 ) + c12.zwzw;
    R0 = ( R0 * R5 ) + c13.xyxy;
    R0 = ( R0 * R5 ) + c13.zwzw;
    R0 = ( R0 * R5 ) + c14.xyxy;
    R0 = ( R0 * R5 ) + c14.zwzw;
    R1.x = dot( R0.xyz, -R2.xyz );
    //R1.x = cos(R0.x);
    
    R2.x = ( R1.x * c10.w ) + c10.z;
    
    R2.yw = c16.yw;
    R3 = v_pos * R2.xyxw;
    R4.xyz = v_normal.xyz * R2.yxy;
    
    gl_Position = R3 * mvp;
    
    R5.xyz = R4.xyz * itmm;
    R5.xyz = normalize(R5.xyz);
    R6.xyz = normalize( c8.xyz + v_pos.xyz );
    R6 = R6 * c10.y;
    
    v_tex = dot( R5.xyz, R6.xyz );
    
    gl_FrontColor = gl_Color;
}

I REALLY want to post a screenshot of this, but the forum won’t let me (why is that?), and I think that would give you all a better idea of what is going on. I think it has something to do with the W coordinate, and I don’t fully understand much of the math/operations going on here. At this point, I’m ready to just pay someone to get it working, because I have to have this fixed by January 18th (no pressure).

Any more ideas?

Shogun.

I don’t fully understand much of the math/operations that are going on here

Can’t help you much here, I’m afraid.

Why do you initialize the R registers with vec4(0,0,0,1) and not vec4(0,0,0,0)?

You can’t post links until you have made a few normal posts; it’s just to cut down on spamming. You can still post the link if you remove the http and change “.” to “dot”; most people here understand that.

[QUOTE=tonyo_au;1257204]
Can’t help you much here, I’m afraid.

Why do you initialize the R registers with vec4(0,0,0,1) and not vec4(0,0,0,0)?

You can’t post links until you have made a few normal posts; it’s just to cut down on spamming. You can still post the link if you remove the http and change “.” to “dot”; most people here understand that.[/QUOTE]

I can’t remember exactly why I did that. There was some breakage that I think I fixed, but once again, I can’t remember. Changing it back didn’t really do anything.

Anyway, I’ll link a few pictures so you can all see what is going on visually.

Good:

Bad:

I’m quite sure it has something to do with the 2nd chunk of code. Still not sure if I converted it correctly, but I’ll keep going over the ARB_vertex_program documentation in case I missed some detail. Thanks.

Shogun.

I see a couple of things that look odd:


void main()
{
    R0 = R1 = R2 = R3 = R4 = R5 = R6 = vec4( 0, 0, 0, 1 );
    R0.x = ( v_pos.y * c16.x ) + c16.z;
 
    R0.x *= c11.w;
    R0.y = exp( R0.x );
    R2.x = ( R0.y < c11.y ) ? 1.0 : 0.0;
    R2.y = ( R0.y >= c11.y ) ? 1.0 : 0.0;
    R2.z = ( R0.y >= c11.y ) ? 1.0 : 0.0;
    R2.y = dot( R2, c14.zwzw );
    R5 = -R0.y + c10;                                 // the ARB version only writes R5.xyz here
    R5 = R5 * R5;
    R0 = ( c12.xyxy * R5 ) + c12.zwzw;
    R0 = ( R0 * R5 ) + c13.xyxy;
    R0 = ( R0 * R5 ) + c13.zwzw;
    R0 = ( R0 * R5 ) + c14.xyxy;
    R0 = ( R0 * R5 ) + c14.zwzw;
    R1.x = dot( R0.xyz, -R2.xyz );
    //R1.x = cos(R0.x);
 
    R2.x = ( R1.x * c10.w ) + c10.z;
 
    R2.yw = c16.yw;
    R3 = v_pos * R2.xyxw;
    R4.xyz = v_normal.xyz * R2.yxy;
 
    gl_Position = R3 * mvp;                           // looks odd usually mvp * R3
 
    R5.xyz = R4.xyz * itmm;
    R5.xyz = normalize(R5.xyz);
    R6.xyz = normalize( c8.xyz + v_pos.xyz );
    R6 = R6 * c10.y;
 
    v_tex = dot( R5.xyz, R6.xyz );
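
To expand on that comment: in GLSL, mat4 * vec4 treats the vector as a column vector, so each result component is the dot product of a matrix row with the vector, which is exactly what the four DP4s against mvp[0..3] compute in the ARB version; vec4 * mat4 multiplies by the transpose instead. Assuming mvp is gl_ModelViewProjectionMatrix as in your shader, the direct equivalent would be:

gl_Position = mvp * R3;    // matches DP4 oPos.x, R3, mvp[0] ... DP4 oPos.w, R3, mvp[3]

The same ordering applies to the normal-matrix multiply a couple of lines below it.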

Okay, now it’s mostly fixed. The jellyfish are taking shape right now. I didn’t know that the order of multiplication of vertices and matrices mattered. Thanks very much, I really appreciate this.

Everything looks correct for the first second or so, then the jellyfish suddenly expand and the vertices get thrown in different directions. Not sure why that happens, but now that it’s almost working, I can focus on the timing constants.

One more thing I noticed: when I comment out ‘+ c16.z’, everything works, but the jellyfish tentacles don’t animate. I’m beginning to wonder if there’s a difference in floating-point precision between glProgramLocalParameter4fARB and glUniform4f, because even with the ‘+ c16.z’ in place, the tentacle animation works correctly for about a second. If it helps, here is how this constant is calculated:


void Jellyfish::render( )
{
  static float lastTime = 0.0f;
  float dt;
  float time = 0.0f;
  static float ourTime = 0.0f;
  float k = 0.25f;    // inverse wavelength of ripple
  float w = 3.0f;     // inverse period of ripple

  // What follows is the CPU's only per-frame computations
  // for animating the jellyfish
  time = queryElapsedTime( );

  dt = time - lastTime;
  lastTime = time;
  float hfactor = .805f - 0.195f * sinf(w * ourTime);
  if ( cosf(w * ourTime) < 0.0f )
  {
    dt *= 2.0f;
  }
  ourTime += dt;

  // Now send some parameters and draw the jellyfish
#if USE_ARB_VERTEXPROGRAM
  glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, 16, k, hfactor, w * ourTime, 1.0f);
#else
  glUseProgram(vertexProgramsNV[0]);
  glUniform4f(uniform_constants[16], k, hfactor, w * ourTime, 1.0f);
  glUseProgram(0);
#endif
    
  glCallList(jellyfishList);
}

So, that’s how c16.z is calculated. I highly doubt it matters, but either there’s still a bug in my conversion, or it has something to do with a difference between the two constant-setting functions.

Thanks again for your help so far, I am forever appreciative!

Shogun.

I don’t know the function glProgramLocalParameter4fARB, although looking at ARB_vertex_program it looks like it could be an obsolete function; but using an absolute index may be giving you a different variable than you think.
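
With GLSL I would query the uniform location by name rather than assuming an index. As a sketch (assuming the uniform in your shader is literally named "c16" and that vertexProgramsNV[0] is the linked program object):

GLint loc_c16 = glGetUniformLocation( vertexProgramsNV[0], "c16" ); // returns -1 if the name is wrong or optimized out
glUseProgram( vertexProgramsNV[0] );
glUniform4f( loc_c16, k, hfactor, w * ourTime, 1.0f );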