Weird problem with gl_ModelViewMatrix

I have a problem with OpenGL’s coordinate transformation (translation, rotation, scaling) that I cannot solve myself.

I am maintaining an OpenGL port of an old 3D shooter game (Descent) and have implemented translucent shield spheres for the robots and players. Currently the entire shield lights up when it is hit. Now I want it to light up only around the hit point. So I wrote a little shader program that takes the hit point and uses the distance of each vertex (texel) to the hit point to dim the corresponding pixel:


varying vec3 vertPos;

void main()
{
    gl_TexCoord [0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
    // pass the view space vertex position to the fragment shader
    vertPos = vec3 (gl_ModelViewMatrix * gl_Vertex);
}

uniform sampler2D sphereTex;
uniform vec3 vHit;
uniform float fMaxDist;
varying vec3 vertPos;

void main()
{
    // dim the pixel depending on its distance from the hit point
    float scale = 1.0 - clamp (length (vertPos - vHit) / fMaxDist, 0.0, 1.0);
    gl_FragColor = texture2D (sphereTex, gl_TexCoord [0].xy) * gl_Color * scale;
}

Now my program can do the coordinate transformation either in software or via OpenGL. When I have the sphere coordinates transformed in software (slow) ahead of rendering (setting gl_ModelViewMatrix to identity), everything works as intended (in that case “vertPos = vec3 (gl_Vertex)” - no multiplication with gl_ModelViewMatrix).

When I have OpenGL transform the coordinates (fast) and render the shield sphere, the sphere looks alright, is at the proper place and has the proper orientation (it is textured, so I can see that), but the vertex-to-hit-point comparison doesn’t work. I have to add that I transform the hit point in software before passing it to the shader, to avoid having the shader multiply it with gl_ModelViewMatrix for each texel (but if I skip the software transformation and multiply with gl_ModelViewMatrix in the shader instead, it still doesn’t work).

I have no clue what I am doing wrong, since rendering the sphere with OpenGL transformation works fine - do I have some misconception about gl_ModelViewMatrix? What am I overlooking? Can someone please enlighten me?

You didn’t say in which space the hit point is. Is it in object (model) space?

Well, that’s where I expect it to be.

As I said, when all transformations are done by the CPU, everything works fine, i.e. the hit point is properly transformed to some point on the surface of the transformed shield sphere.

If I let only the hit point be transformed by the CPU and let OpenGL handle the sphere vertex transformations, the sphere is at the proper place, but either the hit point or gl_ModelViewMatrix * gl_Vertex apparently is not where I expect it to be. It is correct that all OpenGL translations, rotations and scalings I have specified at some point in time are accumulated into gl_ModelViewMatrix, right? (Provided I did glMatrixMode (GL_MODELVIEW) before.)

I assume that if both software and hardware transformation render the sphere properly, the software transformation must be working correctly and identically to the hardware transformation. So why is the software-transformed hit point off with regard to the hardware transformation?

You could argue that it is rather hard to verify whether hardware and software transformation really work the same way, but I can render the in-game actors (e.g. robots) both ways, and they look right either way.

Still, I must be making a mistake or a wrong assumption somewhere, but I cannot figure which.

Btw, setting up a transformation in my program goes through a central function which, depending on a switch (use OpenGL or use software), either stuffs the offset and view matrix passed to it into OpenGL or mangles them into the software view matrix.

The sphere consists of a bunch of pre-computed vertices at a distance of 1.0 from the sphere center. When rendering it with OpenGL transformation, it gets translated to the object it surrounds, rotated according to the object’s orientation and scaled with the object’s size. When rendering it with software transformation, each vertex is scaled with the object’s size and then transformed. The visual result is the same.

Sorry for being a bit dense. You’re saying that “vHit” is in model (object) space? After you transform gl_Vertex by gl_ModelViewMatrix, “vertPos” is in view space, so “vHit” must also be in view space for the distance test to make sense. Alternatively, don’t transform gl_Vertex at all and do the length computation in model (object) space.
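
If you go the view space route, one option is to transform the hit point once per vertex and pass it along as a varying. Roughly like this (untested sketch - “hitPos” is just a name I made up here, and vHit is assumed to come in as an object space position):

varying vec3 vertPos;
varying vec3 hitPos;
uniform vec3 vHit;   // hit point in object (model) space

void main()
{
    gl_TexCoord [0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
    // both positions end up in view space, so the distance test compares like with like
    vertPos = vec3 (gl_ModelViewMatrix * gl_Vertex);
    hitPos = vec3 (gl_ModelViewMatrix * vec4 (vHit, 1.0));
}

The fragment shader would then compute length (vertPos - hitPos) instead of length (vertPos - vHit).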

Actually I am only saying that I believe I am doing the exact same transformations on the sphere vertices in software and in OpenGL mode. The distance test not working in OpenGL mode leads me to the conclusion that this may be a wrong assumption, and that I simply fail to understand the difference between the two rendering approaches.

And yes, I think all that stuff gets transformed to view space. In OpenGL mode, the sphere vertices are in model space when passed to the renderer, and OpenGL transforms them to view space. The hit point, however, gets transformed to view space by my software.

I would start by not transforming vHit in software and instead doing it in the shader. Also, if vHit is supposed to be in model (object) space, then you shouldn’t have to transform either of them - just use gl_Vertex directly (obviously this will affect the “strength” of the fMaxDist parameter). In either case, I would visualize “scale” directly by using it as the color (skipping the texture stuff).
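
Something along these lines for the debug output (untested sketch, using the same uniforms as your fragment shader):

uniform vec3 vHit;
uniform float fMaxDist;
varying vec3 vertPos;

void main()
{
    float scale = 1.0 - clamp (length (vertPos - vHit) / fMaxDist, 0.0, 1.0);
    // show the attenuation factor directly as a grey value,
    // so you can see where the falloff actually lands on the sphere
    gl_FragColor = vec4 (vec3 (scale), 1.0);
}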

I have tried all permutations of transforming and not transforming, and none of them worked. What I particularly tried was passing vHit as a vec4 in model space and computing scale like this:


float scale = 1.0 - clamp (length (vertPos - vec3 (gl_ModelViewMatrix * vHit)) / fMaxDist, 0.0, 1.0);
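
Spelled out, the whole fragment shader of that attempt looks roughly like this (vHit passed as a vec4 in object space, everything else as before):

uniform sampler2D sphereTex;
uniform vec4 vHit;   // hit point in object (model) space
uniform float fMaxDist;
varying vec3 vertPos;

void main()
{
    // transform the object-space hit point to view space per fragment
    float scale = 1.0 - clamp (length (vertPos - vec3 (gl_ModelViewMatrix * vHit)) / fMaxDist, 0.0, 1.0);
    gl_FragColor = texture2D (sphereTex, gl_TexCoord [0].xy) * gl_Color * scale;
}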

Edit:

I have found a way to do it in hardware now. Your hint not to use gl_ModelViewMatrix at all led me onto the right track. All I had to do was normalize the hit point and rotate it inversely to the model’s rotation.
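
In case it helps anyone else, the shader side now boils down to the object space variant suggested above - roughly this (sketch):

varying vec3 vertPos;

void main()
{
    gl_TexCoord [0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
    // unit-sphere vertex in object space; vHit is normalized and rotated by the
    // inverse of the model rotation on the CPU, so both points are in the same space
    vertPos = vec3 (gl_Vertex);
}

The fragment shader stays as before, with vHit and fMaxDist now given in the unit sphere’s object space.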

Thank you for your help.