Rotate normals with vertices

I’ve decided to give up on the textures until I know the normals are coming out right.

So I have a simple vertex shader that lets you rotate a 3D object around the Y axis, and of course the first thing I noticed is that the lighting/shading does not adjust to the rotation. That is, the light is out front, but when you rotate the object, the shading on it corresponds to its original, unrotated position. How do I deal with this?


void rotateY (inout vec4 vert, float rads);

uniform float YRot;
varying float Diffuse;

void main() {
	rotateY (gl_Vertex, YRot);
	vec3 vnormal = normalize(gl_NormalMatrix * gl_Normal);
	Diffuse = max(dot(vnormal, gl_LightSource[0].position.xyz), 0.0);
	gl_Position = ftransform();
}


void rotateY (inout vec4 vert, float rads) {
	vec4 old = vert;
	mat2 rotY = mat2(cos(rads), -sin(rads), sin(rads), cos(rads));
	vert.xz = old.xz * rotY;
}

The lighting math I took from the “Swiftless Game Programming Site”, which seems very good; I’d like to move on to per-pixel lighting but need to understand some things about this first.

Sounds like you want the light to remain fixed in “world” space rather than rotate with the object as if it were fixed in “object” space.

You need to make sure that when calling glLightfv(GL_LIGHT#, GL_POSITION, …), the MODELVIEW matrix is set to the appropriate world-to-eye transform for the light source. If your eye frustum is not moving in world space and you want your light source fixed in world space, this means that every frame you need to make sure the exact same MODELVIEW transform is in place before making the glLightfv call for the light source POSITION. The passed position is immediately transformed to eye space using the then-active MODELVIEW matrix.
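
A minimal sketch of that ordering, assuming a gluLookAt camera; the eye parameters and light position here are placeholders:

/* Each frame: load the world-to-eye transform first... */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,	/* eye position (placeholder) */
          0.0, 0.0, 0.0,	/* look-at point */
          0.0, 1.0, 0.0);	/* up vector */

/* ...then specify the light position; it is transformed by the
   current MODELVIEW into eye space right here, once. */
GLfloat lightPos[] = { 0.0f, 0.0f, 10.0f, 1.0f };	/* world-space position */
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

/* Object transforms applied to MODELVIEW after this point affect
   only the geometry, not the already-stored light position. */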

The lighting math I took from the “Swiftless Game Programming Site”, which seems very good

Here’s an idea: stop taking the math from other places. You are not talking about complicated stuff here; you’re just transforming things and doing basic lighting computations.

There are a lot of things wrong with that code. It’s like Riven: a world built from many disparate parts that contradicts itself in many places, yet multiple contradictions somehow cancel each other out into something that kind of works.

For starters, you can’t write to an attribute. So I’m not sure how your use of an inout parameter with gl_Vertex even works.

Second, you never rotate the normal or the light direction. So there’s no reason your lighting should change.

And then there’s this:

gl_Position = ftransform();

This is either a copy-and-paste error, or you don’t fully understand what this line does.

ftransform() uses the fixed-function matrices and pipeline to compute the position exactly as the fixed-function pipeline would have. Notice that ftransform() does not take parameters. That’s because it doesn’t need them: it pulls the uniforms (constant) and attributes (which should also be constant) directly, as the fixed-function state set them, and does the computation.
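
In other words, for old-style GLSL it is essentially shorthand (plus an invariance guarantee) for:

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;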

Now, you’re obviously seeing the object rotate, which I assume you’re not doing with fixed-function matrices. So clearly, you are somehow changing the supposed-to-be-constant attribute, and ftransform is somehow reading this not-constant attribute and using it. All of this is highly against the spec, so you likely found a driver bug. Congratulations.
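
For what it’s worth, here is a minimal sketch of the shader with those three problems addressed: the rotation is applied to a local copy rather than to gl_Vertex, the normal is rotated by the same angle, and the position is computed explicitly instead of through ftransform(). The uniform name YRot is kept from your original; normalizing the light vector is a small extra fix over the original:

uniform float YRot;
varying float Diffuse;

vec3 rotateY (vec3 v, float rads) {
	float cs = cos(rads);
	float sn = sin(rads);
	return vec3(v.x*cs - v.z*sn, v.y, v.x*sn + v.z*cs);
}

void main() {
	// Rotate a local copy; attributes like gl_Vertex are read-only.
	vec4 vert = gl_Vertex;
	vert.xyz = rotateY(vert.xyz, YRot);

	// Rotate the normal by the same angle so the shading follows.
	vec3 vnormal = normalize(gl_NormalMatrix * rotateY(gl_Normal, YRot));

	Diffuse = max(dot(vnormal, normalize(gl_LightSource[0].position.xyz)), 0.0);

	// Transform the rotated copy explicitly; ftransform() would
	// ignore the modification and use the original attribute.
	gl_Position = gl_ModelViewProjectionMatrix * vert;
}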

I decided doing rotation/transformation in the shader is not worth the effort.

Fortunately, it turns out you can use the same algorithm for both the vertices and the normals, which is pretty handy if they are interleaved in a buffer object:


/* Rotate both vertices and normals in place. `a` and `b` are the
   offsets of the two components to rotate within each 3-float
   element (e.g. 0 and 2 for x and z when rotating around Y).
   Needs <math.h> for cos()/sin(). */
void rotate (glThing *obj, float rads, int a, int b) {
	int i;
	float x, z, cs = cos(rads), sn = sin(rads);
	/* Step through every 3-float element; with positions and
	   normals interleaved, both get rotated by the same angle. */
	for (i = 0; i < obj->len; i += 3) {
		x = obj->data[i+a];
		z = obj->data[i+b];
		obj->data[i+a] = x*cs - z*sn;
		obj->data[i+b] = x*sn + z*cs;
	}
	/* Push the modified data back into the VBO. */
	glBindBuffer(GL_ARRAY_BUFFER, obj->VBO);
	glBufferSubData(GL_ARRAY_BUFFER, 0, obj->len*sizeof(GLfloat), obj->data);
	glBindBuffer(GL_ARRAY_BUFFER, 0);
}
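
A hypothetical call for the layout described above – positions and normals interleaved as 3-float chunks, rotating around Y so the components at offsets 0 (x) and 2 (z) change:

rotate(obj, 0.7854f, 0, 2);	/* obj: a glThing* set up elsewhere; yaw ~45 degrees */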

So rather than having a static array, keeping track of the object’s rotation, and using that every frame in the shader, I can perform a rotation once on the data when needed, outside the shader, and still not have to resort to any fixed-pipeline functions.

I decided doing rotation/transformation in the shader is not worth the effort.

Knowing how to properly use vertex and fragment shaders is far from “not worth the effort.” The fact that every GL game that uses shaders does its transforms in the shaders is proof enough of that.

Sure, but 95% of the shader stuff I’ve seen is not concerned with transformation [edit: oops! I’m using “transformation” as an umbrella term for “rotation and translation” – silly me] at all – it’s about materials and effects. As for most games, I guess I will have to take your word for it, since I’m not about to check. In any case, that’s not a reason in itself: most games do not use OpenGL at all, so by this logic OpenGL is a waste of time.

Also: those games are made by teams of paid professionals. I have no professional aspirations here – I don’t need to do that – and the GL stuff I’m doing is as a solo hobbyist. I have other stuff where I work as part of a team, and in those situations I respect the decisions made.

Cost-benefit analysis is an inevitable aspect of programming. If all we were interested in was pure performance in the result, no matter how much work or how long that takes, using high level languages and libraries would be pointless.

Getting more substantial: if I do the transforms in the shader, then I am tied to a model whereby the object data is static and must be adjusted with a plethora of trig calls every frame. E.g., if the object yaws 45 degrees and stays there, the shader must keep applying that rotation every frame.

But by modifying the buffer data when the rotation occurs, I can do the expensive math ops only when necessary and update the data used by the shader instead, which significantly simplifies things and which I doubt could hurt performance – although I will not claim it’s better, since all those teams of professionals cannot be wrong, as you imply.

Anyway, I think this is a simple miscommunication on my part – by “rotation and transformation” I meant “rotation and translation”. Someone else is now telling me I was silly to consider performing rotations in the shader, and that that would be very unusual.

But by modifying the buffer data when the rotation occurs, I can do the expensive math ops only when necessary and update the data used by the shader instead, which significantly simplifies things and which I doubt could hurt performance – although I will not claim it’s better, since all those teams of professionals cannot be wrong, as you imply.

So you’re saying that vertex shaders, which are designed to perform multiple parallel vector operations, are incapable of keeping up with doing a simple matrix multiply on every incoming vertex per frame. And that it would be faster for you to compute the data on the CPU, which is not designed to perform multiple parallel vector operations despite attempts like SSE, upload it across a slow memory bus to the GPU, and then have the GPU pull from that every frame.

This might function for small models, but it would scale very poorly for any real scene.

Anyway, I think this is a simple miscommunication on my part – by “rotation and transformation” I meant “rotation and translation”.

Either way, this is still the wrong way to go for such a simple transform.
