Removing Deprecated Features

Hello! I’m in the process of removing the deprecated features from my program and I’m having some trouble. First off, how do the “in” variables work? Here is the base of my vert program:

#version 150 

precision highp float;

uniform mat4 ModelMatrix;
in vec3 Vertex;

void main()
{	
	gl_Position = ModelMatrix * vec4(Vertex, 1.0f);
}

and it works, but I don’t understand how. (I understand the ModelMatrix part.) How does OpenGL know that the Vertex variable should receive the vertex coordinates? My second problem is gl_NormalMatrix and gl_Normal: I can’t find any information on how to calculate these or how to get them into the vert program. Any help on this would be greatly appreciated! Thanks!

It doesn’t - it works by chance :wink:

You are supposed to use glGetAttribLocation to discover, or glBindAttribLocation to set, the attribute index your attribute variable gets its values from, or use layout(location = <attrib index>) if you have GL_ARB_explicit_attrib_location.

The driver has to assign the attribute to some location on its own; you just happen to be lucky and are using the location it assigned you.
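For example, a minimal sketch of the two approaches (assuming a GL loader like GLEW; the function and variable names are illustrative):

#include <GL/glew.h>

// Option A: let the linker pick a location, then ask for it (after glLinkProgram):
void setupVertexAttrib(GLuint program, GLuint vbo)
{
	GLint loc = glGetAttribLocation(program, "Vertex");
	if (loc < 0) return; // "Vertex" is not an active attribute in this program

	glBindBuffer(GL_ARRAY_BUFFER, vbo);
	glEnableVertexAttribArray((GLuint)loc);
	// 3 floats per vertex, matching "in vec3 Vertex;" in the shader
	glVertexAttribPointer((GLuint)loc, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
}

// Option B: pick the location yourself, *before* glLinkProgram:
//   glBindAttribLocation(program, 0, "Vertex");
//   glLinkProgram(program);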

The normal matrix is the inverse transpose of the modelview matrix (you can compute it in the shader or pass it in as a uniform).
gl_Normal: you need to pass the normal in as another attribute yourself.
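For example, computed once per frame on the CPU (a sketch assuming the GLM math library, with NormalMatrix declared as a mat3 uniform in the shader; the names are illustrative):

#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadNormalMatrix(GLuint program, const glm::mat4& modelView)
{
	// normal matrix = inverse transpose of the upper-left 3x3 of the modelview
	glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(modelView));

	GLint loc = glGetUniformLocation(program, "NormalMatrix");
	glUniformMatrix3fv(loc, 1, GL_FALSE, glm::value_ptr(normalMatrix));
}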

Thanks for the reply! I now have everything from my first post fixed, but now I have another problem: I can’t seem to position a light without using any of the fixed-function features. This is the vert shader I’m using (from the Lighthouse3d tutorial):

uniform mat4 ModelMatrix, FinalMatrix, NormalMatrix;
uniform vec3 LightPos;
uniform vec4 Diffuse, Ambient;

in vec4 Vertex;
in vec4 Normal;

uniform vec3 CamPos;	// camera position, read below when building the half vector
out vec4 diffuse,ambientGlobal, ambient;
out vec3 normal,lightDir,halfVector;
out float dist;
		 

void main(void)
{
	vec4 ecPos;
	vec3 aux;
		
	normal = vec3(normalize(NormalMatrix * Normal));
		
	/* these are the new lines of code to compute the light's direction */
	ecPos = ModelMatrix * Vertex;
	aux = vec3(vec4(LightPos, 1.0f) - ecPos);
	lightDir = normalize(aux);
	dist = length(aux);
	
	halfVector = normalize((LightPos + CamPos) / (abs(LightPos + CamPos)));//gl_LightSource[0].halfVector.xyz);
		
	/* Compute the diffuse, ambient and globalAmbient terms */
	diffuse = gl_FrontMaterial.diffuse * Diffuse;
		
	/* The ambient terms have been separated since one of them */
	/* suffers attenuation */
	ambient = gl_FrontMaterial.ambient * Ambient;
	ambientGlobal = vec4(0.0f, 0.0f, 0.0f, 1.0f);//gl_LightModel.ambient * gl_FrontMaterial.ambient;
			
	gl_Position = FinalMatrix * Vertex;
}

The lighting itself looks alright (good enough for now), but the light position is not correct and it moves with the camera. I don’t know how to fix this, since I’m not using any OpenGL lighting functions.

With this code I believe you need to send the light position to the shader in eye space, since that is the space the lighting is computed in (depending upon what is in your “FinalMatrix”).

In what space are you sending the light position? With the fixed-function pipeline the transform was done for you; not so now.

Thanks for the reply!
FinalMatrix = Projection Matrix * Model View Matrix
I’m new to this whole “space” thing, but I’m almost positive that my light is in object space, so I’ll have to convert that to eye space, correct? I’m also a bit lost on how the light is actually positioned. With the fixed-function pipeline I would just use glLightfv, but how does it work when you position it in the shader?

I’ve tried to make my light stay in a fixed position for the last few days with no success. Hopefully someone else out there can figure out what I’m doing wrong, so here’s my current vert shader code:

uniform mat4 ModelMatrix, ProjectionMatrix, NormalMatrix;
uniform vec4 LightPos;
uniform vec3 CamPos;	// camera position, used for the half vector below
uniform vec4 Diffuse, Ambient, Specular;

in vec4 Vertex;
in vec4 Normal;
out vec4 diffuse,ambientGlobal, ambient;
out vec3 normal,lightDir,halfVector;
out float dist;
		 
void main(void)
{
	normal = vec3(NormalMatrix * Normal);
	vec4 evPos = ModelMatrix * Vertex;
	vec4 evLightPos = normalize(ModelMatrix * LightPos);
	vec4 evCamPos = normalize(ModelMatrix * vec4(CamPos, 1.0f));
	vec3 aux = vec3(evLightPos - evPos);
	lightDir = normalize(aux);
	dist = length(aux);
	
	halfVector = vec3((evCamPos + evLightPos) / abs(evLightPos + evCamPos));
		
	diffuse = Diffuse;
		
	ambient = Ambient;
	ambientGlobal = vec4(0.0f, 0.0f, 0.0f, 1.0f);
	gl_Position = ProjectionMatrix * evPos;
}

I send the light’s position in object space, so that’s why I multiply it by the model matrix to get it into eye space. Any solution to this problem is greatly appreciated!

You’re using the same modeling matrix to position your light as your vertex. So the light should appear nailed to the object you’re rendering, which is probably not what you want…

Thanks for the reply, and no, that’s not what I want. So which modeling matrix am I supposed to use for the light and which one for the vertex? (This whole matrix thing is new to me.)

Well, first of all, matrices just take points and vectors from one space to another. No rocket science here.

Second, given your code:


vec4 evPos = ModelMatrix * Vertex;
...
gl_Position = ProjectionMatrix * evPos;

“ModelMatrix” in your code should really be named “ModelViewMatrix”, because it is both the modeling and viewing transforms for the object you are rendering concatenated together. This is the usual thing you want to do. That is, have a combined ModelView matrix which transforms object-space positions directly into eye-space:

[Diagram: the transformation pipeline, object space -> world space -> eye space -> clip space, with the Modeling and Viewing transforms combined into a single ModelView matrix.]
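In code, the concatenation might look like this (a sketch assuming GLM; the camera and object placements are made-up values):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Viewing transform: world space -> eye space (camera at (0,0,5), looking at the origin)
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),
                             glm::vec3(0.0f, 0.0f, 0.0f),
                             glm::vec3(0.0f, 1.0f, 0.0f));

// Modeling transform: object space -> world space
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, 0.0f));

// Combined ModelView: takes object-space positions straight to eye space
glm::mat4 modelView = view * model;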

Finally, to position your light separately, you just need to assign it some other position. Mainly, stop using the ModelView transform of the object you’re rendering to position it.

There are a number of ways to do this, but the usual one is to just assign some world-space position to your light, transform it each frame by the Viewing transform (the world-space->eye-space transform) on the CPU, and then pass the light’s position into your shader already in eye coordinates. That way you don’t waste vertex shader cycles redundantly transforming your light position from some other space into eye space for every vertex. Though you could do that if you really wanted to.
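For instance (a sketch, again assuming GLM; the world-space position and uniform name are illustrative):

#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadLightEyePos(GLuint program, const glm::mat4& view)
{
	// a fixed world-space light position
	const glm::vec4 lightWorldPos(10.0f, 20.0f, 5.0f, 1.0f);

	// once per frame: world space -> eye space, on the CPU
	glm::vec3 lightEyePos = glm::vec3(view * lightWorldPos);

	// the shader declares e.g. "uniform vec3 LightPos;" and treats it as eye-space
	GLint loc = glGetUniformLocation(program, "LightPos");
	glUniform3fv(loc, 1, glm::value_ptr(lightEyePos));
}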

Another option is to position the light in some object space, push it up to world space with that object space’s Modeling transform, and then (as before) push it up to eye space using the Viewing transform.

Whatever way you want to do it.

Thanks for the detailed reply! I almost have it working. The light does not move with the camera any more, but when I rotate the camera the light orbits around it. I think it might have something to do with the way I’m transforming the camera. This is what I do to rotate and move the camera (it is called before I render an object):

TranslateMatrix(ModelViewMatrix, -CameraPosition.m_x, -CameraPosition.m_y, -CameraPosition.m_z, true);
RotateMatrix(ModelViewMatrix, -CameraRotation.m_x, X_AXIS);
RotateMatrix(ModelViewMatrix, CameraRotation.m_y, Y_AXIS);

The results are stored in ModelViewMatrix, which is then sent to the shader. I’m not sure that’s how you’re supposed to transform the camera, which is why I’m uncertain about it.
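For comparison, a conventional way to build the viewing matrix is as the inverse of the camera’s placement, so the negated rotations are issued before the negated translation (a sketch assuming GLM and the camera variables above; the exact signs depend on your rotation conventions):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 buildViewMatrix()
{
	// With GLM's post-multiplying helpers, issuing the rotations first means
	// points are translated first and rotated afterwards, which is what an
	// inverted camera transform does.
	glm::mat4 view(1.0f);
	view = glm::rotate(view, glm::radians(-CameraRotation.m_x), glm::vec3(1.0f, 0.0f, 0.0f));
	view = glm::rotate(view, glm::radians(-CameraRotation.m_y), glm::vec3(0.0f, 1.0f, 0.0f));
	view = glm::translate(view, glm::vec3(-CameraPosition.m_x,
	                                      -CameraPosition.m_y,
	                                      -CameraPosition.m_z));
	return view;
}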

EDIT: Found this article; I can’t use its solution directly because it uses the fixed-function pipeline, but it shed some light on what the problem is. Hopefully I can figure out how to fix it.

FINAL EDIT: AHHHHHHHHH! I finally got it! In the end, what I did was build a light matrix out of the light’s position and the camera’s position/rotation, then multiplied that by the difference between the light’s position and the camera’s position, and it worked! Thanks a lot for everyone’s help!
