How do I get my gluLookAt in shaders

Hello,

I am learning how to use shaders, and my next question is: how do I get the following into the shaders?


void Init(void)
{
    .....
    glViewport(0, 0, ClientWidth, ClientHeight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // cast to double: if ClientWidth/ClientHeight are integers, the plain
    // division truncates the aspect ratio
    gluPerspective(45.0, (double)ClientWidth / (double)ClientHeight,
                   0.1, 1700.0);
    glMatrixMode(GL_MODELVIEW);
    .....
}

void Render(void)
{
    ....
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    gluLookAt(FCamera.X, FCamera.Y, FCamera.Z,
              FEye.X, FEye.Y, FEye.Z,
              FUp.X, FUp.Y, FUp.Z);
    ....
}

In the above snippets, the glViewport and gluPerspective calls only change if the window size changes, so they are essentially fixed (rarely updated). The gluLookAt call is in the actual render routine, so it is updated most often.

The big question is what matrix can I pull to send to the shader? I know that I need a uniform to do this, but I am uncertain how to access the current matrix (composite of the above) so that it can be passed to the shader.

Thanks

[QUOTE=williajl;1291804]
The big question is what matrix can I pull to send to the shader? I know that I need a uniform to do this, but I am uncertain how to access the current matrix (composite of the above) so that it can be passed to the shader.[/QUOTE]
If you’re using fixed-function matrix operations, you can access the matrices in the shader using the compatibility uniforms: gl_ModelViewMatrix, gl_ProjectionMatrix, etc (there are also variables for products, inverse, transpose, etc).

But if you’re using shaders exclusively, it’s better to use a client-side library (e.g. GLM) to construct the matrix then upload it using glUniformMatrix*(). Don’t use glGet() to retrieve matrices other than for debugging, as it can cause a pipeline stall.
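To make that concrete, here is a rough, self-contained sketch of the client-side approach without GLM, hand-rolling the matrices documented on the gluPerspective and gluLookAt reference pages. The helper names (makePerspective, makeLookAt, mul) and the matrix layout are my own choices, not anything from your code:

```cpp
#include <cmath>

// Minimal 3-component vector and column-major 4x4 matrix
// (column-major is what glUniformMatrix4fv expects with transpose = GL_FALSE).
struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };  // element (row, col) lives at m[col*4 + row]

static Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Same matrix as gluPerspective (fovy in degrees), per its reference page.
Mat4 makePerspective(float fovyDeg, float aspect, float zNear, float zFar)
{
    float f = 1.0f / std::tan(fovyDeg * 3.1415926535f / 360.0f);  // cot(fovy/2)
    Mat4 p = {};
    p.m[0]  = f / aspect;
    p.m[5]  = f;
    p.m[10] = (zFar + zNear) / (zNear - zFar);
    p.m[11] = -1.0f;
    p.m[14] = 2.0f * zFar * zNear / (zNear - zFar);
    return p;
}

// Same matrix as gluLookAt, per its reference page.
Mat4 makeLookAt(Vec3 eye, Vec3 center, Vec3 up)
{
    Vec3 f = normalize(sub(center, eye));        // forward
    Vec3 s = normalize(cross(f, normalize(up))); // side
    Vec3 u = cross(s, f);                        // corrected up
    Mat4 v = {};
    v.m[0] = s.x;  v.m[4] = s.y;  v.m[8]  = s.z;  v.m[12] = -dot(s, eye);
    v.m[1] = u.x;  v.m[5] = u.y;  v.m[9]  = u.z;  v.m[13] = -dot(u, eye);
    v.m[2] = -f.x; v.m[6] = -f.y; v.m[10] = -f.z; v.m[14] =  dot(f, eye);
    v.m[15] = 1.0f;
    return v;
}

// r = a * b, column-major.
Mat4 mul(const Mat4& a, const Mat4& b)
{
    Mat4 r = {};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c*4 + row] += a.m[k*4 + row] * b.m[c*4 + k];
    return r;
}
```

In Render() you would then compute the composite once per frame and upload it (uMVP is a hypothetical uniform name): `Mat4 mvp = mul(proj, view); glUniformMatrix4fv(glGetUniformLocation(prog, "uMVP"), 1, GL_FALSE, mvp.m);`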

[QUOTE=GClements;1291807]If you’re using fixed-function matrix operations, you can access the matrices in the shader using the compatibility uniforms: gl_ModelViewMatrix, gl_ProjectionMatrix, etc (there are also variables for products, inverse, transpose, etc).

But if you’re using shaders exclusively, it’s better to use a client-side library (e.g. GLM) to construct the matrix then upload it using glUniformMatrix*(). Don’t use glGet() to retrieve matrices other than for debugging, as it can cause a pipeline stall.[/QUOTE]

Unfortunately, when I use gl_ModelViewMatrix, gl_ProjectionMatrix or gl_ModelViewProjectionMatrix, the program refuses to compile because it says the global variables were deprecated after version 140. Is there something I am missing here?

You probably need to add a #version directive specifying the compatibility profile, e.g.


#version 330 compatibility

If a #version directive lacks a profile, the core profile is assumed.
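For example, with the compatibility profile declared, a minimal vertex shader can keep using the built-in matrices (a sketch):

```glsl
#version 330 compatibility

void main()
{
    // The built-in matrix uniforms and gl_Vertex are only available
    // in the compatibility profile.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```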

[QUOTE=GClements;1291811]You probably need to add a #version directive specifying the compatibility profile, e.g.


#version 330 compatibility

If a #version directive lacks a profile, the core profile is assumed.[/QUOTE]

This did the trick for now.

The next question is: are the algorithms used to calculate the modelview and projection matrices available, so that I can eventually remove the compatibility profile by creating my own functions and passing in a composite matrix via a uniform?

Specifically, the algorithms used by gluPerspective and gluLookAt?

Or, is there a real performance penalty for using compatibility mode in the shaders?

The gluPerspective and gluLookAt reference pages document the matrices which are used.

Probably not. However, the compatibility profile isn’t supported on MacOSX (so if you want to use any version after 2.1 there, you’re limited to the core profile), and OpenGL ES is based around the core profile.
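Once you build the composite matrix client-side, the core-profile shader reduces to a single mat4 uniform; a sketch (the names uMVP and aPosition are placeholders, not standard names):

```glsl
#version 330 core

uniform mat4 uMVP;   // projection * modelview, uploaded each frame
                     // with glUniformMatrix4fv
in vec4 aPosition;   // generic attribute replacing gl_Vertex

void main()
{
    gl_Position = uMVP * aPosition;
}
```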