Camera stuff

OK, I know this is a newbie question…

When I'm placing an object in the scene, I translate first, then rotate.

When using a camera, do I apply negative rotations, then a negative translation of my camera position?

Is this correct?

Kent

Basic camera movement and object movement:

// Camera
glRotate    // rotate here to move the camera in an arc around a point
glTranslate // move the camera out to that point
glRotate    // rotate the camera on its own axis

// Objects
glPushMatrix();
glRotate    // rotate the object around the world axis
glTranslate // move the object out to a point in space
glRotate    // rotate the object on its own axis
draw_object
glPopMatrix();

Repeat the push/pop block above for each object.
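
Putting the two together, a minimal sketch of one frame (the cam_* and obj_* variables and draw_object are placeholders, not from any particular program):

void display(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Camera: arc around a point, move out, spin on its own axis
    glRotatef(cam_arc, 0.0f, 1.0f, 0.0f);
    glTranslatef(0.0f, 0.0f, -cam_dist);
    glRotatef(cam_spin, 0.0f, 1.0f, 0.0f);

    // One object; repeat this block per object
    glPushMatrix();
    glRotatef(obj_world_ang, 0.0f, 1.0f, 0.0f); // around the world axis
    glTranslatef(obj_x, obj_y, obj_z);          // out to its position
    glRotatef(obj_spin, 0.0f, 1.0f, 0.0f);      // on its own axis
    draw_object();
    glPopMatrix();
}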

OK, I think I follow…
I'm kind of confused by the second rotate (rotate on "its" axis)?

What more would you need than to translate the object, then do whatever rotations…

then do the same with the camera in the opposite order?

As you may know, OpenGL does not move the camera; you move the world around it.

The camera is always at 0,0,0, so we translate and rotate the world to get the part we want in front of it.

All of our objects are also drawn in reference to 0,0,0 in their initial state.

Now, translate and rotate act on the camera and objects in the reverse of the order the calls appear.

Camera example:

glRotatef( 15, 1, 0, 0 );
glTranslatef( 0, 0, 5 );

Our view is first translated out 5 units on the z axis, then rotated 15 degrees about the x axis. But remember, all operations are done relative to the origin at 0,0,0.
The result is that the camera is moved in a 15-degree arc about the x axis, with a radius of 5 units. (Note the translation is along z: translating along the rotation axis itself would produce no arc.)

Now let's swap the two:

glTranslatef( 0, 0, 5 );
glRotatef( 15, 1, 0, 0 );

Now the camera is rotated 15 degrees about the x axis at 0,0,0 and then translated 5 units out. Thus we now have the camera tilted at a 15-degree angle about its x axis, sitting 5 units from the origin on the z axis.

You see, the results are totally different; the same goes for objects and the order of their transforms. (The underlying rule: each glRotatef/glTranslatef multiplies onto the right of the current matrix, so the last call you issue is the first transform applied to your vertices.)

When we use the above for the camera, those operations must affect every object in our scene, but one object's translate/rotate must not affect another's. So we use glPushMatrix/glPopMatrix to save the matrix before we draw each object.

Object:

glPushMatrix()  // save the current matrix to the stack
glTranslate     // move the object to its location in our world
glRotate        // rotate the object on its own axis
draw_object
glPopMatrix()   // restore the matrix state from before the object's operations

If we did not use push/pop, the translate/rotate would affect the location of the next object.
Note that sometimes we can use the effect of the first object's transform on the second object to our advantage; see the sketch after the link below.

See my clock demo on my website; it's a very good demo of the usage of push/pop matrix…
http://www.angelfire.com/linux/nexusone/index.html
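
A minimal sketch of that trick (made-up names and values, not taken from the demo itself): the moon is drawn inside the planet's transform, so it inherits the planet's motion.

glPushMatrix();
glRotatef(planet_orbit, 0.0f, 1.0f, 0.0f);  // planet orbits the origin
glTranslatef(planet_dist, 0.0f, 0.0f);
draw_planet();
// no pop yet: the moon inherits the planet's transform
glRotatef(moon_orbit, 0.0f, 1.0f, 0.0f);    // orbit relative to the planet
glTranslatef(moon_dist, 0.0f, 0.0f);
draw_moon();
glPopMatrix();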


OK… I follow… (I think, hehe.)
I just want to make sure my code is working as follows (it's hard to tell whether it's working without displaying a lot of other stuff to reference).

My graphics class works on this principle:

void GFX::Draw() { // the main graphics routine
    SetCamera();
    DrawObjects();
    DrawParticles();
    Render();
}

void GFX::SetCamera()
{
    // rotate first, then translate: the translation is applied to the world
    // first, so the camera ends up arcing around the origin
    glRotatef(local->CurrentCam->zang, 0, 0, 1);
    glRotatef(local->CurrentCam->yang, 0, 1, 0);
    glRotatef(local->CurrentCam->xang, 1, 0, 0);
    glTranslatef(local->CurrentCam->xpos, local->CurrentCam->ypos, local->CurrentCam->zpos);
}

void GFX::DrawObjects()
{
    for (int x = 0; x < local->numObjects; x++)
    {
        glPushMatrix();
        if (local->Objects[x].IsTransluscent)
            TransparentMode();
        else
            UnTransparentMode();

        glTranslatef(local->Objects[x].xpos, local->Objects[x].ypos, local->Objects[x].zpos);
        glRotatef(local->Objects[x].xang, 1, 0, 0);
        glRotatef(local->Objects[x].yang, 0, 1, 0);
        glRotatef(local->Objects[x].zang, 0, 0, 1);
        glCallList(local->Objects[x].ModelID);
        glPopMatrix();
    }
}

void GFX::DrawParticles()
{
    glDepthMask(GL_FALSE);
    glDisable(GL_FOG);

    for (int x = 0; x < local->numParticles; x++) {
        glPushMatrix();
        glTranslatef(local->Particles[x].xpos, local->Particles[x].ypos, local->Particles[x].zpos);
        // undo the camera rotations so the quad faces the camera
        glRotatef(-local->CurrentCam->xang, 1, 0, 0);
        glRotatef(-local->CurrentCam->yang, 0, 1, 0);
        glRotatef(-local->CurrentCam->zang, 0, 0, 1);
        glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
        glBindTexture(GL_TEXTURE_2D, PartTextures[local->Particles[x].ID]);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);
        glEnable(GL_BLEND);
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-0.5f,  0.5f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 0.5f,  0.5f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
        glEnd();
        glPopMatrix();
    }
    glDepthMask(GL_TRUE);
    glEnable(GL_FOG);
}

void GFX::Render()
{
    numparts = 0;
    Fog(true);
    glEnable(GL_LIGHT0); // turn on a light with defaults set
    glLoadIdentity();
    SwapBuffers(g_hDC);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the screen and the depth buffer
    glClearColor(0.3f, 0.3f, 0.3f, 0.5f);
}

I know it's lengthy in syntax… (I'm using another class to handle data sharing between the graphics and physics classes, as well as any other class I choose to incorporate.) There's probably a better way, but I'm stubborn…

So, if my camera position is set to the center of an arbitrary object's position, and the camera angles are set to that same object's angles, then the camera "should" be at the center of that object, facing out the front…

It seems to work, but… I'm not sure whether something "could" be going wrong?

Kent

Your code looks OK.

Your camera will be moved in an arc around the origin at 0,0,0, with a radius based on the values passed to the translate function.

The camera will always be looking at 0,0,0.

Is this the effect you're looking for?

The DrawObjects routine looks OK; the objects should be drawn after the camera routine is called.

The particles routine… I'm not sure that is correct. Does it draw as you expect?

Also, on Render(): I always swap buffers after everything is drawn; it looks like you do it before??


The camera should be looking from the object's position, along the object's respective angles… (think of it like a camera in the center of the object…). I'm going to tailor it more later; for now I just want a camera that basically follows an object and faces the same direction…

As far as rendering goes… I'm not sure I follow your question… GFX::Draw() draws the objects, draws the particles, then renders… (which swaps buffers)…

The particles seem to display OK, though there are still transparency issues (if I'm viewing a particle through a transparent object)… Again, this is still small stuff I just haven't worked the kinks out of…
In the future I'm going to have two separate linked lists for my objects, so that I can draw the opaque objects and then the transparent objects… but yeah, the particles seem to be facing the camera at this point…

Kent
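
For reference, one way to get that follow-camera effect (a sketch under an assumption: the object is placed with glTranslatef then glRotatef in x, y, z order, as in DrawObjects above) is to apply the exact inverse of the object's transform: the rotations negated and in reverse order, then the translation negated.

// Hypothetical follow-camera sketch; obj is a placeholder for the
// followed object's data. Inverse of T(pos) * Rx * Ry * Rz:
glRotatef(-obj->zang, 0.0f, 0.0f, 1.0f);
glRotatef(-obj->yang, 0.0f, 1.0f, 0.0f);
glRotatef(-obj->xang, 1.0f, 0.0f, 0.0f);
glTranslatef(-obj->xpos, -obj->ypos, -obj->zpos);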

On the particles, I was thinking that you would have them always facing the camera…
Draw them as you would an object.

On the rendering, I am talking about the order of processing the scene:

  1. Clear the color and depth buffers
  2. Set the projection matrix
  3. Set the modelview matrix
  4. Set the camera position
  5. Draw the objects
  6. Swap the buffers
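
A minimal sketch of that order (glutSwapBuffers, set_camera, draw_objects, and aspect stand in for your own equivalents; the projection values are made up):

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // 1. clear buffers
    glMatrixMode(GL_PROJECTION);                         // 2. projection matrix
    glLoadIdentity();
    gluPerspective(45.0, aspect, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);                          // 3. modelview matrix
    glLoadIdentity();
    set_camera();                                        // 4. camera position
    draw_objects();                                      // 5. draw the scene
    glutSwapBuffers();                                   // 6. swap buffers
}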


The goal was to have the particles face the camera… rather than having to render a fireball, for example, I could use a one-polygon quad that's texture mapped… which always faces the camera… so from any angle it will look like a fireball… or point lights, etc…

I didn't really see any noticeable difference when changing the order of the scene-clearing/color stuff, etc…

Just want to clarify rotations…
Right now I'm rotating by the x angle, y angle, and z angle… so the y rotation would be relative to how the x angle rotated… and the z rotation would be relative to the x and then y rotations??
If this is so, that's going to make a lot of things difficult to code ("up", for example, would be relative… as would any other direction…). In games the controls are always relative to the way your vehicle (or whatever) is rotated… how is that accomplished?? I think this project is getting nastier and nastier the more I'm relying on vector math…

Thank you very much for the help, you’ve already clarified a lot for me…

Kent


Yes, the 3D world is made of vectors, and to move through it you will need to use vector math.

You will also need vector math for gravity and the other motion physics driving object movement.
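
For example, a minimal sketch (the Vec3 type and the names here are made up, not from any particular engine):

typedef struct { float x, y, z; } Vec3;

// One integration step: velocity += acceleration * dt, then position += velocity * dt
void physics_step(Vec3 *pos, Vec3 *vel, Vec3 acc, float dt)
{
    vel->x += acc.x * dt;  vel->y += acc.y * dt;  vel->z += acc.z * dt;
    pos->x += vel->x * dt; pos->y += vel->y * dt; pos->z += vel->z * dt;
}

// e.g. gravity: Vec3 g = { 0.0f, -9.8f, 0.0f }; physics_step(&pos, &vel, g, dt);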

On the particles, you are talking about what is called billboarding: you have a quad that faces the camera at all times.
When the billboard is directly in front of the camera, it has 0 degrees of rotation in reference to the camera.
As the camera moves away from the billboard, the quad is rotated so that it faces the camera at all times.
I think it is an inverse rotation, but I would have to look back at the math for rotating the billboard in relation to the camera; there is a sketch of the cheap version below.
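
A minimal sketch of that inverse-rotation (cheap billboarding) approach; the cam_* angles, px/py/pz, and draw_quad are placeholders, and it assumes the camera applied its rotations in z, y, x order as in SetCamera above:

glPushMatrix();
glTranslatef(px, py, pz);                // particle position in the world
glRotatef(-cam_xang, 1.0f, 0.0f, 0.0f);  // undo the camera rotations:
glRotatef(-cam_yang, 0.0f, 1.0f, 0.0f);  // negated angles, reverse order
glRotatef(-cam_zang, 0.0f, 0.0f, 1.0f);
draw_quad();                             // the quad is now screen-aligned
glPopMatrix();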


:) That's what my particles appear to be doing; I didn't think there was a problem there…

Still just trying to get the rotation stuff worked out… my trig calcs to convert the rotation from angles to a unit vector have so far not gone correctly…

local->Objects[1].xforce = cos(local->Objects[1].zang) * velocity;
local->Objects[1].yforce = sin(local->Objects[1].xang) * cos(local->Objects[1].yang) * velocity;
local->Objects[1].zforce = local->Objects[1].xforce * sin(local->Objects[1].yforce) * velocity;

That is what I have currently… I think the problem is that I'm basing my calculations on my y and z angles and not looking at my first rotation (the x rotation)… so my calculations should rather be based on my x and y angles…
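
For reference, a common formulation for turning two Euler angles into a unit forward vector (a sketch under assumptions: xang is pitch about x, yang is yaw about y, angles in degrees as glRotatef takes them, and the object faces down -z at zero rotation; your conventions may differ):

#include <math.h>

#define DEG2RAD(a) ((a) * 3.14159265f / 180.0f)

// Forward vector from pitch (xang) and yaw (yang); all names hypothetical.
void forward_from_angles(float xang, float yang, float out[3])
{
    float pitch = DEG2RAD(xang);
    float yaw   = DEG2RAD(yang);
    out[0] = -sinf(yaw) * cosf(pitch);
    out[1] =  sinf(pitch);
    out[2] = -cosf(yaw) * cosf(pitch);
}

The xforce/yforce/zforce would then be that vector scaled by velocity. One thing to check either way: sin/cos in C take radians, so angles kept in degrees for glRotatef need converting first.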