about transformation

Hi,
I read in some articles: “if we want to determine a mesh’s real position in 3D space, we need a transformation: position, orientation, scaling”.
Here is my question: why do we need those three pieces of information? If I draw a mesh in 3ds Max, I get the vertex coordinates of that mesh, and I can just tell OpenGL to draw it with them!
I think the three parameters are just three kinds of operations, not something used the first time we draw a mesh. Am I right? Sorry, I’m just a newbie!

OpenGL “concatenates” the model transform with the view transform into a single modelview matrix. I think they did it that way because it makes things easier (for example, moving the view by a vector v is the same as moving the world by -v).
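To make that concrete, here is a minimal sketch in legacy OpenGL (it assumes you already have a current GL context, e.g. from GLUT): a “camera at position v” is expressed by translating the whole world by -v at the head of the modelview matrix.

```c
#include <GL/gl.h>

/* Sketch: a camera sitting at (vx, vy, vz) is just the world
   translated by the opposite vector at the head of the modelview. */
void apply_view(float vx, float vy, float vz)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-vx, -vy, -vz);  /* view = move world by -v */
    /* ...model transforms and drawing calls follow here */
}
```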

So, if you draw a world in 3DS Max and, when you read the data, you get the “final” coordinates, meaning coordinates in world space, you don’t need to apply any model transforms to them (translation, rotation or scaling), only the view transform (the one made by calling gluLookAt, for example).
But if you have a model in 3DS Max and you need this model to move, then you will need to transform the model coordinates so that the transform achieves what you want (rotate the model around the y axis, then translate it to the position (mx,my,mz)…). This is almost always the case for moving characters, and generally never the case for static geometry like buildings, roads…
By “generally” I mean you can bend this rule for things like trees: you define a tree around the origin in your modeler and reuse it many times in your scene, translating, rotating and scaling each instance to give the impression of a complex scene with many different trees. In that case, to know the real world coordinates of each vertex of a tree, you need to apply the effective transform to each vertex. This can be useful to detect collisions, for example.
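For the collision case, a rough sketch of doing that transform on the CPU (the names here are made up for illustration): scale the object-space vertex, rotate it around Y, then translate it to where the instance sits. This mirrors what the model matrix does on the GPU.

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Hypothetical helper: object-space vertex -> world space, applying
   a uniform scale, then a rotation of 'angle' radians around Y, then
   a translation to (mx, my, mz). */
Vec3 object_to_world(Vec3 v, float scale, float angle,
                     float mx, float my, float mz)
{
    float c = cosf(angle), s = sinf(angle);
    float x = v.x * scale, y = v.y * scale, z = v.z * scale;
    Vec3 w;
    w.x =  c * x + s * z + mx;   /* rotate around Y, then translate */
    w.y =  y + my;
    w.z = -s * x + c * z + mz;
    return w;
}
```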

The (position, orientation, scaling) triplet is one way to specify a transformation. You don’t always need all of these for a model, and indeed in some cases you can use the identity model transform and not need any of them. There are also other kinds of transformations, such as shearing, that you may sometimes need.

When you have an animated 3D model, the position and orientation, for example, can change over time, and it makes sense to use animated values for these.

However, typically at least one of the model or view transforms is not static, so most 3D content gets transformed every frame anyway. This is typically implemented as a matrix * vertex multiplication on the GPU, for which a transformation matrix must be specified. The matrix can be a combination of any series of translations, rotations, scalings and so on. So the transformation functionality is there on the GPU, and it is likely being used for almost all content.
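If it helps to see what “matrix * vertex” means, here is the per-vertex arithmetic in plain C, using OpenGL’s column-major layout (the element at row r, column c is m[c * 4 + r]). This is only a sketch of what the hardware does for you.

```c
/* Multiply a 4x4 column-major matrix by a homogeneous position. */
void transform_point(const float m[16], const float in[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[0 * 4 + r] * in[0] + m[1 * 4 + r] * in[1]
               + m[2 * 4 + r] * in[2] + m[3 * 4 + r] * in[3];
}
```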

With the legacy OpenGL API you can use the translate, rotate and scale commands (glTranslate, glRotate, glScale). The driver will, however, turn these into a matrix for the GPU to use. You can also use the load matrix command (glLoadMatrix) to specify your very own transformation, which can be something other than a translation, rotation and/or scaling.
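For example (a sketch, assuming a current GL context): the usual commands, followed by a hand-built shear loaded with glLoadMatrixf. Note that glLoadMatrixf replaces the current matrix and expects column-major order.

```c
#include <GL/gl.h>

void model_transform_examples(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* The usual trio; the driver folds these into one matrix. */
    glTranslatef(1.0f, 0.0f, 0.0f);
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);   /* degrees, around Y */
    glScalef(2.0f, 2.0f, 2.0f);

    /* Or load your very own matrix, e.g. a shear: x' = x + 0.5 * y.
       OpenGL expects the 16 floats in column-major order. */
    GLfloat shear[16] = {
        1.0f, 0.0f, 0.0f, 0.0f,   /* column 0 */
        0.5f, 1.0f, 0.0f, 0.0f,   /* column 1 */
        0.0f, 0.0f, 1.0f, 0.0f,   /* column 2 */
        0.0f, 0.0f, 0.0f, 1.0f }; /* column 3 */
    glLoadMatrixf(shear);         /* replaces the matrix built above */
}
```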

With shaders, you can write a transformation without matrices, but this is only suitable for very limited transformations, so it is rarely done.

No, you don’t necessarily need to translate, rotate and scale the coordinates of a mesh. If you’re certain that a mesh will never change its position, orientation or scale, you can define/export it in world coordinates.

However, if you’re doing animation you might not want to scale anything, but you will almost certainly rotate and/or translate the object. A case where you do need to scale could be, for instance, something that grows dynamically.
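A sketch of that growing case with legacy OpenGL (draw_mesh is a placeholder for whatever draws your object-space geometry):

```c
#include <GL/gl.h>

/* 't' is some time value your application tracks. */
void draw_growing_object(float t)
{
    float s = 1.0f + 0.1f * t;    /* 10% larger per time unit */
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glScalef(s, s, s);
    /* draw_mesh();  -- placeholder: mesh in object coordinates */
    glPopMatrix();
}
```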

Modelling packages, like Blender, do this too. If you scale a model in Blender and export the mesh to, say, X3D, it writes the model coordinates and also the above properties (position, orientation, scale) to the file so your application can compute a model matrix.
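As a sketch of that computation (assuming the file gives you a position, a Y rotation in radians and a uniform scale), the model matrix is translation * rotation * scale, written here straight into OpenGL’s column-major layout:

```c
#include <math.h>

/* Build out = T * Ry * S, column-major, ready for glLoadMatrixf
   or glMultMatrixf. */
void model_matrix(float px, float py, float pz,
                  float angle, float scale, float out[16])
{
    float c = cosf(angle), s = sinf(angle);
    /* column 0 */ out[0]  =  c * scale; out[1]  = 0.0f;  out[2]  = -s * scale; out[3]  = 0.0f;
    /* column 1 */ out[4]  =  0.0f;      out[5]  = scale; out[6]  =  0.0f;      out[7]  = 0.0f;
    /* column 2 */ out[8]  =  s * scale; out[9]  = 0.0f;  out[10] =  c * scale; out[11] = 0.0f;
    /* column 3 */ out[12] =  px;        out[13] = py;    out[14] =  pz;        out[15] = 1.0f;
}
```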

To understand transformations you can work through http://www.arcsynthesis.org/gltut/Positioning/Positioning.html.

So what are the coordinates passed to glVertex(…)? Local? World? In OpenGL, where is the default origin of the world coordinate frame? The center of the viewport? Is that default origin the origin of the camera’s coordinate frame? Sorry, newbie here!

Vertices are given in local object coordinates. If you have a ball with radius 1.0, then every vertex’s x, y, z components have absolute values <= 1.0 (and x * x + y * y + z * z == 1).

If your model is at the centre of the world and properly oriented etc., its transformation is the identity and object coordinates match world coordinates.

If your camera has an identity transform, camera coordinates match world coordinates.

If both the model and the camera have identity transforms, then object, world and camera coordinates all match. This is a good setup to start experimenting with transforms: you can manually add a negative Z offset to the vertices to move them a bit further from the camera so you can see them better.
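Here is what that starting setup can look like (a sketch; it assumes a context and a projection matrix are already set up, with a depth range reaching z = -5). The default camera sits at the origin looking down the negative Z axis, so the triangle is simply authored at z = -5:

```c
#include <GL/gl.h>

void draw_identity_scene(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();               /* model and view both identity */
    glBegin(GL_TRIANGLES);
        glVertex3f(-1.0f, -1.0f, -5.0f);
        glVertex3f( 1.0f, -1.0f, -5.0f);
        glVertex3f( 0.0f,  1.0f, -5.0f);
    glEnd();
}
```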

Now, legacy OpenGL fixed function does not have separate model and view transforms. This may cause some confusion, as you can only tell OpenGL the combined modelview matrix.

Once you have something on screen with an identity modelview, you can move the model. When that works as you’d expect, move the camera (view). Then rotate the model. Then rotate the view. Then combine these.
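The “combine these” step might look like this sketch: the view part goes into the modelview first (gluLookAt here), then the model part, so the matrix ends up as view * model:

```c
#include <GL/gl.h>
#include <GL/glu.h>

void set_modelview(float obj_angle)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* view part: camera at (0, 0, 5), looking at the origin, Y up */
    gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);
    /* model part: place the object, then spin it around Y */
    glTranslatef(1.0f, 0.0f, 0.0f);
    glRotatef(obj_angle, 0.0f, 1.0f, 0.0f);
    /* ...now draw the mesh in object coordinates */
}
```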

See also: http://www.opengl.org/resources/faq/technical/transformations.htm#tran0011