synchronize orthographic with perspective

Hi everyone. I have a question about synchronizing an orthographic projection with a perspective one.

I am trying to create a 2.5D fighting game (like Street Fighter 4).

Right now I render both the background scenery and the characters with a perspective projection with FOV = 30.0 degrees.

It comes out alright, but the characters look distorted when they are near the edge of the screen (the typical perspective effect).

So I am thinking about rendering the characters with an orthographic projection instead (I believe this is what most 2.5D games do), but I have a problem: I can't work out an orthographic projection matrix that produces roughly the same dimensions for the rendered character as the original perspective projection.

Does anyone know how I can achieve such an effect?

Some images to illustrate the effect I want (rendered in Blender):

[Image: the original render with perspective projection (distorted near the edge of the frustum)]

[Image: the render with an orthographic projection that produces the same dimensions as the image above (this is what I want to know how to calculate)]

Thanks in advance.

somboon

If by 2.5D you mean 3D-rendered but with a 2D range of movement, like Street Fighter 4, Trine, Blade Kitten, and such, then you are wrong.

Just keep perspective rendering, but keep a few things in mind:
1- You can go below a 30° horizontal FOV if you feel that too much perspective distortion happens.
2- These games typically keep the main characters near the middle of the screen, where there is less distortion.
3- The near-the-edge distortion does not look so strange when everything is rendered in 3D perspective.

I tried rendering everything with a perspective projection using FOV = 15.0. This helps with the near-edge distortion of the characters, but makes the scene look ugly (the scene becomes so wide it loses its sense of perspective).

What I really want is to have the scene rendered with perspective (30-45 degrees) and the characters with no perspective at all.

The scene becomes too wide by decreasing your fov? That doesn’t sound right. Did you pull your camera back to compensate for the narrower fov?

Yes, I pulled the camera back.

And yeah, "wide" is not the right word; let's just say the background loses too much perspective.

FOV = 30 (nice background / ugly character distortion)

FOV = 15 (fixes the character distortion, but the background loses too much perspective)

This is just an idea; I haven’t tried it so I don’t know if it will work or not, but I think you might be able to make it do what you want.

Background information: there are several different ways to calculate perspective, but the commonly used perspective projection is the "divide-by-Z" technique. In projection matrix terms, that is implemented by copying the pre-projection Z coordinate of each vertex into the W position of that vertex's vector. Later, in a fixed part of the OpenGL pipeline (it happens whether you use the programmable or the fixed-function pipeline), the Perspective Divide is performed, which divides the X, Y, Z coordinates of each vertex by that vertex's W coordinate.

For orthographic projection, rather than copy the Z coordinate to the W coordinate’s position, the W coordinate is left alone (which means it should have a value of one). Later, during the Perspective Divide, dividing X, Y, Z by W=1 leaves them unchanged.
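To make the difference concrete, here is a small sketch. I'm assuming GLM as the math library purely for illustration; any column-major OpenGL-style matrix library behaves the same, though note that GLM's perspective matrix puts the *negated* eye-space Z into W, because the camera looks down -Z.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    // A point 5 units in front of the camera in eye space
    // (the camera looks down -Z in OpenGL conventions).
    glm::vec4 eye(1.0f, 2.0f, -5.0f, 1.0f);

    // Perspective: the last row of the matrix copies the (negated) eye-space Z
    // into clip-space W, so the later perspective divide scales by depth.
    glm::mat4 persp = glm::perspective(glm::radians(30.0f), 16.0f / 9.0f, 0.1f, 100.0f);
    glm::vec4 clipP = persp * eye;
    std::printf("perspective:  clip.w = %f (== -eye.z = %f)\n", clipP.w, -eye.z);

    // Orthographic: the last row is [0 0 0 1], so clip-space W stays 1
    // and the perspective divide changes nothing.
    glm::mat4 ortho = glm::ortho(-8.0f, 8.0f, -4.5f, 4.5f, 0.1f, 100.0f);
    glm::vec4 clipO = ortho * eye;
    std::printf("orthographic: clip.w = %f (stays 1)\n", clipO.w);
}
```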

So, to synchronize the orthographic projection size with the Z-perspective projection size, you need to duplicate, in the orthographic projection, the change of scale that is implicit in the Z divide.

The issue with Z perspective divide is that each vertex’s coordinates are divided by their own Z coordinate, and since each vertex may have a different Z coordinate, you get different scaling of the vertex’s XYZ depending on the Z. To get orthographic projection, you have to use the same divisor for every vertex. That means, you need to store the same value in the W component of every vertex, so that when the Perspective Divide occurs, you get orthographic projection. Ideally, you’d like that constant W component to be the average Z coordinate of all the vertices in your model.
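Here is a tiny numeric sketch of that idea (plain C++, nothing engine-specific, made-up coordinates): a vertex that sits exactly at the average depth projects to the same place either way, while the other vertices keep the average scale instead of their own.

```cpp
#include <cstdio>

int main()
{
    // Eye-space X positions and depths of three vertices of a model
    // (depths written as positive distances in front of the camera).
    const float x[3] = { 1.0f, 1.0f, 1.0f };
    const float z[3] = { 4.0f, 5.0f, 6.0f };
    const float avgZ = (z[0] + z[1] + z[2]) / 3.0f;   // 5.0

    for (int i = 0; i < 3; ++i)
    {
        float perspX = x[i] / z[i];   // perspective: each vertex divided by its own depth
        float orthoX = x[i] / avgZ;   // "synchronized" ortho: every vertex divided by the average depth
        std::printf("z=%.1f  perspective x=%.3f  synced ortho x=%.3f\n", z[i], perspX, orthoX);
    }
    // The vertex at z == avgZ lands in exactly the same place in both projections;
    // the others keep the average scale, which removes the per-vertex perspective distortion.
}
```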

Here’s how you should be able to implement it: on your CPU, calculate the average Z of all the vertices in your model. Store that one value as the W component in each vertex in your model, and pass those XYZW coordinates to the GPU. Your Modelview transformation matrix needs to do exactly the same transformations for the W component as it does for the Z component. That means the fourth row of your Modelview matrix should be identical to the third row (define the third row the way you ordinarily would, and just copy it to the fourth row, too).

Now, when you transform your model’s vertices with the Modelview matrix, each W coordinate will be the averaged value of all your model’s Z coordinates, and will undergo the same transformations as your model’s vertices’ Z coordinates. Your projection matrix should be the same as it would be for Z divide perspective, except the fourth row should be [0 0 0 1] rather than [0 0 1 0] (so each vertex keeps its own W coordinate rather than being replaced with its Z coordinate as is usually done). Now, when your vertices are processed by the Perspective Divide hardware, each vertex in the model will be divided by the same averaged Z coordinate, giving you orthographic projection but with the same average scale factor that your model would have gotten with a Z divide perspective projection, synchronizing orthographic scale to perspective scale.

I don’t know if I made it sound complicated, but I think it should really be very easy to do, with just a few modifications to the way you currently do it.

I’ve just given this some more thought and I think I messed some things up in the advice I gave above.

Here’s what I think you need to do:

  1. On your CPU, calculate the average XYZ values of all the vertices in your model (this gives you the point in the middle of your model).

  2. You will still define the W coordinate as 1 (or just don't define it explicitly so that it is implicitly assigned a value of 1) for each vertex, just like you usually do.

  3. Define your Modelview matrix just as you usually do.

  4. On the CPU, for each frame, transform the point in the center of your model with the Modelview matrix. All you care about, though, is the resulting Z value, which I’ll call transformedAverageZvalue.

  5. For each frame, define your Projection Matrix just as you usually do, except the fourth row should be
    [0 0 0 transformedAverageZvalue]. This will cause each of your vertices, after multiplication by the projection matrix, to have the model’s transformed average Z value in the W coordinate location.

That’s it. I think that should work for you.
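In case it helps, here is a rough CPU-side sketch of those steps. GLM and the helper functions are just my own assumptions for illustration, not part of the recipe; note that GLM stores matrices column-major, so the "fourth row" is element [col][3] of each column.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <vector>

// Step 1: average the model-space positions once to find the model's center.
glm::vec3 modelCenter(const std::vector<glm::vec3>& positions)
{
    glm::vec3 sum(0.0f);
    for (const glm::vec3& p : positions)
        sum += p;
    return sum / static_cast<float>(positions.size());
}

// Steps 4 and 5: per frame, transform the center with the modelview matrix,
// keep only its Z, and patch that value into the fourth row of an otherwise
// ordinary perspective projection matrix.
glm::mat4 syncedOrthoProjection(const glm::mat4& modelview,
                                const glm::vec3& center,
                                float fovYDegrees, float aspect,
                                float zNear, float zFar)
{
    // Step 4: transformed average Z of the model.
    float transformedAverageZ = (modelview * glm::vec4(center, 1.0f)).z;

    // Step 5: start from the usual perspective matrix...
    glm::mat4 proj = glm::perspective(glm::radians(fovYDegrees), aspect, zNear, zFar);

    // ...and replace the fourth row with [0 0 0 transformedAverageZ].
    // GLM is column-major, so row 3 is element [col][3] of each column.
    proj[0][3] = 0.0f;
    proj[1][3] = 0.0f;
    proj[2][3] = 0.0f;
    proj[3][3] = transformedAverageZ;   // see the follow-up below: with the camera
                                        // looking down -Z this value comes out negative,
                                        // so in practice the negated value is needed.
    return proj;
}
```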

Thanks for the answer. I tried implementing the method you suggested, and it indeed gives a flattened model that is exactly the same size as when rendered with perspective.

But there are also some problems. When viewed from some angles, it looks like the model has been clipped against some invisible wall (I cleared the depth buffer and rendered only the flattened mesh when testing this).

I also had to use [0 0 0 -transformedAverageZvalue] instead of [0 0 0 transformedAverageZvalue] (the latter gives an incorrect result).

This is how I calculate gl_Position in the vertex shader:

gl_Position = projectionMatrix * modelviewMatrix * vec4(vertex.xyz, 1.0);

The vertices are declared/sent to the shader as a [vec3 vertex] (no W).
The projection matrix is the modified one mentioned above.

Do you know of any reason that would cause this?

Finally fixed it!

The invisible-wall clipping error I got earlier came from a depth buffer problem.

In order for the flattened mesh to keep its own original depth, its depth has to be adjusted first with

gl_Position.z = gl_Position.z * (avgZ / viewZ);

where viewZ is the vertex's original modelview-transformed depth and avgZ is the average Z of the transformed mesh/scene of interest.

The result looks perfect now for all viewing directions.