View Full Version : converting software engine to opengl

Hi.

I'm trying to convert a (simple) software-rendering 3D engine to OpenGL. So far I have learnt how to set up OpenGL and draw simple coloured triangles using glVertex3f with three floating-point values...

I would like to be able to keep my previous code for vertex rotations, projections etc, if possible, and only provide opengl with screen coordinates of my scene.

My problem is: glVertex3f accepts x, y, z values between 0 and 1.

Do I just convert each triangle's projected coordinates (which are currently x and y values between 0 and screen width/height) to values between 0 and 1 (and use the "world" z value for z)?

Or do I need to feed OpenGL "world" coordinates directly, without doing the projection myself? If so... can you give me some tips as to which functions I should be looking at?

Thanks :)

I don't know if there's some scaling under the hood, but I've passed values well outside [0, 1] to glVertex3f for a long time now, so I don't think you should be doing any projection yourself. What you should pay attention to is how you create the projection matrix (the view frustum), because that is what scales your objects: a line from -1 to 1 on the x axis is huge in a frustum of (-1, 1, -1, 1, 1, 10), but would be tiny in a frustum of (-200, 200, -200, 200, 1, 10).

Also, since your previous engine used window coordinates, you could consider creating a frustum of

-windowWidth / 2, windowWidth / 2,

-windowHeight / 2, windowHeight / 2,

zNear, zFar

so that 1 unit in world coordinates corresponds to 1 pixel.

Should the values I put into glVertex3f be "world" coordinates? Do you mean that if I use a projection matrix, OpenGL does the projection automatically?

(I have not used any form of opengl matrix before)

Thanks

(Also...do I need to use glu for this?)

zeckensack

01-16-2005, 08:09 AM

Originally posted by _Dan:

Should the values I put into glVertex3f be "world" coordinates? Do you mean that if I use a projection matrix, opengl does the projections automatically?

(I have not used any form of opengl matrix before)

Thanks

OpenGL's projection and modelview matrices are just 4x4 matrices you can populate with arbitrary values to get any linear transformation you want. You can supply user-constructed matrices to glLoadMatrix or glMultMatrix.

The only thing you must keep in mind is that OpenGL maps a unit cube to the viewport. I.e. (0;0;0) is the center of the viewport, right in the middle of the depth buffer range. (-1;-1;-1) is the bottom left near corner, (1;1;1) is the top right far corner.

That's after transformation and after perspective divide.

It's certainly possible to construct a matrix where there's no correlation between z and w, hence no "automatic" projection. You should look up glOrtho and/or gluOrtho2D if that's what you want.

However, if you pass in 4D vertices with explicit w, OpenGL will still divide by this explicit w, even with ortho matrices, after transformation. This can't be turned off entirely. Of course, if you make sure that post-transform w is always 1, it won't change anything. OTOH you might want to use this to move the divisions off the CPU.

(Also...do I need to use glu for this?)

You don't need to use glu; it's just a convenience layer. The glu manual will tell you how the glu matrix functions construct their matrices, so you can easily replicate that in your own code.
