Hey guys, new to the forums, also pretty new to OpenGL. How's everyone doing?
So, I've been learning some OpenGL on my own, and I feel like I've been making some progress. I'm using it for 2D graphics right now, so I set my perspective matrix with glm::ortho from the GLM math library. For its arguments, ortho takes the extents of the view (the edges of the screen). While I was messing around with quads, I just had ortho set to -10 to 10 on both the x and y axes. Now I've changed it to start at 0, 0 and extend to the screen's width and height. Is that the proper way to set up and use the perspective matrix?
So, here's what it looks like so far. I'm just rendering blank quads right now. I set my ortho perspective to start at 0, 0 with a size of 1280x720. Then I create a quad that is 32x32. By that, I mean that I create 4 vertices, one at 0, 0, and put the other vertices 32 units away. I use the term units instead of pixels because I believe I'm still working in OpenGL's NDC space (-1 to +1), just remapped. My question is: is this the proper way to do it, or should I be calculating the screen coordinates (say, pixel location 300, 200) with an equation to put them in the -1 to 1 range that OpenGL uses?
Thanks for the help, guys, I appreciate it! Just want to make sure that what I'm doing is right so far.
The term is "projection matrix" ("perspective" is the opposite of "ortho"). And yes, that's a perfectly reasonable way of using it, particularly if you need to operate in "pixel coordinates". It's more common, though, to use a width and height with the correct aspect ratio but "normalised" so that one dimension (or maybe their average) is fixed. E.g. you might fix the width at "640 units", then set the height to 640 * screen_height / screen_width, so the screen is always 640 "units" wide regardless of the actual monitor resolution.
For games, you normally don't want the extent of the view to change with screen resolution. Try playing some games which were designed for 640x480 and which operate in units of pixels on a 1920x1200 monitor; everything is tiny. And forcing the video resolution doesn't work so well on flat-panel monitors.
After the coordinates have been transformed by the model-view and projection matrices, and converted from homogeneous to Euclidean coordinates (projective division by w), everything is clipped to the -1..+1 range (technically, it's clipped to -w..+w before the division, but it's easier to think in terms of dividing first and then clipping to a unit cube).
One of the main points of having transformation matrices is so that you can work in whatever coordinate system makes sense. The only constraints are that the model-view matrix should be affine (no perspective; the bottom row should be [0 0 0 1]) and that the projection matrix shouldn't include any translation. This is because lighting is done in eye space (after model-view transformation, before projection transformation). If you aren't using lighting (or certain other features, e.g. glTexGen() in GL_EYE_LINEAR mode), the distinction between model-view and projection transformations doesn't matter; only the final result.
For an orthographic projection, use whatever coordinates you like, but you often want independence from the monitor resolution unless you really need to care about the pixel grid (and as resolutions increase, that becomes less significant). Using a negative Y scaling for a top-left origin can sometimes be useful when dealing with text or for compatibility with more primitive graphics APIs, but there's one minor caveat: doing so will change the winding direction (clockwise versus counter-clockwise) of polygons, which can affect whether a polygon is considered front-facing or back-facing.