Hi. I too am a little new to OpenGL, but maybe I can help you out, or at least give you something to think about while more experienced people reply.
I think how it works is that glLoadIdentity sets the transformation matrix to the identity matrix. Every vertex you send between glBegin and glEnd is left-multiplied by the transformation matrix. So right now the transformation matrix doesn’t change the vertices at all: when you compute T * v (transformation matrix times vertex), T is just the identity matrix, so v’ = T * v = v.
Now, calling glTranslate and glRotate will modify the transformation matrix, so that it’s no longer merely an identity matrix; the matrix will now transform all the vertices that it left-multiplies. You will get v’ = T * v where the vertices have all been rotated and translated and whatnot, so the resulting quad that gets drawn will be a little different from the vertices you put between glBegin and glEnd.
To answer your first question: The order in which you send those commands DOES matter.
For example, if you did:
glLoadIdentity
glTranslate
quad
glRotate,
then you would draw the quad before the glRotate changes your transformation matrix to add some rotation.
If you understand what each of those commands do:
glLoadIdentity - resets the transformation matrix to identity
glTranslate - adds some translation to the transformation matrix
glRotate - adds some rotation to the transformation matrix
the quad code with glBegin/glEnd - actually draws those vertices using the current transformation matrix. The GL_QUADS enum tells OpenGL to use each group of 4 vertices to draw one quadrilateral (if you send more multiples of 4 vertices, it will draw more quadrilaterals).
you should be able to figure out what’s going on.
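Putting those pieces together, a translated-and-rotated quad looks something like this (the specific numbers are made up for illustration):

```c
glLoadIdentity();                     /* reset the transformation matrix */
glTranslatef(0.0f, 0.0f, -5.0f);      /* move everything back along z */
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);   /* spin 45 degrees around the z axis */

glBegin(GL_QUADS);                    /* every 4 vertices make one quad */
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();
```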
Now, I’m not sure about the following stuff, but here’s some more things to consider:
The glRotate and glTranslate functions apply translations and rotations to the current transformation matrix. In your simple example you never switch which transformation matrix is current, so that’s not a concern yet. But eventually you’ll use different transformation matrices for different things, like adding 3D perspective to the things you draw via OpenGL, and you switch between them with glMatrixMode.
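That switching usually looks something like this (a sketch of the typical fixed-function setup; the projection values here are made up, and gluPerspective comes from the GLU helper library):

```c
/* Typically done once, or on window resize: set up the projection matrix */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, width / (double)height, 0.1, 100.0);  /* example values */

/* Then switch back to the modelview matrix before drawing */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
```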
For your next question, I think you are setting up depth testing correctly. At least, that’s the standard setup that people tend to use. You need to do
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LEQUAL);
only once. glClearDepth(1.0f) also only needs to be called once; it just sets the value the depth buffer gets cleared to. But you should clear the depth buffer itself, with glClear(GL_DEPTH_BUFFER_BIT), every frame (if your application will draw more than just one frame). You typically do this at the same time you clear the color buffer.
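So the whole depth setup might be sketched like this (splitting the one-time state from the per-frame clear):

```c
/* Once, at startup: */
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LEQUAL);
glClearDepth(1.0f);        /* value the depth buffer is cleared to */

/* Every frame, before drawing: clear color and depth together */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
```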
Of course, you should try to develop an understanding of what is actually going on with the depth buffer and depth testing.
For your last question, you should realize what freeglut is, what OpenGL is, and how they are related. OpenGL is an API that lets you take control of graphics hardware (if it is available), draw basic shapes, and color pixels in certain ways. It’s your job to use these basic tools and learn advanced techniques, like lighting and texturing, to create cool things.
OpenGL is not a piece of software. It is merely a specification, which means whoever designs OpenGL states what OpenGL should do. It is up to other people, typically hardware manufacturers like Nvidia and AMD, to actually make an implementation of OpenGL, by writing drivers for their graphics cards that can understand and perform OpenGL functions and whatnot.
Now, what OpenGL does NOT specify is how what it draws gets sent to a window on a particular OS. Opening a window and putting stuff in it is not cross-platform, so it’s not OpenGL’s job to specify. That’s where freeglut comes in. For example, Windows uses something called WGL to create a “context” for OpenGL to use; a context contains all the inner workings that OpenGL needs in order to work. I think an “OpenGL context” is also tied to the window that OpenGL will actually draw to. OS X uses CGL and X11 (Linux) uses GLX. freeglut automatically determines which OS you are on and creates the appropriate OpenGL context for you to use. Then you only need to type in a little bit of freeglut code like glutDisplayFunc(&renderScene) to finally start using OpenGL in your environment.
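A minimal freeglut program is something like this (a sketch; the window title and sizes are arbitrary, and renderScene is your own callback):

```c
#include <GL/freeglut.h>

/* Your render callback: clear the screen, draw, swap buffers */
void renderScene(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... glBegin/glEnd drawing goes here ... */
    glutSwapBuffers();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("my window");   /* freeglut creates the OpenGL context here */
    glutDisplayFunc(&renderScene);
    glutMainLoop();                  /* never returns; calls renderScene for you */
    return 0;
}
```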
So how “strong” freeglut is isn’t really a good question. I personally don’t use freeglut, so maybe freeglut could have some bugs or limitations that prevent you from using certain parts of OpenGL, but I’d say what really limits you from using certain parts of OpenGL is the actual OpenGL implementation (on the driver or whatever). For example, if you have an older graphics card, you might not be able to use the recent features that came with the recently released OpenGL 4, kind of like how only the newest graphics cards can use DirectX 11.
Then there is the world of OpenGL extensions: extensions to the original OpenGL specification that aren’t made by the Khronos group (the people who develop OpenGL), but by other people like Nvidia, AMD, or Microsoft. These add additional functionality to OpenGL. Whether you can use a given extension depends on what driver is installed on the machine that actually runs the application, NOT on your computer where you develop the application. You will see that certain drivers support certain extensions and some don’t. In general, however, graphics card hardware is separated into “classes” by the major OpenGL version it supports. For Nvidia, the GeForce 8 series and later support all versions of OpenGL 3, and the GeForce 400 series and later support all versions of OpenGL 4.
Final words: Take all this stuff with a grain of salt, I’m still learning and I’m just trying to tell you what I’ve learned so far. I’m still very much a newbie to OpenGL, and some of what I said could be inaccurate.
More importantly, I see you are stuck using an older style of OpenGL, called “immediate-mode rendering”, that uses glBegin and glEnd. Once you start making serious applications, this style of OpenGL coding incurs a huge performance penalty, because your vertex data lives in CPU memory and gets resent to the GPU every frame. If you are familiar with locality of reference in programming, you know that you generally want data close to where it’s going to be used, and glBegin/glEnd is bad for that. There are ways to use OpenGL such that the data you use is sent to the graphics card once and stays there, rather than being fetched from the CPU every time.
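The usual way to do that is a vertex buffer object (VBO): you copy your vertices into GPU memory once, then draw from that buffer each frame. A rough sketch (these are real GL buffer calls, but how you get access to them depends on your setup and extension loading, so treat this as an outline):

```c
/* Once, at startup: copy the quad's vertex data into GPU memory */
GLfloat verts[] = { -0.5f, -0.5f,   0.5f, -0.5f,
                     0.5f,  0.5f,  -0.5f,  0.5f };
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

/* Every frame: draw from the buffer, with no per-vertex CPU traffic */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);         /* old-style; modern code uses vertex attributes */
glVertexPointer(2, GL_FLOAT, 0, (void *)0);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
```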
So, if you want to learn OpenGL the right way, the modern way, check out this guy’s tutorials:
http://www.arcsynthesis.org/gltut/index.html
They are very well written, and more challenging than other OpenGL tutorials, but if you are serious about graphics programming, it’s definitely a good read.
Edit: If you do end up using those tutorials, note that they also do away with the “fixed-function pipeline” approach to graphics programming, which means no more glTranslate and glRotate, among other things. It is a more general, programmable approach: you will have to write shaders yourself and do the matrix math yourself (to a degree, but don’t worry if you aren’t familiar with matrix math or linear algebra). The author of those tutorials goes into these topics in depth.
Good luck!