OpenGL and hardware acceleration?

I’m looking for articles that explain to what extent OpenGL does hardware acceleration automatically. So far I’ve just been able to guess.

Does anyone happen to know? I.e., when you just start writing simple OpenGL code to render a cube, are you automatically using hardware acceleration, or do you have to ask for it explicitly? And would OpenGL even work on a PC with a really old graphics card? Would it just be hideously slow and ugly?

I'd appreciate any pointers,
Joe

http://www.opengl.org/resources/faq/technical/mswindows.htm#0020

Yes, OpenGL uses hardware acceleration automatically when the card and driver support it.
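
If you want to verify which implementation you actually ended up with, the FAQ entry above boils down to checking the strings the driver reports once a context exists. Here's a minimal sketch using GLUT (any context-creation method works; GLUT is just the shortest way to get a current context). On Windows, a renderer string of "GDI Generic" means you landed on Microsoft's unaccelerated software implementation:

    /* Probe which OpenGL implementation is in use. On Windows,
     * a renderer of "GDI Generic" means software rendering. */
    #include <stdio.h>
    #include <GL/glut.h>

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutCreateWindow("probe");  /* glGetString needs a current context */

        printf("Vendor:   %s\n", (const char *) glGetString(GL_VENDOR));
        printf("Renderer: %s\n", (const char *) glGetString(GL_RENDERER));
        printf("Version:  %s\n", (const char *) glGetString(GL_VERSION));
        return 0;
    }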

As far as I know, every vendor is free to implement hardware acceleration for whichever parts of the pipeline it wants.
This means that most of the time the operation you're using (transforming four vertices, subtracting two pixel values) will be hardware accelerated.
There are some exceptions, but the rule of thumb is that the vendor will accelerate in hardware everything the chip can handle. Sometimes there's no hardware support for a specific feature (for example, vertex programs on the GeForce 1/2, or fast data transfers on the TNT), but the driver can still provide faster paths in software.
So, most of the time even the simplest program takes full advantage of hardware features through OpenGL. Occasionally that acceleration isn't enabled, but that happens rarely.
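
One practical consequence: before relying on a feature that may or may not be backed by hardware, you typically check whether the implementation advertises the corresponding extension. A rough sketch (GL_ARB_vertex_program here is just an example name, a current context is required, and strstr matching is the classic quick-and-dirty approach):

    /* Check whether the implementation advertises an extension.
     * GL_EXTENSIONS is a space-separated string; strstr is the usual
     * shortcut, though it can false-match on prefixes
     * (e.g. "GL_EXT_texture" inside "GL_EXT_texture3D"). */
    #include <string.h>
    #include <GL/glut.h>

    int has_extension(const char *name)
    {
        const char *ext = (const char *) glGetString(GL_EXTENSIONS);
        return ext != NULL && strstr(ext, name) != NULL;
    }

    /* usage: if (has_extension("GL_ARB_vertex_program")) ... */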

OpenGL will work on pretty much any video card from the '90s, and it doesn't necessarily have to be "slow" or "ugly".
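
To make the "simplest program" point concrete, here's a bare-bones GLUT cube as a sketch. Note that nothing in it asks for hardware acceleration: you just request a double-buffered RGB pixel format, and the driver accelerates whatever it can on its own:

    /* Minimal GLUT cube. There is no "enable hardware acceleration"
     * call anywhere -- acceleration is up to the driver. */
    #include <GL/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);
        glRotatef(30.0f, 1.0f, 1.0f, 0.0f);
        glutWireCube(2.0);
        glutSwapBuffers();
    }

    static void reshape(int w, int h)
    {
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, (double) w / (double) (h ? h : 1), 1.0, 100.0);
        glMatrixMode(GL_MODELVIEW);
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutCreateWindow("cube");
        glEnable(GL_DEPTH_TEST);
        glutDisplayFunc(display);
        glutReshapeFunc(reshape);
        glutMainLoop();
        return 0;
    }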