CPUs are not the same as GPUs. CPUs use radically different instruction sets and concepts. GPUs are programmed through abstracting hardware interfaces, and this is a GOOD THING (trust me), because games can ship you one executable. Read on, because OpenGL games actually include different code paths to optimize for different hardware.
GPUs were largely fixed function until now, so the functionality and interface were the same by design and by definition; this is also a GOOD THING (trust me). The equivalent information from hardware manufacturers w.r.t. optimization comes in the form of proprietary OpenGL extension specifications. These have been pouring out of ATI and NVIDIA for years and together constitute a MASSIVE volume of documentation on how to program NVIDIA and ATI hardware to get the most from it.
D3D has none of this capability because manufacturers cannot extend it. Microsoft controls the functionality totally, and nothing can go in without asking them, so the interface is the same everywhere. Manufacturers are left to tell developers what parts of D3D are best for their hardware and which exact combination of state gives the best results on their systems. Not insignificant, but nothing like the richness of the extensible OpenGL world.
Now, more recently, hardware has become programmable at a lower level, with increasing flexibility at each generation, and manufacturers have issued MASSES of documentation on how to program and optimize for these low level instructions. At the same time they have cooperated on getting this standardized so developers don't have to worry about optimizing for multiple cards. All Windows PCs, for example, share a common x86 instruction set; we don't have to worry so much about supporting MIPS or PowerPC when we optimize for Windows. We DON'T WANT to worry about making sure we're optimized for ATI instructions vs. NVIDIA instructions, although a few extensions might be acceptable here too (kinda like SSE from Intel vs. 3DNow! on AMD). The lack of common interfaces for this in the past has been seen as a problem for OpenGL rather than an advantage, and manufacturers have worked to reach agreement on common instructions and interfaces to help us poor developers use fancy new features without worrying about hardware differences.
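That agreed-on common instruction set is what something like ARB_vertex_program gives you. A minimal sketch of such a program, assuming the ARB_vertex_program syntax (not taken from any shipping title), might be:

```
!!ARBvp1.0
# Transform the vertex by the modelview-projection matrix and pass the
# primary color through. The same program text runs unchanged on
# NVIDIA and ATI hardware that exposes the ARB extension.
ATTRIB pos    = vertex.position;
PARAM  mvp[4] = { state.matrix.mvp };
OUTPUT opos   = result.position;
DP4 opos.x, mvp[0], pos;
DP4 opos.y, mvp[1], pos;
DP4 opos.z, mvp[2], pos;
DP4 opos.w, mvp[3], pos;
MOV result.color, vertex.color;
END
```

One program string, one code path: that's the x86-style convergence the paragraph above is describing, without giving up the vendor extensions for the cases where you do want hardware-specific tricks.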
D3D never experienced any of this, since shader interfaces, like the rest of the D3D API, were adopted and mandated by Microsoft, with none of the manufacturer-specific differences and optimizations that were seen in OpenGL.
Summary: Tomshardware got it wrong, VERY WRONG this time, although in some respects their complaint is an indirect plea for OpenGL, so I suppose they can be excused. Move along, there's nothing to see, err… except mountains of documentation from NVIDIA & ATI, training seminars & MoJo days, and GDC & Siggraph papers and tutorials out the wazoo. Yup, no optimization documentation there, if you've been living in a cave for the past few years, that is.
[This message has been edited by dorbie (edited 06-24-2003).]