Card Manufacturers Tune Their Drivers for Certain Apps

I just read over at tomshardware.com that many card manufacturers do things in their drivers to inflate the results for certain programs (3DMark, Quake III). Personally it kinda makes me mad. I'm working on a game that is going to be on the level of Quake III. Knowing that drivers are being tuned for specific programs rather than for overall quality is really disturbing to me. He also brought up another good point: CPU makers (Intel and AMD) release whole book sets about their CPUs to the public for free, while video card manufacturers keep their cards a secret. I think if they did something similar to what Intel and AMD do, the result would be much better-running graphical apps.

Tell me what you think about this.

[This message has been edited by nukem (edited 06-24-2003).]

Old news.

CPUs are not the same as GPUs. CPUs use radically different instruction sets and concepts. GPUs are programmed through abstracted hardware interfaces, and this is a GOOD THING (trust me), because games can ship you one executable. Read on, because OpenGL games actually include different code paths to optimize for different hardware.

GPUs were largely fixed-function until now, so the functionality and interface were the same by design and by definition; this is also a GOOD THING (trust me). The equivalent information from hardware manufacturers w.r.t. optimization comes in the form of proprietary OpenGL extension specifications. These have been pouring out of ATI and NVIDIA for years, and together they constitute a MASSIVE volume of documentation on how to program NVIDIA and ATI hardware to get the most from it.
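For anyone who hasn't done this: checking for one of those extensions at runtime is just a string match against GL_EXTENSIONS, but it has to be a whole-token match or you get false hits. A standalone sketch (the extension strings are fed in directly so it runs without a GL context):

```c
#include <string.h>

/* Check for one extension in a space-separated GL_EXTENSIONS string.
 * Matches whole tokens only, so searching for "GL_ARB_texture" does
 * NOT falsely match "GL_ARB_texture_env_combine". In a real app the
 * exts string would come from glGetString(GL_EXTENSIONS). */
int has_extension(const char *exts, const char *name)
{
    size_t len = strlen(name);
    const char *p = exts;

    while ((p = strstr(p, name)) != NULL) {
        int start_ok = (p == exts) || (p[-1] == ' ');
        int end_ok   = (p[len] == ' ') || (p[len] == '\0');
        if (start_ok && end_ok)
            return 1;   /* found as a complete token */
        p += len;       /* substring hit only; keep scanning */
    }
    return 0;
}
```

The naive strstr-only version is a classic bug; the token check is the part worth copying.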

D3D has none of this capability because manufacturers cannot extend it. Microsoft controls the functionality totally, and nothing goes in without asking them, so the interface is the same everywhere. Manufacturers are left to tell developers which parts of D3D are best for their hardware and which exact combination of state gives the best results on their systems. That's not insignificant, but it's nothing like the richness of the extensible OpenGL world.

Now, more recently, hardware has become programmable at a lower level, with increasing flexibility in each generation, and manufacturers have issued MASSES of documentation on how to program and optimize for these low-level instructions, while at the same time cooperating on getting this standardized so developers don't have to worry about optimizing for multiple cards. All Windows PCs, for example, share a common x86 instruction set; we don't have to worry much about supporting MIPS or PowerPC when we optimize for Windows. We DON'T WANT to worry about making sure we're optimized for ATI instructions vs. NVIDIA instructions, although a few extensions might be acceptable here too (kinda like SSE from Intel vs. 3DNow! on AMD). The lack of common interfaces for this in the past has been seen as a problem for OpenGL rather than an advantage, and manufacturers have worked to reach agreement on common instructions and interfaces to help us poor developers use fancy new features without worrying about hardware differences.

D3D never experienced any of this, since shader interfaces, like the rest of the D3D API, were adopted and mandated by Microsoft without the manufacturer-specific differences and optimizations that were seen in OpenGL.

Summary: Tom's Hardware got it wrong, VERY WRONG this time, although in some respects their complaint is an indirect plea for OpenGL, so I suppose they can be excused. Move along, there's nothing to see... err, except mountains of documentation from NVIDIA & ATI, training seminars & Mojo Days, and GDC & SIGGRAPH papers and tutorials out the wazoo. Yup, no optimization documentation there, if you've been living in a cave for the past few years, that is.

[This message has been edited by dorbie (edited 06-24-2003).]

Ah crap. Not this again.

Yes, all/most card manufacturers have "cheated" on benchmarks. Yes, people both agree and disagree with the practice.

Originally posted by nukem:
CPU makers (Intel and AMD) release whole book sets about their CPUs to the public for free, while video card manufacturers keep their cards a secret. I think if they did something similar to what Intel and AMD do, the result would be much better-running graphical apps.

Yes, the CPU manufacturers do this kind of thing for the average user if you ask for it. It's very useful when you want to write SIMD/SSE/3DNow! etc. (a.k.a. assembler).

If you wanted to replace an implementation of GL/D3D, then this kind of information might be useful. But these "cheats" generally just disable certain features that are enabled by the application but aren't necessarily required. I think, from memory, the Q3 cheat (nVidia?) disabled stencil testing for a performance boost in the standard Q3 tests (could be wrong though). If you renamed the exe, you got different results. This type of "cheat" really comes down to making sure you, as a developer, correctly implemented your code.
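The "rename the exe" trick works because the detection is usually something this crude. A purely hypothetical sketch of the kind of logic being discussed; the names and the flag are made up, not from any actual driver:

```c
#include <string.h>

/* HYPOTHETICAL sketch of app detection in a driver: look at the
 * executable name and flip an internal setting for known benchmarks.
 * Renaming quake3.exe to something else defeats the match, which is
 * exactly how these cases were exposed. Names are illustrative only. */
int use_benchmark_tweaks(const char *exe_name)
{
    return strstr(exe_name, "quake3") != NULL ||
           strstr(exe_name, "3dmark") != NULL;
}
```

So the benchmark gets one driver behaviour and your own (renamed or merely different) app gets another, which is the whole complaint.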

'Nuff said - now read the 1,000 other posts on the topic.

I still think graphics card makers should release more information on how their cards work, and more tech details. X developers have complained for years about the lack of info from these companies about their cards. They can hardly make the drivers; that's why, if you don't download the official nVidia drivers, Quake III will hardly run.

rgpc, that was actually an ATI Q3 ("Quack") cheat, and it was the texture MIP LOD settings that seemed to get totally hosed. Renaming the executable with the quackifier program improved the texture quality and reduced the performance.

As for THG, they buried this story; now they want to pontificate on the issue, getting it totally arse over tit.

[This message has been edited by dorbie (edited 06-25-2003).]

Originally posted by nukem:
I still think graphic card makers should release more information on how there card works and tech details.

Well, if they do that, what's in it for them from a commercial angle? If there were only one 3D-chip manufacturer, then I think no problem. Plus, making an example out of Intel/AMD isn't correct at all (and you all know why).

Originally posted by matt_weird:
[b] Well, if they do that, what's in it for them from a commercial angle? If there were only one 3D-chip manufacturer, then I think no problem. Plus, making an example out of Intel/AMD isn't correct at all (and you all know why).

[/b]

They can release some schematics and vague info after the product hits the market, and sometimes they do.

I find that there are enough docs (performance and technical) and examples from ATI and NV, so in that respect I give them 10/10.

Happy benchmarking!

Well, yeah, that's what I forgot to mention: I've seen many diagrams in related magazines of what the internals of those chips are supposed to do, so the only thing they don't issue is the schematic designs of the chips themselves, hehe (and now this is what I meant: they won't do that anyway, in terms of commercial privacy).

Pro graphics card vendors do that all the time. They even have a pop-up menu in their control panels where you can choose whether to optimize for "Maya" or "3ds max" or "AutoCAD" or whatever. Over there, this is considered a requirement and an important selling feature.

Originally posted by jwatte:
They even have a pop-up menu in their control panels where you can choose whether to optimize for "Maya" or "3ds max" or "AutoCAD" or whatever.

...I think it wouldn't be that amazing to also see items in those pop-up menus like "Q3A test optimizing" and/or "FutureMark incredible improvement!"

I would have to agree with the idea that nVidia and ATI are perfectly free to optimize for different popular applications, as long as the optimizations are documented (at least give a hint), out in the open, and user-selectable.

At the very minimum, a single "turn off application-specific optimizations" switch in the control panel.