"However, this model seems to work for D3D with most bugs being silly driver side things."

You're conflating two different issues. The fact that D3D has fewer evident driver bugs is not because of how it compiles shaders. It's due to several factors:
1: Writing D3D drivers is simpler than writing GL drivers. Simpler code means fewer bugs.
2: D3D is more heavily used than OpenGL. Because of that, more bugs are found, and because D3D software is quite popular, those bugs get responded to more quickly than GL bugs. The best way to find and squash bugs is to use something, and code that doesn't get used is more likely to be buggy.
Changing the language that gets compiled will change very little about how many bugs you will encounter. Indeed, you'll likely get more bugs because driver developers will have to maintain their GLSL compilers too, for backwards compatibility reasons.
If you want to decrease compiler bugs, then put together a real test suite for GLSL. Then find a way to make driver developers test and fix bugs based on it.
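To make that concrete, here's roughly what a single case in such a suite could look like: feed the driver one shader the spec says is valid and one it says is invalid, and check what its compiler actually does. This is a minimal sketch in C, assuming a current GL context and GLEW (or similar) for the entry points; the helper name run_compile_test is made up for illustration.

```c
#include <stdio.h>
#include <GL/glew.h>

/* Compile one shader stage and report whether the driver's compiler accepted it. */
static int run_compile_test(GLenum stage, const char *source)
{
    GLuint shader = glCreateShader(stage);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "compile failed:\n%s\n", log);
    }
    glDeleteShader(shader);
    return ok == GL_TRUE;
}

/* A fragment shader a conforming GLSL compiler must accept... */
static const char *valid_fs =
    "#version 120\n"
    "void main() { gl_FragColor = vec4(1.0); }\n";

/* ...and one it must reject (GLSL has no implicit float-to-int conversion). */
static const char *invalid_fs =
    "#version 120\n"
    "void main() { int x = 1.5; gl_FragColor = vec4(x); }\n";
```

A real suite would be thousands of such cases covering the corners of the spec, run against every driver you care about.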
"You aren't guaranteed any form of optimization for your shaders. You can mitigate this by running the shaders through an offline 'optimizer' that basically just moves text around... That idea isn't the best if you can just avoid the text distribution altogether. If you had your own shader bytecode generator, that you were in control of, you could implement any optimizations you like. (e.g., you could build your own work atop of systems like LLVM.) Not that you couldn't technically do that already, but the bytecode solution is a bit more 'workable.'"

What kind of optimizations are you talking about? Loop unrolling? Function inlining? Dead code removal? That's not very much in the grand scheme of shader logic; most of the real optimizations will have to be done by the driver.
One hardware's optimization is another's pessimization. Unrolling a loop on one piece of hardware can give a performance boost; on another, it can make things slower. The driver knows which is better because it's hardware-specific. Better to rely on the driver to do the right thing than to rely on your personal hope that you can out-think the people who actually know their hardware.
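To see what that kind of source-level "optimization" amounts to, here's the same trivial blur-style fragment shader with its loop rolled and hand-unrolled (GLSL embedded as C string literals; the shader is purely illustrative). Whether the second version is any faster than the first depends entirely on the hardware, which is exactly the point.

```c
/* The loop as written... */
static const char *fs_rolled =
    "#version 120\n"
    "uniform vec4 weights[4];\n"
    "uniform sampler2D tex;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    vec4 sum = vec4(0.0);\n"
    "    for (int i = 0; i < 4; ++i)\n"
    "        sum += weights[i] * texture2D(tex, uv + vec2(float(i) * 0.01));\n"
    "    gl_FragColor = sum;\n"
    "}\n";

/* ...and what an offline optimizer (or the driver) might unroll it into. */
static const char *fs_unrolled =
    "#version 120\n"
    "uniform vec4 weights[4];\n"
    "uniform sampler2D tex;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    vec4 sum = weights[0] * texture2D(tex, uv);\n"
    "    sum += weights[1] * texture2D(tex, uv + vec2(0.01));\n"
    "    sum += weights[2] * texture2D(tex, uv + vec2(0.02));\n"
    "    sum += weights[3] * texture2D(tex, uv + vec2(0.03));\n"
    "    gl_FragColor = sum;\n"
    "}\n";
```

The driver already sees the rolled form and can pick whichever shape suits the chip; doing it yourself up front just bakes in a choice the driver is better placed to make.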
"However, SM2 (which is a large chunk of the target market for indie developers currently) should be supported. (That also corresponds roughly to the feature set available on mobile devices currently, if I'm not mistaken.)"

If hardware isn't being supported, it won't get new OpenGL APIs. New APIs like this shader language of yours. Therefore, even if the hardware could run it, it won't get it, because the IHV isn't supporting that hardware anymore.
The "large chunk of the target market for indie developers" is primarily hardware that isn't being supported. Integrated Intel chips and any of AMD's hardware pre-HD models. NVIDIA is still supporting the GeForce 6xxx and 7xxx lines, but outside of that, you've got nothing.
Thus, any effort in this regard is going to help less than half of the "target market for indie developers." So why bother?
"It probably could with intrinsics, so to speak."

At which point, you simply have a more cumbersome way of specifying the blending equation. That's not particularly helpful.
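For comparison, this is all it takes to specify a standard "over" blend with the existing pipeline state; anything routed through shader-side intrinsics would have to express at least the same information, just less directly. (A minimal sketch; plain GL blend-state calls, assuming GLEW or similar exposes the GL 1.4 entry points.)

```c
#include <GL/glew.h>

/* Classic source-over-destination alpha blending via fixed-function blend state. */
static void enable_alpha_blending(void)
{
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);                       /* result = src*sf + dst*df */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  /* sf = srcA, df = 1 - srcA */
}
```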