We’re drifting way off topic, I think. Is there maybe some older “Pros and cons of high level shading” thread that we might use for our little battle?
I don’t see that this is off-topic. The topic is whether or not a high level shading language should be compiled for systems that cannot actually implement the language itself.
Do you agree with me that an OpenGL 1.1 implementation that does not support texture objects is a broken implementation? And do you agree that a C compiler that did not allow recursion would likewise be broken?
If you don’t agree on these points, then it is clear that you have no particular respect for specs. In which case, knock yourself out with your pseudo-glslang compiler.
If you agree on these points, that is, if you agree that a spec defines both what something is and what it is not, then it is important for you to understand that low-end hardware (hardware that doesn’t support ARB_fragment_program) simply cannot implement any form of glslang.
Here are a few violations of the glslang spec that you will be imposing on code compiled with your faulty compiler:
- Floating-point accuracy. The spec requires a dynamic range on the order of 2^32, with a relative accuracy of 1x10^-5 or better. Pre-ARB_fragment_program hardware simply cannot do this without implicit multipass (which requires a lot more than implementing the glslang extension specs).
- Conditional branches. The spec requires a compiled program to implement arbitrary conditional branches and most C flow-control structures. You can’t do this with older hardware, at least not with any guarantee in every case. The spec doesn’t allow compilation to fail because of conditional branches (though it can fail because an unrolled loop exceeds the instruction limit). Failing due to the use of a loop is like failing due to adding two numbers; it’s a fundamental feature of the language that is required to work.
- A minimum of 32 varying floats. Older hardware simply doesn’t support this many; at best you have the 8500, which supports 6. You’re not getting around this limitation without implicit multipass. Note, too, that the spec specifically requires this as a minimum. An implementation that violates this minimum is as broken as the 1.1 implementation that doesn’t have texture objects.
Let’s analyze the third violation. Why did the ARB decide on this absolute minimum? After all, it clearly keeps glslang off of any older card. So what was the purpose?
Well, the only thing this limit does is restrict which hardware can run glslang. After all, if the minimum were something like 8 floats (two texture coordinates), somebody might get it into their heads to make a glslang compiler for TNTs. The ARB has no interest in that, because it would create a fundamental dichotomy in the language. There would be the official spec’d language, and then there would be the ‘language-in-use’.
That is the danger in creating a compiled language where the compiler is allowed to fail for apparently random or arbitrary reasons. What happens is that people start writing to the lowest common denominator, so the ARB wants to make that denominator as high-end as possible. That would be around the Radeon 9500+ level, with its 4-texture dependency limit.
When people use glslang, the ARB wants them to understand that the language gives them more than they ever had before. As such, it should be restricted from running on hardware that does not provide that power. What good would running it there do?
There are perfectly good interfaces for getting at per-vertex and per-pixel computation on older hardware. ARB_vertex_program can be run on anything, and the tex_env combine functionality is relatively powerful (though it lacks texture-coordinate access functionality).
As such, the ARB, with this restriction, has stated that there is pre-GL2.0 hardware and post-GL2.0 hardware. Post-2.0 hardware can use glslang. Pre-2.0 hardware can’t. This is a simple, and very good, restriction. Whenever you find that you can use glslang, you will find that you have access to a great deal of power and functionality. If you can’t, you automatically know you don’t have that kind of functionality and power.
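The all-or-nothing gate described above can be sketched as a simple capability check. The function name and the per-card capability numbers are hypothetical illustrations; only the 32-varying-float minimum comes from the spec as quoted earlier:

```python
# Sketch of the pre-/post-GL2.0 split the post describes: hardware
# either meets every glslang minimum or it gets no glslang at all.
# Per-card capability numbers below are illustrative assumptions.

GLSLANG_MINIMUMS = {"varying_floats": 32}

def can_run_glslang(caps):
    """True only if the hardware meets every spec'd minimum."""
    return all(caps.get(name, 0) >= minimum
               for name, minimum in GLSLANG_MINIMUMS.items())

post_gl2_card = {"varying_floats": 32}  # e.g. a 9500+-class card
pre_gl2_card = {"varying_floats": 6}    # e.g. an 8500-class card

print(can_run_glslang(post_gl2_card))  # True
print(can_run_glslang(pre_gl2_card))   # False
```

The point of the gate is exactly that there is no partial pass: a card missing any one minimum is simply not a glslang platform.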
When you use OpenGL 1.0, there is the expectation of a certain level of functionality. That is, the implementation must provide 8 lights, regardless of how this may impact hardware. An implementation must provide a matrix stack of 32 elements or greater. Etc, etc. If your hardware can’t handle it, you have two options: do it in software, or don’t make a GL implementation. The third option, make it anyway and just let people use a subset of the spec, is not an acceptable option.
Glslang was not designed with the intention of ever running on all possible hardware, just as GL 1.0 was not. Instead, it was designed to run on a certain level of hardware, with fundamental assumptions about what that hardware can and cannot do.
[This message has been edited by Korval (edited 06-29-2003).]