If your goal is to make your code run optimally anywhere, then your goal is not the secrecy of the source code. You can have multiple goals, but one has to have primacy. And if the goal is to be able to compile for any arbitrary architecture into an optimal form, then you're not going to achieve secrecy. It really takes no more thinking than asking, "Which would you prefer, C or IA-32 machine code, if you had to make some code run on any of a number of CPUs?" Since plain C source seems to be out of the picture (that would make it "source code", and with pissing contests and secrecy we can't expect that), there's no way to fix that flaw. What remains is another compiled representation, albeit still an ISA-independent one.
And if that's your goal, then you need a language that preserves high-level constructs. It must preserve function calls, high-level looping constructs (not merely jump instructions), structs, and so on.
Perhaps, but such a thing is really just glslang. Maybe an easier-to-parse form of it (a high-level assembly), but that's all the savings you're going to get. A good (read: proper) intermediate representation would benefit everyone, even if some implementations had non-optimal optimizers to turn it into optimal code for their hardware.
The problem of interest in this thread is that programs with thousands of glslang shaders take minutes, in some cases hours, to start. We want to find a way that can decrease this time.
The intermediate language issue only came up as a possible solution to this problem, and it is only useful as a solution to that problem. An intermediate language that is high enough level to remain optimal in virtually all conditions will require approximately the same compile time as glslang. This no longer makes it a solution to the problem, and thus it becomes useless.
Which is the point. You can decrease compile time (on some hardware) by making a low-level shader language. But in doing so, you make it so that only certain hardware can compile the shader optimally.
In general, I would say that readback of a fully IHV-dependent binary blob is the safest way to achieve faster compile times. While the initial compile will be slow, you can pretend that it happens at program install time. The concern is that later changes (swapping graphics cards, etc.) will force a lengthy recompile, and even a driver update can cause the system to want to recompile the shader.
The modification of this method to store the textual glslang code alongside the IHV-dependent data is an alternative that exists primarily to make the binary blobs IHV-neutral. That is, a blob written by driver X can be read by driver Y without problems, though it may recompile the shader from the stored source. The only problem there is that it is not possible to know whether the shader will actually be recompiled or loaded precompiled.
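The blob-plus-source scheme amounts to a content-addressed cache: key the stored binary on the shader source text plus the driver/GPU identity, and fall back to a full compile on any mismatch. Here is a minimal sketch of that logic in Python; the class, the key format, and the `compile_fn` hook are hypothetical stand-ins to show the bookkeeping, not any real GL API.

```python
import hashlib

class ShaderBlobCache:
    """Sketch of an application-side shader binary cache.

    compile_fn stands in for the real (slow) driver compile; the
    returned bytes stand in for the IHV-dependent binary blob.
    """

    def __init__(self, compile_fn):
        self.compile_fn = compile_fn
        self.store = {}  # cache key -> compiled blob

    def _key(self, source, driver_id):
        # Key on both the source text and the driver/GPU identity, so
        # a card swap or driver update shows up as a cache miss (and
        # thus a recompile) rather than a stale or foreign blob.
        return hashlib.sha256(
            (driver_id + "\0" + source).encode()
        ).hexdigest()

    def get_program(self, source, driver_id):
        """Return (blob, was_cached); recompile on any miss."""
        key = self._key(source, driver_id)
        if key in self.store:
            return self.store[key], True   # loaded precompiled
        blob = self.compile_fn(source)     # slow path: full compile
        self.store[key] = blob
        return blob, False
```

Note that this also illustrates the uncertainty mentioned above: the caller only learns after the fact (via the hit/miss flag) whether the shader was loaded precompiled or silently recompiled.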
The third alternative is to hope that IHVs will cache compiled versions of our shaders and load them when our string matches with one in the cache. The primary disadvantage is that it isn't really a solution, as it does not rely on GL spec-defined behavior (since spec-defined behavior begins and ends with a render context).
There are two things that these three alternatives have in common that the intermediate language one doesn't. First, it retains glslang's advantage with regard to optimization. Second, and most importantly, they have a chance of actually being implemented. The ARB is not going to create a brand new shader language, and they certainly are not going to go the full D3D route with shader model nonsense.
So the intermediate language "alternative" is simply idle fantasy.