View Full Version : GLSL validation



remdul
01-11-2010, 01:28 PM
What's the best way to ensure that GLSL code compiles on most implementations? I'm developing mainly on Nvidia hardware, whose driver isn't strict enough for my taste. I'd like to validate the code before putting out new builds to our testers, to make sure everything compiles properly on non-NV hardware. It's too time-consuming to wait for tester feedback on just a few syntactical/grammatical errors.

So, I have been using 3dlabs' glslang.dll to validate code, but it has a few bugs (e.g. with nested #ifdefs) and is becoming outdated (dated Sept 20, 2005). Is anyone aware of more recent code? And how about bringing this up to date and including it in the GL SDK?

How do other GL developers here test their shaders for compatibility (other than testing it on lots and lots of different hardware setups)?

DarkShadow44
01-11-2010, 01:52 PM
All I can tell you is to take care that you have an alternative for older hardware that lacks the extensions/functions you use
(for example sampler2DRect (1.40))
:(

Dark Photon
01-11-2010, 07:10 PM
What's the best way to ensure that GLSL code compiles on most implementations? I'm developing mainly on Nvidia hardware, of which the driver isn't strict enough for my taste.
Yeah, by default it's pretty lax, and permits Cg-isms.

You can get much stricter GLSL syntax by using:

#extension ###

where ### >= 110 (e.g. 120). See the NVidia GLSL release notes for more details.

sqrt[-1]
01-11-2010, 08:37 PM
I have often thought that someone "better than me at compilers" should write a GLSL pre-processor: basically run the pre-processor (#defines etc., and possibly #includes for files) and output validated GLSL.

As a bonus, it could do dead code elimination and constant folding (so spurious compiler errors in unused code paths don't stop shaders from running).

I know the Cg compiler can kind-of do this, (take GLSL code and output GLSL code) but I think it only targets GLSL1.0 as a destination? (it also seems to reformat the code into an assembly like format)
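The #include half of such a tool is small enough to sketch. Below is a minimal recursive resolver in Python for illustration only (GLSL has no standard #include; the directive syntax and the in-memory source dict are assumptions, and a real tool would read files from disk and track cycles per translation unit):

```python
import re

def preprocess(name, sources, seen=None):
    """Recursively expand hypothetical #include "file" directives.

    sources: dict mapping file names to GLSL source strings
    (an assumption for the sketch; a real tool would hit the filesystem).
    Raises ValueError on circular includes."""
    seen = set() if seen is None else seen
    if name in seen:
        raise ValueError("circular #include: " + name)
    seen.add(name)
    out = []
    for line in sources[name].splitlines():
        m = re.match(r'\s*#include\s+"([^"]+)"', line)
        if m:
            # Splice the included file's (already expanded) text in place.
            out.append(preprocess(m.group(1), sources, seen))
        else:
            out.append(line)
    return "\n".join(out)
```

Feeding the flattened output to the driver (or to a validator like glslang) then exercises only plain, portable GLSL.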

nystep
01-12-2010, 12:24 AM
What's the best way to ensure that GLSL code compiles on most implementations? I'm developing mainly on Nvidia hardware, of which the driver isn't strict enough for my taste.
Yeah, by default it's pretty lax, and permits Cg-isms.

You can get much stricter GLSL syntax by using:

#extension ###

where ### >= 110 (e.g. 120). See the NVidia GLSL release notes for more details.



did you mean,

#version ### ?
#extension will ensure that an extension is supported to compile the shader.
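To illustrate the distinction (a hypothetical fragment; the extension name is just an example): #version sets the language level the compiler must enforce, while #extension opts into one named extension on top of it.

```glsl
#version 120                                   // request strict GLSL 1.20 parsing
#extension GL_ARB_texture_rectangle : enable   // opt into one specific extension

uniform sampler2DRect tex;

void main()
{
    gl_FragColor = texture2DRect(tex, gl_FragCoord.xy);
}
```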

remdul
01-12-2010, 05:05 AM
I have often thought that someone "better than me at compilers" should write a GLSL pre-processor - Basically run the pre-processor (#defines etc and possibly #includes for files) and output validated GLSL.

As a bonus, it could do dead code elimination and constant folding. (so stupid compiler errors do not stop code from running)
The 3dlabs OpenGLCompiler supposedly does all of that. You can pass "EShOptNone", "EShOptSimple" or "EShOptFull" to ShCompile(). I use "EShOptNone" for quick validation.

Using the #version directive may work around the problem on NV, but I prefer to have some validation that is independent of hardware/driver.

Dark Photon
01-12-2010, 05:57 AM
As a bonus, it could do dead code elimination and constant folding. (so stupid compiler errors do not stop code from running)
Agreed, and for the same reason (compensating for under-functional GLSL compilers). Beyond that, it would make it easier to cache this intermediate, dead-code-eliminated form for subsequent runs, regardless of vendor driver, so as not to waste valuable tens of seconds (no, I only wish I were kidding) while the compiler goes off, parses, does DAG analysis, and throws away much of the shader via dead code elimination for a bunch of materials.

This would be much better than the alternative, which is turning your shader into an #ifdef/#endif nightmare, trying to prevent the compiler from wasting your user's precious time doing busywork (or building careful sprintf logic outside the compiler to build dead-code-eliminated shaders -- ugly).
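That "careful sprintf logic" usually amounts to injecting #define lines ahead of the shader body so the compiler can strip unused paths up front. A minimal sketch in Python (the function and macro names are made up for illustration; the only real constraint is that #version must remain the first line of the source):

```python
def make_variant(source, defines):
    """Prepend #define lines to a GLSL source to select a shader variant.

    defines: dict of macro name -> value (assumed names, e.g. USE_FOG).
    Keeps any #version directive as the first line, as GLSL requires."""
    lines = source.splitlines()
    if lines and lines[0].startswith("#version"):
        head, body = [lines[0]], lines[1:]
    else:
        head, body = [], lines
    defs = ["#define %s %s" % (k, v) for k, v in sorted(defines.items())]
    return "\n".join(head + defs + body)
```

Each distinct defines dict yields one pre-trimmed variant, avoiding both the #ifdef nightmare in one giant shader and repeated driver-side dead-code analysis.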

But then again, we're back to the wish for precompiled shaders in GLSL that can be cached on disk..., which is another thing this would be useful for in their absence.


I know the Cg compiler can kind-of do this, (take GLSL code and output GLSL code) but I think it only targets GLSL1.0 as a destination? (it also seems to reformat the code into an assembly like format)
I've wished for the same: targeting > GLSL 1.00, and preserving some semblance of the original variable names, rather than producing this kind of (hardly traceable) output:


...
void main()
{
_ZZ3SrZh0133 = _ZZ3SZaTMP243.x*_ZZ2Sgl_ModelViewMatrix[0];
_ZZ3SrZh0133 = _ZZ3SrZh0133 + _ZZ3SZaTMP243.y*_ZZ2Sgl_ModelViewMatrix[1];
_ZZ3SrZh0133 = _ZZ3SrZh0133 + _ZZ3SZaTMP243.z*_ZZ2Sgl_ModelViewMatrix[2];
_ZZ3SrZh0133 = _ZZ3SrZh0133 + _ZZ3SZaTMP243.w*_ZZ2Sgl_ModelViewMatrix[3];
_ZZ3SrZh0135 = _ZZ3SZaTMP244.x*_ZZ2Sgl_NormalMatrix[0];
...
It's occasionally useful as-is, but it would be much more useful with meaningful names.

However, with just the former, cgc could be an effective prefilter/precompiler for feeding all vendor's drivers (though this "kludge" wouldn't be in the best interests of OpenGL; better for GL to change the shader model to aid vendor stability and support precompiled disk-persistent shaders).

Faced with GLSL compiler quality issues from some vendors, I'd think either:

an ARB-standard, shared compiler (produces an abstract parse DAG and does dead code elimination), and/or
a user-space compiler->assembly + driver-space assembly->machine code model

would lead to greater cross-vendor GLSL driver stability and consistency.

Right now we have every vendor going off to develop its own supposedly-identical high-level language parser and optimizer for the exact same spec, and then wondering, star-struck, why they don't all behave exactly the same in the end (wow... you don't say).

Dark Photon
01-12-2010, 06:19 AM
You can get much stricter GLSL syntax by using:

#extension ###

where ### >= 110 (e.g. 120).
did you mean,

#version ### ?
Yeah, my bad. Thanks for the correction.

CatDog
01-12-2010, 06:32 AM
NVemulate (http://developer.nvidia.com/object/nvemulate.html) has this "Generate Shader Portability Errors" switch... (?)

CatDog