View Full Version : #version preprocessor (OpenGL ES 2.0 and Core)



Zenja
11-30-2010, 07:33 AM
Hi everyone.
I'm developing a cross platform engine which works with both the Core profile and OpenGL ES 2.0 (iPhone). I've encountered a problem with the #version preprocessor directive.

For OpenGL >= 3.3 (core profile), the value of #version is 330. This will reject all deprecated GLSL commands (this is what I want). However, for OpenGL ES 2.0, #version is set to 100.

How does one use the same shader source for both code bases? I can get around the issue by dynamically modifying the shader source before compilation (replacing #version with platform definition), but then I'm not using the same shader source anymore.

Edit: OK, I found the answer I was looking for. The OpenGL ES 2.0 shading language predefines a macro called GL_ES. Now I can do platform checks with #ifdef. Leaving this post here in case someone else runs into the same problem.

kyle_
11-30-2010, 09:59 AM
It doesn't really help you, as


'The #version directive must occur in a shader before anything else, except for comments and white space.'

so


#ifndef GL_ES
#version 330
#endif

appears to be illegal.
NVIDIA doesn't care, but ATI does, and on my driver it fails compilation.

Zenja
11-30-2010, 01:37 PM
'The #version directive must occur in a shader before anything else, except for comments and white space.'



#ifndef GL_ES
#version 330
#endif

appears to be illegal.
NVIDIA doesn't care, but ATI does, and on my driver it fails compilation.

Yes, I've read about that clause and had concerns (since I dynamically add preprocessor definitions at the beginning of the shader file), but I experienced no issues on nVidia 250 hardware (Windows and Linux). Since my engine knows which code path to use, I guess I have no choice but to manually prepend the correct version line when constructing the shader. It's a shame I cannot rely on #ifdef GL_ES on AMD hardware.

Alfonse Reinheart
11-30-2010, 10:09 PM
Just use another shader string. It doesn't have to be the first thing in a shader string; just the first thing in the first shader string. Remember: the way multiple strings work is that they are concatenated together to form the input before compilation. So just have the first string be "#version 330" for OpenGL, and whatever other version you need for GL ES.

bcthund
12-02-2010, 04:40 PM
This is what you are referring to, correct?



static const char *cShadedVP =
#ifdef OPENGL_ES
    "#version 100\n"   /* \n is required, or the next literal runs onto the #version line */
#else
    "#version 330\n"
#endif
    "uniform mat4 mvpMatrix;\n"
    "attribute vec4 vColor;\n"
    "attribute vec4 vVertex;\n"
    "varying vec4 vFragColor;\n"
    "void main(void) {\n"
    "    vFragColor = vColor;\n"
    "    gl_Position = mvpMatrix * vVertex;\n"
    "}";

Dark Photon
12-02-2010, 05:46 PM
This is what you are referring to, correct?
You can do that, but I think what Alfonse was referring to is that glShaderSource takes an "array of strings", not just a single string. Make the first string in the array just a "#version ..." string. That nets you the same thing as the above, but the selection could also depend on a setting sensed at run time.
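A minimal sketch of that two-string approach (the helper name select_version and the is_gles flag are mine, not from the thread): pick the version line at run time and pass it as the first element of the array handed to glShaderSource, which concatenates the strings before compilation. The glShaderSource call itself is shown in a comment since it needs a live GL context.


#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: choose the #version line at run time. */
static const char *select_version(int is_gles) {
    return is_gles ? "#version 100\n" : "#version 330\n";
}

/* The shader body, written once, with no #version line of its own. */
static const char *kShaderBody =
    "uniform mat4 mvpMatrix;\n"
    "attribute vec4 vVertex;\n"
    "void main(void) { gl_Position = mvpMatrix * vVertex; }\n";

int main(void) {
    /* First string: version line; second string: the shared body.
     * With a live context you would hand both to the driver:
     *   glShaderSource(shader, 2, sources, NULL);
     * The driver concatenates them, so #version still comes first. */
    const char *sources[2] = { select_version(0), kShaderBody };

    assert(strcmp(sources[0], "#version 330\n") == 0);
    printf("%s%s", sources[0], sources[1]);
    return 0;
}
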

ugluk
12-04-2010, 02:46 PM
I've had this problem a while ago and have solved it like this:

compatibility.fs

#version [the version returned by GL query]
#if (__VERSION__ > 120)
# define IN in
#else
# define IN varying
#endif // __VERSION__

compatibility.vs

#version [the version returned by GL query]
#if (__VERSION__ > 120)
# define IN in
# define OUT out
#else
# define IN attribute
# define OUT varying
#endif // __VERSION__

Then I use these macros in the shader source instead of in/out and varying. Everything works fine, as long as I stay away from deprecated or newer features.
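For instance, a vertex shader built on top of compatibility.vs might look like this (a sketch; it assumes compatibility.vs is prepended first so IN/OUT are already defined, and borrows the identifiers from bcthund's shader above):


// Compiled with compatibility.vs prepended, so IN/OUT expand to
// attribute/varying on GLSL <= 120 and to in/out on newer versions.
uniform mat4 mvpMatrix;
IN vec4 vColor;
IN vec4 vVertex;
OUT vec4 vFragColor;

void main(void) {
    vFragColor = vColor;
    gl_Position = mvpMatrix * vVertex;
}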

YarUnderoaker
12-13-2010, 05:56 AM
Does this condition work on the AMD platform?
Because I'm getting strange behavior: for #version 400 core, __VERSION__ == 100 :eek:
Driver: Catalyst 10.11

YarUnderoaker
12-13-2010, 06:09 AM
And for all other versions, the same result.