Vertex shader failed to compile with the following errors:
ERROR: 0:1: error(#76) Syntax error unexpected tokens following #version
ERROR: error(#273) 1 compilation errors. No code generated
The reason why I want to change the version from 110 to 130: I wanted to use inverse() and transpose() in the shader, but under #version 110 I get
ERROR: 0:1: error(#202) No matching overloaded function found inverse
ERROR: 0:1: error(#202) No matching overloaded function found transpose
ERROR: 0:1: error(#160) Cannot convert from 'const float' to '3X3 matrix of float'
so I thought maybe this can be fixed by increasing the version.
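For reference, the kind of shader code that produces those three errors under #version 110 would be a normal-matrix computation along these lines (this is only a reconstruction of the intent, not the original poster's shader; u_modelview3 is a made-up uniform name):

uniform mat3 u_modelview3;   // hypothetical name, for illustration only
// ...
// under #version 110 neither inverse() nor transpose() is a built-in,
// so both calls fail to resolve and the assignment to a mat3 fails as well
mat3 normal_matrix = transpose(inverse(u_modelview3));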
Weird. Works here on NVidia. What driver vendor/version are you using? And do you know for a fact that it supports GLSL 1.3? Perhaps it doesn’t, and it’s not very good at saying so.
Oh ok, looks like I have to create an OpenGL 3 context if I want to use inverse() in the shader.
Well, this is an excerpt of the beginning of the shader
#version 130
in vec4 a2v_position;
in vec3 a2v_normal;
//...
So there is definitely a newline right after the #version directive ;). The weird thing is, if I write
#version 150
I get the following error
Vertex shader failed to compile with the following errors:
ERROR: 0:1: error(#106) Version number not supported by GL2
ERROR: 0:1: error(#76) Syntax error unexpected tokens following #version
ERROR: error(#273) 2 compilation errors. No code generated
So the compiler does in fact recognize the #version 150 directive, yet it still complains about tokens following it.
Vertex shader failed to compile with the following errors:
ERROR: 0:1: error(#106) Version number not supported by GL2
ERROR: 0:1: error(#76) Syntax error unexpected tokens following #version
ERROR: error(#273) 2 compilation errors. No code generated
The same problem under a 3.1 context. I guess you are on ATI, right?
Ok, now this is just getting weird. Let’s see your glShaderSource() invocation, with a single constant (hard-coded in the source code) string being fed as input. Post a little 10-line test program that demonstrates the problem so we can get our hands on it. I suspect you’ll figure this out while you’re cooking it.
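A minimal test program in that spirit might look like the following sketch (my own, not the original poster's code; it assumes freeglut and GLEW just to get a context and the GL 2.0+ entry points):

#include <stdio.h>
#include <GL/glew.h>
#include <GL/glut.h>

/* one hard-coded shader string, nothing read from a file */
static const char *vs_src =
    "#version 130\n"
    "in vec4 a2v_position;\n"
    "void main() { gl_Position = a2v_position; }\n";

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutCreateWindow("shader test");   /* a current GL context is needed first */
    glewInit();                        /* loads glCreateShader & co. */

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);   /* single constant string, no length array */
    glCompileShader(vs);

    GLint ok = GL_FALSE;
    char log[4096] = "";
    glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);
    glGetShaderInfoLog(vs, sizeof log, NULL, log);
    printf("compile %s\n%s\n", ok ? "OK" : "FAILED", log);
    return 0;
}

Building it with something like gcc test.c -lGLEW -lglut -lGL should be enough to see whether the hard-coded string still trips the driver.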
Have you considered inverting the matrix on the CPU and passing that as a uniform? If your matrix is not modified per-vertex or per-fragment, this could provide a significant speed gain.
The GLSL compiler could do this automatically if it detects that only uniforms are used as input to the inverse() function, but I don't know if it does. I am still using GLSL 1.20, so I don't have this function.
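For what the CPU-side route could look like, here is a sketch (my assumptions: the 4x4 modelview matrix is stored column-major, the program is already bound with glUseProgram, and u_normal_matrix is a made-up uniform name). It uploads the inverse-transpose of the upper-left 3x3 once per draw call, so the shader needs neither inverse() nor transpose().

#include <GL/glew.h>

/* Upload the inverse-transpose of the upper-left 3x3 of a column-major
   4x4 modelview matrix.  No zero-determinant check, for brevity. */
static void set_normal_matrix(GLuint program, const float mv[16])
{
    /* upper-left 3x3 in math (row, column) order */
    float a = mv[0], b = mv[4], c = mv[8];
    float d = mv[1], e = mv[5], f = mv[9];
    float g = mv[2], h = mv[6], i = mv[10];

    float det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g);
    float s = 1.0f / det;

    /* the inverse-transpose of a 3x3 is its cofactor matrix divided by the
       determinant; n is laid out column-major, ready for glUniformMatrix3fv */
    float n[9] = {
        s*(e*i - f*h), s*(c*h - b*i), s*(b*f - c*e),   /* column 0 */
        s*(f*g - d*i), s*(a*i - c*g), s*(c*d - a*f),   /* column 1 */
        s*(d*h - e*g), s*(b*g - a*h), s*(a*e - b*d),   /* column 2 */
    };

    /* glUniform* writes into the currently bound program, so this assumes
       glUseProgram(program) has already been called */
    glUniformMatrix3fv(glGetUniformLocation(program, "u_normal_matrix"),
                       1, GL_FALSE, n);
}

The shader side would then just declare uniform mat3 u_normal_matrix and use it directly, which also works under #version 110.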
Yep, it’s a long shot, but it could be that he’s hinting at the compiler optimization trick documented in the DX files, the so-called “pre-shader”: the compiler factors out “batch constant” expressions involving uniforms and hoists them into their own little program, which is run as a sort of batch preprocess on the CPU. It’s an Effects-framework thing, but in a perfect world it could probably be a shader thing too.