GLSL not behaving the same between graphics cards

Hi,

I have the following GLSL shader that works on a Mac with an NVIDIA GeForce GT 330M, on a different Mac with an ATI Radeon HD 5750, and on an Ubuntu VM inside that second Mac, but not on an Ubuntu VM inside a Windows machine with a GeForce GTX 780 (all drivers up to date). The shader is pretty basic, so I’m looking for some help figuring out what might be wrong! The vertex shader looks like this (I’m using the cocos2d-x game engine, which is where all of the CC_{x} variables are defined):


varying vec4 v_fragmentColor;
void main() {
    gl_Position = CC_PMatrix * CC_MVMatrix * a_position;
    gl_PointSize = CC_MVMatrix[0][0] * u_size * 1.5; // the 'f' suffix is invalid in GLSL 1.20
    v_fragmentColor = vec4(1, 1, 1, 1);
}

And the fragment shader:


varying vec4 v_fragmentColor;

void main() {
   gl_FragColor = texture2D(CC_Texture0, gl_PointCoord) * v_fragmentColor; // Can't see anything
   // gl_FragColor = texture2D(CC_Texture0, gl_PointCoord); // Produces the texture as expected, no problems!
   // gl_FragColor = v_fragmentColor; // Produces a white box as expected, no problems!
}

As you can see, I’m getting very strange behavior where both the sampler, CC_Texture0, and the varying vec4, v_fragmentColor, seem to work properly on their own, but multiplying them causes problems. I’m reasonably confident everything else is set up correctly because I’m seeing it work on the other systems, so it seems to be related to the graphics card, or to some undefined behavior that I’m not aware of? Also, I’m using #version 120 (which is needed for gl_PointCoord). Thanks for any help!

[QUOTE=parrotintheoven;1280045]I’m using the cocos2d-x game engine, which is where all of the CC_{x} variables are defined):
[/QUOTE]
So cocos2d-x is pre-processing your shaders to add the declarations for its own variables?

Have you tried retrieving the shader source code generated by cocos2d-x, e.g. using glGetAttachedShaders() and glGetShaderSource()?
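
Something along these lines might work (just a rough sketch; "program" is assumed to be the linked GL program handle, which cocos2d-x should be able to give you from its GLProgram object):


#include <cstdio>
#include <vector>

void dumpShaderSources(GLuint program) {
    GLsizei count = 0;
    GLuint shaders[2] = { 0, 0 };
    glGetAttachedShaders(program, 2, &count, shaders);

    for (GLsizei i = 0; i < count; ++i) {
        // GL_SHADER_SOURCE_LENGTH includes the null terminator.
        GLint length = 0;
        glGetShaderiv(shaders[i], GL_SHADER_SOURCE_LENGTH, &length);
        if (length > 0) {
            std::vector<GLchar> source(length);
            glGetShaderSource(shaders[i], length, NULL, source.data());
            printf("--- shader %u ---\n%s\n", shaders[i], source.data());
        }
    }
}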

[QUOTE=GClements;1280047]So cocos2d-x is pre-processing your shaders to add the declarations for its own variables?

Have you tried retrieving the shader source code generated by cocos2d-x, e.g. using glGetAttachedShaders() and glGetShaderSource()?[/QUOTE]

Yeah, I can get the entire shader source; I just thought it might be easier if it was just the part that seemed suspect. Here is the entire vertex shader in case it helps:


#version 120
uniform mat4 CC_PMatrix;
uniform mat4 CC_MVMatrix;
uniform mat4 CC_MVPMatrix;
uniform mat3 CC_NormalMatrix;
uniform vec4 CC_Time;
uniform vec4 CC_SinTime;
uniform vec4 CC_CosTime;
uniform vec4 CC_Random01;
uniform sampler2D CC_Texture0;
uniform sampler2D CC_Texture1;
uniform sampler2D CC_Texture2;
uniform sampler2D CC_Texture3;
//CC INCLUDES END


    attribute vec4 a_position;
    uniform float u_size;

#ifdef GL_ES
    varying lowp vec4 v_fragmentColor;
#else
    varying vec4 v_fragmentColor;
#endif

    void main() {
        gl_Position = CC_PMatrix * CC_MVMatrix * a_position;
        gl_PointSize = CC_MVMatrix[0][0] * u_size * 1.5;
        v_fragmentColor = vec4(1, 1, 1, 1);
    }

And the fragment shader:


#version 120
uniform mat4 CC_PMatrix;
uniform mat4 CC_MVMatrix;
uniform mat4 CC_MVPMatrix;
uniform mat3 CC_NormalMatrix;
uniform vec4 CC_Time;
uniform vec4 CC_SinTime;
uniform vec4 CC_CosTime;
uniform vec4 CC_Random01;
uniform sampler2D CC_Texture0;
uniform sampler2D CC_Texture1;
uniform sampler2D CC_Texture2;
uniform sampler2D CC_Texture3;
//CC INCLUDES END


    #ifdef GL_ES
    precision lowp float;
    #endif

    varying vec4 v_fragmentColor;

    void main() {
        gl_FragColor = v_fragmentColor * texture2D(CC_Texture0, gl_PointCoord);
        // gl_FragColor = texture2D(CC_Texture0, gl_PointCoord); // Produces the texture as expected, no problems!
        // gl_FragColor = v_fragmentColor; // Produces a white box as expected, no problems!
    }

Also, I want to re-emphasize that it works on some machines; it is only on the Ubuntu VM with the GeForce GTX 780 that it does not work correctly.

Thanks for the reply!

What happened to me once was that I did something wrong (passed illegal arguments to some OpenGL function) and wasn’t checking for errors consistently, and different drivers/GPUs reacted differently to the illegal argument. Some systems simply ignored the error and did what I was expecting them to do. Other systems were stricter: they generated an error and ignored the command. Others generated an error but still did something unpredictable with the garbage data. Are you sure you are always checking for errors, both from the OpenGL function invocations AND from the shader programs?
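
For example, something like this (just an illustrative sketch; the helper names are made up, not cocos2d-x or GL API):


#include <cstdio>

// Drain the GL error queue; call after any suspect GL call.
static void checkGLError(const char* where) {
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError()) {
        printf("GL error 0x%04X after %s\n", err, where);
    }
}

// Verify that a shader compiled and its program linked; print the info logs on failure.
static void checkShaderAndProgram(GLuint shader, GLuint program) {
    GLint status = GL_FALSE;
    GLchar log[1024];

    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE) {
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        printf("shader compile failed:\n%s\n", log);
    }

    glGetProgramiv(program, GL_LINK_STATUS, &status);
    if (status != GL_TRUE) {
        glGetProgramInfoLog(program, sizeof(log), NULL, log);
        printf("program link failed:\n%s\n", log);
    }
}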

I had similar thoughts, but I’m not seeing any errors from OpenGL, cocos2d-x, or the shaders.

I never had the problem myself, but I’ve heard that some drivers have trouble when you use 1 instead of 1.0 - though that is hearsay; it should be allowed since version 120.

v_fragmentColor = vec4(1, 1, 1, 1);
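
i.e. with explicit float literals:

v_fragmentColor = vec4(1.0, 1.0, 1.0, 1.0);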

Ah just tried that with no such luck. Thanks for the idea though!

You’re not using the GLES path in your shaders, right? Because the #version declarations would be a problem if you were. That said, I doubt this is related to your problem.

Just so we’re on the same page: you mean including the GLES header versus just the regular GL header, right? Yes, I think I’m doing that correctly. In fact, if I remove the #version 120 line I get a compilation error saying gl_PointCoord is not defined.

EDIT: Sorry, now I understand what you were saying. I’m pretty sure I’m not going into the GLES path, but just to make sure, I deleted those lines and nothing changed.

Right. The #version 120. Under GL-ES, that would be invalid. So I assumed you weren’t using that path.