Porting OpenGL 3.3 code to a 4.1 machine: blank screen

I’ve been trying to port some code from a 3.3 machine to a 4.1 machine, and nothing gets rendered.
My uberlight shader logs don’t report anything bad, just that the vertex/fragment shaders linked.
Could it be something to do with my glEnable() calls in my window setup?

glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glEnable(GL_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_PROGRAM_POINT_SIZE);
glClearColor(0,0,0.5f,0.1f);

Or is it in the shader loading code, which btw throws no errors?

any help would be much appreciated
cheers

stefan

Hard to say. Please, post more information/code.

Do you use core profile?
glEnable(GL_ALPHA): what’s this? Shouldn’t it be glEnable(GL_ALPHA_TEST)? (And remember that GL_ALPHA_TEST is invalid in a core profile.)

Yep, indeed there is no such thing as glEnable(GL_ALPHA), and the alpha test is deprecated.

BTW, there is no need to “port” GL 3.3 code to 4.1; it should work as-is. Most probably you’re using something invalid, or one of the machines has a driver bug (hard to tell which).
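
If you actually need the old alpha-test behaviour under a core profile, the usual replacement is a discard in the fragment shader. A minimal sketch; the sampler name, output name and the 0.5 threshold are placeholders, not taken from your code:

#version 330

in vec2 TexCoords;              // whatever your varying is called
uniform sampler2D DiffuseMap;   // placeholder sampler name
out vec4 FragColor;             // placeholder output name

void main()
{
    vec4 colour = texture(DiffuseMap, TexCoords);
    if (colour.a < 0.5)         // stands in for glAlphaFunc(GL_GEQUAL, 0.5)
        discard;
    FragColor = colour;
}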

Hi there,

Corrected the GL_ALPHA faux pas; I now no longer get a blank screen, just an unlit scene. I think I’ve narrowed it down to the LightMatricesBlock uniform buffer object, but it may be in how I’m setting the vertex attributes; the relevant code is below, and I’ve posted the vertex shader for my uberlight shader at the bottom. It runs fine on my NVIDIA 3.3 machine and on the ATI 3.3 machines in the labs, but on the machines with the ATI 5700-series 4.1 cards (which I have to demo on) I get this lack of lighting.

glGetProgramInfoLog returns:
Fragment shader(s) linked, vertex shader(s) linked

Although the uberlight shader does not light the scene, the geometry shader for the normals works fine. As in, I can see the box, but it’s black and unlit.
I use uniform buffer objects in the uberlight and geometry shaders; this is the code I use for setting up the uberlight’s LightMatricesBlock. Both shaders use another UBO called GlobalMatricesBlock, which seems to work fine, so I’m guessing it must be this one that isn’t working…

/*
layout(std140) uniform LightMatricesBlock
{
    vec4 Eye;                  // 0
    mat4 WorldMatrixIT;        // 16
    mat4 WCtoLCMatrix[4];      // sizeof(GLfloat)*16 + 16
    mat4 WCtoLCMatrixIT[4];    // sizeof(GLfloat)*16*4 + 16
} LightMatrices;
*/
int NLights = 4; int vec4 = 16; int mat4 = 64; // sizes in bytes

GLuint _LightMatUBO; //Buffer objects
const int _LightMatBinding = 1; //Buffer binding point

glGenBuffers(1,& _LightMatUBO);
glBindBuffer(GL_UNIFORM_BUFFER, _LightMatUBO);
glBufferData(GL_UNIFORM_BUFFER, mat4*9 + vec4, NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_UNIFORM_BUFFER, 0);

//Bind this buffer to the ‘1’ binding point
glBindBufferBase(GL_UNIFORM_BUFFER, _LightMatBinding, _LightMatUBO);

// . . .
// later on I add the data

glBindBuffer(GL_UNIFORM_BUFFER, _LightMatUBO);

glBufferSubData(GL_UNIFORM_BUFFER, 0, vec4, _Eye.GetPointer());
glBufferSubData(GL_UNIFORM_BUFFER, vec4, mat4, _WorldMatrixit.GetPointer());
for(int i = 0; i < NLights; ++i)
{
    glBufferSubData(GL_UNIFORM_BUFFER, vec4 + mat4 + mat4*i, mat4, _WCtoLCMatrix[i].GetPointer());
    glBufferSubData(GL_UNIFORM_BUFFER, vec4 + mat4 + mat4*NLights + mat4*i, mat4, _WCtoLCMatrixit[i].GetPointer());
}

glBindBuffer(GL_UNIFORM_BUFFER, 0);

GLuint LightMatrices_Index = g->glGetUniformBlockIndex(_Program, "LightMatricesBlock");
glUniformBlockBinding(_Program, LightMatrices_Index, _LightMatBinding);
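
One sanity check I could run: since hand-computed std140 offsets are easy to get subtly wrong (and some drivers are less forgiving than others), I can ask the driver for the offsets it actually assigned and compare them with what my glBufferSubData calls compute. A rough, untested sketch against the program above (printf just for illustration, needs <cstdio>):

const GLchar* memberNames[4] = {
    "LightMatricesBlock.Eye",
    "LightMatricesBlock.WorldMatrixIT",
    "LightMatricesBlock.WCtoLCMatrix[0]",
    "LightMatricesBlock.WCtoLCMatrixIT[0]"
};
GLuint memberIndices[4];
GLint  memberOffsets[4];

// Query the indices of the block members, then their actual byte offsets.
glGetUniformIndices(_Program, 4, memberNames, memberIndices);
glGetActiveUniformsiv(_Program, 4, memberIndices, GL_UNIFORM_OFFSET, memberOffsets);

for(int i = 0; i < 4; ++i)
    printf("%s is at offset %d\n", memberNames[i], memberOffsets[i]);

If I’ve read the std140 rules right, that should place Eye at 0, WorldMatrixIT at 16, WCtoLCMatrix[i] at 80 + 64*i and WCtoLCMatrixIT[i] at 336 + 64*i, which is what the glBufferSubData calls above compute.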


In the construction of the shader, after attaching my shaders to the program and before linking it, I bind my vertex attributes:
//Vertex Attributes
g->glBindAttribLocation(_Program, 0, "vVertex");
g->glBindAttribLocation(_Program, 1, "vNormal");
g->glBindAttribLocation(_Program, 2, "vTexCoords");

This is what my uberlight vertex shader looks like:
#version 330

layout(std140) uniform GlobalMatricesBlock
{
    mat4 PerspectiveMatrix;    // 0
    mat4 ViewMatrix;           // sizeof(GLfloat)*16
    mat4 WorldMatrix;          // sizeof(GLfloat)*16*2
} GlobalMatrices;

layout(std140) uniform LightMatricesBlock
{
    vec4 Eye;                  // 0
    mat4 WorldMatrixIT;        // 16
    mat4 WCtoLCMatrix[4];      // sizeof(GLfloat)*16 + 16
    mat4 WCtoLCMatrixIT[4];    // sizeof(GLfloat)*16*4 + 16
} LightMatrices;

uniform mat4 ModelMatrix;

in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexCoords;

out vec3 LCcamera[4];
out vec3 LCpos[4];
out vec3 LCnorm[4];

smooth out vec2 vVaryingTexCoords;

void main()
{
    gl_PointSize = 5.0;
    gl_Position = GlobalMatrices.PerspectiveMatrix *
                  GlobalMatrices.ViewMatrix *
                  GlobalMatrices.WorldMatrix *
                  ModelMatrix * vVertex;

    // compute world space position and normal
    vec4 wcPos  = GlobalMatrices.WorldMatrix * ModelMatrix * vVertex;
    vec3 wcNorm = (LightMatrices.WorldMatrixIT * ModelMatrix * vec4(vNormal, 0.0)).xyz;

    // for each light, compute light coordinate system camera position,
    // vertex position and normal
    for(int i = 0; i < 4; i++)
    {
        LCcamera[i] = (LightMatrices.WCtoLCMatrix[i] * LightMatrices.Eye).xyz;
        LCpos[i]    = (LightMatrices.WCtoLCMatrix[i] * wcPos).xyz;
        LCnorm[i]   = (LightMatrices.WCtoLCMatrixIT[i] * vec4(wcNorm, 0.0)).xyz;
    }

    vVaryingTexCoords = vTexCoords;
}

Let me guess: you moved from an NVIDIA board to an ATI board?

Then your best bet is to start with very small samples and work your way towards the point where the error occurs (in your code or in the ATI driver… most probably in the ATI driver).

HTH
-chris

Edit: I just saw you mentioned that you moved to an ATI board. Then it is very likely that their shader compiler just does something very stupid. Reduce the shader to something small and enable more of the larger shader part by part. Some months ago I found a problem with loops in shaders with ATI drivers; maybe there is something similar going on.

Edit 2: please use code tags for code samples; it is unreadable in the current form.

Right! I’ve taken your advice on board and stepped through the shader, and I think I’ve found it:
by getting rid of my vertex attribute vNormal and replacing it with gl_Normal, I kinda get something lit (albeit not very well).
Now, I’m binding the vertex attributes to my shader with

 
//Vertex Attributes
g->glBindAttribLocation(_Program,0,"vVertex");
g->glBindAttribLocation(_Program,1,"vNormal");
g->glBindAttribLocation(_Program,2,"vTexCoords");

But in my draw code I’m using glBegin()/glEnd() with glVertex3f()… etc., which I guess is causing the problems.
I can’t understand why one ATI card will tidy up for me and another won’t, even when using the same driver, or why there should be a difference between ATI and NVIDIA in the first place. But it doesn’t matter.

I take it I need to specify the vertex attributes with a vertex array object, but I’m a bit confused as to how I’d go about this. What would be the pseudocode for binding a vertex attribute array to a vertex array object? Do I map, say, the vertex pointer to MyVertex?

But in my draw code I’m using glBegin()/glEnd() with glVertex3f()… etc., which I guess is causing the problems.

You appear to be using generic vertex attributes. You can’t use generic vertex attributes with “glVertex3f”; you must use glVertexAttrib with the attribute index you want to change.
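
Something like this, untested and with a hypothetical helper, matching the locations you bound (0 = vVertex, 1 = vNormal, 2 = vTexCoords). Note that setting generic attribute 0 is what actually emits the vertex inside glBegin()/glEnd(), so it has to come last:

// Hypothetical helper: one vertex via generic attributes only.
void EmitVertex(float x, float y, float z,
                float nx, float ny, float nz,
                float u, float v)
{
    glVertexAttrib3f(1, nx, ny, nz);   // vNormal
    glVertexAttrib2f(2, u, v);         // vTexCoords
    glVertexAttrib3f(0, x, y, z);      // vVertex; attribute 0 emits the vertex
}

// called between glBegin(GL_TRIANGLES) and glEnd()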

NVIDIA implementations have a tendency to offer aliasing between generic attributes and built-in attributes. So on them, glVertex* might map to glVertexAttrib(0). ATI implementations do not. And neither does the OpenGL specification.
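
And to answer the vertex array question: yes, the longer-term fix is a vertex array object with a buffer and glVertexAttribPointer calls that match those same locations. A minimal sketch, assuming one interleaved buffer; the Vertex struct, vertexData and vertexCount are placeholders, not from your code:

#include <cstddef>   // offsetof

struct Vertex { GLfloat pos[3]; GLfloat normal[3]; GLfloat uv[2]; };

// setup, once
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex), vertexData, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);  // vVertex
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, pos));
glEnableVertexAttribArray(1);  // vNormal
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glEnableVertexAttribArray(2);  // vTexCoords
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, uv));

glBindVertexArray(0);

// draw
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glBindVertexArray(0);

The attribute pointer state is captured by the VAO, so at draw time binding the VAO is enough; no glBegin()/glEnd() or glVertex* calls at all.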