Combo of Intel HD 3000 with NVidia 540M

Hi Everyone,

I have code that renders a model. It works perfectly with GLSL 1.40 on NVIDIA, but when the scene is rendered on the Intel HD 3000 (driver version 9.17.10.2867) nothing works. The Intel HD 3000 is the primary card in the OS (Windows 8, 64-bit).

After spending hours debugging and googling, I found that the following code behaves a bit differently:


CRenderEngine::__ModelMatrixUniformLocationId_Color = glGetUniformLocation(CRenderEngine::__ShaderProgramId_Color, "ModelMatrix");
CRenderEngine::__ViewMatrixUniformLocationId_Color = glGetUniformLocation(CRenderEngine::__ShaderProgramId_Color, "ViewMatrix");
CRenderEngine::__ProjectionMatrixUniformLocationId_Color = glGetUniformLocation(CRenderEngine::__ShaderProgramId_Color, "ProjectionMatrix");

The shader program is initialized correctly (it is the first program created, so its handle is 1).
On the NVIDIA chip the location values are 0, 1 and 2, but on the Intel chip they are 23855104, 23789568 and 23724032, which seems rather illogical. glGetError returns GL_NO_ERROR.
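
For what it's worth, here is a minimal sanity check that can rule out a broken link: it verifies the link status and dumps every active uniform with the location the driver assigned. This is just a sketch, not my actual engine code; DumpActiveUniforms is a made-up name, and I assume GLEW (or an equivalent loader) is already initialized:


	#include <cstdio>
	#include <GL/glew.h> // assumption: GLEW or an equivalent loader is in use

	// Verify the link and print each active uniform with its queried location.
	void DumpActiveUniforms(GLuint program)
	{
		GLint linked = GL_FALSE;
		glGetProgramiv(program, GL_LINK_STATUS, &linked);
		if (linked != GL_TRUE) {
			char log[1024];
			glGetProgramInfoLog(program, sizeof(log), NULL, log);
			printf("link failed: %s\n", log);
			return;
		}

		GLint count = 0;
		glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
		for (GLint i = 0; i < count; ++i) {
			char name[256];
			GLint size = 0;
			GLenum type = 0;
			glGetActiveUniform(program, (GLuint)i, sizeof(name), NULL, &size, &type, name);
			printf("uniform %s -> location %d\n", name, glGetUniformLocation(program, name));
		}
	}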

Fragment Shader:


#version 140
in vec4 ex_Color;
out vec4 out_Color;
void main(void)
{
   out_Color = ex_Color;
}

Vertex Shader:


#version 140
in vec4 in_Position;
in vec4 in_Color;
out vec4 ex_Color;
uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
void main(void)
{
	gl_Position = (ProjectionMatrix * ViewMatrix * ModelMatrix) * in_Position;
	ex_Color = in_Color;
}

Thanks everyone for your help in advance!

Damn me if I know the logic behind these values, but uniform locations are not indices: they can be pretty much anything, and their values have no meaning (at least in theory).
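
The only location value with a defined meaning is -1, which glGetUniformLocation returns when the uniform is inactive or the name is wrong. So: query once, store whatever comes back, and never hard-code a number. A tiny sketch (assuming program is a linked program handle and modelMatrix points to 16 floats; the variable names are made up):


	GLint modelLoc = glGetUniformLocation(program, "ModelMatrix"); // 0 on one driver, 23855104 on another - both valid
	if (modelLoc == -1) {
		// "ModelMatrix" is misspelled, or the compiler optimized it away
	}
	glUniformMatrix4fv(modelLoc, 1, GL_FALSE, modelMatrix); // always pass the stored value, never a literal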

I tried your code on my laptop (also an HD 3000), and it did work. So wherever the problem is, it's not in the shader.
Are you on the latest drivers? Can you show more code?

Render function makes the following calls:


	wglMakeCurrent(__CurrentHDC,__CurrentHGLRC);
	glUseProgram(CRenderEngine::__ShaderProgramId_Color);
	glUniformMatrix4fv(__ViewMatrixUniformLocationId_Color, 1, GL_FALSE, __ViewMatrix.m);

	//Render each model.
	for(GLuint i=0;i<CRenderEngine::__ModelCollection.size();i++) {
		CRenderEngine::__ModelCollection[i]->draw();
	}
	
	SwapBuffers(__CurrentHDC);


The draw function of the models updates the model matrix and selects the appropriate VAO for drawing:

glUniformMatrix4fv((this->_modelMatrixId)[i], 1, GL_FALSE, _modelMatrix.m);
glBindVertexArray((_vertexArrayObject)[i]);
glDrawElements((_vertexDrawMode)[i], (_indeciesCount)[i], GL_UNSIGNED_INT, (GLvoid*)0);

_vertexDrawMode currently contains only triangle draw modes.

I was able to run old-style GL code on the Intel chip today (by old style I mean the glBegin(); glVertex3f(…) … glEnd(); kind of approach), but that is not what I am looking for. It seems that once I enable the core profile and start using shaders, everything goes blank.

The drivers are the latest; I checked yesterday.

FYI: I also tried OpenGL Extensions Viewer 4.0.8, but it fails to run the rendering tests on the 3.0 and 3.1 profiles, complaining about a missing #version in the shaders. Also, please note that I am on Windows 8 (x64) right now. I just want to be sure that the problem is in the drivers and not in some subtle misconception of mine.

The more I dig the more I am certain that this is a driver problem.

And thanks for your help, I really appreciate it.

OK, problem solved. It turned out I had a bug: the rendering engine always expected the model matrix location ID to be 0, and on the Intel chip that wasn't the case.
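
In case it helps someone, the essence of the bug, reconstructed and simplified (not my literal engine code):


	// Buggy: the engine implicitly assumed the model matrix lives at location 0,
	// which just happened to be true on the NVIDIA driver.
	glUniformMatrix4fv(0, 1, GL_FALSE, _modelMatrix.m);

	// Fixed: use the location that glGetUniformLocation actually returned.
	glUniformMatrix4fv(CRenderEngine::__ModelMatrixUniformLocationId_Color,
	                   1, GL_FALSE, _modelMatrix.m);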
