GLSL Specular per-vertex lighting

Hello,

I recently started looking into GLSL and I’ve run into a bit of a problem while trying to implement per-vertex lighting. I’ve worked through the maths with pen and paper and compared my code to numerous sources online, yet I can’t find where I’ve gone wrong. Ambient and diffuse lighting are working fine (if I set either to 0, the shading changes accordingly). Specular lighting, however, is behaving oddly: I can’t say it’s not working at all… I think it’s best if I post the full shader code first and then explain what I’ve found.

vertex shader:

#version 400

layout (location = 0) in vec3 vertexPosition_;
layout (location = 2) in vec3 vertexNormal_;

out vec3 lightIntensity_;

uniform MainMatrices {
	mat4 projectionMatrix_;
	mat4 modelviewMatrix_;
	mat3 normalMatrix_;
};

uniform LightProperties {
	// position of the light source in eye coordinates
	vec4 lightPosition_;
	// ambient light intensity
	vec3 La_;
	// diffuse light intensity
	vec3 Ld_;
	// specular light intensity
	vec3 Ls_;
};

uniform MaterialProperties {
	// ambient reflectivity
	vec3 Ka_;
	// diffuse reflectivity
	vec3 Kd_;
	// specular reflectivity
	vec3 Ks_;
	// shininess factor (usually [1,200])
	float shininess_;
};

vec3 calculatePhongShading(in vec4 coordEye, in vec3 normal) {
	// find the normalized direction from the vertex to the light
	vec3 relPos = normalize(vec3(lightPosition_ - coordEye));

	// find the reflected vector
	vec3 ref = reflect(-relPos, normal);

	// find the "reverse" vertex direction vector (origin for eye coordinates at camera's position)
	// "reverse" needed due to the direction of the reflected vector being opposite to the eye coordinate w.r.t. normal
	vec3 vertDir = normalize(-coordEye.xyz);

	// find the ambient component
	vec3 ambient = La_ * Ka_;

	// find the dot product in advance between relPos and normal
	// saves having to recalculate it in order to check if specular component exists
	float relPosDotNormal = max(dot(relPos, normal), 0.0);

	// find the diffuse component
	vec3 diffuse = Ld_ * Kd_ * relPosDotNormal;

	// define the specular component
	vec3 specular = vec3(0.0);
	
	// see if there's any point in finding specular component
	if(relPosDotNormal > 0.0) {
		specular = Ls_ * Ks_ * pow(max(dot(ref, vertDir), 0.0), shininess_);
	}

	// final shading = ambient + diffuse + specular
	return (ambient + diffuse + specular);
}

void main() {
	// find the vertex coordinate and the normal in eye-space coordinates
	vec4 coordEye = (modelviewMatrix_ * vec4(vertexPosition_, 1.0));
	vec3 normal = normalize(normalMatrix_ * vertexNormal_);

	lightIntensity_ = calculatePhongShading(coordEye, normal);

	// find the final vertex position (reusing the eye-space coordinate from above)
	gl_Position = projectionMatrix_ * coordEye;
}

fragment shader:

#version 400

in vec3 lightIntensity_;

layout (location = 0) out vec4 fragColor_;

void main() {
	fragColor_ = vec4(lightIntensity_, 1.0);
}

That code produces the following result: http://i39.tinypic.com/snotgi.png

I’ve verified that the shader does reach the specular calculation by altering the following line (changing the max floor from 0.0 to 1.0):

// see if there's any point in finding specular component
if(relPosDotNormal > 0.0) {
	specular = Ls_ * Ks_ * pow(max(dot(ref, vertDir), 1.0), shininess_);
}

This produces the following: http://i39.tinypic.com/2pt8h2x.png

Since pow(max(dot(ref, vertDir), 1.0), shininess_) can never be less than 1.0, this forces the specular term to saturate everywhere the branch is taken. I can deduce from this that my code does enter the specular branch where it should, but the actual highlight never shows up. Ls_ and Ks_ are vec3(1.0, 1.0, 1.0) and shininess_ = 100.

I’ve been going through the same code over and over again and I can’t find where I’m going wrong. Any help appreciated.

Thanks in advance!

Such a high exponent will produce a very sharp highlight, i.e. one that is only visible for a very specific alignment of eye, surface and light source. Additionally, if your mesh is not tessellated finely enough, it is possible that no vertex actually lies inside the “specular spot”, and the per-vertex lighting approximation simply breaks down.
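
To put a rough number on it: the specular falloff is pow(cos(theta), shininess), where theta is the angle between the reflection vector and the view direction. For shininess = 100 that drops to half intensity already at theta = acos(0.5^(1/100)) ≈ 6.7 degrees, so unless a vertex happens to lie within a few degrees of the perfect reflection direction, the interpolated specular contribution is essentially zero everywhere.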

Thanks for your quick reply!

That does indeed seem* to be the case. Lower shininess values: http://i40.tinypic.com/35kki7n.png

Scaling the sphere made it more obvious as well: http://i43.tinypic.com/152ezdf.png

*I don’t know why, but I’m a bit suspicious about the result… Shouldn’t the highlight be a bit closer to the “whiter” areas (i.e. the top of the sphere)? The light’s position is given (in eye space) as: glm::vec4 lightPosition(0.0f, 0.0f, -2.5f, 1.0f);
(I think that’s 2.5 units behind the camera position.) Does the image look as it should, or does it look a bit off?

Since I have a couple more questions that have been bothering me about this, I’ll add them here:

  1. I understand why one would use eye-space coordinates for the lights. However, say I had a single light that is supposed to sit at absolute world coordinates (0.0, 10.0, 0.0) - how would I go about transforming that into eye coordinates? I tried multiplying the light position by the modelview matrix, but that didn’t work. Is that the way to do it (and I should just experiment with the coordinates), or is there another way to do the transformation?

  2. Should I use structs, and then uniform variables of struct type, instead of uniform blocks? Both seem reasonable; the only difference I can see is in how the data is passed to the shaders.

Thanks in advance!

glm::vec4 lightPosition(0.0f, 0.0f, -2.5f, 1.0f);

For the usual eye-space convention that’s 2.5 units in front of the camera: you are typically looking down the negative z axis.

  1. The same way you transform anything from world into eye space: apply the view matrix :wink: (a quick sketch below)

  2. Hmm, off hand I’d say that is a case of “whatever is most convenient”. I believe there are different size limits on plain uniforms and uniform blocks, so depending on how much data needs to be passed you may have to use one or the other - it would probably be better if someone more familiar with the trade-offs weighs in on this.
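
To illustrate (1), here is a minimal GLM sketch (the variable names are just placeholders): with a lookAt-style view matrix, a world-space point light is brought into eye space like this:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// the pure camera/view transform, with no model transform mixed in
glm::mat4 viewMatrix = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),  // eye
                                   glm::vec3(0.0f, 0.0f, 0.0f),  // center
                                   glm::vec3(0.0f, 1.0f, 0.0f)); // up

// w = 1.0 marks a position; w = 0.0 would be a direction and ignore translation
glm::vec4 lightWorld(0.0f, 10.0f, 0.0f, 1.0f);
glm::vec4 lightEye = viewMatrix * lightWorld; // upload this to the shader

Multiplying by the full modelview matrix instead applies the model transform as well, effectively attaching the light to the object - that is probably why your earlier attempt gave odd results.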
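
Regarding (2), the relevant limits can be queried at runtime; a quick sketch (these are standard GL queries, nothing specific to your code):

GLint maxUniformComponents, maxBlockSize, maxBlocks;
// component limit for plain (non-block) uniforms in the vertex stage
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &maxUniformComponents);
// byte size limit for a single uniform block (at least 16KB per the spec)
glGetIntegerv(GL_MAX_UNIFORM_BLOCK_SIZE, &maxBlockSize);
// number of uniform block binding points available to the vertex stage
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_BLOCKS, &maxBlocks);

So a uniform block typically lets you pass considerably more data than plain uniforms do.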

[QUOTE=carsten neumann;1253858]For the usual eye-space convention that’s 2.5 units in front of the camera: you are typically looking down the negative z axis.

  1. The same way you transform anything from world into eye space: apply the view matrix :wink:

  2. Hmm, off hand I’d say that is a case of “whatever is most convenient”. I believe there are different size limits on plain uniforms and uniform blocks, so depending on how much data needs to be passed you may have to use one or the other - it would probably be better if someone more familiar with the trade-offs weighs in on this.[/QUOTE]

Thanks for your reply once again!

The coordinate mapping didn’t make any sense to me, so I started doubting the data I was passing to the shaders. It turns out that somewhere along the way the three main matrices get corrupted. The root cause seems to be the uniform blocks: once I take the three matrices out of their uniform block and declare them as separate uniforms, I get the picture I expected (and the coordinate system behaves as expected!): http://i40.tinypic.com/212irgi.png

As soon as I move them back into a block, I get a black screen (though if I fiddle with the nonsensical coordinates I can get the sphere to appear again). I’m not sure what I’m doing wrong… Here’s how I handle uniform blocks:

uniformBufferIndex_ starts at 0.

void GLSLProgram::generateUniformBuffer(const GLchar* uniformBlockName, const GLchar* uniformDataNames[], const GLsizei uniformDataCount, const UniformBufferData* bufferData) {
	// get the index of the uniform block
	GLuint uniformBlockIndex = glGetUniformBlockIndex(program_, uniformBlockName);

	// set up the binding for the block
	glUniformBlockBinding(program_, uniformBlockIndex, uniformBufferIndex_);

	// get the size of the uniform block
	GLint blockSize;
	glGetActiveUniformBlockiv(program_, uniformBlockIndex, GL_UNIFORM_BLOCK_DATA_SIZE, &blockSize);

	// allocate memory for the buffer
	GLubyte* blockBuffer = new GLubyte[blockSize];

	// find the indices for the uniform block's data
	GLuint* blockIndices = new GLuint[uniformDataCount];
	glGetUniformIndices(program_, uniformDataCount, uniformDataNames, blockIndices);

	// find the offsets of the uniform data
	GLint* uniformOffsets = new GLint[uniformDataCount];
	glGetActiveUniformsiv(program_, uniformDataCount, blockIndices, GL_UNIFORM_OFFSET, uniformOffsets);

	// copy the data into the buffer
	for(int i = 0; i < uniformDataCount; i++) {		
		memcpy(blockBuffer + uniformOffsets[i], bufferData[i].data_, bufferData[i].size_);
	}

	// create the uniform buffer object
	GLuint uniformBufferObject;
	glGenBuffers(1, &uniformBufferObject);
	glBindBuffer(GL_UNIFORM_BUFFER, uniformBufferObject);
	glBufferData(GL_UNIFORM_BUFFER, blockSize, blockBuffer, GL_DYNAMIC_DRAW);
	glBindBufferBase(GL_UNIFORM_BUFFER, uniformBufferIndex_, uniformBufferObject);
	
	glBindBuffer(GL_UNIFORM_BUFFER, 0);
	uniformBufferObjects_.push_back(uniformBufferObject);

	// make sure to increment the index for future uniform buffer objects
	uniformBufferIndex_++;

	// cleanup
	delete[] uniformOffsets;
	delete[] blockIndices;
	delete[] blockBuffer;
}
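
One assumption baked into the copy loop above is that every uniform can be written as a single contiguous chunk of bufferData[i].size_ bytes. For matrices that is not guaranteed: the driver may pad each column, and it reports the actual column spacing via GL_UNIFORM_MATRIX_STRIDE. For reference, a sketch of what a stride-aware copy for the mat3 might look like (this would replace the plain memcpy inside the loop; it is not what I currently do):

// query the column strides alongside the offsets (0 for non-matrix uniforms)
GLint* matrixStrides = new GLint[uniformDataCount];
glGetActiveUniformsiv(program_, uniformDataCount, blockIndices, GL_UNIFORM_MATRIX_STRIDE, matrixStrides);

// copy a tightly packed glm::mat3 (3 columns of 3 floats) column by column
const float* src = static_cast<const float*>(bufferData[i].data_);
for(int col = 0; col < 3; col++) {
	memcpy(blockBuffer + uniformOffsets[i] + col * matrixStrides[i],
	       src + col * 3, 3 * sizeof(float));
}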

And here’s how I generate the data for it:

// generate the matrices uniform block's data
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);

glm::mat4 projectionMatrix = glm::perspective(45.0f, static_cast<float>(viewport[2]) / static_cast<float>(viewport[3]), 0.1f, 100.0f);
glm::mat4 modelviewMatrix = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
//glm::mat4 scale = glm::scale(glm::mat4(1.0f), glm::vec3(0.5f));
//modelviewMatrix *= scale;

// extract the top left 3x3 from the modelview matrix
glm::mat3 normalMatrix = glm::mat3(modelviewMatrix);

// normal matrix = ((modelview3x3)^-1)^T
normalMatrix = glm::inverse(normalMatrix);
normalMatrix = glm::transpose(normalMatrix);
		
const GLchar* matricesDataNames[] = {"projectionMatrix_", "modelviewMatrix_", "normalMatrix_"};
		
GLSLProgram::UniformBufferData bufferData[3];
bufferData[0].data_ = &projectionMatrix[0][0];
bufferData[0].size_ = sizeof(glm::mat4);
bufferData[1].data_ = &modelviewMatrix[0][0];
bufferData[1].size_ = sizeof(glm::mat4);
bufferData[2].data_ = &normalMatrix[0][0];
bufferData[2].size_ = sizeof(glm::mat3);
		
	
// fill in the matrices uniform block's data
program_.generateUniformBuffer("MainMatrices", matricesDataNames, 3, bufferData);

Where the UniformBufferData is defined as:

struct UniformBufferData {
	void* data_;
	size_t size_;
};

Lighting and material properties are loaded similarly:

// generate the light uniform block's data
glm::vec4 lightPosition(0.0f, 0.0f, 30.0f, 1.0f);
glm::vec3 La(0.1f, 0.1f, 0.1f);
glm::vec3 Ld(1.0f, 1.0f, 1.0f);
glm::vec3 Ls(0.5f, 0.5f, 0.5f);

const GLchar* lightDataNames[] = {"lightPosition_", "La_", "Ld_", "Ls_"};

GLSLProgram::UniformBufferData lightBufferData[4];
lightBufferData[0].data_ = &lightPosition[0];
lightBufferData[0].size_ = sizeof(glm::vec4);
lightBufferData[1].data_ = &La[0];
lightBufferData[1].size_ = sizeof(glm::vec3);
lightBufferData[2].data_ = &Ld[0];
lightBufferData[2].size_ = sizeof(glm::vec3);
lightBufferData[3].data_ = &Ls[0];
lightBufferData[3].size_ = sizeof(glm::vec3);

// fill in the light uniform block's data
program_.generateUniformBuffer("LightProperties", lightDataNames, 4, lightBufferData);

// generate the material uniform block's data
glm::vec3 Ka(0.1f, 0.1f, 0.1f);
glm::vec3 Kd(0.7f, 0.7f, 0.7f);
glm::vec3 Ks(1.0f, 1.0f, 1.0f);
float shininess = 10.0f;

const GLchar* materialDataNames[] = {"Ka_", "Kd_", "Ks_", "shininess_"};
GLSLProgram::UniformBufferData materialBufferData[4];
materialBufferData[0].data_ = &Ka[0];
materialBufferData[0].size_ = sizeof(glm::vec3);
materialBufferData[1].data_ = &Kd[0];
materialBufferData[1].size_ = sizeof(glm::vec3);
materialBufferData[2].data_ = &Ks[0];
materialBufferData[2].size_ = sizeof(glm::vec3);
materialBufferData[3].data_ = &shininess;
materialBufferData[3].size_ = sizeof(float);

// fill in the material uniform block's data
program_.generateUniformBuffer("MaterialProperties", materialDataNames, 4, materialBufferData);

As I said, I’m not sure what’s going on here…
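
If it helps with diagnosing this, I could also read the buffer back after uploading it to check what actually landed there - something along these lines (with uniformBufferObject and blockSize as in generateUniformBuffer above):

#include <cstdio>
#include <vector>

// read the uniform buffer back and dump its contents as floats
glBindBuffer(GL_UNIFORM_BUFFER, uniformBufferObject);
std::vector<float> contents(blockSize / sizeof(float));
glGetBufferSubData(GL_UNIFORM_BUFFER, 0, blockSize, contents.data());
glBindBuffer(GL_UNIFORM_BUFFER, 0);

for(size_t i = 0; i < contents.size(); ++i) {
	std::printf("%3zu: %f\n", i, contents[i]);
}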

Thanks in advance once more!