Issues with smooth-edged spotlights in OpenGL

I am currently working on adding lighting to my project and am still at the basics (getting everything to render properly) with lighting. I am using LWJGL (Java) on a Windows 7 machine, more detailed information at the bottom.

So the issue is, when trying to create a spotlight with a cutoff (limited angle affected by the light) I get very hard edges, almost like stairs, as seen in this picture:

[ATTACH=CONFIG]980[/ATTACH]

There is no ambient light (using

glLightModel(GL_LIGHT_MODEL_AMBIENT, colorToFloatBuffer(new Color(0.0f, 0.0f, 0.0f, 1.0f)))

) and I also disabled the spot exponent (which specifies how the light is distributed in the cone) and all attenuations to make the effect clearer. The background is a big white image (made black by the lack of ambient light; I am also wondering why I even need to use an image at all in order to see the light).

And that is obviously not what it is supposed to look like (it should be a triangle-like shape, right?), and I have no idea why. Additionally, I scale the context before rendering anything in order to keep using orthographic coordinates in a perspective (GLUT) view for visual effects.

This is how I setup my light:


	glEnable(GL_LIGHTING);
	glEnable(GL_COLOR_MATERIAL);

	glLight(glLightID, GL_AMBIENT, colorToFloatBuffer(Color.blue));
	glLight(glLightID, GL_DIFFUSE, colorToFloatBuffer(Color.black));
	glLight(glLightID, GL_SPECULAR, colorToFloatBuffer(Color.black));

	// LWJGL's glLight expects a buffer of four floats, flipped so it can be read
	floatBuffer.put(position.x).put(position.y).put(0.0f).put(1.0f);
	floatBuffer.flip();
	glLight(glLightID, GL_POSITION, floatBuffer);

	otherFloatBuffer.put(1.0f).put(0.0f).put(0.0f).put(0.0f);
	otherFloatBuffer.flip();
	glLight(glLightID, GL_SPOT_DIRECTION, otherFloatBuffer);

	glLightf(glLightID, GL_SPOT_EXPONENT, lightSource.getSpotExponent());
	glLightf(glLightID, GL_SPOT_CUTOFF, 22.5f);

	glLightf(glLightID, GL_CONSTANT_ATTENUATION, 1.0f);
	glLightf(glLightID, GL_LINEAR_ATTENUATION, 0.0f);
	glLightf(glLightID, GL_QUADRATIC_ATTENUATION, 0.0f);

Also, the Forum Posting Guide told me to post specific information about the system I am using, so here it goes:

OS: Windows 7 | OS_VERSION: 6.1
JAVA_VERSION: 1.7.0_71
LWJGL_VERSION: 2.9.0
GL_VERSION: 4.3.0
GL_VENDOR: NVIDIA Corporation
GL_RENDERER: GeForce GTX 560 Ti/PCIe/SSE2

Thanks for any help in advance.

The main thing to bear in mind about OpenGL’s fixed-function lighting is that it calculates a colour for each vertex, and those colours are linearly interpolated across the polygon.

How well this works depends upon the resolution of the geometry relative to the lighting, i.e. whether you have enough vertices to adequately sample the illumination.

In practice, the number of cases where it works well is rather low. Games almost (?) never use OpenGL’s lighting. Early OpenGL-based games (e.g. GLQuake, Quake 2) used light maps; modern games use fragment shaders to perform lighting calculations per fragment (pixel).

Funnily enough, I just found this out a few hours ago and have since been looking into GLSL - vertex and fragment shaders - and it doesn’t seem too hard, so I’ll try that instead (and I can add more than 8 lights! awesome!). Thanks for pointing it out and the additional interesting info :slight_smile:

So, as kind of a followup (it doesn’t really fit here, but you seem to know a lot about GLSL, so I will ask now):

I am currently setting everything up to transition from the old lighting model to GLSL shaders. I am already wondering how to create (fast) cone spotlights (spotlights with a limited angle) in GLSL?
Additionally, it would be awesome if you could tell me what, in your opinion, is the best way to render multiple lights (pass an array of lights and render it in one blend pass? multiple blend passes that blend together?). I don’t know how to pass an array of lights to GLSL, nor how to handle multiple lights at once.

Thank you very much for your help :slight_smile:

EDIT:

Also, how is it possible to selectively apply certain lights to certain textures? (For example, I don’t want the GUI to be affected at all, and my lights have depth ranges: every texture in that range is affected, others aren’t.)

I am already wondering how to create (fast) cone spotlights (spotlights with a limited angle) in GLSL?

Once upon a time, the answer would have been to use a projected texture or cubemap. On modern hardware however, in-shader computations are much faster than texture lookups.

So just do the math yourself. Spotlight math is pretty simple; you fade out the light’s intensity based on the result of a dot product between the direction from the point towards the light and the direction of the spotlight. OpenGL’s fixed-function pipeline does exponential falloff, but you can use whatever makes your scene work.
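As a sketch of that math in plain Java (mirroring what a fragment shader would compute per pixel; the method name and parameters are illustrative, not part of any API):

```java
// Sketch of fixed-function-style spotlight falloff in plain Java, mirroring
// what a fragment shader would compute per pixel. Names are illustrative.
public class SpotFalloff {
    // fragToLight: normalized direction from the surface point to the light.
    // spotDir: normalized direction the spotlight points in.
    // Returns 0 outside the cone, otherwise an exponential falloff.
    static double spotFactor(double[] fragToLight, double[] spotDir,
                             double cutoffDeg, double exponent) {
        // The cone test uses the direction from the light to the fragment,
        // which is just the negated fragment-to-light direction.
        double cosAngle = -(fragToLight[0] * spotDir[0]
                          + fragToLight[1] * spotDir[1]
                          + fragToLight[2] * spotDir[2]);
        double cosCutoff = Math.cos(Math.toRadians(cutoffDeg));
        if (cosAngle < cosCutoff) {
            return 0.0;                      // outside the cone: unlit
        }
        return Math.pow(cosAngle, exponent); // GL_SPOT_EXPONENT-style falloff
    }

    public static void main(String[] args) {
        // Fragment directly on the spot axis: full intensity.
        double onAxis = spotFactor(new double[]{-1.0, 0.0, 0.0},
                                   new double[]{1.0, 0.0, 0.0}, 22.5, 0.0);
        System.out.println(onAxis); // 1.0
    }
}
```

Because this runs per fragment rather than per vertex, the cone edge is as smooth as your falloff function, independent of how finely the geometry is tessellated.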

Additionally, it would be awesome if you could tell me what, in your opinion, is the best way to render multiple lights (pass an array of lights and render it in one blend pass? multiple blend passes that blend together?

There really is no “best way”, as each method has its own benefits and drawbacks. Deferred rendering is a solid solution to the problem, but it can be bandwidth intensive, and it makes multisample anti-aliasing quite expensive. There are variations on deferred rendering (such as light pre-pass) which have different drawbacks. The single-light-per-pass approach can work, though it really benefits from a depth pre-pass (rendering just the depth of everything, so that only fragments that contribute to the result are executed).

In your case, I’d just start with whatever works. As you start to understand the performance concerns your program will encounter, you’ll start to see what the best solution for you is.

I don’t know how to pass an array of lights to GLSL

The same way you pass an array of anything to GLSL.

Normally, I would suggest a simple UBO, using std140 layout. However, I see that you’re using LWJGL, which means Java. That makes it a bit more difficult to pass structured data via buffer objects. Not impossible, just a bit more difficult to work with than in C or C++, where you can just do some pointer casting and memory copies.

So it would probably be easier to use an array of uniforms and call glProgramUniform (or glUniform if you want to do it old-school). Your data in GLSL would preferably be structured as an array of basic types:


#define MAX_NUM_LIGHTS 4

uniform int numLights;
uniform vec3 lightPositions[MAX_NUM_LIGHTS];
uniform vec3 lightIntensities[MAX_NUM_LIGHTS];

And in OpenGL, you would get the uniform locations for ‘lightPositions’ and ‘lightIntensities’, then call glProgramUniform3fv (or the LWJGL equivalent). This function can take an array of vec3’s to upload.
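Packing the data on the Java side might look like this (pure java.nio, no LWJGL helpers needed; the commented-out GL call and the `lightPositions` uniform name are carried over from the snippet above as assumptions):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Sketch of packing light positions into a direct FloatBuffer for a
// glUniform3fv-style upload. The layout matches "uniform vec3 lightPositions[N]".
public class LightUpload {
    static FloatBuffer packVec3(float[][] vectors) {
        FloatBuffer buf = ByteBuffer.allocateDirect(vectors.length * 3 * 4)
                                    .order(ByteOrder.nativeOrder())
                                    .asFloatBuffer();
        for (float[] v : vectors) {
            buf.put(v[0]).put(v[1]).put(v[2]);  // tightly packed xyz triples
        }
        buf.flip();  // rewind so GL reads from the start
        return buf;
    }

    public static void main(String[] args) {
        FloatBuffer positions = packVec3(new float[][]{{1f, 2f, 3f}, {4f, 5f, 6f}});
        // With a bound program you would then do something like:
        // glUniform3(glGetUniformLocation(program, "lightPositions"), positions);
        System.out.println(positions.remaining()); // 6 floats = 2 vec3s
    }
}
```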

nor how to handle multiple lights at once

Lighting is additive, so just take the sum of the result computed from each light.
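In other words, the per-light loop is just a running sum, clamped at the end like the framebuffer would clamp it. A minimal sketch:

```java
// Minimal sketch of additive lighting: the final fragment colour is the sum
// of each light's contribution, clamped to [0, 1].
public class AdditiveLighting {
    static float[] shade(float[][] contributions) {
        float[] total = new float[3];
        for (float[] c : contributions) {
            total[0] += c[0];
            total[1] += c[1];
            total[2] += c[2];
        }
        // Clamp to [0, 1] as the framebuffer would.
        for (int i = 0; i < 3; i++) {
            total[i] = Math.min(1.0f, total[i]);
        }
        return total;
    }

    public static void main(String[] args) {
        float[] c = shade(new float[][]{{0.25f, 0.125f, 0f}, {0.5f, 0.25f, 0f}});
        System.out.println(c[0] + " " + c[1]); // 0.75 0.375
    }
}
```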

[QUOTE=1337;1265450]I am currently setting everything up to transition from the old lighting models to GLSL’s shaders. I am already wondering how to create (fast) cone spotlights (spotlights with a limited angle) in GLSL?
[/QUOTE]
Subtract the surface position from the light position to get a direction, and calculate the dot product between that and the light’s direction to obtain the cosine of the angle between them. Clamp to the positive range (i.e. negative values become zero). Then you can either use a step or smoothstep function for a (relatively) hard edge, or any other function for a smooth falloff.
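A minimal Java sketch of that recipe (vectors as plain arrays; `smoothstep` reimplemented with the same curve as GLSL’s built-in; the inner/outer angle parameters are illustrative):

```java
// Sketch of a smoothstep-edged spotlight: cosine of the angle via a dot
// product, then smoothstep between an inner and outer cone angle.
public class SmoothSpot {
    static double smoothstep(double edge0, double edge1, double x) {
        double t = Math.max(0.0, Math.min(1.0, (x - edge0) / (edge1 - edge0)));
        return t * t * (3.0 - 2.0 * t);  // same curve as GLSL's smoothstep()
    }

    // lightToFrag, spotDir: normalized. Full intensity inside innerDeg,
    // fading smoothly to zero at outerDeg.
    static double spotFactor(double[] lightToFrag, double[] spotDir,
                             double innerDeg, double outerDeg) {
        double cosAngle = lightToFrag[0] * spotDir[0]
                        + lightToFrag[1] * spotDir[1]
                        + lightToFrag[2] * spotDir[2];
        cosAngle = Math.max(0.0, cosAngle);  // clamp negatives to zero
        // Larger cosine = smaller angle, so the edges are swapped here.
        return smoothstep(Math.cos(Math.toRadians(outerDeg)),
                          Math.cos(Math.toRadians(innerDeg)),
                          cosAngle);
    }

    public static void main(String[] args) {
        // On-axis fragment: full intensity.
        System.out.println(spotFactor(new double[]{1, 0, 0},
                                      new double[]{1, 0, 0}, 20, 30)); // 1.0
    }
}
```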

[QUOTE=1337;1265450]Additionally, it would be awesome if you could tell me what, in your opinion, is the best way to render multiple lights (pass an array of lights and render it in one blend pass? multiple blend passes that blend together?
[/QUOTE]
It depends. If you have a lot of overdraw, deferred rendering may be worthwhile (as it means that you only perform lighting calculations on visible surfaces, not on occluded surfaces). Additionally, if each light only affects a small portion of the scene, tiled rendering can reduce the amount of computation required (as you can completely ignore lights which don’t affect the current tile).

Use a uniform buffer object to supply the data for an array of structures (similar to the definition of gl_LightSource in the compatibility profile, although you probably won’t need as many fields).

Just add together all of the contributions from the individual lights.

[QUOTE=1337;1265450]
Also, how is it possible to selectively apply certain lights to certain textures? (For example I don’t want GUI to be affected at all, and my lights have depth ranges (every texture in that range is affected - others aren’t))[/QUOTE]
You’d normally draw the GUI in a separate draw call, so you can just change the array of light sources for the GUI (or use a different shader program altogether).

Distance limits would typically be implemented using attenuation. To avoid a sudden cut-off, you can subtract a small “floor” value and clamp to positive, so lights are effectively cut off when the un-clamped value becomes negative.
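As a sketch of that trick (the constant names are illustrative): standard attenuation never actually reaches zero, so subtracting a small floor and clamping makes the light hit exactly zero at a finite distance.

```java
// Sketch of "attenuation minus a floor": quadratic attenuation, shifted down
// by a small constant and clamped at zero, so the light cuts off at a finite
// distance instead of fading forever.
public class CutoffAttenuation {
    static double attenuate(double distance, double quadratic, double floor) {
        double raw = 1.0 / (1.0 + quadratic * distance * distance);
        return Math.max(0.0, raw - floor);  // reaches exactly zero once raw <= floor
    }

    public static void main(String[] args) {
        // Far away, raw attenuation drops below the floor and is clamped to 0.
        System.out.println(attenuate(100.0, 0.01, 0.01)); // 0.0
    }
}
```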

For culling, a simple option is to add an integer attribute containing a bitmask of the lights which affect a given surface.
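A sketch of how such a bitmask would be consumed (plain Java standing in for the shader loop; names are illustrative):

```java
// Sketch of per-surface light culling with a bitmask: each surface carries
// an int whose bit i is set iff light i affects it, and the lighting loop
// skips lights whose bit is clear.
public class LightMask {
    static double totalIntensity(int mask, double[] lightIntensities) {
        double sum = 0.0;
        for (int i = 0; i < lightIntensities.length; i++) {
            if ((mask & (1 << i)) != 0) {  // is light i enabled for this surface?
                sum += lightIntensities[i];
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        // Lights 0 and 2 enabled (binary 101); light 1 is ignored.
        System.out.println(totalIntensity(0b101, new double[]{1.0, 10.0, 2.0})); // 3.0
    }
}
```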

Beyond that, the almost limitless capabilities offered by shaders mean that lighting is now an incredibly complex subject. Simply reading all of the papers and articles which are being written on the subject would be a full-time job.

@AlfonseReinheart and @GClements, thank you both very much for your amazingly detailed responses, they really help me a lot. The only thing I still have no idea how to go about is the distance limit; I don’t really understand your explanation.

To explain my situation: it is a 2D project with a perspective view for some visual effects (bumping certain parts of the screen in and out). The thing is, the “depth” is not the z coordinate (which would mean objects get rendered at different sizes), but rather an arbitrary value I added to all my game objects to render them in a particular order (to keep using orthographic coordinates and controllable resize behaviour).

So now that I have an array of lights passed to GLSL, how could I go about the depth ranges for lights? For example, anything positive is foreground (“above” the main field of action where entities and terrain are), anything negative is background, and depth 0 is the main field. I only want objects of a certain depth to be affected by lights that contain that depth in their depth range (lightMinDepth <= objectDepth <= lightMaxDepth). The lights are calculated once, right? Not for every object drawn (well, in a certain way they are, but by then I don’t know the objects’ depths anymore, or what they originally were). Then there are partially transparent textures through which you could of course see some background light, and so on. That still really confuses me; it would be awesome if you could clear that up for me :slight_smile: Thanks again, your help is awesome.

If you want to take account of the depth in the lighting calculation, then the depth needs to be passed to the shader. If you’re drawing all objects with a given depth in a separate pass, you can use a uniform variable. Otherwise it will need to be a vertex attribute.

Thanks for the quick and enlightening reply. I think I understand that now. So I either pass it as a vertex attribute and draw it all in one go, or I draw it sequentially, grouped by depth, so that I can pass it as a uniform variable. This may sound stupid, but is it somehow possible to pass the depth as a vertex attribute while still keeping the original z coordinate? Like two depth coordinates per vertex (in addition to x and y)?

You can pass many attributes for each vertex.

You could pass the depth in the w coordinate of the position attribute, or you could add a separate attribute. Using a separate attribute has the advantage that it can be a different size (e.g. the depth could be an unsigned byte while the position uses 32-bit floats). Also, the depth attribute can be “flat”-qualified, meaning that the value is guaranteed to be constant for each triangle.
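For instance, an interleaved vertex buffer with float positions and a one-byte depth attribute could be packed like this (pure java.nio; the 12-byte stride and padding are illustrative choices, and glVertexAttribPointer/glVertexAttribIPointer would then point at byte offsets 0 and 8):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of an interleaved vertex layout: two 32-bit float position
// components followed by a single signed-byte depth attribute, padded to
// keep the stride 4-byte aligned.
public class VertexLayout {
    static final int STRIDE = 2 * 4 + 1 + 3;  // x,y floats + depth byte + padding

    static ByteBuffer pack(float[][] positions, byte[] depths) {
        ByteBuffer buf = ByteBuffer.allocateDirect(positions.length * STRIDE)
                                   .order(ByteOrder.nativeOrder());
        for (int i = 0; i < positions.length; i++) {
            buf.putFloat(positions[i][0]);
            buf.putFloat(positions[i][1]);
            buf.put(depths[i]);       // the extra attribute: one signed byte
            buf.put(new byte[3]);     // pad to a 4-byte-aligned stride
        }
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer b = pack(new float[][]{{1f, 2f}, {3f, 4f}}, new byte[]{0, -1});
        System.out.println(b.remaining()); // 2 vertices * 12 bytes = 24
    }
}
```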

Ahhh. Thanks again. I think I know how to do it with the w-coordinate, but the separate attribute seems to be a cleaner (and more flexible) solution. How is it possible to add a new attribute (and access it in GLSL)?

Any vertex shader input (variable with the “in” qualifier, or “attribute” qualifier in older versions) is an attribute.

You can specify the attribute index with a layout qualifier, e.g.

layout(location=1) in int depth;

Or set it in the application by calling glBindAttribLocation() prior to linking. Or let the compiler allocate the index and query it in the application by calling glGetAttribLocation() after linking.

The data for the attribute is specified using glVertexAttribPointer (or glVertexAttribIPointer for an integer attribute). This is similar to glVertexPointer() etc except that the attribute is identified using its index rather than by having a separate function for each attribute.

That sounds very promising. Thanks.

At the moment I am having an issue with my fragment shader, namely that it doesn’t change my bool value when I want it to change. I have this fragment shader:



uniform sampler2D texture;
uniform bool isTexture;

void main()
{
	if (isTexture)
	{
		gl_FragColor = texture2D(texture, gl_TexCoord[0].st);
	}
	else
	{
		gl_FragColor = gl_Color;
	}
}


And I got these two methods to tell it when I start using a texture and when I am done with it:


        ...

        vertexShaderTextureAttr = glGetAttribLocation(shaderProgramCode, "texture");
        usingTextureAttr = glGetAttribLocation(shaderProgramCode, "isTexture");

        ...

	public static void startUsingTexture(int textureID)
	{
		glUniform1i(vertexShaderTextureAttr, textureID);
		glUniform1i(usingTextureAttr, GL_TRUE);
	}

	public static void stopUsingTexture()
	{
		glUniform1i(usingTextureAttr, GL_FALSE);
	}

However, the if (isTexture) branch is never executed, only the else part, meaning the bool is always false. Am I doing something wrong here? Can’t I use bools like that (I tried it with an integer too; that doesn’t work either)? The second uniform value, the bool, doesn’t change whatever I do, while the first one seems to be working fine, as I get textures to render when I just use the first part of the render code.

[QUOTE=1337;1265476]At the moment I am having an issue with my fragment shader, namely that it doesn’t change my bool value when I want it to change. I have this fragment shader:


uniform bool isTexture;

And I got these two methods to tell it when I start using a texture and when I am done with it:


        usingTextureAttr = glGetAttribLocation(shaderProgramCode, "isTexture");

Am I doing something wrong here?[/QUOTE]

You need to use glGetUniformLocation() for uniform variables.

Now geometry (polygons) works fine but images aren’t displayed at all o.O

I currently have no clue what the problem is. I suspect something is wrong with the vertex shader because whatever I do with the textures in the fragment shader doesn’t change anything at all.

Also:

The value stored in a sampler uniform should be the number of the texture unit (e.g. 0 for GL_TEXTURE0) to which the texture is bound, not the texture’s name (ID).

Oh, thanks again :slight_smile: Do you know how one could get the number of the texture unit (given the texture and its name (ID))?

It will be unit 0 unless you selected a different unit by calling glActiveTexture() at some point prior to binding.

You can query the texture ID bound to the active texture unit with glGetIntegerv(GL_TEXTURE_BINDING_2D). There isn’t a specific function to do the reverse.

That’s what I found out, too. But then shouldn’t this



		glUniform1i(vertexShaderTextureAttr, GL_TEXTURE0);
		glUniform1i(usingTextureAttr, GL_TRUE);


work? I also tried it with 0 instead of GL_TEXTURE0 and with GL_TEXTURE1, 2, 3, 4… and any other number available. Nothing changes :confused:

[QUOTE=1337;1265484]That’s what I found out, too. But then shouldn’t this



		glUniform1i(vertexShaderTextureAttr, GL_TEXTURE0);
		glUniform1i(usingTextureAttr, GL_TRUE);


work? I also tried it with 0 instead of GL_TEXTURE0[/QUOTE]
It should be 0 rather than GL_TEXTURE0.

Did you bind the texture to the unit with glBindTexture()? You normally need to do this before uploading data, although it’s common to unbind it afterwards.

Both seem to “work” actually, they have the same effect anyway. Thanks for pointing it out though :slight_smile:

Still got two problems: alpha values don’t work (transparency doesn’t show; e.g. 0.5 alpha looks the same as 1.0 alpha), and when I draw a texture at a different size than its default size it is drawn multiple times (kind of).

EDIT: The first one seems to be fixed; the solution was to just use


	gl_FragColor = gl_Color * texture2D(texture, gl_TexCoord[0].st);

when it is a texture, and it works (yay! :)).
The second problem remains though, and I am not even sure how to reproduce it because it only seems to apply to some textures.