
View Full Version : Issues with smooth edges spotlights in OpenGL



_1337_
04-05-2015, 10:16 AM
I am currently working on adding lighting to my project and am still at the basics (getting everything to render properly) with lighting. I am using LWJGL (Java) on a Windows 7 machine, more detailed information at the bottom.

So the issue is, when trying to create a spotlight with a cutoff (limited angle affected by the light) I get very hard edges, almost like stairs, as seen in this picture:

[attachment: screenshot of the hard-edged spotlight]

There is no ambient light (set via glLightModel(GL_LIGHT_MODEL_AMBIENT, colorToFloatBuffer(new Color(0.0f, 0.0f, 0.0f, 1.0f)))). I also disabled the spot exponent (which specifies how the light is distributed within the cone) and all attenuations to make the effect clearer. The background is a big white image (made black by the lack of ambient light); I am also wondering why I even need to use an image at all in order to see the light.

And that is obviously not what it is supposed to look like (it should be a triangle-like shape, right?), and I have no idea why. Additionally, I scale the context before rendering anything in order to keep using orthographic coordinates in a perspective (GLUT) view for visual effects.

This is how I setup my light:



glEnable(GL_LIGHTING);
glEnable(GL_COLOR_MATERIAL);

glLight(glLightID, GL_AMBIENT, colorToFloatBuffer(Color.blue));
glLight(glLightID, GL_DIFFUSE, colorToFloatBuffer(Color.black));
glLight(glLightID, GL_SPECULAR, colorToFloatBuffer(Color.black));

glLight(glLightID, GL_POSITION, floatBuffer.put(position.x).put(position.y).put(0.0f).put(1.0f));

glLight(glLightID, GL_SPOT_DIRECTION, otherFloatBuffer.put(1.0f).put(0.0f).put(0.0f));
glLightf(glLightID, GL_SPOT_EXPONENT, lightSource.getSpotExponent());
glLightf(glLightID, GL_SPOT_CUTOFF, 22.5f);

glLightf(glLightID, GL_CONSTANT_ATTENUATION, 1.0f);
glLightf(glLightID, GL_LINEAR_ATTENUATION, 0.0f);
glLightf(glLightID, GL_QUADRATIC_ATTENUATION, 0.0f);
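(For what it's worth: LWJGL's glLight reads the four floats starting at the buffer's current position, so the buffers above need to be rewound with flip() after the puts, or the call uploads whatever happens to be past the data. A minimal sketch of the idea, using a heap buffer only so it runs without LWJGL on the classpath — the real call needs a direct buffer from BufferUtils.createFloatBuffer(4):)

```java
import java.nio.FloatBuffer;

public class LightBuffer {
    // Fill a 4-float position buffer and rewind it so glLight reads
    // from the start. LWJGL itself wants a *direct* buffer
    // (BufferUtils.createFloatBuffer(4)); FloatBuffer.allocate is used
    // here only so this sketch runs standalone.
    public static FloatBuffer positionBuffer(float x, float y) {
        FloatBuffer buf = FloatBuffer.allocate(4);
        buf.put(x).put(y).put(0.0f).put(1.0f); // w = 1.0 -> positional light
        buf.flip(); // position = 0, limit = 4: ready to be read
        return buf;
    }
}
```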


Also, the Forum Posting Guide told me to post specific information about the system I am using, so here it goes:

OS: Windows 7 | OS_VERSION: 6.1
JAVA_VERSION: 1.7.0_71
LWJGL_VERSION: 2.9.0
GL_VERSION: 4.3.0
GL_VENDOR: NVIDIA Corporation
GL_RENDERER: GeForce GTX 560 Ti/PCIe/SSE2

Thanks for any help in advance.

GClements
04-05-2015, 02:05 PM
The main thing to bear in mind about OpenGL's fixed-function lighting is that it calculates a colour for each vertex, and those colours are linearly interpolated across the polygon.

How well this works depends upon the resolution of the geometry relative to the lighting, i.e. whether you have enough vertices to adequately sample the illumination.

The number of cases where it works well in practice is rather low. Games almost (?) never use OpenGL's lighting. Early OpenGL-based games (e.g. GLQuake, Quake 2) used light maps; modern games use fragment shaders to perform lighting calculations per fragment (pixel).

_1337_
04-05-2015, 03:21 PM
Funnily enough, I found this out just a few hours ago and have since been looking into GLSL - vertex and fragment shaders - and it doesn't seem too hard, so I'll try that instead (and I can use more than 8 lights! awesome!). Thanks for pointing it out and for the additional interesting info :)

So, as kind of a followup (that doesn't really fit here, but you seem to know a lot about GLSL, so I will ask now):

I am currently setting everything up to transition from the old lighting model to GLSL shaders. I am already wondering how to create (fast) cone spotlights (spotlights with a limited angle) in GLSL?
Additionally, it would be awesome if you could tell me what, in your opinion, is the best way to render multiple lights (pass an array of lights and render in one blend pass? multiple blend passes that blend together? - I don't know how to pass an array of lights to GLSL, nor how to handle multiple lights at once).

Thank you very much for your help :)

EDIT:

Also, how is it possible to selectively apply certain lights to certain textures? (For example I don't want GUI to be affected at all, and my lights have depth ranges (every texture in that range is affected - others aren't))

Alfonse Reinheart
04-05-2015, 04:00 PM
I am already wondering how to create (fast) cone spotlights (spotlights with a limited angle) in GLSL?

Once upon a time, the answer would have been to use a projected texture or cubemap. On modern hardware however, in-shader computations are much faster than texture lookups.

So just do the math yourself. Spotlight math is pretty simple; you fade out the light's intensity based on the result of a dot product between the direction from the point towards the light and the direction of the spotlight. OpenGL's fixed-function pipeline does exponential falloff, but you can use whatever makes your scene work.
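To make that concrete, here is a minimal sketch of the dot-product test in plain Java (the GLSL version is structurally identical); spotFactor and the parameter names are illustrative, not any library's API:

```java
public class Spotlight {
    // Per-fragment spotlight factor, mirroring what the fixed-function
    // pipeline computes: compare the cosine of the angle between the
    // spot axis and the light-to-fragment direction against
    // cos(cutoff), then apply a pow() falloff inside the cone.
    public static float spotFactor(float[] lightPos, float[] spotDir,
                                   float[] fragPos, float cutoffDeg, float exponent) {
        // direction from the light towards the fragment
        float[] toFrag = normalize(new float[] {
            fragPos[0] - lightPos[0], fragPos[1] - lightPos[1], fragPos[2] - lightPos[2] });
        float cosAngle = dot(toFrag, normalize(spotDir));
        float cosCutoff = (float) Math.cos(Math.toRadians(cutoffDeg));
        if (cosAngle < cosCutoff) return 0.0f;       // outside the cone
        return (float) Math.pow(cosAngle, exponent); // falloff inside the cone
    }

    static float dot(float[] a, float[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
    static float[] normalize(float[] v) {
        float len = (float) Math.sqrt(dot(v, v));
        return new float[] { v[0]/len, v[1]/len, v[2]/len };
    }
}
```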


Additionally, it would be awesome if you could tell me what in your opinion is the best way to render multiple lights (pass an array of lights and render it in one blend pass? multiple blend passes that blend together?

There really is no "best way", as each method has its own benefits and drawbacks. Deferred rendering (https://en.wikipedia.org/wiki/Deferred_shading) is a solid solution to the problem, but it can be bandwidth intensive, and it makes multisample anti-aliasing quite expensive. There are variations on deferred rendering (light pre-pass, as explained there), which have different drawbacks. The single-light-per-pass approach can work, though it really benefits from a depth pre-pass (rendering just the depth of everything, so that only fragments that contribute to the result are executed).

In your case, I'd just start with whatever works. As you start to understand the performance concerns your program will encounter, you'll start to see what the best solution for you is.


I don't know how to pass an array of lights to GLSL

The same way you pass an array of anything to GLSL.

Normally, I would suggest a simple UBO (https://www.opengl.org/wiki/Uniform_Buffer_Object), using std140 layout (https://www.opengl.org/wiki/Interface_Block_%28GLSL%29#Buffer_backed). However, I see that you're using LWJGL, which means Java. That makes it a bit more difficult to pass structured data via buffer objects. Not impossible, just a bit more difficult to work with than in C or C++, where you can just do some pointer casting and memory copies.

So it would probably be easier to use an array of uniforms and call glProgramUniform (or glUniform if you want to do it old-school). Your data in GLSL would preferably be structured as an array of basic types:



#define MAX_NUM_LIGHTS 4

uniform int numLights;
uniform vec3 lightPositions[MAX_NUM_LIGHTS];
uniform vec3 lightIntensities[MAX_NUM_LIGHTS];


And in OpenGL, you would get the uniform locations (https://www.opengl.org/wiki_132/index.php?title=Uniform_Location) for 'lightPositions' and 'lightIntensities', then call glProgramUniform3fv (or the LWJGL equivalent). This function can take an array of vec3's to upload.
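For example, packing the per-light positions into the tightly packed float array that glUniform3fv-style calls expect could look like this (a hypothetical helper, not LWJGL API):

```java
public class UniformPacking {
    // Pack per-light positions (x, y, z each) into the tightly packed
    // float array that glUniform3fv / glProgramUniform3fv expect:
    // three consecutive floats per vec3 array element.
    public static float[] flatten(float[][] positions) {
        float[] packed = new float[positions.length * 3];
        for (int i = 0; i < positions.length; i++) {
            packed[i * 3]     = positions[i][0];
            packed[i * 3 + 1] = positions[i][1];
            packed[i * 3 + 2] = positions[i][2];
        }
        return packed;
    }
}
```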


nor how to handle multiple lights at once

Lighting is additive, so just take the sum of the result computed from each light.

GClements
04-05-2015, 07:20 PM
I am currently setting everything up to transition from the old lighting models to GLSL's shaders. I am already wondering how to create (fast) cone spotlights (spotlights with a limited angle) in GLSL?

Subtract the surface position from the light position to get a direction, calculate the dot product between that and the light's direction to obtain the cosine of the angle between them. Clamp to the positive range (i.e. negative values become zero). Then you can either use a step or smoothstep function for a (relatively) hard edge, or any other function for a smooth falloff.
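As a concrete sketch of that recipe, here is the smoothstep variant in plain Java (GLSL has smoothstep built in; the 0.02 band width is an arbitrary example value, not anything prescribed):

```java
public class SpotEdge {
    // GLSL-style smoothstep: 0 below edge0, 1 above edge1,
    // a smooth Hermite blend in between.
    public static float smoothstep(float edge0, float edge1, float x) {
        float t = Math.max(0f, Math.min(1f, (x - edge0) / (edge1 - edge0)));
        return t * t * (3f - 2f * t);
    }

    // Soft spotlight edge: full intensity well inside the cone, fading
    // to zero at the cutoff. cosAngle is the clamped dot product from
    // the recipe above; 0.02 is an example width for the soft band.
    public static float softCone(float cosAngle, float cosCutoff) {
        return smoothstep(cosCutoff, cosCutoff + 0.02f, Math.max(0f, cosAngle));
    }
}
```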


Additionally, it would be awesome if you could tell me what in your opinion is the best way to render multiple lights (pass an array of lights and render it in one blend pass? multiple blend passes that blend together?

It depends. If you have a lot of overdraw, deferred rendering may be worthwhile (as it means that you only perform lighting calculations on visible surfaces, not on occluded surfaces). Additionally, if each light only affects a small portion of the scene, tiled rendering can reduce the amount of computation required (as you can completely ignore lights which don't affect the current tile).



- I don't know how to pass an array of lights to GLSL

Use a uniform buffer object to supply the data for an array of structures (similar to the definition of gl_LightSource in the compatibility profile, although you probably won't need as many fields).



nor how to handle multiple lights at once).

Just add together all of the contributions from the individual lights.



Also, how is it possible to selectively apply certain lights to certain textures? (For example I don't want GUI to be affected at all, and my lights have depth ranges (every texture in that range is affected - others aren't))
You'd normally draw the GUI in a separate draw call, so you can just change the array of light sources for the GUI (or use a different shader program altogether).

Distance limits would typically be implemented using attenuation. To avoid a sudden cut-off, you can subtract a small "floor" value and clamp to positive, so lights are effectively cut off when the un-clamped value becomes negative.
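A sketch of that floor trick (the constants are example values):

```java
public class Attenuation {
    // Standard 1/(c + l*d + q*d^2) attenuation with a small "floor"
    // subtracted and the result clamped to positive, so the light
    // reaches exactly zero at a finite distance instead of fading
    // asymptotically forever.
    public static float attenuate(float dist, float c, float l, float q, float floor) {
        float a = 1.0f / (c + l * dist + q * dist * dist);
        return Math.max(0.0f, a - floor);
    }
}
```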

For culling, a simple option is to add an integer attribute containing a bitmask of the lights which affect a given surface.

Beyond that, the almost limitless capabilities offered by shaders mean that lighting is now an incredibly complex subject. Simply reading all of the papers and articles which are being written on the subject would be a full-time job.

_1337_
04-06-2015, 02:52 AM
@AlfonseReinheart and @GClements, thank you both very much for your amazingly detailed responses, they really help me a lot. The only thing I still have no idea how to go about is the distance limit; I don't really understand your explanation.

To explain my situation: it is a 2D project with a perspective view for some visual effects (bumping certain parts of the screen in and out). The thing is, the "depth" is not the z coordinate (which would mean objects get rendered at different sizes), but rather an arbitrary value I added to all my game objects to render them in a particular order (to keep using orthographic coordinates and controllable resize behaviour).

So now that I have an array of lights passed to GLSL, how could I implement depth ranges for lights? For example: anything positive is foreground ("above" the main field of action where entities and terrain are), anything negative is background, and depth 0 is the main field. I only want objects of a certain depth to be affected by lights whose depth range contains that depth (lightMinDepth <= objectDepth <= lightMaxDepth). The lights are calculated once, right? Not for every object drawn (well, in a certain way they are, but I don't know the objects' depths anymore at that point, or what they originally were). Then there are partially transparent textures through which you could of course see some background light, and so on. That still really confuses me; it would be awesome if you could clear that up for me :) Thanks again, your help is awesome.

GClements
04-06-2015, 04:15 AM
If you want to take account of the depth in the lighting calculation, then the depth needs to be passed to the shader. If you're drawing all objects with a given depth in a separate pass, you can use a uniform variable. Otherwise it will need to be a vertex attribute.

_1337_
04-06-2015, 04:39 AM
Thanks for the quick and enlightening reply. I think I understand that now. So I either pass the depth as a vertex attribute and draw everything in one go, or I draw objects sequentially, grouped by depth, so that I can pass it as a uniform variable. This may sound stupid, but is it somehow possible to pass the depth in the vertex while still keeping the original z coordinate? Like, two depth coordinates per vertex (in addition to x and y)?

GClements
04-06-2015, 06:35 AM
is it somehow possible to pass the depth in the vertex while still keeping the original z-coordinate? Like, two depth coordinates in vertex (additionally to x and y)?
You can pass many attributes for each vertex.

You could pass the depth in the w coordinate of the position attribute, or you could add a separate attribute. Using a separate attribute has the advantage that it can be a different size (e.g. the depth could be an unsigned byte while the position uses 32-bit floats). Also, the depth attribute can be "flat"-qualified, meaning that the value is guaranteed to be constant for each triangle.
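For illustration, the byte layout such an interleaved vertex might use (padding the unsigned-byte depth to keep the next vertex 4-byte aligned; the exact layout is of course up to you):

```java
public class VertexLayout {
    // Interleaved layout for a vertex with a vec3 float position and a
    // single unsigned-byte depth attribute, padded to 4 bytes.
    // These are the numbers you would hand to glVertexAttribPointer /
    // glVertexAttribIPointer as offsets and stride.
    public static final int POSITION_BYTES = 3 * Float.BYTES;   // 12
    public static final int DEPTH_OFFSET   = POSITION_BYTES;    // byte 12
    public static final int STRIDE         = POSITION_BYTES + 4; // 1 depth byte + 3 padding
}
```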

_1337_
04-06-2015, 06:42 AM
Ahhh. Thanks again. I think I know how to do it with the w coordinate, but the separate attribute seems to be a cleaner (and more flexible) solution. How is it possible to add a new attribute (and access it in GLSL)?

GClements
04-06-2015, 07:00 AM
Ahhh. Thanks again. I think I know how to do it with the w coordinate, but the separate attribute seems to be a cleaner (and more flexible) solution. How is it possible to add a new attribute (and access it in GLSL)?
Any vertex shader input (variable with the "in" qualifier, or "attribute" qualifier in older versions) is an attribute.

You can specify the attribute index with a layout qualifier, e.g.

layout(location=1) in int depth;

Or set it in the application by calling glBindAttribLocation() prior to linking. Or let the compiler allocate the index and query it in the application by calling glGetAttribLocation() after linking.

The data for the attribute is specified using glVertexAttribPointer (or glVertexAttribIPointer for an integer attribute). This is similar to glVertexPointer() etc except that the attribute is identified using its index rather than by having a separate function for each attribute.

_1337_
04-06-2015, 07:18 AM
That sounds very promising. Thanks.

At the moment I am having an issue with my fragment shader, namely that it doesn't change my bool value when I want it to change. I have this fragment shader:




uniform sampler2D texture;
uniform bool isTexture;

void main()
{
if (isTexture)
{
gl_FragColor = texture2D(texture, gl_TexCoord[0].st);
}
else
{
gl_FragColor = gl_Color;
}
}



And I got these two methods to tell it when I start using a texture and when I am done with it:



...

vertexShaderTextureAttr = glGetAttribLocation(shaderProgramCode, "texture");
usingTextureAttr = glGetAttribLocation(shaderProgramCode, "isTexture");

...

public static void startUsingTexture(int textureID)
{
glUniform1i(vertexShaderTextureAttr, textureID);
glUniform1i(usingTextureAttr, GL_TRUE);
}

public static void stopUsingTexture()
{
glUniform1i(usingTextureAttr, GL_FALSE);
}


However, the if (isTexture) branch is never executed, only the else part, meaning the bool is always false. Am I doing something wrong here? Can't I use bools like that (I tried it with an integer too; that doesn't work either)? The second uniform value, the bool, doesn't change whatever I do, while the first one seems to be working fine, as textures render when I just use the first part of the render code.

GClements
04-06-2015, 07:26 AM
At the moment I am having an issue with my fragment shader, namely that it doesn't change my bool value when I want it to change. I have this fragment shader:



uniform bool isTexture;


And I got these two methods to tell it when I start using a texture and when I am done with it:



usingTextureAttr = glGetAttribLocation(shaderProgramCode, "isTexture");


Am I doing something wrong here?

You need to use glGetUniformLocation() for uniform variables.

_1337_
04-06-2015, 07:43 AM
Now geometry (polygons) works fine but images aren't displayed at all o.O

I currently have no clue what the problem is. I suspect something is wrong with the vertex shader because whatever I do with the textures in the fragment shader doesn't change anything at all.

GClements
04-06-2015, 07:58 AM
Also:




glUniform1i(vertexShaderTextureAttr, textureID);


The value stored in a sampler uniform should be the number of the texture unit (e.g. 0 for GL_TEXTURE0) to which the texture is bound, not the texture's name (ID).

_1337_
04-06-2015, 08:32 AM
Oh, thanks again :) Do you know how one could get the number of the texture unit (given the texture and its name (ID))?

GClements
04-06-2015, 09:10 AM
Oh, thanks again :) Do you know how one could get the number of the texture unit (given the texture and its name (ID))?
It will be unit 0 unless you selected a different unit by calling glActiveTexture() at some point prior to binding.

You can query the texture ID bound to the active texture unit with glGetIntegerv(GL_TEXTURE_BINDING_2D). There isn't a specific function to do the reverse.

_1337_
04-06-2015, 09:16 AM
That's what I found out, too. But then shouldn't this




glUniform1i(vertexShaderTextureAttr, GL_TEXTURE0);
glUniform1i(usingTextureAttr, GL_TRUE);



work? I also tried it with 0 instead of GL_TEXTURE0 and with GL_TEXTURE1, 2, 3, 4.. and any other number available. Nothing changes :/

GClements
04-06-2015, 09:30 AM
That's what I found out, too. But then shouldn't this




glUniform1i(vertexShaderTextureAttr, GL_TEXTURE0);
glUniform1i(usingTextureAttr, GL_TRUE);



work? I also tried it with 0 instead of GL_TEXTURE0
It should be 0 rather than GL_TEXTURE0.

Did you bind the texture to the unit with glBindTexture()? You normally need to do this before uploading data, although it's common to unbind it afterwards.

_1337_
04-06-2015, 10:00 AM
Both seem to "work" actually, they have the same effect anyway. Thanks for pointing it out though :)

Still got two problems: alpha values don't work (transparency doesn't show; 0.5 alpha looks the same as 1.0 alpha, and so on), and when I draw a texture at a different size than its default size it is drawn multiple times (kind of).

EDIT: First one seems to be fixed, the solution was to just add



gl_FragColor = gl_Color * texture2D(texture, gl_TexCoord[0].st);


if it is a texture and it works (yay! :)).
The second problem remains though, and I am not even sure how to reproduce it because it only seems to apply to some textures.

_1337_
04-06-2015, 01:29 PM
OK, I think I found the root of the evil now: clipping. The incorrectly rendered images are all drawn as a resized pattern, and the implementation for that is to draw more than needed and then clip the rest. So it looks like my shaders kind of broke clipping, I guess? Do I need to re-implement this? Is this done in the "original" shader program? What do I have to add to make it work again?

GClements
04-06-2015, 02:49 PM
The incorrectly rendered images are all drawn as a resized pattern and the implementation on that is to draw more than needed and then clip the rest. So it looks like my shaders kind of broke clipping, I guess? Do I need to re-implement this? Is this done in the "original" shader program? What do I have to add to make it work again?
If you're getting repeated copies of the texture, it means that a) your texture coordinates aren't limited to the range 0..1, and b) the wrapping mode is GL_REPEAT (which is the default).

If you expect either a portion of the texture or the entire texture to be mapped to the polygon, check your texture coordinates. If you expect to see one copy of the texture surrounded by a border, change the wrapping mode with glTexParameter(GL_TEXTURE_WRAP_S) and GL_TEXTURE_WRAP_T.

If this was working before you used a fragment shader, what has changed since then?

_1337_
04-06-2015, 03:06 PM
I think the problem is that clipping isn't working anymore. The other textures that do not require clipping to be drawn work fine, but some objects need to fill up space with a certain image pattern and there clipping is used which my shader seems to break. I researched that and found some sources (such as https://www.opengl.org/discussion_boards/showthread.php/171914-How-to-activate-clip-planes-via-shader) stating that as custom shaders override the original functionality, clipping has to be re-implemented. It seems like that is in fact the problem but I have no idea how to check if a pixel is being clipped in the fragment shader, none of the links I found included real code examples.

Alfonse Reinheart
04-06-2015, 03:40 PM
It's not clear exactly what you mean by "clipping", since you use the term incorrectly in a couple of places. "Pixels" (I'll assume you meant "Fragments") don't get clipped. Only triangles get clipped.

Custom shaders do override the old user-defined clip plane support, but that doesn't turn off viewport clipping (https://www.opengl.org/wiki/Clipping). It doesn't affect viewport clipping at all.

If clipping is indeed some kind of problem, it's more likely that you did something wrong with your viewport than anything else.

GClements
04-06-2015, 09:03 PM
I think the problem is that clipping isn't working anymore. The other textures that do not require clipping to be drawn work fine,
Was your previous code using glClipPlane() and glEnable(GL_CLIP_PLANE0) etc?

If it wasn't, then the problem has nothing to do with clipping.

If it was, then you're correct that you need to re-implement this, but it's the vertex shader which is responsible, not the fragment shader. User clipping planes are still enabled in the same way, but rather than specifying plane coefficients using glClipPlane(), the vertex shader writes the distance inside the clip plane to gl_ClipDistance[i] (possibly using plane coefficients passed via uniforms).
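The value the vertex shader writes to gl_ClipDistance[i] is just the signed distance from the plane, dot(plane, vertex); fragments whose interpolated distance is negative are discarded. A tiny sketch of that arithmetic (in Java rather than GLSL, so it can stand alone):

```java
public class ClipPlane {
    // Signed distance of a homogeneous vertex from a clip plane
    // (a, b, c, d). This is the value a vertex shader would write to
    // gl_ClipDistance[i]; the GPU clips where the interpolated
    // distance is negative.
    public static float clipDistance(float[] plane, float[] vertex) {
        return plane[0] * vertex[0] + plane[1] * vertex[1]
             + plane[2] * vertex[2] + plane[3] * vertex[3];
    }
}
```

For example, the plane (1, 0, 0, -2) keeps everything with x >= 2.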

If you aren't using a vertex shader, then clipping should function as before.

Another thought: if you're now using the w component of the vertex position for depth, that will interact with clipping.

_1337_
04-07-2015, 03:03 AM
Awesome, you're right! I was using glClipPlane and am now using glScissor instead which works fine. Everything now looks like it did before which means I finally can look into lighting. Thank you so much :)

_1337_
04-07-2015, 07:16 AM
So I need your help once more as I am unable to find out why my shader won't compile. So I created this fragment shader:



...
uniform vec4 lightColor[MAX_LIGHTS];

uniform int minAffectedDepth[MAX_LIGHTS];
uniform int maxAffectedDepth[MAX_LIGHTS];

uniform float intensity[MAX_LIGHTS];
...

void main()
{
vec4 ownColor = gl_Color;
vec4 color = vec4(0.0, 0.0, 0.0, 1.0);

if (isTexture)
{
ownColor *= texture2D(texture, gl_TexCoord[0].st);
}

for (int i = 0; i < numLights; i++)
{
if (myDepth >= minAffectedDepth[i] && myDepth <= maxAffectedDepth[i])
{
if (isSpotLight[i])
{

}
else
{
color += ownColor * lightColor[i] * intensity[i]; // <--- this line
}
}
}

gl_FragColor = color;
}

The shader without the line does what it is supposed to do: turn everything black (no light). But when I add the line



color += ownColor * lightColor[i] * intensity[i];


the shader doesn't compile anymore and doesn't even give any error messages.
At first I thought it was because I was using arrays wrong, but the minAffectedDepth[] check works just fine and that is an array access too. What am I doing wrong?

GClements
04-07-2015, 08:57 AM
the shader doesn't compile anymore and doesn't even give any error messages.
At first I thought it was because I was using arrays wrong, but the minAffectedDepth[] check works just fine and that is an array access too. What am I doing wrong?

I'm not sure.

But you might be exceeding the maximum number of uniform components in the default uniform block. Uniforms are optimised away if they're not used, so simply adding the declaration won't cause compilation to fail even if using it will. If that's what's happening, then you'll need to use an explicit uniform block (glUniformBlockBinding() etc in OpenGL 3.1+) backed by a uniform buffer object.
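One std140 detail worth knowing before going that route: array elements are padded to a 16-byte (vec4) boundary, so arrays of floats, ints, vec3s and vec4s all end up with the same 16-byte per-element stride. A small sketch of that arithmetic:

```java
public class Std140 {
    // std140 array stride: every array element is rounded up to a
    // 16-byte (vec4) boundary. Packing lights as vec4s therefore
    // wastes no space, while an array of bare floats wastes 12 bytes
    // per element.
    public static int arrayBytes(int elementBaseBytes, int count) {
        int stride = ((elementBaseBytes + 15) / 16) * 16; // round up to 16
        return stride * count;
    }
}
```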

Also, you need to check the status (and any error messages) for both compilation and linking, as errors may occur at either phase.

_1337_
04-07-2015, 10:18 AM
I checked the linkage status, and it already fails to link the program, while the individual compilation of the shaders works fine. Also, the maximum number of uniforms in the fragment shader (according to my OpenGL) is 35657, so I am not exceeding that. Any idea what could be causing the linking error? o.O

EDIT: I just tested whether maybe the error log gets overwritten after each step - it does. So here is the linking error I am getting:

Internal error: assembly compile error for fragment shader at offset 79443:
-- error message --
line 1704, column 35: error: invalid local parameter number
line 1710, column 18: error: out of bounds array access
line 1719, column 19: error: out of bounds array access
line 1724, column 15: error: out of bounds array access
line 1725, column 15: error: out of bounds array access
line 1729, column 25: error: offset for relative array access outside supported range
line 1734, column 27: error: offset for relative array access outside supported range
-- internal assembly text --
!!NVfp5.0
OPTION NV_shader_atomic_float;
# cgc version 3.1.0001, build date Jan 18 2013
# command line args:
#vendor NVIDIA Corporation
#version 3.1.0.1
#profile gp5fp
#program main
#semantic numLights
#semantic lightPosition
#semantic lightColor
#semantic minAffectedDepth
#semantic maxAffectedDepth
#semantic isSpotLight
#semantic spotDirection
#semantic spotExponent
#semantic spotCutOff
#semantic spotCosCutOff
#semantic intensity
#semantic constantAttenuation
#semantic linearAttenuation
#semantic quadraticAttenuation
#semantic texture
#semantic isTexture
#semantic myDepth
#var float4 gl_Color : $vin.COLOR0 : COL0 : -1 : 1
#var float4 gl_TexCoord[0] : $vin.TEX0 : TEX0 : -1 : 1
#var float4 gl_TexCoord[1] : : : -1 : 0
#var float4 gl_TexCoord[2] : : : -1 : 0
#var float4 gl_TexCoord[3] : : : -1 : 0
#var float4 gl_TexCoord[4] : : : -1 : 0
#var float4 gl_TexCoord[5] : : : -1 : 0
#var float4 gl_TexCoord[6] : : : -1 : 0
#var float4 gl_TexCoord[7] : : : -1 : 0
#var float4 gl_FragColor : $vout.COLOR : COL0[0] : -1 : 1
#var int numLights : : c[640] : -1 : 1
#var float3 lightPosition[0] : : : -1 : 0
#var float3 lightPosition[1] : : : -1 : 0
#var float3 lightPosition[2] : : : -1 : 0
... and so on with every single array....
#var float quadraticAttenuation[125] : : : -1 : 0
#var float quadraticAttenuation[126] : : : -1 : 0
#var float quadraticAttenuation[127] : : : -1 : 0
#var sampler2D texture : : texunit 0 : -1 : 1
#var bool isTexture : : c[641] : -1 : 1
#var int myDepth : : c[642] : -1 : 1
PARAM c[643] = { program.local[0..642] };
ATTRIB fragment_texcoord[] = { fragment.texcoord[0..0] };
TEMP R0, R1, R2, R3;
TEMP RC, HC;
OUTPUT result_color0 = result.color;
TEXTURE texture0 = texture[0];
MOV.U.CC RC.x, c[641];
MOV.F R0, fragment.color;
MOV.F R1, {0, 1, 0, 0}.xxxy;
IF NE.x;
TEX.F R0, fragment.texcoord[0], texture0, 2D;
MUL.F R0, fragment.color, R0;
ENDIF;
MOV.S R3.x, {0, 0, 0, 0};
REP.S ;
SLT.S R2.x, R3, c[640];
SEQ.U R2.x, -R2, {0, 0, 0, 0};
MOV.U.CC RC.x, -R2;
BRK (GT.x);
MOV.U R2.x, R3;
SLE.S R2.y, c[642].x, c[R2.x + 256].x;
SGE.S R2.x, c[642], c[R2.x + 128];
AND.U.CC HC.x, -R2, -R2.y;
IF NE.x;
MOV.U R2.x, R3;
SEQ.U R2.x, c[R2.x + 384], {0, 0, 0, 0};
MOV.U.CC RC.x, -R2;
IF NE.x;
MOV.U R3.y, R3.x;
MUL.F R2, R0, c[R3.y];
MAD.F R1, R2, c[R3.y + 512].x, R1;
ENDIF;
ENDIF;
ADD.S R3.x, R3, {1, 0, 0, 0};
ENDREP;
MOV.F result_color0, R1;
END
# 30 instructions, 4 R-regs
(1)

Did I just somehow break OpenGL?

EDIT: I changed MAX_LIGHTS to 64 and it works. Is this an OpenGL bug?
EDIT: It definitely looks like an OpenGL bug now. The magic number seems to be 101 (works fine); everything >= 102 gives the linking error.

Alfonse Reinheart
04-07-2015, 10:59 AM
100 lights would mean, given your definitions, 700 uniform components (each vec4 counts as 4 components). That's a lot. This isn't an "OpenGL bug". You're almost certainly exceeding the implementation's uniform limits, just as GClements suggested; it's no surprise that the linker chokes on it.

Nor am I surprised that NVIDIA's multi-layered compiler gives such an obtuse error message for it...

_1337_
04-07-2015, 11:16 AM
100 lights would mean, given your definitions, 700 uniform components (each vec4 counts as 4 components). That's a lot. This isn't an "OpenGL bug". You're almost certainly exceeding the implementation's uniform limits, just as GClements suggested; it's no surprise that the linker chokes on it.


Well, GL_MAX_FRAGMENT_UNIFORM_COMPONENTS is 35657 (which is just a little more than 700 ;)) for me, as mentioned earlier. Still seems like a bug.

Alfonse Reinheart
04-07-2015, 11:25 AM
A limit that large is a bit surprising from NVIDIA hardware since, last I heard, non-block uniforms were actually compiled into the shader executable. But there it is.

So it seems more like their internal Cg compiler can't handle uniform arrays of that size, since it walked past some internal compiler limit.

You can do as GClements suggested and use a UBO instead of non-block uniforms. You'll have to work out how to deal with the difficulties of doing that in LWJGL though.

GClements
04-07-2015, 11:29 AM
Also, the maximum number of uniforms in the fragment shader (according to my OpenGL) is 35657
35657 = 0x8B49, which is the value of the enumeration constant GL_MAX_FRAGMENT_UNIFORM_COMPONENTS.

That's the "key" used to query the limit, not the value of the limit. To obtain the value, you need to call glGetIntegerv() with GL_MAX_FRAGMENT_UNIFORM_COMPONENTS as the first argument (pname).
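A quick sanity check of that claim (0x8B49 is indeed 35657):

```java
public class GlEnum {
    // 35657 is 0x8B49: the numeric value of the token
    // GL_MAX_FRAGMENT_UNIFORM_COMPONENTS itself, i.e. the *key* passed
    // to glGetIntegerv, not the queried limit.
    public static final int GL_MAX_FRAGMENT_UNIFORM_COMPONENTS = 0x8B49;
}
```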

_1337_
04-07-2015, 11:34 AM
Oh. Sorry then, stupid mistake ;). Still 2048, though. With 101 * 13 uniforms I get 1313 array uniforms + 4 others = 1317 (< 2048). Still smaller than the limit. So is it a bug, or am I just being stupid again? :)

Alfonse Reinheart
04-07-2015, 11:55 AM
With 101 * 13 uniforms I get 1313 array uniforms + 4 other is 1317 (< 2048). Still smaller than the limit.

It's not the number of uniforms; it's the number of components. One of the arrays you showed us is a vec4, and a vec4 has 4 components. So that would be 4 * 101, just for that array.

_1337_
04-07-2015, 12:03 PM
Ahhh. Ok that makes sense, thanks for pointing that out. So I was being stupid (again) ;)

So 4 + 102 (the number where it starts to throw errors) * 20 = 4 + 2040 = 2044. Is that close enough to the limit?

Alfonse Reinheart
04-07-2015, 12:56 PM
It's best not to get that close to the limits. Even if you're technically under them, there is usually a degree of fuzziness in them. For example, I'd guess that many implementations don't count sampler uniforms, since they use different resources from regular uniforms.

Yeah, it's technically a driver bug if compilation fails when you're under the limit, but with you being so close to the edge, they'd probably consider it an edge case and prioritize fixing it appropriately.

_1337_
04-07-2015, 01:25 PM
I set MAX_LIGHTS to 64 now. With that I should be safe on any platform, right?

GClements
04-07-2015, 02:14 PM
I set MAX_LIGHTS to 64 now. With that I should be safe on any platform, right?
OpenGL 2 only requires GL_MAX_FRAGMENT_UNIFORM_COMPONENTS to be at least 64. OpenGL 3 increases the minimum to 1024.

So you should be safe for OpenGL 3 or later, but not for OpenGL 2.

If you need to support OpenGL 2, query the limit and adjust MAX_LIGHTS accordingly.
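A sketch of that adjustment, using the per-light component count for the uniforms shown earlier in the thread (one vec4 plus two ints and a float; the helper names are illustrative):

```java
public class LightBudget {
    // Components per light for the uniforms shown above:
    // vec4 lightColor (4) + int minAffectedDepth (1)
    // + int maxAffectedDepth (1) + float intensity (1).
    public static final int COMPONENTS_PER_LIGHT = 4 + 1 + 1 + 1;

    // How many lights fit into the queried
    // GL_MAX_FRAGMENT_UNIFORM_COMPONENTS limit, leaving `reserved`
    // components for the non-array uniforms (numLights, isTexture, ...).
    public static int maxLights(int limit, int reserved, int perLight) {
        return (limit - reserved) / perLight;
    }
}
```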

_1337_
04-08-2015, 09:35 AM
I just hit another problem I currently have no solution for:

Ambient light works fine, but spotlights are giving me trouble, namely getting the "real" (world-coordinate) position of a given pixel, because currently the coordinates seem to be scaled wrong (an additional offset of 1 I added manually in the shader comes out as something like 100 pixels). I found several articles and posts about how you need to get the model matrix, but none of them actually showed how to do it. This post (https://www.opengl.org/discussion_boards/showthread.php/163272-How-do-I-get-a-fragments-x-y-z-in-world-coordinates-in-the-fragment-shader) suggests that you need to multiply the modelview matrix with the inverted view matrix, but it doesn't explain how to get the view matrix. There is no glGetFloat for it. How can I solve this issue?

EDIT: Specifically, the vertex-to-fragment-passed coordinates of the



varying vec3 originalPos;

void main()
{
    originalPos = vec3(gl_ModelViewMatrix * gl_Vertex); // result is a vec4; truncate to vec3
    ...


are not in the same scale (world? not sure what the word here is) as the light coordinates (which are, I think, normal world coordinates).

Thanks for any kind of help :)

GClements
04-08-2015, 12:20 PM
Any values you specify as uniforms or attributes are passed directly to the shaders without any transformation. The matrices (modelview, projection) are whatever you set them to.

For fixed-function lighting, positions and directions are specified in object (model) space but transformed immediately and stored in eye space. Lighting calculations are performed in eye space.

_1337_
04-08-2015, 01:04 PM
Thanks for your reply, but I am sorry I still don't quite understand it. So I pass the lightPosition with just the same world coordinates it has (position.x, position.y, 0.0) and the varying originalPos is set to modelViewMatrix multiplied by gl_Vertex, like this:



...
originalPos = vec3(gl_ModelViewMatrix * gl_Vertex);
...


But they somehow seem to be in totally different scales and I can't get any light calculations out of them. I calculate the vec3 lightDirection like this:



...
lightDirection = vec3(lightPosition[i] - originalPos);
distance = length(lightDirection);
...


And I then use its length to calculate the light attenuation, spotEffect and so on. So it really needs to be in the same space, but I don't know how to achieve that. I understand why it doesn't work this way, but not how to fix it. It would be awesome if you could clarify that :)

GClements
04-08-2015, 05:50 PM
Thanks for your reply, but I am sorry I still don't quite understand it. So I pass the lightPosition with just the same world coordinates it has (position.x, position.y, 0.0)

OpenGL doesn't have world coordinates. It has object coordinates (the values passed via glVertexPointer etc), which are transformed by the model-view matrix to produce eye coordinates, which are transformed by the projection matrix to produce clip coordinates.

At least, that's the case for the fixed-function pipeline. With shaders, you still have object coordinates (i.e. the original values passed into the shaders via attributes and uniforms) and clip coordinates (the values written to gl_Position). Any other coordinate system is up to the programmer, although it's common to use eye coordinates for lighting.
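The chain described above can be sketched numerically (plain Java standing in for the GL math; the matrices and the `mul` helper are illustrative, not LWJGL API):

```java
// Sketch of the transform chain: object coords are transformed by the
// model-view matrix into eye coords, then by the projection matrix into
// clip coords. Matrices are written row-major here for readability.
public class CoordSpaces {
    static double[] mul(double[][] m, double[] v) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += m[i][j] * v[j];
        return r;
    }

    public static void main(String[] args) {
        // identity model-view (no camera or object transform)
        double[][] modelView = {
            {1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}
        };
        // simple 2D orthographic projection mapping 1920x1080 to [-1, 1]
        double[][] projection = {
            {2.0 / 1920, 0, 0, -1},
            {0, 2.0 / 1080, 0, -1},
            {0, 0, -1, 0},
            {0, 0, 0, 1},
        };
        double[] objectPos = {960, 540, 0, 1};        // object coordinates
        double[] eyePos  = mul(modelView, objectPos); // eye coordinates
        double[] clipPos = mul(projection, eyePos);   // clip coordinates
        System.out.println(clipPos[0] + " " + clipPos[1]); // 0.0 0.0 (screen centre)
    }
}
```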



and the varying originalPos is set to modelViewMatrix multiplied by gl_Vertex, like this:



...
originalPos = vec3(gl_ModelViewMatrix * gl_Vertex);
...


But they somehow seem to be in totally different scales and I can't get any light calculations out of them. I calculate the vec3 lightDirection like this:



...
lightDirection = vec3(lightPosition[i] - originalPos);
distance = length(lightDirection);
...



In the above code, originalPos will be in eye coordinates, lightPosition[i] will be in whatever coordinate system you choose.

You should probably be transforming the light positions (and spot directions) by the modelview matrix before passing them to the shader via glUniform or whatever. This is how the fixed-function pipeline behaves; glLightfv(GL_POSITION) transforms the given position by the current model-view matrix and stores the resulting eye coordinates for use in subsequent lighting calculations. Similarly for GL_SPOT_DIRECTION (except that the translation component of the matrix is ignored).

Beyond that: if you're going to be using shaders, you should avoid using the OpenGL matrix functions. Generate the matrices in the application (either using your own code or a library such as GLM) and pass them as uniforms. Reading the legacy matrices out of OpenGL with glGetDoublev(GL_MODELVIEW_MATRIX) etc can have a significant performance cost (the same goes for any glGet* function; you should avoid using any of those in per-frame code).
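A sketch of that CPU-side pre-transform (plain Java; the `transform` helper and the matrix values are illustrative assumptions, not LWJGL API):

```java
// Sketch: transform a light position into eye space on the CPU before
// uploading it as a uniform, mirroring what glLightfv(GL_POSITION) does.
public class EyeSpaceLight {
    // row-major 4x4 matrix times vec4
    static float[] transform(float[][] m, float[] v) {
        float[] r = new float[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += m[i][j] * v[j];
        return r;
    }

    public static void main(String[] args) {
        float[][] modelView = {
            {1, 0, 0, 10},   // translation part: moves things +10 in x
            {0, 1, 0, 20},   // and +20 in y
            {0, 0, 1, 0},
            {0, 0, 0, 1},
        };
        // Positions use w = 1 so the translation applies;
        // spot directions use w = 0 so the translation is ignored,
        // matching the GL_SPOT_DIRECTION behaviour described above.
        float[] lightPosEye = transform(modelView, new float[]{500, 500, 0, 1});
        float[] spotDirEye  = transform(modelView, new float[]{1, 0, 0, 0});
        System.out.println(lightPosEye[0] + " " + lightPosEye[1]); // 510.0 520.0
        System.out.println(spotDirEye[0] + " " + spotDirEye[1]);   // 1.0 0.0
    }
}
```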

_1337_
04-08-2015, 11:54 PM
OpenGL doesn't have world coordinates. It has object coordinates (the values passed via glVertexPointer etc), which are transformed by the model-view matrix to produce eye coordinates, which are transformed by the projection matrix to produce clip coordinates. [...]

Thanks for the awesome reply. So, it should work if I just multiply lightPosition with the gl_ModelViewMatrix and they will be in the same coordinate space again?
Like this:



lightDirection = vec3(vec4(lightPosition[i], 1.0) * gl_ModelViewMatrix) - originalPos; // lightPosition is a vec3, so promote it to vec4 to multiply with the matrix


But somehow that doesn't work either.

GClements
04-09-2015, 04:56 AM
Thanks for the awesome reply. So, it should work if I just multiply lightPosition with the gl_ModelViewMatrix and they will be in the same coordinate space again?
Yes. Although ideally you'd want to avoid performing the transformation for each vertex. Also, if you change the model-view matrix for each object, the light will move accordingly (i.e. the light will be in a fixed position in object space, which is unlikely to be what you want).

This is why glLight() transforms the position when you set it, not during rendering.


But somehow that doesn't work either.
Well, it's hard to say what's wrong without more context.

_1337_
04-09-2015, 08:42 AM
Thanks again, I will provide more context now. I have been trying around for hours and hours and just can't get it to work at all :(

Also, if just multiplying lightPosition with the gl_ModelViewMatrix solved the problem, wouldn't that mean I could leave it out entirely, because originalPos in the fragment shader was already multiplied with the same matrix in the vertex shader? That confuses me. I have tried just about every possible combination of multiplying this with that, and nothing works.

(Always with varying vec3 originalPos in vertexShader and the light properties (x = 500, y = 500, color = yellow, intensity = 1.0, spotLight = true, spotDirection = 0, angle = 360 (another note: cutoff is angle / 2.0f), spotExponent = 0, linearAttenuation = 0.05) and applied to a 1920 * 1080 white image)

Option 1:

Using the lightPosition as given (500 x, 500 y, 0 z) and the untransformed originalPos taken straight from gl_Vertex.

Vertex Shader (both shaders just contain the parts that affect the light in this example; the others don't change and work so far anyway):



originalPos = gl_Vertex.xyz;


Fragment Shader:



lightDirection = vec3(lightPosition[i] - originalPos);
distance = length(lightDirection);
spotEffect = dot(normalize(spotDirection[i]), normalize(-lightDirection));

if (spotEffect > spotCosCutOff[i])
{
    spotEffect = pow(spotEffect, spotExponent[i]);
    attenuation = spotEffect / (1.0 + linearAttenuation[i] * distance + quadraticAttenuation[i] * distance * distance);

    color += attenuation * lightColor[i] * ownColor * intensity[i];
}


Result:

1735

Doesn't seem too bad, but doesn't really work either. The light moves with the texture, and the 360-degree angle isn't quite there: it makes no difference whether I use 180 or 360 as the angle. Other angles seem to work fine, and spotExponent, direction and attenuation seem to work too. So it's just the position and the 360-degree case (which should emit light in every direction).

Option 2:

Vertex Shader:



originalPos = vec3(gl_ModelViewMatrix * gl_Vertex);


Fragment Shader:



same as above


Result:

Completely dark screen at x500 and y500. At x0 and y0 it looks like this:

1736

Weirder than before. With a spotExponent of 1, however, we discover that the light source is somehow outside of the screen (even though it should be at x0 y0 now):

1737

It also reacts to position changes and all sorts of factors, though on a totally different scale (a change of 1 in x looks like 100 pixels). The 360-degree angle seems to kind of work here, but I can't be sure because the light is offscreen. A spot direction of 180 degrees (i.e. x = -1, y = 0) just flips the whole image horizontally, which is not what should happen (it should stay the same, because spotDirection shouldn't matter at a 360-degree angle). When changing spotExponent to 0 and linearAttenuation to 0.1, it almost looks as if a spotExponent of zero flips the light again, now making it seem almost correct, apart from only being halfway there:

1738

But heyyy, the light position now at least seems to be unaffected by the texture position.

Option 3:

Only multiply light position with modelviewmatrix. Original light properties restored.

Vertex Shader:



originalPos = gl_Vertex.xyz;


Fragment Shader:



lightDirection = vec3(vec4(lightPosition[i], 0.0) * gl_ModelViewMatrix) - originalPos; // lightPosition is a vec3, so I cast it to vec4 (with 0.0 as w) to multiply with gl_ModelViewMatrix, then back to vec3
// the other part remains the same


Result:

1739

The part I marked should demonstrate the distance that 500 x and 500 y now represent. It kind of works too, but not really. The 360-degree case doesn't work either, and the position moves with the texture. spotDirection and so on work, but 360 and 180 degrees still behave the same. How can that even happen? Isn't cos(180) (180 being the cutoff, i.e. angle / 2.0f) equal to -1.0, so basically every other cosine should be greater than it?
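The cutoff cosines can be sanity-checked with a quick calculation (a sketch, using the thread's cutoff = angle / 2 convention; the `spotCosCutoff` helper name is illustrative):

```java
// Quick check of the spot cutoff cosines discussed above,
// using the convention cutoff = angle / 2.
public class CutoffCheck {
    static double spotCosCutoff(double angleDegrees) {
        return Math.cos(Math.toRadians(angleDegrees / 2.0));
    }

    public static void main(String[] args) {
        // angle 360 -> cutoff 180 -> cos = -1: spotEffect > -1 holds for
        // every fragment except one exactly opposite the spot direction,
        // so the light should indeed cover all directions.
        System.out.println(spotCosCutoff(360)); // -1.0
        // angle 180 -> cutoff 90 -> cos = 0: only the half-space in front
        // of the spot direction passes the test. The two cases SHOULD
        // differ; if they don't, the cutoff uniform is probably not
        // reaching the shader with these values.
        System.out.println(spotCosCutoff(180));
    }
}
```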

Option 4:

#1 and #2 combined.

Vertex Shader:



originalPos = gl_ModelViewMatrix * gl_Vertex;


Fragment Shader:



lightDirection = vec3(vec4(lightPosition[i], 0.0) * gl_ModelViewMatrix) - originalPos;


Result:

http://imgur.com/EDDGOXV (can't add more than 5 images per post)

Yes, it really is the whole screen. With x0 and y0 it looks like this:

http://i.imgur.com/MdnOXeA.png

Still not really better though.

Anyone have any ideas? It would be super awesome if someone could tell me why it isn't working; I have been working on this for days now and nothing really works.
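For what it's worth, the consistent-space version of the calculation the thread keeps circling around could be sketched like this (plain Java mirroring the shader math; the names and matrix values are illustrative assumptions; the key points are matrix-on-the-left multiplication and w = 1 for positions):

```java
// Sketch: keep the fragment position and the light position in the SAME
// space by transforming BOTH with the same modelview matrix (M * v, w = 1),
// then subtracting, as in lightDirection = lightPos - originalPos.
public class SameSpaceLighting {
    static double[] mul(double[][] m, double[] v) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += m[i][j] * v[j];
        return r;
    }

    // light minus fragment, both pushed through the same modelview matrix
    static double[] lightDirection(double[][] modelView, double[] vertex, double[] lightPos) {
        double[] fragEye  = mul(modelView, vertex);
        double[] lightEye = mul(modelView, lightPos);
        return new double[]{
            lightEye[0] - fragEye[0],
            lightEye[1] - fragEye[1],
            lightEye[2] - fragEye[2],
        };
    }

    public static void main(String[] args) {
        double[][] modelView = {
            {1, 0, 0, -200}, // some camera/scene transform
            {0, 1, 0, -100},
            {0, 0, 1, 0},
            {0, 0, 0, 1},
        };
        // The translation cancels out: the direction is the plain offset
        // between light (500, 500) and fragment (700, 600), regardless of
        // the camera transform, because both are in the same space.
        double[] dir = lightDirection(modelView,
                new double[]{700, 600, 0, 1},
                new double[]{500, 500, 0, 1});
        System.out.println(dir[0] + " " + dir[1]); // -200.0 -100.0
    }
}
```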