You made some pretty substantial changes to the code for something that is supposed to be following the tutorial. For example, in the tutorial the camera-space surface normal was passed as a varying. In your code…
Well really, that’s likely your problem. In your code, you “calculate normal from texture coordinates”. But you didn’t pass any texture coordinates from your vertex shader. It never sets gl_TexCoord[0]. Why this isn’t a linker error, I don’t know.
This is one reason why you should always declare your shader inputs and outputs, even if they’re built-ins. That way, you can tell at a glance what your shaders read and write. It’s also why avoiding built-ins is a good idea for a beginner; it forces you to explicitly say what you’re doing.
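To illustrate, a minimal sketch of what explicit declarations look like (the varying name texCoord is mine, standing in for gl_TexCoord[0]):

// vertex shader: every output is declared up front
varying vec2 texCoord; // explicit, instead of writing gl_TexCoord[0]

void main()
{
    texCoord = gl_MultiTexCoord0.xy; // pass the texture coordinates along
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader: the matching declaration makes the dependency visible
varying vec2 texCoord;

void main()
{
    gl_FragColor = vec4(texCoord, 0.0, 1.0); // visualize the coordinates
}

If the vertex shader never wrote texCoord, the mismatch would be staring you in the face instead of hiding behind a built-in.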
That’s because I want the point sprites to stay as spheres; if I change something else, the points will appear as points. So that’s why I’m having trouble using the tutorial.
That sounds like an even bigger bug. Admittedly, I haven’t used point sprites much, so I don’t fully understand the vagaries around them. But I don’t understand how the inputs/outputs you use could affect whether it is a “sphere” or a “point”.
But your problem remains as stated. Your vertex shader does not output the correct information to the fragment shader. Until you do so, you will not get proper lighting.
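For reference, a vertex shader for this kind of sphere imposter might look something like the following. This is only a sketch: pointRadius and the posEye varying are taken from the fragment shader you posted below; the pointScale uniform is an assumption of mine.

uniform float pointRadius; // point size in world space
uniform float pointScale;  // assumed: factor converting world size to pixels
varying vec3 posEye;       // position of center in eye space

void main()
{
    posEye = vec3(gl_ModelViewMatrix * vec4(gl_Vertex.xyz, 1.0));
    float dist = length(posEye);
    gl_PointSize = pointRadius * (pointScale / dist); // size in pixels
    gl_Position = gl_ModelViewProjectionMatrix * vec4(gl_Vertex.xyz, 1.0);
    gl_FrontColor = gl_Color;
}

// the application must also glEnable(GL_POINT_SPRITE) and
// glEnable(GL_VERTEX_PROGRAM_POINT_SIZE) for gl_PointSize to take effect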
Points are not quads. They have never been quads. The varyings are not interpolated over the area of the point. Therefore, if you pass a varying, every fragment gets the same value. Which means that you cannot use a varying to compute where you are on the surface.
(note: the above is true for shaders. Fixed function works differently)
If you want to know where you are on a point, you use gl_PointCoord. You can then do interpolation as you see fit.
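As a sketch of that idea (assuming GLSL 1.20+ for gl_PointCoord, and a lightDir uniform already transformed to eye space and normalized):

uniform vec3 lightDir; // light direction in EYE space, set from code

void main()
{
    // map gl_PointCoord from [0,1] to [-1,1] across the sprite;
    // the y flip accounts for gl_PointCoord's upper-left origin
    vec3 N;
    N.xy = gl_PointCoord.xy * vec2(2.0, -2.0) + vec2(-1.0, 1.0);
    float r2 = dot(N.xy, N.xy);
    if (r2 > 1.0) discard; // outside the circle: not part of the sphere
    N.z = sqrt(1.0 - r2);  // reconstruct the front-facing sphere normal

    float diffuse = max(0.0, dot(N, lightDir));
    gl_FragColor = vec4(diffuse * vec3(0.2, 0.6, 1.0), 1.0); // tinted sphere
}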
I use point sprites to simulate a lot of spheres, over 2 million. Since it would be hard to render that many spheres using quads, I use point sprites and a shader to make them look like spheres. Now, the point sprites are always facing the user. I need them to stay fixed, so I can move around and really get the sensation of a 3D sphere.
Assuming you have corrected the problem I mentioned, you still have other problems. For example, you are doing your lighting in camera space. This means that your light direction needs to be relative to the camera. It therefore changes when the camera changes.
// pixel shader for rendering points as shaded spheres
const char *spherePixelShader = STRINGIFY(
uniform float pointRadius; // point size in world space
varying vec3 posEye; // position of center in eye space
uniform vec3 lightDir;
void main()
{
    lightDir = vec3(0.577, 0.577, 0.577);
That won’t even compile. You cannot set uniforms from a shader (except as a constant initializer). They can only be set from code.
You have a light direction in world space. This represents the direction from the object to the light. You need a light direction in camera space. Therefore, you must transform the world space light direction into camera space. This is done on the CPU; the shader is then given the light direction in camera space as a uniform.
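In rough terms, something like this on the CPU side. This is a sketch: program is assumed to be your linked shader program, and it assumes the modelview matrix holds only the camera transform (with no non-uniform scaling) at the point where you read it.

/* world-space light direction, as in your shader */
float wx = 0.577f, wy = 0.577f, wz = 0.577f;

/* fetch the current view matrix (column-major) */
float view[16];
glGetFloatv(GL_MODELVIEW_MATRIX, view);

/* rotate the direction by the upper-left 3x3 of the view matrix;
   directions ignore the translation part. column-major: view[col*4 + row] */
float ex = view[0]*wx + view[4]*wy + view[8]*wz;
float ey = view[1]*wx + view[5]*wy + view[9]*wz;
float ez = view[2]*wx + view[6]*wy + view[10]*wz;

/* renormalize and upload to the shader's uniform (sqrtf is from <math.h>) */
float len = sqrtf(ex*ex + ey*ey + ez*ez);
float eyeDir[3] = { ex/len, ey/len, ez/len };
glUseProgram(program);
glUniform3fv(glGetUniformLocation(program, "lightDir"), 1, eyeDir);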
Looks like a good tutorial, but I don’t feel like reading another one (I’ve already read about three tutorials on directional lighting). I’m just playing with the code right now, hoping some miracle will happen!