Adding normals into fragment shader

I’ve been following a tutorial where the author creates a cube and a light source. He puts the vertex positions and the normals together in one interleaved array, like so:


    GLfloat vertices[] = {
        -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
         0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
         0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
         0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
        -0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
        -0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,
        ...

Now I’m creating a sphere, where the normals aren’t as straightforward and have to be calculated. I’ve done that, so my question is: how do I pass them into the shader program?

I have my vertex data in vdata[] and I generate the normals and put them in ndata[]. I also have indices in tindices[]. So my setup code looks like this:


  //sphere VAO
  glGenVertexArrays(1, &vao_sphere);
  glGenBuffers(1, &vbo_sphere);

  glBindVertexArray(vao_sphere);

  glBindBuffer(GL_ARRAY_BUFFER, vbo_sphere);
  glBufferData(GL_ARRAY_BUFFER, sizeof(vdata), vdata, GL_STATIC_DRAW);

  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
  glEnableVertexAttribArray(0);

  glGenBuffers(1, &index_buffer);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(tindices), tindices, GL_STATIC_DRAW);

  glBindVertexArray(0);

  // sphere normals VAO
  glGenVertexArrays(1, &vao_sphere_normals);
  glGenBuffers(1, &vbo_sphere_normals);

  glBindVertexArray(vao_sphere_normals);

  glBindBuffer(GL_ARRAY_BUFFER, vbo_sphere_normals);
  glBufferData(GL_ARRAY_BUFFER, sizeof(ndata), ndata, GL_STATIC_DRAW);

  glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
  glEnableVertexAttribArray(1);

  glBindVertexArray(0);

Then my vertex and fragment shaders look like this:

vertex shader


#version 430 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

out vec3 Normal;
out vec3 FragPos;

void main()
{
  gl_Position = projection * view * model * vec4(position, 1.0f);
  FragPos = vec3(model * vec4(position, 1.0f));
  Normal = normal;
}

fragment shader


#version 430 core

in vec3 Normal;
in vec3 FragPos;
out vec4 color;

// uniform vec3 sphereColor;
// uniform vec3 lightColor;
uniform vec3 lightPos;
uniform vec3 viewPos;

void main()
{
  vec3 lightColor = vec3(1.0f, 1.0f, 1.0f);
  vec3 sphereColor = vec3(0.0f, 0.0f, 1.0f);

  // ambient
  float ambientStrength = 0.1f;
  vec3 ambient = ambientStrength * lightColor;

  // Diffuse 
  vec3 norm = normalize(Normal);
  vec3 lightDir = normalize(lightPos - FragPos);
  float diff = max(dot(norm, lightDir), 0.0);
  vec3 diffuse = diff * lightColor;

  vec3 result = (ambient + diffuse) * sphereColor;
  color = vec4(result, 1.0f);
}

Unfortunately, only the ambient light shows up and not the diffuse.

If the sphere’s origin is at the centre, vertex normals are the same as the vertex coordinates (up to a constant scale factor). More generally, the normal is just the vertex position relative to the centre.
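For illustration, a minimal sketch of that idea with GLM (the function name and the centre parameter are mine, not from the code in this thread):


  glm::vec3 sphere_vertex_normal(const glm::vec3 &position, const glm::vec3 &centre)
  {
    // For a sphere, the outward unit normal at a vertex is the direction
    // from the centre to that vertex; with the centre at the origin this
    // reduces to glm::normalize(position).
    return glm::normalize(position - centre);
  }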

[QUOTE=michaelglaz;1266749]I have my vertex data in vdata and I generate the normals and put them in ndata. I also have indices in tindices. So my setup code looks like this:


  //sphere VAO
  glGenVertexArrays(1, &vao_sphere);
  glGenBuffers(1, &vbo_sphere);
 ....
  // sphere normals VAO
  glGenVertexArrays(1, &vao_sphere_normals);
  glGenBuffers(1, &vbo_sphere_normals);

[/QUOTE]
Both the positions and the normals must go in the same VAO. They can even go in the same VBO if you want, but they must go in the same VAO, as you can only have one VAO bound at a time (i.e. at the point that you call glDrawElements() or whatever).

Your code creates two VAOs: one for the positions and one for the normals. Depending upon which one is bound when you call glDrawElements(), you’ll either get the positions or the normals; the other attribute is effectively undefined.

The currently-bound VAO specifies the data sources for all of the attributes passed to the vertex shader.

For each attribute, a VAO stores:

- the VBO which was bound at the time of the glVertexAttribPointer() call for that attribute,
- the parameters of that glVertexAttribPointer() call,
- the enabled/disabled state (glEnableVertexAttribArray() / glDisableVertexAttribArray()), and
- any divisor specified with glVertexAttribDivisor().

In addition to the per-attribute state, the VAO also stores the current index buffer binding (from glBindBuffer(GL_ELEMENT_ARRAY_BUFFER)).
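As a sketch (not a drop-in fix): reusing the names from your code, and assuming the VAO, both VBOs and the index buffer have already been generated, one VAO can source both attributes like this:


  glBindVertexArray(vao_sphere);

  // positions -> attribute 0
  glBindBuffer(GL_ARRAY_BUFFER, vbo_sphere);
  glBufferData(GL_ARRAY_BUFFER, sizeof(vdata), vdata, GL_STATIC_DRAW);
  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
  glEnableVertexAttribArray(0);

  // normals -> attribute 1, from a different VBO but the same VAO
  glBindBuffer(GL_ARRAY_BUFFER, vbo_sphere_normals);
  glBufferData(GL_ARRAY_BUFFER, sizeof(ndata), ndata, GL_STATIC_DRAW);
  glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
  glEnableVertexAttribArray(1);

  // the element array buffer binding is recorded in the VAO as well
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(tindices), tindices, GL_STATIC_DRAW);

  glBindVertexArray(0);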

I’m having trouble rendering the sphere. It renders correctly when I have only ambient lighting, but when I send the normals through, only roughly the top-left third of the sphere renders. I’m drawing the sphere using the technique from chapter 2 of the Red Book, so my vertices and element indices look like this:


GLfloat vdata[12][3] {    
   {-X, 0.0, Z}, {X, 0.0, Z}, {-X, 0.0, -Z}, {X, 0.0, -Z},    
   {0.0, Z, X}, {0.0, Z, -X}, {0.0, -Z, X}, {0.0, -Z, -X},    
   {Z, X, 0.0}, {-Z, X, 0.0}, {Z, -X, 0.0}, {-Z, -X, 0.0} 
};

GLuint tindices[20][3] = { 
   {0,4,1}, {0,9,4}, {9,5,4}, {4,5,8}, {4,8,1},    
   {8,10,1}, {8,3,10}, {5,3,8}, {5,2,3}, {2,7,3},    
   {7,10,3}, {7,6,10}, {7,11,6}, {11,0,6}, {0,1,6}, 
   {6,1,10}, {9,0,11}, {9,11,2}, {9,2,5}, {7,2,11} 
 };

Then I set the normals with the following code:


  for(int i=0; i<20; i++)
  {
    for(int j=0; j<3; j++)
    {
      for(int k=0; k<3; k++)
      {
        ndata[i*9 + j*3 + k] = vdata[tindices[i][j]][k];
      }
    }
  }

The normal array should have exactly the same number of elements as the vertex array.

The indices in the element array are used to index into both the vertex and normal arrays.
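For this origin-centred sphere that just means filling ndata with the normalized positions, one entry per entry of vdata. A sketch (assuming ndata is declared with the same 12-entry shape as vdata):


GLfloat ndata[12][3];

for(int i=0; i<12; i++)
{
  // normal = normalized vertex position (sphere centred at the origin)
  glm::vec3 n = glm::normalize(glm::vec3(vdata[i][0], vdata[i][1], vdata[i][2]));
  ndata[i][0] = n.x;
  ndata[i][1] = n.y;
  ndata[i][2] = n.z;
}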

I think I know what I did. I thought this sphere had 20*3 faces, but there are actually only 20.

GClements, I still can’t figure out why only the top portion of the sphere is rendering. Here’s my initialization:


  //sphere VAO
  glGenVertexArrays(1, &vao_sphere);
  glGenBuffers(1, &vbo_sphere);

  glBindVertexArray(vao_sphere);

  glBindBuffer(GL_ARRAY_BUFFER, vbo_sphere);
  glBufferData(GL_ARRAY_BUFFER, sizeof(vdata), vdata, GL_STATIC_DRAW);

  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
  glEnableVertexAttribArray(0);

  glGenBuffers(1, &index_buffer);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(tindices), tindices, GL_STATIC_DRAW);
  glBufferData(GL_ARRAY_BUFFER, sizeof(ndata), ndata, GL_STATIC_DRAW);

  glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
  glEnableVertexAttribArray(1);

  glBindVertexArray(0);

Then in my GLFW loop I call:


      glBindVertexArray(vao_sphere);

      // glDrawArrays(GL_TRIANGLES, 0, 6);

      glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_INT, 0);      

      glBindVertexArray(0);

Please post the complete code for the program.


#include <cmath>
#include <iostream>
#include <GL/glew.h>
#include <GLFW/glfw3.h>

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

#include "mgl/mgl.h"
#include "camera.h"

using namespace std;

GLFWwindow* window;
const GLuint WIDTH = 800, HEIGHT = 800;
const GLuint NUM_GROUND_LINES = 80;

glm::vec3 ground[NUM_GROUND_LINES*2*2];

glm::vec3 quad_data[342];
glm::vec3 strip_data[40];
glm::vec3 lightPos(0.0, 3.0, 0.0);

Camera camera(glm::vec3(0.0f, 0.0f, 3.0f));

GLfloat square[] {
  -0.5, -4.0,  0.5, 0.0, 1.0, 0.0,
   0.5, -4.0,  0.5, 0.0, 1.0, 0.0,
  -0.5, -4.0, -0.5, 0.0, 1.0, 0.0,
   0.5, -4.0, -0.5, 0.0, 1.0, 0.0
};

GLuint program_sphere, program_ground, program_lamp;
GLuint vbo_sphere, vao_sphere;
GLuint vbo_sphere_normals, vao_sphere_normals;
GLuint vbo_poles, vao_poles;
GLuint vbo_ground, vao_ground;
GLuint index_buffer;

glm::vec3 cameraPos   = glm::vec3(0.0, 0.0, 3.0);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp    = glm::vec3(0.0f, 1.0f,  0.0f);

GLfloat yaw    = -90.0f;	
GLfloat pitch  =  0.0f;
GLfloat lastX  =  WIDTH  / 2.0;
GLfloat lastY  =  HEIGHT / 2.0;
GLfloat aspect =  45.0f;

GLfloat twist = 0;
GLfloat elevation = 0;
GLfloat azimuth = 0;

GLfloat deltaTime = 0.0f;	// Time between current frame and last frame
GLfloat lastFrame = 0.0f; 

bool firstMouse = true;
bool keys[1024];

const GLfloat X = .525731112119133606;
const GLfloat Z = .850650808352039932;

GLfloat vdata[12][3] {    
   {-X, 0.0, Z}, {X, 0.0, Z}, {-X, 0.0, -Z}, {X, 0.0, -Z},    
   {0.0, Z, X}, {0.0, Z, -X}, {0.0, -Z, X}, {0.0, -Z, -X},    
   {Z, X, 0.0}, {-Z, X, 0.0}, {Z, -X, 0.0}, {-Z, -X, 0.0} 
};

GLuint tindices[20][3] = { 
   {0,4,1}, {0,9,4}, {9,5,4}, {4,5,8}, {4,8,1},    
   {8,10,1}, {8,3,10}, {5,3,8}, {5,2,3}, {2,7,3},    
   {7,10,3}, {7,6,10}, {7,11,6}, {11,0,6}, {0,1,6}, 
   {6,1,10}, {9,0,11}, {9,11,2}, {9,2,5}, {7,2,11} 
 };

GLfloat ndata[60][3];

void init_sphere_normals()
{
  for(int i=0; i<20; i++)
  {
    for(int j=0; j<3; j++)
    {
      for(int k=0; k<3; k++)
      {
        ndata[i*3+j][k] = vdata[tindices[i][j]][k];
      }
    }
  }
}

void key_callback(GLFWwindow* window, int key, int scancode, int action, int mode)
{
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
        glfwSetWindowShouldClose(window, GL_TRUE);
    if (key >= 0 && key < 1024)
    {
      if (action == GLFW_PRESS)
      {
        keys[key] = true;
      }
      else if (action == GLFW_RELEASE)
        keys[key] = false;
    }
}

void do_movement()
{
    // Camera controls
    if (keys[GLFW_KEY_W])
        camera.ProcessKeyboard(FORWARD, deltaTime);
    if (keys[GLFW_KEY_S])
        camera.ProcessKeyboard(BACKWARD, deltaTime);
    if (keys[GLFW_KEY_A])
        camera.ProcessKeyboard(LEFT, deltaTime);
    if (keys[GLFW_KEY_D])
        camera.ProcessKeyboard(RIGHT, deltaTime);
}

void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
    if (firstMouse)
    {
        lastX = xpos;
        lastY = ypos;
        firstMouse = false;
    }

    GLfloat xoffset = xpos - lastX;
    GLfloat yoffset = lastY - ypos;  // Reversed since y-coordinates go from bottom to top

    lastX = xpos;
    lastY = ypos;

    camera.ProcessMouseMovement(xoffset, yoffset);
}

void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
    if (aspect >= 1.0f && aspect <= 45.0f)
        aspect -= yoffset;
    if (aspect <= 1.0f)
        aspect = 1.0f;
    if (aspect >= 45.0f)
        aspect = 45.0f;
}

void init_glfw()
{
	glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
	// Init GLFW
	glfwInit();

	// Set all the required options for GLFW
	glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
	glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
	glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
	glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);

	window = glfwCreateWindow(WIDTH, HEIGHT, "Sphere", nullptr, nullptr);
	glfwMakeContextCurrent(window);

  glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);

	glewExperimental = GL_TRUE;

	glewInit();

  // glfwSetCursorPosCallback(window, mouse_callback);
  // glfwSetScrollCallback(window, scroll_callback);
  glfwSetKeyCallback(window, key_callback);
  glfwSetCursorPosCallback(window, mouse_callback);

  // GLFW Options
  // glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);	

	glViewport(0, 0, WIDTH, HEIGHT);
}

void init_sphere()
{

}

void init_poles()
{
	int k = 0;

	strip_data[k] = glm::vec3(0.0, 0.0, 1.0);
	k++;

	float sin80 = sin(glm::radians(80.0));
	float cos80 = cos(glm::radians(80.0));

	for(float theta = -180; theta <= 180.0; theta += 20.0)
	{
		float thetar = glm::radians(theta);
		strip_data[k] = glm::vec3(sin(thetar)*cos80, cos(thetar)*cos80, sin80);
		k++;
	}

	strip_data[k] = glm::vec3(0.0, 0.0, -1.0);
	k++;

	for(float theta = -180; theta <= 180; theta += 20.0)
	{
		float thetar = glm::radians(theta);
		strip_data[k] = glm::vec3(sin(thetar)*cos80, cos(thetar)*cos80, sin80);
		k++;
	}
}

void init_ground()
{
  GLfloat x = -static_cast<GLfloat>(NUM_GROUND_LINES/2);
  GLfloat z = -static_cast<GLfloat>(NUM_GROUND_LINES/2);
  for(int i=0; i<NUM_GROUND_LINES*2; i += 2)
  {
    GLfloat x = NUM_GROUND_LINES/2;
    glm::vec3 vertex1 = glm::vec3(-x, -4.0, z);
    glm::vec3 vertex2 = glm::vec3(x, -4.0, z);
    ground[i] = vertex1;
    ground[i+1] = vertex2;
    z += 1.0;
  }

  for(int i=NUM_GROUND_LINES*2; i<NUM_GROUND_LINES*4; i += 2)
  {
    GLfloat z = NUM_GROUND_LINES/2;
    glm::vec3 vertex1 = glm::vec3(x, -4.0, -z);
    glm::vec3 vertex2 = glm::vec3(x, -4.0, z);
    ground[i] = vertex1;
    ground[i+1] = vertex2;
    x += 1.0;
  }
}

void init_checkerboard()
{

}

void init_opengl()
{

  program_sphere = mgl::load_shaders("sphere.vert", "sphere.frag");
  program_ground = mgl::load_shaders("ground.vert", "ground.frag");
  program_lamp = mgl::load_shaders("lamp.vert", "lamp.frag");

  init_sphere();

  init_poles();

  init_ground();

  init_sphere_normals();

  glEnable(GL_DEPTH_TEST);

  //sphere VAO
  glGenVertexArrays(1, &vao_sphere);
  glGenBuffers(1, &vbo_sphere);

  glBindVertexArray(vao_sphere);

  glBindBuffer(GL_ARRAY_BUFFER, vbo_sphere);
  glBufferData(GL_ARRAY_BUFFER, sizeof(vdata), vdata, GL_STATIC_DRAW);

  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
  glEnableVertexAttribArray(0);

  glGenBuffers(1, &index_buffer);
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(tindices), tindices, GL_STATIC_DRAW);

  glBufferData(GL_ARRAY_BUFFER, sizeof(ndata), ndata, GL_STATIC_DRAW);

  glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
  glEnableVertexAttribArray(1);


  glBindVertexArray(0);

  //ground VAO
  glGenVertexArrays(1, &vao_ground);
  glGenBuffers(1, &vbo_ground);

  glBindVertexArray(vao_ground);
  
  glBindBuffer(GL_ARRAY_BUFFER, vbo_ground);
  glBufferData(GL_ARRAY_BUFFER, sizeof(square), square, GL_STATIC_DRAW);

  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6*sizeof(GLfloat), 0);
  glEnableVertexAttribArray(0);

  glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6*sizeof(GLfloat), (GLvoid*)(3*sizeof(GLfloat)));
  glEnableVertexAttribArray(1);

  glBindVertexArray(0);  
}

int main()
{
	init_glfw();

	init_opengl();

  while(!glfwWindowShouldClose(window))
  {
    GLfloat currentFrame = glfwGetTime();
    deltaTime = currentFrame - lastFrame;
    lastFrame = currentFrame;

    glfwPollEvents();
    do_movement();

  	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    	glm::mat4 view;
    	glm::mat4 projection;
    	glm::mat4 model;

    	projection = glm::perspective(45.0f, (GLfloat)WIDTH/(GLfloat)HEIGHT, 0.1f, 100.0f);

      glUseProgram(program_ground);

      GLint projLoc = glGetUniformLocation(program_ground, "projection");
      GLint viewLoc = glGetUniformLocation(program_ground, "view");
      GLint modelLoc = glGetUniformLocation(program_ground, "model");
      GLint groundColorLoc = glGetUniformLocation(program_ground, "groundColor");
      // GLint lightColorLoc = glGetUniformLocation(program_ground, "lightColor");
      GLint lightPosLoc    = glGetUniformLocation(program_ground, "lightPos");
      GLint viewPosLoc     = glGetUniformLocation(program_ground, "viewPos");

      glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
      // glUniform3f(lightColorLoc, 1.0, 1.0, 1.0);
      glUniform3f(viewPosLoc, camera.Position.x, camera.Position.y, camera.Position.z);
      glUniform3f(lightPosLoc, lightPos.x, lightPos.y, lightPos.z);

      view = camera.GetViewMatrix();
      projection = glm::perspective(camera.Zoom, (GLfloat)WIDTH / (GLfloat)HEIGHT, 0.1f, 100.0f);

      glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));
      glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));      

      glBindVertexArray(vao_ground);

      bool white_square = false;

      for(GLfloat z = -10.0; z <= 10.0; z += 1.0)
      {
        for(GLfloat x = -10.0; x <= 10.0; x += 1.0)
        {
          model = glm::mat4();

          if(white_square)
            glUniform3f(groundColorLoc, 1.0f, 1.0f, 1.0f);
          else
            glUniform3f(groundColorLoc, 1.0f, 0.0f, 0.0f);


          model = glm::translate(model, glm::vec3(x, 0.0f, z));
          glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));

          glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); 

          white_square = !white_square;
        }
      }

      glBindVertexArray(0);


      model = glm::mat4();

      glUseProgram(program_sphere);

      modelLoc = glGetUniformLocation(program_sphere, "model");    
      viewLoc = glGetUniformLocation(program_sphere, "view");
      projLoc = glGetUniformLocation(program_sphere, "projection");

      lightPosLoc    = glGetUniformLocation(program_sphere, "lightPos");
      viewPosLoc     = glGetUniformLocation(program_sphere, "viewPos");

      glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
      glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));
      glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));

      glUniform3f(viewPosLoc, camera.Position.x, camera.Position.y, camera.Position.z);
      glUniform3f(lightPosLoc, lightPos.x, lightPos.y, lightPos.z);   

      glBindVertexArray(vao_sphere);

      glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_INT, 0);      

      glBindVertexArray(0); 

      glUseProgram(program_lamp);

      modelLoc = glGetUniformLocation(program_lamp, "model");
      viewLoc = glGetUniformLocation(program_lamp, "view");
      projLoc = glGetUniformLocation(program_lamp, "projection");

      glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
      glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));
      glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));

      glfwSwapBuffers(window);
  }	
}



glBindBuffer(GL_ARRAY_BUFFER, vbo_sphere);
glBufferData(GL_ARRAY_BUFFER, sizeof(vdata), vdata, GL_STATIC_DRAW);
 
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
 
glGenBuffers(1, &index_buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index_buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(tindices), tindices, GL_STATIC_DRAW);
glBufferData(GL_ARRAY_BUFFER, sizeof(ndata), ndata, GL_STATIC_DRAW);

That last glBufferData call overwrites your positions with the normals. Did you mean to create a second buffer object for the normals?
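If so, a sketch of that fix, reusing the vbo_sphere_normals handle that your program declares but never generates: generate it, bind it, and upload ndata into it instead of re-uploading into vbo_sphere.


glGenBuffers(1, &vbo_sphere_normals);
glBindBuffer(GL_ARRAY_BUFFER, vbo_sphere_normals);
glBufferData(GL_ARRAY_BUFFER, sizeof(ndata), ndata, GL_STATIC_DRAW);

// attribute 1 now reads from vbo_sphere_normals; vbo_sphere keeps the positions
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(1);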

[QUOTE=GClements]That last glBufferData call overwrites your positions with the normals. Did you mean to create a second buffer object for the normals?[/QUOTE]

Yes, that was it. But my other problem, I now realize, is that I’m not sure how many normals I should have. Do I need one normal per vertex or one per triangle? Thinking about it mathematically, isn’t a normal defined as the vector orthogonal to a surface?

Do you want smooth shading (where the mesh is a piecewise-linear approximation to a smooth surface) or flat shading (where the surface itself is a set of polygons)?

If you’re trying to approximate a smooth surface such as a sphere, you’d normally want to use smooth shading, which requires a 1:1 correspondence between vertex positions and vertex normals. Because the normal is defined per vertex, the intensity is also defined per vertex, so there’s no discontinuity at the edges between polygons.

Note that the OpenGL API only allows attributes to be specified per vertex. If you want to specify an attribute per primitive, you have to specify it for one of the primitive’s vertices (usually the last one). When primitives share a vertex, they share all of that vertex’s attributes, so if you want different primitives to have different attributes, they need different vertices, even when the vertices’ positions coincide. Consequently, flat shading typically requires more vertices than smooth shading (with a naive approach, every triangle gets three distinct vertices, i.e. no vertices are shared).

I suppose I should practice with the flat shading model first. You said that since my sphere is centered at the origin, the normals are just the positions of the vertices, correct? So how do I build the normals array so that it corresponds to the positions array?

If you’re using smooth shading, the vertex normals for a sphere are the same as the vertex positions.

If you’re using flat shading, you need to calculate face normals from the vertex positions for each face, with e.g.


n = (p1 - p0) × (p2 - p0)

where × indicates the cross product.

If you’re using flat shading, each triangle needs 3 distinct vertices. All vertices for a triangle will have the same normal.

I got it working somewhat, except I think my normals are mismatched with the positions, because the wrong triangles of the sphere reflect light even when the light is on the other side.


  for(int i=0; i<20; i++)
  {
    glm::vec3 d1, d2;
    for(int j=0; j<3; j++)  
    {
      d1[j] = vdata[i+1][j] - vdata[i][j];    
      d2[j] = vdata[i+2][j] - vdata[i][j];
    }
    glm::vec3 normal = glm::normalize(glm::cross(d1, d2));

    ndata[i][0] = normal[0];
    ndata[i][1] = normal[1];
    ndata[i][2] = normal[2];
  }

I changed my vdata to this:


GLfloat vdata[60][3] {
  //{0,4,1} 
  {-X, 0.0, Z},
  {0.0, Z, X},
  {X, 0.0, Z},

  //{0,9,4} 
  {-X, 0.0, Z},
  {-Z, X, 0.0},
  {0.0, Z, X},

 ...

Looking at your code, it appears that you now have 60 vertices but only 20 normals. At least, the code you’ve shown only populates 20 elements of ndata. And you’re only using 22 elements of vdata when calculating the normals.

You probably want something like (pseudo-code):


for(int i=0; i<20; i++) {
  glm::vec3 d1 = vdata[i*3+1]-vdata[i*3+0];
  glm::vec3 d2 = vdata[i*3+2]-vdata[i*3+0];
  ndata[i*3+0] = ndata[i*3+1] = ndata[i*3+2] = cross(d1, d2);
}

I.e. given the 3 vertices for a face, calculate a normal from the positions and use that normal for all 3 vertices.
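Spelled out against the GLfloat vdata[60][3] and ndata[60][3] arrays from your earlier posts, that could look something like this (normalizing the face normal is optional here, since the fragment shader normalizes it anyway):


for(int i=0; i<20; i++)
{
  glm::vec3 p0(vdata[i*3+0][0], vdata[i*3+0][1], vdata[i*3+0][2]);
  glm::vec3 p1(vdata[i*3+1][0], vdata[i*3+1][1], vdata[i*3+1][2]);
  glm::vec3 p2(vdata[i*3+2][0], vdata[i*3+2][1], vdata[i*3+2][2]);

  // one face normal per triangle
  glm::vec3 n = glm::normalize(glm::cross(p1 - p0, p2 - p0));

  // the same normal for all three vertices of the face (flat shading)
  for(int j=0; j<3; j++)
  {
    ndata[i*3+j][0] = n.x;
    ndata[i*3+j][1] = n.y;
    ndata[i*3+j][2] = n.z;
  }
}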

OK, that almost works. Each surface is now a uniform color, but with the light source directly above the sphere the reverse is happening: the bottom of the sphere is brightest and the top is darkest. When I negate the result of glm::cross(d1, d2) it looks correct. Why is that?

The order in which you chose the vertices results in the normals pointing inward.

Reversing the order of the arguments would also fix it, as b×a = -a×b.
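For illustration, either of these gives the outward-facing normal in your loop (d1 and d2 as in your code):


glm::vec3 normal  = -glm::normalize(glm::cross(d1, d2)); // negate the result
glm::vec3 normal2 =  glm::normalize(glm::cross(d2, d1)); // or swap the arguments; same vector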