OpenGL 4.0 Normals

Hi all, I’m your new resident here!

Anyway, I’ve been programming a simple demo using OpenGL 4.0 (as far as I can get, anyway: GLEW won’t give me a core profile). I am at the stage where I am loading a .obj file into my demo.

Currently my demo has no way to give the vertex and fragment shaders the normals corresponding to the .obj file. Normally, with a lower version of OpenGL, I’d use glNormal*f or glNormalPointer.

As I understand it, either I load the normals in along with the vertex and texture information in one big vertex buffer, or I calculate the normals in the geometry shader. Which one is the better approach? Is there something I am missing from this massive deprecation?

…I load the normals in along with the vertex and texture information in one big vertex buffer…

You are not required to put all of your vertex, normal and texture coordinate information into one buffer; you can use separate buffers. That said, interleaved buffers are indeed more efficient on today’s hardware.

To answer your question: depends.
Putting normals in the VBO consumes memory and bandwidth, while calculating them on-the-fly in the geometry shader consumes computational resources. Also, geometry shaders can severely degrade performance on the early geometry-shader-capable NVIDIA cards (the 8000 to 300 series), and possibly on the ATI 2000-4000 series as well, since geometry shader execution can be a bottleneck there.
So unless you have so many vertex attributes that bandwidth becomes a problem, I would say you should go with storing the normals in the VBO; you can always change the approach later (if you need e.g. tangents and binormals as well).
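
For illustration, an interleaved layout fed through generic vertex attributes might look roughly like this (the Vertex struct, the index choices 0/1/2 and the variable names are just assumptions for the sketch; offsetof needs <stddef.h>):


//	One interleaved vertex: position + normal + texcoord (8 floats)
typedef struct
{
	float position[3];
	float normal[3];
	float texcoord[2];
} Vertex;

	GLuint vbo;
	glGenBuffers(1, &vbo);
	glBindBuffer(GL_ARRAY_BUFFER, vbo);
	glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex), vertices, GL_STATIC_DRAW);

	//	Stride is the size of one whole vertex; offsets pick out each attribute
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void *)offsetof(Vertex, position));
	glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void *)offsetof(Vertex, normal));
	glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void *)offsetof(Vertex, texcoord));
	glEnableVertexAttribArray(0);
	glEnableVertexAttribArray(1);
	glEnableVertexAttribArray(2);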

The latter is overkill and may require more information than you’d like to feed to your renderer.

While the legacy vertex attributes (and the legacy attribute setup calls such as glVertexPointer, glNormalPointer, glColorPointer, etc.) are deprecated in 3.0 and removed in 3.1, you have several options:

  • Use generic vertex attribs (i.e. glVertexAttribPointer( index )), or
  • Use the legacy vertex attribs anyway by allocating a compatibility profile and sticking with GLSL 1.2 or earlier in your shaders.

There are a number of ways to do the first:

  • Use the same names for input attributes in your vertex shaders (e.g. my_Vertex, my_Normal, my_Color, etc.) and call glBindAttribLocation( pgm, index, name ) for each before linking. That’s very easy (see the sketch below).
  • Bind the input attribute names to index values in your shader code using layout qualifiers. This works OK, but you end up duplicating the same name:index assignments across shaders.
  • Let the GLSL compiler assign arbitrary vertex attribute index values per program, query them after linking, and then use those indices when setting up vertex attribs for that specific program. This works, but is less convenient if your shaders share a common set of vertex attributes.
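
A rough sketch of the glBindAttribLocation route (pgm, the shader handles and the attribute names are just placeholders):


	//	Bind fixed attribute indices to the shader input names BEFORE linking
	GLuint pgm = glCreateProgram();
	glAttachShader(pgm, vertShader);
	glAttachShader(pgm, fragShader);
	glBindAttribLocation(pgm, 0, "my_Vertex");
	glBindAttribLocation(pgm, 1, "my_Normal");
	glBindAttribLocation(pgm, 2, "my_Color");
	glLinkProgram(pgm);	//	the bindings take effect at link time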

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=286015#Post286015

The latter is overkill and may require more information than you’d like to feed to your renderer.

Why would you need to feed more info to your renderer? You can calculate the surface normal in the geometry shader for any triangle based on position alone. Of course, if you need smooth normals then you have to render triangles with adjacency, and that would be even more expensive.
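
To make that concrete, here is a rough sketch of such a geometry shader as a GLSL 1.50 source string (the name g_Normal is made up; also, for lighting you would normally do this with eye- or world-space positions rather than the clip-space ones used here):


static const char *flatNormalGeomSrc =
	"#version 150\n"
	"layout(triangles) in;\n"
	"layout(triangle_strip, max_vertices = 3) out;\n"
	"out vec3 g_Normal;\n"
	"void main()\n"
	"{\n"
	"	vec3 e1 = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;\n"
	"	vec3 e2 = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;\n"
	"	vec3 n = normalize(cross(e1, e2));	// one flat normal per triangle\n"
	"	for (int i = 0; i < 3; ++i)\n"
	"	{\n"
	"		g_Normal = n;\n"
	"		gl_Position = gl_in[i].gl_Position;\n"
	"		EmitVertex();\n"
	"	}\n"
	"	EndPrimitive();\n"
	"}\n";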

Don’t misunderstand me, I agree with you that storing the normal is usually faster, but your statement is a bit misleading.

Right, that’s what I was thinking about. If you just precompute them, then the GPU doesn’t need to know or care about adjacency info.

You also may need to support hard edges (no normal interpolation, and no autocomputation either), where you essentially have to specify the normals anyway because the autocomputed normal from the poly mesh is wrong.

All around, simpler to just compute normals in the tools, whether vertex normals or normal maps.
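
(For reference, the precomputation itself is tiny; a minimal sketch of plain averaged vertex normals, with made-up types and names:)


#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 cross(Vec3 a, Vec3 b) { Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r; }

//	Accumulate each (unnormalized, hence area-weighted) face normal onto
//	its three vertices, then normalize the sums.
void compute_vertex_normals(const Vec3 *pos, int numVerts,
                            const unsigned *idx, int numTris, Vec3 *nrm)
{
	int i, k;
	for (i = 0; i < numVerts; ++i)
		nrm[i].x = nrm[i].y = nrm[i].z = 0.0f;
	for (i = 0; i < numTris; ++i)
	{
		Vec3 n = cross(sub(pos[idx[3*i+1]], pos[idx[3*i+0]]),
		               sub(pos[idx[3*i+2]], pos[idx[3*i+0]]));
		for (k = 0; k < 3; ++k)
		{
			nrm[idx[3*i+k]].x += n.x;
			nrm[idx[3*i+k]].y += n.y;
			nrm[idx[3*i+k]].z += n.z;
		}
	}
	for (i = 0; i < numVerts; ++i)
	{
		float len = sqrtf(nrm[i].x*nrm[i].x + nrm[i].y*nrm[i].y + nrm[i].z*nrm[i].z);
		if (len > 0.0f) { nrm[i].x /= len; nrm[i].y /= len; nrm[i].z /= len; }
	}
}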

Yes, put that way I completely agree with you.

Btw, the same issues with smooth versus hard normals become relevant when you need a tangent-space basis. There the use of geometry shaders is more justified, because passing normals, binormals and tangents through VBOs can be a bandwidth eater. However, in my opinion, even in that case a better solution is to pass only the X and Y coordinates of your tangent and binormal (that’s only 4×2 bytes for high-quality half floats), then in the vertex shader (or whatever stage needs them) recalculate each Z coordinate, assuming the tangents and binormals are unit-length vectors, and finally calculate the normal as the cross product of the other two.
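
For illustration, a rough sketch of that reconstruction as a vertex shader source string (attribute and uniform names are made up; note that sqrt only recovers the magnitude of Z, so the sign has to be encoded somewhere, which this sketch glosses over by assuming positive Z):


static const char *tbnVertSrc =
	"#version 150\n"
	"uniform mat4 u_MVP;\n"
	"in vec3 my_Vertex;\n"
	"in vec2 my_TangentXY;	// X,Y of unit-length tangent\n"
	"in vec2 my_BinormalXY;	// X,Y of unit-length binormal\n"
	"out vec3 v_Normal;\n"
	"void main()\n"
	"{\n"
	"	// Rebuild Z from X,Y: for a unit vector, z = sqrt(1 - x*x - y*y)\n"
	"	vec3 t = vec3(my_TangentXY, sqrt(max(0.0, 1.0 - dot(my_TangentXY, my_TangentXY))));\n"
	"	vec3 b = vec3(my_BinormalXY, sqrt(max(0.0, 1.0 - dot(my_BinormalXY, my_BinormalXY))));\n"
	"	v_Normal = cross(t, b);	// normal from the other two basis vectors\n"
	"	gl_Position = u_MVP * vec4(my_Vertex, 1.0);\n"
	"}\n";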

GLEW shouldn’t interfere with you requesting an OpenGL Core profile. Here is a snippet of my Windows-based code:


//	Initialise GL Extension Wrangler
	if (glewInit() != GLEW_OK)
		return FALSE;

	//	Create CORE OpenGL context
#if !defined(Y_RENDERER_OPENGL_FIXED_FUNCTION)
	if (wglewIsSupported("WGL_ARB_create_context"))
	{
		int attribs[] =
		{
			WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
			WGL_CONTEXT_MINOR_VERSION_ARB, 3,
			WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
			WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
			0
		};
		HGLRC new_hRC = wglCreateContextAttribsARB(gRenderingView, 0, attribs);
		if (new_hRC)
		{
			//	Release and delete the old (temporary) context before switching
			wglMakeCurrent(NULL, NULL);
			wglDeleteContext(hRC);
			ret = wglMakeCurrent(gRenderingView, new_hRC);
			if (!ret)
			{
				//	Core context failed to bind: fall back to a legacy context
				wglDeleteContext(new_hRC);
				hRC = wglCreateContext(gRenderingView);
				ret = wglMakeCurrent(gRenderingView, hRC);
			}
			else
			{
				//	Only keep the new handle if the core context actually bound
				hRC = new_hRC;
			}
		}
	}
#endif

GLEW shouldn’t interfere with you requesting an OpenGL Core profile…

I tried that code myself. Sadly I am using SDL and GLEW 1.5.6 which has a reported bug that has yet to be fixed.

GLEW OpenGL 4.0 Core Profile Bug

And the all-caps WGL_* constants give errors, presumably because I don’t have WGL initialised, so I tried looking for GL_CONTEXT_ equivalents, which I can’t seem to find.
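
(For what it’s worth, newer SDL versions (1.3/2.0) expose the context version directly, so WGL isn’t needed at all; assuming an SDL version with these attributes, set them before creating the window:)


	//	Request a core context through SDL instead of WGL
	SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
	SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
	SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);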

Thanks for all your replies. I’ll go ahead and load the normals in using glVertexAttribPointer =)