Some questions about bump mapping

It seems that bump-mapping techniques deal only with the diffuse color and a single light.
1) So if we want to use bump mapping in our game, does that mean we need to disable OpenGL lighting?
2) Which technique produces better results: parallax mapping or normal mapping?
3) Do we still need old methods (such as the dot3 extension or cube maps) to implement normal mapping with the OpenGL fixed-function pipeline? If the hardware supports shaders, why not implement this feature with shaders?
4) Could we mimic the behavior of more than one light while using a bump map?
5) Why does the code on the internet use only the diffuse part of the lighting equation? Is it really difficult to add specular while using a bump map, for example?
6) Is there any good article or sample that solves all these problems?

  1. The fixed path (what you call OpenGL lighting) only works per vertex, and you need per-pixel lighting for convincing bump maps, so yes, you have to drop the fixed function.
  2. Parallax mapping is better (but harder to tune) because it takes parallax into account to visually shift the surface. Normal mapping only simulates the lighting of a perturbed normal: it works pretty well for small bumps, but deep bumps look fake.
  3. dot3/cube map is only for ancient fixed-function hardware. Of course, if the hardware supports shaders, do it with shaders; it will look much better and will be easier to program.
  4. Yes. Just compute the contribution of each light and add them up, either directly in one shader or with multiple passes (see the sketch after this list).
  5. Do you have a link? No, specular is not very hard with shaders. In fact, people stopped talking about bump mapping as soon as shaders arrived, since it was trivially subsumed by the more general “per-pixel lighting” (PPL). Maybe you are not searching with the right terms; try something like “normal map per pixel lighting”.
  6. Try http://www.ozone3d.net/tutorials/bump_mapping.php
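
For point 4, here is a minimal sketch of the one-shader approach; the light count, the varying names, and the diffuse-only equation are assumptions for illustration, not code from any particular tutorial:

    // Fragment shader: accumulate the contribution of each light.
    #define NUM_LIGHTS 2

    varying vec3 normal;    // eye-space normal from the vertex shader
    varying vec3 position;  // eye-space position from the vertex shader

    void main()
    {
        vec3 n = normalize(normal);
        vec4 color = gl_FrontMaterial.ambient;

        // The constant bound lets the compiler unroll the loop, so the
        // gl_LightSource indices end up constant even on old hardware.
        for (int i = 0; i < NUM_LIGHTS; ++i)
        {
            vec3 l = normalize(gl_LightSource[i].position.xyz - position);
            float lambert = max(dot(n, l), 0.0);
            // Each light simply adds its contribution to the total.
            color += gl_LightSource[i].diffuse * gl_FrontMaterial.diffuse * lambert;
        }

        gl_FragColor = color;
    }

The multipass alternative is the same idea spread over draw calls: render the scene once per light with additive blending (glBlendFunc(GL_ONE, GL_ONE)).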

Thanks. I was reading this old article and some other old topics about bump mapping; as you guessed, I had been searching for old material.
So even if I specify lights with the glLight*() functions and use per-pixel lighting, are all the glLight*() settings dropped?

If you use shaders, the fixed function will be disabled. The glLight*() state itself is not lost: a GLSL shader can still read it through built-in uniforms such as gl_LightSource[0], but nothing is computed with it unless the shader does so itself.
This tutorial series about GLSL as defined in OpenGL 2.0 is a good introduction:
http://www.lighthouse3d.com/opengl/glsl/

Also, do not forget that current OpenGL is version 3.2 and that the 3.3 and 4.0 specs have just arrived; depending on your target hardware, this can be important.
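
As a concrete sketch of what you now have to do yourself, here is a minimal pair that re-implements one diffuse glLight per pixel (the varying names are mine, in the spirit of the Lighthouse3D tutorials, not copied from them):

    // Vertex shader: replaces fixed-function vertex processing.
    varying vec3 normal;    // eye-space normal
    varying vec3 position;  // eye-space position

    void main()
    {
        normal      = gl_NormalMatrix * gl_Normal;
        position    = vec3(gl_ModelViewMatrix * gl_Vertex);
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    // Fragment shader: the glLight state is readable via gl_LightSource,
    // but it only affects the image because we use it here.
    varying vec3 normal;
    varying vec3 position;

    void main()
    {
        vec3 n = normalize(normal);
        vec3 l = normalize(gl_LightSource[0].position.xyz - position);
        float lambert = max(dot(n, l), 0.0);
        gl_FragColor = gl_FrontMaterial.ambient +
                       gl_LightSource[0].diffuse * gl_FrontMaterial.diffuse * lambert;
    }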

OK, thanks for suggesting that article. It was really useful :)

Some other problems:
I have no idea what the following code from the ozone3d.net tutorial does:

float att = clamp(1.0 - invRadius * sqrt(distSqr), 0.0, 1.0);
vec4 base = texture2D(colorMap, texCoord);

They are used in computing the final fragment color:

gl_FragColor = (vAmbient * base + vDiffuse * base + vSpecular) * att;

3) This example doesn’t multiply the light position by the modelview matrix, so it looks like the light is in object space while the vertex is in eye space:

vec3 vVertex = vec3(gl_ModelViewMatrix * gl_Vertex);
vec3 tmpVec = gl_LightSource[0].position.xyz - vVertex;
 

Why?

4) Another question (about per-fragment lighting):
    Using per-pixel lighting with a normal map is meaningful, since each fragment gets its own normal from the map. But a sample in “More OpenGL Game Programming” discusses per-fragment lighting without normal maps. If we use per-vertex lighting, the result is interpolated across the primitive and each fragment gets its own color anyway (when glShadeModel(GL_SMOOTH) is enabled). So why do we need per-fragment lighting when we don’t use normal maps? Does it produce better quality?

Read the GLSL spec for details about each function:
http://www.opengl.org/documentation/specs

  1. It is a distance attenuation factor: like a real local light, the farther you are from the light, the less light you receive. If you don’t need it, just set it to 1.0 or another constant value that looks good (see the sketch after this list).

  2. This reads the base texture (the diffuse color). It is not needed if you want only bumps and a uniform color; just set “base” to the solid color you want.

  3. In this case the light position is effectively attached to the camera (like a torchlight). Also remember that glLightfv(GL_LIGHT0, GL_POSITION, …) transforms the position you pass by the current modelview matrix, so gl_LightSource[0].position is already stored in eye space, consistent with the eye-space vVertex.

  4. You did not read the complete Lighthouse3D tutorial?
    http://www.lighthouse3d.com/opengl/glsl/index.php?dirlightpix
    Compare the two teapots at the bottom: evaluating the lighting equation per pixel, instead of interpolating colors computed at the vertices, gives visibly better highlights.
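
To put answers 1 and 2 in context, here is a minimal sketch of such a fragment shader. The uniform and varying names (colorMap, normalMap, invRadius, texCoord, and the tangent-space lightVec/eyeVec) are assumptions based on the snippets you quoted, not the tutorial’s exact code:

    uniform sampler2D colorMap;
    uniform sampler2D normalMap;
    uniform float invRadius;      // 1.0 / light radius

    varying vec2 texCoord;
    varying vec3 lightVec;        // tangent-space vector from fragment to light
    varying vec3 eyeVec;          // tangent-space vector from fragment to eye

    void main()
    {
        float distSqr = dot(lightVec, lightVec);

        // Answer 1: distance attenuation; replace by 1.0 for a constant light.
        float att = clamp(1.0 - invRadius * sqrt(distSqr), 0.0, 1.0);

        // Answer 2: base (diffuse) texture; replace by a solid color if unwanted.
        vec4 base = texture2D(colorMap, texCoord);

        // Per-pixel normal from the normal map, remapped from [0,1] to [-1,1].
        vec3 n = normalize(texture2D(normalMap, texCoord).xyz * 2.0 - 1.0);
        vec3 l = normalize(lightVec);
        float lambert = max(dot(n, l), 0.0);

        vec4 vAmbient = gl_LightSource[0].ambient * gl_FrontMaterial.ambient;
        vec4 vDiffuse = gl_LightSource[0].diffuse * gl_FrontMaterial.diffuse * lambert;

        // Blinn-Phong specular, only where the surface faces the light.
        vec4 vSpecular = vec4(0.0);
        if (lambert > 0.0)
        {
            vec3 h = normalize(l + normalize(eyeVec));
            vSpecular = gl_LightSource[0].specular * gl_FrontMaterial.specular *
                        pow(max(dot(n, h), 0.0), gl_FrontMaterial.shininess);
        }

        gl_FragColor = (vAmbient * base + vDiffuse * base + vSpecular) * att;
    }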

This example [click to download] uses the following code to specify the data and pass the information to the shader:


    // Texture unit 2: height map (used for parallax mapping).
    glActiveTexture(GL_TEXTURE2);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, g_heightMapTexture);

    // Texture unit 1: normal map.
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, g_normalMapTexture);

    // Texture unit 0: color map (or a "null" texture when the color map is disabled).
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, (g_disableColorMapTexture) ? g_nullTexture : g_colorMapTexture);

    glBindBuffer(GL_ARRAY_BUFFER, g_vertexBuffer);

    // Positions: 3 floats at offset 0 of each Vertex.
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(0));

    // Texcoord set 0: the 2D UVs, right after the position.
    glClientActiveTexture(GL_TEXTURE0);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(sizeof(float) * 3));

    // Normals: 3 floats after position + UV (3 + 2 floats).
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(sizeof(float) * 5));

    // Texcoord set 1: 4 floats carrying the per-vertex tangent.
    glClientActiveTexture(GL_TEXTURE1);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(4, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(sizeof(float) * 8));

    glDrawArrays(GL_QUADS, 0, sizeof(g_cube) / sizeof(g_cube[0]));


Texture unit 0 holds the color map, unit 1 holds the normal map (with the tangent vectors passed as its texture coordinates), and unit 2 holds the height map (to be used for parallax mapping).

1) The weird behavior is that only texture 0 is used for texturing; textures 1 and 2 are no longer modulated with the previous texture, they are simply used to pass data to the shader (if they were modulated, we would no longer see the correct result). Does this mean that enabling the shader disables fixed-function multitexturing? (In other words, must we do the multitexturing inside the fragment shader with the modulation formula?)

2) OpenCOLLADA also exports binormal vectors. Does that mean I can also pass the binormals via another glTexCoordPointer? For example:


glClientActiveTexture(GL_TEXTURE3);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(4, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(sizeof(float) * 12));

And then access this binormal vector in my vertex shader:

vec3 b = normalize(gl_NormalMatrix * gl_MultiTexCoord3.xyz);
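
I guess that instead of spending another attribute I could also reconstruct the binormal in the vertex shader. Assuming the tangent sits in gl_MultiTexCoord1 with its handedness stored in w (my guess for this sample), something like:

    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 t = normalize(gl_NormalMatrix * gl_MultiTexCoord1.xyz);
    vec3 b = cross(n, t) * gl_MultiTexCoord1.w;  // w assumed to be the handedness, +1 or -1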

As I said above, “if you use shaders, the fixed function will be disabled”. To be more precise, there are four cases:

  1. no shader at all: fixed-path vertex processing, fixed-path shading (texturing + GL lighting + …)
  2. vertex shader only: no fixed-path vertex processing, fixed-path shading. In this case you have to be careful to send the data expected by the fixed path.
  3. fragment shader only: fixed-path vertex processing, no fixed-path shading
  4. vertex shader + fragment shader: no fixed path at all

So yes, multitexturing when using a fragment shader has to be done “by hand” (see the link and the sketch below).

http://www.opengl.org/wiki/Multitexture_with_GLSL
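
For instance, a minimal sketch of a classic MODULATE combine done in the shader; the sampler names and the fixed tangent-space light direction are assumptions for illustration, not your sample’s code:

    uniform sampler2D colorMap;   // bound to unit 0
    uniform sampler2D normalMap;  // bound to unit 1

    varying vec2 texCoord;

    void main()
    {
        // Nothing is combined automatically: sample each texture yourself...
        vec4 base = texture2D(colorMap, texCoord);
        vec3 n    = normalize(texture2D(normalMap, texCoord).xyz * 2.0 - 1.0);

        // ...then "modulate" is just a multiplication you write explicitly.
        // The light is assumed head-on in tangent space, for brevity.
        float lambert = max(dot(n, vec3(0.0, 0.0, 1.0)), 0.0);
        gl_FragColor = base * lambert;
    }

Remember to set each sampler uniform to its texture unit index with glUniform1i(location, unit); binding with glActiveTexture alone is not enough.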

Well, I finally got normal mapping working:
http://i44.tinypic.com/16gll78.jpg
Thank you :)

Good work, very nice screenshot!