Part of the Khronos Group
OpenGL.org


Thread: Need help normal mapping a cube-mapped sphere

  1. #1
    Junior Member Newbie · Join Date: Sep 2014 · Posts: 11

    Need help normal mapping a cube-mapped sphere

    I've been trying for some time to successfully get some bump mapping going on a cubemapped sphere.

    I have a procedurally generated height cubemap. From this I colourize to create a diffuse map.

    Now, I'm creating a normalmap based on the code from "Mathematics for 3D Game Programming and Computer Graphics, Third Edition", which appears to work really well.

    The problem I've had to date is with the tangent-space normal map. I just cannot find a way to generate the normal map, and the tangent and binormal on my vertices, without graphical artifacts.
    http://gamedev.stackexchange.com/que...ping-a-cubemap
    http://stackoverflow.com/questions/3...ts-my-tangents

    If anyone has a solution for that I'd love to know it.

    I've basically given up on this for now though. What I'm trying to do instead is generate an object-space normal map for my cubemapped sphere. This seems like it would be easier to generate and foolproof to use. But so far I'm having trouble.

    My normal map looks like this:


    So I think there's clearly something wrong in its generation. Here's the code that generates it.

    Code :
    float scale = 15.0;
    std::deque<glm::vec4> normalMap(textureSize*textureSize);
    for(int x = 0; x < textureSize; ++x)
    {
        for(int y = 0; y < textureSize; ++y)
        {
            // center point
            int i11 = utils::math::get_1d_array_index_from_2d(x,y,textureSize);
            float v11 = cubeFacesHeight[i][i11].r;
     
            // to the left
            int i01 = utils::math::get_1d_array_index_from_2d(std::max(x-1,0),y,textureSize);
            float v01 = cubeFacesHeight[i][i01].r;
     
            // to the right
            int i21 = utils::math::get_1d_array_index_from_2d(std::min(x+1,textureSize-1),y,textureSize);
            float v21 = cubeFacesHeight[i][i21].r;
     
            // to the top
            int i10 = utils::math::get_1d_array_index_from_2d(x,std::max(y-1,0),textureSize);
            float v10 = cubeFacesHeight[i][i10].r;
     
            // and now the bottom
            int i12 = utils::math::get_1d_array_index_from_2d(x,std::min(y+1,textureSize-1),textureSize);
            float v12 = cubeFacesHeight[i][i12].r;
     
            glm::vec3 S = glm::vec3(1, 0, scale * v21 - scale * v01);
            glm::vec3 T = glm::vec3(0, 1, scale * v12 - scale * v10);
     
            glm::vec3 N = (glm::vec3(-S.z,-T.z,1) / std::sqrt(S.z*S.z + T.z*T.z + 1));
     
            glm::vec3 originalDirection;
            if(i == POSITIVE_X)
                originalDirection = glm::vec3(textureSize,-y,-x);
            else if(i == NEGATIVE_X)
                originalDirection = glm::vec3(-textureSize,-x,-y);
            else if(i == POSITIVE_Y)
                originalDirection = glm::vec3(-x,-textureSize,-y);
            else if(i == NEGATIVE_Y)
                originalDirection = glm::vec3(-y,textureSize,-x);
            else if(i == POSITIVE_Z)
                originalDirection = glm::vec3(-y,-x,textureSize);
            else if(i == NEGATIVE_Z)
                originalDirection = glm::vec3(-y,-x,-textureSize);
     
            glm::vec3 o = originalDirection;
            glm::vec3 a = N;
     
            glm::vec3 ax = glm::normalize(o) * (glm::dot(a,glm::normalize(o)));
     
            N = ax;
     
            N.x = (N.x+1.0)/2.0;
            N.y = (N.y+1.0)/2.0;
            N.z = (N.z+1.0)/2.0;
     
            normalMap[utils::math::get_1d_array_index_from_2d(x,y,textureSize)] = glm::vec4(N.x,N.y,N.z,v11);
        }
    }
    for(int x = 0; x < textureSize; ++x)
    {
        for(int y = 0; y < textureSize; ++y)
        {
            cubeFacesHeight[i][utils::math::get_1d_array_index_from_2d(x,y,textureSize)] = normalMap[utils::math::get_1d_array_index_from_2d(x,y,textureSize)];  
        }
    }

    cubeFacesHeight is 6 faces of height values.

    What I'm attempting to do is use the value originally given to N, as this is the normal map as though the surface were a plane. Then I'm attempting to apply this to the original direction vector of each point (which is also the normal vector). I think the problem is in that application, where ax is set.
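    As a sanity check, the plane-space part of the computation above (before any reorientation onto the sphere) can be isolated and tested on its own. This is only a sketch; planeNormal is a hypothetical helper of mine mirroring the S/T/N lines, not code from this thread:

```cpp
#include <cmath>

// Hypothetical helper mirroring the S/T/N lines above: central
// differences of the height field give the slopes sx and sy, and the
// plane-space normal is (-sx, -sy, 1) normalized.
void planeNormal(float left, float right, float top, float bottom,
                 float scale, float out[3])
{
    float sx = scale * right - scale * left;    // S.z above
    float sy = scale * bottom - scale * top;    // T.z above
    float len = std::sqrt(sx * sx + sy * sy + 1.0f);
    out[0] = -sx / len;
    out[1] = -sy / len;
    out[2] = 1.0f / len;
}
```

    On a flat height field this yields (0, 0, 1), the unperturbed plane normal, which is a quick way to confirm the finite-difference step is not where the artifacts come from.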

    I then implement it in my Fragment shader like so:

    Code :
    #version 400
     
    layout (location = 0) out vec4 color;
     
    struct Material
    {
        bool useMaps;
        samplerCube diffuse;
        samplerCube specular;
        samplerCube normal;
        float shininess;
        vec4 color1;
        vec4 color2;
    };
     
    struct PointLight
    {
        bool active;
     
        vec3 position;
        vec3 ambient;
        vec3 diffuse;
        vec3 specular;
     
        float constant;
        float linear;
        float quadratic;
    };
     
    uniform Material uMaterial;
    uniform mat4 model;
    uniform mat4 view;
    uniform mat4 projection;
     
    in vec3 ex_normal;
    in vec3 ex_positionCameraSpace;
    in vec3 ex_originalPosition;
    in vec3 ex_positionWorldSpace;
    in vec4 ex_positionLightSpace;
     
    in PointLight ex_light;
     
    /* *********************
    Calculates the color when using a point light. Uses shadow map
    ********************* */
    vec3 CalcPointLight(PointLight light, Material mat, vec3 n, vec3 fragPos, vec3 originalPos, vec3 viewDir)
    {
         /* just lighting stuff that doesn't matter */
     
        vec3 lightDir = normalize(fragPos - light.position);    
     
        vec3 reflectDir = normalize(reflect(lightDir, n));
        float specularFactor = pow(dot(viewDir,reflectDir), mat.shininess);
        if(specularFactor > 0 && diffuseFactor > 0)
            specularColor = light.specular * specularFactor * specularMat;
     
        /*more lighting stuff*/
    }
     
    vec3 get_normal(vec3 SRT)
    {
        vec3 map = texture(uMaterial.normal,SRT).rgb * 2.0 - 1.0;
        return mat3(transpose(inverse(view * model))) * map;
    }
     
    void main(void)
    {
        vec3 viewDir = normalize(-ex_positionCameraSpace);
     
    vec3 n = get_normal(normalize(ex_originalPosition));
     
        vec3 result = CalcPointLight(ex_light,uMaterial,n,ex_positionCameraSpace, ex_positionWorldSpace,viewDir);
     
        color = vec4(result,1.0);
    }

    Considering that my fragment shader works fine without sampling the normal map, using "ex_originalPosition" instead, I don't think the shader is the problem. I could just use some help generating the object-space normal map.

  2. #2
    Senior Member OpenGL Guru · Join Date: Jun 2013 · Posts: 2,411
    The most obvious issue is that you're confusing corner-origin and centre-origin coordinate systems:
    Quote Originally Posted by NeomerArcana:
    Code :
            if(i == POSITIVE_X)
                originalDirection = glm::vec3(textureSize,-y,-x);
    Those vectors would (presumably) be correct if x and y ranged from -textureSize to textureSize with (0,0) in the centre of the texture. But they range from 0 to textureSize-1.

    I haven't looked any further than that.

  3. #3
    Junior Member Newbie · Join Date: Sep 2014 · Posts: 11
    Thanks, I fixed that up (I verified by just using these directions, and now my cubemap cross looks as it should). But I still have the problem with implementing the normals from the height map to the faces.

  4. #4
    Senior Member OpenGL Guru · Join Date: Jun 2013 · Posts: 2,411
    Quote Originally Posted by NeomerArcana:
    But I still have the problem with implementing the normals from the height map to the faces.
    Can you explain more clearly what your actual problem is?

    Also, if the normal map covers the entire object (rather than being tiled), and the object isn't subject to deformation (e.g. skeletal animation), there's probably no reason to use a tangent-space normal map; an object-space normal map will work fine and require less computation.

  5. #5
    Junior Member Newbie · Join Date: Sep 2014 · Posts: 11
    So, here's what I mean:

    Across the surface of my sphere I have a terrain (it's a planet). With this, I obviously have a height map, and from this height map I've created a normal map.

    But this normal map is actually 6 normal maps. 6 faces of my cubemap. All of them think of themselves as a plane, so the normals are all facing "up" if you imagine all the faces lying flat on the table.

    So, what I need to do now, is:

    1. Take the original normal on the surface of the sphere (I have this, thanks for pointing out my erroneous corner coords too)
    2. Find the adjustment normal (that's what I'm naming the normal from my face-plane) that's sampled from the cubemap (I have this too)
    3. Adjust the normal. I.e., how to combine these two normals so that the adjustment-normal is "centered" on the direction of the original normal (rather than being "centered" on the face of a plane pointing "up").

    I hope that makes sense. I believe I'm looking to move the adjustment-normal to the... tangent space... or the original normal? I'm really confused with the correct terminology.
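    For what it's worth, step 3 is often done by building an orthonormal basis around the sphere normal and re-expressing the adjustment normal in that basis. A minimal sketch under that approach (the Vec3 type and rotateToNormal helper are mine, not from this thread, and the tangent choice here ignores the cube-face orientation):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Rotate a tangent-space "adjustment" normal so that its +Z axis lines
// up with the sphere's surface normal n at this point.
Vec3 rotateToNormal(Vec3 adj, Vec3 n)
{
    Vec3 up = { 0.0f, 0.0f, 1.0f };
    if (std::fabs(n.z) > 0.999f)        // n nearly parallel to up
        up = { 1.0f, 0.0f, 0.0f };
    Vec3 t = normalize(cross(up, n));   // tangent
    Vec3 b = cross(n, t);               // bitangent
    return { adj.x * t.x + adj.y * b.x + adj.z * n.x,
             adj.x * t.y + adj.y * b.y + adj.z * n.y,
             adj.x * t.z + adj.y * b.z + adj.z * n.z };
}
```

    An unperturbed adjustment normal (0, 0, 1) then comes back out as the sphere normal itself, which matches the "centered on the original normal" idea above.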

  6. #6
    Senior Member OpenGL Guru · Join Date: Jun 2013 · Posts: 2,411
    Okay, so your normals are currently in tangent space with respect to the sphere.

    First, it might be easier to generate the normals in object-space to start with, by transforming the vertex positions to object space then calculating the normals from the object-space vertices.

    As for converting existing normals from sphere tangent space to object space: for a point with signed (-1..+1) texture coordinates (u,v), transform the corresponding normal by the matrix:
    Code :
    [    d    0  u/d ]
    [    0    d  v/d ]
    [ -d*u -d*v  1/d ]
    where:
    d = sqrt(u^2+v^2+1)

    The above calculation assumes that the texture coordinates (u,v) map to (u,v,1)/d. Each of the six cube map faces actually has a different permutation and signs, which need to be taken into account.

    Essentially, the point (u,v,h) in tangent space maps to the point (1+h)*(u,v,1)/d in object space. Taking partial derivatives with respect to u, v and h gives you the axes, which form the columns of a matrix which transforms coordinates from tangent space to object space (formally, the Jacobian). To transform normals, you need to use the inverse of the transpose of that matrix, which is given above (simplified assuming that 1+h ~= 1, i.e. variations in height are small compared to the radius).
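    A direct transcription of that matrix for reference (a sketch only; the function names are mine, and the per-face permutation and sign handling mentioned above is deliberately omitted):

```cpp
#include <cmath>

// Row-major 3x3 matrix mapping a sphere-tangent-space normal to object
// space for signed face coordinates (u, v) in -1..+1, as given above.
void sphereTangentToObject(float u, float v, float m[3][3])
{
    float d = std::sqrt(u * u + v * v + 1.0f);
    m[0][0] = d;        m[0][1] = 0.0f;     m[0][2] = u / d;
    m[1][0] = 0.0f;     m[1][1] = d;        m[1][2] = v / d;
    m[2][0] = -d * u;   m[2][1] = -d * v;   m[2][2] = 1.0f / d;
}

// Multiply the matrix by a normal vector.
void apply(const float m[3][3], const float n[3], float out[3])
{
    for (int r = 0; r < 3; ++r)
        out[r] = m[r][0] * n[0] + m[r][1] * n[1] + m[r][2] * n[2];
}
```

    As a sanity check, an unperturbed tangent-space normal (0, 0, 1) at face coordinates (u, v) comes out as (u, v, 1)/d, i.e. the unit radial direction for that texel.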
