Position of neighbour fragments

To create a bump-mapping effect, I need to know the position of the fragment which belongs to the texel of a texture that I fetch with
texture2D(texture1, vec2(gl_TexCoord[1]) + vec2(1.0, 0.0)).r.
I need this to calculate a vector perpendicular to the normal vector (normalize(pos of neighbour frag - pos of this frag)), which I can then use to compute a third, new normal vector lying in the same plane as the other two, but tilted by an angle defined by the difference between the two texels' colors.

If this doesn't work (I hope it will…), what is a common method for implementing bump mapping?

Thanks, spl@t

It looks like you want some kind of parallax bump mapping. You need to pass an interpolated view vector (in tangent space) to the fragment shader. Here is some sample code…

Vertex shader:
attribute vec3 tangent;
attribute vec3 binormal;

varying vec3 eyeVec;
void main()
{
// Vertex transformation
 gl_Position = ftransform();
 // Build the tangent-space basis and bring it into eye space
 mat3 TBN_Matrix = mat3(tangent, binormal, gl_Normal);
 TBN_Matrix = gl_NormalMatrix * TBN_Matrix;

// Compute the view vector in modelview space, then move it into tangent
// space (multiplying vector * matrix applies the transpose of TBN_Matrix)
 vec4 Vertex_ModelView = gl_ModelViewMatrix * gl_Vertex;
 eyeVec = vec3(-Vertex_ModelView) * TBN_Matrix;
 gl_TexCoord[0] = gl_MultiTexCoord0;
}

Fragment shader:

uniform sampler2D basetex;
uniform sampler2D bumptex;

varying vec3 eyeVec;

void main()
{
    vec2 texUV, srcUV = gl_TexCoord[0].xy;
    float height = texture2D(bumptex, srcUV).r;

    float v = height * 0.03 - 0.015;   // scale and bias the height for the parallax offset
    vec3 eye = normalize(eyeVec);
    texUV = srcUV + (eye.xy * v);
 
    vec3 rgb = texture2D(basetex, texUV).rgb;

    gl_FragColor = vec4(vec3(rgb), 1.0);
}

or… just download Shader Designer from http://www.typhoonlabs.com/ and look at the examples…

yooyo

Thanks, but this is not really what I want to do.
I want to do Phong shading, but a bump map should be able to change the normals' direction.
The bump map is not a three-channel normal map like in many examples.
The brightness of a texel and of the three neighbouring texels above and to its right should affect the direction of the normal vector.

I don't know whether this method is common, but I suspect it would look great.
My problem is how the normal is affected. If the texel to the right is very bright, the normal should rotate a bit to the left, because the bright texel on the right indicates that the surface is higher on the right.
But I don't know in which direction to rotate the normal, because "right" in u/v coordinates does not mean "right" in whatever coordinates (modelview, world, eye, or whatever) the normal, the fragment positions, the light position and so on are expressed in.
I could calculate the direction if I knew the position of the neighbour texel (the one belonging to the neighbour fragment) in those eye coordinates, and not only in u/v texture coordinates, because I already know the position of the current fragment.

I hope what I want to do is understandable now.

P.S.:
The matrix calculations in your example look interesting. I think I have to do something similar, but I don't understand any of it :frowning:

If you want, I can post the code I have so far (but in 10 hours at the earliest; in my country it's time to sleep now :wink: )

Why not use a tool to convert the bump map into a normal map? I have a library that does this if you want. If you do it in a shader, it will cost extra TEX and ALU instructions.

Otherwise, you can just do what you are doing already and compute the lighting in tangent space.
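
If you go that route, the tangent-space normal you shade with can be built directly from the height samples. A rough sketch of such a fragment shader (the uniform and varying names here are made up; bumptex is the height map and texelSize is 1.0 / its resolution):

uniform sampler2D bumptex;
uniform vec2 texelSize;    // assumed: 1.0 / height-map resolution
uniform float bumpScale;   // assumed: controls the bump strength

varying vec3 lightDir;     // light vector, already in tangent space

void main()
{
    vec2 uv = gl_TexCoord[0].st;

    // Height of this texel and of its right and upper neighbours
    float h  = texture2D(bumptex, uv).r;
    float hR = texture2D(bumptex, uv + vec2(texelSize.x, 0.0)).r;
    float hU = texture2D(bumptex, uv + vec2(0.0, texelSize.y)).r;

    // In tangent space, +x is the direction of increasing u and +y of
    // increasing v, so the height differences tilt the normal directly
    vec3 n = normalize(vec3((h - hR) * bumpScale, (h - hU) * bumpScale, 1.0));

    float diffuse = max(dot(normalize(lightDir), n), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}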

Another way is to set up a render-to-texture pass, render your bump map as a fullscreen quad, and apply a shader that converts bumpmap -> normalmap. The shader is easy to write.
Your RTT target will then contain the normal map.
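
Such a conversion shader might look something like this (just a sketch; texelSize and strength are made-up uniform names for 1.0 / height-map resolution and a bump-strength factor):

uniform sampler2D heightmap;
uniform vec2 texelSize;   // 1.0 / height-map resolution
uniform float strength;   // how strong the bumps come out

void main()
{
    vec2 uv = gl_TexCoord[0].st;

    // Central differences of the height in u and v
    float hL = texture2D(heightmap, uv - vec2(texelSize.x, 0.0)).r;
    float hR = texture2D(heightmap, uv + vec2(texelSize.x, 0.0)).r;
    float hD = texture2D(heightmap, uv - vec2(0.0, texelSize.y)).r;
    float hU = texture2D(heightmap, uv + vec2(0.0, texelSize.y)).r;

    vec3 n = normalize(vec3((hL - hR) * strength, (hD - hU) * strength, 1.0));

    // Pack the [-1, 1] normal into [0, 1] so it can be stored in the RTT target
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
}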

Yes, runtime calculation of normals from a height map is possible (especially using the derivative functions in GLSL), but it is way slower than just using a preprocessed normal map (a library, as V-man said, or Photoshop, or probably GIMP for that matter).

But yeah, if you really don’t want to precalculate, using the derivative functions is the way to go.

I don't want to precalculate, because I want to learn how to write an advanced shader that does bump mapping, and I want to see whether my idea was a good one.
Adruab: Can you please explain what these derivative functions are? Are they for converting between spaces?

I tried to add tangent-space calculations to my Phong shading code before adding the bump mapping, but now nothing works anymore. The triangles are either uniformly bright (the same color for all pixels), black, or white. Can you please help me find the mistake?

Vertex Shader: 

uniform vec3 light_pos1;     // light position (in eye space), used below
uniform vec3 light_color1;

attribute vec3 tangent;
attribute vec3 binormal;

varying vec3 lightDir;
varying vec3 viewDir;

void main(void) {
	vec3 t;
	vec3 b;
	vec3 n;
	vec3 pos;
	vec3 v;
	
 	// Do standard vertex stuff
	
	gl_Position  = gl_ModelViewProjectionMatrix * gl_Vertex;
	gl_TexCoord[0] = gl_MultiTexCoord0;
	gl_TexCoord[1]  = gl_MultiTexCoord1;


	// Transform the tangent-space basis (T, B, N) into eye space

	t = gl_NormalMatrix * tangent;
	b = gl_NormalMatrix * binormal;
	n = gl_NormalMatrix * gl_Normal;

	// Transform the light position into surface-local (tangent) coordinates

	v.x = dot(light_pos1, t);
	v.y = dot(light_pos1, b);
	v.z = dot(light_pos1, n);

	lightDir = v;

	// Do the same for the vertex position in eye space
	pos = vec3(gl_ModelViewMatrix * gl_Vertex);

	v.x = dot(pos, t);
	v.y = dot(pos, b);
	v.z = dot(pos, n);

	viewDir = v;
}

Fragment Shader:

uniform vec3 light_color1;
uniform float evenness;

uniform sampler2D texture0;

varying vec3 lightDir;
varying vec3 viewDir;

// simple exponential exposure / tone-mapping curve
float expose(float light) {
	return 1.0 - exp(-light * 4.0);
}

void main (void) {
	//if (gl_FrontFacing) {
		vec3 fragment_normal	= vec3(0.0, 0.0, 1.0);
		float distance		= length(lightDir);
		vec3 light_direction 	= normalize(lightDir);
		vec3 view_direction	= normalize(viewDir);
				
	
		float attenuation_factor = 1.0 / max(sqrt(distance), 1.0);

		vec3 ambient = gl_FrontMaterial.ambient.xyz;
		
		vec3 diffuse = gl_FrontMaterial.diffuse.xyz * max(dot(light_direction, fragment_normal), 0.0);
		
		vec3 specular = gl_FrontMaterial.specular.xyz * pow(max(dot(view_direction, normalize(reflect(fragment_normal, light_direction))), 0.0), evenness);

		vec3 color = light_color1* (ambient + attenuation_factor*(diffuse + specular)); 


		color = color * vec3( texture2D(texture0, vec2(gl_TexCoord[0])) ); 
 	
		gl_FragColor = vec4(expose(color.x), expose(color.y), expose(color.z), 1.0);
	//}
}

Thanks,
spl@t

Now I know why it doesn't work:
I thought the attributes tangent and binormal were built in, so I didn't pass them from the program to the shader…
But I don't want to calculate something like that outside the shader!
How can I do it in the vertex shader?

EDIT: Here is a quote from another website:

To find the tangent space vectors at a vertex, use the vertex normal for N, find the tangent axis by finding the vector direction of increasing s in the object’s coordinate system (the direction of the texture’s s axis in the object’s space).
This is exactly what I meant in my first post.
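
For reference, the "direction of increasing s" can be computed per triangle from the edge vectors and the texture-coordinate deltas (this is what the tangent attribute usually contains, and it is normally filled in on the CPU). The formula itself is small; written here as a GLSL-style helper purely for illustration:

// p0..p2 are the triangle's vertex positions in object space,
// uv0..uv2 the matching texture coordinates
vec3 computeTangent(vec3 p0, vec3 p1, vec3 p2,
                    vec2 uv0, vec2 uv1, vec2 uv2)
{
    vec3 e1 = p1 - p0;
    vec3 e2 = p2 - p0;
    vec2 d1 = uv1 - uv0;
    vec2 d2 = uv2 - uv0;

    // Solve the 2x2 system that maps (s, t) deltas onto the edges;
    // the s direction gives the tangent, the binormal is then cross(N, T)
    float r = 1.0 / (d1.x * d2.y - d2.x * d1.y);
    return normalize((e1 * d2.y - e2 * d1.y) * r);
}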

Well, I haven't used derivatives extensively. They are documented in the GLSL spec starting on the page labeled 58 (page 64 in the PDF). To get the normal by using derivatives on the height map, I think you'd need something like this:

n = cross(dFdx(texture2D(heighttex, t0).xyz),
          dFdy(texture2D(heighttex, t0).xyz));

I’ve never actually used this before, so don’t quote me on it. However, using this technique will be MUCH slower than using a precomputed normal map and putting the light vector in tangent space (see articles everywhere for that).
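
For what it's worth, a variant that builds a perturbed normal from the height derivatives directly might look like the sketch below. Note that dFdx/dFdy are screen-space derivatives, so this is only a rough, view-dependent approximation; heighttex and t0 are as in the snippet above, while bumpScale is a made-up uniform:

uniform sampler2D heighttex;
uniform float bumpScale;   // assumed: bump strength factor

varying vec2 t0;           // height-map texture coordinate

void main()
{
    float h = texture2D(heighttex, t0).r;

    // Screen-space rate of change of the height at this fragment
    float dhdx = dFdx(h);
    float dhdy = dFdy(h);

    // Treat the slopes as a perturbation of a normal facing the viewer
    vec3 n = normalize(vec3(-dhdx * bumpScale, -dhdy * bumpScale, 1.0));

    // Visualise the resulting normal
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
}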

Also, I have no idea whether dFdx etc. work on most of today's cards, whereas I know that normal mapping will work on pretty much all of them.

Look at NVIDIA presentations for all this craziness (earliest paper I could find).
