Dot3 bump mapping on a GF2

I have a Radeon, and my engine uses a bump mapping method that takes 2 textures + 1 color. How can I make this work on the GeForce2 (or any other dual-texturing hardware)? Can I get the same bumps if I modulate on the first texture unit, then take the dot product of that result and the other texture? (I currently take the first texture, dot product it with the second texture, then modulate with the primary color.) I know I could use register combiners on the GeForce, but it would be nice not to have to (especially since I don't have a testing machine).

What extensions are you using currently to do your bump mapping? If those extensions are supported on a GeForce, then it will work there too.

If you’re using GL_ARB_texture_env_dot3 and no more than two textures you should be safe.
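To pick the right code path at runtime you can check for the extension before relying on it. A minimal sketch of a token-wise check (a plain substring search can false-positive on longer extension names); `has_extension` is my own helper name, not from any SDK:

```c
#include <string.h>

/* Returns 1 if `name` appears as a complete token in the space-separated
   extension string (e.g. the result of glGetString(GL_EXTENSIONS)). */
int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        /* make sure we matched a whole token, not a prefix of a longer name */
        if ((p == extensions || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}
```

Usage would be something like `has_extension((const char *)glGetString(GL_EXTENSIONS), "GL_ARB_texture_env_dot3")`.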

  1. Is there any sample code for this form of Dot3 bump mapping?
  2. It doesn't seem THAT complicated to set up, or does it?
  3. What are the disadvantages, if there are any?

Regards,
Diapolo

  1. There is sample code for it in the Radeon SDK.
  2. It's very easy to use.
  3. It's just fixed function, but it does the job.

Here's some code to set up a texture unit to do DOT3 with the previous texture.

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_DOT3_RGBA_ARB);

glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);

glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
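For the two-texture-unit question that started the thread, both stages together might look like this (my sketch, assuming the normal map is bound to unit 0, the color map to unit 1, and the range-compressed light vector is in the primary color):

```c
/* unit 0: dot3 between the range-compressed primary color and the normal map */
glActiveTextureARB(GL_TEXTURE0_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_DOT3_RGB_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_PRIMARY_COLOR_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);

/* unit 1: modulate the dot result with the color map */
glActiveTextureARB(GL_TEXTURE1_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
```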

How do you do it on your Radeon? Do you use one, two or three operations? I think two… one on the texenv of texture stage 0 and one on the texenv of texture stage 1, right?

meaning:
activetex( 0 )
dot3( tex0, color )
activetex( 1 )
modulate( previous, tex1 )

something like this?

Works on a GF2, too… ( on the original GF too, I think… ) and for sure on a GF3.

You just need new enough drivers ( I think you currently need leaked ones, as long as no newer driver is officially out… 6.5 official, 12.1 currently leaked… hihi ), because DOT3 is supported in the new drivers… on older ones you need the register combiners to do it… they are complicated, but you can get very, very nice results in the end…

I currently do diffuse and specular bump mapping on a GF2 with the RCs in one pass… ( with color map etc. )
It looks like this:
tex0.rgb = normal map, tex0.alpha = transparency
tex1.rgb = diffuse map, tex1.alpha = gloss map
color0 = point_to_light ( on a GF3 this could be tex2, with a normalization map… )
color1 = half-angle ( on a GF3 this could be tex3, with a normalization map… )

and the equation I compute is

src.rgb = ( light dot normal ) * diffuse + ( light.z > 0 ? gloss * ( halfangle dot normal )^5 : 0 )
src.alpha = transparency

and then blend it as usual… glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA )

looks nice…

[This message has been edited by davepermen (edited 05-04-2001).]

I would be very interested to see how you set that equation up in register combiners. Also, is your normal map a displacement map or the actual normals at those points?

As I use a GF2 I don't have displacement maps… what I use is:

a real normal map, with rgb = xyz and x^2 + y^2 + z^2 equal to 1 ( more or less ). Code to generate such maps can be found on nvidia.com, as always… I use tangent-space calculations, and so I use a vertex program to set up the textures/colors/the whole lighting…

here is the rendering function:

				VERTEX& cur = mymesh.vdata[ mymesh.fdata[ f * 3 + i ] ];
				glVertexAttrib3fvNV( 1, cur.T );
				glVertexAttrib3fvNV( 2, cur.S );
				glVertexAttrib3fvNV( 3, cur.nrml );
				glVertexAttrib3fvNV( 4, cur.tex0 );
				glVertexAttrib3fvNV( 7, light );//just for now..
				glVertexAttrib3fvNV( 8, eye );//just for now.. too
				glVertex3fv( cur.pos );

That's in the loop for rendering indexed meshes… it will be replaced by arrays etc., but that's not so important for now…
This is the vertex program:

	"!!VP1.0\n"

	//project pos to screen
	TRANSFORM_4X4_4inst( "o[HPOS]", "v[0]", "c[0]", "c[1]", "c[2]", "c[3]" )

	//normalization of the tangent space.. not important in fact, should already be done **! BUT IT'S NOT !**
	NORMALIZE_TO_3inst( "R2", "v[1]" )
	NORMALIZE_TO_3inst( "R3", "v[2]" )
	NORMALIZE_TO_3inst( "R4", "v[3]" )

	//light into objectspace.. transform it by the inverse matrix **! can be done before.. !**
	TRANSFORM_4X4_4inst( "R5", "v[7]", "c[4]", "c[5]", "c[6]", "c[7]" )

	//point_to_light
	"ADD R5, -R5, v[0];"
	NORMALIZE_3inst( "R5" ) // for later use in halfanglecalculation..
	TRANSFORM_3X3_3inst( "R6", "R5", "R2", "R3", "R4" )
	NORMALIZE_3inst( "R6" )
	"MAD o[COL0], R6, c[12].x, c[12].x;" //unsigned color mapping.. on gf3 could be a MOV o[TEX2], R6 if TEX2 is a normalizationmap, for example

	//specular: transform eye into objectspace and then into tangent space
	TRANSFORM_4X4_4inst( "R1", "v[8]", "c[4]", "c[5]", "c[6]", "c[7]" )
	TRANSFORM_3X3_3inst( "R7", "R1", "R2", "R3", "R4" )
	//shorten it to 1
	NORMALIZE_3inst( "R7" )

	//calculate halfangle
	"ADD R7, -R7, R6;"
	NORMALIZE_3inst( "R7" )
	"MAD o[COL1], R7, c[12].x, c[12].x;" //unsigned color mapping.. on gf3 could be a MOV o[TEX3], R7 if TEX3 is a normalizationmap, for example

	//texture coordinates
	"MOV o[TEX0].xy, v[4];"
	"MOV o[TEX1].xy, v[4];"
	"END"

these are my textures:
glActiveTextureARB( GL_TEXTURE0_ARB );
mytex = CreateNormalMap( "bump.bmp", 10 );
glBindTexture( GL_TEXTURE_2D, mytex );
glEnable( GL_TEXTURE_2D );

glActiveTextureARB( GL_TEXTURE1_ARB );
myctex = LoadTexture( "color.tga" );
glBindTexture( GL_TEXTURE_2D, myctex );
glEnable( GL_TEXTURE_2D );

and these are my combiners:
glEnable( GL_REGISTER_COMBINERS_NV );
glCombinerParameteriNV( GL_NUM_GENERAL_COMBINERS_NV, 2 );
//combiner0
{
//rgb
{
//color0.rgb = light dot normal
//color1.rgb = halfangle dot normal
glCombinerInputNV( GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB );
glCombinerInputNV( GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB );
glCombinerInputNV( GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV, GL_SECONDARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB );
glCombinerInputNV( GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV, GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB );
glCombinerOutputNV( GL_COMBINER0_NV, GL_RGB, GL_PRIMARY_COLOR_NV, GL_SPARE1_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_TRUE, GL_FALSE );
}
//alpha
{
//spare0.a = light.z
glCombinerInputNV( GL_COMBINER0_NV, GL_ALPHA, GL_VARIABLE_A_NV, GL_ZERO, GL_UNSIGNED_INVERT_NV, GL_ALPHA );
glCombinerInputNV( GL_COMBINER0_NV, GL_ALPHA, GL_VARIABLE_B_NV, GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_BLUE );
glCombinerOutputNV( GL_COMBINER0_NV, GL_ALPHA, GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE );
}
}

//combiner1
{
	//rgb
	{
		//spare0.rgb = color0.rgb * diffuse = ( light dot normal ) * diffuse
		//spare1.rgb = color1.rgb ^2 = ( halfangle dot normal ) ^2
		glCombinerInputNV( GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_A_NV, GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB );
		glCombinerInputNV( GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_B_NV, GL_TEXTURE1_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB );
		glCombinerInputNV( GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_C_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB );
		glCombinerInputNV( GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_D_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB );
		glCombinerOutputNV( GL_COMBINER1_NV, GL_RGB, GL_SPARE0_NV, GL_SPARE1_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE );
	}
	//alpha
	{
		//spare0.a = spare0.a <= .5f ? 0 : spare1.blue * specular = light.z > 0 ? ( halfangle dot normal ) ^2 * specular : 0
		glCombinerInputNV( GL_COMBINER1_NV, GL_ALPHA, GL_VARIABLE_A_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_ALPHA );
		glCombinerInputNV( GL_COMBINER1_NV, GL_ALPHA, GL_VARIABLE_B_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_ALPHA );
		glCombinerInputNV( GL_COMBINER1_NV, GL_ALPHA, GL_VARIABLE_C_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_BLUE );
		glCombinerInputNV( GL_COMBINER1_NV, GL_ALPHA, GL_VARIABLE_D_NV, GL_TEXTURE1_ARB, GL_UNSIGNED_IDENTITY_NV, GL_ALPHA );
		glCombinerOutputNV( GL_COMBINER1_NV, GL_ALPHA, GL_DISCARD_NV, GL_DISCARD_NV, GL_SPARE0_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_TRUE );
	}
}

//e_times_f
{
	// e_times_f = spare1.rgb ^2 = ( ( halfangle dot normal ) ^2 ) ^2 = ( halfangle dot normal ) ^4
	glFinalCombinerInputNV( GL_VARIABLE_E_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB );
	glFinalCombinerInputNV( GL_VARIABLE_F_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB );
}
//a * b
{
	//a * b = spare0.a * e_times_f = ( light.z > 0 ? ( halfangle dot normal ) ^2 * specular : 0 ) * ( halfangle dot normal ) ^4 = light.z > 0 ? ( halfangle dot normal ) ^5 : 0
	glFinalCombinerInputNV( GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_ALPHA );
	glFinalCombinerInputNV( GL_VARIABLE_B_NV, GL_E_TIMES_F_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB );
}

glFinalCombinerInputNV( GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB );

// +d = + spare0.rgb = + ( light dot normal ) * diffuse
glFinalCombinerInputNV( GL_VARIABLE_D_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB );

//FINAL EQUATION:
// a * b + d
// light.z > 0 ? ( halfangle dot normal ) ^5 : 0 + ( light dot normal ) * diffuse
// ( light dot normal ) * diffuse + light.z > 0 ? ( halfangle dot normal ) ^5 : 0

//FINAL ALPHA: normalmap.alpha
glFinalCombinerInputNV( GL_VARIABLE_G_NV, GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_ALPHA );

Now you have it all… if you want you can take this code to render nice ****ing meshes with specular and all… never mind that it took me weeks/months to get this… I'm not too proud to put it out into the world. Btw, if you find some bugs or something, please tell me. I think it's correct so far, but I'm not finished at all… I plan to code a nice GF3 version with much more stuff and a 2nd pass for the environment map and and and… YEAHHHHHHHHH

love it, just love it

if you want to see the demo, you have bout 30minutes to grab it from here: http://194.230.160.81/bumpmesh.zip

hope this is what you wanted…

ok, i forgot the constant registers

here:
glTrackMatrixNV( GL_VERTEX_PROGRAM_NV, 0, GL_MODELVIEW_PROJECTION_NV, GL_IDENTITY_NV );
glTrackMatrixNV( GL_VERTEX_PROGRAM_NV, 4, GL_MODELVIEW, GL_INVERSE_NV );
glProgramParameter4fNV( GL_VERTEX_PROGRAM_NV, 12, .5f, 0, 0, 1 );

happy now?

Here’s how I was doing it…

1st texture unit - Texture (replace)
2nd texture unit - Texture (dot3)
3rd texture unit - Pri. Color (Modulate)

and I am wondering how to do it with 2 texture units. It would be easy if

1st tex unit - Texture (Modulate with pri color)
2nd tex unit - Texture (dot3)

but wouldn’t this turn up all kinds of artifacts? Anyway, the way my engine uses all the bump maps is this (maybe I could change how I use them)

-Clears screen to (0,0,0,0)

-Blend mode: add
-does all the bump maps (for each significant light source)

-Blend mode: multiply
-renders with normal texture
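That pass sequence might be sketched like this in GL calls (my sketch; `draw_bump_pass`, `draw_textured`, `lights` and `num_lights` are hypothetical placeholders for the engine's own draw code):

```c
/* clear to black so the additive passes start from zero */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glEnable(GL_BLEND);
glDepthFunc(GL_LEQUAL);

/* additive pass per significant light: lighting accumulates in the framebuffer */
glBlendFunc(GL_ONE, GL_ONE);
for (int i = 0; i < num_lights; ++i)
    draw_bump_pass(&lights[i]);

/* multiply pass: modulate the accumulated lighting by the base texture */
glBlendFunc(GL_DST_COLOR, GL_ZERO);
draw_textured();
```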

Notice that I use the primary color, which is the whole problem. I am strongly considering just setting up register combiners for anything using a GeForce, but it would be nice if it worked without them. Any advice on how to do it… or do it better… would be appreciated.

[This message has been edited by HFAFiend (edited 05-04-2001).]

activetex( 0 )
dot3( tex0, color )
activetex( 1 )
modulate( previous, tex1 )

It should work nicely like this… with two texture units…

You can use source0 and source1, operand0 and operand1, can't you?

Then you can set it up like this:
texstage0:
combiner = dot3
source0 = texture0
operand0 = src_color
source1 = texture1
operand1 = src_color

Ok, I see… you can't access texture1 on texstage0, can you? On a GF you can… it should work… else I can write simple RC code for it, if you want… just say so and I'll check whether it's possible and do it if so… ( try )

Hm… it looks like register combiners are the right option. Is there a good tutorial anyone would suggest, or should I just look through NVIDIA's site? (Thanks davepermen)

Register combiner tutorials are hard to come by.

First, I would suggest reading the extension spec about 10 times. After that, you should understand about 10% of how it works.

Next, go download a demo that uses register combiners (a bump map or other per-pixel lighting demo should suffice). Find where it sets up its register combiners and try to understand what it’s doing. Have your extension spec handy; you will need it.

After that, you should have a pretty good grasp of NV_register_combiners.

To really understand how the combiners work (what you can do and what you can't), and if you have Microsoft PowerPoint, I would take a look at NVIDIA's OpenGL SDK documentation: http://www.nvidia.com/Marketing/Develope…GL_sdk_docs.zip

It's great because it shows graphically how they work… and then you can search nvidia.com for demos, documentation etc. to get the math behind it and more info about how to set them up… demos, demos, demos is the best way, I think… have fun

texture0 = normalmap
primary_color = light_dir
texture1 = diffusecolormap

activetexture( 0 )
combiner = dot3
source0 = texture
source1 = primarycolor
activetexture( 1 )
combiner = modulate
source0 = previous
source1 = texture

should work perfectly…

else, for combiners:

glCombinerParameteriNV( GL_NUM_GENERAL_COMBINERS_NV, 1 );

glCombinerInputNV( GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB );
glCombinerInputNV( GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB );
glCombinerOutputNV( GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE ); // stores the dot product of A and B in SPARE0

glFinalCombinerInputNV( GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB );
glFinalCombinerInputNV( GL_VARIABLE_B_NV, GL_TEXTURE1_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB );
glFinalCombinerInputNV( GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB );
glFinalCombinerInputNV( GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB );

Something like that… but try it with the tex_env code above instead of combiners first… your calculation is not that complex, it should be possible without them…

I really need some help to understand the whole dot3 bump mapping stuff.

I know I need:

  • a simple 2D texture
  • a normal map of this 2D texture
    (A small question here: in an ATI example the normal map has an alpha channel, but it's only white. If I generate one using NVIDIA's NormalMapGen tool, I get a quite different alpha channel. In which way is the alpha channel of the normal map used, if at all?)
  • an extension to do the dot3 between the normal map (and the primary color that represents the light vector??? -> I'm not sure about the text in brackets)

How can I get the light vector into the primary color?
I tried enabling lighting with a rotating cube and the following texenv setup, but it seems like there's something wrong:

glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, gluiTextureIDs[4]);

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_DOT3_RGBA_EXT);

glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE0_RGB_EXT, GL_PRIMARY_COLOR_EXT);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND0_RGB_EXT, GL_SRC_COLOR);

glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE1_RGB_EXT, GL_TEXTURE);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB_EXT, GL_SRC_COLOR);

glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, gluiTextureIDs[3]);

glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_MODULATE);

glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE0_RGB_EXT, GL_PREVIOUS_EXT);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND0_RGB_EXT, GL_SRC_ALPHA);

glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE1_RGB_EXT, GL_TEXTURE);
glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB_EXT, GL_SRC_COLOR);

Could someone give me a short but EASY to understand set of instructions for dot3 bump mapping?

Regards,
Diapolo

Thanks again davepermen!

[This message has been edited by HFAFiend (edited 05-07-2001).]

Ok, you have to calculate the light vector per vertex, meaning the normalized vertex-to-light vector… that's a problem for gluDisc and friends… because you have no access to their vertices… sorry…

Next, you have to compress this vector into an unsigned color… this is simple… you have x, y and z components which are in the [-1,1] range ( from -1 to 1 ), and you need to get them into [0,1]… like this:

//lightvector = light.x/.y/.z
//compressed_to_color = color.r/.g/.b

color.r = .5 * light.x + .5
color.g = .5 * light.y + .5
color.b = .5 * light.z + .5

and this you have to put into the primary color, which you do with

glColor3fv( &color.r );

that's all… for every vertex…
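That range compression, as a tiny self-contained C helper (the struct and function names here are just for illustration, not from any SDK):

```c
/* Range-compress a normalized vector from [-1,1] into an RGB color in [0,1],
   as needed for the per-vertex light vector in DOT3 bump mapping. */
typedef struct { float x, y, z; } vec3;
typedef struct { float r, g, b; } rgb;

rgb compress_to_color(vec3 v)
{
    rgb c;
    c.r = 0.5f * v.x + 0.5f;
    c.g = 0.5f * v.y + 0.5f;
    c.b = 0.5f * v.z + 0.5f;
    return c;
}
```

Then per vertex: `rgb c = compress_to_color(light_vec); glColor3f(c.r, c.g, c.b);`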

Thanks davepermen, but one question left for now!

Is this the correct way to get the light vector for a vertex?

color.r = (lightpos[0] - vertexpos[0]);
color.g = (lightpos[1] - vertexpos[1]);
color.b = (lightpos[2] - vertexpos[2]);

// normalize
GLfloat fValue = sqrt((color.r*color.r) + (color.g*color.g) + (color.b*color.b));

color.r = color.r / fValue;
color.g = color.g / fValue;
color.b = color.b / fValue;

I've heard of transforming the light vector into tangent space… what's this and do I need it?

Regards,
Diapolo

[This message has been edited by Diapolo (edited 05-07-2001).]

Tangent space is simple to understand… in your normal map texture you have normals, and they face up and in other directions than up. But when they face straight up, [ 0, 0, 1 ], they point in the direction of the NORMAL OF THE FACE, and NOT up in the space of your mesh… think about a cube: on the top face, an up-normal from the map really points up in world space, but on the bottom face the same up-normal points down… that means you have to transform the light into this per-vertex space so everything agrees…
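The transform davepermen describes comes down to three dot products against the per-vertex tangent, binormal and normal. A minimal self-contained sketch (struct and function names are my own, purely illustrative):

```c
typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Transform a light vector into the tangent space spanned by the per-vertex
   tangent T, binormal B and normal N: one dot product per axis. */
vec3 to_tangent_space(vec3 light, vec3 T, vec3 B, vec3 N)
{
    vec3 out;
    out.x = dot3(light, T);
    out.y = dot3(light, B);
    out.z = dot3(light, N);
    return out;
}
```

The result is then range-compressed and passed as the primary color, as described above.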

After all this I think I have to write a tutorial… first let me get my lighting equation fully under control… then I'll do what I can to make bump mapping easy to understand… ok? Good, see ya later guys.