NV Register Combiners questions



Diapolo
03-06-2002, 10:00 AM
Hi,

I will have to take a deep look into NV's register combiners before I can start coding with them (and I don't want to use nvparse at the beginning).

But before the whole learning process, I've got some questions.

1. Is it possible to do a cube map normalisation + diffuse DOT3 Bump Mapping on GF2 with 2 general combiners and 2 texture units in 1 pass (I currently do this with GL_ARB_texture_env_combine, GL_ARB_texture_env_dot3 and 3 active texture units on my GF3 in 1 pass)?

2. Is the quality / speed of the register combiners superior to what I'm currently doing?

3. What's the best solution for doing DOT3 bump mapping on GF2 in one pass with RCs (only diffuse / diffuse + specular / diffuse + CM normalisation ... etc.)?

Diapolo

Korval
03-06-2002, 10:32 AM
1: Yes, but you can't have a base texture (as that would require 3 texture units) at the same time.

2: Well, seeing as how nVidia probably implements the OpenGL TexEnv stuff in Register combiners, yes.

3: How many ways to do it are there? As far as I know, there's only one way, and it works.

Diapolo
03-06-2002, 02:07 PM
Korval:

1. That was the question, is it possible with a base texture in one pass on a GeForce2? And your answer is NO, right?

2. Could you give me a better hint at what you mean by your 2nd answer :)?

3. I dunno how many ways there are, I'm searching for the best way to do it in one pass.
Should be base texture modulated with N.L I guess ... but can I achieve some form of normalisation on GF2 with the 2 combiners and the 2 texture units I have?
Perhaps a way to add the specular component (for which I have to learn the math, currently I only know how to do the diffuse part).

Diapolo

Korval
03-06-2002, 07:16 PM
1: A GeForce 2 can only apply 2 textures. The bump-map and the renormalization map take up those two slots. So you will have to go to a second pass, or just not renormalize the interpolated normal (it doesn't look that bad for a lot of things).

2: What I mean is that the actual hardware that does texture combining, that implements glTexEnv calls on the GeForce 2, is almost certainly the register combiner hardware. What the NV_register_combiners extension allows you to do is program that hardware directly, rather than use the normal OpenGL commands to get at that functionality. The actual hardware is significantly more powerful than what the glTexEnv calls let you get to.

3: I seem to recall a paper on nVidia's web site about a way to approximate the renormalization of a normal without using a cubemap. However, that may have required more register combiner operations than a GeForce 2 has.

SirKnight
03-07-2002, 03:28 AM
3: I seem to recall a paper on nVidia's web site about a way to approximate the renormalization of a normal without using a cubemap. However, that may have required more register combiner operations than a GeForce 2 has.


Yes, this was done using the register combiners, but it takes 2 general combiners to normalize one vector, 3 general combiners for 3 vectors and so on. So unless you have a GeForce 3 or 4 Ti, you can't do it. Well, I say you can't; you can, but it will be in software and slow as hell. :) But they [nvidia] also said that using the register combiner normalizing technique is faster than using the cube map method (on the GF3 & 4 Ti of course).
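
(For reference, the approximation in that paper is, as far as I remember, a single Newton-Raphson style step: N' ≈ N * (3 - N.N) / 2. Written out on the CPU just to show what the two combiner stages compute - treat this as a sketch, not the actual combiner setup:)

/* what the two general combiner stages compute, shown on the CPU (sketch only) */
void approx_renormalize(float n[3])
{
    float d = n[0]*n[0] + n[1]*n[1] + n[2]*n[2];   /* stage 1: N.N */
    float s = 0.5f * (3.0f - d);                   /* stage 2: N + (N/2) * (1 - N.N) */
    n[0] *= s; n[1] *= s; n[2] *= s;
}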

-SirKnight

Diapolo
03-07-2002, 04:11 AM
I see, so the "best" thing for DOT3 bump mapping on a GeForce 2 is N.L <modulate> base texture without any normalisation technique (1 pass).
I'm currently doing this via the combine and dot3 extensions and I will try to convert that into register combiner code (like I said, I first have to read through the whole tech docs *g*).
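
(For comparison, the TexEnv setup being converted typically looks something like this - a minimal sketch, assuming the normal map on unit 0, the tangent-space light vector in the primary color and the base texture on unit 1:)

// unit 0: N.L via ARB_texture_env_combine / ARB_texture_env_dot3
glActiveTextureARB(GL_TEXTURE0_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_DOT3_RGB_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PRIMARY_COLOR_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);

// unit 1: modulate the result with the base texture
glActiveTextureARB(GL_TEXTURE1_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);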

But this whole stuff should be possible in 1 pass with diffuse AND specular, right (read that somewhere in the NV tech docs)?

But I still don't understand how to do this (diffuse + specular).

The equation I currently use is:
final Color = diffuse Color * (N <dotprod> L)

I know how to compute L in tangent space and all that stuff.

But how can I get H (which is the half-angle vector)?
And how should I set my secondary color?

Diapolo

davepermen
03-07-2002, 07:02 AM
I have an application done which does diffuse and specular in one pass, with different exponents and a glossmap, and a colormap.. but it's only a fake, in fact.. it's cool to push the GF2 to the end of its power (using the two textures RGBA and the two colors RGB and RGB.. (I leave out one alpha, yes ;))

it even self-shadows the bumps, which is an (IMHO) important graphical improvement for solid geometry (for water you can disable it)

register combiners are nicely powerful, I can't wait for my GF4 with 4 textures and many more general combiners (oh, and tex-shaders, too ;))

here is the demo: http://tyrannen.starcraft3d.net/PerPixelLighting

but I don't have the source anymore..

Diapolo
03-08-2002, 04:30 PM
OK, I read through the NV_register_combiners specs.
For now I've got only 2 questions (more to come? *g*):

1. If I use glCombinerOutputNV without a GL_TRUE for the dot product and muxSum flags, then the combiner does an AB multiply and a CD multiply?

2. SPARE0_NV and SPARE1_NV are in some way "temp storage registers" that don't directly affect the final computed output color?

Diapolo

Btw.: How many general combiners does the GF4 support (GF1/GF2: 2 - GF3: 8)?

[This message has been edited by Diapolo (edited 03-08-2002).]

V-man
03-08-2002, 08:23 PM
1. You can do an AB and a CD and an AB+CD, but to do that you need to tell it where you want the computed value (output) to go (see the sketch below). Yes, you can set the last 3 to GL_FALSE at the same time.

2. They are storage units, but spare0 "is hardwired in the color sum unit" in the final combiner stage. Be careful with that.

3. I see support for NV_register_combiners and NV_register_combiners2 at http://developer.nvidia.com/view.asp?IO=nvidia_opengl_specs
so I'm assuming it hasn't seen further development. Correct me if I'm wrong please.
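
(To illustrate point 1, a rough sketch of the output routing - the register choices here are just an example: AB and CD can go to separate registers, or only their sum can be kept when neither portion is a dot product:)

// AB -> spare0, CD -> spare1, AB+CD -> discarded (no dot products, no mux)
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                   GL_SPARE0_NV,      // abOutput
                   GL_SPARE1_NV,      // cdOutput
                   GL_DISCARD_NV,     // sumOutput
                   GL_NONE, GL_NONE,  // scale, bias
                   GL_FALSE, GL_FALSE, GL_FALSE);

// or keep only the sum: AB+CD -> spare0
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                   GL_DISCARD_NV, GL_DISCARD_NV, GL_SPARE0_NV,
                   GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);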

V-man

Diapolo
03-09-2002, 04:24 AM
V-Man:

1. If I set the last ones to GL_FALSE I will get an A*B, C*D multiply and a result for (A*B)+(C*D), right?

2. You are right, Spare0 is hardwired to the Color sum unit.
One question on that, what is the initial value of the secondary color if I didn't set it?

3. And I thought that perhaps the GF4 would have more general combiners than the GF3 has, even without NV_register_combiners3 :). Not that I would be able to use all the 8 combiners I have on my GF3 *g* ... I only wanted to know.

Diapolo

V-man
03-09-2002, 06:19 AM
Originally posted by Diapolo:
V-Man:

1. If I set the last ones to GL_FALSE I will get an A*B, C*D multiply and a result for (A*B)+(C*D), right?

2. You are right, Spare0 is hardwired to the Color sum unit.
One question on that, what is the initial value of the secondary color if I didn't set it?

3. And I thought that perhaps the GF4 would have more general combiners than the GF3 has, even without NV_register_combiners3 :). Not that I would be able to use all the 8 combiners I have on my GF3 *g* ... I only wanted to know.

Diapolo

1. Yes, that case will work. If you do set some of those to GL_TRUE, it may not, because of hardware limitations: when AB or CD is a dot product (or the mux is used), the AB+CD sum output has to be discarded.

2. Secondary color is an incoming value computed by some part of the GeForce. If there is no specular (secondary color), it will be zero.

3. For more functionality they have to add a new extension, so that we'll know new registers are available. But why should that stop us from upgrading!


V-man

Diapolo
03-10-2002, 02:01 PM
Cool, I figured out how to do the simplest form of DOT3 bump mapping via the RCs ... it doesn't seem too complicated to use, but I guess there are some pitfalls that one has to look out for.

Here is my current RC code:






// UNIT 0 (Normal Map)
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, TEXTURES.Return_TextureIDs(3));
glEnable(GL_TEXTURE_2D);
// UNIT 1 (Base Texture)
glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D, TEXTURES.Return_TextureIDs(2));
glEnable(GL_TEXTURE_2D);

// Register Combiner Setup Code

glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

// combiner 0, RGB portion: spare0 = expand(tex0) <dot> expand(primary color) = N.L
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

// final combiner: out = A*B + (1-A)*C + D = spare0 * tex1 (N.L * base texture)
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_TEXTURE1_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);

glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);




I've got a question on secondary color.
Do I have to use GL_EXT_secondary_color to set a per-vertex secondary color, if I want to use this color in my RC code?

Diapolo

SirKnight
03-10-2002, 02:55 PM
I've got a question on secondary color.
Do I have to use GL_EXT_secondary_color to set a per-vertex secondary color, if I want to use this color in my RC code?


I'm pretty sure you do. At least I have always used that extension for secondary colors, I've never seen it done any other way.

-SirKnight

McBain
03-10-2002, 03:28 PM
Originally posted by SirKnight:
I'm pretty sure you do. At least I have always used that extension for secondary colors, I've never seen it done any other way.

-SirKnight

You can do it without the extension:
just use glVertexAttrib3dNV to put the 2nd color directly in the vertex program register.
Am I right?

Diapolo
03-10-2002, 03:52 PM
I guess you are right McBain, but currently I don't use vertex programs :).

First I'll learn register combiners and do all my vertex math on the CPU (currently that works).
And after I know how to do great-looking bump mapping and per-pixel lighting, I'll take a look at vertex programs and texture shaders :).

By the way, what is a good code example / sample app if I want to learn how to do diffuse + specular DOT3 bump mapping with the RCs (I know about the diffuse part, but not much about the specular part ... mainly how to compute the half-angle vector H)?

Diapolo

SirKnight
03-10-2002, 08:49 PM
That half-angle vector is just the view vector (negated) and the light vector (negated) added together. If you remember, adding two vectors gives you the vector in between the two. So knowing that, you should be able to see why those two vectors should be negated. Try it on paper if you need to, that's what I did. :) Oh, and remember when you dot N and H to raise it to some specular exponent. I use a function that makes it look like it was raised to the power of 16. It works quite well.
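
(A minimal sketch of that, assuming L and V are already-normalized surface-to-light and surface-to-eye vectors - with vectors pointing the other way around you negate them first, as described above; the helper name is just made up:)

/* needs <math.h> for sqrt */
void half_angle(const float L[3], const float V[3], float H[3])
{
    float len;
    H[0] = L[0] + V[0];
    H[1] = L[1] + V[1];
    H[2] = L[2] + V[2];
    len = (float)sqrt(H[0]*H[0] + H[1]*H[1] + H[2]*H[2]);
    if (len > 0.0f) { H[0] /= len; H[1] /= len; H[2] /= len; }
}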

Also, McBain is correct about the vertex attrib thing, but if you're not using vertex programs, it looks like you're stuck using the EXT_secondary_color extension. I totally forgot about the vertex attrib functions at the time, otherwise I would have also stated that. :)

-SirKnight

Diapolo
03-11-2002, 06:17 AM
That leads me to the question of how to compute the view vector :).

Light Vector is simply light position - vertex position.

View vector = camera position - vertex position?

Diapolo

cass
03-11-2002, 06:23 AM
Diapolo,

Yep, that's right. And in eye space, the camera position is (0,0,0), so the (unnormalized) view vector is just (-vertex_position).

Thanks -
Cass

Diapolo
03-11-2002, 06:42 AM
Thanks for that reply cass :).

Correct me if I'm wrong: the vertex position I have is in object space and (0.0f, 0.0f, 0.0f) is the camera position in eye space.
So in order to compute the view vector in object space, I have to multiply the cam pos in eye space by the inverse modelview matrix.
After that I get the cam pos in object space and can compute:

view vector_OS = cam pos_OS - vertex pos_OS

That is because the light vector I compute is in object space too, and in order to get a correct half-angle vector I guess the 2 vectors have to be in the same space, right?

Diapolo

davepermen
03-11-2002, 07:59 AM
I'm not 100% sure if I read your post correctly.. but one thing:
if you need to transform the eye-pos from eye space ((0,0,0) for the eye pos there ;)) into some other space by a matrix, just take the translation part of that matrix (the fourth column in OpenGL's column-major layout).. why? because the 3x3 part of the matrix gets multiplied with the (0,0,0) vector.. so don't care about it :)
should help to find the correct vector faster..

jwatte
03-11-2002, 08:17 AM
The code you're posting (putting light vectors in through the color channel) will not be correctly per-pixel because it doesn't normalize the light vectors. The code you posted only used a single texture, though, so you can add that.

I can't figure out a way to normalize both diffuse and specular AND use a normal bump map, all in one pass on a GF2.

As far as transforming/normalizing light vectors, this gets even more fun when you do skinned models, because the transformed light vector is actually a blend of the inverses of several different matrices. Oh, joy! Luckily, these are just the transposes of the regular vertex position matrices, nuking the translation part, because I don't allow scale/skew.

Diapolo
03-11-2002, 01:12 PM
@davepermen:

If I understand you right, it's pretty simple (+ fast) to convert the cam pos into object space.

I only need the 12th, 13th and 14th components of the inverse modelview matrix (its translation part), because a multiply with 0 leads to 0.
And 12, 13, 14 is the X, Y, Z translation, right?

So the resulting vector would be (with CurMViewMatrix holding the inverse modelview):
CamPos_OS[0] = CurMViewMatrix[12];
CamPos_OS[1] = CurMViewMatrix[13];
CamPos_OS[2] = CurMViewMatrix[14];
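
(One caveat: those have to be the elements of the inverse modelview matrix. If only the regular modelview is at hand and it contains no scale/skew, a sketch of getting the object-space camera position from it directly could look like this - CurMViewMatrix is assumed here to hold the plain modelview, column-major as glGetFloatv returns it:)

// cam_obj = -(R^T) * t for a rigid modelview M = [R | t]
float R00 = CurMViewMatrix[0], R01 = CurMViewMatrix[4], R02 = CurMViewMatrix[8];
float R10 = CurMViewMatrix[1], R11 = CurMViewMatrix[5], R12 = CurMViewMatrix[9];
float R20 = CurMViewMatrix[2], R21 = CurMViewMatrix[6], R22 = CurMViewMatrix[10];
float tx  = CurMViewMatrix[12], ty = CurMViewMatrix[13], tz = CurMViewMatrix[14];

CamPos_OS[0] = -(R00 * tx + R10 * ty + R20 * tz);
CamPos_OS[1] = -(R01 * tx + R11 * ty + R21 * tz);
CamPos_OS[2] = -(R02 * tx + R12 * ty + R22 * tz);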

Diapolo

Diapolo
03-11-2002, 02:49 PM
OK, now I need help converting NVParse code into the OpenGL calls.

I got this from an NVIDIA PDF and I tried to convert it by myself, but there seems to be an error, because I don't see any specular thing *g*.

RC Code:





Diffuse + Specular: decalcol * (N'•L) + speccol * (N'•H)^4
!!RC1.0
const0 = ( 0.2, 0.2, 0.2, 0 ); // Spec. color
{
rgb {
spare0 = expand(tex0) . expand(tex1); // NdotL
spare1 = expand(tex0) . expand(tex2); // NdotH
}
}
{
rgb {
spare0 = tex3 * unsigned(spare0); // decal*NdotL
spare1 = unsigned(spare1) * spare1; // NdotH^2
}
}
final_product = spare1 * spare1; // NdotH^4
out.rgb = const0 * final_product + spare0;


And this is what I got out of it:





// UNIT 0 (Normal Map)
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, TEXTURES.Return_TextureIDs(3));
glEnable(GL_TEXTURE_2D);
// UNIT 1 (Normalisation Cube Map - L)
glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, TEXTURES.Return_TextureIDs(4));
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
// UNIT 2 (Normalisation Cube Map - H)
glActiveTextureARB(GL_TEXTURE2_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, TEXTURES.Return_TextureIDs(4));
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
// UNIT 3 (Base Texture)
glActiveTextureARB(GL_TEXTURE3_ARB);
glBindTexture(GL_TEXTURE_2D, TEXTURES.Return_TextureIDs(2));
glEnable(GL_TEXTURE_2D);

// Register Combiner Setup Code
float fSpecularColor[4] = {0.2f, 0.2f, 0.2f, 0.0f};

glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV , 2);

glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, fSpecularColor);

glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV, GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV, GL_TEXTURE2_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_SPARE1_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_TRUE, GL_FALSE);

glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_A_NV, GL_TEXTURE3_ARB, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_B_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_C_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_D_NV, GL_SPARE1_NV, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER1_NV, GL_RGB, GL_SPARE0_NV, GL_SPARE1_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_CONSTANT_COLOR0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_E_TIMES_F_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_E_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_F_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);


I guess if I know how that should look, I'll have a much cleaner understanding of the RCs :).
One problem was the <mapping> parameter.
If the NVParse code says nothing about the mapping, is it signed identity?

Diapolo

razor
03-11-2002, 07:44 PM
----------
Yes this was done using the register combiners but it takes 2 general combiners to normalize one vector, 3 gen combiners for 3 vectors and so on. So unless you have a geforce 3 or 4ti, you cant do it.
----------

could someone post a link to the page that describes this? I tried looking on NV's site, but I can't find it.

Thanks

richardve
03-11-2002, 08:16 PM
Originally posted by razor:
could someone post a link to the page that describes this? I tried looking on NV's site, but I can't find it.

http://developer.nvidia.com/view.asp?IO=bumpmappingwithregistercombiners

ln.

richardve
03-11-2002, 08:18 PM
Originally posted by Diapolo:
OK, now I need help in converting NVParse code into the OpenGL calls.

To make your life a bit easier, there's some code in the NVSDK for converting NVParse scripts back to regcombiner calls.

yw.

Diapolo
03-12-2002, 03:41 AM
Any idea what this file or tool is called, to get the GL calls from NVParse RC code?

I think the NVParse code is not that hard to read, but in order to learn the whole RC stuff it is much better for me to learn the pure GL calls first ;).

Did anyone take a look at my code and find some errors (there are some in for sure *g*)?

Diapolo

kon
03-12-2002, 04:00 AM
In your code, in the final combiner, move the value you have in C to D and set C to zero. C is multiplied by (1-A), which is not what you want.
And AFAIK there is only code for converting the GL commands into nvparse statements. http://developer.nvidia.com/view.asp?IO=register_combiner_state

kon

Diapolo
03-12-2002, 05:04 AM
Thanks for that reply ;).
You are right and what you said is logical.
But the result doesn't seem to be correct.

Now perhaps I have some errors in the CombinerInput <mapping> parameter or in my half-angle vector code.

First I would like to eliminate the possibility of wrong <mapping> parameters.

unsigned = GL_UNSIGNED_IDENTITY_NV
expand = GL_EXPAND_NORMAL_NV

But what if there is no explicit mapping conversion in the NVParse RC code, which one do I have to use for the CombinerInput call?

Diapolo

richardve
03-12-2002, 05:34 AM
Hm, I would swear that I've seen code to convert back to GL calls..

Diapolo
03-12-2002, 06:43 AM
OK, I guess I now have the correct RC GL calls, but the result still looks wrong.






// UNIT 0 (Normal Map)
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, TEXTURES.Return_TextureIDs(3));
glEnable(GL_TEXTURE_2D);
// UNIT 1 (Normalisation Cube Map - L)
glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, TEXTURES.Return_TextureIDs(4));
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
// UNIT 2 (Normalisation Cube Map - H)
glActiveTextureARB(GL_TEXTURE2_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, TEXTURES.Return_TextureIDs(4));
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
// UNIT 3 (Base Texture)
glActiveTextureARB(GL_TEXTURE3_ARB);
glBindTexture(GL_TEXTURE_2D, TEXTURES.Return_TextureIDs(2));
glEnable(GL_TEXTURE_2D);


// Register Combiner Setup Code
float fSpecularColor[4] = {0.2f, 0.2f, 0.2f, 0.0f};


glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV , 2);


glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, fSpecularColor);


// N <dot> L -> Spare0 and N <dot> H -> Spare1
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV, GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV, GL_TEXTURE2_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_SPARE1_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_TRUE, GL_FALSE);


// Base Texture * Spare0 (= N.L) -> Spare0 and Spare1 (= N.H) * Spare1 (= N.H) -> Spare1 (= N.H^2)
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_A_NV, GL_TEXTURE3_ARB, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_B_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_C_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_D_NV, GL_SPARE1_NV, GL_SIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER1_NV, GL_RGB, GL_SPARE0_NV, GL_SPARE1_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);


// Final combiner formula = A*B + (1-A)*C + D
// ConstColor0 * (Spare1 * Spare1) + Spare0
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_CONSTANT_COLOR0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_E_TIMES_F_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_E_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_F_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);


I guess I'm doing something wrong where I compute the half-angle vector and all that stuff, so could someone have a look at that (what the functions do should be easy to understand *g*):






Vector_3D_Subtract(fCameraPosition_OS, fQuad_XYZ[i], fViewVector_OS);
Vector_3D_Negate(fViewVector_OS, fViewVector_negate_OS);
Vector_3D_Negate(fLightVector_OS, fLightVector_negate_OS);
Vector_3D_Add(fViewVector_negate_OS, fLightVector_negate_OS, fHalfAngleVector_OS);


Vector_3D_Matrix_3x3_Multiply(fHalfAngleVector_OS, fTangentMatrix[i], fHalfAngleVector_TS);


This is my TexCoord code:






glMultiTexCoord2fvARB(GL_TEXTURE0_ARB, fQuad_ST[i]);
glMultiTexCoord3fvARB(GL_TEXTURE1_ARB, fLightVector_TS);
glMultiTexCoord3fvARB(GL_TEXTURE2_ARB, fHalfAngleVector_TS);
glMultiTexCoord2fvARB(GL_TEXTURE3_ARB, fQuad_ST[i]);


Here is a link to an image produced by the current code:
http://www.t-online.de/home/Phil.Kaufmann/OGL/dot3bump_1.jpg

Diapolo

[This message has been edited by Diapolo (edited 03-12-2002).]

kon
03-12-2002, 11:47 PM
You do set the cubemap's wrap parameters to GL_CLAMP or GL_CLAMP_TO_EDGE, don't you?

kon

Diapolo
03-13-2002, 04:16 AM
They are set to GL_CLAMP_TO_EDGE_EXT for S, T and R for the normalisation cube map :).

But I'm pretty sure there is something wrong with the half-angle vector (code), because the result I get looks very similar to the one I get if I only use N.L <modulate> base tex :(.
Or do you think the result looks correct???

Diapolo

kon
03-13-2002, 05:22 AM
Your base tex has unsigned values, doesn't it? If that's the case then in the second combiner change tex3's mapping from GL_SIGNED_IDENTITY_NV to GL_UNSIGNED_IDENTITY_NV.
Well, not only for debugging, I've found it very useful to render L and H. Try to compute L' = L - quad[i]. And why are you negating L and H? Calculating the unnormalized H is simply


Vector_3D_Subtract(fCameraPosition_OS, fQuad_XYZ[i], fViewVector_OS);
Vector_3D_Add(fViewVector_OS, fLightVector_OS, fHalfAngleVector_OS);


kon


[This message has been edited by kon (edited 03-13-2002).]

Diapolo
03-13-2002, 06:08 AM
OK, I changed the half-angle vector code to this, like you said kon :).






Vector_3D_Subtract(fCameraPosition_OS, fQuad_XYZ[i], fViewVector_OS);
Vector_3D_Add(fViewVector_OS, fLightVector_OS, fHalfAngleVector_OS);


The light vector code should be correct, right?





Vector_3D_Subtract(fLightPosition_OS, fQuad_XYZ[i], fLightVector_OS);


I dunno if you are right about the <mapping> parameter, because in the NVParse RC code tex3 is signed_identity (because it's not explicitly converted like expand(tex3)).
But there I have another problem: I'm not really sure when to use which mapping :(.

For example this line from the RC code:
spare1 = unsigned(spare1) * spare1; // NdotH^2

Why is spare1 unsigned for the first variable and then signed for the next variable?

Well OK, but here is the resulting image, after the code change:

before: http://www.t-online.de/home/Phil.Kaufmann/OGL/dot3bump_1.jpg

after: http://www.t-online.de/home/Phil.Kaufmann/OGL/dot3bump_2.jpg

Dunno if it looks right, but I don't think the specular part looks very impressive *g*.

Diapolo

kon
03-13-2002, 06:33 AM
Yes, the light computation is OK. And in the second combiner all mappings should be unsigned, shouldn't they?
Well, try to rotate the quad and see if there's some change in the specular rendering.
And your tangent space for the quad is for every vertex T = (1,0,0), B = (0,1,0) and N = (0,0,1), right?

kon

Diapolo
03-13-2002, 07:31 AM
Tangent, Bi-Normal and Normal have the correct values (the ones you mentioned).

I'm not sure about the mappings in the second combiner.
Is there any NV tech doc part that describes why I need THAT mapping for THAT thing I want to do?
These mapping things currently confuse me a bit :).

By the way, have you got a GF3 / GF4, kon?
Perhaps you could have a look at the current app and the results it produces?

If not, I'll post more screenshots later today :).

Diapolo

kon
03-13-2002, 11:26 AM
Well, there are some docs about regcomb like http://developer.nvidia.com/view.asp?IO=registercombiners with some graphs showing the i/o mappings. But you still have to know what you are reading and writing. ;)
At home I've got a GeForce 1 where I can try your app (even if it will run in software).
Remark:
Actually you could do it without normalization, saving 2 texture units! Then putting the light and half vectors into the primary and secondary color should produce the same, shouldn't it?
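
(Per vertex that would look something like this, assuming the tangent-space L and H are already normalized and range-compressed into [0,1] - a sketch only:)

// L in the primary color, H in the secondary color (needs EXT_secondary_color)
glColor3fv(fLightVector_TS);
glSecondaryColor3fvEXT(fHalfAngleVector_TS);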

kon

Diapolo
03-13-2002, 06:37 PM
kon, I tried to use primary and secondary color, so that the code would run on GF1 and GF2 class hardware, but the result was really "ugly" I think.
I'm not sure if I did it correctly though.

1. I calculated the view vector and normalized it.
2. I calculated the half-angle vector with the use of the light vector and view vector (both normalized).
3. I normalized the half-angle vector.
4. I converted the resulting vector into tangent space.
5. I scaled and biased that vector, so that its range is between 0 and 1.





fHalfAngleVector_TS[0] = fHalfAngleVector_TS[0] * 0.5 + 0.5;
fHalfAngleVector_TS[1] = fHalfAngleVector_TS[1] * 0.5 + 0.5;
fHalfAngleVector_TS[2] = fHalfAngleVector_TS[2] * 0.5 + 0.5;


6. I put the resulting vector into secondary color (glSecondaryColor3fvEXT(fHalfAngleVector_TS)).

7. I set up the combiner in the same way I do for the cubemap normalisation technique, but use the primary color for L and the secondary color for H.
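
(In the combiner setup that just means swapping the cube map inputs for the interpolated colors - a sketch, assuming the rest of the cube map version stays the same:)

// L now comes from the primary color, H from the secondary color instead of the cube maps
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV, GL_SECONDARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);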

Dunno if there's something wrong there.

But back to the <mapping> thing :).
I know the NVIDIA doc you mentioned and it's clear to me what all the mapping parameters mean.
But how can I know when I have to use which mapping?
Let's say I have got a texture on a TMU. Is it signed or unsigned in the beginning, and why do I need it expanded for the dot product if it's unsigned ... perhaps I'm stupid, but I don't get to the point :( *sigh*.

Diapolo

[This message has been edited by Diapolo (edited 03-13-2002).]

SirKnight
03-13-2002, 08:43 PM
You should be putting your half-angle vector (and even your surface-to-light vector) into a texture unit which is using a normalization cube map, which will correctly normalize the vector across a surface. If not, then your vector will not interpolate across the surface correctly, it will become un-normalized. What will happen (along with other problems too) is that the closer your light gets to a surface, the dimmer it will be. Not good. :)
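
(For completeness, here is a rough sketch of how such a normalization cube map can be built - the face orientation follows the cube map spec as I read it; treat it as an illustration, not tested code:)

#include <math.h>
#include <stdlib.h>

/* builds a size x size RGB normalization cube map into the currently
   bound GL_TEXTURE_CUBE_MAP_ARB texture object (sketch only) */
static void build_normalization_cube_map(int size)
{
    static const GLenum faces[6] = {
        GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB, GL_TEXTURE_CUBE_MAP_NEGATIVE_X_ARB,
        GL_TEXTURE_CUBE_MAP_POSITIVE_Y_ARB, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y_ARB,
        GL_TEXTURE_CUBE_MAP_POSITIVE_Z_ARB, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z_ARB
    };
    unsigned char *data = (unsigned char *)malloc(size * size * 3);
    int face, x, y;

    for (face = 0; face < 6; face++) {
        for (y = 0; y < size; y++) {
            for (x = 0; x < size; x++) {
                /* texel centre mapped to [-1, 1] */
                float s = 2.0f * ((float)x + 0.5f) / (float)size - 1.0f;
                float t = 2.0f * ((float)y + 0.5f) / (float)size - 1.0f;
                float v[3], len;
                unsigned char *p = data + (y * size + x) * 3;

                switch (face) {
                    case 0:  v[0] =  1.0f; v[1] = -t;    v[2] = -s;    break; /* +X */
                    case 1:  v[0] = -1.0f; v[1] = -t;    v[2] =  s;    break; /* -X */
                    case 2:  v[0] =  s;    v[1] =  1.0f; v[2] =  t;    break; /* +Y */
                    case 3:  v[0] =  s;    v[1] = -1.0f; v[2] = -t;    break; /* -Y */
                    case 4:  v[0] =  s;    v[1] = -t;    v[2] =  1.0f; break; /* +Z */
                    default: v[0] = -s;    v[1] = -t;    v[2] = -1.0f; break; /* -Z */
                }
                len = (float)sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
                /* normalize and range-compress into [0, 255] */
                p[0] = (unsigned char)(255.0f * (v[0] / len * 0.5f + 0.5f));
                p[1] = (unsigned char)(255.0f * (v[1] / len * 0.5f + 0.5f));
                p[2] = (unsigned char)(255.0f * (v[2] / len * 0.5f + 0.5f));
            }
        }
        glTexImage2D(faces[face], 0, GL_RGB8, size, size, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
    }
    free(data);
}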

-SirKnight

[This message has been edited by SirKnight (edited 03-13-2002).]

Diapolo
03-14-2002, 04:58 AM
You are right SirKnight, and I know that, but I tried to check it out because it frees up the 2 texture units that are in use if I'm on the cube map normalisation technique, and so it would run on GF1 and GF2 cards.
But the result looked really ugly (I'll post a screenie later).

Did you take a look at the 2 screenshots above in the post?
The first one is only diffuse and the second one should be diffuse + specular, but doesn't look really cool :).

And has anyone got a tip for the <mapping> thing :)?

Diapolo

Diapolo
03-14-2002, 08:45 AM
OK, I played around with the constant color values, I use 2 more combiners in order to get (N.L)^16, I use an uncompressed normal map and another texture, and I have to say WOW ... THAT looks pretty sweet now :).
And that's the first time I really feel "confused" about the capabilities of the GF3 and what I did with it.

Well, here is a pic: http://www.t-online.de/home/Phil.Kaufmann/OGL/dot3bump_3.jpg

For that I used unsigned mappings in every combiner stage and only for N.L and N.H an expand mapping ... anything to say about that, because I still dunno how to use these mappings *G*?

By the way, is there any way to get (N.L)^16 on GF1 / GF2 class hardware?

Diapolo

Tandy
03-14-2002, 01:05 PM
Well, I have been playing with bump mapping too, it's really fun (until my computer died).

If I understand it correctly the function is (cos(x))^16 where x is the angle between N and H.

If it is like this, it would be easy to use an approximation (maybe a Taylor series).
Approximate it with a polynomial of the second degree and it would fit in the register combiners on a GF1, right?

However, I am not sure about the base equation!

[This message has been edited by Tandy (edited 03-14-2002).]

[This message has been edited by Tandy (edited 03-14-2002).]

SirKnight
03-14-2002, 02:55 PM
in order to get (N.L)^16, I use an uncompressed normal map ...


You're supposed to raise N dot H to a power, not N dot L! Typo maybe? :)

-SirKnight

SirKnight
03-14-2002, 02:57 PM
I just looked at your screenshot and that looks real good. I love that texture. Where did you get it?

-SirKnight

SirKnight
03-14-2002, 03:02 PM
By the way, is there any way to get (N.L)^16 on GF1 / GF2 class hardware?


Yes, you can do N dot H(!) with a GF1/2. I do it! Actually it's just an approximation, but it looks just like it. Here is the function to do it: 4*((N' dot H)^2 - 0.75) where N' is the normal map. I learned this function from Ron Frazier BTW.




// A = spare0 blue (the N.H from the RGB portion of combiner 0), B = the same -> (N.H)^2
glCombinerInputNV(GL_COMBINER1_NV, GL_ALPHA, GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_BLUE);
glCombinerInputNV(GL_COMBINER1_NV, GL_ALPHA, GL_VARIABLE_B_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_BLUE);
// C = -const0 (const0 holds 0.75), D = 1 (unsigned invert of zero) -> subtracts the 0.75
glCombinerInputNV(GL_COMBINER1_NV, GL_ALPHA, GL_VARIABLE_C_NV, GL_CONSTANT_COLOR0_NV, GL_SIGNED_NEGATE_NV, GL_BLUE);
glCombinerInputNV(GL_COMBINER1_NV, GL_ALPHA, GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_INVERT_NV, GL_ALPHA);
// sum output scaled by four: spare0.a = 4 * ((N.H)^2 - 0.75)
glCombinerOutputNV(GL_COMBINER1_NV, GL_ALPHA, GL_DISCARD_NV, GL_DISCARD_NV, GL_SPARE0_NV, GL_SCALE_BY_FOUR_NV, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);


That is the combiner code to do it.

[EDIT] BTW, that stores it into spare0 alpha. In Ron's code (and mine in a similar but different way) the lighting is rendered into the alpha buffer.

-SirKnight

[This message has been edited by SirKnight (edited 03-14-2002).]

SirKnight
03-14-2002, 03:12 PM
Oops, I meant 'Yes you can do N dot H(!) to the power 16.' I would have edited the message and fixed it, but when I do, it wants to cut off the first half of my post. :p

Also, in the line:
glCombinerInputNV(GL_COMBINER1_NV, GL_ALPHA, GL_VARIABLE_C_NV, GL_CONSTANT_COLOR0_NV, GL_SIGNED_NEGATE_NV, GL_BLUE);
I have color0 set with 0.75 in it. That's how I subtract the 0.75.

-SirKnight

[This message has been edited by SirKnight (edited 03-14-2002).]

Diapolo
03-14-2002, 03:52 PM
I just looked at your screenshot and that looks real good. I love that texture. Where did you get it?


Thanks SirKnight :).
I found that texture in a texture pack from S3.
Some time ago they had downloadable S3TC compressed textures on their ftp.
One pack contained many egypt-style textures that simply look AWESOME.
I upped the texture I currently use to my webspace and you can download it there (format is Direct Draw Surface / DXT1 - GL_COMPRESSED_RGB_S3TC_DXT1_EXT): http://www.t-online.de/home/Phil.Kaufmann/OGL/Egypt.dds

And you were right, (N.L)^16 was a typo, I meant (N.H)^16 (^ means raise to the power of, right?) :).

Now let's talk about the approximation that you use ;).
You do ((Spare0 * Spare0) + (-ConstantColor0 * 1)) * 4.
And this time it's clear why you use which mapping.
Signed_negate for ConstCol0, because you want it to be negative.
Unsigned_invert for Zero, because you want it to be 1 (why did NV leave GL_ONE out?).
But I still don't get why I use an expand mapping for the dot product *g*.

I've never used the alpha component in RC code, so could you try to explain what exactly GL_BLUE does (I remember that GL_BLUE can only be used for alpha)?

Diapolo

SirKnight
03-14-2002, 06:18 PM
Well, the expand on the dot product inputs is there because the values stored in the textures are in the [0,1] range, while the vector components need to be in [-1,1]. The expand converts from [0,1] to [-1,1] (x*2 - 1), so the dot product then gives you a number in the range [-1,1]. I think that's correct.
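
(For reference, the full set of input mappings works out to this, as far as I read the spec:

GL_UNSIGNED_IDENTITY_NV : max(0, x)
GL_UNSIGNED_INVERT_NV : 1 - clamp(x, 0, 1)
GL_EXPAND_NORMAL_NV : 2 * max(0, x) - 1
GL_EXPAND_NEGATE_NV : -(2 * max(0, x) - 1)
GL_HALF_BIAS_NORMAL_NV : max(0, x) - 0.5
GL_HALF_BIAS_NEGATE_NV : -(max(0, x) - 0.5)
GL_SIGNED_IDENTITY_NV : x
GL_SIGNED_NEGATE_NV : -x

That is why expand() shows up on anything that stores a signed vector in a [0,1] texture.)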

About the blue thing. If you look at the presentation on register combiners, one slide talks about the general combiner input registers. It shows two tables that indicate what can be used in both the RGB and ALPHA input registers. On the alpha table where it talks about spare0 and spare1, it shows that you can't use the red and green parts, only the blue and alpha parts. So I used blue b/c the spare0 I put in variable A and B comes from the RGB portion of combiner0, and b/c of the input rules I must specify blue. I don't think putting alpha instead of blue would work in this case b/c, like I said, the spare0 I am using came from the RGB portion of combiner0 and that only computes RGB values; the alpha values are totally separate. So there wouldn't be anything in the alpha part. I think I'm correct about that, I've never tested it though. I hope that makes sense. :)

-SirKnight

Diapolo
03-15-2002, 08:49 AM
Well, the expand on the dot product inputs is there because the values stored in the textures are in the [0,1] range, while the vector components need to be in [-1,1]. The expand converts from [0,1] to [-1,1] (x*2 - 1), so the dot product then gives you a number in the range [-1,1]. I think that's correct.


Seems clear, I didn't know that a DP produces a result in the [-1, 1] range.

For the whole stuff on a GF / GF2, could I have a look at your RC code ;)?

btw.: Did you download the texture?

Diapolo

SirKnight
03-15-2002, 10:33 AM
Ya, I got the texture and it's pretty cool. I converted it to a TGA and made a normal map so it's ready to go. But ya, the dot product gives a value from -1 to 1. -1 is 180 degrees, 0 is 90 and 1 is 0.

I think if you go to nvidia's dev site and search for Ron Frazier's Per Pixel Lighting demo you will be in good shape. He also includes docs describing what he is doing, which is what I used to learn how to make a better lighting engine w/ only 2 combiners. My RC code is just like his except I use nvparse. Well, there are a few things I took out that I'm not using. Actually I took out the last lighting pass for both specular and diffuse. The last pass just adds color and has a color filter cubemap for effects and a material color map. I decided to drop that to gain some speed and I add the color in a different place.

His code also comes with shadow volumes! http://www.opengl.org/discussion_boards/ubb/smile.gif

-SirKnight

Diapolo
03-16-2002, 07:58 AM
@SirKnight: Congratulations, you are one of the happy people that own a GF4 :).
By the way, I forgot to ask ... your GF1 / GF2 RC code is using multiple passes, right (how many?)?

OK, I read through the NV doc called RegisterCombiners.pdf.

This part confused me a bit:


Some OpenGL multitexture TexEnv modes only use 1 general combiner:
• No Alpha component in Texture1 AND
• TexEnv for second texture unit (Texture1) is GL_MODULATE OR
• TexEnv for Texture1 is GL_ADD and color sum not used


So I can set a TexEnv for a texture unit AND use RCs AND the TexEnv influences the final color (weird)?
No, if they talk about GL_ADD they mean do "GL_ADD" with the RCs, right?

Another question I have is, how can I build a "gloss map" and how do I use it?
I saw that somewhere and it looks PRETTY cool.
But I guess this will eat another texture unit, right :(?

Diapolo

[This message has been edited by Diapolo (edited 03-16-2002).]

SirKnight
03-16-2002, 08:13 AM
@SirKnight: Congratulations, you are one of the happy people that own a GF4 :).


Thanks. It's been a long time since I upgraded vid cards so I thought it was about time to do it again.

In nvidia hardware, the TexEnv stuff is actually done in the register combiners. But using the register combiner extension lets us work with the combiners at a lower level, giving us more power.



Another question I have is, how can I build a "gloss map" and how do I use it?
I saw that somewhere and it looks PRETTY cool.
But I guess this will eat another texture unit, right?


Well, that demo by Ron Frazier I told you about does gloss mapping. It's just a monochrome image telling the combiners, or whatever hardware you're using for tex ops, how much specular to apply to a certain pixel. And yes, it will eat another tex unit since it is a texture. To use it is just a simple multiply. In Ron's demo, he calculates the attenuation, then multiplies this with the selfshadow term, which is multiplied by the gloss map. This is stored into the alpha buffer to get multiplied with the other lighting calcs like the (N.H)^p calc, etc.
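
(In combiner terms that boils down to one extra multiply. A sketch of one way to do it, assuming for the example that the gloss value is packed into the alpha channel of the texture on unit 3 and that a third general combiner is available - a separate gloss texture on its own unit works the same way:)

// extra stage (with GL_NUM_GENERAL_COMBINERS_NV bumped to 3):
// spare1 = spare1 (specular term so far) * gloss (alpha of tex3)
glCombinerInputNV(GL_COMBINER2_NV, GL_RGB, GL_VARIABLE_A_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER2_NV, GL_RGB, GL_VARIABLE_B_NV, GL_TEXTURE3_ARB, GL_UNSIGNED_IDENTITY_NV, GL_ALPHA);
glCombinerOutputNV(GL_COMBINER2_NV, GL_RGB, GL_SPARE1_NV, GL_DISCARD_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);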

-SirKnight

SirKnight
03-16-2002, 08:29 AM
Heh, I posted right when you were editing. :) My lighting engine right now is 2-3 passes for diffuse and 2-3 for specular. Total of 4 to 6 passes. Like I said before, sometimes I disable the last pass for diffuse and specular to gain some speed. Saves me 2 passes for each light, which is a pretty good boost. On my GF4, I'll be able to crunch it down to about 3 to 4 passes total. Maybe less.

Right now the passes are like this (I'll just show the diffuse since specular is pretty much the same except for the addition of the gloss map and H vector):

Pass 1
------
* Attenuation using an XY radial texture and the brightness-scaled tangent-space light vector for the Z part of the attenuation.
* Selfshadow term
* Then the selfshadow and attenuation are multiplied and stored in the alpha buffer

Pass 2
------
* N dot L is calculated and stored into the alpha buffer (actually it's multiplied with the DST_ALPHA, since I set the blend func to GL_DST_ALPHA, GL_ZERO)

Pass 3 (optional)
------
* Material diffuse color map is multiplied with the alpha transparency map stored in the alpha component of the diffuse color map texture
* Color filter cubemap (for some cool effects) is multiplied by the light color in col0
* These two things are multiplied by the destination alpha and added into the framebuffer with GL_DST_ALPHA, GL_ONE
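
(A minimal sketch of the blending glue for those passes, just to illustrate the blend modes named above - the draw calls and combiner setups are left out:)

// pass 1: write attenuation * selfshadow into destination alpha only
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glDisable(GL_BLEND);
// ... draw with the attenuation / selfshadow setup ...

// pass 2: multiply N dot L into the alpha buffer
glEnable(GL_BLEND);
glBlendFunc(GL_DST_ALPHA, GL_ZERO);
// ... draw with the N dot L setup ...

// pass 3 (optional): add color * stored alpha into the framebuffer
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glBlendFunc(GL_DST_ALPHA, GL_ONE);
// ... draw with the color map / cubemap setup ...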

And that's it. Like I said, it's about 90% the same for specular. I'm sure glad Ron Frazier made that demo of his and wrote those papers about per-pixel lighting. It's helped me a lot. Plus, with all the people who know about this stuff on this msg board.

-SirKnight