
Doom3 shading questions



JD
09-27-2004, 06:10 PM
I was wondering how Doom3 does its shading. I looked at the screenshots and it seems like the patches are always smooth shaded while brushes are flat shaded? What about meshes? Is the shading variable for both meshes and brushes, or only for meshes? By variable I mean based on the angle between faces: too sharp and the face is flat shaded; not a sharp edge and the face is smooth shaded. In my tests, variable shading works best. The bumps nicely hide the mesh triangulation lines we used to see with smooth shading. I'm not sure if I should implement variable shading on brushes. Last time I smooth shaded the brushes, the tangent basis got skewed too much, and controlling this per brush or per collection of brushes might get too tiresome. What do you think?

Jens Scheddin
09-28-2004, 12:44 AM
I looked at the screenshots and it seems like the patches are always smooth shaded while brushes are flat shaded?

That's the idea behind patches. You'll also note that there is no more dynamic subdivision on them.


What about meshes?

I think the models (or meshes) are generated in Max or Maya, so the normals are exported with them. The shading thus depends on how you build the model. You can use curves, patches (whatever they are called there) or simple brushes to make the model look smooth in some parts while having hard edges in others. I don't think there's something like "variable shading", as you describe it, going on.

V-man
09-28-2004, 05:57 AM
Hope you don't mind me hijacking your thread. :)

Does Doom 3 use GL2 shaders at all, or is it just ARB_vp, ARB_fp, NV_vp, NV_fp?

I know that in his .plan file he says he experimented with them (a very old .plan).

Jens Scheddin
09-28-2004, 07:05 AM
Originally posted by V-man:
Hope you don't mind me hijacking your thread. :)

Does Doom 3 use GL2 shaders at all, or is it just ARB_vp, ARB_fp, NV_vp, NV_fp?

I know that in his .plan file he says he experimented with them (a very old .plan).

Nope, no GL2 shaders. But there are some cvars for Cg shaders. I didn't figure out whether they're really supported.

JD
09-28-2004, 09:48 AM
I think the low-poly characters are using bump maps generated from high-poly meshes, and I think they're smooth shaded, i.e. the tangent basis is calculated per vertex. I'm not so sure about non-character meshes like doors and door frames. It's hard to tell whether the effect is light attenuation, or whether it's smooth shaded, flat, or in between, i.e. flat on some parts and smooth on others. It almost looks flat, but I've noticed a light gradient across a face, which could be attributed to smooth shading with variable edge hardness, because I still see corner edges in the door frame.

No problem with hijacking :) Carmack said he's going to be using GLSL from now on. Only asm shaders in Doom3 to my knowledge (including GF2 register combiners and/or texture_env_combine DOT3).

Jens Scheddin
09-28-2004, 03:45 PM
I think geometry is smooth shaded if a vertex is shared by two faces, and flat shaded for two touching faces with different vertices. The algorithm for calculating tangents and binormals should automatically smooth the vectors that are shared by more than one index (speaking of indexed vertices), by summing the calculated vectors and normalizing them at the end. It doesn't really matter whether the geometry comes from model or map data. A sketch of the idea follows below.
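
For illustration, a minimal C++ sketch of that sum-and-normalize idea for tangents (hypothetical types and names, not id's actual code; binormals would be accumulated the same way):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vertex { Vec3 pos; float u, v; };

// Accumulate per-triangle tangents into their shared vertices, then
// normalize. Vertices shared by several triangles get a smoothed tangent;
// duplicated vertices (same position, different index) stay flat.
std::vector<Vec3> ComputeTangents(const std::vector<Vertex>& verts,
                                  const std::vector<int>& idx)
{
    std::vector<Vec3> tan(verts.size(), Vec3{0, 0, 0});
    for (size_t i = 0; i + 2 < idx.size(); i += 3) {
        const Vertex& a = verts[idx[i]];
        const Vertex& b = verts[idx[i + 1]];
        const Vertex& c = verts[idx[i + 2]];
        float du1 = b.u - a.u, dv1 = b.v - a.v;
        float du2 = c.u - a.u, dv2 = c.v - a.v;
        float det = du1 * dv2 - du2 * dv1;   // signed UV-space area * 2
        if (det == 0.0f) continue;           // degenerate UV mapping
        float r = 1.0f / det;
        Vec3 e1{b.pos.x - a.pos.x, b.pos.y - a.pos.y, b.pos.z - a.pos.z};
        Vec3 e2{c.pos.x - a.pos.x, c.pos.y - a.pos.y, c.pos.z - a.pos.z};
        Vec3 t{(dv2 * e1.x - dv1 * e2.x) * r,   // tangent = d(position)/du
               (dv2 * e1.y - dv1 * e2.y) * r,
               (dv2 * e1.z - dv1 * e2.z) * r};
        for (int k = 0; k < 3; ++k) {           // sum into shared vertices
            Vec3& vt = tan[idx[i + k]];
            vt.x += t.x; vt.y += t.y; vt.z += t.z;
        }
    }
    for (Vec3& t : tan) {                       // normalize at the end
        float len = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
        if (len > 0.0f) { t.x /= len; t.y /= len; t.z /= len; }
    }
    return tan;
}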

PS: I'm sorry if this is not a very clear explanation but it's pretty late already :)

Nil_z
09-29-2004, 01:48 AM
Hope you don't mind another hijacking :p Do you know how JC does dynamic lighting in DOOM3? I think all those lights are projected textures, and there are places that have one, two or more lights. Does he write different shaders for all these conditions, or does he do it another way, e.g. multipass?

tfpsly
09-29-2004, 03:34 AM
Originally posted by Nil_z:
Hope you don't mind another hijacking :p Do you know how JC does dynamic lighting in DOOM3? [...] Does he write different shaders for all these conditions, or does he do it another way, e.g. multipass?

Multipass, as it is required by the shadowing.

Nil_z
09-29-2004, 05:32 AM
Originally posted by tfpsly:

Originally posted by Nil_z:
Do you know how JC does dynamic lighting in DOOM3? [...]

Multipass, as it is required by the shadowing.

Can you explain some details? Thank you.

tfpsly
09-29-2004, 06:01 AM
The way Doom3 works is, for each light:
* Compute the shadow volume (if the light casts shadows; each light may have this flag set in the *.map files).
* Render everything that is near the light and not in the shadow volume, lit by this light, and add it into the framebuffer (using normal mapping and DOT3).
There are a few projected lights; in that case you also apply a projected texture in the second step.
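
Roughly, the frame loop looks like this C++-style sketch (hypothetical engine helpers like RenderDepthPass are mine, just to illustrate the per-light structure):

// Sketch of a Doom3-style additive per-light loop (not id's actual code).
void RenderFrame(const Scene& scene, const Camera& cam)
{
    // Fill the depth buffer first so the light passes can use GL_EQUAL.
    RenderDepthPass(scene, cam);

    glBlendFunc(GL_ONE, GL_ONE);   // each light adds into the framebuffer
    glDepthFunc(GL_EQUAL);         // only shade the visible surfaces
    for (const Light& light : scene.lights) {
        if (light.castsShadows) {
            // Render the shadow volume into the stencil buffer,
            // marking the pixels this light cannot reach.
            RenderShadowVolume(light, scene, cam);
        }
        // Additive lighting pass, stencil-tested against the shadow
        // volume; this is where the normal mapping and DOT3 happen.
        RenderLitSurfaces(light, scene, cam);
    }
}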

nystep
09-29-2004, 10:03 AM
By the way, I wonder why you're making assumptions about Doom3's shaders, since the text files are stored in base/pak000.pk4 (open it with WinRAR).
Just browse the glprogs directory and you'll see the nice ARB vertex programs; they seem well commented by their author...

regards.. ;)

Nil_z
09-29-2004, 03:06 PM
Thanks, nystep, those vp & fp programs are a very good reference. But I still can't find out how they handle projected lights in those programs; can you explain how it works? I have been wondering how to do (multiple) dynamic projected lights in a game scene for some time, and can't figure out a good solution...

Tom Nuydens
09-29-2004, 11:45 PM
Check out http://www.ronfrazier.net/apparition/index.asp?appmain=research/per_pixel_lighting.html -- a little outdated in that the code snippets are based on old-school NV_register_combiners, but the text should give you the general idea.

-- Tom

nystep
09-30-2004, 03:09 AM
Hi Nil_z,


But I still can't find out how they handle projected lights in those programs; can you explain how it works? I have been wondering how to do (multiple) dynamic projected lights in a game scene for some time, and can't figure out a good solution...

First of all, you're totally right to ask, because this is much harder and more interesting than just finding some text files in an archive, and I'm really sorry if my previous post looked arrogant or whatever..
That said, I had a look at nv20_bumpAndLight.vp.
From the comment "# TEX2 will be the 3D light texture" I understand that TEX2 is used as a volumetric spotlight mask (or whatever 3D light shape you wish). It means the light's projection is, in a way, precalculated. Later in the vertex program, you find the following piece of code that generates the texture coordinate for this 3D lookup table:

# texture 3 has three texgens
DP4 result.texcoord[3].x, vertex.position, program.env[6];
DP4 result.texcoord[3].y, vertex.position, program.env[7];
DP4 result.texcoord[3].w, vertex.position, program.env[8];

You'll notice that the texture coordinate is a linear combination of the vertex position and a special 4x3 matrix stored in the program environment.
I'd guess this matrix is a scale/rotate/translate matrix that places and orients the light correctly in the game world. DP4 is really required because you can't translate otherwise, and it's nice to be able to put the light wherever you wish in the 3D world. The scaling is useful to simulate a randomly varying light source, as so often used in the Mars levels.
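
To make that concrete, the CPU side could build and upload such a matrix like this (a hypothetical sketch; Matrix4, Inverse and TextureScaleBias are made-up helpers, but glProgramEnvParameter4fvARB is the real ARB_vertex_program entry point):

// Map world space into the light's [0,1]^3 texture space:
// undo the light's placement, then scale/bias into texture coordinates.
Matrix4 worldToLightTex = TextureScaleBias() * Inverse(lightToWorld);

// The vertex program reads the rows from program.env[6..8] and applies
// them with DP4. Row 3 (the projective row) goes to env[8], matching
// the DP4 into result.texcoord[3].w in the shader above.
glProgramEnvParameter4fvARB(GL_VERTEX_PROGRAM_ARB, 6, worldToLightTex.Row(0));
glProgramEnvParameter4fvARB(GL_VERTEX_PROGRAM_ARB, 7, worldToLightTex.Row(1));
glProgramEnvParameter4fvARB(GL_VERTEX_PROGRAM_ARB, 8, worldToLightTex.Row(3));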
Thinking about it, you wouldn't even need a big 3D texture for the spotlight shape; one should test, but 16x16x16 could be enough thanks to trilinear filtering (trilinear filtering is required to get correct interpolation on a 3D texture without mipmaps, or am I mistaken?).
You should keep in mind that this shader is executed once per light. You can't really do more lights per pass when you combine this with stencil shadowing (considering where the stencil test is done in the OpenGL pipeline), but if you don't use stencil shadows, further optimization is possible.
The vertex data also seems to be pretransformed when entering the shader (the homogeneous divide isn't done yet), and that's what OPTION ARB_position_invariant means. It's a clever optimization, since you don't have to transform your vertices on each pass, but it means you probably have to use the CPU to transform them, since you can't store pretransformed vertices in GPU memory when the camera moves around.

However, after looking at this shader, I began to wonder why it was specific to NV20. Nothing here seems unrunnable on ATI cards, except maybe ARB_position_invariant, but that's also specified in R200_interaction.vp.
The R200 codepath is slightly longer and more complex than this one. Looking more closely, we can see that the spotlight calculations are not precalculated in a 3D lookup texture but are done in the shader, with the half-angle vector.

If you combine this with the fact that nVidia cards up to the GeForce FX can do twice as many depth/stencil writes when no color is written, it's no wonder these boards perform so well in the benchmarks compared to ATI, is it?
Or is it just that I'm ill today and my fever gives me weird ideas.. ???

regards,

Jens Scheddin
09-30-2004, 05:37 AM
Doom 3 does NOT use 3D textures at all. Each light can have a projected texture (or even several, which results in multiple passes) and an attenuation texture. These two textures combined produce the spot/whatever light in the game. For spot lights there is a simple attenuation texture that eliminates the back-projection of the projected texture. The 2D projected texture is sampled by the TXP instruction, which uses the s/q and t/q texture coordinates for sampling.
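
Conceptually, TXP just divides by q before the lookup; in C++-style pseudocode (Texture2D, Vec4 and Color are hypothetical helpers):

// What TXP does for a 2D target: sample at (s/q, t/q) instead of (s, t).
Color SampleProjected(const Texture2D& tex, const Vec4& tc)
{
    float s = tc.x / tc.w;      // perspective divide by q
    float t = tc.y / tc.w;
    return tex.Sample(s, t);    // ordinary filtered 2D lookup
}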

Uhh, and the vertex data is not "pretransformed". It's transformed _after_ the vertex program execution. This is not an optimization; it's necessary for the other passes (the GUIs and so on) that don't use the vertex program. The ARB_position_invariant option came from ATI, so nVidia supported it later on request in NV_vertex_program, IIRC.

You'll see this when looking at the ARB vp/fp shaders. The NV20 shaders are not really a good example, since you don't see what's going on in the pixel pipeline.

Some of my thoughts about nVidia's higher performance in Doom3: the X800 is a slightly modified version of the R3x0 chip, up against an almost entirely new NV40 (I take this from the much better FP32 performance). Even the depth_clamp extension doesn't bring any noticeable gain (it must be activated by hand). Also, nVidia is using shader replacement to optimize; ATI currently is not (correct me if I'm wrong). And even so, they're known for, well, not the best drivers around.

Now, please close this thread :)

tfpsly
09-30-2004, 09:13 AM
Also, nVidia is using shader replacement to optimize; ATI currently is not (correct me if I'm wrong).

They do now. And they even officially promote it as "AI" (sic; what does AI have to do with it?) in the Catalyst drivers.

Nil_z
10-01-2004, 01:16 AM
Thanks, nystep, it is very kind of you to explain how those shaders work. So Doom3 uses multiple passes, one per light? I'll think about how that works.

SirKnight
10-01-2004, 06:17 AM
Originally posted by Nil_z:
Thanks, nystep, it is very kind of you to explain how those shaders work. So Doom3 uses multiple passes, one per light? I'll think about how that works.

Doom 3 uses multiple passes per light for (I think) all the render paths EXCEPT the ARB2 path, where everything fits into one pass. The ARB2 path uses ARB_vertex_program and ARB_fragment_program.

-SirKnight

Jens Scheddin
10-01-2004, 07:45 AM
Originally posted by SirKnight:

Originally posted by Nil_z:
Thanks, nystep, it is very kind of you to explain how those shaders work. [...]

Doom 3 uses multiple passes per light for (I think) all the render paths EXCEPT the ARB2 path, where everything fits into one pass. The ARB2 path uses ARB_vertex_program and ARB_fragment_program.

-SirKnight

The R200 path (GL_ATI_fragment_shader) can do it in one pass too, but only if the texture matrices for the specular and diffuse maps are identity (so no rotation, scrolling, etc.). Otherwise it does two passes with the same shader.

Nil_z
10-01-2004, 10:49 AM
There are often more than 4 projected lights in DOOM3; some rooms have even more. Can those be fitted into one pass? I am thinking of using additive blending for each light (in multiple passes), but I can't figure out how they could be rendered in one pass, since the number of lights is not fixed.

Korval
10-01-2004, 11:38 AM
They split the room up. I haven't seen Doom3, so I can't say, but they probably don't have more than one or two projected lights on any single surface.

SirKnight
10-01-2004, 12:42 PM
Originally posted by Korval:
They split the room up. I haven't seen Doom3, so I can't say, but they probably don't have more than one or two projected lights on any single surface.

I remember a year or two ago John Carmack saying something about how they try not to have many lights with overlapping volumes, as it would run really slowly (of course that makes sense :) ). From playing Doom 3 this is definitely the case; there are not many overlapping light volumes. There are some scenes where a hallway or room has 20 or more lights, but they all have very small boundaries so it runs fast.

BTW...GO GET THE DEMO NOW! hehe. :D

-SirKnight

Korval
10-01-2004, 04:41 PM
I don't need the demo; I have the actual game. I've just not yet had the impetus to install it and play it.

Nil_z
10-02-2004, 07:08 AM
I found that in the DOOM3 vertex programs there is no code for vertex position transformation. What have they done with those positions?

CrazyButcher
10-02-2004, 11:29 AM
The vertex programs use the position_invariant option, which means the fixed-function pipeline does the position computation.
This is useful for mixing shader and fixed-function passes, because the position computed by a vertex program can differ slightly from the fixed-function result.

Nil_z
10-03-2004, 05:01 AM
Does it cost speed, using position_invariant?

Jens Scheddin
10-03-2004, 07:12 AM
Originally posted by Nil_z:
Does it cost speed, using position_invariant?

Probably, but generally not noticeably.

tfpsly
10-03-2004, 11:44 AM
Why? It should be faster, as:
1) the card may use the hard-wired fixed-function path to compute the final vertex position, instead of running a program;

2) far more important, early-Z discarding should be active, boosting performance a lot.

Korval
10-03-2004, 03:59 PM
the card may use the hard-wired fixed-function path to compute the final vertex position, instead of running a program

No ATi card of R300 or better has any fixed-function T&L. The NV30 line does, but the jury is still out on the NV40.


far more important, early-Z discarding should be active, boosting performance a lot.

Not using position_invariant will not turn off early-Z. Only writing to the Z-depth in a fragment program does that (among possibly other hardware-specific fragment-based things).

tfpsly
10-04-2004, 12:20 AM
Originally posted by Korval:

Not using position_invariant will not turn off early-Z. Only writing to the Z-depth in a fragment program does that (among possibly other hardware-specific fragment-based things).

Hmm, true, you're right. So there should not be any performance difference.

nystep
10-06-2004, 11:20 AM
Thanks, nystep, it is very kind of you to explain how those shaders work.

Or rather, trying to.. ;)

Well, so much for the 3D texture; I've been commenting on a vertex program that wasn't used in the final product :) The interaction.vfp gives a slightly better overview of how the shading works. I wonder why the 3D light textures were abandoned by Carmack. Did they require too much texture bandwidth? Too much video memory? I'm really not sure that one more TXP and one MUL in a fragment program is really faster and worth the texture bandwidth gain, but he must have tested it..
As for the performance difference between Radeon and GeForce, I've seen ARB_precision_hint_fastest at the beginning of the fragment program...
From the ARB specification:


However, the "ARB_precision_hint_fastest" and
"ARB_precision_hint_nicest" program options allow applications to
guide the GL implementation in its precision selection. The
"fastest" option encourages the GL to minimize execution time,
with possibly reduced precision. The "nicest" option encourages
the GL to maximize precision, with possibly increased execution time.

So it allows GeForce FX and later cards to run the fragment program with 16-bit floating point or even lower fixed-point precision, whereas ATI cards are bound to their 24-bit precision.
But anyway, considering the content of the fragment program, nothing seems to require higher precision, does it?

Nil_z
10-17-2004, 08:16 PM
Another question about the DOOM3 shaders: how does it use those heightmap textures on the models? And I can't find the code calculating the specular light in the fragment program; my result looks much worse than the game's. Does anyone know how to do that?

unreal
10-17-2004, 10:09 PM
Hey! Has anyone tried to make a demo/example using the interactionR200.vp or interaction.vfp programs?
I had a quick look at them, but I cannot find the code of the interactionR200 fragment shader, because the extension is not supported by the R200. I must try to make one, because interaction.vfp is more complicated and I don't understand some parts.

KRONOS
10-18-2004, 12:02 AM
Originally posted by Nil_z:
Another question about the DOOM3 shaders: how does it use those heightmap textures on the models? [...]

Carmack converts the heightmaps to normal maps and then adds those to the normals of the normal map (if a normal map exists). Check the addnormals "function" in the SDK.

Nil_z
10-18-2004, 12:47 AM
I wonder how and why he does that :confused: BTW, what is the SDK you're talking about?

Jens Scheddin
10-18-2004, 03:32 AM
Originally posted by Nil_z:
I wonder how and why he does that :confused: BTW, what is the SDK you're talking about?

It's done to add small details that would be a mess to add directly to the high-polygon model used to create the normal maps. The height maps are probably all hand-painted in Photoshop or so.
The code to add a height map to a normal map needs two steps:
1. Create a normal map from the height map. In this step you take three pixels from the height map forming a plane and calculate the normal of that plane. This is the normal to encode into the new image.
2. Add this to the other normal map. After decoding the two normals from the two maps, use the following code to add the normals together:

n0[0] /= n0[2];   // project n0 onto the z=1 plane,
n0[1] /= n0[2];   // i.e. reduce it to its x/z and y/z slopes
n1[0] /= n1[2];   // same for n1
n1[1] /= n1[2];
normal[0] = n0[0] + n1[0];   // sum the slopes
normal[1] = n0[1] + n1[1];
normal[2] = 1.0f;
normal.VxNormalize();        // renormalize before re-encoding

Here n0 and n1 are the two decoded normals and normal is the resulting one to encode into the final normal map.

Take a look at the id developer site (http://www.iddevnet.com) for the SDK. It contains all the game code (but no renderer or network code).

tfpsly
10-18-2004, 03:58 AM
The Doom3 mod SDK:
http://www.iddevnet.com/
ftp://ftp.idsoftware.com/idstuff/doom3/source/Doom3_SDK.exe

Nil_z
10-18-2004, 11:47 PM
Why not add the normal info from the heightmap into the normal map beforehand, in the data?

Jens Scheddin
10-19-2004, 12:01 AM
Originally posted by Nil_z:
Why not add the normal info from the heightmap into the normal map beforehand, in the data?

Adding and renormalizing them gives an average normal. When I implemented this, I realized that it looks really bad :)

EDIT: Uh, i should read the post before replying :)

I think it was done for flexibility. One material could have the normal map only, and another one could have the same map with added scratches on it.

Nil_z
10-19-2004, 12:20 AM
How do I generate that normal map?
I am thinking of:
getting 3 pixels from (x,y), (x+1,y) and (x,y+1), and calculating the normal from them. But what should I do with the pixels on the right and bottom edges?

Jens Scheddin
10-19-2004, 04:49 AM
Originally posted by Nil_z:
But what should I do with the pixels on the right and bottom edges?

That depends on the texture clamping. If you want a repeating normal map, you wrap around, like (x+1)%width. Otherwise you take the rightmost pixel twice.
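
A minimal C++ sketch of the whole conversion with wrap-around edges (hypothetical code, assuming an 8-bit grayscale height map; the 'scale' parameter acts as the bump strength):

#include <cmath>
#include <cstdint>

// Build an RGB normal map from a grayscale height map, wrapping at the
// edges so the result tiles. Normals are encoded from [-1,1] to [0,255].
void HeightToNormalMap(const uint8_t* height, uint8_t* normalRGB,
                       int w, int h, float scale)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float c  = height[y * w + x] / 255.0f;
            float px = height[y * w + (x + 1) % w] / 255.0f;    // wrap right
            float py = height[((y + 1) % h) * w + x] / 255.0f;  // wrap down
            // Normal of the plane through the three samples:
            float nx = (c - px) * scale;
            float ny = (c - py) * scale;
            float nz = 1.0f;
            float inv = 1.0f / std::sqrt(nx * nx + ny * ny + nz * nz);
            uint8_t* out = &normalRGB[(y * w + x) * 3];
            out[0] = (uint8_t)((nx * inv * 0.5f + 0.5f) * 255.0f);
            out[1] = (uint8_t)((ny * inv * 0.5f + 0.5f) * 255.0f);
            out[2] = (uint8_t)((nz * inv * 0.5f + 0.5f) * 255.0f);
        }
    }
}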

Nil_z
10-19-2004, 11:16 PM
Why don't they use another normal map, instead of the heightmap, to add to the base normal map?
Anyway, I will try the heightmap-to-normal-map conversion method.

BTW, studying the rendering method in DOOM3 is quite interesting. I'd like to render some DOOM3 models to see the result in my own rendering code. Does anyone know where I can find the md5mesh spec of the released DOOM3? I can only find specs for the alpha.

GordonShumway
10-20-2004, 12:53 AM
Hi!

Either take a look at www.doom3world.org/phpbb2/viewtopic.php?t=2884 (http://www.doom3world.org/phpbb2/viewtopic.php?t=2884) or simply use the Doom3 SDK available at http://www.iddevnet.com/

Gordon

Sunray
10-20-2004, 01:13 AM
Manually converting the height maps to normal maps would take unnecessary time and lose flexibility. The addnormals function could take a strength parameter which the artist could specify in the material file.

I've written a DOOM 3 model loader with full animation support, both for version 6 (alpha) and version 10 (release). It's interesting to see the tangent-space symmetry seams (mirrored texcoords) in the release models, which are practically invisible in the alpha models. DOOM 3 seems to take only polygons with the same "handedness" into account when computing vertex normals..

Specs can be found here: http://www.doom3world.org/phpbb2/viewtopic.php?t=2884

Nil_z
10-21-2004, 08:31 AM
There is no per-vertex normal info in the DOOM3 model data. How should I calculate the tangent-space transformation?

SirKnight
10-21-2004, 10:25 AM
Originally posted by Nil_z:
There is no per-vertex normal info in the DOOM3 model data. How should I calculate the tangent-space transformation?

You can always compute the normals yourself at model load time.

-SirKnight

Nil_z
10-21-2004, 10:47 AM
There is no smoothing setting either; should I average all the normals of the polygons that share a vertex to create the vertex normal?

Jens Scheddin
10-21-2004, 02:11 PM
Originally posted by Nil_z:
There is no smoothing setting either; should I average all the normals of the polygons that share a vertex to create the vertex normal?

Correct.
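
A minimal sketch of that averaging (hypothetical types, not id's code; summing the unnormalized cross products also gives a rough area weighting for free):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Average the face normals of all triangles sharing each vertex.
std::vector<Vec3> ComputeVertexNormals(const std::vector<Vec3>& pos,
                                       const std::vector<int>& idx)
{
    std::vector<Vec3> n(pos.size(), Vec3{0, 0, 0});
    for (size_t i = 0; i + 2 < idx.size(); i += 3) {
        const Vec3& a = pos[idx[i]];
        const Vec3& b = pos[idx[i + 1]];
        const Vec3& c = pos[idx[i + 2]];
        // Unnormalized face normal = cross(b - a, c - a).
        Vec3 e1{b.x - a.x, b.y - a.y, b.z - a.z};
        Vec3 e2{c.x - a.x, c.y - a.y, c.z - a.z};
        Vec3 fn{e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x};
        for (int k = 0; k < 3; ++k) {           // accumulate into vertices
            Vec3& vn = n[idx[i + k]];
            vn.x += fn.x; vn.y += fn.y; vn.z += fn.z;
        }
    }
    for (Vec3& vn : n) {                        // normalize the sums
        float len = std::sqrt(vn.x * vn.x + vn.y * vn.y + vn.z * vn.z);
        if (len > 0.0f) { vn.x /= len; vn.y /= len; vn.z /= len; }
    }
    return n;
}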

Nil_z
10-27-2004, 12:49 AM
I have almost finished the MD5mesh viewer and it seems to work fine, thanks for all the help here. But I noticed there are some strange bright lines over the model when the light moves, as if the specular light is not correct. A screenshot can be found here:
http://gba.ouroad.org/misc_download/2004/10/screenshot.JPG_1098866407.jpg
It only appears on the left side of the model.
Does anyone know what is wrong?

KRONOS
10-27-2004, 03:08 AM
Originally posted by Sunray:
Manually converting the height maps to normal maps would take unnecessary time and lose flexibility. The addnormals function could take a strength parameter which the artist could specify in the material file.

And you can... addnormals doesn't take that parameter, but the heightmap function does, and since you then add both normals, it is like having a strength parameter...

Jens Scheddin
10-27-2004, 06:49 AM
The specular lighting looks completely wrong. How do you calculate it?

Nil_z
10-27-2004, 11:57 PM
The strange specular only appears when the light is at a certain angle; at other times it looks OK to me. Maybe it is a method that only looks right sometimes :p

Here are my vp & fp. The idea is to dot the normal with the half-angle vector, raise it to a power, multiply by the texel from the specular map, and add it to the result color. I hope the idea is not wrong.

!!ARBvp1.0
OPTION ARB_position_invariant;

ATTRIB iTex0 = vertex.texcoord[0];
ATTRIB tangent = vertex.texcoord[1];
ATTRIB bitangent = vertex.texcoord[2];
ATTRIB normal = vertex.normal;

PARAM mvi[4] = { state.matrix.modelview.inverse };
TEMP lightpos, lightvec,halfvec, temp;

DP3 lightpos.x, mvi[0], state.light[0].position;
DP3 lightpos.y, mvi[1], state.light[0].position;
DP3 lightpos.z, mvi[2], state.light[0].position;

#i am using directional light, normalize to get light direction
DP3 temp, lightpos, lightpos;
RSQ temp, temp.x;
MUL lightvec.xyz, lightpos, temp.x;

#vector from vertex to camera
DP3 temp, vertex.position, vertex.position;
RSQ temp, temp.x;
MUL temp, -vertex.position, temp;

#get half angle vector
ADD halfvec, temp, lightvec;
#normalize half angle vector
DP3 temp, halfvec, halfvec;
RSQ temp, temp.x;
MUL halfvec, halfvec, temp;
#transform to tangent space
DP3 result.texcoord[1].x, lightvec, tangent;
DP3 result.texcoord[1].y, lightvec, bitangent;
DP3 result.texcoord[1].z, lightvec, normal;
DP3 result.texcoord[2].x, halfvec, tangent;
DP3 result.texcoord[2].y, halfvec, bitangent;
DP3 result.texcoord[2].z, halfvec, normal;
MOV result.texcoord[0], iTex0;
END;

!!ARBfp1.0
PARAM lightcolor = state.light[0].diffuse;
PARAM ambient = state.light[0].ambient;
PARAM const = {32.0, 0.0, 0.0, 0.2};
TEMP normal, temp, lightvec, texel,spec, halfvec;
TEX texel, fragment.texcoord[0], texture[0], 2D;
TEX normal, fragment.texcoord[0], texture[1], 2D;
TEX spec, fragment.texcoord[0], texture[2], 2D;

#calc normal from normalmap
MAD normal, normal, 2.0, -1.0;
DP3 temp, normal, normal;
RSQ temp, temp.x;
MUL normal.xyz, normal, temp;

#normalize light direction
DP3 temp, fragment.texcoord[1], fragment.texcoord[1];
RSQ temp, temp.x;
MUL lightvec, fragment.texcoord[1], temp;

#normalize half angle vector
DP3 temp, fragment.texcoord[2], fragment.texcoord[2];
RSQ temp, temp.x;
MUL halfvec, fragment.texcoord[2], temp;

# dot normal with half angle vector
DP3_SAT halfvec, halfvec, normal;
POW halfvec, halfvec.x, const.x;

DP3_SAT temp, normal, lightvec;
MAD_SAT temp, lightcolor, temp, ambient;
MUL temp, texel, temp;

#add specular to result
MAD_SAT result.color, halfvec, spec, temp;
END;

Ysaneya
10-28-2004, 12:29 AM
In your vertex shader code... how do you calculate the vector from the vertex to the camera? You don't even use the camera position - unless you implicitly assume that the camera position is always at (0,0,0)??

Y.

Nil_z
10-28-2004, 01:14 AM
I am using OPTION ARB_position_invariant in the vertex program, which means the fixed-function pipeline does the position computation. I thought the vertex position I get in the vertex program had been transformed into view space, so the camera position would always be (0,0,0). Maybe I am wrong; I'll try transforming the position in the vertex program.

Jens Scheddin
10-28-2004, 02:19 AM
#vector from vertex to camera
DP3 temp, vertex.position, vertex.position;
RSQ temp, temp.x;
MUL temp, -vertex.position, temp;

Ysaneya is right: this code assumes the camera to be static at position (0,0,0). You need to pass the camera position to the vertex program if you want to move it around in your scene.
In a position-invariant program, the vertices are transformed after the program execution, so during program execution the vertex is still in world space (or whatever space it was in).


"SUB halfVec, viewPos, vertex.position;\n"
"ADD halfVec, lightVec, halfVec;\n"This will give you the half angle vector for the specular term. viewPos is the camera position in world space and lightVec the vector from the vertex to the light source (or a constant vector for directional lights). Take care with normalization.

EDIT: typo

Nil_z
11-01-2004, 09:31 PM
I have changed my vertex program to get the camera position by transforming (0,0,0) with the inverse modelview matrix. I should set this position via some program env parameter, but right now it is just a test. The specular light moves with the diffuse light now and looks OK. The strange line of specular appeared because, when the light comes from behind, I could still get a positive value from the half-angle vector dotted with the normal. I am now using (H.N)*(N.L), like in the ARB_fragment_program spec.

BTW, about adding hightmap to normalmap, instead of add the two normal together, I calculate the offset between the normal from heightmap and (0,0,1), then add the offset to normalmap. It gives much better result than just add the two normal.