Can anyone tell me the difference?

What is the difference between the following two:

vec3 Normal = normalize(vec3(ProjectionMatrix*vec4(Normal, 1.0f)));

and

Normal = normalize(vec3(ProjectionMatrix*vec4(Normal, 0.0f)));

I find both give the same result.

Also, how do the following two differ:

vec4 Position1 = Position0 + vec4(Normal*0.25, 0.0f);

and

vec4 Position1 = Position0 + vec4(Normal*0.25, 1.0f);

Any explanation will be very highly appreciated.

The difference is right there in what you posted. In each case, one of the expressions has a one where the other has a zero.

If you can’t figure out how that affects the result, then you really need to spend more time working through the problems with pen and paper until you have a reasonable understanding of the concepts.

I think I could not make clear what I want to know.

For the first case (multiplying by the projection matrix), it seems to me that the second version makes more sense, as a normal is a vector and its 'w' component should be zero instead of 1. That should hold whether it is multiplied by a matrix or added to another point. Am I right in my understanding?

On the other hand, in lighting calculations, how can these two computations affect the result? Also, for the second case, the top one is right, as the normal is again a vector and its 'w' component should be zero.

To be more clear, please consider the following two expressions: aren’t they equivalent?

mat3 normalmatrix = mat3(vec3(model_view[0]), vec3(model_view[1]), vec3(model_view[2]));

vec3 normal *= normal_matrix; <—same

vec3 normal = vec3(model_view * vec4(normal, 0.0)); <— same

But it will be definitely different if I write the bottom one as follows:

vec3 normal = vec3(model_view * vec4(normal, 1.0));

Is this kind of calculation (w = 1) totally wrong when dealing with vectors like normals?

Thanks in advance.

On the other hand, in lighting calculations, how can these two computations affect the result?

The question is kinda irrelevant in your example, because lighting calculations are not done in clip space. The projection matrix transforms from camera space to clip space. You don’t do lighting in clip space, since clip space is a homogeneous 4D coordinate system, where lines that were parallel in camera space are not necessarily parallel anymore. You cannot (easily) compute things like distances and directions in that space.

The reason you don’t see a difference comes down to two things:

  1. You’re using the wrong matrix. The traditional perspective matrix doesn’t have much of a positional offset, which is the main thing affected by the W component. Normally, it only has a positional offset on the Z component.

  2. You are normalizing the result. That effectively hides the problem.

In short, your result is merely two different kinds of gibberish. It just turns out that the two forms of gibberish aren’t that noticeably different.
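To see why, here is the shape of a standard perspective matrix (a sketch; f = 1/tan(fovy/2), and the exact values depend on your aspect ratio and near/far planes):

```glsl
// Column-major, as GLSL stores matrices. The only "translation-like"
// entries sit in the fourth column and affect z only, so switching the
// input w between 0 and 1 changes little -- and normalize() then
// rescales whatever difference remains.
mat4 P = mat4(
    f/aspect, 0.0, 0.0,                          0.0,  // column 0
    0.0,      f,   0.0,                          0.0,  // column 1
    0.0,      0.0, (zFar+zNear)/(zNear-zFar),   -1.0,  // column 2
    0.0,      0.0, 2.0*zFar*zNear/(zNear-zFar),  0.0); // column 3
```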

I don’t understand. Do you mean the following calculation produces garbage?

mat3 normalmatrix = mat3(vec3(model_view[0]), vec3(model_view[1]), vec3(model_view[2]));

vec3 normal *= normal_matrix; <—same

vec3 normal = vec3(model_view * vec4(normal, 0.0)); <— same

or this one:

vec3 normal = vec3(model_view * vec4(normal, 1.0));

This isn’t even syntactically valid.

Also, use [code] tags rather than [quote] tags for text which isn't actually a quote. When composing a reply, anything within [quote] tags is automatically removed.

[QUOTE=Lee_Jennifer_82;1281791]
Is this kind of calculation (w = 1) totally wrong when dealing with vectors like normals?
[/QUOTE]

Direction vectors would normally have W=0 so that they are unaffected by any translation component.
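A quick illustration (variable names are mine, not from the thread): with a pure translation matrix, W=1 picks up the translation column and W=0 ignores it.

```glsl
mat4 T = mat4(1.0);               // identity
T[3] = vec4(5.0, 0.0, 0.0, 1.0);  // translation column: move +5 along x

vec4 p = T * vec4(1.0, 2.0, 3.0, 1.0); // point:     (6, 2, 3, 1) -- translated
vec4 d = T * vec4(1.0, 2.0, 3.0, 0.0); // direction: (1, 2, 3, 0) -- unaffected
```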

I don’t understand. Do you mean the following calculation produces garbage?

Did you read what I said? I said that multiplying normals by the projection matrix doesn’t make sense. That is (a big part of) why your two computations don’t seem to be different.

My syntax may be wrong. What I was saying is that the following expressions should be equivalent:

mat3 normal_matrix = mat3(vec3(model_view[0]), vec3(model_view[1]), vec3(model_view[2]));

vec3 normal = normal_matrix*normal; <—same

vec3 normal = vec3(model_view * vec4(normal, 0.0)); <— same

I was talking about multiplying the normal by the projection matrix because I found that, when drawing a line along the normal in a geometry shader, we need to do that. Otherwise it was not giving the correct result.

What I was saying the following expressions should be equivalent:

I’m not sure what that has to do with anything. Yes, what you said is correct. But that doesn’t change anything about the validity of the specific example you started with.
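(For reference, the two forms under discussion, written out; note this equivalence assumes model_view has no non-uniform scale, otherwise you need the inverse-transpose:)

```glsl
mat3 normal_matrix = mat3(model_view);           // upper-left 3x3
vec3 n1 = normal_matrix * normal;                // form 1
vec3 n2 = vec3(model_view * vec4(normal, 0.0));  // form 2: w = 0 drops the translation column
// n1 == n2
```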

I was talking about multiplying the normal by the projection matrix because I found that, when drawing a line along the normal in a geometry shader, we need to do that. Otherwise it was not giving the correct result.

Then you must have had a bug in your code. Transforming a normal into clip space is never the right answer to a problem.

If you’re trying to visualize vertex normals, the general idea is to:

  1. Transform the position & normal into camera space.

  2. Compute a second camera-space position by multiplying the normalized normal by some distance, then adding that to the position.

  3. Transform the two positions into projection clip-space, and emit them from your GS.

By doing it this way, the length of each visualized normal will be constant, defined in camera space.
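A minimal geometry-shader sketch of those three steps (all uniform/varying names are assumptions, not code from this thread):

```glsl
layout(points) in;
layout(line_strip, max_vertices = 2) out;

in vec4 vPosCam[];    // camera-space position from the vertex shader
in vec3 vNormalCam[]; // camera-space normal from the vertex shader
uniform mat4 ProjectionMatrix;
const float kLength = 0.25; // line length, fixed in camera space

void main() {
    vec4 p0 = vPosCam[0];                                         // step 1 done upstream
    vec4 p1 = p0 + vec4(normalize(vNormalCam[0]) * kLength, 0.0); // step 2
    gl_Position = ProjectionMatrix * p0; EmitVertex();            // step 3
    gl_Position = ProjectionMatrix * p1; EmitVertex();
    EndPrimitive();
}
```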

If you do the offsetting in clip space as you say you did, then not only will your normals not be correct (even if they aren’t necessarily that wrong), the lengths will not necessarily be consistent from vertex to vertex.

I am again confused. For visualizing normals, I’ve done as follows:

my vertex in vertex shader in clip space, suppose v;
normal in vertex shader in clip space, suppose n;

I passed both to the geometry shader.

then did the following: v1 = v + n*some_constant;

Won’t it produce the same result as you explained? Please let me know.

[QUOTE=Lee_Jennifer_82;1281800]
my vertex in vertex shader in clip space, suppose v;
normal in vertex shader in clip space, suppose n;

I passed both to the geometry shader.

then did the following: v1 = v + n*some_constant;

Won’t it produce the same result as you explained?[/QUOTE]
It will. Matrix multiplication distributes over addition, so M*(v+n) = Mv+Mn.

It will. Matrix multiplication distributes over addition, so M*(v+n) = Mv+Mn.

Normalization doesn’t. Take a look at the OP; he clearly normalizes the vector after multiplying it. M*(v+n) != M*v + norm(M*n).

Equally importantly, throwing away the fourth component also doesn’t distribute over addition. Which is another thing the OP clearly does. Not by making W = 0, but by converting the vec4 output of M*n into a vec3.

So no, they’re not in any way the same.
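To see that normalization doesn’t distribute over the matrix multiply, a hand-worked example with a non-uniform scale:

```glsl
mat3 M = mat3(2.0, 0.0, 0.0,   // first column: scale x by 2
              0.0, 1.0, 0.0,
              0.0, 0.0, 1.0);
vec3 n = vec3(1.0, 1.0, 0.0);

vec3 a = normalize(M * n); // M*n = (2,1,0), so a = (2,1,0)/sqrt(5)
vec3 b = M * normalize(n); // normalize(n) = (1,1,0)/sqrt(2), so b = (sqrt(2), 1/sqrt(2), 0)
// a != b, and b isn't even unit length.
```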

Thanks for the clarification. It would be very kind of you to make things a bit clearer:

1) Is it totally wrong to calculate like this: Mv + norm(Mn), as I did? I converted the w component of the normal to zero before multiplication; doesn’t that eliminate the effect?

2) Can you also explain what you mean by “Not by making W = 0, but by converting the vec4 output of M*n into a vec3.” It would be good to explain with an example.

Thank you!

In your case, yes.

You almost certainly want M*(v+norm(n)), which is equal to M*v + M*norm(n) but is not (in general) equal to M*v + norm(M*n).

Sooner or later, you’re going to need to understand the basics of linear algebra. Rote memorisation won’t get you very far.

Is it totally wrong to calculate like this: Mv + norm(Mn), as I did?

If M is a perspective projection matrix, then yes that is totally wrong.

Can you also explain what you mean “Not by making W = 0, but by converting the vec4 output of M*n into a vec3.” It would be good to explain with example.

It was your example. In your very first post. Go look at it.

vec4(Normal, 0.0f)) is a vec4.

ProjectionMatrix*vec4(Normal, 0.0f) is also a vec4.

vec3(ProjectionMatrix*vec4(Normal, 0.0f)) is a vec3. You chopped off the fourth component of the vector that resulted from multiplying a vec4 by a mat4.

I converted the w component of the normal to zero before multiplication; doesn’t that eliminate the effect?

No. Nor does it mean that the result of a vector/matrix multiplication has no W component.
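Concretely: the fourth row of a standard perspective matrix is (0, 0, -1, 0), so the output W is -z regardless of the input W.

```glsl
vec4 clipN = ProjectionMatrix * vec4(Normal, 0.0);
// For a standard perspective matrix, clipN.w == -Normal.z, which is
// generally nonzero. Setting the input w to 0 does not make the
// output w zero; vec3(clipN) then silently throws that w away.
```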

The easiest thing for you to do is to just not multiply vector directions by perspective projection matrices at all. Do your operations as I said before: in camera space. Transform the resulting positions into clip space.

Thank you so much for the clarification. Now I totally understand. One last thing: what will be the problem with the resulting output? Just that the normals are not of equal length, or can there be more problems?