PDA

View Full Version : glScale



mr_coolio
03-26-2003, 01:34 AM
My program looks a bit like this:



glEnable(GL_NORMALIZE);
glScalef(1.0f, 1.0f, z);
glCallList(model_id);


I find that for values of z > 1e-10 it's fine: I just get a flat model, and the lighting is appropriate.
However, when z is zero or 1e-50 it goes mad! The lighting is very wrong: although the model is flat, the lighting suggests it isn't. Further, the lighting seems to depend on the angle at which I view it: from one angle it's bright; rotate a bit and it's dark.

Any ideas please? I think it's a bug in gl.

[This message has been edited by mr_coolio (edited 03-26-2003).]

Relic
03-26-2003, 02:05 AM
Normals are transformed by the inverse transposed modelview matrix.
Since glScale(1, 1, z) produces this matrix:



1 0 0 0
0 1 0 0
0 0 z 0
0 0 0 1

the inverse transpose of it is:



1 0 0 0
0 1 0 0
0 0 1/z 0
0 0 0 1

Any scale with z == 0, or small enough that 1/z exceeds floating point range (z near FLT_MIN), results in garbage. Lighting calculations with such normals are undefined.

[This message has been edited by Relic (edited 03-26-2003).]

mr_coolio
03-26-2003, 02:34 AM
I'm sorry, there's a mistake. I meant to put 1's in not 0's. i've edited my original post.

Relic
03-26-2003, 06:23 AM
:D Me too. Same problem: as z -> 0, the 1/z term gives garbage for the normals. +Inf or NaN doesn't normalize well.

mr_coolio
03-27-2003, 01:17 AM
Well, well this is very interesting isn't it? I suppose that a zero value will generate a singular matrix, but large values should be fine.

How big do the values of z in glScale(1,1,z) have to be to make it go nuts?

[This message has been edited by mr_coolio (edited 03-27-2003).]

Relic
03-27-2003, 05:20 AM
Depends on the internal floating point precision of the OpenGL implementation you use; normally IEEE 32-bit float. The Intel CPU documentation lists the min and max values you can represent in that format.
The problem you have is that glScale(1, 1, 0) allows only two normals as a result: (0, 0, 1) and (0, 0, -1). An implementation could in principle decode those from the -Inf and +Inf FPU results, if it wanted to, but that would be pretty pointless.
One solution would be to squish flat your geometry yourself and specifying one of the above normals for all vertices, depending on which side of the plane your light is positioned.

mr_coolio
03-27-2003, 08:12 AM
Well that's interesting. It seems a 32-bit positive float only covers about 1.175e-38 to 3.403e+38. I thought it was much more than that! So if I set z to 1e-50 it's outside those bounds and something is going to go wrong; 1e-50 is probably going to get rounded to zero by the double->float conversion.

However, if I specify 1e+99 for z, GL is still going to see a DWORD and interpret it as a number, even if not the number I meant. So I don't see why it should go bananas in that case.

Does it make any difference if I use doubles throughout? E.g.

double z;
glScaled(1, 1, z);

or does GL use floats internally?

[This message has been edited by mr_coolio (edited 03-27-2003).]

[This message has been edited by mr_coolio (edited 03-27-2003).]

Relic
03-27-2003, 08:30 AM
Because if you do anything with numbers the FPU cannot deal with, the FPU returns errors and special values like NaN or denormals, and anything computed from them is just as NaN or denormalized. Interpreting such numbers as a DWORD doesn't give you any valid information beyond the fact that the FPU threw up its hands. Look up the result table of FDIV in the "Intel Architecture Software Developer's Manual, Volume 2: Instruction Set Reference".
And for more info read the "Intel Architecture Software Developer's Manual, Volume 1: Basic Architecture", Chapter 7, Floating-Point Unit. ('nuff said. ;))

PS: No, doubles on API side won't help if the implementation uses floats internally.

[This message has been edited by Relic (edited 03-27-2003).]