Velocity vector transformation in shaders

Some minor confusion going on that perhaps someone can help me with…

I’m working on some shaders where I need to render objects differently based on their velocity. Right now I’m just testing the concept and I’m running into a bit of trouble. I’ll try to give a rundown.

(First I’ll confess that my top layer is OpenSceneGraph, but given it’s just a wrapper over OpenGL I don’t think it’s the issue… I think it’s dumb math or a misunderstanding of the shaders on my part.)

Alright so. As I said, I’m rendering objects differently based on their velocity vector. Velocity vector is tracked in the main app. I have a uniform tied to each object which updates the velocity vector each frame. Now, I should be able to transform the velocity vector via the ModelViewMatrix just like I would the normal vector of a vertex (if they were facing the same direction, transforming the vertex should yield the normal and velocity still facing the same direction, and so on.) So in my vertex shader I have:


relativeVelocity = vec4(  gl_ModelViewMatrix * vec4(velocity.xyz, 0.) );

I’ve printed the ModelViewMatrix out from the main application, and when I position the camera at (-100, 0, 0) and look at (1, 0, 0) I get:


[   0   0   -1   0   ]
[   0   1    0   0   ]
[   1   0    0   0   ]
[   0   0  -100  1   ]

Which, since OpenGL stores matrices in column-major order (so each printed row above is actually a column of the matrix), looks correct: a 90 degree rotation about Y, with the translation accounting for the camera sitting at X = -100.

I’m having a bit of trouble from here. As a test to see if this was working correctly, I set up a simple red/green/blue test: an object moving straight toward and away from the camera, to check whether the velocity vector flipped the colors accordingly (Z positive while moving toward us, Z negative while moving away). As I understand it, after all transformations the camera looks down the negative Z axis, so when the velocity vector points toward +Z the object should be coming toward us, yes?

So what I was doing in the fragment shader:


if ( relativeVelocity.z > 0. )
    gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
else if ( abs(relativeVelocity.z) < 1e-10 )  // zero, for rounding errors
    gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );
else
    gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );

However, I’m seeing the opposite of what I expected. So, is my math wrong, my understanding of the orientation of the camera, both, or something else? :S

If velocity is a uniform (and therefore within a single draw call cannot change), and gl_ModelViewMatrix is a uniform (which it is), then there’s no point in doing this transformation in the vertex shader. Or even doing anything with the velocity in the vertex shader. Velocity is a constant; just transform the velocity vector on the CPU and pass that as your uniform.
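
For what it’s worth, a minimal sketch of what I mean (the uniform name eyeVelocity is mine, not something from the original post): the application rotates the per-object velocity into eye space once and uploads it, and the fragment shader just uses it directly.

// Hypothetical fragment shader, assuming the app has already transformed the
// per-object velocity into eye space and uploaded it as "eyeVelocity":
uniform vec3 eyeVelocity;

void main()
{
    if ( eyeVelocity.z > 0.0 )
        gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );  // moving toward the eye
    else
        gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );  // moving away
}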

Also:

if ( relativeVelocity.z > 0. )
    gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
else if ( abs(relativeVelocity.z) < 1e-10 )  // zero, for rounding errors
    gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );
else
    gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );

What does “for rounding errors” mean? A number (if it is actually a number and not NaN) is either > 0 or <= 0. Unless you’re specifically looking for a value that is close to zero, this makes no sense. And if you are looking for a value that is “close to zero,” it still doesn’t do what you need, because anything even slightly above zero is caught by the first branch.

What you need to find a “close to zero” value is this:


const float EPSILON = 1e-4;  // tolerance; the right value depends on your units and scale

if ( relativeVelocity.z > EPSILON )
    gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );  // above 0
else if ( relativeVelocity.z < -EPSILON )
    gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );  // below 0
else
    gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );  // close to 0 (or NaN)

I don’t know whether this is your problem right now, but if it isn’t it will become one. The velocity vector, like the normal vector, is 3D. Neither should be multiplied by the 4x4 ModelViewMatrix. The velocity vector should only be transformed by the rotation factors of the ModelViewMatrix. In general, it should never be scaled, translated, or sheared, which are components of the ModelViewMatrix.

[UPDATE:]
Using a uniform to provide the velocity vector is extremely restrictive. Only a very small number of uniforms are available, which means you could only render a very small number of objects per draw call. If all you want to do is render one object, it’s no big deal. But if you want to render tens or hundreds or thousands of objects, you can’t use uniforms for your velocity vectors. (Unless you only draw one object per draw call, which is very inefficient.)

It’s more general, and efficient, to store your velocity vectors as vertex attributes. Besides, the velocity of your object will not be the same everywhere on the object unless your object has no rotation while it moves. So, depending on what you are trying to do, it may be wrong to treat an object as having just one velocity. If you are only concerned with just one point in your object, then just one velocity will do. Otherwise you need to provide a separate velocity per vertex.

Anyway, if you do store your velocity vectors as vertex attributes, then you will need to rotate them the same as you rotate the vertices of your objects, and it would make sense to do that in the vertex shader.
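
Something like this, as a rough sketch (the attribute and varying names are just placeholders I picked):

// Vertex shader sketch: per-vertex velocity supplied as an attribute and
// rotated into eye space alongside the vertex (w = 0 drops the translation).
attribute vec3 velocityAttr;   // per-vertex velocity in model space
varying vec3 eyeVelocity;

void main()
{
    eyeVelocity = ( gl_ModelViewMatrix * vec4( velocityAttr, 0.0 ) ).xyz;
    gl_Position = ftransform();
}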

The velocity vector, like the normal vector, is 3D. Neither should be multiplied by the 4x4 ModelViewMatrix. The velocity vector should only be transformed by the rotation factors of the ModelViewMatrix. In general, it should never be scaled, translated, or sheared, which are components of the ModelViewMatrix.

This isn’t true. By putting a zero in the W component, you ensure that the translation will not be applied. As for scaling and/or shearing, normals do need to be scaled and sheared. However, they do not use the same matrix for this; you have to take the inverse-transpose of the modelview matrix. This is what the built-in gl_NormalMatrix holds.
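
In compatibility-profile GLSL (the style the original shader already uses), that boils down to something like this sketch:

// Direction-style vectors (like the velocity): w = 0 drops the translation,
// but any scale/shear in the upper 3x3 still applies.
vec3 eyeVelocity = ( gl_ModelViewMatrix * vec4( velocity.xyz, 0.0 ) ).xyz;

// Normals: use the inverse-transpose of the upper 3x3, which is exactly what
// the built-in gl_NormalMatrix provides.
vec3 eyeNormal = gl_NormalMatrix * gl_Normal;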

It would help if the W component of the velocity vector is 0.0, because that would neutralize the translation terms of the ModelViewMatrix, but that part of the code was cut off in the original post so we don’t know what value it has. Even if the velocity vector’s W component is 0.0, that doesn’t guarantee correct results after multiplying by the 4x4 ModelViewMatrix, unless the fourth row of the ModelViewMatrix is [0 0 0 1].

In any case, the scale factors that are part of the ModelViewMatrix should not be applied to the velocity vector (drawing an object twice as large does not make it go twice as fast).

Typically, normals should be unit length, so scaling them is typically not a good idea. By that I mean, if you start with unit length normals, and then scale them, you will in many cases then have to normalize them, which is very expensive. Better not to scale them if possible.

As far as shearing normals goes, a normal needs to be perpendicular to its surface. If the surface’s vertices are sheared, the normals must be appropriately adjusted. That appropriate adjustment is not the same as the shear terms in the ModelViewMatrix, so it would be wrong to multiply the normals by the ModelViewMatrix, just as I had written. In any case, I specifically referred to velocity vectors in that part of my post.

In the case of velocity vectors, it isn’t clear what should be done to them when their object’s vertices are sheared. Probably nothing.

Typically, normals should be unit length, so scaling them is typically not a good idea. By that I mean, if you start with unit length normals, and then scale them, you will in many cases then have to normalize them, which is very expensive. Better not to scale them if possible.

If you scale your model, the normal must be adjusted appropriately. Otherwise, they’re not approximating the scaled surface correctly. Hence the inverse/transpose.

In the case of velocity vectors, it isn’t clear what should be done to them when their object’s vertices are sheared.

Actually, that raises a good question. Why isn’t the velocity vector already in world-space?

I gave this problem a little more thought and decided that none of these approaches is very good, in the general case. The general case is that the object is being acted on by gravity, friction, has rotation, forward velocity, may collide with other objects (causing a sharp change in forward velocity and possibly rotation), and may deform (such as following a collision). The velocity vector is different for each vertex, and it changes for each vertex each frame in a very complex manner. Due to the frame-to-frame changes, using vertex attributes isn’t a good solution.

The general solution is really very efficient and simple, given vertex shaders. Instead of passing the object’s ModelViewMatrix to the vertex shader as a single uniform, pass in two ModelViewMatrices as uniforms: the ModelViewMatrix of the previous frame, and the ModelViewMatrix of the current frame. (This requires no extra calculation, since it is trivial to keep the previous frame’s ModelViewMatrix around, except when rendering the first frame.) To calculate the velocity vector of each vertex for the current frame, multiply each vertex by the current frame’s ModelViewMatrix (which you need to do anyway) and also by the previous frame’s ModelViewMatrix. Subtract the vertex’s previous-frame position from its current-frame position, and you have a velocity vector for that vertex for that frame. Finally, you should scale that velocity vector by the inverse of the time elapsed between frames, so that should also be passed in as a uniform.
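
A vertex-shader sketch of this two-matrix idea (all uniform and varying names below are placeholders of mine, not anything established in this thread):

uniform mat4 currModelView;    // this frame's ModelViewMatrix
uniform mat4 prevModelView;    // last frame's ModelViewMatrix
uniform float invDeltaTime;    // 1.0 / (seconds between the two frames)

varying vec3 eyeVelocity;      // per-vertex velocity in eye space

void main()
{
    vec4 currPos = currModelView * gl_Vertex;
    vec4 prevPos = prevModelView * gl_Vertex;
    eyeVelocity  = ( currPos.xyz - prevPos.xyz ) * invDeltaTime;
    gl_Position  = gl_ProjectionMatrix * currPos;
}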

It might be possible to be even more efficient by using a VBO (or something, I’m not sure what since I’ve never tried anything like this): In the vertex shader, calculate the velocity vector for each vertex by subtracting the vertex’s position as stored in the VBO the previous frame from the vertex’s position just calculated for the current frame. Scale the velocity vector by the inverse of the change in time between frames. Replace the vertex’s position in the VBO with its location in the current frame (so that it will be available when the next frame is rendered). This approach only requires uniforms that pass the current frame’s ModelViewMatrix and the inverse of the change in time between the previous frame and the current frame.
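
If that were done with an attribute holding the previous position, the shader side might look something like the sketch below (names invented here; the write-back of the current position into the buffer is the part I’m unsure about, and is left to the application, e.g. via a CPU update or transform feedback):

attribute vec3 prevEyePosition;   // this vertex's eye-space position last frame
uniform float invDeltaTime;       // 1.0 / (seconds between frames)
varying vec3 eyeVelocity;

void main()
{
    vec4 currEyePosition = gl_ModelViewMatrix * gl_Vertex;
    eyeVelocity = ( currEyePosition.xyz - prevEyePosition ) * invDeltaTime;
    gl_Position = gl_ProjectionMatrix * currEyePosition;
}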

These approaches automatically deal with everything, including scaling, shearing and even dynamic geometry. And it’s easy to do!

If you scale your model, the normal must be adjusted appropriately. Otherwise, they’re not approximating the scaled surface correctly. Hence the inverse/transpose.

I don’t think this is true. Angles/orientations do not change when an object is scaled, provided it is scaled uniformly in all dimensions (which is usually the case). The normal describes the angle or orientation of a surface at a point. It doesn’t matter how big or small you make the object, as long as you scale it uniformly in all dimensions the direction of the normal will not change. For many common calculations involving the normal, it must be unit length.
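
For example (just a sketch): under a uniform scale, transforming the normal with the plain ModelViewMatrix only changes its length, not its direction, so a renormalize is all that’s needed.

// Uniform-scale case: direction is preserved, only the length changes.
vec3 eyeNormal = normalize( ( gl_ModelViewMatrix * vec4( gl_Normal, 0.0 ) ).xyz );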

Angles/orientations do not change when an object is scaled, provided it is scaled uniformly in all dimensions (which is usually the case)

And again, when it’s not the case, you need the inverse/transpose.

Best to do what always works than to do checks to see if a matrix needs an inverse/transpose or not.

Best to do what always works than to do checks to see if a matrix needs an inverse/transpose or not.

Probably in 99.9% of programs, scaling is uniform in all dimensions 100% of the time (when converting from object coordinates to eye coordinates, i.e., following the ModelViewMatrix transformation). If it is a certainty in any given program that scaling will always be uniform in all dimensions, then there’s no need to do any checks and no need to do inverse transposes, right?

If you’re trying to use the normal vector after conversion to clip coordinates or normalized device coordinates, then that’s a whole new ball game.

If it is a certainty in any given program that scaling will always be uniform in all dimensions, then there’s no need to do any checks and no need to do inverse transposes, right?

Right up until someone assumes that putting arbitrary scales in will work perfectly fine. Someone needs an oval somewhere, so they take a sphere model and stretch it out. It’ll work just fine, right?

Robust coding practices dictate that you should make sure that your code always works, even if people do something unexpected.

Robust coding practices dictate that you should make sure that your code always works, even if people do something unexpected.

Code should do what it is documented to do. It is generally impossible to guarantee always correct behavior when people use something outside of documented requirements.

If I was writing a function that allowed the programmer to specify separate scale factors for each of the dimensions, then it would be necessary to do something such as you have suggested. But if my function only allowed the programmer to specify a single scale factor (which would be applied uniformly to all dimensions), then it would never be necessary to do as you suggested.

OpenGL allows the programmer to scale each dimension individually. If the programmer does so, he will have to deal with the consequences. If the programmer doesn’t do so, then the code will always work without doing anything special regardless of how end users use the program.

The typical graphics application allows end users to enter coordinates of model vertices, but not to enter individual scale factors for each coordinate dimension.

Ivory tower programming at its finest. Just let the documentation handle it. Because documentation is never out of date. And everyone always reads and remembers the documentation perfectly. And nobody will ever want to do something against the documentation, but not actually update the documentation to match. And… well, I can keep going, but you get my point.

Defensive coding is like defensive driving. Sure, that car might not be backing out of the driveway, but why take the chance?

The typical graphics application allows end users to enter coordinates of model vertices, but not to enter individual scale factors for each coordinate dimension.

What do you mean by the “typical graphics application?” If you’re talking about game engines, their entity systems are generally not capable of handling non-uniform scales, due to being unable to do collision detection properly. This goes double for objects that use skeletal animation systems. However, even so, there are many, many games that apply non-uniform scales to objects. These are generally more “cartoony”, but they still have lighting that needs to be correct.

Virtually every 3D modelling package is perfectly happy with non-uniform scales. And most non-gaming graphics systems (Ogre, for example) are likely just fine with non-uniform scale.

The point of the original poster’s question had to do with velocity vectors, not normals. I just happened, incidentally, to mention normals in addition to velocity vectors in my initial response to the original poster:

That’s all I wrote about normal vectors. What I wrote then is correct. You misread what I wrote and now we’ve been having this back and forth about something that has nothing to do with the original question, or with how I responded to it.

As I wrote:

The correct behavior of such a function doesn’t have any dependence on the user following the documentation. But the design does have a requirement to follow the documentation: if I am required (as established by the design document) to write a program that handles independently scaling the coordinate axes, then I will do what is mathematically required to always obtain the correct result. But, if the design document does not require the program to independently scale the coordinate axes, and if the program is not written beyond the design specifications, then it will be impossible for the situation to occur in which the coordinate axes are scaled independently. I don’t consider handling cases that are impossible to be defensive programming. I consider them to be things that tend to delay the completion of the project, make it needlessly more complex, have slower performance, be more difficult to maintain, and cost more to complete.

I wasn’t really thinking of game engines as the typical graphics application. I was really thinking more of the thousands of small graphics programs that are written every year, and things like CAD programs. These are the types of programs I have more experience with, so I tend to know what I’m talking about there. I have never messed with a game engine, so I wouldn’t want to comment on what capabilities they have, since I don’t really know.

Now, I consider 3D CAD programs to fall in the realm of 3D modeling programs. I haven’t used 3D modeling programs that are intended to be used for game design, but I have used 3D CAD programs. I wouldn’t be surprised if there are a lot of similarities. In the case of 3D CAD programs, the user doesn’t create geometry by typing in the coordinates of vertices, and typing in the coordinates of the normals at those vertices, and typing in separate scale factors for each of the coordinate dimensions. Instead, the user uses the user interface tools to create and modify geometry. Among the user interface tools are often the ability to scale one coordinate axis differently than another. But the program handles the actual scaling internally and adjusts whatever needs to be adjusted appropriately, leaving the model so that each coordinate axis is scaled the same as the others.

I don’t know of any 3D data interchange file format that allows specifying separate scale factors for each coordinate dimension. Obviously, programs must function correctly internally, but users don’t mess directly with program internals. They can mess with data files, and they can mess with the user interface tools provided by the program.

In my experience, most graphics programs faced with geometry that has different scale factors in each coordinate axis, would simply modify the model then and there so that from that point forward, the scale factor will be the same in each coordinate axis. It just makes life so much simpler, and it’s just pointless to carry around separate scale factors for each coordinate axis. So, by the time it comes to sending the model to the GL, all the coordinate axes will be scaled uniformly and all the normals will be unit length and pointing in the correct directions.

But, if the design document does not require the program to independently scale the coordinate axes, and if the program is not written beyond the design specifications, then it will be impossible for the situation to occur in which the coordinate axes are scaled independently.

That’s a lot of ifs. Defensive programming ensures that your program will work despite the ifs.

You also forgot, “if the design doesn’t change.” Because the only constant in software development is that it will change. Best to be prepared for it ahead of time, rather than blindsided by a sudden need and have to implement a lot of inverse/transposes everywhere.

In the case of 3D CAD programs, the user doesn’t create geometry by typing in the coordinates of vertices, and typing in the coordinates of the normals at those vertices, and typing in separate scale factors for each of the coordinate dimensions.

I’ve seen graphical modeling tools from 3DS Max to Maya to XSI to Blender3D to Milkshape and several others. Some of these are used by the lowest of the low-end developer, and some of them are used by movie studios to create assets for films like Up and Toy Story 3. And I don’t recall a single one that makes it at all easy for you to directly type in coordinate values.

Unless you’re under the impression that game developers making models that use millions of triangles actually sit down and type each position in by hand.

Oh, and each of the high end tools? They allow arbitrary transforms, including non-uniform scaling and shearing. You certainly can bake the transform into the vertex data, and it is often advantageous to do this since some adjustment tools don’t work with some transforms. But they will preserve the transforms until you tell them to bake them.

But the program handles the actual scaling internally and adjusts whatever needs to be adjusted appropriately, leaving the model so that each coordinate axis is scaled the same as the others.

I’m curious as to how you know this. Here’s what I mean.

I know that 3DS Max stores scaling rather than just moving the vertex positions internally because I’ve written 3DS Max exporters. The data coming out of 3DS Max does not have any transformation applied to it; you must query the node and object’s transform and apply them to the data yourself to get the transformed positions.

If I had not written exporters, I would not have any evidence of this. Sure, the tool could show the scale transform there, but I wouldn’t be able to tell if that were just a UI convenience or actually showing how things work.

I don’t know of any 3D data interchange file format that allows specifying separate scale factors for each coordinate dimension.

I only know of one standardized 3D data interchange format (namely Collada) and it certainly does allow for non-uniform scales.

That’s informative about the art world. I’m used to the physical world. In the world of CAD, for example, models are expressed in physical units, like inches or millimeters. It makes no sense to have each coordinate axis scaled independently, because then the model cannot be in any physical units.

If you want to write your programs so that they handle every possible contingency not required by the design document, be my guest. I won’t be hiring you, though.

I’ll write my programs so that they do what they are required to do. If I believe the requirements are inadequate, then I will try to get them changed. I will not take it upon myself to surreptitiously ignore the design requirements and substitute my own. If I’m aware of likely future changes to the requirements, then I may take those into account in some appropriate fashion when I design and write my programs, provided that won’t add significant time or complexity to the current work. In each case, I use my own judgment and can’t state a hard-and-fast rule.

To each their own.

P.S. This whole discussion we’ve had about vertex normals is kind of funny for me because I actually do most of my current work with Bezier surfaces. One of the many desirable features of Bezier surfaces is that it is very easy and efficient to find true surface tangents (and therefore true surface normals) at any point on a surface. Consequently, there’s never any need to store normals with the model; they are calculated exactly and on the fly where needed (as in after the eye coordinates transformation and at the end of tessellation, so there’s no need to rotate them or multiply them by any inverse transposed matrix or anything else). Unlike triangulated surfaces, Bezier surfaces can exactly represent most geometry, so there’s no need to provide estimated vertex normals with the model or deal with the complications of smooth junctions versus sharp junctions that must be dealt with in triangulated approximations of surfaces. But taking advantage of all this would not be possible without tessellating the surface on the GPU, since ultimately, current graphics hardware can only rasterize triangles (and lines and points). If graphics hardware could directly rasterize Bezier surfaces, then there would be no need for any tessellation. Now there’s an idea for OpenGL 5.0!

I’ve gotten a bit busy this weekend and just got a chance to look at this… I didn’t realize this would be such a popular topic. :P I’m going to catch up and respond to as much of this as possible soon(ish) :D

As a very quick direct question (something I didn’t see in my EXTREMELY quick scan through the replies as I’m about to have to leave again, so my apologies if I missed it): was I wrong that -Z is forward (away from the eye) and +Z is toward the eye?

(I’ll check up on everything else everyone said asap! Thanks for all the input! I agree on the stuff about needing per-vertex velocities, but right now I’m just working on a dummy test / proof-of-concept / etc., working out how to do this on the small scale before the more complicated scale. :) )

No, in standard OpenGL eye space, you are correct. The eye is at the origin, and -Z is forward in front of the eye.

Okay so quick recap:

  1. The w value of the velocity vector is 0, though I guess you’re right, drawing it out and looking at it, I suppose I really only need to go through the normal matrix.

  2. You mentioned only being able to render a certain number of objects if I used per-object uniforms…? Is there a limit? I wasn’t aware there was… I’ve dug around in the wiki, and I see that there is a limited number of “active uniforms,” but I wasn’t sure if that means uniquely named uniforms or just bound uniforms… I’ve been assigning “velocity” to each object thinking it would only count as one, but now I’m realizing maybe not so much?

I suppose I do need to see about finding a way to get the velocity changed to a vertex attrib then…

  3. Still kind of curious about the Z direction thing… now that Photon has confirmed what I thought about the Z direction, I’m not sure why I’m seeing the opposite of what I was expecting…

Yes. Check out:

GL_MAX_VERTEX_UNIFORM_COMPONENTS
GL_MAX_FRAGMENT_UNIFORM_COMPONENTS

for instance. This is the number of individual floats (for instance) you can have as uniform data. This will likely be 4 * GL_MAX_PROGRAM_LOCAL_PARAMETERS (old ARB_vertex_program/ARB_fragment_program syms), which is the number of vec4s.

I am going to butt in a little bit on the transformation business, because I have seen so many errors about this under so many circumstances.

Here goes.

Let F:R^3 --> R^3 be any smooth mapping, i.e. F(x,y,z)=( X(x,y,z), Y(x,y,z), Z(x,y,z) ).

Let DF be the differential matrix of F, given by:


|  X_x X_y X_z |
|  Y_x Y_y Y_z |
|  Z_x Z_y Z_z |

Where A_b is the partial derivative of A in the direction b. Note that DF is a function: given a point (x,y,z), one gets a matrix.

Then if one has a vector v (not a co-vector) in the domain of F at a point (x,y,z), to realize v in the range of F it is:

DF(x,y,z) * v.

Simple.

Now for GL and the typical transformation matrices. In the vast, vast majority of situations it looks like this:


M= |       a |
   |  R    b |
   |       c |
   | 0 0 0 1 |

and M is applied to points (x,y,z,1); the thinking is that a point on the model is given by (x,y,z).

Thus the transformation to eye co-ordinates is then just:

f(x,y,z) = R*(x,y,z) + (a,b,c).

Thus Df(x,y,z) = R, i.e. the differential matrix is constant, which means that a vector in the domain (i.e. model co-ordinates) is transformed to a vector in the range (eye co-ordinates) as:

v_eye = R * v_model

which if we want the GLSL magic it is:

v_eye= (M*vec4(v_model, 0.0)).xyz.

What so, so many people get confused on is that the normal “vector” is not transformed this way. The reason is actually really simple: the normal is not used as a vector. Let’s see why:

Usually one does the following with the normal:

d= dot( n, L ) where n is the normal, L is the unit vector from the point on the surface to a light.

Now we want to do the computation after some affine transformation M(v)=Rv + b:

d= dot( ??, R(L) )

the object is to find ??, so we require for any unit vector L:

dot(??, R(L) ) = dot(n, L)

now the one touch of tiny magic: for any transformation M and any two vectors v, w one has that dot(Mv, w) = dot(v, transpose(M)*w).

So:

dot( transpose(R)*(??), L ) = dot(n, L)

thus we see:

transpose(R)*?? = n

which gives

?? = inverse(transpose(R)) * n

The lesson here: in lighting, the normal vector is used not as a vector but as a co-vector; as such, the transformation rules are not the same.

Lastly, if R is orthogonal, then inverse(R) = transpose(R) and thus inverse(transpose(R)) = R. In order for R to be orthogonal, the matrix R must not contain any shearing or scaling, which, by the way, is the case well over 90% of the time anyway.
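
Summed up in GLSL (a sketch using the compatibility-profile built-ins from earlier in the thread, with v_model and v_eye as above):

// Plain vectors (velocity, tangents): transform with the upper 3x3 (w = 0).
vec3 v_eye = ( gl_ModelViewMatrix * vec4( v_model, 0.0 ) ).xyz;

// Co-vectors (normals used in lighting): inverse-transpose of the upper 3x3,
// which is what gl_NormalMatrix provides. When R is orthogonal, the two agree.
vec3 n_eye = gl_NormalMatrix * gl_Normal;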