Model and World Coordinates

I’m getting a bit confused here, I think.

If I have a shader that distorts the geometry of several discrete plane meshes, is it possible to treat them as if they share a common coordinate space?

For example, I have a number of separate plane meshes arranged as an array of thin vertical strips, spread along the x-axis, with gaps in between them. I’d like to be able to bend each strip around so that, collectively, they form a sphere made up of bands, with gaps between them, somewhat like the screenshot below (but without the distortion, which I’ll add later).

What I currently get, though, is a number of small identical spheres. I’m assuming this is because each plane is treated as if it has its own coordinate system. Is there a way of using a global space, so that their geometries are transformed collectively, rather than individually?

Sorry, this is a really basic question, but I’m new to vertex shaders.

Cheers,

alx
blog http://machinesdontcare.wordpress.com

I’ve used a similar technique to render Bezier surfaces: http://lumina.sourceforge.net/Tutorials/Bezier_Surface.html The vertex shader uses two steps: the first transforms the quad into a Bezier patch, and the second projects the vertex with the modelview-projection matrix.
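Very roughly, the two steps look like this (a sketch of the idea, not the tutorial’s actual code; cp would be the 16 control points of a bicubic patch supplied by the application, and the quad’s xy coordinates are assumed to run from 0 to 1):

```glsl
// Sketch only: bicubic Bezier patch evaluated in the vertex shader.
uniform vec3 cp[16];   // 16 control points, set by the application (assumption)

// Cubic Bernstein basis functions.
vec4 bernstein(float t)
{
    float s = 1.0 - t;
    return vec4(s * s * s, 3.0 * s * s * t, 3.0 * s * t * t, t * t * t);
}

void main()
{
    // Step 1: treat the flat quad's xy (assumed 0..1) as the patch
    // parameters (u, v) and evaluate the Bezier surface.
    vec4 bu = bernstein(gl_Vertex.x);
    vec4 bv = bernstein(gl_Vertex.y);

    vec3 p = vec3(0.0);
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p += bu[i] * bv[j] * cp[4 * i + j];

    // Step 2: project the evaluated point with the modelview-projection matrix.
    gl_Position = gl_ModelViewProjectionMatrix * vec4(p, 1.0);
}
```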

Hi oc2k1,

thanks for the link - it looks intriguing. I haven’t used Beziers at all so far, just a really simple sphere primitive with a striped texture applied. I do love smooth curves, though, so I’ll give the tutorial a go.

Thanks again,

alx

You need to think in terms of having two matrices (a shader sketch follows the list):

  • an object matrix, which transforms vertices from object space to world space
  • a camera matrix, which transforms vertices from world space to camera space.
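A rough, untested sketch of what that looks like in a vertex shader (uObjectMatrix and uCameraMatrix are made-up uniform names you would supply from your application, not built-in GL state):

```glsl
// Sketch only: one uObjectMatrix per strip, one shared uCameraMatrix.
uniform mat4 uObjectMatrix;  // object space -> world space
uniform mat4 uCameraMatrix;  // world space  -> camera (eye) space

void main()
{
    // Place this strip into the shared world first...
    vec4 worldPos = uObjectMatrix * gl_Vertex;

    // ...then do any bending here, on world coordinates, so all the strips
    // deform together as one object, e.g. worldPos.xyz = bend(worldPos.xyz);

    gl_Position = gl_ProjectionMatrix * uCameraMatrix * worldPos;
}
```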

Do either of you (or anyone else) know of any good introductory material to this subject? I think I have some reading to do!

alx

The OpenGL Red Book’s Viewing chapter has a pretty nice description of this, conveniently framed in terms of the OpenGL API.

This can be a little confusing sometimes simply because some folks (including me occasionally :wink: ) don’t always use the standard OpenGL terminology like we should:

  • MODELING Transform - takes you from object space to world space
  • VIEWING Transform - takes you from world space to eye space
  • PROJECTION Transform - takes you from eye space to clip space

leaving you to intuit what they meant. See the “Stages of Vertex Transformation” diagram in the Viewing chapter. (In V-man’s post: Object matrix -> MODELING transform. Camera matrix -> VIEWING transform. Camera space -> Eye space.)

The OpenGL fixed-function pipeline combines the MODELING and VIEWING transforms into a single GL transform you provide it, called (intuitively) MODELVIEW, but what V-man is saying is that you probably want to keep them distinct in your app (at least initially, to get your math straight) so you can have a common “world” space in which to deal with all your meshes. Then, you may find that you’ll be able to combine matrices and collapse this “world” space right out of your math, going straight from object space to eye space for each patch.
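In shader terms the naming maps out roughly like this (uModeling/uViewing are made-up uniform names, since the fixed-function pipeline only hands you the combined MODELVIEW):

```glsl
// Sketch: the full chain, with the two halves of MODELVIEW kept separate.
uniform mat4 uModeling;   // MODELING:  object space -> world space
uniform mat4 uViewing;    // VIEWING:   world space  -> eye space

void main()
{
    vec4 worldPos = uModeling * gl_Vertex;         // object -> world
    vec4 eyePos   = uViewing  * worldPos;          // world  -> eye
    gl_Position   = gl_ProjectionMatrix * eyePos;  // eye    -> clip (PROJECTION)

    // Later, pre-multiply on the CPU (modelView = uViewing * uModeling) and
    // world space collapses right out - that product is exactly what
    // fixed-function GL loads as the MODELVIEW matrix.
}
```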

Hope this helps.

Hi Dark_Photon,

thanks for the advice and very clear explanation of the different transformation types.
I’m on a steep learning curve here, but this looks like good info.

I’d actually forgotten I had the OpenGL Red Book - I picked it up a while back and started reading it, but didn’t get as far as the chapter on coordinates and transforms. I don’t have the book to hand at the moment, as I’m at work, but I think I’ve found the sections you’re referring to online, here
http://www.glprogramming.com/red/chapter03.html
and here
http://www.glprogramming.com/red/appendixf.html
So I’ll have a look, and see if it starts to make a bit more sense.

In the particular case I mentioned at the start of this thread, would I be correct in thinking that I’ll need to transform the coordinates of each vertex into World Space before applying my transformations, in order to have my separate meshes behave as a single object?

Thanks again,

alx
http://machinesdontcare.wordpress.com

Sorry for being so silly, but I wonder why we can’t do a lot of computations in object space rather than in eye space. For instance, in deferred rendering, we would store eye-space positions and eye-space normals (every paper I read says world space, but in every implementation I saw they write, for instance, gl_ModelViewMatrix * gl_Vertex), and then do calculations with those values.

But why should I do those computations (I know a matrix-vector multiplication won’t kill the performance)? What are the cons of storing the object-space position (gl_Vertex) and the normalized object-space normal (normalize(gl_Normal)), and then doing all the computation in object space? (I don’t use the OpenGL lighting system, so I could give OpenGL the light position, normal… in object space too.)
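To make it concrete, the geometry pass I have in mind would be something like this (just a sketch of the idea, not code from a real project):

```glsl
// Object-space variant of a geometry-pass vertex shader (sketch only).
varying vec3 vPosition;  // written to the G-buffer
varying vec3 vNormal;

void main()
{
    // Store object-space values directly - no modelview multiply:
    vPosition = gl_Vertex.xyz;
    vNormal   = normalize(gl_Normal);

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```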

I’m sure there are some reasons I don’t know for not using object space, but I would like to know them (furthermore, I see a lot of papers that say to store world-space positions and normals, but when I take a look at their implementations, it is in fact eye space. What’s the difference?).

Object space is specific to an object. Only by defining a transformation to world space are you actually placing objects relative to each other in a world.

You can indeed perform calculations in object space (usually with the condition that the object-to-world transformation is orthogonal, i.e. consists only of rotations, translations and flips). But then you have to transform all other positions and directions into object space first - for each object. And you have to know which object you are working on. For deferred rendering that would mean storing an object index along with the position, then accessing arrays of precalculated eye and light positions in object space in the shader. This is less practical than transforming the object space positions to view space first.

View space is a world space, with the added property that the viewer is at the origin looking down a specified axis, which can simplify some calculations that take into account the viewer position.
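For comparison, the conventional geometry pass stores eye-space values, roughly like this (sketch):

```glsl
// Eye-space variant of the same geometry pass (sketch only).
varying vec3 vPosition;  // eye-space position for the G-buffer
varying vec3 vNormal;    // eye-space normal

void main()
{
    vPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
    vNormal   = normalize(gl_NormalMatrix * gl_Normal);

    // Everything ends up in one common space, so the lighting pass never
    // needs to know which object a given pixel came from.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```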
