I'm working on a small 3D CAD editor and I have a problem: how should I organize transformations in the scene with future algorithms in mind — booleans, mouse picking, intersection, and so on? I have a gizmo that moves any object in the scene. As far as I know, the usual way to store local transformations is to keep each one in the object's model matrix and apply it directly in the shader. BUT, for example, my program implements a classic ray-picking algorithm: the ray lives in world space and is tested against the actual (world-space, i.e. locally transformed) vertex positions. Say I have a box and a sphere and I move them with the gizmo (editing their model matrices) by (1,0,0) and (0,1,0) respectively. Their model matrices now differ, and THIS is exactly the data I need for ray picking and the other algorithms — every object has its own individual placement.

My question is: how do I interact with objects whose real positions are unknown until they are transformed in the shader? What is the usual way to store data in a CAD or 3D editor program, where the real position of an object is the basis of every algorithm?

I've gathered some ideas:

1) Apply every local transformation immediately on the CPU and store the already-transformed data. I think this is the cleanest way, but it's expensive: every frame I have to convert the movement delta into a matrix and multiply all the vertex data by it on the CPU.

2) Apply transformations by editing the vertex data directly. No matrix multiplication, and much simpler — but then how would rotation work? Hmm... that's another story. ))

3) Store transformations in the model matrices until mouse picking starts, then quickly multiply the vertices by the matrix on the CPU to prepare the data for picking. This is a bit fancy, but there are ways to optimize it.

4) Store the inverse of each object's model matrix and transform the ray by that matrix when picking runs. This seems useful only for the picking algorithm, though.

5) Run picking and the other algorithms in shaders, or with CUDA/OpenCL.

Which of these is the usual way CAD programs work? Which method is more convenient for future work — a boolean algorithm, for example? Or am I on the wrong track entirely and there is some other method?

Thank you!

hey,

i get weird results. currently i have the following:

Code:

struct CTransformation {
    vec3 Position = vec3(0, 0, 0);
    quat Rotation = quat(1, 0, 0, 0);
    vec3 Size     = vec3(1, 1, 1);
    mat4 Matrix() const { return translate(Position) * toMat4(Rotation) * scale(Size); }
};

i've read that if i keep the scale vector in the transform matrix, i can scale whole models with just one matrix. the problem is that when i have skeletons made of several meshes, each with its own scale matrix, the result doesn't look the way it does in Blender.

so:

is it wrong to keep the "vec3 Size;" inside the transform matrix?

should i instead keep it in a separate variable and only scale each mesh of the current node with it?

i'm confused because assimp doesn't structure it like that...