1.) Convert the screen coordinates into normalized device coordinates (e.g. from 0 - 1080 to the range -1.0 to 1.0)

2.) Plug those new coordinates into a vec4

3.) Multiply by the inverse orthographic projection matrix

4.) Multiply by the inverse view matrix

5.) Extract the x and y values of the vec4

I believe this is a good solution, but I'm not sure it's the best one. I'm essentially asking whether there is a better, quicker way to convert from screen coordinates to world coordinates. Any input is appreciated! :)
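For a 2D orthographic camera the five steps above collapse into a couple of multiplies, since the inverse of an ortho projection just rescales NDC back to view-space units and the inverse view just undoes the camera translation. A minimal sketch of that shortcut, assuming a translation-only view matrix (all names here are mine, not from the post):

```python
# Minimal sketch of steps 1-5 above, specialized to a 2D ortho camera.
# Assumes the view matrix is a pure translation by (cam_x, cam_y).

def screen_to_world(sx, sy, screen_w, screen_h, ortho_w, ortho_h, cam_x, cam_y):
    # 1) screen -> normalized device coordinates (-1..1, y flipped)
    ndc_x = 2.0 * sx / screen_w - 1.0
    ndc_y = 1.0 - 2.0 * sy / screen_h
    # 2-3) the inverse of an ortho projection just rescales NDC back
    #      to view-space units (half-extents of the ortho volume)
    view_x = ndc_x * ortho_w / 2.0
    view_y = ndc_y * ortho_h / 2.0
    # 4-5) inverse view: undo the camera translation
    return view_x + cam_x, view_y + cam_y

print(screen_to_world(960, 540, 1920, 1080, 1920, 1080, 0, 0))   # (0.0, 0.0)
```

Doing the full inverse-matrix multiplication (as in the numbered steps) generalizes to rotated or zoomed cameras; the shortcut is just cheaper when the view matrix is known to be a translation.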

I want to view this quaternion on an object in my OpenGL scene.

I currently convert the quaternion to a matrix, load it into the scene with glLoadMatrixf(mat), then draw the object.

Code :

static inline void QuatTo4x4Matrix(float* quat, float* mat) {
    /* quat = {X, Y, Z, W}, assumed to be a unit quaternion */
    float X = quat[0];
    float Y = quat[1];
    float Z = quat[2];
    float W = quat[3];
    float xx = X * X;
    float xy = X * Y;
    float xz = X * Z;
    float xw = X * W;
    float yy = Y * Y;
    float yz = Y * Z;
    float yw = Y * W;
    float zz = Z * Z;
    float zw = Z * W;
    /* Note: these assignments fill `mat` in row-major order, but
       glLoadMatrixf reads the array column-major, so OpenGL sees the
       transpose, i.e. the inverse rotation. If rotations come out
       reversed, transpose here (swap mat[1]/mat[4], mat[2]/mat[8],
       mat[6]/mat[9]). */
    mat[0] = 1 - 2 * (yy + zz);
    mat[1] = 2 * (xy - zw);
    mat[2] = 2 * (xz + yw);
    mat[4] = 2 * (xy + zw);
    mat[5] = 1 - 2 * (xx + zz);
    mat[6] = 2 * (yz - xw);
    mat[8] = 2 * (xz - yw);
    mat[9] = 2 * (yz + xw);
    mat[10] = 1 - 2 * (xx + yy);
    mat[3] = mat[7] = mat[11] = mat[12] = mat[13] = mat[14] = 0;
    mat[15] = 1;
}
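One thing worth ruling out is memory layout: glLoadMatrixf reads the array column-major, while the function above fills it row-major, so OpenGL ends up with the transpose, which for a rotation matrix is the inverse rotation. A quick pure-Python check of this (helper names are mine, not from the post):

```python
import math

def quat_to_mat3(x, y, z, w):
    """Row-major 3x3 rotation matrix from a unit quaternion
    (same formulas as QuatTo4x4Matrix above)."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ]

def apply(m, v):
    # row-major matrix times column vector
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# quaternion for 90 degrees about +Z
s, c = math.sin(math.pi / 4), math.cos(math.pi / 4)
m = quat_to_mat3(0.0, 0.0, s, c)
print(apply(m, [1.0, 0.0, 0.0]))   # ~[0, 1, 0]: +90 deg about Z, as expected

# the transpose = what glLoadMatrixf sees if the array was filled row-major
mt = [[m[c][r] for c in range(3)] for r in range(3)]
print(apply(mt, [1.0, 0.0, 0.0]))  # ~[0, -1, 0]: the rotation is reversed
```

So if the object rotates the "wrong way" rather than about a genuinely different axis, the row-major/column-major mismatch is a likely culprit.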

However, the quaternion seems to be rotating about a different axis than I anticipate. Is there any way I can correct this? I tried offsetting the quaternion with an offset quaternion to set it to identity, but it still rotates about its own axis.

Am I doing anything wrong? I've been working on this for a couple of days now and just can't wrap my head around it.

I'm trying to simulate a lens distortion effect for my SLAM project.

A scanned color 3D point cloud is already given and loaded in OpenGL.

What I'm trying to do is render a 2D scene at a given pose and do some visual odometry between the real image from a fisheye camera and the rendered image.

As the camera has severe lens distortion, it should be considered in the rendering stage too.

The problem is that I have no idea where to put the lens distortion. Shaders?

I've found some open code that puts the distortion in the geometry shader:

https://emmanueldurand.net/spherical_projection/

But I guess the distortion model used there is different from the lens distortion model used in the computer vision community.

In the CV community, lens distortion is usually modeled on the projected image plane.

This one is quite similar to my work, but they didn't use a distortion model:

https://github.com/mp3guy/ElasticFus...re/src/Shaders

Does anyone have a good idea?
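For reference, the CV-style (Brown-Conrady) model mentioned above acts after projection, on the normalized image plane. Applying it per vertex in a shader only bends geometry at vertices, so meshes must be finely tessellated; the common alternative is to render undistorted and apply the distortion as a screen-space post-process. A sketch of the projection-then-distortion pipeline (the calibration values k1, k2, fx, fy, cx, cy are hypothetical):

```python
# CV-style radial distortion applied AFTER the pinhole projection,
# i.e. on the normalized image plane -- the stage the post is asking about.
# k1, k2 are radial distortion coefficients; fx, fy, cx, cy are intrinsics.

def project_with_distortion(X, Y, Z, k1, k2, fx, fy, cx, cy):
    # pinhole projection to the normalized image plane
    x, y = X / Z, Y / Z
    # radial distortion acts on these normalized coordinates
    r2 = x*x + y*y
    d = 1 + k1*r2 + k2*r2*r2
    xd, yd = x*d, y*d
    # then the intrinsics map to pixel coordinates
    return fx*xd + cx, fy*yd + cy

print(project_with_distortion(0.0, 0.0, 2.0, -0.3, 0.1, 500, 500, 320, 240))
# a point on the optical axis is undistorted -> (320.0, 240.0)
```

With a negative k1 (typical barrel distortion), off-axis points land closer to the principal point than the undistorted pinhole projection would put them.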

I'm not interested in properly retargeting translations for now. Only rotations.

Let's say the source skeleton S has a rest/bind pose "Sb" and a neutral T-pose "St".

The same goes for the target skeleton, with its rest/bind pose "Tb" and neutral T-pose "Tt".

So:

Sb = source rest / bind pose

St = source neutral T-pose

Tb = target rest / bind pose

Tt = target neutral T-pose

In local space (before computing the global space transform of the animation) I want to convert / retarget the source animation, and the formula should be like (for every joint):

Tb * D * inverse(Sb) * ( Sb * A )

Where D is the matrix I should find, and ( Sb * A ) is the keyframe of the animation.

1) If Tb == Tt and Sb == St, clearly D should be the Identity.

2) If Tb == Tt but Sb =/= St I suspect the formula should be one of the following:

a) D = inverse(St) * Sb

b) D = inverse(Sb) * St

Which one is correct?

3) If Tb =/= Tt and Sb == St

D should be one of the following, but which one?

a) D = inverse(Tt) * Tb

b) D = inverse(Tb) * Tt

Thanks in advance for any response!
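Not an authoritative answer, but one way to discriminate the candidates numerically: require that the source T-pose retargets exactly onto the target T-pose. Under that assumption, solving Tb * D * inverse(Sb) * St == Tt for D gives D = inverse(Tb) * Tt * inverse(St) * Sb, which reduces to option 2a when Tb == Tt and to option 3b when Sb == St. A 2D rotation sanity check (all helpers are mine):

```python
import math

def rot(a):                     # 2x2 rotation matrix, angle in radians
    c, s = math.cos(a), math.sin(a)
    return [[c, -s], [s, c]]

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):                     # for rotations: inverse == transpose
    return [[A[j][i] for j in range(2)] for i in range(2)]

Sb, St = rot(0.3), rot(0.0)     # source bind differs from its T-pose (case 2)
Tb, Tt = rot(0.0), rot(0.0)     # target bind == target T-pose

# general solution under the "T-pose maps to T-pose" constraint
D = mul(mul(inv(Tb), Tt), mul(inv(St), Sb))

# put the source in its T-pose: Sb * A == St
A = mul(inv(Sb), St)
P = mul(mul(Tb, D), A)          # = Tb * D * inverse(Sb) * (Sb * A)
print(P[0][0], P[0][1])         # ~1.0 0.0 -> the target lands on its T-pose
```

Here D works out to inverse(St) * Sb, i.e. option 2a; repeating the experiment with Sb == St and Tb != Tt picks out option 3b the same way.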

I am in the process of implementing shadow maps but I have some questions for those more experienced in this.

When I create the light's view matrix I use the equivalent of gluLookAt (because this is what the general populace seems to do). gluLookAt takes an eye position, but what is the eye position of a directional light? For a spot or point light this would be obvious, but not for a directional light. The eye position of the light changes the values in the depth map I create.

The projection matrix for a directional light is orthographic. I'm experienced in creating an orthographic matrix for UI drawing, but not so for a light. Should the near and far planes be the same as in my normal perspective matrix? What are good parameters for left, right, top, and bottom? People put examples up, but they never really explain why they chose the values they did.
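One common recipe (not the only one): since a directional light has no position, pick an eye point by backing up from the center of the region you want shadowed along the reversed light direction, and size the ortho box just big enough to enclose that region; the near/far of your perspective camera are irrelevant here. A sketch using a bounding sphere of the shadowed region (all names are mine):

```python
import math

# Fit a directional light's view "eye" and ortho extents around a bounding
# sphere (center, radius) of whatever should receive/cast shadows.

def directional_light_ortho(center, radius, light_dir):
    # normalize the light direction
    n = math.sqrt(sum(d * d for d in light_dir))
    d = [c / n for c in light_dir]
    # "eye" = sphere center pushed back against the light direction; any
    # point on this line works -- it only shifts near/far along the ray
    eye = [center[i] - d[i] * radius for i in range(3)]
    # ortho extents just need to cover the bounding sphere
    left, right = -radius, radius
    bottom, top = -radius, radius
    near, far = 0.0, 2.0 * radius
    return eye, (left, right, bottom, top, near, far)

eye, box = directional_light_ortho([0, 0, 0], 10.0, [0, -1, 0])
print(eye)   # [0.0, 10.0, 0.0] -- 10 units "up-light" from the center
print(box)   # (-10.0, 10.0, -10.0, 10.0, 0.0, 20.0)
```

Tighter fits (e.g. around the camera frustum instead of the whole scene) give better depth-map resolution, which is why published examples pick seemingly arbitrary values: they are hand-tuned to their scene's extents.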

https://pastebin.com/mcz9b0Zy

First I convert a 3D world coordinate to a screen coordinate, then back. But the results are strange. Where is my mistake?
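Without seeing the pastebin, a frequent cause of strange world-to-screen-and-back results is the perspective divide: you must divide by w after projecting, and undo that divide (using the stored w or depth) when unprojecting. A minimal round-trip sketch, assuming a bare perspective projection with the focal terms folded into a single f (all names are mine):

```python
# World -> screen -> world round trip with an explicit perspective divide.
# Camera at the origin looking down -z; f plays the role of the projection's
# focal scale; w, h are the viewport size in pixels.

def world_to_screen(x, y, z, f, w, h):
    # clip space: simple perspective
    cx, cy, cw = f * x, f * y, -z
    # perspective divide -> NDC, then viewport transform (y down on screen)
    nx, ny = cx / cw, cy / cw
    return (nx * 0.5 + 0.5) * w, (0.5 - ny * 0.5) * h, cw

def screen_to_world(sx, sy, cw, f, w, h):
    # invert the viewport transform, then undo the divide by w
    nx, ny = sx / w * 2.0 - 1.0, 1.0 - sy / h * 2.0
    return nx * cw / f, ny * cw / f, -cw

p = world_to_screen(1.0, 2.0, -5.0, 2.0, 800, 600)
print(screen_to_world(p[0], p[1], p[2], 2.0, 800, 600))  # ~(1.0, 2.0, -5.0)
```

If the w (or depth) value isn't carried through to the unproject step, the round trip can only recover a ray, not the original point, which typically shows up as "strange" results.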

i'd like to know how to set up a skeletal animation, and how to structure the necessary data in practice. first, assimp gives me the data like this:

Code :

struct Bone {
    mat4 offset;
    vector<pair<int, float>> vertexweights;
};
struct Mesh {
    vector<vec3> vertices;
    vector<vec3> normals;
    ...
    vector<Bone> Bones;
};
struct Scene {
    vector<Mesh> meshes;
    vector<Animation> animations;
    ...
};

i know how to extract the geometry so that i can draw the model without bones, for each mesh i create a "drawarrays" call. but there are several things to keep in mind:

1. a "joint" (or "bone") has an offset matrix; it transforms from mesh space into the joint's space in the bind pose

2. the order in which the joint matrices are multiplied is crucial

how i plan to do it: (vertex shader)

Code glsl:

#version 450 core
/* uniform */
/****************************************************/
layout (std140, binding = 1) uniform StreamBuffer {
    mat4 View;
    mat4 Projection;
};
layout (std140, binding = 2) uniform BoneBuffer {
    mat4 Bones[256];
};
layout (location = 0) uniform mat4 Model = mat4(1);
/****************************************************/
/* input */
/****************************************************/
layout (location = 0) in vec3 in_position;
layout (location = 1) in vec2 in_texcoord;
layout (location = 2) in vec3 in_normal;
layout (location = 3) in vec3 in_tangent;
layout (location = 4) in vec3 in_bitangent;
layout (location = 5) in uvec4 in_boneindices;
layout (location = 6) in vec4 in_boneweights;
/****************************************************/
/* output */
/****************************************************/
out VS_FS {
    smooth vec3 position;
    smooth vec2 texcoord;
    smooth vec3 normal;
} vs_out;
/****************************************************/
mat4 Animation()
{
    mat4 animation = mat4(0);
    // note: normalize() would make the weights unit LENGTH; skinning
    // weights must sum to 1, so divide by their sum instead
    vec4 weights = in_boneweights / dot(in_boneweights, vec4(1.0));
    for (uint i = 0; i < 4; i++)
        animation += Bones[in_boneindices[i]] * weights[i];
    return animation;
}
void main()
{
    mat4 Transformation = Model;// * Animation();
    mat4 MVP = Projection * View * Transformation;
    gl_Position = MVP * vec4(in_position, 1);
    vs_out.position = (Transformation * vec4(in_position, 1)).xyz;
    vs_out.texcoord = in_texcoord;
    vs_out.normal = (Transformation * vec4(in_normal, 0)).xyz;
}

so i need to send up to 4 references to joint matrices to the vertex shader.

if a vertex needs fewer than 4 joint matrices, i'll fill the rest with a reference to the very first joint matrix, which is by default mat4(1), to eliminate undesired effects (as a fallback). if a vertex needs more than 4 joint matrix references, then ... what ?

how to build these attribute data ?

vector<vector<unsigned int>> indices_lists(vertexcount);

vector<vector<float>> weights_lists(vertexcount);

for (each bone of this mesh)

--> fill both arrays:

each array element belongs to a certain vertex and is an array of the needed joint references (index into the uniform joint matrix array + weight)

is this the "usual way" to do this, or am i making it too simple/complicated/weird ? :doh:

assuming that's right, the next thing is to KEEP THE JOINT MATRIX ORDER, which means i have to rebuild these 2 arrays (of arrays of references to joints): how ?

i guess i just have to sort the indices (into the uniform joint matrices) in ascending order, and BEFORE that recursively go through the scene node tree and add needed "bones" BEFORE adding their children (if any) ... correct, or wrong ?
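For what it's worth, the plan above matches what most loaders do. The per-bone-to-per-vertex inversion can be sketched like this, including the usual answer to the "more than 4 influences" case: keep the 4 largest weights and renormalize (Python sketch, names are mine):

```python
# Invert assimp-style per-bone weight lists into per-vertex attribute
# arrays capped at 4 influences, padded with zero-weight bone-0 references.

def build_skin_attributes(bones, vertexcount):
    # bones: list (per bone) of lists of (vertex_index, weight) pairs
    influences = [[] for _ in range(vertexcount)]
    for bone_index, weights in enumerate(bones):
        for vertex_index, weight in weights:
            influences[vertex_index].append((weight, bone_index))
    indices, weights_out = [], []
    for infl in influences:
        # keep the 4 strongest influences, drop the rest...
        infl.sort(reverse=True)
        infl = infl[:4]
        # ...then renormalize so the kept weights still sum to 1
        total = sum(w for w, _ in infl) or 1.0
        infl = [(w / total, b) for w, b in infl]
        # pad with zero-weight references to bone 0 (harmless fallback)
        infl += [(0.0, 0)] * (4 - len(infl))
        indices.append([b for _, b in infl])
        weights_out.append([w for w, _ in infl])
    return indices, weights_out

idx, wts = build_skin_attributes([[(0, 0.6), (1, 1.0)], [(0, 0.4)]], 2)
print(idx[0], wts[0])   # [0, 1, 0, 0] [0.6, 0.4, 0.0, 0.0]
```

Because the weights are renormalized and the padding carries zero weight, the padded entries never affect the skinned result regardless of which bone they reference.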

-----------------------------------------------------

when that's done, and assuming it's correct (?), what data do i need on the cpu side?

-- joint offset matrix array

-- array of the same size of type mat4, containing computed joint matrices (i have to send as uniform block)

-- animations

when computing animations, how do i compute the mat4s ?

for each animation

-- for each "channel" (affecting exact 1 joint)

---- uniformJoints[..certain.index..] = jointOffsetMatrix[..certain.index..] * interpolatedKeys

.. where "interpolatedKeys" is the matrix built from the interpolated vec3 location / quat rotation / vec3 scale triple

is this how it's done ?
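Almost: in most assimp-based examples the offset matrix is applied first (rightmost), i.e. joint[i] = globalTransform(node_i) * offset[i], with globalTransform accumulated parent-to-child from the interpolated local keys (and optionally premultiplied by the scene's global inverse, taken as identity here). A tiny runnable sketch with translation-only matrices (all names are mine):

```python
# joint[i] = globalTransform(node_i) * offset[i], accumulated down the tree.

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def compute_joints(node, parent_global, local, offset, out):
    # local[name] = interpolated TRS matrix for this frame
    glob = mul(parent_global, local[node["name"]])
    out[node["name"]] = mul(glob, offset[node["name"]])
    for child in node["children"]:
        compute_joints(child, glob, local, offset, out)

root = {"name": "hip", "children": [{"name": "knee", "children": []}]}
identity = translate(0.0, 0.0, 0.0)
# "animation" frame identical to the bind pose...
local = {"hip": translate(0.0, 1.0, 0.0), "knee": translate(0.0, 2.0, 0.0)}
# ...and offsets = inverses of the bind-pose globals
offset = {"hip": translate(0.0, -1.0, 0.0), "knee": translate(0.0, -3.0, 0.0)}

out = {}
compute_joints(root, identity, local, offset, out)
print(out["knee"][1][3])   # 0.0 -> joint matrix is identity at bind pose
```

The identity result at the bind pose is the standard sanity check: the offset matrix is exactly the inverse of the joint's bind-pose global transform, so "no animation" must move no vertices. Applying the offset on the left, as in the line above, breaks that.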

(i've searched for simple examples, but couldn't find anything [including a model file] that is understandable .. this one is simply not transparent enough for me :()

i appreciate any advice !!

Currently, I can load the basic mesh data and texture data, as well as vertex weights for each vertex. However, I am unsure how to get and use skinning data.

First of all, I am unsure of the exact terminology, specifically what the bind pose and inverse bind pose are, and what I need them for.

Second, I am unsure where to find these matrices. I think the inverse bind pose is found in a <source> tag in the file, labelled with an attribute ending in "bind_poses". I will show a sample tag below:

Code :

<source id="Armature_Cube-skin-bind_poses">
    <float_array id="Armature_Cube-skin-bind_poses-array" count="32">1 0 0 0.05214607 0 0 1 0.07466715 0 -1 0 -0.01507818 0 0 0 1 -0.9893552 0.1241412 0.0759291 -0.1199789 -0.0462982 -0.763187 0.644517 -0.6103146 0.1379593 0.6341408 0.7608106 -0.6872473 0 0 0 1</float_array>
    <technique_common>
        <accessor source="#Armature_Cube-skin-bind_poses-array" count="2" stride="16">
            <param name="TRANSFORM" type="float4x4"/>
        </accessor>
    </technique_common>
</source>
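On the terminology: the bind pose is the skeleton pose the mesh was skinned in, and each joint's inverse bind matrix takes a mesh-space vertex into that joint's local space; skinning then computes skinned_v = sum(w_i * jointWorld_i * invBind_i * v). As for the tag above: the accessor (count="2", stride="16") says the <float_array> holds two 4x4 matrices, written in row-major order per the COLLADA spec. A sketch of slicing them out (helper names are mine):

```python
# Split a COLLADA bind_poses <float_array> into 4x4 matrices.

def parse_bind_poses(float_array_text, stride=16):
    values = [float(v) for v in float_array_text.split()]
    mats = []
    for base in range(0, len(values), stride):
        chunk = values[base:base + stride]
        # COLLADA writes matrices row-major: four rows of four values
        mats.append([chunk[r*4:(r+1)*4] for r in range(4)])
    return mats

text = "1 0 0 0.05 0 0 1 0.07 0 -1 0 -0.01 0 0 0 1"  # first matrix, shortened
m = parse_bind_poses(text)[0]
print(m[3])   # [0.0, 0.0, 0.0, 1.0] -- bottom row of an affine matrix
```

The trailing 0 0 0 1 in each 16-value block (visible in the sample data) is what confirms the row-major layout; remember to transpose if your math library stores matrices column-major.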

Like I said, I am not sure how to use the matrices that I load from this tag.

Do note that I am trying not to animate the model (yet), just add bones that I can manually manipulate in my code.

Sorry if my question is hard to understand; I am completely lost on what terminology to use and what to do in general.

Thanks in advance. ]]>

Code :

// First I get the difference between the current mouse coords and the last. The HTML pointer lock API returns it and I store it in Game.Input.
this.yaw += Game.Input.x * this.sensitivity;
this.pitch += Game.Input.y * this.sensitivity;
// Wrap the yaw and clamp the pitch
if (this.yaw < 0.0) this.yaw += 360.0;
if (this.yaw >= 360.0) this.yaw -= 360.0;   // wrap upward too, so yaw stays in [0, 360)
if (this.pitch > 89.0) this.pitch = 89.0;
if (this.pitch < -89.0) this.pitch = -89.0;

Code :

// This is where I calculate the view matrix using the pitch and yaw (by the way, these are all matrices; JS doesn't really help specifying that)
var rotX = Mathf.rotateX(cam.pitch),
    rotY = Mathf.rotateY(-cam.yaw),
    rot = Mathf.mul(rotX, rotY),
    pos = Mathf.translate(cam.x, cam.y, cam.z),
    // bug: the original declared `cam = Mathf.Mat4()` here, clobbering
    // the camera object read just above; renamed to camMat
    camMat = Mathf.mul(rot, pos);
var view = Mathf.inverse(camMat);
return view;

Code :

// ... and in my game object draw function, I use the view matrix
model = Mathf.mul(model, scale);
normMat = Mathf.transpose(Mathf.inverse(model));
modelView = Mathf.mul(model, view);
mvp = Mathf.mul(modelView, projection);

Everything works fine: I see the world, and I could translate left, right, forward and back if I coded those instructions. But I can't seem to get the mouse look working; I clearly don't understand the FPS camera concept properly. I also understand that in order to walk in the direction I'm looking, I need a few more steps using the cross product, but at the moment I just want to look around and can't seem to get it.

Hope this all makes sense :S Thanks!

Oh by the way, I wrote my own math lib. I'm aware that I could have used any popular one out there, but I wanted to learn. If you want to take a look at it (just in case I did something wrong), it's here: https://gist.github.com/hashbrownjs/...e3cdb60f3d8371

I'll go ahead and copy the rotateX and rotateY functions for your convenience since those are crucial in making the cam rotate.

Code :

Mathf.rotateX = function (degree) {
    var r = Mathf.Mat4(),
        angle = Mathf.degToRad(degree),
        cos = Math.cos(angle),
        sin = Math.sin(angle);
    // note: for column-major arrays (the WebGL convention) the standard
    // X rotation is r[6] = sin, r[9] = -sin; the signs below are the
    // transpose, so pitch rotates opposite to rotateY's convention
    r[5] = cos; r[6] = -sin;
    r[9] = sin; r[10] = cos;
    return r;
};
Mathf.rotateY = function (degree) {
    var r = Mathf.identity(r),   // note: r is undefined here; rotateX uses Mathf.Mat4() instead
        angle = Mathf.degToRad(degree),
        cos = Math.cos(angle),
        sin = Math.sin(angle);
    r[0] = cos; r[2] = -sin;
    r[8] = sin; r[10] = cos;
    return r;
};
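One concrete thing to check numerically: if the Mat4 arrays are column-major (the WebGL convention), rotateX's signs are the transpose of the standard X rotation while rotateY matches the standard form, so pitch and yaw rotate with inconsistent handedness. A pure-Python check of that sign difference (helper names are mine):

```python
import math

def rotate_x(deg, flipped=False):
    """Column-major flat 4x4 X-rotation; flipped=True uses the post's signs."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    m = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
    if flipped:   # signs as in the post's rotateX
        m[5], m[6], m[9], m[10] = c, -s, s, c
    else:         # standard column-major X rotation
        m[5], m[6], m[9], m[10] = c, s, -s, c
    return m

def apply(m, v):
    # column-major 4x4 times a direction vector (x, y, z)
    return [m[0]*v[0] + m[4]*v[1] + m[8]*v[2],
            m[1]*v[0] + m[5]*v[1] + m[9]*v[2],
            m[2]*v[0] + m[6]*v[1] + m[10]*v[2]]

up = [0.0, 1.0, 0.0]
print(apply(rotate_x(90), up))                # ~[0, 0, 1]: +90 deg pitch
print(apply(rotate_x(90, flipped=True), up))  # ~[0, 0, -1]: rotates backwards
```

So even with everything else correct, mixed sign conventions between rotateX and rotateY would make mouse look feel wrong (pitch inverted relative to yaw).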

I have an idea for how I could solve the problem in the title, but I want to know your opinion before I waste time programming something that doesn't work.

Basically, the idea is to have a grid of boxes with height = width; all of these boxes are the same size and there are no gaps between them. I also have a function to calculate the index of the box that a point belongs to.

Now I calculate a bounding box around the grid and compute the hit points of a ray with it.

So I know the intersection points and of course the ray's direction. Now I take the direction, normalize it, and divide it by 2. (I'm not really sure if I need the /2, but I feel better with it.)

The resulting vector is now my "step size". (I call this vector v)

So if I now take my first intersection point, add v, calculate the box the resulting point belongs to, and repeat this until I reach my second intersection point, I shouldn't miss any box. (right ?)

But I'm not sure about that, so I'm asking for your opinion.

Thanks.
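A caution on the fixed step: even at half a cell, a ray that clips a corner can pass through a box for a shorter distance than the step and skip it. The standard exact alternative is grid traversal in the style of Amanatides and Woo: step one cell at a time along whichever axis boundary the ray crosses next. A 2D sketch (names are mine):

```python
# Exact grid traversal (DDA): visits every cell the ray passes through,
# in order, with no step-size tuning and no missed corners.

def traverse(x0, y0, dx, dy, cell, max_steps=64):
    cx, cy = int(x0 // cell), int(y0 // cell)          # starting cell
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    # ray length until the next vertical / horizontal grid line
    next_vx = ((cx + (step_x > 0)) * cell - x0) / dx if dx else float("inf")
    next_vy = ((cy + (step_y > 0)) * cell - y0) / dy if dy else float("inf")
    # ray length needed to cross one full cell on each axis
    dtx = cell / abs(dx) if dx else float("inf")
    dty = cell / abs(dy) if dy else float("inf")
    cells = [(cx, cy)]
    for _ in range(max_steps):
        if next_vx < next_vy:
            cx += step_x; next_vx += dtx
        else:
            cy += step_y; next_vy += dty
        cells.append((cx, cy))
    return cells

# a shallow diagonal ray through unit cells: every touched cell, in order
print(traverse(0.5, 0.5, 1.0, 0.7, 1.0, max_steps=4))
# [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
```

In practice you stop when the accumulated ray length exceeds the distance to the second intersection point, rather than after a fixed number of steps; the fixed-step sampling in the post works as an approximation but cannot guarantee it never misses a box.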