1.) Turn the screen coordinates into OpenGL normalized device coordinates (from 0 - 1080 to -1.0 - 1.0)

2.) Plug those new coordinates into a vec4

3.) Multiply by the inverse orthographic projection matrix

4.) Multiply by the inverse view matrix

5.) Extract the x and y values of the vec4
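The steps above can be sketched as a single function, assuming an orthographic camera and a translation-only view so the inverse matrices can be applied in closed form (all the parameters here are hypothetical, not from the original post):

```c
#include <math.h>

typedef struct { float x, y; } Vec2;

/* screen -> world for an ortho camera; the view is assumed to be a
 * pure translation (cam_x, cam_y), so its inverse is just adding it back */
Vec2 screen_to_world(float sx, float sy,
                     float screen_w, float screen_h,
                     float left, float right, float bottom, float top,
                     float cam_x, float cam_y)
{
    /* 1-2) screen -> normalized device coordinates (-1..1), y flipped */
    float ndc_x = 2.0f * sx / screen_w - 1.0f;
    float ndc_y = 1.0f - 2.0f * sy / screen_h;

    /* 3) inverse orthographic projection: NDC -> eye space */
    float eye_x = ndc_x * (right - left) * 0.5f + (right + left) * 0.5f;
    float eye_y = ndc_y * (top - bottom) * 0.5f + (top + bottom) * 0.5f;

    /* 4) inverse view: undo the camera translation */
    Vec2 w = { eye_x + cam_x, eye_y + cam_y };
    return w; /* 5) x and y extracted by the caller */
}
```

For a full camera with rotation you would multiply by the actual inverse matrices instead, but the pipeline order (screen → NDC → inverse projection → inverse view) stays the same.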

I believe that this is a good solution, but I'm not sure if it is the best one. I'm essentially asking if there is a better, quicker way to convert from screen coordinates to world coordinates. Any input is appreciated! :)

I want to view this quaternion on an object in my OpenGL scene.

I currently convert the quaternion to a matrix, load it with glLoadMatrixf(mat);, and then draw the object.

Code :

static inline void QuatTo4x4Matrix(float* quat, float* mat) {
    float X = quat[0];
    float Y = quat[1];
    float Z = quat[2];
    float W = quat[3];
    float xx = X * X;
    float xy = X * Y;
    float xz = X * Z;
    float xw = X * W;
    float yy = Y * Y;
    float yz = Y * Z;
    float yw = Y * W;
    float zz = Z * Z;
    float zw = Z * W;
    /* NOTE: the assignments below fill the matrix in row-major order,
       but glLoadMatrixf expects column-major data, so this likely loads
       the transpose, i.e. the inverse rotation. That alone can make the
       object appear to rotate about an unexpected axis. */
    mat[0] = 1 - 2 * (yy + zz);
    mat[1] = 2 * (xy - zw);
    mat[2] = 2 * (xz + yw);
    mat[4] = 2 * (xy + zw);
    mat[5] = 1 - 2 * (xx + zz);
    mat[6] = 2 * (yz - xw);
    mat[8] = 2 * (xz - yw);
    mat[9] = 2 * (yz + xw);
    mat[10] = 1 - 2 * (xx + yy);
    mat[3] = mat[7] = mat[11] = mat[12] = mat[13] = mat[14] = 0;
    mat[15] = 1;
}

However, the quaternion seems to be rotating about a different axis than I anticipated. Is there any way I can correct this? I tried offsetting the quaternion with an offset quaternion to set it to identity, but it still rotates about its own axis.
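One way to sanity-check the memory-layout convention numerically (this is a sketch under the assumption that the quaternion follows the usual x,y,z,w ordering): fill the matrix in column-major order, as glLoadMatrixf expects, and verify that a 90° rotation about Z sends the X axis to the Y axis.

```c
#include <math.h>

/* Quaternion (x,y,z,w) to 4x4 matrix, written COLUMN-major as
 * glLoadMatrixf expects. Note the off-diagonal signs are swapped
 * relative to a row-major fill: storing the row-major layout and
 * handing it to glLoadMatrixf loads the transpose (inverse rotation). */
static void quat_to_mat4_colmajor(const float* q, float* m)
{
    float X = q[0], Y = q[1], Z = q[2], W = q[3];
    float xx = X*X, xy = X*Y, xz = X*Z, xw = X*W;
    float yy = Y*Y, yz = Y*Z, yw = Y*W, zz = Z*Z, zw = Z*W;
    /* column 0 */
    m[0] = 1 - 2*(yy + zz); m[1] = 2*(xy + zw);     m[2] = 2*(xz - yw);     m[3] = 0;
    /* column 1 */
    m[4] = 2*(xy - zw);     m[5] = 1 - 2*(xx + zz); m[6] = 2*(yz + xw);     m[7] = 0;
    /* column 2 */
    m[8] = 2*(xz + yw);     m[9] = 2*(yz - xw);     m[10] = 1 - 2*(xx + yy); m[11] = 0;
    /* column 3 */
    m[12] = m[13] = m[14] = 0; m[15] = 1;
}

/* out = M * v for column-major M, 3D point v with w = 1 */
static void mat4_mul_point(const float* m, const float* v, float* out)
{
    for (int r = 0; r < 3; r++)
        out[r] = m[r] * v[0] + m[4 + r] * v[1] + m[8 + r] * v[2] + m[12 + r];
}
```

If the rotated X axis comes out as (0, -1, 0) instead of (0, 1, 0) with your own function, the matrix is transposed relative to what glLoadMatrixf expects.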

Am I doing anything wrong? I've been working on this for a couple of days now and just can't wrap my head around it.

I'm trying to simulate a lens distortion effect for my SLAM project.

A scanned color 3D point cloud is already given and loaded in OpenGL.

What I'm trying to do is render the 2D scene at a given pose and do some visual odometry between the real image from a fisheye camera and the rendered image.

As the camera has severe lens distortion, it should be considered in the rendering stage too.

The problem is that I have no idea where to put the lens distortion. Shaders?

I've found some open-source code that puts the distortion in the geometry shader:

https://emmanueldurand.net/spherical_projection/

But I guess the distortion model there is different from the lens distortion model used in the computer vision community.

In the CV community, lens distortion is usually applied on the projected image plane.

This one is quite similar to my work, but they didn't use a distortion model.
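The CV-style model mentioned above is small enough to sketch directly; here it is applied on the normalized image plane (the radial coefficients k1/k2 are hypothetical calibration values, not from the post). In a renderer this would typically run per-fragment in a render-to-texture warp pass, or per-vertex if the point cloud is dense enough that no geometry crosses distortion boundaries:

```c
#include <math.h>

typedef struct { float x, y; } Point2;

/* Brown-style radial distortion on normalized image coordinates:
 * x_d = x * (1 + k1*r^2 + k2*r^4), same for y */
Point2 distort_radial(Point2 p, float k1, float k2)
{
    float r2 = p.x * p.x + p.y * p.y;
    float f  = 1.0f + k1 * r2 + k2 * r2 * r2;
    Point2 d = { p.x * f, p.y * f };
    return d;
}
```

The key point is that this runs after projection but before the viewport transform, which is why it doesn't fit naturally into a single linear projection matrix.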

https://github.com/mp3guy/ElasticFus...re/src/Shaders

Does anyone have a good idea?

I'm not interested in properly retargeting translations for now. Only rotations.

Let's say the source skeleton S has its rest/bind pose "Sb" and a neutral T-pose "St".

The same goes for the target skeleton, with its rest/bind pose "Tb" and neutral T-pose "Tt".

So:

Sb = source rest / bind pose

St = source neutral T-pose

Tb = target rest / bind pose

Tt = target neutral T-pose

In local space (before computing the global space transform of the animation) I want to convert / retarget the source animation, and the formula should look something like this (for every joint):

Tb * D * inverse(Sb) * ( Sb * A )

Where D is the matrix I should find, and ( Sb * A ) is the keyframe of the animation.

1) If Tb == Tt and Sb == St, clearly D should be the Identity.

2) If Tb == Tt but Sb =/= St I suspect the formula should be one of the following:

a) D = inverse(St) * Sb

b) D = inverse(Sb) * St

Which one is correct?

3) If Tb =/= Tt and Sb == St, D should be one of the following, but which one?

a) D = inverse(Tt) * Tb

b) D = inverse(Tb) * Tt

Thanks in advance for any response!

I am in the process of implementing shadow maps but I have some questions for those more experienced in this.

When I create the light view matrix I am using the equivalent of gluLookAt (because this is what most people seem to do). gluLookAt takes an eye position, but what is the eye position of a directional light? For a spot or point light this would be obvious, but not so for a directional light. The eye position of the light changes the values in the depth map I create.

The projection matrix for a directional light is orthographic. I am experienced in creating orthographic matrices for UI drawing, but not so for lights. Should the near/far planes be the same as in my normal perspective matrix? What are good parameters for left/right/top/bottom? People post examples, but they never really explain why they chose the values they did.

https://pastebin.com/mcz9b0Zy

First I convert a 3D world coordinate to a screen coordinate, then back. But the results are strange. Where is my mistake?
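Since the pastebin isn't reproduced here, here is a generic roundtrip sketch with a simple pinhole model (fx/fy/cx/cy are hypothetical intrinsics; view = identity). A world → screen → world roundtrip should return the input exactly; when it doesn't, the forward and inverse paths usually disagree on a convention (y flip, missing perspective divide, or losing the depth):

```c
#include <math.h>

typedef struct { float x, y, z; } P3;
typedef struct { float x, y; } P2;

/* pinhole projection: note the divide by z (the "perspective divide") */
P2 project(P3 w, float fx, float fy, float cx, float cy)
{
    P2 s = { cx + fx * w.x / w.z, cy + fy * w.y / w.z };
    return s;
}

/* the inverse needs the depth z that projection threw away */
P3 unproject(P2 s, float z, float fx, float fy, float cx, float cy)
{
    P3 w = { (s.x - cx) * z / fx, (s.y - cy) * z / fy, z };
    return w;
}
```

Asserting the roundtrip on a handful of known points is the quickest way to localize which direction is wrong.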

I'd like to know how to set up a skeletal animation, i.e. how to structure the necessary data practically. First, Assimp gives me the data like this:

Code :

struct Bone {
    mat4 offset;
    vector<pair<int, float>> vertexweights;
};
struct Mesh {
    vector<vec3> vertices;
    vector<vec3> normals;
    ...
    vector<Bone> Bones;
};
struct Scene {
    vector<Mesh> meshes;
    vector<Animation> animations;
    ...
};

I know how to extract the geometry so that I can draw the model without bones; for each mesh I create a "drawarrays" call. But there are several things to keep in mind:

1. a "joint" (or "bone") has an offset matrix; it transforms from mesh space to "bind pose"

2. the order in which the joint matrices are multiplied is crucial

How I plan to do it (vertex shader):

Code glsl:

#version 450 core
/* uniform */
/****************************************************/
layout (std140, binding = 1) uniform StreamBuffer {
    mat4 View;
    mat4 Projection;
};
layout (std140, binding = 2) uniform BoneBuffer {
    mat4 Bones[256];
};
layout (location = 0) uniform mat4 Model = mat4(1);
/****************************************************/
/* input */
/****************************************************/
layout (location = 0) in vec3 in_position;
layout (location = 1) in vec2 in_texcoord;
layout (location = 2) in vec3 in_normal;
layout (location = 3) in vec3 in_tangent;
layout (location = 4) in vec3 in_bitangent;
layout (location = 5) in uvec4 in_boneindices;
layout (location = 6) in vec4 in_boneweights;
/****************************************************/
/* output */
/****************************************************/
out VS_FS {
    smooth vec3 position;
    smooth vec2 texcoord;
    smooth vec3 normal;
} vs_out;
/****************************************************/
mat4 Animation()
{
    mat4 animation = mat4(0);
    /* make the weights sum to 1; normalize() would give them unit
       Euclidean length instead, which is not what skinning needs */
    vec4 weights = in_boneweights / dot(in_boneweights, vec4(1.0));
    for (uint i = 0; i < 4; i++)
        animation += Bones[in_boneindices[i]] * weights[i];
    return animation;
}
void main()
{
    mat4 Transformation = Model;// * Animation();
    mat4 MVP = Projection * View * Transformation;
    gl_Position = MVP * vec4(in_position, 1);
    vs_out.position = (Transformation * vec4(in_position, 1)).xyz;
    vs_out.texcoord = in_texcoord;
    vs_out.normal = (Transformation * vec4(in_normal, 0)).xyz;
}

So I need to send up to 4 references to joint matrices up to the vertex shader.

If a vertex needs fewer than 4 joint matrices, I'll fill the rest with a reference to the very first joint matrix, which is by default mat4(1), to eliminate undesired effects (as a fallback). If a vertex needs more than 4 joint matrix references, then ... what?

How do I build this attribute data?

vector<vector<unsigned int>> indices_lists(vertexcount);

vector<vector<float>> weights_lists(vertexcount);

for (each bone of this mesh)

--> fill both arrays:

each array element belongs to a certain vertex and is an array of the joint references it needs (index into the uniform joint matrix array + weight)

Is this the "usual way" to do this, or am I thinking a little too simple/complicated/weird? :doh:

Assuming that's right, the next thing is to KEEP THE JOINT MATRIX ORDER, which means I have to rebuild these 2 arrays (of arrays of references to joints): how?

I guess that I just have to sort the indices (into the uniform joint matrices) ascending, and BEFORE that recursively go through the scene node tree and add needed "bones" BEFORE I add their needed children (if any) ... correct, or wrong?
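For the "more than 4 influences" question, the usual trick is to keep the 4 largest weights and renormalize them so they still sum to 1. A sketch of collapsing the per-bone (vertex, weight) pairs into fixed-size vertex attributes (the struct layout here is illustrative, not Assimp's API):

```c
#include <string.h>
#include <math.h>

enum { INFLUENCES = 4 };

typedef struct {
    unsigned idx[INFLUENCES]; /* -> in_boneindices */
    float    w[INFLUENCES];   /* -> in_boneweights */
} VertexBones;

void vb_init(VertexBones* v) { memset(v, 0, sizeof *v); }

/* insert one (bone, weight) influence, evicting the smallest if full */
void add_influence(VertexBones* v, unsigned bone, float weight)
{
    int smallest = 0;
    for (int i = 1; i < INFLUENCES; i++)
        if (v->w[i] < v->w[smallest]) smallest = i;
    if (weight > v->w[smallest]) {
        v->idx[smallest] = bone;
        v->w[smallest] = weight;
    }
}

/* make the kept weights sum to 1 (NOT a Euclidean normalize) */
void renormalize(VertexBones* v)
{
    float sum = 0.0f;
    for (int i = 0; i < INFLUENCES; i++) sum += v->w[i];
    if (sum > 0.0f)
        for (int i = 0; i < INFLUENCES; i++) v->w[i] /= sum;
}
```

Iterating over each mesh's bones and calling add_influence per (vertex, weight) pair, then renormalizing every vertex once at the end, produces exactly the uvec4/vec4 attribute pair the shader above consumes; unused slots keep index 0 and weight 0, so no identity-matrix fallback is even needed.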

-----------------------------------------------------

When that's done, and assuming it's correct (?), what data do I need on the CPU side?

-- joint offset matrix array

-- an array of the same size of type mat4, containing the computed joint matrices (which I have to send as a uniform block)

-- animations

When computing animations, how do I compute the mat4s?

for each animation

-- for each "channel" (affecting exact 1 joint)

---- uniformJoints[..certain.index..] = jointOffsetMatrix[..certain.index..] * interpolatedKeys

.. where "interpolatedKeys" are interpolated vec3 location / quat rotation / vec3 scale triples

Is this how it's done?
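Almost; one detail worth flagging is the multiplication order. In the common Assimp-style setup the offset matrix is applied last (rightmost), not first: each joint's global transform is accumulated parent-before-child from the interpolated local keys, and then palette[j] = global[j] * offset[j]. A sketch (the Mat4 layout and fixed joint cap are illustrative):

```c
#include <string.h>

typedef struct { float m[16]; } Mat4; /* row-major for this sketch */

static Mat4 mat4_identity(void) {
    Mat4 r;
    memset(&r, 0, sizeof r);
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    return r;
}

static Mat4 mat4_mul(Mat4 a, Mat4 b) {
    Mat4 r;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a.m[i * 4 + k] * b.m[k * 4 + j];
            r.m[i * 4 + j] = s;
        }
    return r;
}

/* joints must be ordered parent-before-child; parent[0] == -1 for the root.
 * local_anim[j] is the matrix built from the interpolated loc/rot/scale key. */
void compute_palette(int count, const int* parent,
                     const Mat4* local_anim, const Mat4* offset,
                     Mat4* palette)
{
    Mat4 global[64]; /* illustrative fixed cap */
    for (int j = 0; j < count; j++) {
        global[j] = (parent[j] < 0)
                        ? local_anim[j]
                        : mat4_mul(global[parent[j]], local_anim[j]);
        palette[j] = mat4_mul(global[j], offset[j]); /* offset applied last */
    }
}
```

The parent-before-child ordering requirement is exactly why sorting the joints by a pre-order walk of the node tree (as suggested above) is the right instinct: it lets the palette be computed in a single flat loop.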

(I've searched for simple examples, but couldn't find anything [including a model file] that is understandable .. this one is simply not transparent enough for me :()

I appreciate every piece of advice!!