Matrices Help

Hi guys,

getting really confused with matrices… again.
As usual, it’s an HLSL > GLSL conversion project. I’ve managed to get the fragment shader code (an isosurface ray-tracer based on Keenan Crane’s original Quaternion Julia set GPU renderer) to work, but I’m a bit confused about how to implement the vertex shader. This is a conversion of a VVVV patch (by the excellent tonfilm), in which the texture coordinates and the position of a virtual camera used inside the fragment shader are transformed by a series of matrices.
In VVVV, it’s possible to do this outside the shader itself, but the application I’m using can only handle vertex data and matrix transforms inside a shader, so I’ll have to try to implement this in the VS.

Here’s a diagram of what should happen (which is actually just a screenshot from the application itself).

I’m aware there’s no direct equivalent to the HLSL view projection, but I think I should be able to use gl_ModelViewProjectionMatrix here.

The LFO module gives a rotation around the Y axis, but I’d like to be able to rotate around all 3 axes if possible. I could possibly pre-calculate the values for the rotation matrices outside the shader, if this would help speed things up.
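
In case it’s useful, here’s roughly what I mean by rotating on all 3 axes, built directly in the shader from a vec3 of Euler angles. This is just a sketch (the ‘Rotate’ uniform name is made up), and I don’t know whether it would actually be faster than pre-calculating the matrix outside:

uniform vec3 Rotate;	// Euler angles in radians (hypothetical uniform)

// Build an XYZ rotation matrix (rotate around X, then Y, then Z)
mat4 rotationXYZ(vec3 r)
{
	vec3 c = cos(r);
	vec3 s = sin(r);
	// GLSL mat4 constructors are column-major: each line below is one column
	mat4 rx = mat4(1.0,  0.0,  0.0, 0.0,
	               0.0,  c.x,  s.x, 0.0,
	               0.0, -s.x,  c.x, 0.0,
	               0.0,  0.0,  0.0, 1.0);
	mat4 ry = mat4(c.y,  0.0, -s.y, 0.0,
	               0.0,  1.0,  0.0, 0.0,
	               s.y,  0.0,  c.y, 0.0,
	               0.0,  0.0,  0.0, 1.0);
	mat4 rz = mat4(c.z,  s.z,  0.0, 0.0,
	              -s.z,  c.z,  0.0, 0.0,
	               0.0,  0.0,  1.0, 0.0,
	               0.0,  0.0,  0.0, 1.0);
	return rz * ry * rx;
}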

The * node just scales the initial vertex positions.

‘eye position’ would be a varying sent to the Fragment Shader, determining the position of the virtual camera.

Camera position would be a uniform in the Vertex Shader which could be controlled from outside the shader.

Hope this makes sense. I’ve still not quite got my head around matrices, sadly :(

I know this is a big ask, but wondering if anyone has any advice on how I might implement this, I’d be hugely grateful…

Incidentally, the geometry is very simple: a 2x2 vertex plane.

Thanks in advance guys,

alx
http://machinesdontcare.wordpress.com

OK… I’ve come up with some Vertex Shader code here to try and emulate the above. I wonder if anyone can see why it doesn’t work. It’s very possible I’m doing a number of things wrong. For example, I don’t know if the matrix invert function is correct. I also don’t know if I can use one of the builtin gl transforms in place of the view projection matrix in the diagram.

Here is the code, anyway. I’d be really grateful to anyone who could take a look and give any advice at all on where I might be going wrong:

//	Inverts a rigid (rotation + translation) transform
mat4 matrixInvert(mat4 m) {
	// Transpose of the upper-left 3x3 rotation block
	mat3 rtr;
	rtr[0] = vec3(m[0][0],m[1][0],m[2][0]);
	rtr[1] = vec3(m[0][1],m[1][1],m[2][1]);
	rtr[2] = vec3(m[0][2],m[1][2],m[2][2]);
	
	// Translation column
	vec3 t = vec3(m[3]);
	
	// The inverse translation is -R^T * t
	vec3 rtrt = -(rtr * t);
	
	mat4 minverse;
	minverse[0]=vec4(rtr[0],0.0);
	minverse[1]=vec4(rtr[1],0.0);
	minverse[2]=vec4(rtr[2],0.0);
	minverse[3]=vec4(rtrt,1.0);
	
	return minverse;
}
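
//	NB: this shortcut only inverts rigid transforms (rotation + translation,
//	with no scale, shear or projection). For M = [R t; 0 1], the inverse is
//	[R^T  -R^T*t; 0 1]. As a sanity check, matrixInvert(m) * m should come
//	out as (approximately) the identity for any pure rotate/translate m.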

// Camera Rotation
uniform vec3 CameraRotate;
// Camera position
uniform vec3 CameraPos;
// Interpolated Camera position
varying vec3 eyePos;
uniform float Zoom;

uniform vec4 M0;
uniform vec4 M1;
uniform vec4 M2;
uniform vec4 M3;

void main()
{
	// Assemble texture transform matrix
	mat4 tt;
	tt[0] = M0;
	tt[1] = M1;
	tt[2] = M2;
	tt[3] = M3;

	vec4 tex = gl_TextureMatrix[0] * gl_MultiTexCoord0 * 2.0;
	
	mat4 viewProjection = gl_ModelViewMatrix;
	mat4 viewProjectionRotated = tt * viewProjection;
	mat4 viewProjectionRotatedInverse = matrixInvert(viewProjectionRotated);
	vec4 texRotated = viewProjectionRotatedInverse * tex;
	
	mat4 ttInverse = matrixInvert(tt);
	vec4 cameraPosTransformed = ttInverse * vec4(CameraPos,1.0);
	
	// Eye Position in Fragment Shader
	eyePos = cameraPosTransformed.xyz;
	
	// Texture Coordinates
	vec4 texTransformed = texRotated - cameraPosTransformed;
	gl_TexCoord[0] = texTransformed;
		
	//Transform vertex by modelview and projection matrices
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	
	// Forward the current color
	gl_FrontColor = gl_Color;
}

the mat4 ‘tt’ is an XYZ rotation matrix I generate outside the shader program.
It has to be passed in as 4 vec4s due to a limitation of the application I’m using.

The CameraPos uniform can be fixed; as long as I can get the rotation working correctly, I’m not too worried about moving the camera around. I think I can zoom in/out by adding an offset to the texture coordinates’ z property anyway. If I could use a fixed matrix for viewProjection too, that would be cool.
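
Just to illustrate the z-offset zoom idea, this is roughly what I have in mind in the vertex shader (untested sketch, using the Zoom uniform declared in the code above):

	vec4 tex = gl_TextureMatrix[0] * gl_MultiTexCoord0 * 2.0;
	tex.z += Zoom;	// offsetting z changes the effective field of view, i.e. zooms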

As far as I can work out, the fragment shader uses the texture coordinates (X, Y and Z) to evaluate the ‘blob’ formula for the current frame, and the eyePos varying as the position from which rays are cast.

What’s currently happening is that the rendered ‘blobs’ are much too big. They’re also rendered with a strange kind of reversed perspective. If I rotate the camera on the X or Y axes, I get all kinds of weird distortions.

As I said, any advice at all very much appreciated.

Thanks in advance guys,

a|x
http://machinesdontcare.wordpress.com

Here’s the fragment shader code:

/*
	Ported from:

	QJuliaFragment.cg
	4/17/2004

	VVVV HLSL conversion October 2005 by Tebjan Halm

	Ported to GLSL/QC by Alex Drinkwater, April 2008

	Intersects a ray with the qj set w/ parameter mu and returns
	the color of the phong shaded surface (estimate)

	Keenan Crane (kcrane@uiuc.edu)
*/

// Interpolated camera position from VS
varying vec3 eyePos;
// Background color
uniform vec4 BackColor;

struct GRIDCELL {
	vec3 p[8];
	float val[8];
};

//	Isosurface threshold, and the size of the sampling cube used for
//	normal estimation (see normEstimate() below)

uniform float isolevel;		// Def. 0.25
uniform float cubesize;		// Def. 0.001


// ISO FUNCTIONS ---------------------------------------------------------------

uniform vec3 a;	// Range 0.0 > 10.0
uniform vec3 b;	// Range 0.0 > 10.0
uniform float t;	// Def. 1.125

//	The Blob: this function defines the look of the blobs.
//	You can put in here any function of a 3d point that has
//	some roots close to the origin.

float getVertexValue(vec3 p) {
	// The Blob isosurface by Paul Bourke 
	vec3 sqrP = p*p;
	return sqrP.x + sqrP.y + sqrP.z + b.x*sin(a.x*p.x) + b.y*sin(a.y*p.y) + b.z*sin(a.z*p.z)-t;
}
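
//	i.e. f(p) = p.x^2 + p.y^2 + p.z^2
//	          + b.x*sin(a.x*p.x) + b.y*sin(a.y*p.y) + b.z*sin(a.z*p.z) - t
//	The ray-marcher below looks for points where f(p) drops below isolevel;
//	with b = vec3(0.0) this reduces to a sphere of radius sqrt(t + isolevel).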

// ISO FUNCTIONS END -----------------------------------------------------------

// --------------------------------------------------------------------------------------------------
// PIXELSHADERS:
// --------------------------------------------------------------------------------------------------

//	Some constants used in the ray tracing process.
//	These constants were determined through trial and error and
//	are not by any means optimal.

uniform float BOUNDING_RADIUS_2;	// Square of the radius of a bounding sphere
								// for the set, used to accelerate intersection. Def. 25.0
uniform float epsilon;			// Specifies precision of intersection. Def. 0.01
vec3 eye = eyePos;				// Location of the viewer
uniform vec3 light;			// Light vector (used as a direction in Phong())


// ---------- intersectObject() ------------------------------------------

//	Comments from original Julia set implementation:

//	Finds the intersection of a ray with origin rO and direction rD with the
//	quaternion Julia set specified by quaternion constant c.  The intersection
//	is found using iterative sphere tracing, which takes a conservative step
//	along the ray at each iteration by estimating the minimum distance between
//	the current ray origin and the closest point in the Julia set.  The
//	parameter maxIterations is passed on to iterateIntersect() which determines
//	whether the current ray origin is in (or near) the set.

float intersectObject(inout vec3 rO, vec3 rD)
{
	float dist;		//	The (approximate) distance between the first point along the ray within
					//	epsilon of some point in the Julia set, or the last point to be tested if
					//	there was no intersection.
	
	do {
		dist = getVertexValue(rO);	// Distance to surface
		rO += rD * epsilon * dist;	// (step)
		
		//	Intersection testing finishes if we're close enough to the surface
		//	(i.e., we're inside the epsilon isosurface of the distance estimator
		//	function) or have left the bounding sphere.
	} while (dist >= isolevel && abs(dot(rO, rO)) <= BOUNDING_RADIUS_2);
	
	//	Return the distance for this ray
	return dist;
}
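
//	NB: getVertexValue() isn't a true distance estimator for this surface, so
//	the epsilon * dist step above is a heuristic: smaller epsilon values march
//	more slowly but more safely, larger ones risk stepping over thin features.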

vec3 normEstimate(vec3 p, vec3 rD){
	GRIDCELL cube;
	GRIDCELL cubeVector;
	vec3 normalAverage = vec3(0.0);
	float csh = cubesize * 0.5;
	
	// Corner offsets of a small sampling cube centred on the hit point
	cubeVector.p[0] = vec3(-csh, -csh,  csh);
	cubeVector.p[1] = vec3( csh, -csh,  csh);
	cubeVector.p[2] = vec3( csh, -csh, -csh);
	cubeVector.p[3] = vec3(-csh, -csh, -csh);
	cubeVector.p[4] = vec3(-csh,  csh,  csh);
	cubeVector.p[5] = vec3( csh,  csh,  csh);
	cubeVector.p[6] = vec3( csh,  csh, -csh);
	cubeVector.p[7] = vec3(-csh,  csh, -csh);
	
	// Sample the field magnitude at each corner
	for(int i = 0; i < 8; i++){
		cube.p[i] = p + cubeVector.p[i];
		cube.val[i] = abs(getVertexValue(cube.p[i]));
	}
	
	// Average the corner directions, weighted by field magnitude
	for(int i = 0; i < 8; i++){
		normalAverage += cubeVector.p[i]*cube.val[i];
	}
	
	return normalize(normalAverage);
}

vec3 normEstimate2(vec3 p) {
	return normalize(fwidth(p));
}
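
//	normEstimate2() is a cheap screen-space alternative: fwidth(p) is
//	abs(dFdx(p)) + abs(dFdy(p)) across neighbouring fragments, so it only
//	gives a rough, view-dependent approximation of the surface normal.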

// ----------- Phong() --------------------------------------------------
//
// Computes the direct illumination for point pt with normal N due to
// a light with direction 'light' and a viewer at eye.
//

// Light properties

uniform vec4 lAmb;		//	Ambient Color. Default (0.15, 0.15, 0.15, 1.0)
uniform vec4 lDiff;	//	Diffuse Color. Default (0.85, 0.85, 0.85, 1.0)
uniform vec4 lSpec;	//	Specular Color. Default (0.35, 0.35, 0.35, 1.0)
uniform float lPower;	//	Shininess of specular highlight. Range 0.0 > 25.0

//	Emulates the HLSL lit() intrinsic: returns ambient, diffuse and
//	specular coefficients
vec3 lit (float ndotl, float ndoth, float m)
{
	float ambient = 1.0;
	float diffuse = max(ndotl, 0.0);
	float specular = step(0.0, ndotl) * pow(max(ndoth, 0.0), m);

	return vec3(ambient, diffuse, specular);
}

vec3 Phong(vec3 light, vec3 eye, vec3 pt, vec3 N)
{
	//vec3 L = normalize( light - pt );	//	Find the vector to the light
	vec3 E = normalize( eye - pt );		//	Find the vector to the eye

	// Half vector (note: 'light' is used as a direction here, not a position)
	vec3 H = normalize(E + light);
	
	// Compute Blinn lighting coefficients
	vec3 shades = lit(dot(N, light), dot(N, H), lPower);
	
	vec4 diff = vec4(lDiff * shades.y);
	diff.a = 1.0;
	
	// Reflect the light direction about the normal
	vec3 R = normalize(2.0 * dot(N, light) * N - light);
	
	// Calculate the specular term from the reflection and view directions
	vec4 spec = vec4(pow(max(dot(R, E), 0.0), lPower*0.2)) * lSpec;
	
	// Combine ambient, diffuse and specular contributions
	vec4 col = vec4(1.0);
	col.rgb *= vec3(lAmb + diff + spec);

	return col.rgb;
}

// ---------- intersectSphere() ---------------------------------------
//
// Finds the intersection of a ray with a sphere with statically
// defined radius BOUNDING_RADIUS centered around the origin.  This
// sphere serves as a bounding volume for the Julia set.

vec3 intersectSphere(vec3 rO, vec3 rD)
{
	float B, C, d, t0, t1, t;
	
	B = 2.0 * dot(rO, rD);
	C = dot(rO, rO) - BOUNDING_RADIUS_2;
	d = sqrt(B*B - 4.0*C);
	t0 = (-B + d) * 0.5;
	t1 = (-B - d) * 0.5;
	t = min(t0, t1);
	rO += t * rD;
	
	return rO;
}
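
//	NB: there's no discriminant test above, so a ray that misses the bounding
//	sphere entirely gives B*B - 4.0*C < 0.0 and sqrt() of a negative number
//	(undefined in GLSL). This seems to rely on the camera setup guaranteeing
//	that every ray starts inside, or hits, the sphere.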

// ------------ MAIN() -------------------------------------------------
//
//  Each fragment performs the intersection of a single ray with
//  the quaternion Julia set.  In the current implementation
//  the ray's origin and direction are passed in on texture
//  coordinates, but could also be looked up in a texture for a
//  more general set of rays.
//
//  The overall procedure for intersection performed in main() is:
//
//  • move the ray origin forward onto a bounding sphere surrounding the Julia set
//  • test the new ray for the nearest intersection with the Julia set
//  • if the ray does include a point in the set:
//      • estimate the gradient of the potential function to get a "normal"
//      • use the normal and other information to perform Phong shading
//  • cast a shadow ray from the point of intersection to the light
//  • if the shadow ray hits something, modify the Phong shaded color to represent shadow
//    (NB: the shadow-ray step isn't implemented in this port)
//  • return the shaded color if there was a hit and the background color otherwise


void main()
{
	vec4 col;  // This color is the final output of our program.
	// Initially set the output color to the background color.  It will stay
	// this way unless we find an intersection with the Julia set.
	vec3 rO = eye;
	col = BackColor;
	
	// First, intersect the original ray with a sphere bounding the set, and
	// move the origin to the point of intersection.  This prevents an
	// unnecessarily large number of steps from being taken when looking for
	// intersection with the isosurface.
	
	vec3 rD = normalize(vec3(gl_TexCoord[0].xyz));	//the ray direction is interpolated and may need to be normalized
	rO = intersectSphere(rO, rD);
	
	// Next, try to find a point along the ray which intersects the Julia set.
	// (More details are given in the routine itself.)
	
	float dist = intersectObject(rO, rD);
	
	// We say that we found an intersection if our estimate of the distance to
	// the set is smaller than some small value epsilon.  In this case we want
	// to do some shading / coloring.
	
	if(dist < isolevel)
	{
		// Determine a "surface normal" which we'll use for lighting calculations.
		vec3 N = normEstimate(rO, rD);
		
		// Compute the Phong illumination at the point of intersection.
		col.rgb = Phong(light, rD, rO, N);
		col.a = 1.0;			// Make this fragment opaque
	}

	// Write the final fragment color
	gl_FragColor = col;
}

Anyone?

a|x
http://machinesdontcare.wordpress.com

OK, now it works!
Once I’d (finally) managed to visualise what I was trying to do, it all fell into place.
Not sure if this is the best, or most efficient way to do this, but it seems to work…

uniform mat4 rotate;	// Rotation matrix
uniform vec3 Camera;	// Camera position (transformed to eye position in frag shader)
varying vec4 eyePos;	// Eye position to fragment shader

void main()
{
	/*
	Transforms the texture coordinates and the position of the virtual camera
	to be sent to the ray-tracing fragment shader code.
	*/
	
	// Remap texture coordinates from (0,1) to (-1,1) and transform into eye space
	vec4 tex = gl_TextureMatrix[0] * gl_MultiTexCoord0;
	tex = tex * 2.0 - 1.0;
	tex = gl_ModelViewMatrix * tex;
	// Rotate texture coordinates
	tex = rotate * tex;
	
	// Transform camera position into eye space
	eyePos = gl_ModelViewMatrix * vec4(Camera,1.0);
	// Rotate camera
	eyePos = rotate * eyePos;
	
	// Transform vertex by modelview and projection matrices
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	
	// Forward texture coordinates
	gl_TexCoord[0] = tex;
}
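
One small tidy-up I might try at some point (untested sketch): since the texture coordinates and the camera position both get the same two transforms, rotate and gl_ModelViewMatrix can be combined once per vertex and reused, in place of the corresponding lines in main():

	// Combine the two transforms once and reuse for both
	mat4 rotMV = rotate * gl_ModelViewMatrix;
	
	vec4 tex = gl_TextureMatrix[0] * gl_MultiTexCoord0;
	tex = rotMV * (tex * 2.0 - 1.0);
	
	eyePos = rotMV * vec4(Camera, 1.0);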

Cheers guys,

a|x
http://machinesdontcare.wordpress.com
