Projecting texture onto another texture

Can anybody help me with a way to project a texture onto another, exactly as if one put a slide into an overhead projector and aimed it at a 3D object? (The aim is to investigate the possibility of a GPU-based texture painting program.)

I want to use the render-to-texture feature for this, so the question should probably be posed as how to set up a transform for the projected image so it can be rendered the normal way. Perhaps these images can be of help:

…The above situation as seen from the render-to-texture viewpoint:

I have tried fumbling with the projection matrix as in this:

glTranslate (…to painter's eyepoint…)
glFrustum (…ordinary setup…)
glTranslate (…back from painter's eyepoint…)

…In an attempt to get this principle working (I’m wildly guessing here, helped by some simple reasoning):

Hello,

I’ve done something like this (in a different context, but it seems to be the same idea). What I do is render into the destination texture and use the reference viewpoint as the texture-coordinate generation matrix.

If P and M are the projection and modelview matrices, T is the texgen matrix, and P’ and M’ are the projection and extrinsics matrices of the reference view, then I set P = H^-1, M = I and T = H·P’·M’, where H is the homography that maps OpenGL image coordinates to texture coordinates. I then render the quad using its texture coordinates as vertices and supplying its world coordinates as its texture coordinates. This way, its texture coordinate is projected into the reference view and then applied to the texture space (which is being rendered to the framebuffer). You’ll need to clamp coordinates that fall outside the image, of course.
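If it helps to see that pipeline in numbers, here is a toy Python sketch of T = H·P’·M’ (the matrix helpers, the simple symmetric perspective for P’ and the identity M’ are my own assumptions for illustration, not John's actual code): a world point is pushed through the reference view, perspective-divided, and lands in [0,1] texture space.

```python
# Toy sketch of T = H * P' * M' (helper names are mine, not from the post).

def mat_mul(A, B):
    # 4x4 row-major matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(4)) for i in range(4)]

# H: maps OpenGL clip-cube coordinates [-1,1] into texture space [0,1]
H = [[0.5, 0.0, 0.0, 0.5],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.5, 0.5],
     [0.0, 0.0, 0.0, 1.0]]

# A toy reference view: identity extrinsics M', simple 90-degree perspective P'
# looking down -z with near = 1, far = 3
n, f = 1.0, 3.0
P_ref = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, -(f + n) / (f - n), -2.0 * f * n / (f - n)],
         [0.0, 0.0, -1.0, 0.0]]
M_ref = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

T = mat_mul(H, mat_mul(P_ref, M_ref))

def world_to_texcoord(p):
    # project through T, then divide by w to land in texture space
    x, y, z, w = mat_vec(T, p + [1.0])
    return (x / w, y / w)

# A point on the reference view axis projects to the centre of the texture:
print(world_to_texcoord([0.0, 0.0, -2.0]))  # -> (0.5, 0.5)
```

Setting P = H^-1 and M = I in the actual render then makes OpenGL rasterise directly in texture space, while T carries each fragment back into the reference image.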

Incidentally, here is a picture of my program doing something to illustrate what’s going on:
http://www.cs.adelaide.edu.au/~john/construct6.png

This picture back-projects a reference view onto a scene. The bottom picture shows the textures applied to the sphere and cube (the texture appears warped because that’s what you get when you project an image onto a non-planar scene and then warp the surface to a plane.)

I hope this helps,
cheers
John

This is exactly what I need. Sorry for not answering; after a week I gave up and made a primitive solution where I calculated vertices and then rendered the quad onto the texture. Yours is better since it gets the perspective correct, whereas I have to subdivide the quad into many pieces to avoid texture distortion.

I’m not sure I understand everything in your explanation, but give me a little time to read it through first. P’ and M’ are simply the projection and modelview matrices of the eyepoint/reference point, right? What are the OpenGL image coordinates you use to get H?

…And how on earth did you figure this out :smiley:
But anyway, thanks a lot.

Hi,

no worries; I hope it helps you out. Actually, that post was hastily written while caffeine-deprived and I knew at the time it was vague, but I figured you’d either get the idea or else ask follow-up questions. Good work on the mesh subdivision as an interim solution, though. I’d imagine it’d be a pain to implement :frowning:

The P’ and M’ are indeed the projection and modelview matrices of the reference camera (or, in your case, the texture source that you’re projecting onto the scene). P and M refer to the third-person view of the scene (i.e., the one that’s actually being rendered by OpenGL into the current window).

H is the homography that maps image coordinates to texture coordinates. The OpenGL projection maps scene points into a cube of size 2 centred at the origin, i.e., the bottom-left corner of the image plane is [-1,-1]. Texture space, on the other hand, is defined over the range [0,1], so the bottom-left corner is [0,0]. Consequently, you need a map from the image plane to texture space so that a projected point ending up at [-1,-1], for example, is mapped to the texel [0,0]. This is the homography (planar mapping) I was talking about, and it is defined as

0.5 0.0 0.0 0.5
0.0 0.5 0.0 0.5
0.0 0.0 0.5 0.5
0.0 0.0 0.0 1.0
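As a sanity check (my own toy snippet, not from the thread), applying that matrix to homogeneous clip-cube points shows the corner remapping:

```python
# Apply the [-1,1] -> [0,1] homography H to homogeneous points (toy check).
H = [[0.5, 0.0, 0.0, 0.5],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.5, 0.5],
     [0.0, 0.0, 0.0, 1.0]]

def apply_h(p):
    return [sum(H[i][k] * p[k] for k in range(4)) for i in range(4)]

print(apply_h([-1.0, -1.0, -1.0, 1.0]))  # bottom-left corner -> [0.0, 0.0, 0.0, 1.0]
print(apply_h([1.0, 1.0, 1.0, 1.0]))     # top-right corner   -> [1.0, 1.0, 1.0, 1.0]
```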

This stuff is just shadow mapping, anyway ('cept it’s projecting light). I’d imagine there are tutorials on the net for this kind of thing if you want a second opinion :slight_smile: Incidentally, I also do shadow mapping so the radiance from the camera is only assigned to one scene point. Without it, the texture is propagated through the scene, which may or may not be the effect you’re after.

I hope this helps. It sounds like you’re doing a research project, so I’m glad to help out; feel free to ask for more clarification. Cheers, also, for the kudos :slight_smile:

cheers,
John

Thanks again. It all makes more sense now. I’m just playing with making a small texture paint utility, no research project, I’m afraid :slight_smile:

Before I even saw your solution, I made a quick hack with a vertex program that handled the projection:

	// TOOL ray info
	//
	// The incoming coordinates are expressed in the tools own coordinate system.
	// They will have to be changed to world coords first, thus we begin with
	// constructing the tool basis axes:
	// 		I = JxK
	//		J = up
	//		K = - direction (shoots paint along negative Z)
	vec3 tool_I = - normalize (cross (tool_up, -tool_direction));
	vec3 tool_J =   normalize (cross (tool_I , -tool_direction));
	
	// And we also find the direction-vector of
	// the RAY: tool_position ==> tool_position    +    x*tool_I + y*tool_J + z*tool_K
	//  
	vec3 ray_dir = gl_Vertex.x * tool_I + gl_Vertex.y * tool_J + tool_direction;


	// PLANE of the face
	//
	// Next we want to build info for the PLANE of the FACE that we are painting at,
	// i.e. the normal of the plane: AxB
	vec3 plane_OA = plane_A - plane_O;
	vec3 plane_OB = plane_B - plane_O;
	vec3 plane_N = normalize (cross (plane_OB , plane_OA));
	

	// RAY-PLANE INTERSECTION
	//
	// Then we find the intersection point in global coordinates of the RAY and the PLANE
	//   Plane: (V-plane_O) dot plane_N = 0
	//   Ray  : t*ray_dir + tool_position
	//
	// This yields: t = (plane_O - tool_position) dot plane_N
	//					-------------------------------------
	//					               tool_dir dot plane_N
	
	float RdotN = dot (ray_dir, plane_N);
	if (RdotN < 0.001)
	{	// Problem: ray is (nearly) parallel to the face, or hits its back side
		gl_Position = vec4 (0.0, 0.0, 1.0, 1.0);
		no_hit = 100.0;
	}
	else
	{
		no_hit = 0.0f;
		float t = dot((plane_O - tool_position) , plane_N)  /  RdotN;
		vec3  hit = tool_position + t * ray_dir;
		
		// Construct a basis I,J,K based on O,OA,OB
		// P is the point we want to project into O,OA,OB coordinates
		vec3  P = hit - plane_O;
		vec3  I = normalize (plane_OA);		
		vec3  J = - normalize(cross (I,plane_N));
		
		// Some algebra gives the 2x2 matrix
		//   M = [m o]
		//       [n p]
		// that satisfies:
		//   M*[a,0] = [1,0]   i.e. OA (expressed in the I,J basis) maps to E1
		//   M*[b,c] = [0,1]   i.e. OB (expressed in the I,J basis) maps to E2
		//
		// Then we project P onto I,J to get (i,j),
		// and M*(i,j) gives P's coordinates (s,t) in the O,OA,OB frame.
			
		float a = length(plane_OA);		// OA in the (I,J) basis is [a 0]
		float b = dot(plane_OB, I);		// OB in the (I,J) basis is [b c]
		float c = dot(plane_OB, J);
		float m = 1.0/a;
		float n = 0.0;
		float o = - m*b / c;
		float p = 1.0/c;
		mat2  M = mat2(m,n,o,p);
		float PonI = dot (P,I);
		float PonJ = dot (P,J);
		vec2  PinAB= M * vec2(PonI,PonJ);
		
		// Going from P in O,OA,OB coordinates to the UV texture space is easy:
		// PinUV = To + Toa*s + Tob*t
		vec2  PinUV= tex_O + PinAB.x*(tex_A-tex_O) + PinAB.y*(tex_B-tex_O);
		gl_Position = vec4 (PinUV * 2.0 - 1.0, 1.0, 1.0);	// map [0,1] UV into [-1,1] clip space

	}
	gl_FrontColor = gl_Color;
	gl_BackColor = gl_Color;
	gl_TexCoord[0] = gl_MultiTexCoord0;
  

Though it required some hard debugging, it is running now, but the quality is not good, as I suspected.
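For reference, the ray-plane intersection and the OA/OB change of basis that the shader performs can be mirrored on the CPU like this (a Python sketch under toy geometry of my own choosing; the vector helpers and names are assumptions, not the original code). It returns (s, t) such that hit = O + s·OA + t·OB, which is exactly what the shader feeds into the UV interpolation.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def normalize(v):
    l = math.sqrt(dot(v, v))
    return [c / l for c in v]

def ray_face_st(ray_org, ray_dir, O, A, B):
    """Intersect a ray with the plane through O, A, B and return (s, t)
    such that hit = O + s*OA + t*OB, or None if the ray misses the plane."""
    OA, OB = sub(A, O), sub(B, O)
    N = normalize(cross(OB, OA))           # same orientation as the shader
    rdn = dot(ray_dir, N)
    if abs(rdn) < 1e-6:
        return None                        # ray (nearly) parallel to the face
    t = dot(sub(O, ray_org), N) / rdn
    hit = [ray_org[i] + t * ray_dir[i] for i in range(3)]
    P = sub(hit, O)
    # Orthonormal in-plane frame (I, J), then the 2x2 map to (s, t)
    I = normalize(OA)
    J = normalize(cross(N, I))             # in-plane, perpendicular to I
    a = math.sqrt(dot(OA, OA))             # OA in (I,J) is [a, 0]
    b, c = dot(OB, I), dot(OB, J)          # OB in (I,J) is [b, c]
    i, j = dot(P, I), dot(P, J)
    s = i / a - (b / (a * c)) * j          # M * (i, j) with M = [[1/a, -b/(ac)], [0, 1/c]]
    return (s, j / c)

# Unit square in the z=0 plane, ray shooting straight down:
print(ray_face_st([0.25, 0.25, 1.0], [0.0, 0.0, -1.0],
                  [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))
# -> (0.25, 0.25)
```

Note this sketch rejects near-parallel rays with abs(rdn), where the shader's signed test also discards back-facing hits; which behaviour you want depends on whether paint should pass through faces.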