Texture projection mapping (android glsl2)

I have a textured object and an image, and I need to project this image onto the object.
I also have:
“rayStart” - vec3, the point I’m looking from.
“rayEnd” - vec3, the point I’m looking at.
“rayRot” - vec3, which defines the rotation of the image.

I can’t figure out how to write the vertex and fragment shaders, and I’m looking for help getting this working. I tried to make use of the tutorial from ozone3d.net, but got stuck at the very beginning with “TexGenMatrix”. For now I only have this basic code:
VERT:
uniform mat4 modelViewProjectionMatrix;

attribute vec4 position;
attribute vec2 texture0;
attribute vec2 texture1;

varying vec2 texCoord1;
varying vec2 texCoord2;

void main() {
	// Pass both sets of UVs through to the fragment shader.
	texCoord1 = texture0;
	texCoord2 = texture1;
	gl_Position = modelViewProjectionMatrix * position;
}

FRAG:


precision highp float;

uniform sampler2D textureUnit0;	// base texture
uniform sampler2D textureUnit1;	// image to be projected

varying vec2 texCoord1;
varying vec2 texCoord2;

uniform vec3 rayStart;
uniform vec3 rayEnd;
uniform vec3 rayRot;

void main() {
	vec4 texColor = texture2D(textureUnit0, texCoord1);
	vec3 decalColor = texture2D(textureUnit1, texCoord2).rgb;	// not applied yet

	gl_FragColor = texColor;
}

[QUOTE=AnNE DoM.ini;1278923]I have some textured object and image. I need to project this image on object.
[/QUOTE]
That isn’t much to go on, and nothing else in your post really elaborates upon that, other than the fact that you mention “TexGenMatrix”.

One common way of performing texture projection is to specify a matrix which maps coordinates (in either object space, “world” space or eye space) to texture space. This functions much like a camera transformation, except that the camera is a projector, projecting a texture onto the objects in the scene.

To implement that, you’d transform the vertex coordinates by the texture matrix to obtain the texture coordinates.

Key points:

  1. The vertex coordinates which are transformed by the matrix need to be in the correct coordinate system for the matrix. If the matrix is constructed for world coordinates, it needs to be applied to world coordinates (this implies that you need separate model and view matrices rather than a combined model-view matrix).

  2. The texture coordinates arising from the transformation will be homogeneous coordinates. For a 2D texture, they will have 3 or 4 components. These should be used directly as a vertex shader output, without normalisation. The texture lookup should be performed in the fragment shader using textureProj() (or textureProjLod(), textureProjGrad() etc; in GLSL ES 2 the equivalent is texture2DProj()).

  3. The model-view-projection matrix (or matrices) for a camera ultimately yield clip coordinates, which are converted to normalised device coordinates, which are in the range -1 to +1, while texture coordinates are in the range 0 to 1.

This approach is similar to how the legacy glTexGen function operates, except that glTexGen treats the matrix as four distinct planes rather than as a matrix per se.
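The points above can be sketched numerically. This is a minimal Python sketch (not the poster's code; all names such as `bias`, `projector_view` and `tex_gen` are illustrative, and identity matrices stand in for the projector's real view and projection):

```python
# Sketch of the projective-texturing math, row-major 4x4 matrices as lists.

def mat_mul(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """Transform a 4-component vector by a row-major 4x4 matrix."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# Point 3: the projector's clip/NDC coordinates lie in [-1, 1] while texture
# coordinates lie in [0, 1]; this "bias" matrix scales and offsets by 0.5.
bias = [[0.5, 0.0, 0.0, 0.5],
        [0.0, 0.5, 0.0, 0.5],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 0.0, 1.0]]

# Stand-ins for the projector's view and projection matrices (identity here,
# purely so the example runs; in practice these come from rayStart/rayEnd).
projector_view = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
projector_proj = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# The complete texture-generation ("TexGen") matrix. Per point 1 it must be
# applied to positions in the space it was built for (here: world space).
tex_gen = mat_mul(bias, mat_mul(projector_proj, projector_view))

# Point 2: the result is a homogeneous texcoord; do NOT divide by w here.
# Pass it to the fragment shader and sample with texture2DProj().
world_pos = [0.5, -0.5, 0.0, 1.0]
tex_coord = mat_vec(tex_gen, world_pos)  # homogeneous (s, t, r, q)
```

On the GPU the `tex_gen` matrix would be a uniform, the `mat_vec` step happens in the vertex shader, and the divide by q happens inside texture2DProj() in the fragment shader.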

So, how do I construct the proper matrix which will do the work?
As I understand it, I need to:

  1. take an identity matrix and translate it to the “starting point”
  2. rotate the matrix to look at the “end point”
  3. multiply it by the world matrix

After this step I’m stuck (and I’m not sure the previous steps are right either).

I’m looking for some help with all of this math and/or more detailed steps, especially with figuring out how to construct the matrix from rayStart, rayEnd and rayRot.
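The translate-then-orient steps above amount to building a gluLookAt-style view matrix for the projector. A hypothetical sketch in plain Python (no external deps; on Android the same math is available via android.opengl.Matrix). How rayRot maps to a roll angle is an assumption here, since the post doesn't say how it's encoded:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def look_at(eye, target, up):
    """Row-major 4x4 view matrix looking from eye toward target (gluLookAt-style)."""
    f = normalize([t - e for t, e in zip(target, eye)])  # forward axis
    s = normalize(cross(f, up))                           # right axis
    u = cross(s, f)                                       # corrected up axis
    # Rotation rows are the basis vectors; translation moves eye to origin.
    return [[ s[0],  s[1],  s[2], -dot(s, eye)],
            [ u[0],  u[1],  u[2], -dot(u, eye)],
            [-f[0], -f[1], -f[2],  dot(f, eye)],
            [ 0.0,   0.0,   0.0,   1.0]]

# rayStart / rayEnd give position and direction. Here I ASSUME the image
# rotation is a single roll angle (radians) around the view axis, applied by
# tilting the up vector -- adjust to however rayRot is actually encoded.
ray_start = [0.0, 0.0, 5.0]
ray_end   = [0.0, 0.0, 0.0]
roll      = 0.0
up        = [math.sin(roll), math.cos(roll), 0.0]

projector_view = look_at(ray_start, ray_end, up)
```

This matrix plays the role of the "view" part of the TexGen matrix; combining it with a projection matrix and the 0.5 bias gives the matrix the reply describes.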