By this analogy, you would appear to be attempting to take a texture and copy it to another texture, based on projecting the source texture onto an object that is mapped to the destination texture.
That’s… complicated. Doable, but complicated.
The key here is to remember what you’re rendering. You have the following things:
- A source texture.
- A destination texture.
- An object.
- A projection of the source texture onto that object. So you have some matrices and other settings to do that projection.
- A mapping from the object’s vertices to locations in the destination texture (aka: your object has texture coordinates).
So, your goal is to render into the destination texture. The question is this: what triangles do you render?
You can’t rasterize the object’s actual positions, as they are meaningless for this operation. Your goal is to modify the destination texture based on a projection of a texture onto the vertex positions. Because you’re rendering to the destination texture, the triangle you actually rasterize needs to represent where that triangle is in the texture’s space.
So the gl_Position you write from your vertex shader is not based on the position of the object in the world. It’s based on the “position” of the object in the destination texture. And therefore, gl_Position should be based on the texture coordinate, the mapping of vertex positions to locations in the destination texture.
You’ll still need the model’s positions, but they are not part of computing gl_Position. Your gl_Position is the texture coordinate, remapped from [0, 1] to the [-1, 1] range of the destination texture’s viewport.
The next question is how to fetch the right data from the source texture. To do that, you need to perform normal texture projection steps, based on the positions of your vertices. So you do the usual per-vertex transforms, but you use the projecting camera’s viewpoint (its view transform) and an appropriate projection matrix. Also, you don’t write the result to gl_Position; instead, you pass this 4-dimensional value from the vertex shader to the fragment shader as an interpolated output. From there, you use the textureProj functions to do projective texturing.
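Putting those two pieces together, here’s a minimal sketch of what that vertex shader might look like. The attribute and uniform names (`modelPosition`, `texCoord`, `projectorViewModel`, `projectorProj`) are placeholders for whatever your setup actually uses, and `projectorProj` is assumed to include the usual scale/bias so the projected coordinates land in [0, 1] after the perspective divide:

```glsl
#version 330 core

// Placeholder attribute names; substitute your own.
layout(location = 0) in vec3 modelPosition;  // object-space position
layout(location = 1) in vec2 texCoord;       // mapping into the destination texture
layout(location = 2) in vec3 modelNormal;    // used later for culling

// Transforms for projecting the source texture; placeholder names.
uniform mat4 projectorViewModel;  // object space -> projector's view space
uniform mat4 projectorProj;       // projector's projection, assumed to include
                                  // the usual [0, 1] scale/bias for textureProj

out vec4 projTexCoord;  // projective coordinate into the source texture
out vec3 viewNormal;    // normal in the projector's view space

void main()
{
    // Rasterize in the destination texture's space: remap the texture
    // coordinate from [0, 1] to the [-1, 1] range of the viewport.
    gl_Position = vec4(texCoord * 2.0 - 1.0, 0.0, 1.0);

    // The model's position is still needed, but not for gl_Position:
    // it drives the projective fetch from the source texture.
    projTexCoord = projectorProj * projectorViewModel * vec4(modelPosition, 1.0);

    // Rotate the normal into the projector's view space for the facing
    // test (assumes no non-uniform scaling in projectorViewModel).
    viewNormal = mat3(projectorViewModel) * modelNormal;
}
```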
Then just write the fetched value to the fragment shader output.
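A matching fragment shader sketch, again with placeholder names (`sourceTex` standing in for the source texture bound as a `sampler2D`):

```glsl
#version 330 core

uniform sampler2D sourceTex;  // the projected source texture (placeholder name)

in vec4 projTexCoord;  // projective coordinate from the vertex shader

out vec4 fragColor;

void main()
{
    // textureProj divides projTexCoord.xy by projTexCoord.w before
    // sampling, performing the projective fetch.
    fragColor = textureProj(sourceTex, projTexCoord);
}
```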
You also need to keep backface culling in mind. You probably don’t want faces that face away from the projecting texture to get painted. And you can’t rely on standard winding-order culling for that, since the positions being rasterized are texture coordinates, not the object’s actual positions. So instead, you need to cull triangles based on their normals. You could use a geometry shader for that. Alternatively, you can compute the dot product between the normal and the projector’s viewing direction in the fragment shader, discarding the fragment if the dot product is positive (i.e., the surface faces away from the projector).
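As a sketch of the latter approach, the fragment shader above could be extended like this, using the `viewNormal` output from the earlier vertex shader sketch. In the projector’s view space the viewing direction is (0, 0, -1), so the dot product test reduces to a sign check on the normal’s Z component:

```glsl
#version 330 core

uniform sampler2D sourceTex;  // placeholder name, as before

in vec4 projTexCoord;  // projective coordinate from the vertex shader
in vec3 viewNormal;    // normal in the projector's view space

out vec4 fragColor;

void main()
{
    // dot(normal, viewDir) with viewDir = (0, 0, -1) is just -viewNormal.z,
    // so a positive dot product means viewNormal.z is negative: the
    // surface faces away from the projector and shouldn't be painted.
    if (viewNormal.z <= 0.0)
        discard;

    fragColor = textureProj(sourceTex, projTexCoord);
}
```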
I don’t really know what any of this has to do with deferred rendering, g-buffers, or anything of that nature. You’re just trying to copy part of one texture into another, based on a projection onto a surface that maps into the destination texture.