Projecting screen space texture onto UVs with GL 4.5

In a deferred rendering path, I have a plane with a default black texture and a screen space texture. I want to permanently “stamp” the screen space texture into the plane’s black texture from the current camera’s viewpoint, vec3(5.f) looking at vec3(0.f). I’ve tried to use image load/store with image2D, and regular texture(), to no avail. I’ve also tried transforming the UV coordinates by the View, Projection, and ModelView matrices. The texture should be applied horizontally, since I’m looking straight through the screen space texture, as seen here:

Should the object space UVs be transformed to a different space to match the camera? If so, which space and why?

/* gBuffer_F */
#version 450
#extension GL_ARB_bindless_texture : require

layout(bindless_sampler, location = 1) uniform sampler2D bakeTest_64;
layout(location = 0) out vec4 gbuf0;
layout(location = 1) out vec4 gbuf1;

in Vert
{
    vec2 UV;
} v;

void main()
{
    gbuf0 = vec4(v.UV, 0.f, 1.f);

    vec3 albedoM = texture(bakeTest_64, v.UV).rgb;
    gbuf1 = vec4(albedoM, 1.f);
}

/* projectedTexBake_F */
#version 450
#extension GL_ARB_bindless_texture : require

in Vert
{
    vec2 uv;
} v;

layout(bindless_sampler, location = 2) uniform sampler2D gbuf0;
layout(bindless_sampler, location = 3) uniform sampler2D toBake_64;
layout(location = 0) out vec4 Ci;

uniform mat4 PM, VM, MV; // declared but not yet used for any projection

void main()
{
    // Currently this just samples with the mesh's own UVs - no projection applied.
    vec4 UV_regular = vec4(v.uv, 0.f, 1.f);
    Ci = texture(toBake_64, UV_regular.rg);
}

/* deferredLighting_F */
#version 450
#extension GL_ARB_bindless_texture : require

in Vert
{
    vec2 uv;
} v;

layout(bindless_sampler) uniform sampler2D gbuf1_64;
layout(location = 0) out vec4 Ci;

void main()
{
    Ci = texture(gbuf1_64, v.uv);
}

It’s not clear to me exactly what you’re trying to do. It sounds like you’re trying to render one texture into another, but it’s not clear what the problem is.

You’re talking a lot about UVs, but if you’re trying to render one texture onto the other, then the only UVs that are relevant are the UVs for the triangle(s) that you’re mapping the source texture onto. And those UVs should simply cover the portion of the source texture you want to copy. So if you’re rendering the whole texture as a quad, then the texture coordinates should be the four corners of the texture coordinate space: (0, 0), (0, 1), (1, 0) and (1, 1).

What matters for how it appears in the destination texture is not the source UVs, but the source positions. After all, you’re just rendering. Whether to a texture or to the screen, where things get rendered depends on the positions of the vertices that make up the triangles.

So the result you get depends on what happens to the vertex positions.
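
To make the distinction concrete, here is a minimal vertex-stage sketch for a straight copy (the attribute names are made up, assuming a single quad):

/* straight copy - vertex sketch */
#version 450

layout(location = 0) in vec2 cornerPos;   // where the quad lands in the destination, in [-1, 1]
layout(location = 1) in vec2 cornerUV;    // which part of the source texture to read, in [0, 1]

out vec2 uv;

void main()
{
    // The output position controls WHERE the copy ends up;
    // the UV only controls WHAT part of the source is read.
    gl_Position = vec4(cornerPos, 0.f, 1.f);
    uv = cornerUV;
}

The fragment shader would then just sample the source texture at uv and write the result.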

A similar situation would be in a projection painting application like Mari where you paint or load an image into the “paint buffer”, then press B to bake that data into the selected object’s current texture.

[QUOTE=Alfonse Reinheart;1271956]
What matters for how it appears in the destination texture is not the source UVs, but the source positions. After all, you’re just rendering. Whether to a texture or to the screen, where things get rendered depends on the positions of the vertices that make up the triangles.

So the result you get depends on what happens to the vertex positions.[/QUOTE]

So I should reconstruct P in my gBuffer and multiply that by the object space UVs? Right now the result of the projection doesn’t seem to have the MVP matrix applied to it; this is the source of my confusion.

By this analogy, you would appear to be attempting to take a texture and copy it to another texture, based on projecting the source texture onto an object that is mapped to the destination texture.

That’s… complicated. Doable, but complicated.

The key here is to remember what you’re rendering. You have the following things:

  • A source texture.
  • A destination texture.
  • An object.
  • A projection of the source texture onto that object. So you have some matrices and other settings to do that projection.
  • A mapping from the object’s vertices to locations in the destination texture (aka: your object has texture coordinates).

So, your goal is to render into the destination texture. The question is this: what triangles do you render?

You can’t render the positions of the object, as they are meaningless. Your goal is to modify the destination texture based on a projection of a texture onto the vertex positions. Because you’re rendering to the destination texture, the triangle you actually rasterize needs to represent where that triangle is in the texture’s space.

So the gl_Position you write from your vertex shader is not based on the position of the object in the world. It’s based on the “position” of the object in the destination texture. And therefore, gl_Position should be based on the texture coordinate, the mapping of vertex positions to locations in the destination texture.

You’ll still need the model’s positions, but they are not part of computing gl_Position. Your gl_Position is the texture coordinates, remapped to fit the [-1, 1] range of the destination texture viewport.

The next question is how to fetch the right data from the source texture. To do that, you need to perform normal texture projection steps, based on the positions of your vertices. So you do the usual per-vertex transforms, but you use the projection camera viewport and an appropriate projection matrix. Also, you don’t write the result to gl_Position; instead, you pass this 4-dimensional data as a vertex attribute to the fragment shader. From there, you use the textureProj functions to do projective texturing.

Then just write the fetched value to the fragment shader output.
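
In shader form, the vertex stage would look roughly like this (a sketch only; projectorMVP and the attribute names are placeholders, not code from this thread):

/* bake into destination texture - vertex sketch */
#version 450

layout(location = 0) in vec3 position;   // object-space position
layout(location = 1) in vec2 texCoord;   // mapping into the destination texture
layout(location = 2) in vec3 normalIn;   // passed along for culling later

uniform mat4 projectorMVP;               // projection * view * model of the projecting camera

out vec4 projCoord;
out vec3 normal;

void main()
{
    // Rasterize the triangle where it lives in the destination texture:
    // remap its texture coordinates from [0, 1] to clip-space [-1, 1].
    gl_Position = vec4(texCoord * 2.f - 1.f, 0.f, 1.f);

    // The positions are still needed, but only to build the projective
    // coordinate used to fetch from the source texture.
    projCoord = projectorMVP * vec4(position, 1.f);
    normal = normalIn;
}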

You also need to keep in mind backface culling. You probably don’t want faces that face away from the projecting texture to get painted. And you can’t rely on the vertex positions for that, since they’re texture coordinates. So instead, you need to cull triangles based on their normals. You could use a geometry shader for that. Alternatively, you can do the dot product between the normal and the camera viewing direction in the fragment shader, discarding the fragment if the dot product is positive.
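
A matching fragment stage with that test folded in might look like this (again just a sketch; projectorDir is an assumed uniform holding the direction the projecting camera looks along):

/* bake into destination texture - fragment sketch */
#version 450

in vec4 projCoord;
in vec3 normal;

uniform sampler2D sourceTex;
uniform vec3 projectorDir;   // direction the projecting camera looks along

layout(location = 0) out vec4 outColor;

void main()
{
    // Cull faces pointing away from the projector; the rasterizer can't do it,
    // because gl_Position holds texture coordinates rather than positions.
    if (dot(normalize(normal), projectorDir) > 0.f)
        discard;

    // textureProj divides projCoord by its w component before sampling, so the
    // projector matrix should include a [0, 1] bias for the lookup to land in range.
    outColor = textureProj(sourceTex, projCoord);
}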

I don’t really know what any of this has to do with deferred rendering, g-buffers, or anything of that nature. You’re just trying to copy part of one texture into another, based on a projection onto a surface that maps into the destination texture.

And you may also want to use a depth pass so that the texture doesn’t get applied to camera-facing surfaces which are occluded by nearer surfaces. The principle behind that is essentially the same as shadow mapping.

That’s going to be rather difficult when rendering into a texture. Remember: the positions are locations in the texture being rendered to. And the texture mapping probably doesn’t overlap.

You could do some manual depth buffering with image load/store, but the automatic depth buffer won’t work for this approach.

Forward render (with the actual vertex positions in gl_Position) into the depth buffer first, then compare against the depth buffer during the “inverse” render (with destination texture coordinates in gl_Position and projected vertex positions as source texture coordinates).
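
In shader terms, the comparison during the “inverse” render would look roughly like this (a sketch; projectorDepth is assumed to hold the depth rendered from the projector’s viewpoint, and projCoord to already include the [0, 1] bias):

/* occlusion test during the bake pass - sketch */
#version 450

in vec4 projCoord;                  // projector-space coordinate from the vertex shader

uniform sampler2D sourceTex;
uniform sampler2D projectorDepth;   // depth rendered from the projector's point of view

layout(location = 0) out vec4 outColor;

void main()
{
    // Perspective divide; yields [0, 1] coordinates if the bias matrix was applied.
    vec3 p = projCoord.xyz / projCoord.w;

    // Compare against the depth the projector saw; if something nearer was in the
    // way, this texel is occluded and should keep its existing value.
    float storedDepth = texture(projectorDepth, p.xy).r;
    if (p.z - 0.001f > storedDepth)
        discard;

    outColor = textureProj(sourceTex, projCoord);
}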

I was successful in projecting a texture onto an object by setting the destination object’s UVs as the gl_Position and then doing textureProj if NdotV > 0.

However, I now have a problem with texture seams being extremely visible when sampling this texture on an object. This is the same ABJ tex with green letters and white BG, just projected onto a teapot and saved as a texture and then applied. You can see all my UV seams. I’m wondering if this is a problem with the shader or with incorrect sampling parameters when I initialized the texture?

[ATTACH=CONFIG]1164[/ATTACH]

Also, I found that changing the projection matrix settings when using glm::perspective - specifically the FOV - causes multiple projections and makes the projected image grow or shrink on the object. Have I forgotten to compensate for this somewhere?

[ATTACH=CONFIG]1165[/ATTACH]

/* CPU */
// View matrix of the projecting (bake debug) camera.
glUniformMatrix4fv(glGetUniformLocation(proH, "VM_bakeDEBUG"), 1, GL_FALSE, &myGL->VM_bakeDEBUG[0][0]);

// Normal matrix for bringing normals into the projector's view space.
glm::mat4 MV_bakeDEBUG = myGL->VM_bakeDEBUG * MM;
glm::mat3 NM_bakeDEBUG = glm::mat3(glm::transpose(glm::inverse(MV_bakeDEBUG)));
glUniformMatrix3fv(glGetUniformLocation(proH, "NM_bakeDEBUG"), 1, GL_FALSE, &NM_bakeDEBUG[0][0]);

// ProjectorM = bias * projector projection * projector view * model.
glm::mat4 ProjectorM = biasM * myGL->selCamLi->PM * myGL->VM_bakeDEBUG * MM;
glUniformMatrix4fv(glGetUniformLocation(proH, "ProjectorM"), 1, GL_FALSE, &ProjectorM[0][0]);



/* VERT */
#version 450

layout(location = 0) in vec3 pE;
layout(location = 1) in vec2 uvE;
layout(location = 3) in vec3 nE;

uniform mat3 NM_bakeDEBUG;
uniform mat4 MVP, ProjectorM;

out Vert
{
    vec3 N_VS_bakeDEBUG;
    vec4 bakeCoord;
} v;

void main()
{
    // Rasterize the mesh in its UV layout: remap UVs from [0, 1] to [-1, 1].
    vec2 transformUV = uvE * 2.f - 1.f;
    gl_Position = vec4(transformUV, 0.f, 1.f);

    v.N_VS_bakeDEBUG = normalize(NM_bakeDEBUG * nE);

    // Projective coordinate into the texture being projected.
    v.bakeCoord = ProjectorM * vec4(pE, 1.f);
}

/* FRAG */
#version 450
#extension GL_ARB_bindless_texture : require

in Vert
{
    vec3 N_VS_bakeDEBUG;
    vec4 bakeCoord;
} v;

layout(bindless_sampler, location = 0) uniform sampler2D toProject_64;
layout(location = 0) out vec4 Ci;

uniform mat4 VM_bakeDEBUG;

void main()
{
    // Direction towards the projecting camera, transformed into view space
    // (w = 0, so it is treated as a direction rather than a position).
    vec3 V_VS_bakeDEBUG = normalize(VM_bakeDEBUG * vec4(10.f, 10.f, 10.f, 0.f)).xyz;

    if (v.bakeCoord.q > 0.f) // prevents reverse projection
    {
        if (dot(v.N_VS_bakeDEBUG, V_VS_bakeDEBUG) > 0.f) // NdotV prevents backfacing projection
        {
            Ci = textureProj(toProject_64, v.bakeCoord);
        }
        else
            Ci = vec4(0.f); // existing texture
    }
    else
        Ci = vec4(0.f); // ensure the output is always written
}

It sounds like you’re using different projections at different times.

To bake a rendered image into a texture, you need to generate texture coordinates by transforming object coordinates using the same transformations used for rendering the image. The model transformation can be used to position the object, but the transformation from world space to screen space (i.e. the view and projection transformations) needs to match.
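
For example, if the screen-space image was rendered with a particular view and projection, the bake pass should build its projective coordinates from those same matrices (a sketch with placeholder uniform names; biasM remaps clip space [-1, 1] to texture space [0, 1]):

/* texture-coordinate generation for the bake - sketch */
#version 450

layout(location = 0) in vec3 position;
layout(location = 1) in vec2 texCoord;

// Must be the same model, view and projection matrices that were used
// when the screen-space image was rendered.
uniform mat4 modelM, viewM, projM, biasM;

out vec4 projCoord;

void main()
{
    gl_Position = vec4(texCoord * 2.f - 1.f, 0.f, 1.f);
    projCoord = biasM * projM * viewM * modelM * vec4(position, 1.f);
}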