
Thread: Projecting screen space texture onto UVs with GL 4.5

  1. #1 mrMatrix, Junior Member Newbie

    Projecting screen space texture onto UVs with GL 4.5

    In a deferred rendering path I have a plane with a default black texture, and a screen space texture. I want to permanently "stamp" the screen space texture into the plane's black texture from the current camera's viewpoint (eye at vec3(5.f), looking at vec3(0.f)). I've tried using image load/store functions with image2D and regular texture() to no avail. I've also tried transforming the UV coordinates by the View, Projection, and ModelView matrices. The stamped texture should appear applied straight-on, since I'm looking through the screen space texture, as seen in the attached image:



    Should the object space UVs be transformed to a different space to match the camera? If so, which space and why?

    Code :
    /* gBuffer_F */
    layout(bindless_sampler, location = 1) uniform sampler2D bakeTest_64;
    layout(location = 0) out vec4 gbuf0;
    layout(location = 1) out vec4 gbuf1;

    in Vert
    {
        vec2 UV;
    } v;

    void main()
    {
        // Write the object-space UVs to the first g-buffer target.
        gbuf0 = vec4(v.UV, 0.f, 1.f);

        vec3 albedoM = texture(bakeTest_64, v.UV).rgb;
        gbuf1 = vec4(albedoM, 1.f);
    }

    /* projectedTexBake_F */
    in Vert
    {
        vec2 uv;
    } v;

    layout(bindless_sampler, location = 2) uniform sampler2D gbuf0;
    layout(bindless_sampler, location = 3) uniform sampler2D toBake_64;
    layout(location = 0) out vec4 Ci;

    uniform mat4 PM, VM, MV; // projection, view, model-view; currently unused

    void main()
    {
        // Currently samples with the raw object-space UVs; no projective
        // transform is applied, which is the problem described above.
        vec4 UV_regular = vec4(v.uv, 0.f, 1.f);
        Ci = texture(toBake_64, UV_regular.rg);
    }

    /* deferredLighting_F */
    in Vert
    {
        vec2 uv;
    } v;

    // Declarations missing from the original post (sampler location assumed):
    layout(bindless_sampler, location = 4) uniform sampler2D gbuf1_64;
    layout(location = 0) out vec4 Ci;

    void main()
    {
        Ci = texture(gbuf1_64, v.uv);
    }
    [Attached image: DUugxnz.jpg]

  2. #2 Alfonse Reinheart, Senior Member OpenGL Lord
    It's not clear to me exactly what you're trying to do. It sounds like you're trying to render one texture into another, but it's not clear what the problem is.

    You're talking a lot about UVs, but if you're trying to render one texture onto the other, then the only UVs that are relevant are the UVs of the triangle(s) you're mapping the source texture onto. And those UVs should simply cover the portion of the source texture you want to copy. So if you're rendering the whole texture as a quad, then the texture coordinates should be the four corners of texture coordinate space: (0, 0), (0, 1), (1, 0) and (1, 1).
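    For example, a minimal vertex shader sketch for blitting a whole texture as a quad, drawn as a 4-vertex triangle strip with no vertex buffers (the uv output name is just illustrative, not from the original code):

    Code :
    /* sketch: full-texture quad generated from gl_VertexID */
    out vec2 uv;

    const vec2 corners[4] = vec2[](vec2(0., 0.), vec2(1., 0.),
                                   vec2(0., 1.), vec2(1., 1.));

    void main()
    {
        uv = corners[gl_VertexID];
        // Map [0, 1] texture space to [-1, 1] clip space.
        gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);
    }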

    What matters for how it appears in the destination texture is not the source UVs, but the source positions. After all, you're just rendering. Whether to a texture or to the screen, where things get rendered depends on the positions of the vertices that make up the triangles.

    So the result you get depends on what happens to the vertex positions.

  3. #3 mrMatrix, Junior Member Newbie
    Quote Originally Posted by Alfonse Reinheart:
    It's not clear to me exactly what you're trying to do. It sounds like you're trying to render one texture into another, but it's not clear what the problem is.
    A similar situation would be in a projection painting application like Mari where you paint or load an image into the "paint buffer", then press B to bake that data into the selected object's current texture.

    Quote Originally Posted by Alfonse Reinheart:
    What matters for how it appears in the destination texture is not the source UVs, but the source positions. After all, you're just rendering. Whether to a texture or to the screen, where things get rendered depends on the positions of the vertices that make up the triangles.

    So the result you get depends on what happens to the vertex positions.
    So I should reconstruct P in my gBuffer and multiply that by the object space UVs? Right now the result of the projection seems not to have the MVP matrix applied to it; this is the source of my confusion.

  4. #4 Alfonse Reinheart, Senior Member OpenGL Lord
    Quote Originally Posted by mrMatrix:
    A similar situation would be in a projection painting application like Mari where you paint or load an image into the "paint buffer", then press B to bake that data into the selected object's current texture.
    By this analogy, you would appear to be attempting to take a texture and copy it to another texture, based on projecting the source texture onto an object that is mapped to the destination texture.

    That's... complicated. Doable, but complicated.

    The key here is to remember what you're rendering. You have the following things:

    * A source texture.
    * A destination texture.
    * An object.
    * A projection of the source texture onto that object. So you have some matrices and other settings to do that projection.
    * A mapping from the object's vertices to locations in the destination texture (aka: your object has texture coordinates).

    So, your goal is to render into the destination texture. The question is this: what triangles do you render?

    You can't rasterize the object's actual positions; they're meaningless here. Your goal is to modify the destination texture based on a projection of a texture onto the vertex positions. Because you're rendering to the destination texture, the triangle you actually rasterize needs to represent where that triangle lies in the texture's space.

    So the gl_Position you write from your vertex shader is not based on the position of the object in the world. It's based on the "position" of the object in the destination texture. And therefore, gl_Position should be based on the texture coordinate, the mapping of vertex positions to locations in the destination texture.

    You'll still need the model's positions, but they are not part of computing gl_Position. Your gl_Position values are the texture coordinates, remapped to the [-1, 1] clip-space range of the destination texture viewport.
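    A minimal sketch of that vertex shader for the bake pass (the attribute names and the projectorMVP uniform are placeholders, not anything from the original post):

    Code :
    /* bake pass vertex shader -- sketch with hypothetical names */
    layout(location = 0) in vec3 positionIn; // object-space position
    layout(location = 1) in vec2 uvIn;       // destination-texture coordinates

    uniform mat4 projectorMVP; // MVP from the projecting camera's viewpoint

    out vec4 projCoord; // clip-space position as seen by the projector

    void main()
    {
        // Rasterize in the destination texture's space:
        // remap the [0, 1] UVs to the [-1, 1] clip-space range.
        gl_Position = vec4(uvIn * 2.0 - 1.0, 0.0, 1.0);

        // The model's positions are still needed, but only to compute
        // where the projector sees each vertex (used in the fragment shader).
        projCoord = projectorMVP * vec4(positionIn, 1.0);
    }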

    The next question is how to fetch the right data from the source texture. To do that, you need to perform the normal texture projection steps, based on the positions of your vertices. So you do the usual per-vertex transforms, but you use the projecting camera's view and an appropriate projection matrix. Also, you don't write the result to gl_Position; instead, you pass this 4-dimensional value to the fragment shader as an interpolated output. From there, you use the textureProj functions to do projective texturing.
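    A matching fragment shader sketch, reusing the poster's toBake_64 sampler; the scale-and-bias from [-1, 1] clip space to [0, 1] texture space is folded in before textureProj performs the perspective divide:

    Code :
    /* bake pass fragment shader -- sketch */
    in vec4 projCoord;

    layout(bindless_sampler, location = 3) uniform sampler2D toBake_64;
    layout(location = 0) out vec4 Ci;

    void main()
    {
        // textureProj samples at sc.xy / sc.w, so bias xy by half of w
        // to land the divided coordinate in [0, 1].
        vec4 sc = projCoord;
        sc.xy = sc.xy * 0.5 + 0.5 * sc.w;
        Ci = textureProj(toBake_64, sc);
    }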

    Then just write the fetched value to the fragment shader output.

    You also need to keep in mind backface culling. You probably don't want faces that face away from the projecting texture to get painted. And you can't rely on the vertex positions for that, since they're texture coordinates. So instead, you need to cull triangles based on their normals. You could use a geometry shader for that. Alternatively, you can do the dot product between the normal and the camera viewing direction in the fragment shader, discarding the fragment if the dot product is positive.
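    The fragment-shader variant of that cull could look roughly like this (the normal varying and projector direction uniform are assumptions, not from the original code):

    Code :
    /* inside the bake pass fragment shader -- sketch */
    in vec3 normalWS;          // interpolated world-space normal
    uniform vec3 projectorDir; // world-space direction the projector looks along

    void main()
    {
        // A surface facing away from the projector has a normal pointing
        // along the projection direction; skip those fragments.
        if (dot(normalize(normalWS), projectorDir) > 0.0)
            discard;

        // ... projective fetch and write as above ...
    }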

    I don't really know what any of this has to do with deferred rendering, g-buffers, or anything of that nature. You're just trying to copy part of one texture into another, based on a projection onto a surface that maps into the destination texture.

  5. #5 GClements, Senior Member OpenGL Guru
    Quote Originally Posted by Alfonse Reinheart:
    You also need to keep in mind backface culling. You probably don't want faces that face away from the projecting texture to get painted.
    And you may also want to use a depth pass so that the texture doesn't get applied to camera-facing surfaces which are occluded by nearer surfaces. The principle behind that is essentially the same as shadow mapping.
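    Sketched out, that test looks much like a shadow-map lookup (assuming a depth texture, here called projDepth_64, rendered beforehand from the projector's viewpoint):

    Code :
    /* occlusion test inside the bake pass fragment shader -- sketch */
    uniform sampler2D projDepth_64; // depth rendered from the projector's viewpoint
    in vec4 projCoord;

    void main()
    {
        // Perspective divide, then remap [-1, 1] NDC to [0, 1].
        vec3 ndc = projCoord.xyz / projCoord.w * 0.5 + 0.5;
        float nearest = texture(projDepth_64, ndc.xy).r;
        if (ndc.z > nearest + 0.001) // small bias, as in shadow mapping
            discard;                 // occluded by a nearer surface

        // ... otherwise fetch and write the projected texel ...
    }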

  6. #6 Alfonse Reinheart, Senior Member OpenGL Lord
    Quote Originally Posted by GClements:
    And you may also want to use a depth pass so that the texture doesn't get applied to camera-facing surfaces which are occluded by nearer surfaces. The principle behind that is essentially the same as shadow mapping.
    That's going to be rather difficult when rendering into a texture. Remember: the positions are locations in the texture being rendered to. And the texture mapping probably doesn't overlap.

    You could do some manual depth buffering with image load/store, but the automatic depth buffer won't work for this approach.
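    One way that manual depth buffering could look (a sketch only, with hypothetical names; it exploits the fact that non-negative IEEE floats keep their ordering when reinterpreted as unsigned ints, and it needs a first pass that fills the depth image before a second pass tests against it):

    Code :
    /* shared declaration -- image cleared to 0xFFFFFFFF before pass 1 */
    layout(r32ui) coherent uniform uimage2D depthImg;

    /* pass 1: record the nearest projector-space depth per projector texel.
       d = projector-space depth in [0, 1]; texel = projector-space ivec2. */
    imageAtomicMin(depthImg, texel, floatBitsToUint(d));

    /* pass 2: only the surface nearest to the projector gets stamped. */
    if (floatBitsToUint(d) > imageLoad(depthImg, texel).r)
        discard;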
