Blend two textures with different coordinates and sizes in the same shader

I have two textures with different coordinates and sizes in my fragment shader:

precision highp float; // fragment shaders need a default float precision in GLSL ES

varying highp vec2 v_currentTextureCoords;
varying highp vec2 v_backgroundTextureCoords;
uniform sampler2D u_currentTexture;
uniform sampler2D u_backgroundTexture;

void main()
{
    // Sample each texture with its own interpolated coordinates.
    vec4 currentColor = texture2D(u_currentTexture, v_currentTextureCoords);
    vec4 backgroundColor = texture2D(u_backgroundTexture, v_backgroundTextureCoords);

    // Multiply blend of the two samples.
    gl_FragColor = currentColor * backgroundColor;
}

Here is the corresponding vertex shader:

attribute vec4 a_position;
attribute vec2 a_currentTextureCoords;
attribute vec2 a_backgroundTextureCoords;
varying vec2 v_currentTextureCoords;
varying vec2 v_backgroundTextureCoords;

void main()
{
    gl_Position = a_position;
    v_currentTextureCoords = a_currentTextureCoords;
    v_backgroundTextureCoords = a_backgroundTextureCoords;
}

Those shaders are responsible for rendering u_currentTexture.

As you can see above, the two textures are:

[ul]
[li]u_backgroundTexture: a video stream, rendered full screen, size 1080x1920;[/li]
[li]u_currentTexture: can be any image, size 469x833 (smaller, but same aspect ratio).[/li]
[/ul]
To keep it simple, for now I don’t want to blend anything, just display the pixels of u_backgroundTexture in the shader program that renders u_currentTexture.

[Attachment 1417: screenshot of the result with the shaders above]

As you can see, the image rendered with the shaders above (the one in the top-left corner, not the whole picture) is the background image scaled down to fit into the smaller rectangle. That’s not what I want.

I want to display the pixels that are “behind” u_currentTexture (i.e. those of u_backgroundTexture), so that in the end one wouldn’t even notice there are two textures.

But since the textures have different sizes and coordinates, the shaders above don’t give that result at all (what you see above is what I currently get).
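
To make the goal more concrete, here is a rough sketch of the effect I’m describing (this is not my actual code; u_viewportSize is a hypothetical uniform holding the render-target size in pixels, and I’m ignoring any Y-flip the video texture may need):

precision highp float;

uniform sampler2D u_backgroundTexture;
uniform vec2 u_viewportSize; // hypothetical: render-target size in pixels

void main()
{
    // Sample the background at this fragment's position on screen, so the
    // small quad shows exactly the background pixels it covers.
    vec2 screenUV = gl_FragCoord.xy / u_viewportSize;
    gl_FragColor = texture2D(u_backgroundTexture, screenUV);
}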

Then, in my fragment shader, I managed to “scale” the texture so that the image in the top-left corner has the same “zoom” as the background image:

[Attachment 1418: screenshot of the result after scaling the background coordinates]

To do this, I modified my fragment shader:

precision highp float; // fragment shaders need a default float precision in GLSL ES

varying highp vec2 v_currentTextureCoords;
varying highp vec2 v_backgroundTextureCoords;
uniform sampler2D u_currentTexture;
uniform sampler2D u_backgroundTexture;
uniform vec2 u_scaleRatio; // Notice here

void main()
{
    vec4 currentColor = texture2D(u_currentTexture, v_currentTextureCoords);
    // Scale the background coordinates so the background gets the same "zoom".
    vec4 backgroundColor = texture2D(u_backgroundTexture, v_backgroundTextureCoords * u_scaleRatio); // And here

    gl_FragColor = currentColor * backgroundColor;
}

I set u_scaleRatio in my program with glUniform2fv(). The values are basically (pseudo-code):

u_scaleRatio = vec2(currentTextureWidth / backgroundTextureWidth, currentTextureHeight / backgroundTextureHeight);
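
With the sizes above, that gives roughly vec2(469.0 / 1080.0, 833.0 / 1920.0) ≈ vec2(0.434, 0.434); both components are (almost) equal because the two textures have the same aspect ratio.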

As you can see, it’s almost working, but there seems to be an offset on the X axis: the small image in the top-left corner actually shows what the background displays in its top-right corner… I can’t find a way to correct it.
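
To be explicit about what I mean by “offset”: I imagine the correction would look something like the line below, but I have no idea what value would be correct (u_offset is purely illustrative, I don’t pass any such uniform at the moment):

uniform vec2 u_offset; // hypothetical uniform, value unknown

// ...and inside main(), instead of the current background sample:
vec4 backgroundColor = texture2D(u_backgroundTexture, v_backgroundTextureCoords * u_scaleRatio + u_offset);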

How can I modify my shaders so I can correct this offset, knowing that the textures have different sizes and coordinates?

Since nobody has answered my question, could you tell me whether it’s because:

[ul]
[li]you don’t understand it (poor description, not enough details, unclear question…);[/li]
[li]you understand the question but don’t know the answer.[/li]
[/ul]
Thank you.

EDIT: correction: in the fragment shader, the line below

gl_FragColor = currentColor * backgroundColor;

is actually

gl_FragColor = backgroundColor;