I have a set of four textures. I need to render a QUAD of fullscreen size, combining R, G, and B pixel components from the four textures. I want to combine them as shown below:
DEST: R G B R G B R G B R G B…
SOURCE: T1R T2G T3B T4R T1G T2B T3R T4G T1B T2R T3G T4B …
I.e., pixel 1 in the rendered image is formed by combining R from texture1, G from texture2, and B from texture3.
Similarly, pixel 2 in the rendered image is formed by combining R from texture4, G from texture1, and B from texture2, and so on.
Is this possible using OpenGL multitexturing?
Any help?
Well, I trust Nvidia to have a highly optimized GLSL compiler.
That does not mean that all GL vendors are on the same level.
Better safe than sorry, as they say.
Well, if you haven’t learned GLSL, now is the time.
There is a potential problem with that code: the majority of the texturing happens in non-uniform control flow. That means the gradients aren't available, so your texture functions may have problems.
The better way to arrange this code is as follows:
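A sketch of that arrangement (assuming samplers named Texture0..Texture3, GLSL 1.20, and that gl_FragCoord.x gives the destination pixel column):

```glsl
#version 120
uniform sampler2D Texture0;
uniform sampler2D Texture1;
uniform sampler2D Texture2;
uniform sampler2D Texture3;

// Select one of four already-fetched texel values; no texture access
// happens inside this (possibly divergent) branch chain.
vec3 pick(int i, vec3 a, vec3 b, vec3 c, vec3 d)
{
    if (i == 0) return a;
    if (i == 1) return b;
    if (i == 2) return c;
    return d;
}

void main()
{
    // All four fetches occur unconditionally, so derivatives stay defined.
    vec3 t0 = texture2D(Texture0, gl_TexCoord[0].st).rgb;
    vec3 t1 = texture2D(Texture1, gl_TexCoord[0].st).rgb;
    vec3 t2 = texture2D(Texture2, gl_TexCoord[0].st).rgb;
    vec3 t3 = texture2D(Texture3, gl_TexCoord[0].st).rgb;

    // Component index k = 3*pixel + channel; source texture = k mod 4.
    int k = 3 * int(gl_FragCoord.x);
    float r = pick(int(mod(float(k),     4.0)), t0, t1, t2, t3).r;
    float g = pick(int(mod(float(k + 1), 4.0)), t0, t1, t2, t3).g;
    float b = pick(int(mod(float(k + 2), 4.0)), t0, t1, t2, t3).b;
    gl_FragColor = vec4(r, g, b, 1.0);
}
```

The fetches come first and the branching only shuffles values that are already in registers, so the gradient problem goes away.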
After studying GLSL basics, I created and compiled the above fragment shader. I set my RGB textures to the samplers in the fragment shader using a for loop, as shown below:
for(int i = 0; i < 4; i++)
{
char name[256];
sprintf_s(name, 256, "Texture%d", i);
int my_sampler_uniform_location = glGetUniformLocation(program, name);
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(GL_TEXTURE_2D, g_videoTextures[i]);
glUniform1i(my_sampler_uniform_location, i);
}
Then I draw a QUAD of fullscreen size. But the QUAD is completely white.
Am I missing any step? Do I need to create vertex shader as well?
Please help.
One more thing: have you checked the error bit in the render routine? Add this call to your render function; it will trigger an assertion if the error bit is set.
// #include <cassert> at the beginning
assert(glGetError() == GL_NO_ERROR);
If you are pushing in the texture coordinates using the glTexCoord* functions, these coordinates arrive in the gl_MultiTexCoord* built-ins in the vertex shader, so you can copy the value into gl_TexCoord[0] as follows:
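A minimal pass-through vertex shader for this fixed-function-style setup might look like this (a sketch for the compatibility profile):

```glsl
#version 120
void main()
{
    // Copy the fixed-function texcoord so the fragment shader
    // can read it through gl_TexCoord[0].
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```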
Note this is all pre-OpenGL 3 shader handling. In OpenGL 3 and above you need to handle the per-vertex attributes and matrices yourself, and there are no built-in uniforms. Just reminding you that while this works in earlier OpenGL versions, it won't work in modern OpenGL (the core profile, so to speak).
Basic things seem to be working for me. Thanks for the help. I am now stuck at a point; let me explain the situation.
I have four RGB textures bound to four video input streams, and a rendering pattern map of size = screensize * 3. The pattern is like this:
DEST: pixel#1 pixel#2 pixel#3 pixel#4
MAP: 3 0 1 3 2 1 0 2 1 3 1 2…
For example:
To create pixel#1 in the output, take R from texture3, G from texture0 and B from texture1.
For pixel#2, take R from texture3, G from texture2 and B from texture1.
I could bind the RGB video textures to the samplers in the fragment shader. But the problem is how to read the pattern map in the shader. I have already tried the option below:
Create an RGB/RGBA texture from the pattern map values, bind it to a 2D sampler, and use texture2D() to read values from the map. But it does not show the correct values in the shader.
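Two common causes of wrong values there: GL_LINEAR filtering blends neighbouring map entries, and GL_UNSIGNED_BYTE data arrives in the shader normalized to [0,1] rather than as 0..3. A sketch that accounts for both, assuming the map is uploaded as a GL_RGB8 texture (one texel per output pixel, min/mag filters set to GL_NEAREST) bound to a sampler I've called PatternMap, with the screen size passed in a uniform:

```glsl
#version 120
uniform sampler2D Texture0;
uniform sampler2D Texture1;
uniform sampler2D Texture2;
uniform sampler2D Texture3;
uniform sampler2D PatternMap;  // each texel's R,G,B = source texture index 0..3
uniform vec2 screenSize;       // e.g. vec2(1400.0, 1050.0)

// Select an already-fetched texel; keeps texture access out of the branches.
vec3 pick(int i, vec3 a, vec3 b, vec3 c, vec3 d)
{
    if (i == 0) return a;
    if (i == 1) return b;
    if (i == 2) return c;
    return d;
}

void main()
{
    vec3 t0 = texture2D(Texture0, gl_TexCoord[0].st).rgb;
    vec3 t1 = texture2D(Texture1, gl_TexCoord[0].st).rgb;
    vec3 t2 = texture2D(Texture2, gl_TexCoord[0].st).rgb;
    vec3 t3 = texture2D(Texture3, gl_TexCoord[0].st).rgb;

    // Bytes 0..3 come back as 0/255..3/255; scale up and round to recover them.
    vec3 m = texture2D(PatternMap, gl_FragCoord.xy / screenSize).rgb * 255.0 + 0.5;
    float r = pick(int(m.r), t0, t1, t2, t3).r;
    float g = pick(int(m.g), t0, t1, t2, t3).g;
    float b = pick(int(m.b), t0, t1, t2, t3).b;
    gl_FragColor = vec4(r, g, b, 1.0);
}
```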
There are different ways to resize.
You can create a render to texture of size 1400x1050 and render a fullscreen quad along with the first texture. This will at least use the GPU.
You can use GLU to rescale an image: gluScaleImage.
There is my own library, glhlib: glhScaleImage_asm386. See my signature.
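As a sketch of the GLU route (a current GL context is required, since gluScaleImage reads the pixel-store state; the destination size and buffers below are made-up placeholders):

```cpp
#include <GL/glu.h>
#include <vector>

// Source is the 1400x1050 RGB image mentioned above; the destination
// buffer must be pre-allocated for the target size.
std::vector<unsigned char> src(1400 * 1050 * 3);  // filled elsewhere
std::vector<unsigned char> dst(1024 *  768 * 3);  // hypothetical target size

GLint err = gluScaleImage(GL_RGB,
                          1400, 1050, GL_UNSIGNED_BYTE, src.data(),
                          1024,  768, GL_UNSIGNED_BYTE, dst.data());
// err is 0 on success, otherwise a GLU error code
```

Note this runs on the CPU; the render-to-texture approach above is the one that actually uses the GPU.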