Hello,
I would like to use OpenCV with OpenGL for texture mapping. With OpenCV, the RGB channel values can be manipulated. Now suppose my texture XY coordinate range is (-4, -4) to (4, 4), and my image has been texture-mapped to a window of dimension (500, 500). For mapping between coordinates and pixels, I have used the following relationship:
(x, y) from (0, 0) to (8, 8) maps to pixels (0, 0) to (500, 500);
now for a particular (x, y):
x += 4;
y = 4 - y;   // this single formula flips y correctly for both positive and negative y
corresponding pixel value: (x * 500/8, y * 500/8);
Is this mapping between (x, y) coordinates and pixels correct, or is a better method available?
Hi, I need a reply to my query fairly urgently. Please let me know whether I can separate the RGB channel values using OpenGL. I would like to touch some point of my textured plane and find out the RGB value of the pixel at that point. I am not using a mouse; I am using a haptic device to touch the plane. Thanks
glReadPixels will allow you to read from the render buffer.
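A minimal sketch of reading one pixel back (assumes a current GL context and that the scene has already been rendered; the helper name `read_pixel_rgb` is hypothetical). Note that glReadPixels takes window coordinates with the origin at the *bottom-left*, so flip y if your input device reports coordinates from the top-left:

```c
#include <GL/gl.h>

/* Read the RGB value of a single pixel from the framebuffer.
   (x, y) are window coordinates; OpenGL's origin is the bottom-left,
   so convert with y = windowHeight - 1 - y if you start from the top-left. */
void read_pixel_rgb(int x, int y, unsigned char rgb[3])
{
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* avoid row-padding surprises */
    glReadBuffer(GL_BACK);                 /* read the buffer just rendered to */
    glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
}
```

This works regardless of the projection (glOrtho or gluPerspective), since glReadPixels reads the finished framebuffer, not the geometry.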
Thanks for the reply. I have some simple questions:
1) Do I need to use glOrtho to use glReadPixels, or does it also work with gluPerspective?
2) Do I need the textured image to fill the whole screen to use glReadPixels?
3) In the following function:
void glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid* data);
x, y are the position of the pixel to read,
width, height are (1, 1) if they correspond to 1 pixel; can they correspond to more than one, and if so, what values are returned for the different pixels' RGB values?
Answers to these questions would be highly appreciated. Thanks in advance.
Thanks for the reply. If some example code were given for the different cases mentioned in (2), it would be easier for me to understand.
Do you need an example for a frame buffer object or just the default render buffer?