Ok, so as the title suggests, I’m trying to combine OpenGL with OpenCV (Computer Vision).
Originally, I was just trying to track objects based on color using OpenCV. I actually have that working pretty well. I have a PlayStation Move controller hooked up via Bluetooth, I can set the color, and track it.
After I got that working, I thought it’d be cool to integrate my OpenCV project with a simple OpenGL program. The idea was to have my object tracking code update a sphere created in OpenGL and move it on screen in relation to the user’s movements.
So here’s what I’ve got: (Disclaimer: I’m very new to OpenGL)
Inside my OpenGL display function:
[ul]
[li]I find and track the circle created by the PlayStation Move controller on screen (by finding the center of the circle).[/li]
[li]On each frame I compare the x and y values of the current center of the circle to the x and y values of the center found in the previous frame. This is how I determine movement and adjust the OpenGL program.[/li]
[li]If the center has changed by at least 4 pixels in any direction, I consider that movement and adjust the sphere on screen accordingly.[/li]
[/ul]
So, that's the basic idea: determine movement by comparing x and y values and translate the sphere in the appropriate direction.
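To make the logic concrete, here's a stripped-down sketch of the per-frame comparison I described (the names `Tracker`, `kDeadZone`, `kStep` are made up for illustration, not from my real code, and I've left out the OpenCV types):

```cpp
#include <cmath>

// Minimal sketch of the frame-to-frame comparison.
// Pixel deltas smaller than the dead zone are ignored; larger ones nudge the sphere.
struct Tracker {
    int   prevX = -1, prevY = -1;   // center from the previous frame (-1 = none yet)
    float sphereX = 0.0f, sphereY = 0.0f;
    static constexpr int   kDeadZone = 4;     // minimum pixel change counted as movement
    static constexpr float kStep     = 0.05f; // sphere translation per detected move

    void update(int cx, int cy) {
        if (prevX >= 0) {
            int dx = cx - prevX;
            int dy = cy - prevY;
            if (std::abs(dx) >= kDeadZone) sphereX += (dx > 0 ?  kStep : -kStep);
            // OpenCV's y axis grows downward, OpenGL's grows upward, so flip the sign.
            if (std::abs(dy) >= kDeadZone) sphereY += (dy > 0 ? -kStep :  kStep);
        }
        prevX = cx;
        prevY = cy;
    }
};
```

Each call feeds in the circle center found by OpenCV for that frame, and the sphere position only moves in fixed steps, which is probably part of why fast motion looks jerky.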
This seems to work “ok”, at best. I get reasonable results if I move the controller slowly and only in left/right or up/down directions. It does respond to diagonal movement, but it doesn’t look nearly as smooth. I get horrible results if I try to move the controller at any kind of fast pace. The sphere on screen just sort of freaks out and jerks around trying to keep up, but definitely does not respond in a natural way.
I’m posting this here because my OpenCV/object tracking code seems to work; my issue seems to lie in finding a way to get the results from OpenCV to OpenGL in a format that is meaningful.
Am I taking this in the right direction? I can’t help but think that some of my issues come from trying to relate differences in pixel locations to translations in OpenGL. I assume there’s no way that I can relate my OpenCV coordinate system (0,0 in the top-left corner) to OpenGL’s and just draw the sphere at the correct coordinates automatically, right? If not, and translating is the only way to go, how should I be approaching this?
I hope this makes sense. I tried to break down the problem as much as I could without writing a book, heh.
Any help is appreciated.