I'm new to OpenGL and I have the following problem: I get images from a camera very fast, up to 25 frames per second. To display the images I use the Qt toolkit, and there I use the QGLWidget.
My problem is that the images I get from the camera have a resolution of 1600 x 1200. The widget I use can be dynamically resized, and I have to scale the image to fit it.
To draw the image bits to the widget I use the following function:
But the image isn't scaled. Then I found this function to scale images:
gluScaleImage
This function is so slow that my CPU load goes up to 66%. My question is: how can I scale the image to the widget quickly and in a resource-friendly way? Is there any OpenGL function that could help me with this problem?
If possible, please show me a little example; at the moment it is not easy for me to find the right GL functions.
Can you bind the image you have as a texture, and then just apply it to a screen-aligned quad at the resolution you require?
Or, I'm not sure if this is exactly helpful, but one option might be to render as you currently do into a Frame Buffer Object, then render a second pass at the resolution you require, using the result of the first pass as a texture applied to a screen-aligned quad.
Thank you for your reply. It's not easy for me to follow your hint because I'm really new to OpenGL. Could you give me a little example code?
Then it's a bit easier to understand what you mean.
What he means is that instead of copying your image directly to the render buffer with glDrawPixels, you should copy it into the GPU's texture memory by calling glTexImage2D and then simply draw a quad covered with this texture - this part will work fast.
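In code, the idea looks roughly like this (an untested sketch; `frameData` stands for your camera buffer, error checking is omitted, and you need `<GL/gl.h>`):

```cpp
// Upload the camera frame as a texture (do this when a new frame arrives).
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1600, 1200, 0,
             GL_RGB, GL_UNSIGNED_BYTE, frameData);

// In paintGL(): draw a quad covering the whole viewport. The texture
// coordinates flip the image vertically, since image data is usually
// stored top row first while GL's origin is bottom-left.
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
```

Because the quad always fills the viewport, the GPU does the scaling for free whenever the widget is resized - no gluScaleImage needed.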
First, read some tutorials on texturing and get it working.
Then optimize it:
use glTexImage2D only once, to create an empty 1600x1200 texture during initialization
use glTexSubImage2D to upload new data into this texture
And a further optimization (very simple but requires shaders):
use GPU-friendly formats when uploading the texture to the GPU, like GL_BGR or GL_BGRA.
if the images captured from the camera are not in one of these GPU-friendly formats, still send them to the GPU in a friendly format, but then use a fragment shader to swap the red and blue components on the fly.
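As a sketch of that last point, a minimal fragment shader that swaps red and blue could look like this (untested; the usual glCreateShader/glShaderSource/glCompileShader/glUseProgram boilerplate is omitted, and `frame` is an assumed uniform name):

```cpp
// GLSL 1.10-style fragment shader, stored as a C string for glShaderSource.
const char* fragSrc =
    "uniform sampler2D frame;\n"
    "void main() {\n"
    "    vec4 c = texture2D(frame, gl_TexCoord[0].st);\n"
    "    gl_FragColor = vec4(c.b, c.g, c.r, c.a); // swap red and blue\n"
    "}\n";
```

The swizzle costs essentially nothing on the GPU, so the CPU never has to touch the pixel data to reorder channels.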
A final optimization (it doesn't increase upload speed but allows more parallelism between GPU and CPU if you have use for it):
use the pixel buffer object extension to upload images instead of plain glTexSubImage2D
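A rough, untested sketch of the PBO route (assumes a context with ARB_pixel_buffer_object or OpenGL 2.1 and extension headers/GLEW; 3-byte GL_BGR pixels as suggested above; `cameraFrame`, `frameNumber`, and `tex` are placeholders for your own variables):

```cpp
// Init: two pixel buffer objects, so the CPU can fill one
// while the GPU still reads from the other.
const int SIZE = 1600 * 1200 * 3;
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, SIZE, 0, GL_STREAM_DRAW);
}

// Each frame: write the new image into one PBO...
int index = frameNumber % 2;
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[index]);
void* ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
memcpy(ptr, cameraFrame, SIZE);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

// ...and let glTexSubImage2D read from the other one. With a PBO
// bound, the last argument is an offset into the buffer, not a pointer.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[1 - index]);
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1600, 1200,
                GL_BGR, GL_UNSIGNED_BYTE, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
```

Since glTexSubImage2D reads from a buffer the CPU is no longer touching, the driver can overlap the texture transfer with your next memcpy.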
"My problem is that the images I get from the camera have a resolution of 1600 x 1200. The widget I use can be dynamically resized, and I have to scale the image to fit it."
By "camera", do you mean a camera in your scene?
A fast way to do that while avoiding frame buffer objects (for beginners) is to copy your backbuffer (your rendered scene) directly to a texture using glCopyTexImage2D. This is fast since it is done from VRAM to VRAM by the GPU. After that you have a texture with your scene and can draw it again using a fullscreen quad (or lower-resolution quad) with proper texture filtering.
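A small, untested sketch of that approach (`sceneTex` is an assumed texture handle, and 512x512 is just an example size, since older GPUs want power-of-two textures):

```cpp
// After rendering the scene, copy the lower-left 512x512 region of the
// backbuffer straight into sceneTex - no CPU round trip involved.
glBindTexture(GL_TEXTURE_2D, sceneTex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 512, 512, 0);
// ...then clear and draw a fullscreen quad textured with sceneTex.
```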
1600x1200 may suggest that this is a rendered scene, but from the description in the first post I think he meant a real camera.
As for your code, Treehouse:
Init:
glGenTextures
glBindTexture
glTexImage2D
glTexParameteri - set GL_TEXTURE_MIN_FILTER to either GL_NEAREST or GL_LINEAR.
In main loop, use:
glBindTexture
glTexSubImage2D
glEnable(GL_TEXTURE_2D)
…render…
glDisable(GL_TEXTURE_2D)
Look at any texturing tutorial - you will find exactly these steps in the code (except for glTexSubImage2D, which you use to dynamically update the texture).
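Put together inside a QGLWidget subclass, the steps above might look like this (an untested sketch; `MyGLWidget`, `m_tex`, `frame()` and `drawFullscreenQuad()` are placeholder names, not Qt or GL API):

```cpp
void MyGLWidget::initializeGL() {
    glGenTextures(1, &m_tex);
    glBindTexture(GL_TEXTURE_2D, m_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Allocate storage once; the data pointer may be NULL here.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1600, 1200, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);
}

void MyGLWidget::paintGL() {
    glBindTexture(GL_TEXTURE_2D, m_tex);
    // Replace the texel data with the newest camera frame - no
    // reallocation, just an upload into the existing texture.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1600, 1200,
                    GL_RGB, GL_UNSIGNED_BYTE, frame());
    glEnable(GL_TEXTURE_2D);
    drawFullscreenQuad();   // the screen-aligned quad from the earlier posts
    glDisable(GL_TEXTURE_2D);
}
```

Call `updateGL()` whenever a new camera frame arrives, and Qt will schedule a repaint that runs paintGL() with the fresh data.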