View Full Version : glReadPixels differing speed
05-29-2002, 09:26 AM
I have an interesting problem.
I wrote a sample application that reads the pixels from a small box in the center of the screen. Dimensions are typically 32x32.
I am working with a GeForce3.
The time it takes is about 0.08 ms.
This is fine.
If I run the same subroutine (actually, I have timed it down to the glReadPixels call) in a far more complex app, then it takes about 0.4 ms, five times longer.
The first program was very simple, it just had a model rotating in the scene, whereas the second program has all sorts of crud in it.
I don't understand why this makes a difference though, since it should be reading the framebuffer, and not doing depth-testing or anything like that.
Anybody have any ideas?
BTW I also ran program #1 while #2 was running, to see if the card was simply getting maxed out, but the speeds remained the same for both programs.
05-29-2002, 09:48 AM
Getting data back from the GPU is not slow because the transfer itself is slow (yes, it _IS_ slow as well, but not for 32x32 pixels ;))
It is slow because it stalls the pipeline: you never know WHEN the GPU has finished drawing and can start sending back the data you want.
See here for more info: http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/006489.html
Jeffry J. Brickley
05-29-2002, 10:05 AM
Try setting up OpenGL to render to a texture; there are some demo code sets in the OpenGL.org news that have popped up to do just that. As I understand it (which I may not), that doesn't break the OpenGL pipeline, and it is therefore a little safer than using glReadPixels().
05-29-2002, 11:08 AM
Thanks for the quick replies. I should have included an additional piece of information.
I render straight to texture for other parts of this app, which is MUCH faster than doing a glReadPixels and then drawing the result.
This time, I don't actually want to draw anything; I only want to see what is underneath and do some simple math. If I render to texture with glCopyTexSubImage or something similar, then I still need to do something so I can actually look at the data it has fed the texture.
glTexImage2D sets up the texture with the data, but how do you read it back once you have the handle?
Reading back is the part that is slow, right?
05-29-2002, 11:30 AM
Reading back means waiting until the GPU has actually processed all your fancy commands. You think the GPU is already finished after you call glCopyTexSubImage? It hasn't even BEGUN yet! It will do it as soon as it has finished all its other jobs. Every readback is slow if you don't tell the GPU in advance that you want something back, and there are only very few features where you can request something and get it later.
To view your stuff in the texture, just draw it on screen ;)
05-29-2002, 11:41 AM
So I guess the end result here is that I have to use glReadPixels and live with the slow time.
It seems that copying straight to texture is great if you do not have to actually analyze the image data.
To read the image data back into CPU memory, you end up doing a glReadPixels, which stalls the pipeline and causes delays. Is this about the crux of it?
BTW the actual goal here is to read the pixels and make a decision based on the range of luminosity.
05-29-2002, 11:51 AM
Every glGet in any form _WILL STALL THE PIPELINE_ if you need data that is _NOT YET GENERATED_. So copy to tex, do something else, and then the glGet _could_ be faster, yeah. Dunno ;)
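The suggested ordering, as a pseudocode sketch (it needs a live GL context, a bound texture `tex`, and a `pixels` buffer, so it is not runnable as-is; glGetTexImage is the generic call for pulling a texture back, and even it can still stall if the copy has not completed by the time you ask):

```
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, 32, 32); /* queue GPU-side copy */
/* ... render or simulate something else, giving the GPU time to finish ... */
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels); /* readback */
```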
Powered by vBulletin® Version 4.2.3 Copyright © 2017 vBulletin Solutions, Inc. All rights reserved.