histogram

Is there some way of obtaining a histogram of a projection without using glDrawPixels or glCopyPixels first?

thanks

Hmm…
Can you explain your problem in more detail?

Look into the ARB_imaging extension. glHistogram() ought to be what you’re looking for. Don’t expect it to run very fast on consumer-level hardware, though.
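For what it’s worth, the basic call sequence looks roughly like this (a minimal sketch, assuming the driver actually exports the GL_ARB_imaging subset; on Windows the entry points come from wglGetProcAddress). Note the histogram is collected during a pixel transfer, so you still need something like a glReadPixels to trigger it:

[code]
GLuint counts[256 * 3];  /* 256 bins, R/G/B components per bin */
GLubyte dummy[4];

glHistogram(GL_HISTOGRAM, 256, GL_RGB, GL_TRUE);  /* sink = GL_TRUE */
glEnable(GL_HISTOGRAM);

/* Any pixel transfer from the framebuffer feeds the histogram. With
 * sink = GL_TRUE the pixels are consumed at the histogram stage, so
 * nothing should be written to dummy (per the imaging spec). */
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, dummy);

glGetHistogram(GL_HISTOGRAM, GL_TRUE, GL_RGB, GL_UNSIGNED_INT, counts);
glDisable(GL_HISTOGRAM);
[/code]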

As al_bob said, it runs slowly because it’s implemented in software (on NV and possibly ATI cards).
Only a few 3Dlabs cards (Wildcat4) and SGI/Sun/IBM video boards support it in hardware.

Originally posted by nat-marques:
[b]Is there some way of obtaining a histogram of a projection without using glDrawPixels or glCopyPixels first?

thanks[/b]

As has been said: yes, but not really, since glHistogram may be slower than reading back the framebuffer and doing the summation yourself (using MMX and SSE if you can).
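For comparison, the read-back-and-sum path is just this (a plain-C sketch; an MMX/SSE version would vectorize the inner loop, and a readback format matching the framebuffer layout, e.g. BGRA where EXT_bgra is available, would typically transfer faster):

[code]
#include <GL/gl.h>
#include <stdlib.h>
#include <string.h>

void cpu_histogram(int width, int height, unsigned int hist[3][256])
{
    GLubyte *px = malloc((size_t)width * height * 4);
    int i, n = width * height;

    memset(hist, 0, 3 * 256 * sizeof(unsigned int));
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, px);

    for (i = 0; i < n; ++i) {
        ++hist[0][px[i * 4 + 0]];  /* R */
        ++hist[1][px[i * 4 + 1]];  /* G */
        ++hist[2][px[i * 4 + 2]];  /* B */
    }
    free(px);
}
[/code]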

I’ve toyed around with using a fragment program to do this, but as far as I can tell it’s not going to be faster than doing it on the CPU.

The idea (in case anyone can improve on it and post their results) is that for each color in the histogram, we can figure out how many pixels in the framebuffer equal that color. The outgoing fragment is set to white if the corresponding texel in a bound screen-mapped texture (the original framebuffer) equals a reference value, and black if not. Binding the result back to a texture and using HW mipmap generation, the 1x1 mip level gives a rough percentage of the coverage of that reference color (at 8-bit precision). Repeating one pass per color (or ideally 3 to 4 colors at a time, using the 3-4 independent color channels) builds up a new image containing the coverage percentages in an array. That array still ultimately requires a glReadPixels to use, but it is hopefully much smaller than the original framebuffer to transfer (it certainly can’t be any larger). A sketch of the comparison pass follows below.
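Here’s the comparison pass expressed as an ARB_fragment_program string (just a sketch: the screen-mapped texture on unit 0, the reference color in program.local[0], and a per-channel epsilon in program.local[1] are all my assumptions):

[code]
/* Sketch: one coverage pass. Writes white where the screen-mapped texel
 * (texture unit 0) matches the reference color, black elsewhere. */
static const char *coverage_fp =
    "!!ARBfp1.0\n"
    "TEMP texel, diff;\n"
    "TEX texel, fragment.texcoord[0], texture[0], 2D;\n"
    "SUB diff, texel, program.local[0];\n"  /* distance to reference color */
    "ABS diff, diff;\n"
    "SLT diff, diff, program.local[1];\n"   /* per channel: |d| < eps ? 1 : 0 */
    "MUL diff.x, diff.x, diff.y;\n"         /* AND the channel tests */
    "MUL diff.x, diff.x, diff.z;\n"         /* by multiplying them */
    "MOV result.color, diff.x;\n"           /* white on match, black otherwise */
    "END\n";
[/code]

After rendering this pass, copy the result to a texture with hardware mipmap generation enabled (e.g. SGIS_generate_mipmap); the 1x1 level then holds the coverage fraction.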

Like I said, it’s not going to be faster than glReadPixels and a CPU summation.

The one caveat to all this is that using the Microsoft GDI functions to get at the framebuffer seems faster than glReadPixels (non-PDR, at least). See MovieMaker.cpp in the NVIDIA OGL SDK for a reference. So technically, yes, if you use this and do your own summation on the CPU.
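Something along these lines (a Win32 sketch in the spirit of MovieMaker.cpp; hwnd/width/height describing the GL window are assumptions here, and whether the blit actually sees the GL surface is driver-dependent):

[code]
#include <windows.h>

/* Blit the window contents into a top-down 32-bit DIB; the returned
 * bits are BGRX rows, ready for a CPU summation. */
void *grab_with_gdi(HWND hwnd, int width, int height, HBITMAP *out_bmp)
{
    HDC winDC = GetDC(hwnd);
    HDC memDC = CreateCompatibleDC(winDC);
    BITMAPINFO bmi;
    void *bits = NULL;

    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   /* negative height = top-down */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    *out_bmp = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    SelectObject(memDC, *out_bmp);
    BitBlt(memDC, 0, 0, width, height, winDC, 0, 0, SRCCOPY);

    DeleteDC(memDC);
    ReleaseDC(hwnd, winDC);
    return bits;
}
[/code]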

Avi

[This message has been edited by Cyranose (edited 11-13-2003).]