I’m generating a glow by:
- rendering the object to a low-res (256x256) texture,
- then ping-ponging the texture between 2 pbuffers, rendering it onto a quad and running it through a fragment shader that blurs the texture (roughly the loop sketched below).
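For context, the ping-pong pass looks roughly like this. It's a simplified sketch: `pbuffer_make_current`, `pbuffer_bind_as_texture` and `draw_fullscreen_quad` are placeholder names for my platform-specific pbuffer plumbing (WGL_ARB_pbuffer / WGL_ARB_render_texture), not real GL calls:

```c
/* Simplified ping-pong blur loop. The helpers below are placeholders
 * for the platform-specific pbuffer plumbing, not real GL calls. */
typedef struct PBuffer PBuffer;
void pbuffer_make_current(PBuffer *pb);    /* make pb the render target */
void pbuffer_bind_as_texture(PBuffer *pb); /* bind pb's color buffer    */
void draw_fullscreen_quad(void);           /* quad covering the 256x256
                                              target, drawn with the
                                              blur shader bound         */

void blur_ping_pong(PBuffer *a, PBuffer *b, int passes)
{
    PBuffer *src = a, *dst = b;
    for (int i = 0; i < passes; ++i) {
        pbuffer_make_current(dst);     /* render into dst...       */
        pbuffer_bind_as_texture(src);  /* ...sampling from src     */
        draw_fullscreen_quad();        /* one blur pass            */
        PBuffer *tmp = src;            /* swap roles for next pass */
        src = dst;
        dst = tmp;
    }
}
```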
Now, the fragment shader needs to do 4 texture lookups (sampling the 4 neighboring fragments) and set the fragment color to the average of these 4 samples.
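In GLSL terms it's essentially the following (I keep the source as a C string and feed it to glShaderSource(); `texel` is a uniform I set to 1.0/256.0 for this texture):

```c
/* 4-tap box blur fragment shader, kept as a C string for
 * glShaderSource(). `texel` is assumed to be set by the host code to
 * 1.0 / texture_size (1/256 here). */
static const char *blur_fs_src =
    "uniform sampler2D tex;                                        \n"
    "uniform vec2 texel; /* 1.0 / texture resolution */            \n"
    "void main() {                                                 \n"
    "    vec2 uv = gl_TexCoord[0].st;                              \n"
    "    vec4 sum = texture2D(tex, uv + vec2( texel.x, 0.0))       \n"
    "             + texture2D(tex, uv + vec2(-texel.x, 0.0))       \n"
    "             + texture2D(tex, uv + vec2(0.0,  texel.y))       \n"
    "             + texture2D(tex, uv + vec2(0.0, -texel.y));      \n"
    "    gl_FragColor = sum * 0.25; /* average of the 4 samples */ \n"
    "}                                                             \n";
```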
The problem I see is that the fragments that actually need to be blurred are (usually) a small subset of the entire texture: only the fragments covering the rendered object and those near it.
A lot of processing is wasted on ‘blank’ fragments that lie far from the area in which the object is rendered.
Hopefully, having explained my concerns somewhat clearly, my question becomes: is there a way I can determine this sub-region? Perhaps by finding the bounding box of the transformed object and only rendering a quad covering that area?
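To make the idea concrete, this is roughly what I'm picturing (untested sketch; it assumes I have the object's local-space AABB handy, and it ignores corners that fall behind the near plane):

```c
#include <GL/glu.h>
#include <float.h>

/* Project the 8 corners of the object's local-space AABB with the
 * current modelview/projection/viewport and take the min/max in
 * window coordinates. */
void screen_bbox(const GLdouble mn[3], const GLdouble mx[3],
                 GLdouble *x0, GLdouble *y0, GLdouble *x1, GLdouble *y1)
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    *x0 = *y0 = DBL_MAX;
    *x1 = *y1 = -DBL_MAX;
    for (int i = 0; i < 8; ++i) {
        GLdouble wx, wy, wz;
        gluProject((i & 1) ? mx[0] : mn[0],
                   (i & 2) ? mx[1] : mn[1],
                   (i & 4) ? mx[2] : mn[2],
                   model, proj, view, &wx, &wy, &wz);
        if (wx < *x0) *x0 = wx;
        if (wx > *x1) *x1 = wx;
        if (wy < *y0) *y0 = wy;
        if (wy > *y1) *y1 = wy;
    }
}
```

I'd then pad that rectangle by the blur radius and either glScissor() to it or draw the blur quad over just that area instead of the full texture.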
Thanks.
EDIT: Yes, I know I should use FBOs, but that is another optimization altogether.