Optimizing glows...

I’m generating a glow by:

  1. rendering the object to a low-res texture
    (256x256)
  2. then ping-ponging the texture between two
    pbuffers, rendering it onto a quad each pass
    and running it through a fragment shader that
    blurs the texture.
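The ping-pong structure in step 2 can be sketched on the CPU (the buffer contents and the trivial one-neighbor pass body are illustrative stand-ins, not the actual shader or pbuffer API):

```cpp
#include <utility>
#include <vector>

// Placeholder "blur" pass: each output texel becomes the average of the
// source texel and its right-hand neighbour (clamped at the edge).  In
// the real pipeline this is the fragment shader run over a quad.
void blurPass(const std::vector<float>& src, std::vector<float>& dst) {
    for (std::size_t i = 0; i < src.size(); ++i) {
        std::size_t j = (i + 1 < src.size()) ? i + 1 : i;  // edge clamp
        dst[i] = 0.5f * (src[i] + src[j]);
    }
}

// Ping-pong: the two buffers alternate between "source texture" and
// "render target" each pass, just as the two pbuffers do.
std::vector<float> pingPongBlur(std::vector<float> a, int passes) {
    std::vector<float> b(a.size(), 0.0f);
    std::vector<float>* src = &a;
    std::vector<float>* dst = &b;
    for (int p = 0; p < passes; ++p) {
        blurPass(*src, *dst);
        std::swap(src, dst);  // this pass's output is the next pass's input
    }
    return *src;  // after the final swap, *src holds the latest result
}
```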

Now, the fragment shader needs to do 4 texture
lookups (the samples for the 4 neighboring
fragments) and set the fragment color to an average of these 4 samples.
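A minimal CPU model of that 4-tap kernel, assuming clamp-to-edge addressing at the texture borders (a sketch of the sampling pattern, not the actual shader):

```cpp
#include <algorithm>
#include <vector>

// CPU model of the 4-tap blur: the output texel is the average of its
// four axis-aligned neighbours, with clamp-to-edge addressing (the
// GL_CLAMP_TO_EDGE behaviour the shader's lookups would rely on).
struct Image {
    int w, h;
    std::vector<float> px;  // single channel, row-major
    float at(int x, int y) const {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return px[y * w + x];
    }
};

float blur4(const Image& img, int x, int y) {
    return 0.25f * (img.at(x - 1, y) + img.at(x + 1, y) +
                    img.at(x, y - 1) + img.at(x, y + 1));
}
```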

The problem I see is that the fragments that need
to be blurred are (usually) a small subset of the
entire texture (only the fragments in the texture
that define the object rendered and the nearby
fragments).

A lot of processing is wasted on ‘blank’ fragments
that lie far from the area in which the object
is rendered.

Hopefully, having explained my concerns somewhat
clearly, my question becomes:

Is there a way I can determine this subregion?
Perhaps by finding the bounding box of the
transformed object and only rendering a quad
covering that area?

Thanks.

EDIT: yes, I know I should use FBOs. But that is
another optimization altogether.

(Depending on the blur method) wouldn't it be faster to use SGIS_generate_mipmap?

> Is there a way I can determine this subregion?
> Perhaps by finding the bounding box of the
> transformed object and only rendering a quad
> covering that area?
You can quickly project the 8 vertices of the box (perhaps even fewer are necessary) to find their screen-space positions, though since the glow expands you might want to make the box a bit bigger.
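That projection can be sketched as follows: run the 8 object-space corners through the modelview-projection matrix, perspective-divide, map to window coordinates, and take the min/max. The matrix layout (column-major, as GL uses) and the padding parameter are illustrative assumptions; corners behind the eye (clip w ≤ 0) would need extra handling that this sketch omits.

```cpp
#include <algorithm>

struct Rect { float x0, y0, x1, y1; };

// Screen-space bounds of an axis-aligned object-space box.
// mvp: column-major modelview-projection matrix.
// mn/mx: the box's min/max corners.  pad: extra pixels for the glow.
Rect screenBounds(const float mvp[16],
                  const float mn[3], const float mx[3],
                  int viewW, int viewH, float pad) {
    Rect r = {1e9f, 1e9f, -1e9f, -1e9f};
    for (int i = 0; i < 8; ++i) {
        // Pick one of the 8 corners from the bit pattern of i.
        float v[4] = {(i & 1) ? mx[0] : mn[0],
                      (i & 2) ? mx[1] : mn[1],
                      (i & 4) ? mx[2] : mn[2], 1.0f};
        float c[4];
        for (int row = 0; row < 4; ++row)  // column-major multiply
            c[row] = mvp[row]      * v[0] + mvp[4 + row]  * v[1] +
                     mvp[8 + row]  * v[2] + mvp[12 + row] * v[3];
        float sx = (c[0] / c[3] * 0.5f + 0.5f) * viewW;  // NDC -> window
        float sy = (c[1] / c[3] * 0.5f + 0.5f) * viewH;
        r.x0 = std::min(r.x0, sx); r.y0 = std::min(r.y0, sy);
        r.x1 = std::max(r.x1, sx); r.y1 = std::max(r.y1, sy);
    }
    // Grow the rect so the expanded glow is not clipped, then clamp
    // to the viewport.
    r.x0 = std::max(r.x0 - pad, 0.0f);
    r.y0 = std::max(r.y0 - pad, 0.0f);
    r.x1 = std::min(r.x1 + pad, (float)viewW);
    r.y1 = std::min(r.y1 + pad, (float)viewH);
    return r;
}
```

You would then render the blur quad (and set the scissor/viewport) to just that rect instead of the whole 256x256 texture.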

zed, thanks for the reply.

regarding SGIS_generate_mipmap…

How exactly would I use it? Specify a few levels
and then sample a coarse mipmap level with linear
filtering, or something?

Aeluned, read the spec for this extension before asking.

It will generate the mipmap chain whenever level 0 is updated (through glTexSubImage2D, for example).

SeskaPeel.
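For reference: with the extension enabled via glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE), each coarser level is typically a 2x2 box-filtered downsample of the level above (the spec leaves the exact filter to the implementation). A CPU sketch of one such step:

```cpp
#include <vector>

// One step of the mip chain: each texel of level n+1 is the average of
// the corresponding 2x2 block of level n (a box filter, the usual
// choice in SGIS_generate_mipmap implementations).  w and h assumed even.
std::vector<float> downsample2x2(const std::vector<float>& src, int w, int h) {
    int w2 = w / 2, h2 = h / 2;
    std::vector<float> dst(w2 * h2);
    for (int y = 0; y < h2; ++y)
        for (int x = 0; x < w2; ++x)
            dst[y * w2 + x] = 0.25f * (src[(2 * y)     * w + 2 * x] +
                                       src[(2 * y)     * w + 2 * x + 1] +
                                       src[(2 * y + 1) * w + 2 * x] +
                                       src[(2 * y + 1) * w + 2 * x + 1]);
    return dst;
}
```

Sampling a coarse level with GL_LINEAR_MIPMAP_LINEAR filtering then gives a cheap, if fairly crude, blur, which is presumably what zed's suggestion amounts to.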