Render Target

Hi,

Recently, I encountered the following problem:

My scene has more than one camera -> it needs more than one render target. I would like to make this as flexible as possible: every camera holds a pointer to an abstract “render target” class, which may be a texture or a window. Now the question is: how do I implement it?

The most basic idea is to render to a pixel buffer, then copy it into a texture and use it however I like.

  1. How do I do it? Render the view, but instead of presenting it, store it somewhere in memory as a texture?
  2. Can it lead to picture quality loss?

You can render to the screen and use glCopyTexSubImage to copy a fragment of the screen into a texture - that’s the simplest way, and it’s often fast enough. Of course, this has one limitation: you cannot render to textures larger than your screen.
If you want to render directly to a texture, use the Framebuffer Object extension.

  1. How do I do it? Render the view, but instead of presenting it, store it somewhere in memory as a texture?
    Something like this:
    -set viewport / scissor to the texture’s size
    -render what should be on the texture (you can render this to the screen)
    -copy from the screen to the texture
    -set viewport / scissor to the screen’s size
    -render what should be on the screen, using the previously rendered texture
    -swap buffers
    What you render to the texture will be completely overwritten, so it won’t be visible when you swap buffers at the end. Of course, this will not work if you don’t have double buffering. (There’s a code sketch of this exact sequence after this list.)
  2. Can it lead to picture quality loss?
    If you don’t do it correctly then yes, but don’t worry - if you run into a quality problem, it’s usually easy to fix (small corrections to texture coordinates, wrap modes, filtering modes, etc.).
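Here’s a minimal C++/OpenGL sketch of that sequence. The texture is assumed to have been created beforehand with glTexImage2D; renderToTextureContents(), renderMainScene(), winWidth and winHeight are placeholder names for your own code:

```cpp
// Assumes: texId was created earlier with glTexImage2D (e.g. 256x256 RGB),
// and renderToTextureContents() / renderMainScene() are your own drawing
// functions (placeholder names).
const int texW = 256, texH = 256;

// 1. Restrict rendering to the texture's size.
glViewport(0, 0, texW, texH);

// 2. Render what should end up on the texture (goes to the back buffer).
renderToTextureContents();

// 3. Copy that region of the back buffer into the texture.
glBindTexture(GL_TEXTURE_2D, texId);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, texW, texH);

// 4. Restore the viewport to the full window.
glViewport(0, 0, winWidth, winHeight);

// 5. Render the real scene, sampling from texId; this overdraws the
//    texture-sized region rendered in step 2.
renderMainScene();

// 6. Present. Only the main scene is visible, because step 5 overwrote
//    the intermediate rendering.
glutSwapBuffers(); // or SwapBuffers(hdc) / SDL_GL_SwapBuffers(), etc.
```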
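For the quality question: clamped wrap modes and plain linear filtering cure the most common artifacts (seams at the texture edges, unwanted blurring). A typical setup, using the same texId as above:

```cpp
glBindTexture(GL_TEXTURE_2D, texId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```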

I’ve implemented this scheme using the EXT_framebuffer_object extension. I have a class iRenderTarget capable of being rendered into and bound as a texture. You could also take a closer look at the source code on my webpage, but it may be a bit complex for a newbie.
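If you want to try the FBO route without digging through my source, here’s a minimal EXT_framebuffer_object setup (just a sketch, not my actual iRenderTarget code; depth attachment and error handling are mostly omitted):

```cpp
GLuint fbo, texId;
const int texW = 256, texH = 256;

// Color texture that will receive the rendering.
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texW, texH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Framebuffer object with the texture as its color attachment.
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, texId, 0);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
        != GL_FRAMEBUFFER_COMPLETE_EXT) { /* handle error */ }

// Render directly into the texture (no screen-size limit here).
glViewport(0, 0, texW, texH);
// ... draw ...

// Back to the window's framebuffer; texId is now usable as a texture.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
```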