Fading from one scene to another

I’m working on an animation system right now where scenes are scripted, then stitched together with transitions, much like A/B style video editing or the way it’s done in RexEdit (if anyone’s ever used that).

Anyway, so say I’ve got two different scenes, and I want to do a standard cross-fade from one to the other. As in, one scene fades out while the next one fades in.

What I’m thinking of doing is:

  1. draw scene 1
  2. copy it into a texture
  3. wipe the frame buffer
  4. draw scene 2
  5. blend in the texture for scene one over the frame buffer, smoothly over time
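In rough pseudo-code (the helper names here are made up, since I don’t know the exact GL calls for the copy yet), each frame of the transition would be something like:

// t runs from 0 to 1 over the course of the transition
drawScene1();
copyBackBufferToTexture(fadeTexture);       // hypothetical helper for step 2
clearFrameBuffer();                         // step 3
drawScene2();                               // step 4
drawFullScreenQuad(fadeTexture, 1.0f - t);  // step 5: old scene on top, alpha fading out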

Is there a better way to do this, and also, can I expect reasonable results with a Radeon 7500 Mobility with AGP2x if my target is 800x600x32?

This is a perfectly fine way to do it, and if that Mobility has its own VRAM (i.e., it’s not an “IGP” part) then it should perform reasonably (considering you’re drawing twice the amount of stuff).

So what exactly are the calls to copy my framebuffer into a texture? I mean, I’ll be looking it up on my own now, but if someone replies before I find them, that would be great.

Also, I remember that the last time I did something like this (a long time ago - and the code is at a company that I’m not working for anymore), I was for some reason forced to copy the image data into local memory. It seems to me that I should be able to just copy the frame buffer into another section of VRAM without dealing with the slowness of the AGP bus. Is there actually a way to do that?

If you have a Radeon 9500+ you can use the accumulation buffer. Real nice thing…

glAccum(GL_MULT, 0.3f);    // scale what is already in the accumulation buffer by 0.3
glAccum(GL_ACCUM, 0.7f);   // add the current read buffer, scaled by 0.7
glAccum(GL_RETURN, 1.0f);  // write the blended result back to the frame buffer
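
For the actual cross-fade you would weight the two scenes by the fade parameter, roughly like this each frame (just a sketch; drawScene1/drawScene2 and t are placeholders, with t going from 0 to 1 over the transition):

drawScene1();
glAccum(GL_LOAD, 1.0f - t);    // accumulation buffer = old scene * (1 - t)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene2();
glAccum(GL_ACCUM, t);          // add new scene * t
glAccum(GL_RETURN, 1.0f);      // write the blended result to the frame buffer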

Jan.

But if you need to copy to a texture:

glCopyTexSubImage2D
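
Roughly (a sketch; w and h are whatever size you want to grab, and on older hardware they need to be powers of two):

// one-time setup: allocate the texture straight from the read buffer
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, w, h, 0);

// every frame after that: just refresh the existing texture's contents
glReadBuffer(GL_BACK);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);

Both copies should stay in video memory, so you avoid going back over the AGP bus.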

Thanks a lot, Jan!

I’ll probably post when I get it working.

So on my hardware, the accumulation buffer idea worked, but incredibly slowly. We’re talking seconds per frame.

I finished off the setup for the texture rendering idea, and here’s the code for anyone who wants to know:

before the first scene:

void Transition::drawInitial(float time)
{
    // shrink the viewport so the outgoing scene is rendered at the texture's size
    glViewport(0, 0, FADE_TEXTURE_SIZE_X, FADE_TEXTURE_SIZE_Y);
}

between the two scenes:

void Transition::drawBetween(float time)
{
    glReadBuffer(GL_BACK);
    if(tex == 0)
    {
        // first use: create the texture and fill it from the back buffer
        glGenTextures(1, &tex);
        externalBind(tex);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, FADE_TEXTURE_SIZE_X, FADE_TEXTURE_SIZE_Y, 0);
    }
    else
    {
        // texture already exists: just refresh its contents from the back buffer
        externalBind(tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, FADE_TEXTURE_SIZE_X, FADE_TEXTURE_SIZE_Y);
    }
    glClear(GL_COLOR_BUFFER_BIT);
    glViewport(0, 0, RESOLUTION_X, RESOLUTION_Y);  // restore the full-screen viewport for the incoming scene
}

and finally, after the last scene:

void Transition::drawFinal(float time)
{
    // draw the captured frame of the old scene over the new one,
    // with its alpha fading from 1 to 0 over the length of the transition
    externalBind(tex);
    glBegin(GL_TRIANGLE_FAN);
    glColor4f(1, 1, 1, 1.0f - time/length);
    glTexCoord2f(0, 1);  // note the flipped t coordinates: the copied image's
    glVertex2f(0, 0);    // origin is at the bottom-left of the screen
    glTexCoord2f(1, 1);
    glVertex2f(800, 0);
    glTexCoord2f(1, 0);
    glVertex2f(800, 600);
    glTexCoord2f(0, 0);
    glVertex2f(0, 600);
    glEnd();
}

The “length” variable is the amount of time that it takes this transition to run, and the passed in “time” variable is the time since the transition started. “tex” is just a GLuint variable in the class that stores the texture when necessary.
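
For reference, these get called one transition frame at a time, roughly like this (the real scene-management code is a bit more involved, and the scene-drawing calls here are placeholders):

// per frame while the transition is active; "time" counts up from 0 to "length"
transition.drawInitial(time);   // shrink the viewport for the outgoing scene
drawOutgoingScene(time);        // placeholder: whatever draws scene 1
transition.drawBetween(time);   // grab it into the texture, clear, restore the viewport
drawIncomingScene(time);        // placeholder: whatever draws scene 2
transition.drawFinal(time);     // fade the grabbed texture over the top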

RESOLUTION_X and _Y are the size of the screen, and FADE_TEXTURE_SIZE_X and _Y are the size of the texture that the current scene will be drawn to. If it’s too much lower than the resolution you’re working at, then it will look blurry.
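
One thing the snippets don’t show is the blend and texture state for the final quad - that gets enabled elsewhere in my setup, but without something along these lines the alpha in glColor4f won’t do anything:

glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard alpha blending for the fade quad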

I’m using 512x512 for my 800x600 screen, and I’m bogging down to 60fps, which is acceptable. It also looks really nice, and I’m liking the effect.

What hardware are you using?

I tested the accumulation buffer on a Radeon 9600XT, a Radeon 9500 (I think) and a GeForce 4.

The first two can use it in real time; the last one needed a few seconds for each frame.
Don’t know about the GeForce FX though.

Jan.

Man, I wish I had a 9600… or a 9800 for that matter.

I’m using a Radeon 7500 Mobility, 32 megs, on an AGP2x bus.

Since the accumulation buffer is actually a huge amount of floating point calculations on a float-stored frame buffer, I would assume that only DX9-class hardware could handle it.

That would explain why the GeForce4 couldn’t handle the accumulation buffer, while the 9600/9500 models could.

Though now that I know this, I’ll keep in mind that the accumulation buffer is feasible for real time effects on 9x00 or FX chipsets, for when those chips all become mainstream.

See, this is for a game that I’m making, and I tend to aim for as mainstream hardware as I can. I put very few “fancy” features in my games, although with simple additive blending and particle systems I find that you can make quite stunning effects even in 2D with so-called “low-end” hardware.
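
The additive blending I mean is just the standard setup, roughly:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);  // add each particle's colour on top of what's already there

and then the particles are just textured quads drawn with that state.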

I use glCopyTexSubImage2D for doing my water reflections, but I’m wondering if there are any new extensions that create a render target (like in DirectX) without requiring you to write to the primary buffer and copy to texture, but instead render to an offscreen buffer that is also a texture. Sadly I don’t keep up with all the new OpenGL extensions coming out these days.

It is possible to do render-to-texture in OpenGL (with pbuffers); I made several programs with source you can download at http://www.chez.com/dedebuffer .
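
The basic WGL flow looks roughly like this (a sketch only - error checking, the full pixel-format attributes and the wglGetProcAddress setup for the ARB entry points are left out; hDC, mainRC and tex are assumed to be your window DC, your main GL context and an existing texture object):

// pick a pixel format that can back a pbuffer bindable as a texture
int pfAttribs[] = {
    WGL_SUPPORT_OPENGL_ARB,       GL_TRUE,
    WGL_DRAW_TO_PBUFFER_ARB,      GL_TRUE,
    WGL_BIND_TO_TEXTURE_RGBA_ARB, GL_TRUE,
    0
};
int format;
UINT count;
wglChoosePixelFormatARB(hDC, pfAttribs, NULL, 1, &format, &count);

// create the pbuffer, asking for a 2D RGBA texture binding
int pbAttribs[] = {
    WGL_TEXTURE_FORMAT_ARB, WGL_TEXTURE_RGBA_ARB,
    WGL_TEXTURE_TARGET_ARB, WGL_TEXTURE_2D_ARB,
    0
};
HPBUFFERARB pbuffer = wglCreatePbufferARB(hDC, format, 512, 512, pbAttribs);
HDC         pbufDC  = wglGetPbufferDCARB(pbuffer);
HGLRC       pbufRC  = wglCreateContext(pbufDC);

// render into the pbuffer...
wglMakeCurrent(pbufDC, pbufRC);
// ... draw the scene you want in the texture ...

// ...then bind its contents as the current texture in the main context
wglMakeCurrent(hDC, mainRC);
glBindTexture(GL_TEXTURE_2D, tex);
wglBindTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);
// ... draw using the texture ...
wglReleaseTexImageARB(pbuffer, WGL_FRONT_LEFT_ARB);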

However, it is a bit messy and mainly available on Windows (WGL stuff); there is a much cleaner extension in the works: EXT_render_target.

http://www.opengl.org/about/news/archive2004/apr04.html?render+target#first_hit