Faking an accumulation buffer in OpenGL ES

So far, all the OpenGL work I have been doing involves working with only one frame at a time. However, my new goal is to get particle trails going (sort of like fireworks). Unfortunately, I am trying to do this on mobile devices that run OpenGL ES 2.0, and this version does not have an accumulation buffer.

So I was thinking perhaps there is a way to do the following in OpenGL ES 2.0:

  1. Draw the current frame to some sort of storage container (perhaps a buffer).
  2. Take the previous frames (stored in the buffer) and fade them a bit.
  3. Put step 1 on top of step 2, and display that.
  4. Save the result of step 3 to a buffer that can be accessed next frame.

So, basically, each frame is saved, faded, and mixed with the new one, creating a simple trailing effect.

Unfortunately, I don't know any of the buffer-related functions in OpenGL, so I am really stuck. What functions should I look at using? How would you achieve this effect?

Thank you much!

What you need is a framebuffer object (FBO). See glGenFramebuffers, glBindFramebuffer, glFramebufferRenderbuffer and glFramebufferTexture2D. This allows you to direct rendering into textures and/or renderbuffers.

Typically, you’d use a texture for the colour buffer and renderbuffers for the depth and/or stencil buffers, as core OpenGL ES 2.0 doesn’t provide any texture formats suitable for depth or stencil buffers (although these are available via extensions such as OES_depth_texture; a renderbuffer is fine if you don’t actually need to use the data as a texture).

Having rendered into a texture, you can then render onto the default framebuffer using that texture. Having the fragment shader set the alpha component to a value less than one and choosing the appropriate blending mode will result in the default framebuffer containing a blend of the new frame and its previous contents.
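A fragment shader for that second pass might look roughly like this (a GLSL ES sketch; the `u_scene`, `u_fade` and `v_texcoord` names are illustrative, not from the thread):

```glsl
// ES 2.0 fragment shader for drawing the rendered-scene texture
// over the previous contents with a fade.
precision mediump float;
uniform sampler2D u_scene;   // texture the scene was rendered into
uniform float u_fade;        // e.g. 0.85: weight of the new frame
varying vec2 v_texcoord;

void main() {
    vec4 c = texture2D(u_scene, v_texcoord);
    // Alpha below 1.0, so that with
    // glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
    // this frame is mixed with the framebuffer's previous contents.
    gl_FragColor = vec4(c.rgb, u_fade);
}
```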

The main limitation of this approach compared to an accumulation buffer is that there may not be any texture or renderbuffer format with more bits than the default framebuffer. But that probably doesn’t matter for your particular application.

[QUOTE=GClements;1280446]What you need is a framebuffer object (FBO). …[/QUOTE]

Is this the right way to render to a texture?
glGenTextures(1, &FBOtex)
glBindTexture(GLenum(GL_TEXTURE_2D), FBOtex)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), FBOtex, 0)

You need to create the storage for the texture with glTexImage2D before you can attach it to a framebuffer. You don’t need to supply any data (i.e. the data parameter to glTexImage2D can be NULL), but you do have to define the dimensions and internal format. Also, the filter and repeat modes don’t matter for writing.
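Putting that together, the creation sequence might look like this (a sketch in C; the 512×512 size and the variable names are illustrative, and a real GL context must be current):

```c
GLuint fbo, fboTex;

/* Texture that will receive the colour output. */
glGenTextures(1, &fboTex);
glBindTexture(GL_TEXTURE_2D, fboTex);

/* Allocate storage first: the data pointer may be NULL, but the
   dimensions and format must be defined before the texture can be
   attached to a framebuffer. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/* ES 2.0 requires CLAMP_TO_EDGE for non-power-of-two textures. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

/* Framebuffer object with the texture as its colour attachment. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, fboTex, 0);

/* Always verify completeness before rendering into the FBO. */
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle the error */
}
```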

Do I want to generate this texture object every frame, or should I create it once and just overwrite its contents?

As I understand things now:

  1. Create texture object

Now, in the loop:
2. Set OpenGL to render to an offscreen buffer
3. Set that buffer to write to the texture
4. Draw any data previously in the texture, slightly faded
5. Draw stuff using your point and line shaders etc.
6. Switch to a shader that can handle textures
7. Render the entire texture (which should be a composite of all of them) to the front buffer.

Also, I have realized it would be best to anti-alias things, and I figure that would be best done when the result gets rendered to the front buffer, so as not to anti-alias the image multiple times? I could always just apply attenuation in my shaders as well.

Is this about right?

The latter.

In retrospect, you should create two FBOs (each with its own texture); using the default framebuffer isn’t reliable (the contents aren’t guaranteed to be preserved between frames).

After binding the first FBO, clear it then render the scene normally. Once the scene has been rendered, use the texture as a source and render it to the second FBO with blending (the second FBO is never cleared). This will result in the second FBO containing a mix of the new scene and what was there before. Finally, the second FBO should be rendered directly to the window (this can be done by rendering a textured quad, similarly to the previous operation, or by using glBlitFramebuffer).

Essentially, the first FBO takes the place of the default framebuffer while the second FBO takes the place of the accumulation buffer.

In summary:

Initialisation:

For each FBO:

  • glGenTextures
  • glBindTexture
  • glTexImage2D
  • glBindFramebuffer
  • glFramebufferTexture2D

Each frame:

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo1)
glClear
glDraw* // scene

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo2)
glBindTexture(tex1)
glEnable(GL_BLEND)
glBlendFunc
glDraw* // full-screen quad

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo2)
glBlitFramebuffer
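One caveat: GL_DRAW_FRAMEBUFFER, GL_READ_FRAMEBUFFER and glBlitFramebuffer only exist from OpenGL ES 3.0 onwards. Under ES 2.0, the same per-frame loop can be sketched with the single GL_FRAMEBUFFER target and a textured quad for the final pass (the function names are real GL, but `fbo1`, `fbo2`, `tex1`, `tex2`, `drawScene` and `drawFullScreenQuad` are illustrative placeholders):

```c
/* Pass 1: render the scene into fbo1 (colour attachment tex1). */
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
drawScene();                          /* points, lines, particles... */

/* Pass 2: blend the new frame into the accumulation FBO
   (fbo2/tex2, which is never cleared). */
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glBindTexture(GL_TEXTURE_2D, tex1);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullScreenQuad();                 /* fragment alpha < 1 gives the fade */
glDisable(GL_BLEND);

/* Pass 3: copy the accumulated image to the default framebuffer. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, tex2);
drawFullScreenQuad();
```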

[QUOTE=GClements;1280508]The latter. …[/QUOTE]

OK, I think I got the every-frame code correct (no way to check yet). I am stuck on the setup code. Do I want to bind “tex1” to both framebuffers? If not, then there would be three textures: “tex1”, “fbo1tex”, and “fbo2tex”. Should I keep track of “fbo1tex”?

Two textures, one bound to each FBO (tex1 to fbo1, tex2 to fbo2). If the final blit is done using glBlitFramebuffer() (rather than as a textured quad), only tex1 is also used as a source (and thus tex2 could be a renderbuffer rather than a texture).

Note that the blending between the buffers in the second glDraw* call only accesses tex1 directly. Blending inherently computes dst = func(src, dst); the destination framebuffer (tex2 bound to fbo2) is read implicitly.