Render to volume textures

Hi.

Can someone tell me what the current state of hardware render to volume texture is?

Does any hardware exist that can potentially do it? I ask because I saw that ATI released a driver marked 2.0, and I figured render to volume texture would be a feature of GL 2.0.

By the way, I'm assuming that EXT_render_target adds support for rendering to volume textures, but I might be wrong?

I ask because I abandoned GL a year ago because of buggy GLSL implementations and ugly render-to-texture code, which could be getting better now.

Hi,

some time ago I encountered the same problem while implementing a 3D Navier-Stokes solver that operates entirely on the GPU.

For the velocity and pressure fields I decided to use one 2D texture per z coordinate. This is realized by initializing only one pbuffer (not a texture) and copying its contents to the textures with glCopyTexImage. For me this works with very little overhead on a GeForce FX with floating-point textures.
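Roughly, such a copy loop could look like this (a sketch with hypothetical names, using the sub-image variant so the slice textures are not reallocated every frame):

    /* One 2D texture per z slice; everything is rendered into the same
       pbuffer, which is the current context (render_slice() is
       hypothetical, XRES/YRES/ZRES are the grid resolution). */
    GLuint sliceTex[ZRES];

    for (int z = 0; z < ZRES; ++z) {
        render_slice(z);                   /* draw slice z into the pbuffer */
        glBindTexture(GL_TEXTURE_2D, sliceTex[z]);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0,  /* level */
                            0, 0,               /* offset in the texture */
                            0, 0,               /* framebuffer origin */
                            XRES, YRES);
    }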

But the specific aim of the project was to advect a density field in the Navier-Stokes velocity field and draw it with volume rendering techniques.
For volume rendering performance, native 3D textures are preferable.
So I chose to do that, implemented by rendering each slice to the pbuffer and then copying the contents into the volume texture with glCopyTexSubImage3D().
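The copy into the volume can be sketched like this (hypothetical names again; one glCopyTexSubImage3D call per rendered slice):

    /* Render each slice into the pbuffer, then copy it into layer z of
       the volume texture (render_density_slice() is hypothetical). */
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    for (int z = 0; z < ZRES; ++z) {
        render_density_slice(z);            /* slice z -> pbuffer */
        glCopyTexSubImage3D(GL_TEXTURE_3D, 0,  /* level */
                            0, 0, z,            /* offset: layer z */
                            0, 0,               /* framebuffer origin */
                            XRES, YRES);
    }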

Finally I got it to work, and it turned out that the implementation was very efficient, in the sense that the demo was completely fillrate limited and the glCopyTexSubImage calls didn't cause any major overhead.

I hope that helps you, JC

I'm also implementing a 3D Navier-Stokes solver, and I have it all running using 2D textures.

However, while pbuffers and glCopyTexImage can solve the problem, it is desirable to use a volume texture, because the copy can be avoided and because the hardware filtering capabilities could be utilized.

Edit: And by using 3D textures, we save the pixel shader instructions that simulate 3D textures.

I didn't mean to simulate 3D textures. Although it is possible to represent a 3D texture within a 2D texture, it turned out for me that the performance is not acceptable due to the additional pixel shader instructions.
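To make the cost concrete: with a flat layout, where all z-slices are tiled side by side in one wide 2D texture, every lookup needs extra address arithmetic. An illustrative sketch (in a shader each line below turns into per-fragment instructions):

    #include <math.h>

    #define ZRES 64   /* number of slices, for illustration */

    /* Map normalized volume coordinates (x, y, z) to the normalized 2D
       coordinate of the matching texel when the ZRES slices are tiled
       in a row. */
    void volume_to_atlas(float x, float y, float z, float *u, float *v)
    {
        float slice = floorf(z * ZRES);   /* which tile holds this z */
        *u = (slice + x) / ZRES;          /* shift x into that tile  */
        *v = y;
        /* Filtering across slices would need a second lookup in tile
           slice + 1 plus a manual lerp, costing even more instructions. */
    }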

Representing the 3D texture as Zresolution separate 2D textures should only cause texture bind and draw call overhead. Why should a 2D texture lookup be more expensive than a 3D one?

As mentioned before, the glCopyTex overhead is negligible compared to the fillrate limit.

For performance comparison, my implementation runs a 64x64x64 grid on 16-bit render targets at approx. 6.3 FPS on a GeForce FX 5900. The Poisson equations for pressure and viscosity are solved with 20 iterations.

Originally posted by jmpCrash:

Representing the 3D texture as Zresolution separate 2D textures should only cause texture bind and draw call overhead. Why should a 2D texture lookup be more expensive than a 3D one?

But how do you handle the advection step? I'm using the method presented by Jos Stam, which requires me to read from an arbitrary z-layer. I can't see how you would do that using your representation; my guess is you're not using the same approach for advection.
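To make the problem concrete, here is a minimal CPU sketch of the scheme I mean (grid sizes are illustrative, velocities are in cells per time unit; on the GPU the trilerp is exactly what a native 3D texture lookup would give for free):

    #include <math.h>

    #define XRES 64
    #define YRES 64
    #define ZRES 64

    /* Clamped grid read, so the backtrace never samples out of bounds. */
    static float at(const float *f, int i, int j, int k)
    {
        i = i < 0 ? 0 : (i >= XRES ? XRES - 1 : i);
        j = j < 0 ? 0 : (j >= YRES ? YRES - 1 : j);
        k = k < 0 ? 0 : (k >= ZRES ? ZRES - 1 : k);
        return f[(k * YRES + j) * XRES + i];
    }

    /* Trilinear interpolation between the eight surrounding cells. */
    static float trilerp(const float *f, float x, float y, float z)
    {
        int i = (int)floorf(x), j = (int)floorf(y), k = (int)floorf(z);
        float fx = x - i, fy = y - j, fz = z - k;
        float c00 = at(f,i,j,k)*(1-fx)     + at(f,i+1,j,k)*fx;
        float c10 = at(f,i,j+1,k)*(1-fx)   + at(f,i+1,j+1,k)*fx;
        float c01 = at(f,i,j,k+1)*(1-fx)   + at(f,i+1,j,k+1)*fx;
        float c11 = at(f,i,j+1,k+1)*(1-fx) + at(f,i+1,j+1,k+1)*fx;
        float c0 = c00*(1-fy) + c10*fy;
        float c1 = c01*(1-fy) + c11*fy;
        return c0*(1-fz) + c1*fz;
    }

    /* Semi-Lagrangian advection after Stam: trace each cell backwards
       through the velocity field (u, v, w) and sample the old field at
       the backtraced position, which can land on ANY z-layer. */
    void advect(float *dst, const float *src,
                const float *u, const float *v, const float *w, float dt)
    {
        for (int k = 0; k < ZRES; ++k)
            for (int j = 0; j < YRES; ++j)
                for (int i = 0; i < XRES; ++i) {
                    int idx = (k * YRES + j) * XRES + i;
                    dst[idx] = trilerp(src,
                                       i - dt * u[idx],
                                       j - dt * v[idx],
                                       k - dt * w[idx]);
                }
    }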

But anyway, render to 3D texture would give me the benefits listed above, and therefore I still wonder if/when it will be possible.

Now, I know what you mean.

You can assume that the backtraced position x - dt*u(x,t) lies between x and the neighbouring grid points.

If that does not hold, the simulation tends to become unstable!

Finally, this situation can be enforced by decreasing the timestep size.

It is not optimal, but it is possible to implement and fast!
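As a sketch, the restriction could be computed like this (my own illustration; velocities measured in cells per time unit, n is the number of cells):

    #include <math.h>

    /* Choose dt so that dt * |velocity| <= one cell everywhere: the
       backtraced position x - dt*u(x,t) then stays among the direct
       neighbours of x. */
    float stable_dt(const float *u, const float *v, const float *w, int n)
    {
        float vmax = 1e-6f;                 /* avoid division by zero */
        for (int i = 0; i < n; ++i) {
            if (fabsf(u[i]) > vmax) vmax = fabsf(u[i]);
            if (fabsf(v[i]) > vmax) vmax = fabsf(v[i]);
            if (fabsf(w[i]) > vmax) vmax = fabsf(w[i]);
        }
        return 1.0f / vmax;
    }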

Another option would be to also bind the second neighbourhood layers, or more …
But this would only allow acceptable performance if it were possible to bind textures in texture arrays. I have not tested that yet.

Originally posted by jmpCrash:
[b]Now, I know what you mean.

You can assume that the backtraced position x - dt*u(x,t) lies between x and the neighbouring grid points.

If that does not hold, the simulation tends to become unstable![/b]
Actually, using Stam's approach, it is unconditionally stable, so the velocity can be larger.

Originally posted by jmpCrash:
[b]
Finally, this situation can be enforced by decreasing the timestep size.

It is not optimal, but it is possible to implement and fast!

Another option would be to also bind the second neighbourhood layers, or more …
But this would only allow acceptable performance if it were possible to bind textures in texture arrays. I have not tested that yet.[/b]
You have a point, and your implementation is probably faster than mine. However, I still believe that volume textures would eliminate this difference and allow Stam's method to run with the same performance you are currently getting.

I could be wrong, of course, but we won’t know that until someone does an implementation using volume textures.

One solution would be to copy the layers into a 3D texture, as I described in my first post for the density field.

But for the velocity field you would need a floating-point 3D texture with three components, which I haven't managed to get working on my hardware.

But on ATI hardware it could be possible to have 3D floating-point textures with the GL_ATI_texture_float extension.
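For what it's worth, allocating such a texture would look roughly like this (a sketch using the GL_RGB_FLOAT16_ATI token from that extension; whether the driver actually accepts it for 3D targets is exactly the open question):

    /* Attempt a three-component 16-bit floating-point volume texture
       via GL_ATI_texture_float (hypothetical handle; XRES/YRES/ZRES as
       before). Requires glext.h for the token. */
    GLuint velocityTex;
    glGenTextures(1, &velocityTex);
    glBindTexture(GL_TEXTURE_3D, velocityTex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB_FLOAT16_ATI,
                 XRES, YRES, ZRES, 0, GL_RGB, GL_FLOAT, NULL);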

Good Luck!

Back on the original question about hardware rendering to volume textures: see the newly released EXT_framebuffer_object spec at http://www.opengl.org/documentation/extensions/EXT_framebuffer_object.txt which defines:

    void glFramebufferTexture3DEXT(enum target, enum attachment,
                                   enum textarget, uint texture,
                                   int level, int zoffset);
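
With that entry point, each z-slice of a volume texture can be attached as a render target directly, with no copy. A minimal sketch (assuming an FBO-capable driver; volumeTex is a hypothetical 3D texture created with glTexImage3D, and render_slice() stands in for the actual drawing code):

    GLuint fbo;
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

    for (int z = 0; z < ZRES; ++z) {
        /* attach layer z of the volume as the colour target */
        glFramebufferTexture3DEXT(GL_FRAMEBUFFER_EXT,
                                  GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_3D, volumeTex,
                                  0 /* level */, z /* zoffset */);
        render_slice(z);             /* hypothetical slice pass */
    }
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);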