
View Full Version : Rendering directly to texture memory



stecher
11-10-2000, 08:50 AM
I understand that Direct3D has the ability to render directly to texture memory, but OpenGL does not.

The only way I have been able to accomplish this in OpenGL is to render to the frame buffer, then copy the image from the frame buffer into texture memory. Unfortunately, the copy simply kills performance.

Does anyone know if there is a better way to render into texture memory in OpenGL? Or if any vendors, such as NVIDIA, are planning on creating an extension to support this?

LordKronos
11-10-2000, 09:09 AM
I actually asked someone from NVIDIA about this. There is currently a proposal on the table for an OpenGL render-to-texture extension. However, the problem with such a thing is that (as I understood it) performance is actually worse when rendering directly to a texture. Because textures are swizzled in memory, they can be problematic to render to directly: either the renderer has to be able to render in swizzled format, or the texture has to be swizzled in place after the render-to-texture is complete.

[This message has been edited by LordKronos (edited 11-10-2000).]

mcraighead
11-10-2000, 09:56 AM
We are working on _several_ solutions to this problem.

Hopefully it won't be long until I can give details.

- Matt

Pauly
11-10-2000, 06:37 PM
Is pBuffer_EXT really that hard to implement in OpenGL? If DX drivers can do SetRenderTarget, what's stopping a similar function in OpenGL?

mcraighead
11-10-2000, 06:59 PM
Nothing's stopping it. As I said, we're working on it.

- Matt

Tom Nuydens
11-10-2000, 11:57 PM
I didn't find the performance hit associated with glCopyTexSubImage2D() all that big... I have an app that renders 256x256 textures at 50-60 fps on my TNT2. For 128x128, this goes up to almost 200 fps.

You're not using glCopyTexImage2D() instead of glCopyTexSubImage2D(), are you? I think I've read somewhere that in some cases, not using TexSubImage() can cause the driver to reallocate memory for the texture every time you do it.
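To make the allocate-once pattern concrete, here is a minimal sketch; the 256x256 size, the function names, and the drawScene() helper are illustrative, and a current GL context is assumed:

```c
#include <GL/gl.h>

void drawScene(void); /* hypothetical app render function */

/* One-time setup: allocate the texture's storage up front, with no
 * initial data, so later copies can reuse it. */
GLuint createTargetTexture(void)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);
    return tex;
}

/* Per frame: render into the back buffer, then copy the result into
 * the texture's existing storage. glCopyTexSubImage2D updates in
 * place, whereas glCopyTexImage2D respecifies the texture each call
 * and may make the driver reallocate it. */
void updateTexture(GLuint tex)
{
    glViewport(0, 0, 256, 256);
    drawScene();
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 256, 256);
}
```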

Pauly
11-11-2000, 12:18 AM
Nothing's stopping it. As I said, we're working on it.
<Sound of Paul's hands rubbing together :-) >

mcraighead
11-11-2000, 12:28 AM
Ah, but it's still too slow right now if you want to do high-res dynamic cubemaps and those kinds of effects. A single 512x512 copy takes you down below real-time framerates, too, by your numbers...

- Matt

Antoche
11-11-2000, 04:42 AM
What about the RENDER_TO_BITMAP option when creating a window? I believe I saw something like that in the MSDN for CreateWindow(). Maybe we can use that?

Voytec
11-12-2000, 05:39 AM
For MS Windows applications, a DIB section GDI object may be used as a placeholder for a texture's source image and later accessed directly by OpenGL API functions. This solution is very flexible, because you can do OpenGL rendering and/or GDI rendering, and/or manipulate the image bits directly, and then subimage the texture.
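As a rough Win32 sketch of that setup (names and sizes are illustrative, error checking is omitted, and note that a PFD_DRAW_TO_BITMAP pixel format goes through the software renderer):

```c
#include <windows.h>
#include <GL/gl.h>

/* Create a 24-bit DIB section and make it the target of an OpenGL
 * rendering context. Afterward, 'bits' holds the image; GDI can draw
 * into the same DIB, and the pixels can be uploaded as a texture. */
BITMAPINFO bmi = {0};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = 256;
bmi.bmiHeader.biHeight      = 256;  /* positive: bottom-up DIB */
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 24;
bmi.bmiHeader.biCompression = BI_RGB;

void *bits = NULL;
HDC memDC = CreateCompatibleDC(NULL);
HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS,
                               &bits, NULL, 0);
SelectObject(memDC, dib);

/* Ask for a pixel format that can draw to a bitmap through GL. */
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL
               | PFD_SUPPORT_GDI;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;
SetPixelFormat(memDC, ChoosePixelFormat(memDC, &pfd), &pfd);

HGLRC rc = wglCreateContext(memDC);
wglMakeCurrent(memDC, rc);
/* ...render with GL here, then subimage 'bits' into a texture... */
```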

rIO
11-13-2000, 12:38 AM
Rendering to a DIB can also be useful for 2D effects like blurring or whatever else on the DIB data.

I'm sure there is a complete example on the MSDN CDs.

It's also possible to render to a DIB using OGL and do the "postprocessing" & GDI work using DirectX (this is the solution I use, with DX3).

rIO

memo
11-13-2000, 01:35 AM
I did it using RENDER_TO_BITMAP and DIBs. It works fine; the only problem I had was being unable to create an offscreen GL rendering context with alpha. So after rendering offscreen (to the bitmap), I had to copy the pixels to another location, manually add the alpha component, then use glTexImage2D().
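That manual alpha step amounts to a plain pixel-expansion pass. A hypothetical helper, assuming 24-bit BGR pixels as stored in a Windows DIB and a constant alpha value (the result would then go to glTexImage2D with GL_RGBA):

```c
#include <stddef.h>
#include <stdlib.h>

/* Expand 24-bit BGR pixels to RGBA, filling in a constant alpha --
 * the manual step before uploading the offscreen image as a texture.
 * Name and the constant-alpha policy are illustrative; the caller
 * frees the returned buffer. */
unsigned char *bgr_to_rgba(const unsigned char *bgr,
                           size_t pixels, unsigned char alpha)
{
    unsigned char *rgba = malloc(pixels * 4);
    if (!rgba)
        return NULL;
    for (size_t i = 0; i < pixels; i++) {
        rgba[i * 4 + 0] = bgr[i * 3 + 2]; /* R (DIBs store B first) */
        rgba[i * 4 + 1] = bgr[i * 3 + 1]; /* G */
        rgba[i * 4 + 2] = bgr[i * 3 + 0]; /* B */
        rgba[i * 4 + 3] = alpha;          /* constant alpha */
    }
    return rgba;
}
```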