Render To Texture Problems

First of all: I’m able to render my scene to a texture. But now there are two questions running through my mind:

  1. How can I blur this texture afterwards (in realtime)? What knowledge is necessary to do that?
  2. Is there a faster way to render to a texture (e.g. an extension or something like that), and if so, what are its parameters?

To blur, you can use this texture and render it to another buffer or texture with multiple texture taps to produce a blurred effect.
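Something along those lines, for example (a fixed-function sketch only; drawFullscreenQuad() is a hypothetical helper that draws a screen-aligned quad with its texture coordinates shifted by the given amount, and the offsets are up to you):

#include <GL/gl.h>

/* Hypothetical helper: draws a screen-aligned quad covering the target
   buffer, with texture coordinates offset by (du, dv). */
extern void drawFullscreenQuad(float du, float dv);

/* Multi-tap blur without fragment programs: accumulate n slightly
   offset copies of the texture with additive blending, each weighted
   1/n via the primary color (GL_MODULATE). */
void blurToCurrentBuffer(GLuint srcTex, const float (*offsets)[2], int n)
{
    int i;

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, srcTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);              /* sum the passes     */
    glColor3f(1.0f / n, 1.0f / n, 1.0f / n);  /* weight each by 1/n */

    for (i = 0; i < n; i++)
        drawFullscreenQuad(offsets[i][0], offsets[i][1]);

    glDisable(GL_BLEND);
}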

Faster than what? Render to texture is accelerated on a lot of hardware, do you find it unexpectedly slow? You don’t describe in detail how you accomplish render to texture.

I render to the texture by using glCopyTexImage2D(), so I thought there might be an extension that runs faster on a particular graphics board (e.g. ATI Radeon 9500/9700 or GeForce 3-FX…).

And another question:
could you explain what you mean by “multiple texture taps”? Perhaps my English is too bad to understand that.

I believe there are 2 extensions that allow us to render to a texture more quickly than glCopyTexImage2D().

WGL_ARB_buffer_region
WGL_ARB_pbuffer

Detailed Info At: http://oss.sgi.com/projects/ogl-sample/registry/

Of the 2, pbuffers (or pixel buffers) sound the most promising. And since it is ARB, you don’t have to write vendor-specific code. I have never used these extensions, but I have used glCopyTexImage2D() for shadow maps and agree it is simply too slow.

Does anyone have any insight into the speed of these extensions in comparison?
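Judging from the spec, creating one looks roughly like this (an untested sketch; error checking omitted, the format query uses the companion WGL_ARB_pixel_format extension, and all the wgl*ARB entry points must first be fetched with wglGetProcAddress()):

#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   /* WGL_ARB_pixel_format / WGL_ARB_pbuffer tokens */

/* Fetched once via wglGetProcAddress(): */
extern PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB;
extern PFNWGLCREATEPBUFFERARBPROC     wglCreatePbufferARB;
extern PFNWGLGETPBUFFERDCARBPROC      wglGetPbufferDCARB;

/* Returns a DC you can create a GL context on and render into. */
HDC makePbufferDC(HDC windowDC, int width, int height)
{
    const int fmtAttribs[] = {
        WGL_DRAW_TO_PBUFFER_ARB, TRUE,  /* must be pbuffer-capable    */
        WGL_SUPPORT_OPENGL_ARB,  TRUE,
        WGL_COLOR_BITS_ARB,      24,    /* match your window's format */
        WGL_DEPTH_BITS_ARB,      24,
        0
    };
    const int pbAttribs[] = { 0 };      /* no special pbuffer attributes */
    int format;
    UINT count;

    wglChoosePixelFormatARB(windowDC, fmtAttribs, NULL, 1, &format, &count);
    return wglGetPbufferDCARB(
        wglCreatePbufferARB(windowDC, format, width, height, pbAttribs));
}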

thanx a lot!

You’re on the right track with pbuffers, although they don’t necessarily speed things up compared to copytex; you have to be careful about matching pixel/texture formats etc.

I don’t think Buffer Region has anything to do with it. It’s used for managing the backing store for “dirty rectangles” so you can more easily perform partial scene updates.

-Won

I’m still uncertain about how to blur a texture at runtime… please help!

Originally posted by Proton:

WGL_ARB_buffer_region
WGL_ARB_pbuffer

Detailed Info At: http://oss.sgi.com/projects/ogl-sample/registry/

You want WGL_ARB_render_texture. It removes the copy from the pbuffer: instead of copying the pbuffer’s contents into a texture, you bind the pbuffer itself as the texture image.

FYI - copy to texture is not considered render to texture.

This page may interest you:
http://developer.nvidia.com/view.asp?IO=gdc_oglrtt
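The usage pattern looks something like this (a sketch only; it assumes the pbuffer was created with the WGL_TEXTURE_FORMAT_ARB and WGL_TEXTURE_TARGET_ARB attributes and a pixel format exposing WGL_BIND_TO_TEXTURE_RGBA_ARB):

#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   /* WGL_ARB_render_texture tokens and typedefs */

/* Fetched once via wglGetProcAddress(): */
extern PFNWGLBINDTEXIMAGEARBPROC    wglBindTexImageARB;
extern PFNWGLRELEASETEXIMAGEARBPROC wglReleaseTexImageARB;

void drawWithRenderedScene(HPBUFFERARB hPbuffer, GLuint sceneTex)
{
    /* The scene has already been rendered into the pbuffer's context;
       we are now current on the window's context. Attach the pbuffer's
       color buffer directly as the texture image: no copy happens. */
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    wglBindTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);

    /* ... draw geometry textured with the rendered scene ... */

    /* Release before rendering into the pbuffer again. */
    wglReleaseTexImageARB(hPbuffer, WGL_FRONT_LEFT_ARB);
}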

[This message has been edited by dorbie (edited 04-08-2003).]

glCopyTexSubImage2D() might not be render TO texture, but it generates rendered textures. I think the distinction is subtle enough to be easily lost on a few people :-)

WRT speed, if your texture and your frame buffer have the same formats, and the texture isn’t too big, then the speed should be just fine, even with CopyTexSubImage.

To blur the texture, enable GENERATE_MIPMAPS and then use a texture LOD bias for your MIP mapping; alternatively, render with the same texture bound to all 4 texture units, offset the texture coordinates half a pixel each way using the texture coordinate matrix, and blend them all together appropriately (probably divide each by 4 and add them all together).
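For the first approach, the setup is roughly this (a sketch; it assumes the GL_SGIS_generate_mipmap and GL_EXT_texture_lod_bias extensions are supported):

#include <GL/gl.h>
#include <GL/glext.h>   /* GL_GENERATE_MIPMAP_SGIS, GL_TEXTURE_LOD_BIAS_EXT */

/* Blur by MIP mapping: have the driver rebuild the mip chain whenever
   the texture is updated, then bias the LOD so a coarser (blurrier)
   level is sampled when drawing. */
void setupMipmapBlur(GLuint tex, float bias)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);

    /* a bias of n samples a level roughly 2^n times smaller */
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, bias);
}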

Guys!
On Linux we don’t have the famous RTT, but I think it can be done using pbuffers, PDR (NV_pixel_data_range) and glTexImage2D(). I’ve done this, but the framerate is unacceptably low. Still, it’s three times higher than without PDR on my GeForce3. Or is PDR only fully functional on the GeForceFX? And does anyone know whether share lists are available on Linux in the 43 drivers?

Using that method, I’m not surprised that it’s slow. You first download the texture over AGP, then upload it again, every frame. Use glCopyTexImage2D() instead.
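That is, something like this (a sketch; assumes the texture size matches what you render):

#include <GL/gl.h>

/* Copies the current read buffer (the frame you just rendered) straight
   into the bound texture, entirely on the card: no round trip through
   system memory, unlike glReadPixels() + glTexImage2D(). */
void copyFrameToTexture(GLuint tex, int width, int height)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 0, 0, width, height, 0);
}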

As for how to blur, as dorbie said, it’s just a matter of sampling the texture several times. Here’s a simple blur shader I wrote in DX9 HLSL. Shouldn’t be too hard to transform into GL_ARB_fragment_program.

sampler2D renderTexture;   // the scene, rendered to a texture
float sampleDist;          // scales the tap offsets; larger = blurrier

// 12 pseudo-random tap offsets
const float2 offsets[12] = {
    float2(-0.326212, -0.405805),
    float2(-0.840144, -0.073580),
    float2(-0.695914,  0.457137),
    float2(-0.203345,  0.620716),
    float2( 0.962340, -0.194983),
    float2( 0.473434, -0.480026),
    float2( 0.519456,  0.767022),
    float2( 0.185461, -0.893124),
    float2( 0.507431,  0.064425),
    float2( 0.896420,  0.412458),
    float2(-0.321940, -0.932615),
    float2(-0.791559, -0.597705)
};

float4 main(float2 texCoord: TEXCOORD0) : COLOR {
    // center tap ...
    float4 sum = tex2D(renderTexture, texCoord);

    // ... plus 12 offset taps around it
    for (int i = 0; i < 12; i++){
        sum += tex2D(renderTexture, texCoord + sampleDist * offsets[i]);
    }

    // average of all 13 taps
    return sum / 13;
}

jwatte, it is not the same, but yes, the distinction is lost :-). Copy used to be the only option; now that we have a choice, the distinction should be drawn, and I think it could become increasingly significant.

It is helpful when posting a question like this to distinguish, after all the ARB and vendors considered render to texture extensions for a reason. When you’re not satisfied with copy performance (assuming your formats are OK), your only hope of a boost on some hardware (not all) is real render to texture, as opposed to copy :-) As the NVIDIA presentation I linked to says, it is potentially optimal.

[This message has been edited by dorbie (edited 04-09-2003).]

To Humus:
You suggest using glCopyTexImage2D(), but how do I make it work with PDR? Or do we not need PDR at all? This function reads pixels from GL_READ_BUFFER, and I think the pixels are transferred from video memory to system memory and after that to texture memory. Or not?

glCopyTexSubImage2D() will copy directly from your frame buffer to your texture image, without passing Go, and without collecting 200 milliseconds of wasted time :-)

Also, it’s said to be faster to use glCopyTexSubImage2D() rather than glCopyTexImage2D(), assuming you always render the same size, and the texture you render into is pre-allocated with TexImage2D() (it’s OK to pass NULL for the ‘bits’ argument of TexImage2D, by the way, if you intend to later overwrite the texture anyway).
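In code, that pattern is roughly (a sketch; sizes assumed constant from frame to frame):

#include <GL/gl.h>
#include <stddef.h>

/* Once, at init time: allocate the texture without supplying any data;
   NULL is fine since every texel will be overwritten by the copy. */
void createRenderTexture(GLuint tex, int width, int height)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);
}

/* Every frame, after rendering the scene: update the existing image
   in place instead of re-allocating it. */
void updateRenderTexture(GLuint tex, int width, int height)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
}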

dorbie, I agree that render-to-texture and rendered-texture are different, and I took a short-cut in my previous answer to address the original question.

On Linux we don’t have the famous RTT, but I think
I’d rather call it infamous.

We’d better all keep on praying for the superb buffers…