dependent textures - need 16 bits of precision

I’m using GL_DEPENDENT_GB_TEXTURE_2D_NV (the offset texture is generated procedurally, by rendering some stuff with WGL_ARB_render_texture) to perturb another texture. The problem, of course, is the 8-bit precision, and I don’t know how to convert the RGBA image from the pbuffer into a HILO texture or something else with 16 bits of precision.
(I have GeForce3)
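Roughly, the lookup is set up like this (just a sketch, the texture names are placeholders):

    /* assumes ARB_multitexture + NV_texture_shader (GL/glext.h);
       offsetTex and baseTex are existing GLuint texture objects */

    /* stage 0: the procedurally rendered offset texture (filled via WGL_ARB_render_texture) */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, offsetTex);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

    /* stage 1: dependent lookup - the G and B of stage 0 become the (s, t) coords here */
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glBindTexture(GL_TEXTURE_2D, baseTex);   /* the texture being perturbed */
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DEPENDENT_GB_TEXTURE_2D_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    glEnable(GL_TEXTURE_SHADER_NV);

Since the offsets come from an 8-bit RGBA pbuffer, the perturbation is quantized to 256 steps, and that is what I’m trying to get around.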
Anyone have any idea?

I think you can copy to a HILO texture, but it would be pointless because the framebuffer would still keep it clamped at 8 bits per channel.

HILO textures can only be used for dot products. You may be able to get better results with a DS/DT texture or one of its variants, but I’m not sure.

> You may be able to get better results with a DS/DT texture or one of its variants, but I’m not sure.
Unfortunately, you can’t render into a DS/DT texture. You can’t do CopyTexImage with that format either. The same applies to HILO. Effectively, the only texture formats you can render to are RGBA and signed RGBA. Yes, it’s lame.

abartosz, it is possible to get 16-bit precision in the situation you describe. However, the hardest part is rendering the coordinates into the framebuffer (or RTT) with 16-bit precision in the first place. In some cases that can be easy; in others it can be very difficult, or just too slow to be usable in practice.

So, if you solve that problem, then you may try this:

Let’s assume that in the previous passes you rendered into a texture so that it now contains:
R - high 8 bits of the t coord
G - high 8 bits of the s coord
B - low 8 bits of the s coord
A - low 8 bits of the t coord
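(Just to make the packing explicit, here is a hypothetical CPU-side illustration; on the GeForce3 these bytes of course have to come out of your earlier render passes, not from C code:)

    /* illustration only: how one texel encodes the two 16-bit coordinates */
    void pack_coords(unsigned short s16, unsigned short t16, unsigned char rgba[4])
    {
        rgba[0] = t16 >> 8;     /* R: high 8 bits of t */
        rgba[1] = s16 >> 8;     /* G: high 8 bits of s */
        rgba[2] = s16 & 0xFF;   /* B: low 8 bits of s  */
        rgba[3] = t16 & 0xFF;   /* A: low 8 bits of t  */
    }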

Now use these texture shader stages:

t0: TEXTURE_2D
t1: DEPENDENT_AR_TEXTURE_2D previous=t0
t2: DOT_PRODUCT             previous=t0, texcoords: (0, 1, 1/256)
t3: DOT_PRODUCT_TEXTURE_2D  previous=t1, texcoords: (0, 1, 1/256)

where:
t0 is bound to your RTT
t3 is bound to your “dependent” texture
t1 is bound to a simple 256x256 lookup texture which copies the A, R components into G, B respectively.
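A rough sketch of how those four stages could be set up with NV_texture_shader (the texture object names are placeholders, and the (0, 1, 1/256) weights are passed as 3-component texcoords on units 2 and 3):

    /* stage 0: the RTT holding the packed bytes */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, rttTex);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

    /* stage 1: dependent AR lookup into the 256x256 swizzle texture */
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glBindTexture(GL_TEXTURE_2D, swizzleTex);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DEPENDENT_AR_TEXTURE_2D_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    /* stage 2: dot product of stage 0's color with this unit's texcoords */
    glActiveTextureARB(GL_TEXTURE2_ARB);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

    /* stage 3: second dot product (with stage 1's color), then a 2D lookup
       using the two dot results as the (s, t) pair */
    glActiveTextureARB(GL_TEXTURE3_ARB);
    glBindTexture(GL_TEXTURE_2D, dependentTex);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_TEXTURE_2D_NV);
    glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE1_ARB);

    glEnable(GL_TEXTURE_SHADER_NV);

    /* per vertex, the weights for the two dot products: */
    glMultiTexCoord3fARB(GL_TEXTURE2_ARB, 0.0f, 1.0f, 1.0f / 256.0f);
    glMultiTexCoord3fARB(GL_TEXTURE3_ARB, 0.0f, 1.0f, 1.0f / 256.0f);

The point is that each dot product reconstructs one coordinate as high byte + low byte / 256 from its two 8-bit channels, and the final stage uses the two dot results as the (s, t) pair for the lookup into t3.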


Thanks MZ!
I don’t think that trick will work in my case: I’m rendering a few octaves of Perlin noise and summing them together. If the 16-bit values are split into two 8-bit components, I can’t sum them…

Anyway, does the GeForce FX have such limits?
I hope it’s programmer-friendly :)

Cheers!