texture blending

the problem is this:
i need to blend (standard alpha blending) two or more textures in real time, using a single-pass algorithm.

i need it for a terrain renderer, where i can say where there is grass and where there are stones.

i could use multitexturing, but i didn't find a way to efficiently specify the blending factor.
i thought the alpha channel could help me here, so i set things up like this:

tu1 = rgb texture, “grass”
tu2 = rgba texture, “stones”

by manually specifying the alpha component of tu2, i can actually blend the two textures with high freedom… but low efficiency, since the various textures are just tiles, repeated over the whole landscape.
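for reference, the setup i mean would look roughly like this (just a sketch, assuming ARB_multitexture and ARB_texture_env_combine are exposed; grass_tex and stones_tex are placeholder texture ids):

/* unit 0: plain grass */
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, grass_tex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

/* unit 1: lerp(grass, stones, stones.alpha) */
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, stones_tex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_INTERPOLATE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);      /* stones rgb   */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB); /* grass rgb    */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB, GL_TEXTURE);      /* blend factor */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_ARB, GL_SRC_ALPHA);   /* = stones.a   */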

is there a way to blend 3 textures together? something like:

tu1 = rgb, “grass”
tu2 = alpha, large scale, low res blending map
tu3 = rgb, “stones”

…no, i think this won’t work.

another approach i'm thinking of is to use the mipmapping capabilities.

since i can specify mipmaps of any size, i could build a texture object whose various mip levels are my textures, suppose 4 of them.
then i could use the LINEAR_MIPMAP_LINEAR filter…
now the problem is: how can i specify the mip level independently of the one automatically selected by gl?
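maybe clamping lambda with the GL 1.2 lod limits could do it? a sketch (assuming the driver exposes GL 1.2 or SGIS_texture_lod; layered_tex is a placeholder id):

/* with LINEAR_MIPMAP_LINEAR, pinning lambda to a fractional value
 * like 2.5 forces a 50/50 blend of mip levels 2 and 3, regardless
 * of the level gl would have picked on its own */
glBindTexture(GL_TEXTURE_2D, layered_tex);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, 2.5f);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, 2.5f);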

i need suggestions! (even weird ideas like the above)

PS:
i know 3D texturing, which is definitely what i need, but i believe it is not supported on nvidia/matrox cards.
am i right?

DMY

u might wanna take a looksy here http://members.nbci.com/myBollux

i know 3D texturing, which is definitely what i need, but i believe it is not supported on nvidia/matrox cards.
am i right?

Just curious: How can it help you?

You have to provide more info:
Can you use per-vertex alpha? (Since you mentioned 3D textures, I assume you're ok with a per-vertex blending factor.)
And what about lighting?

zed:
i'll take a look, thanks.

serge:
how can 3D texturing help me?

i need to blend some textures together in RT, so when 3D texturing becomes available, i'll go that way.

with 3D texturing, to identify a texel you have to specify 3 coords (s,t,r in gl):
the idea is to have a solid texture made up of 4 or more layers of seamless textures.

the engine will then specify the s and t coords to tile the textures over the landscape, and also the r coordinate to select the material, basically depending on height.

about alpha: with solid texturing it is not needed.
it is needed instead if i use other rendering schemes, like multipass or maybe multitexturing…
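in code, the idea would be roughly this (just a sketch, assuming a GL 1.2 context with 3D texturing; material_tex, terrain_height(), MAX_HEIGHT, tile_scale and the vertex arrays are all placeholders):

glEnable(GL_TEXTURE_3D);
glBindTexture(GL_TEXTURE_3D, material_tex);      /* 4+ stacked layers */
glBegin(GL_TRIANGLES);
for (i = 0; i < nverts; i++) {
    /* s,t tile over the landscape; r in [0,1] picks the material */
    float r = terrain_height(i) / MAX_HEIGHT;
    glTexCoord3f(vx[i] * tile_scale, vz[i] * tile_scale, r);
    glVertex3f(vx[i], vy[i], vz[i]);
}
glEnd();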

ok, you’re talking about multitexturing.

yes, the blending factor is currently designed to be specified on a per-vertex basis.

so, i made various tests with multitexturing, but i didn't get any successful result in blending the two textures using alpha as the blending factor.

maybe i missed something when i first read the specs on combiners… i'll look again.

have you done any tests on this application of multitexturing?
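for the record, the wiring i understood from the spec is something like this (assuming ARB_texture_env_combine, with the per-vertex factor in primary alpha; only unit 1 and the factor are shown). tell me if this is wrong:

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_INTERPOLATE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);           /* stones */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB);      /* grass  */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB, GL_PRIMARY_COLOR_ARB); /* factor */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_ARB, GL_SRC_ALPHA);

/* then, per vertex: */
glColor4f(1.0f, 1.0f, 1.0f, blend);  /* blend in [0,1] */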

DMY

dmy,

I don't understand why you're so reluctant to use multiple passes, but here's a suggestion for NVIDIA hardware:

  • Make both the grass and stone textures RGBA and use f(grass.alpha + stone.alpha) as your lerp factor.

The f() function would be some function you could make with register_combiners. For example:

const0 = (0, 0, 0, .5);

{ // combiner 0: spare0.a = 4 * (expand(tex0.a) + expand(tex1.a))
    alpha
    {
        discard = expand(tex0);   // expand maps [0,1] alpha to [-1,1]
        discard = expand(tex1);
        spare0  = sum();
        scale_by_four();          // steepens the ramp
    }
}
{ // combiner 1: bias the result by +0.5
    alpha
    {
        discard = spare0;
        discard = const0;
        spare0  = sum();
    }
}
// final combiner: lerp the two textures by the computed factor
out.rgb = spare0.a * tex0 + (1 - spare0.a) * tex1;
out.a   = zero;

Something like that would give you patches of stone and grass that vary as a function of both textures, which could look much less periodic than your current approach, but it is not as independent as using 3 textures (which would be easy with multiple passes, BTW).

Thanks -
Cass

well… it's not that i'm reluctant about multipass algos… on the contrary, i like how they work, and also the elegance of the code they can produce.
simply, i can afford to spend time on this, and i wish to do it with a single-pass approach.
see it as personal research.

however, the approach you suggested seems to use a texel-level blending mask, which is embedded as an alpha layer into the texture.

if i'm right, it is not what i actually need, because the system i'm writing produces unlimited landscapes in realtime, via Perlin functions.

so the textures can't have an alpha channel that is directly coupled to the terrain layout.

if i were to use a texture's alpha channel as a "blending controller", i'd have to change its alpha image (every texel) on every tile repetition, something i don't wish to do.

the ideal thing is to have the two textures as rgb, plus a third grayscale texture as the "blending controller", which tells what percentage of each of the two textures goes into the final fragment.

then only the blender (let's call it that) has to be larger than the tiles, and it can easily be changed according to the landscape characteristics and camera movement.
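if an extension let me source the other units' textures in the combiner (ARB_texture_env_crossbar, or the equivalent sources from NV_texture_env_combine4), the setup i'm picturing would be something like this sketch (blend_tex, grass_tex, stones_tex are placeholder ids):

glActiveTextureARB(GL_TEXTURE0_ARB);             /* tu1: grass rgb */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, grass_tex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

glActiveTextureARB(GL_TEXTURE1_ARB);             /* tu2: low-res GL_ALPHA blender */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, blend_tex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);  /* pass grass through */

glActiveTextureARB(GL_TEXTURE2_ARB);             /* tu3: stones rgb */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, stones_tex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_INTERPOLATE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);       /* stones          */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB);  /* grass           */
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB, GL_TEXTURE1_ARB);  /* blender's alpha */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_ARB, GL_SRC_ALPHA);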

DMY


Well, I guess you need 3 textures, then. Or you'll need to do this multipass with blending.

i think what you're trying to do is impossible without doing multiple passes. it might be possible with register combiners in one pass, though Cass would know that better than me (BTW, that shader parser looks quite sweet). have a look at the combine + combine4 extension demos on my site. and just for u, the textures used are grass and stones.