Detail texturing on the GPU?

Hello,

I’m currently doing some vertex-based detail texturing for my terrain engine. Basically, I’m blending the textures on units 0 and 1 using the vertex alpha. The equation is c = CTex0 * vertex_alpha + CTex1 * (1 - vertex_alpha). Simple enough, and it works fine.
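In case it matters, the blend itself is just the standard GL_INTERPOLATE combiner mode, something like this (a sketch using the *_texture_env_combine tokens in their core 1.3 spelling; the texture ids are placeholders):

```c
/* Unit 0: fetch the base texture. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, base_tex);                /* placeholder id */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

/* Unit 1: GL_INTERPOLATE computes Arg0*Arg2 + Arg1*(1-Arg2), i.e.
   previous * vertex_alpha + detail * (1 - vertex_alpha). */
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, detail_tex);              /* placeholder id */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB,  GL_PREVIOUS);      /* CTex0        */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB,  GL_TEXTURE);       /* CTex1        */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB,  GL_PRIMARY_COLOR); /* vertex alpha */
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);
```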

I’m calculating the vertex alpha values on the host CPU based on the viewpoint-to-vertex distance. I have to stream them (via VAR) to the 3D card every frame, since the alpha values obviously change. That bothers me, since I can’t keep my terrain in video memory (it is 100% static otherwise).
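The per-frame CPU pass is essentially just this (a simplified sketch; the names and the array layout are placeholders):

```c
#include <math.h>

/* Recompute the per-vertex detail alpha from the eye distance:
   1.0 close up (full detail), fading linearly to 0.0 between
   detail_near and detail_far. */
void update_detail_alpha(const float *pos,      /* xyz per vertex          */
                         unsigned char *rgba,   /* colour array, streamed  */
                         int vertex_count, const float eye[3],
                         float detail_near, float detail_far)
{
    float inv_range = 1.0f / (detail_far - detail_near);
    int i;
    for (i = 0; i < vertex_count; ++i) {
        float dx = pos[3 * i + 0] - eye[0];
        float dy = pos[3 * i + 1] - eye[1];
        float dz = pos[3 * i + 2] - eye[2];
        float d  = sqrtf(dx * dx + dy * dy + dz * dz);

        float a = (detail_far - d) * inv_range;   /* linear ramp   */
        if (a < 0.0f) a = 0.0f;                   /* clamp to 0..1 */
        if (a > 1.0f) a = 1.0f;

        rgba[4 * i + 3] = (unsigned char)(a * 255.0f + 0.5f);
    }
}
```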

Now my question: is there any way to calculate the eye-vertex distance, linearly map it to a certain range, and copy it into the vertex alpha, entirely on the GPU? Is it possible using a vertex program, or is there another (more hardware-independent) way?

Thanks a lot!

  • Alex

You can certainly do it with a vertex program. However, you might be able to make clever use of lighting (through a light’s alpha channel) to compute the alpha you need.
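Something along these lines should do it (just a sketch against NV_vertex_program, so GF3-class hardware; the constant register layout, the near/far values and the extension entry-point setup are left to you):

```c
/* c[0]-c[3]: tracked modelview-projection, c[4]-c[7]: tracked modelview,
   c[8] = { far, 1/(far-near), 0, 1 }. */
static const char detail_vp[] =
    "!!VP1.0\n"
    "DP4 o[HPOS].x, c[0], v[OPOS];\n"
    "DP4 o[HPOS].y, c[1], v[OPOS];\n"
    "DP4 o[HPOS].z, c[2], v[OPOS];\n"
    "DP4 o[HPOS].w, c[3], v[OPOS];\n"
    "DP4 R0.x, c[4], v[OPOS];\n"      /* eye-space position        */
    "DP4 R0.y, c[5], v[OPOS];\n"
    "DP4 R0.z, c[6], v[OPOS];\n"
    "DP3 R1.x, R0, R0;\n"             /* d^2                       */
    "RSQ R1.y, R1.x;\n"               /* 1/d                       */
    "MUL R1.z, R1.x, R1.y;\n"         /* d                         */
    "ADD R2.x, c[8].x, -R1.z;\n"      /* far - d                   */
    "MUL R2.x, R2.x, c[8].y;\n"       /* ... / (far - near)        */
    "MAX R2.x, R2.x, c[8].z;\n"       /* clamp to 0..1             */
    "MIN R2.x, R2.x, c[8].w;\n"
    "MOV o[COL0].xyz, v[COL0];\n"     /* keep the vertex colour    */
    "MOV o[COL0].w, R2.x;\n"          /* ... but replace its alpha */
    "MOV o[TEX0], v[TEX0];\n"
    "MOV o[TEX1], v[TEX1];\n"
    "END\n";

GLuint prog;
glGenProgramsNV(1, &prog);
glBindProgramNV(GL_VERTEX_PROGRAM_NV, prog);
glLoadProgramNV(GL_VERTEX_PROGRAM_NV, prog,
                (GLsizei)(sizeof(detail_vp) - 1), (const GLubyte *)detail_vp);
glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0, GL_MODELVIEW_PROJECTION_NV, GL_IDENTITY_NV);
glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 4, GL_MODELVIEW, GL_IDENTITY_NV);
glProgramParameter4fNV(GL_VERTEX_PROGRAM_NV, 8,
                       detail_far, 1.0f / (detail_far - detail_near), 0.0f, 1.0f);
glEnable(GL_VERTEX_PROGRAM_NV);
```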

If the vertex is so far away that it has 0% detail texture, does it still use two texture units?

If it does, then why not create a custom mipmap chain for the detail texture that mips down to zero effect, and just let the hardware apply it to the geometry itself? Then you don’t have to bother updating any vertex alpha values, and you don’t need VPs either.

If you have trilinear filtering on, this will look pretty good.

For instance, if you used the add-signed method of applying detail textures, your lowest mip level would just be 127 all over, resulting in no change.
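To sketch what I mean (the downsampling itself is up to you; the important bit is that each level gets pulled a little further towards neutral grey before you upload it):

```c
/* Pull one mip level of the detail texture towards neutral grey (~0.5).
   fade = 0 for the base level, fade = 1 for the smallest level, so the
   detail effect dies out completely in the distance. */
void fade_detail_level(unsigned char *texels, int count, float fade)
{
    int i;
    for (i = 0; i < count; ++i)
        texels[i] = (unsigned char)(texels[i] + (128.0f - texels[i]) * fade);
}

/* On the detail unit: result = base + detail - 0.5 (GL_ADD_SIGNED), so a
   level of flat grey leaves the base texture unchanged. */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_ADD_SIGNED);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB,  GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB,  GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
```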

Nutty

Thanks for your suggestions.

Nutty, I tried this interesting idea, though with a small modification: I have an ‘attenuation mipmap’ on the alpha channel only, and use it to blend between two normal texture mipmaps. It works, but it doesn’t look that good. The problem is that since mip selection isn’t really distance dependent, the results are somewhat unexpected on sharply angled geometry and when anisotropic filtering is enabled. The other problem is that far-away geometry (with presumably 0% detail texture) is sent through a different code path, which uses both texture units for texture layering instead of detail blending to improve fillrate. The transition between the two renderers is very noticeable. The nice thing about it, though: it’s 100% accelerated on every 3D card. I will keep it as a fallback option.

I like the idea of using a light source. Rather unconventional :slight_smile: But I’m not sure it can be tweaked into doing what I want. The geometry uses vertex colour lighting (radiosity) and has no vertex normals. I would only need the attenuated ambient alpha component of the light source. Can I disable diffuse and specular altogether, set the ambient colour to black with full alpha, and still have my original vertex colours preserved? And all this without vertex normals! Normally the ambient component doesn’t need normals, but I think this is stretching the lighting feature well beyond its normal use, so won’t this totally screw up on me?

  • Alex

Ambient won’t work anyway because it’s independent of distance. You need a proper light. As for normals, just submit one normal, that’s perfectly legal. It will get used for everything (remember the old saying that OpenGL is a state machine).

> Ambient won’t work anyway because it’s independent of distance.

Well, the specs say that attenuation is applied to all three components of the light, ambient included: the whole per-light contribution is scaled by the factor 1 / (kc + kl·d + kq·d²).

> As for normals, just submit one normal, that’s perfectly legal. It will get used for everything (remember the old saying that OpenGL is a state machine).

Right. That would take care of the normal problem.

But, thinking about it, it won’t work either. If I want my vertex colours preserved, I need to use color material and specify a diffuse colour of (1,1,1,1). But then the diffuse dot product will be taken into account and screw up the vertex colours. Is there a way to disable the effect of the diffuse dot product, or the diffuse component altogether? If there were a way to tell OpenGL to only use the attenuated ambient term, the thing would actually work pretty well.
AFAIK, modern 3D cards calculate the radial eye distance of each vertex anyway, for fog. Any chance of using that, pumping it into the alpha component, and using it in the texture combiners (a kind of ‘alpha fog’)?

  • Alex

Try using the emissive component in the lighting equation. Set emissive to (1, 1, 1, 0) to get a white base with alpha of 0, and then use an ambient color of (0, 0, 0, 1) and factor in attenuation to get a varying alpha.
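In GL state that would be roughly this (just a sketch: the attenuation constants are placeholders, and you would still have to sort out the colour-material side so your radiosity vertex colours survive):

```c
GLfloat emission[]    = { 1.0f, 1.0f, 1.0f, 0.0f }; /* white base, alpha 0      */
GLfloat mat_ambient[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat lgt_ambient[] = { 0.0f, 0.0f, 0.0f, 1.0f }; /* alpha 1, gets attenuated */
GLfloat black[]       = { 0.0f, 0.0f, 0.0f, 0.0f };
GLfloat eye_pos[]     = { 0.0f, 0.0f, 0.0f, 1.0f }; /* light sitting at the eye */

glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, emission);
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT,  mat_ambient);
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, black);      /* keep global ambient out  */

glLightfv(GL_LIGHT0, GL_AMBIENT,  lgt_ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE,  black);           /* no diffuse, no specular  */
glLightfv(GL_LIGHT0, GL_SPECULAR, black);

glMatrixMode(GL_MODELVIEW);                         /* position in eye space    */
glPushMatrix();
glLoadIdentity();
glLightfv(GL_LIGHT0, GL_POSITION, eye_pos);
glPopMatrix();

/* Tune these so the alpha has fallen off by your "no detail" distance. */
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION,  1.0f);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION,    0.05f);
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.0f);

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
```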

j

OK, the emissive colour thing did it. Thanks for pointing that out!

I have to do some performance tests, though. I wrote a quick vertex program to do the effect, and it is extremely fast on a GF3. I’m not sure what to use on GF2 and other cards without hardware VP support. The light trick works now, but it seems to come with a big performance hit. I will do some timing, comparing the light trick against optimized CPU distance computation and streaming.

Slightly OT question here: can I keep different parts of my vertex attribute arrays (vertices, texcoords) in video memory, but still have the colour array VAR’ed over AGP?

  • Alex