View Full Version : texture*Lod() functions



Tom Nuydens
04-27-2004, 08:29 AM
GLSL spec, section 8.7:
"The built-ins suffixed with “Lod” are allowed only in a vertex shader. For the “Lod” functions, lod is directly used as the level of detail."

Both 3DLabs' GLSL Validate tool and NVidia's GLSL implementations accept a fragment shader with texture*Lod() calls just fine. Rightfully so, I might add, because these functions are a valuable tool for shader antialiasing. Can anyone comment on why this peculiar restriction exists in the spec?

-- Tom

evanGLizr
04-27-2004, 09:37 AM
Originally posted by Tom Nuydens:

"GLSL spec, section 8.7:
The built-ins suffixed with “Lod” are allowed only in a vertex shader. For the “Lod” functions, lod is directly used as the level of detail.
Both 3DLabs' GLSL Validate tool and NVidia's GLSL implementations accept a fragment shader with texture*Lod() calls just fine. Rightfully so, I might add, because these functions are a valuable tool for shader antialiasing. Can anyone comment on why this peculiar restriction exists in the spec?
-- Tom"

I believe the intended wording (if any) is:

"Only the built-ins suffixed with “Lod” are allowed in a vertex shader."

This is because in a vertex shader there's no automatic LOD calculation (there's no rasterization going on, so the derivatives needed for the LOD cannot be calculated), so you always have to provide the LOD yourself.

This ties in with what the beginning of the section says: "However, level of detail is not computed by fixed functionality for vertex shaders, so there are some differences in operation between vertex and fragment texture lookups." Although it also says: "If it is mip-mapped and running on the vertex shader, then the base texture is used." Go figure.

I agree with you that being able to specify the LOD in the fragment shader is an important feature, especially since textures may be generic memory buffers that aren't necessarily accessed with texture coordinates (summed-area tables, hierarchical buffers, etc.).

Zengar
04-27-2004, 10:36 AM
I would explain it as follows:

The GeForce FX allows you to use explicit LOD in its fragment shaders (I forget the instruction, but it's there, I'm sure). Radeons can only scale the existing LOD (they also have no derivative functions). The ARB had to agree on a minimal spec that could feasibly be implemented by any existing card.

Correct me if I'm mistaken about the NV_fragment_program spec. It's been quite a while since I've used it...

Korval
04-27-2004, 11:34 AM
The spec does allow you to use LOD functions in the fragment program, as evanGLizr pointed out. All the line is saying is that you have to give vertex shader textures an explicit LOD, for fairly obvious reasons.

Now, if Radeons don't do this in hardware yet, then using these functions might throw you to software rendering (or crash, since it's not a final implementation yet). So be careful with them.

Tom Nuydens
04-27-2004, 10:41 PM
Originally posted by evanGLizr:
"Only the built-ins suffixed with “Lod” are allowed in a vertex shader"

That does make a lot more sense, although, as you say, that whole section is worded kind of confusingly.

Korval, I'm aware that what I'm doing will make Radeons run my shader in software, but it's either that or suffer horrible aliasing ;)

Thanks,

-- Tom

jeickmann
04-28-2004, 04:00 AM
What about using LOD bias? (I think it became core in OpenGL 1.4, but I've never used it.)

Jan

Tom Nuydens
04-28-2004, 07:22 AM
My particular shader really requires that I specify the LOD myself to get good antialiasing. I experimented with biasing, but couldn't manage to get good results at all.

-- Tom

-NiCo-
05-03-2004, 07:45 AM
Hi Tom,

I don't know about ATI cards, but the next generation of fragment shaders for Nvidia cards (NV_fragment_program2) will support an explicit-LOD texture lookup through a new instruction called TXL.

You can find a pdf file explaining some of these new extensions at http://download.nvidia.com/developer/presentations/GDC_2004/gdc_2004_OpenGL_NV_exts.pdf

-NiCo-
05-03-2004, 08:01 AM
For now, you can bind the same texture to different texture units and constrain the LOD by setting each unit's TEXTURE_BASE_LEVEL and TEXTURE_MAX_LEVEL to the same value.

When using GL_LINEAR_MIPMAP_NEAREST or GL_NEAREST_MIPMAP_NEAREST, the value d (which selects the mipmap level) in equation (3.23) of the OpenGL 1.5 spec is always between level_base and q,

where q is defined as min{p, level_max}
and p is defined as max{n, m, l} + level_base, with n, m, and l greater than or equal to zero.

So setting level_base so that it equals level_max will always result in d = level_base = level_max.
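That clamping can be sketched numerically. The following is a rough sketch of the *_MIPMAP_NEAREST branch of the spec's level-selection rule, not the spec text itself; the function name mipmap_level is made up, and p (the largest defined level) is passed in directly rather than derived from the texture dimensions:

```python
import math

def mipmap_level(lam, level_base, level_max, p):
    """Mipmap array selector d, following the OpenGL 1.5 rule for the
    *_MIPMAP_NEAREST minification filters.
    lam        -- the level-of-detail value (lambda)
    level_base -- TEXTURE_BASE_LEVEL
    level_max  -- TEXTURE_MAX_LEVEL
    p          -- max{n, m, l} + level_base (largest defined level)
    """
    q = min(p, level_max)
    if lam <= 0.5:
        return level_base
    if lam <= q - level_base + 0.5:
        return math.ceil(level_base + lam + 0.5) - 1
    return q

# With base == max, d is pinned to that level regardless of lambda:
assert all(mipmap_level(lam, 3, 3, 9) == 3 for lam in (0.0, 1.7, 42.0))
```

With distinct base and max levels the middle branch picks the level from lambda as usual; collapsing them to one value forces every lookup to that single level, which is the trick described above.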

Don't forget to restore the base level when updating the texture if you're using automatic mipmap generation.

N.

Tom Nuydens
05-03-2004, 08:18 AM
Originally posted by -NiCo-:
"I don't know about Ati cards, but the next generation of fragment shaders for Nvidia cards (NV_fragment_program2) will support an explicit lod texture lookup through a new instruction called TXL"

That's exactly what the texture2DLod() function in GLSL does, and it runs just fine on my NV30 card.

-- Tom