tex2Dlod

HLSL has a useful function, tex2Dlod(…), for manually selecting the mip level when sampling a mipmapped texture.

Apparently, in GLSL, this function only exists for vertex shaders…

I'm trying to find some way to do this in my fragment shader.

Can anyone help?
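For reference, a minimal sketch of what is being asked for (sampler and varying names are placeholders). The Lod call shown here is exactly what the spec disallows in the fragment stage:

```glsl
// HLSL (SM3) allows explicit mip selection in the pixel shader:
//   float4 c = tex2Dlod(s, float4(uv, 0.0, lod));

uniform sampler2D tex;   // placeholder name
varying vec2 uv;         // placeholder name

void main()
{
    // What the poster would like to write in a fragment shader;
    // unextended GLSL only permits texture2DLod in vertex shaders.
    gl_FragColor = texture2DLod(tex, uv, 2.0); // 2.0 = explicit mip level
}
```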

The optional third parameter ("bias") of texture2D() can be used to offset the automatically calculated LOD.
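A sketch of that bias form, which is legal in fragment shaders (uniform and varying names are placeholders):

```glsl
uniform sampler2D tex;   // placeholder name
varying vec2 uv;         // placeholder name

void main()
{
    // The third argument biases the hardware-computed LOD:
    //   lod_used = lod_auto + bias  (then clamped to the level range)
    gl_FragColor = texture2D(tex, uv, 1.5);
}
```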

That would be fine if I knew which level was going to be selected automatically, but I need to guarantee exactly which level I sample…

Maybe I'll just have to use Cg…

I am not sure if Cg has that (perhaps in a recent update)?
Even if it does, I would be curious how it compiles to assembly, as even on the DX side of the fence, mip level selection seems to be Shader Model 3 only…? (Or perhaps they do something with the bias?)

I found texture2DLod() to work just fine in fragment shaders:
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=11;t=000172

Originally posted by Tom Nuydens:
I found texture2DLod() to work just fine in fragment shaders:
http://www.opengl.org/discussion_boards/cgi_directory/ultimatebb.cgi?ubb=get_topic;f=11;t=000172

The spec language is NOT in error:

The built-ins suffixed with “Lod” are allowed only in a vertex shader.

Unextended OpenGL Shading Language does NOT permit texture2DLod in a fragment shader. An implementation may only offer it as extended behavior, which a shader must request with a #extension preprocessor directive, because the initial state of the compiler is as if the directive

#extension all : disable

was issued…
NVIDIA’s current implementation appears to continue to have a bug where all extensions are enabled by default.
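In source, the extension mechanism the spec describes looks like this (GL_XXX_example is a placeholder, not a real extension name):

```glsl
// The compiler starts as if "#extension all : disable" had been
// issued, so extended behavior must be requested explicitly:
#extension GL_XXX_example : enable   // placeholder extension name
```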

Also, try OpenGL Shading Language Validate 1.5.

http://developer.3dlabs.com/downloads/glslvalidate/

You’ll find it now correctly fails on texture2DLod in a fragment shader. This was fixed in the Front-End Compiler Open Source in January 2005, “texture look up functions appropriately split between vertex and fragment shaders.”

FWIW, a documented extension that permitted texture functions suffixed with LodEXT in the fragment shader could be valuable.
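If such an extension existed, usage might look something like this. Both the extension name and the texture2DLodEXT function are hypothetical spellings of the proposal above, not shipping features:

```glsl
#extension GL_EXT_texture_lod : enable   // hypothetical extension name

uniform sampler2D tex;   // placeholder name
varying vec2 uv;         // placeholder name

void main()
{
    // Hypothetical Lod-suffixed lookup allowed in the fragment stage
    gl_FragColor = texture2DLodEXT(tex, uv, 3.0);
}
```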

-mr. bill

I know the functionality exists in Shader Model 3 - it's in Microsoft's ASM instruction listing:

msdn ps3 instruction set

Why did 3DLabs elect to allow no ASM usage in GLSL? It can be a really useful feature of HLSL and Cg…

Originally posted by carl_lewis:

Why did 3DLabs elect to allow no ASM usage in GLSL? It can be a really useful feature of HLSL and Cg…

I'm not from 3DLabs, but I know one or two good reasons that could explain that:
a) GLSL has nothing to do with ASM.
b) GLSL is much more complex than ASM (think of loops and branches), and there are many GLSL features that do not exist in ASM.
c) GLSL MUST NOT depend on other shader extensions.
d) 3DLabs is the only vendor that follows the GLSL philosophy and compiles it directly to machine code, without going through ASM first.

d) 3DLabs is the only vendor that follows the GLSL philosophy and compiles it directly to machine code, without going through ASM first.
Bollocks. The GLSL philosophy is portable high-level shaders. Compiling to ARB assembly or not is irrelevant, notwithstanding the fact that for some vendors ARB assembly is the native instruction set, or damned close.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.