EXT_paletted_texture not supported on GeForceFX?

Hi everyone.

Excuse my poor English…

My program uses EXT_paletted_texture and EXT_shared_texture_palette on a GeForce4 Ti to look up texture color and opacity.

But the GeForceFX does not seem to support these extensions.
(checked via glGetString(GL_EXTENSIONS))

Is this a driver bug, or will these extensions never be supported on GeForceFX?

Besides, there is another method for texture lookup (NV_texture_shader, dependent texture), but I do not use the texture-shader-style lookup because its post-filtering mechanism causes artifacts.
(I want a pre-filtered texture lookup.)
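
For reference, my setup is roughly like the sketch below (simplified; the palette and index buffers are just placeholders, and I assume an extension loader resolves glColorTableEXT):

```c
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_COLOR_INDEX8_EXT, GL_SHARED_TEXTURE_PALETTE_EXT */

/* Crude substring test against the extension string (good enough here). */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

/* palette_rgba: 256 RGBA entries; indices: w*h bytes (placeholder buffers). */
static void upload_paletted_texture(const unsigned char *palette_rgba,
                                    const unsigned char *indices, int w, int h)
{
    if (!has_extension("GL_EXT_paletted_texture") ||
        !has_extension("GL_EXT_shared_texture_palette"))
        return;  /* fall back to something else */

    /* One shared palette used by every paletted texture. */
    glColorTableEXT(GL_SHARED_TEXTURE_PALETTE_EXT, GL_RGBA8, 256,
                    GL_RGBA, GL_UNSIGNED_BYTE, palette_rgba);
    glEnable(GL_SHARED_TEXTURE_PALETTE_EXT);

    /* The texture itself stores only 8-bit indices into that palette. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COLOR_INDEX8_EXT, w, h, 0,
                 GL_COLOR_INDEX, GL_UNSIGNED_BYTE, indices);
}
```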

No, it doesn’t support paletted textures.

You can easily emulate a paletted look-up by using dependent reads into a texture that’s 256 texels wide and 1 texel tall, and is set to filter mode NEAREST.
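
A rough sketch of that idea, assuming ARB_fragment_program and an extension loader for the program entry points (texture unit 0 is assumed to hold the 8-bit indices as LUMINANCE8, unit 1 the 256x1 palette):

```c
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* 256x1 RGBA8 palette texture; NEAREST so the dependent read is an exact
   table fetch rather than a blend of neighboring palette entries. */
static GLuint make_palette_texture(const unsigned char *palette_rgba /* 256*4 bytes */)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, palette_rgba);
    return tex;
}

/* Fetch the index from unit 0 (LUMINANCE8, so the value lands in all
   channels), then use it as the texture coordinate into the palette on
   unit 1.  An index i stored as i/255 falls inside texel i of a 256-wide
   clamped texture, so NEAREST returns the exact entry. */
static const char *lookup_fp =
    "!!ARBfp1.0\n"
    "TEMP index;\n"
    "TEX index, fragment.texcoord[0], texture[0], 2D;\n"
    "TEX result.color, index, texture[1], 2D;\n"
    "END\n";

static GLuint load_lookup_program(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(lookup_fp), lookup_fp);
    /* Enable GL_FRAGMENT_PROGRAM_ARB while drawing with this program bound. */
    return prog;
}
```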

Using a post-filtering lookup and nearest filtering of the texture will not yield the desired pre-filtering on its own. You will still need to implement the filtering, i.e. bilinear filtering, in the fragment program after the texture lookup. I remember there are examples of how to implement a bilinear filter in the extension specifications (under floating-point textures, I think).

Best Regards,

Niels

Thank you.

I will try to learn about and implement a fragment program.

Does anybody know if the paletted_texture extension is discontinued? Is this happening only on GeForce FX, or is it still available on GeForce4? Will it come back in later driver versions?

Klaus

Yes, you have to implement whatever filter kernel you want. They even provide the LRP instruction to make it easy :slight_smile:

Originally posted by ysatou:
GeForceFX does not seem to support these extensions.
(checked via glGetString(GL_EXTENSIONS))

Damn! This is the worst news I ever heard. (The fixed pipeline being kicked out and badly emulated was already bad enough, btw…)

What about GL_LUMINANCE? What are the supported hardware internal formats?

:((

Originally posted by jwatte:
Yes, you have to implement whatever filter kernel you want. They even provide the LRP instruction to make it easy :slight_smile:

Well, sure it’s possible to implement pre-filtered classification in a fragment program. Just classify 8 nearest-neighbor samples in a 3D texture using a 1D dependent texture and do 7 linear interpolations. However, there will be a slight performance problem :wink:
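
Spelled out as a plain CPU reference (hypothetical names; a fragment program would do the same eight classified fetches and seven LRPs per fragment):

```c
typedef struct { float r, g, b, a; } RGBA;

static RGBA lerp(RGBA a, RGBA b, float t)
{
    RGBA o = { a.r + t * (b.r - a.r), a.g + t * (b.g - a.g),
               a.b + t * (b.b - a.b), a.a + t * (b.a - a.a) };
    return o;
}

/* Pre-filtered classification at (x,y,z) in voxel coordinates: classify the
   8 nearest voxels through the palette first, then interpolate the resulting
   colors.  vol is a dimx*dimy*dimz array of 8-bit indices. */
static RGBA classify_then_filter(const unsigned char *vol,
                                 int dimx, int dimy, int dimz,
                                 const RGBA palette[256],
                                 float x, float y, float z)
{
    int x0 = (int)x, y0 = (int)y, z0 = (int)z;
    int x1 = x0 + 1 < dimx ? x0 + 1 : x0;
    int y1 = y0 + 1 < dimy ? y0 + 1 : y0;
    int z1 = z0 + 1 < dimz ? z0 + 1 : z0;
    float fx = x - x0, fy = y - y0, fz = z - z0;

#define VOXEL(i, j, k) palette[vol[((k) * dimy + (j)) * dimx + (i)]]
    RGBA c000 = VOXEL(x0, y0, z0), c100 = VOXEL(x1, y0, z0);
    RGBA c010 = VOXEL(x0, y1, z0), c110 = VOXEL(x1, y1, z0);
    RGBA c001 = VOXEL(x0, y0, z1), c101 = VOXEL(x1, y0, z1);
    RGBA c011 = VOXEL(x0, y1, z1), c111 = VOXEL(x1, y1, z1);
#undef VOXEL

    /* 4 + 2 + 1 = 7 linear interpolations. */
    RGBA c00 = lerp(c000, c100, fx), c10 = lerp(c010, c110, fx);
    RGBA c01 = lerp(c001, c101, fx), c11 = lerp(c011, c111, fx);
    RGBA c0  = lerp(c00, c10, fy),   c1  = lerp(c01, c11, fy);
    return lerp(c0, c1, fz);
}
```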

Does anybody know whether there is a reason for discontinuing paletted textures on GeForceFX/4(?)? At least they were available up to driver 42.x.

Filtering for LUMINANCE8 textures is still available. As soon as you go to higher precision (e.g. LUMINANCE16) you have to implement filtering yourself. The only filtered high-precision format seems to be HILO textures.

Klaus,

That’s, what, 13 fragment instructions for the filter? On an 8-pipe card running at 300 MHz of “effective fragment instructions”, and assuming there’s no texture access latency, you’d get about 184,615,384 textured fragments through per second. At 1024x768 output resolution, that’s over 200 frames per second.
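
The back-of-the-envelope arithmetic, if anyone wants to rerun it with different assumptions:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed: 8 pixel pipes, 300 MHz, 1 fragment instruction per pipe per
       clock, a 13-instruction filter, no texture latency, no overdraw. */
    double instr_per_sec = 8.0 * 300e6;              /* 2.4e9 */
    double frags_per_sec = instr_per_sec / 13.0;     /* ~184.6 million */
    double fps = frags_per_sec / (1024.0 * 768.0);   /* ~235 */
    printf("%.0f fragments/s -> %.0f fps at 1024x768\n", frags_per_sec, fps);
    return 0;
}
```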

Seems to be high-performance enough to me.

In my opinion, 180 megafragments per second is very slow - especially for volume rendering applications, which might be one of the main application areas for paletted textures.

Didn’t we already have the era of GTS “gigatexel shaders” some years ago?

For volume rendering you simply cannot afford 8 texture lookups and 8 dependent texture lookups for simple pre-filtered classification.

BTW: 8+8+7=23 operations.

[This message has been edited by KlausE (edited 04-18-2003).]

That’s, what, 13 fragment instructions for the filter? On an 8-pipe card running at 300 MHz of “effective fragment instructions”, and assuming there’s no texture access latency, you’d get about 184,615,384 textured fragments through per second.

First, I’m not sure where 13 opcodes is coming from.

Secondly, “assuming there’s no texture access latency” is a poor assumption.

Third, consider that a GeForce4 could use paletted textures at the full speed of non-paletted ones. It’s not like they implemented it in texture shaders; they had dedicated hardware for it. This method, without question, is slower than paletted textures.

At 1024x768 output resolution, that’s over 200 frames per second.

If you’re drawing every pixel only once. With any kind of overdraw, let alone antialiasing, you can expect this to drop significantly.

It’s fairly easy to get close to 1:1 fragment shading, assuming the hierarchical early Z tests do their job.

Assuming no texture latency is not so bold an assumption as you might think. The 8-bit texture will clearly show excellent locality and thus should cache extremely well. The dependent read seems like it would depend a lot on how different each of the samples were. Even so, a 256 pixel texture isn’t that big – it may conceivably fit in “near” texture “cache” memory on modern cards. Especially if they’re sized to support 16 separate texture targets for DX9…

Last, I’d be interested in seeing whether the GeForce4 Ti would actually run paletted, filtered 3D textures as fast as 2D textures. My intuition tells me it wouldn’t (but I don’t have one within arm’s reach to whip out the test case).

Anyway, I’m just trying to show that it’s not The End Of The World As We Know It just because the paletted texture extension isn’t supported anymore. But I suppose a perfectly valid alternate solution would be simply to spend the 4x VRAM and store it as RGBA8. These cards come with a minimum of 128 MB, and only last year, 32 MB was the norm.
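
That fallback is trivial to write, for what it’s worth (a sketch; the buffer names are placeholders):

```c
#include <stdlib.h>
#include <GL/gl.h>

/* Expand an 8-bit indexed image through a 256-entry RGBA palette and upload
   it as a plain RGBA8 texture -- 4x the memory, no extensions required. */
static void upload_expanded(const unsigned char *indices, int w, int h,
                            const unsigned char *palette_rgba /* 256*4 bytes */)
{
    unsigned char *rgba = malloc((size_t)w * h * 4);
    if (!rgba)
        return;
    for (int i = 0; i < w * h; ++i) {
        const unsigned char *entry = palette_rgba + 4 * indices[i];
        rgba[4 * i + 0] = entry[0];
        rgba[4 * i + 1] = entry[1];
        rgba[4 * i + 2] = entry[2];
        rgba[4 * i + 3] = entry[3];
    }
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    free(rgba);
}
```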

Paletted textures will not be supported on GeForceFX. While the functionality obviously has uses, it consumed a disproportionate amount of real estate relative to the number of applications that made use of it.

Hi Cass,

At least a definitive answer from Nvidia… I was never a big fan of pre-filtered classification. However, for backward compatibility reasons, having that feature would be good.

Paletted textures were supported up to driver 42.x. At least the silicon seems to be there… found a better use for the silicon? :wink:

Anyway, I’m just trying to show that it’s not The End Of The World As We Know It just because the paletted texture extension isn’t supported anymore. But I suppose a perfectly valid alternate solution would be simply to spend the 4x VRAM and store it as RGBA8. These cards come with a minimum of 128 MB, and only last year, 32 MB was the norm.

Maybe someone had plans for that 4x more memory, like doubling the resolution of all their textures. That’s a far better use of it, for a texture that palettes well, than up-resing it to 8 bits per channel.

Also, that pretty much means the only decent texture compression option available is DXT. While it is a good format, some textures palette better than they DXT. It was always nice to have the option of using paletted textures.

Next thing you know, they’ll be dropping DXT support and telling you to decompress them in the fragment shader

I understand that some textures work better with a palette than with DXTC. I suppose they just made the call that DXTC is sufficient.

Also, I’m sure there are some scientist-y types that use 8 bit volume data and want to map that to a color ramp or something. However, that kind of data may actually filter fine pre-lookup, so you only get one dependent read after LRP-ing the pre-lookup gradient data.

Luckily, DXT1 compresses even better than 8 bit paletted data, so those high-res images should be no problem in that format :slight_smile:

Originally posted by cass:

Paletted textures will not be supported on GeForceFX. While the functionality obviously has uses, it consumed a disproportionate amount of real estate relative to the number of applications that made use of it.

Yeah, I see: since a lot of “professional” applications use palettized textures, you want to force the users to buy the more expensive Quadro version, which just happens to be a plain FX.

It’s a kind of magic!

You’d better run before they find you for figuring out their horrible secret!

Originally posted by Korval:
Next thing you know, they’ll be dropping DXT support and telling you to decompress them in the fragment shader

They better not!!