Does not Support EXT_paletted_texture on GeForceFX?



ysatou
04-09-2003, 04:50 PM
Hi everyone.

Excuse my poor English.

My program uses EXT_paletted_texture and EXT_shared_texture_palette on a GeForce4 Ti to look up texture color and opacity.

But the GeForceFX may not support these extensions (checked with glGetString(GL_EXTENSIONS)).

Is this a driver bug, or will these extensions never be supported on the GeForceFX?

Besides, there is another method for texture lookup (NV_texture_shader, dependent textures), but I do not use the texture-shader-style lookup because its post-filtering mechanism causes artifacts. (I want a pre-filtered texture lookup.)

xfang11790
04-09-2003, 04:58 PM
No, it doesn't support paletted textures.

jwatte
04-09-2003, 07:43 PM
You can easily emulate a paletted look-up by using dependent reads into a texture that's 256 texels wide and 1 pixel tall, and is set to filter mode NEAREST.
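
For illustration, a minimal sketch of that emulation path (my own example, not anything from NVIDIA; the texture sizes, unit assignments and the ARB_fragment_program route are assumptions). The index texture sits on unit 0 as GL_LUMINANCE8, the palette on unit 1; a 1D palette behaves the same as the 256x1 2D texture described above. The 255/256 scale and 0.5/256 bias push each 8-bit index onto a palette texel centre so NEAREST returns the exact entry.

#include <GL/gl.h>

/* Dependent-read palette emulation (sketch).  The fragment program is loaded
   with glProgramStringARB in the usual way (not shown here). */
static const char *palette_lookup_fp =
    "!!ARBfp1.0\n"
    "PARAM fix = { 0.99609375, 0.001953125, 0.0, 0.0 };\n"  /* 255/256, 0.5/256 */
    "TEMP index;\n"
    "TEX index, fragment.texcoord[0], texture[0], 2D;\n"    /* fetch the 8-bit index     */
    "MAD index.x, index.x, fix.x, fix.y;\n"                 /* centre on a palette texel */
    "TEX result.color, index, texture[1], 1D;\n"            /* dependent palette read    */
    "END\n";

/* 256-entry RGBA palette in a 1D texture, NEAREST so no entries get blended. */
static void upload_palette(GLuint tex, const GLubyte rgba[256 * 4])
{
    glBindTexture(GL_TEXTURE_1D, tex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}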

Husted
04-09-2003, 10:21 PM
Using a post-filtering lookup and nearest filtering of the texture will not yield the desired pre-filtering on its own. You will still need to implement the filtering, e.g. bilinear filtering, in the fragment program after the texture lookup. I remember there are examples of how to implement a bilinear filter in the extension specifications (under floating-point textures, I think).

Best Regards,

Niels

ysatou
04-14-2003, 10:27 PM
Thank you.

I will try to learn and implement a fragment-program-based approach.

KlausE
04-17-2003, 02:56 PM
Does anybody know if the paletted_texture extension has been discontinued? Is this happening only on the GeForce FX, or is it still available on the GeForce4? Will it come back in later driver versions?

Klaus

jwatte
04-17-2003, 06:56 PM
Yes, you have to implement whatever filter kernel you want. They even provide the LRP instruction to make it easy :-)
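
As a concrete sketch of what such a filter kernel looks like (my own illustration with assumed sizes, not anyone's shipping code), here is the 2D case written out as an ARB_fragment_program string: four NEAREST index fetches, four dependent palette reads, then three LRPs. For a 3D texture the same idea needs eight taps and seven LRPs.

/* Manual bilinear reconstruction of a paletted 2D texture (sketch).
   texture[0]: 256x256 GL_LUMINANCE8 index map, GL_NEAREST (size assumed).
   texture[1]: 256-entry RGBA palette (1D), GL_NEAREST.                    */
static const char *bilinear_palette_fp =
    "!!ARBfp1.0\n"
    "PARAM size    = { 256.0, 256.0, 0.0, 0.0 };\n"
    "PARAM invsize = { 0.00390625, 0.00390625, 0.0, 0.0 };\n"
    "PARAM lutfix  = { 0.99609375, 0.001953125, 0.0, 0.0 };\n"  /* 255/256, 0.5/256 */
    "TEMP st, w, t10, t01, t11, c00, c10, c01, c11, top, bot;\n"
    "MAD st, fragment.texcoord[0], size, -0.5;\n"   /* texel space, shifted to corners */
    "FRC w, st;\n"                                  /* bilinear weights                */
    "SUB st, st, w;\n"
    "ADD st, st, 0.5;\n"                            /* centre of lower-left texel      */
    "MUL st, st, invsize;\n"                        /* back to normalized coordinates  */
    "ADD t10, st, { 0.00390625, 0.0, 0.0, 0.0 };\n"
    "ADD t01, st, { 0.0, 0.00390625, 0.0, 0.0 };\n"
    "ADD t11, st, invsize;\n"
    "TEX c00, st,  texture[0], 2D;\n"               /* four index fetches              */
    "TEX c10, t10, texture[0], 2D;\n"
    "TEX c01, t01, texture[0], 2D;\n"
    "TEX c11, t11, texture[0], 2D;\n"
    "MAD c00.x, c00.x, lutfix.x, lutfix.y;\n"       /* remap indices to palette centres */
    "MAD c10.x, c10.x, lutfix.x, lutfix.y;\n"
    "MAD c01.x, c01.x, lutfix.x, lutfix.y;\n"
    "MAD c11.x, c11.x, lutfix.x, lutfix.y;\n"
    "TEX c00, c00, texture[1], 1D;\n"               /* four dependent palette reads    */
    "TEX c10, c10, texture[1], 1D;\n"
    "TEX c01, c01, texture[1], 1D;\n"
    "TEX c11, c11, texture[1], 1D;\n"
    "LRP bot, w.x, c10, c00;\n"                     /* lerp along s                    */
    "LRP top, w.x, c11, c01;\n"
    "LRP result.color, w.y, top, bot;\n"            /* lerp along t                    */
    "END\n";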

Ozzy
04-17-2003, 09:44 PM
Originally posted by ysatou:
The GeForceFX may not support these extensions (checked with glGetString(GL_EXTENSIONS)).



Damn! This is the worst news I've ever heard (the fixed pipeline being kicked out and badly emulated already sucked, btw..).

What about GL_LUMINANCE? What internal formats are supported in hardware?

:((

KlausE
04-18-2003, 04:39 AM
Originally posted by jwatte:
Yes, you have to implement whatever filter kernel you want. They even provide the LRP instruction to make it easy :-)

Well, sure it's possible to implement pre-filtered classification in a fragment program. Just classify 8 nearest-neighbor samples in a 3D texture using a 1D dependent texture and do 7 linear interpolations. However, there will be a slight performance problem ;-)

Does anybody know whether there is a reason for discontinuing paletted textures on the GeForceFX (and GeForce4?)? At least they were available up to the 42.x drivers.

Filtering for LUMINANCE8 textures is still available. As soon as you go to higher precision (e.g. LUMINANCE16) you have to implement filtering yourself. The only filtered high-precision format seems to be HILO textures.

jwatte
04-18-2003, 11:28 AM
Klaus,

That's, what, 13 fragment instructions for the filter? On an 8-pipe card running at 300 MHz of "effective fragment instructions", and assuming there's no texture access latency, you'd get about 184,615,384 textured fragments through per second. At 1024x768 output resolution, that's over 200 frames per second.

Seems to be high-performance enough to me.

KlausE
04-18-2003, 12:00 PM
In my opinion 180 megafragments is very slow - especially for volume rendering applications which might be one of the main application areas for paletted textures.

Didn't we already have the era of the GTS "GigaTexel Shader" some years ago?

For volume rendering you simply cannot afford 8 texture lookups and 8 dependent texture lookups for simple pre-filtered classification.

BTW: 8+8+7=23 operations.

[This message has been edited by KlausE (edited 04-18-2003).]

Korval
04-18-2003, 02:57 PM
That's, what, 13 fragment instructions for the filter? On an 8-pipe card running at 300 MHz of "effective fragment instructions", and assuming there's no texture access latency, you'd get about 184,615,384 textured fragments through per second.

First, I'm not sure where the figure of 13 opcodes comes from.

Secondly, "assuming there's no texture access latency" is a poor assumption.

Third, consider that a GeForce4 could use paletted textures at the full speed of non-paletted ones. It's not like they implemented it in texture shaders; they had dedicated hardware for it. This method, without question, is slower than paletted textures.


At 1024x768 output resolution, that's over 200 frames per second.

That's only if you're drawing every pixel once. With any kind of overdraw, let alone antialiasing, you can expect this to drop significantly.

jwatte
04-18-2003, 08:46 PM
It's fairly easy to get close to 1:1 fragment shading, assuming the hierarchical early Z tests do their job.

Assuming no texture latency is not so bold an assumption as you might think. The 8-bit texture will clearly show excellent locality and thus should cache extremely well. The dependent read seems like it would depend a lot on how different each of the samples were. Even so, a 256 pixel texture isn't that big -- it may conceivably fit in "near" texture "cache" memory on modern cards. Especially if they're sized to support 16 separate texture targets for DX9...

Last, I'd be interested in seeing whether the GeForce4 Ti would actually run paletted, filtered 3D textures as fast as 2D textures. My intuition tells me it wouldn't (but I don't have one within arm's reach to whip out the test case).

Anyway, I'm just trying to show that it's not The End Of The World As We Know It just because the paletted texture extension isn't supported anymore. But I suppose a perfectly valid alternate solution would be simply to spend the 4x VRAM and store it as RGBA8. These cards come with a minimum of 128 MB, and only last year, 32 MB was the norm.

cass
04-18-2003, 10:58 PM
Paletted textures will not be supported on GeForceFX. While the functionality obviously has uses, it consumed a disproportionate amount of real estate relative to the number of applications that made use of it.

KlausE
04-19-2003, 05:26 AM
Hi Cass,

at least a definitive answer from NVIDIA... I was never a big fan of pre-filtered classification. However, for backward-compatibility reasons, having that feature would be good.

Paletted textures were supported up to the 42.x drivers. At least the silicon seems to be there... found a better use for it? ;-)

Korval
04-19-2003, 08:42 AM
Anyway, I'm just trying to show that it's not The End Of The World As We Know It just because the paletted texture extension isn't supported anymore. But I suppose a perfectly valid alternate solution would be simply to spend the 4x VRAM and store it as RGBA8. These cards come with a minimum of 128 MB, and only last year, 32 MB was the norm.

Maybe someone had plans for that 4x more memory, like double-resing all their textures. That's a far better use of the memory for a texture that palettes well than expanding it from 8 bits per pixel to 32.

Also, that pretty much means that the only decent texture compression option available is DXT. While it is a good format, some textures palette better than they DXT. It was always nice to have the option of using paletted textures.

Next thing you know, they'll be dropping DXT support and telling you to decompress them in the fragment shader :(

jwatte
04-19-2003, 11:40 AM
I understand that some textures work better with a palette than with DXTC. I suppose they just made the call that DXTC is sufficient.

Also, I'm sure there are some scientist-y types that use 8 bit volume data and want to map that to a color ramp or something. However, that kind of data may actually filter fine pre-lookup, so you only get one dependent read after LRP-ing the pre-lookup gradient data.

Luckily, DXT1 compresses even better than 8 bit paletted data, so those high-res images should be no problem in that format :-)

HS
04-19-2003, 01:37 PM
Originally posted by cass:

Paletted textures will not be supported on GeForceFX. While the functionality obviously has uses, it consumed a disproportionate amount of real estate relative to the number of applications that made use of it.

Yeah, I see: since a lot of "professional" applications use palettized textures, you want to force the users to buy the more expensive Quadro version, which just happens to be a plain FX.

It's a kind of magic!

NitroGL
04-19-2003, 01:53 PM
You'd better run before they find you for figuring out their horrible secret!

PigeonRat
04-19-2003, 03:43 PM
Originally posted by Korval:
Next thing you know, they'll be dropping DXT support and telling you to decompress them in the fragment shader :(

They better not!! :-O

HS
04-20-2003, 01:28 AM
Originally posted by jwatte:

Luckily, DXT1 compresses even better than 8 bit paletted data, so those high-res images should be no problem in that format :-)

Unfortunately, scientists are very uncomfortable with the idea of compressing scientific data, especially when the compression is not lossless.

No no, just one more reason not to buy GeForces for the lab.

Ozzy
04-20-2003, 02:34 AM
Any info about GL_LUMINANCE support on the FX then?
For grayscale textures, for instance.

jwatte
04-20-2003, 06:57 AM
scientists are very uncomfortable with the idea of compressing scientific data


I understand. That's why I said that, for gradient volume type data, you can probably filter BEFORE the palette look-up, and then do a single dependent read. The hardware does this for you, so it's only two TEX instructions.

1) Read the 3D texture (which could be in a high quality format, such as 16-bit). This filters the sample, which ought to be the right thing for a typical sampled volume.

2) Use the output of the 3D sample to do a look-up in a wide texture. This texture could be 2048 pixels wide, which gives you 8x the resolution in your color ramp compared to what you'd get out of a paletted texture, plus filtering to interpolate between entries if you wish.
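
A rough sketch of that setup in GL calls (the sizes and function name are my own assumptions, and whether LUMINANCE16 actually filters in hardware is exactly the caveat raised earlier in the thread):

#include <GL/gl.h>

/* glTexImage3D and the GL 1.2 tokens may need <GL/glext.h> and a run-time
   entry point lookup (wglGetProcAddress) on Windows. */
static void setup_volume_and_ramp(GLuint voltex, GLuint ramptex,
                                  int w, int h, int d,
                                  const GLushort *voxels,
                                  const GLubyte ramp_rgba[2048 * 4])
{
    /* Step 1: high-precision volume, filtered by the hardware before the lookup. */
    glBindTexture(GL_TEXTURE_3D, voltex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16, w, h, d, 0,
                 GL_LUMINANCE, GL_UNSIGNED_SHORT, voxels);

    /* Step 2: wide colour ramp, itself LINEAR-filtered to interpolate
       between the 2048 entries. */
    glBindTexture(GL_TEXTURE_1D, ramptex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 2048, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, ramp_rgba);
}

/* The per-fragment work is then just the two TEX instructions:
       TEX density, fragment.texcoord[0], texture[0], 3D;
       TEX result.color, density, texture[1], 1D;               */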

HS
04-20-2003, 09:36 AM
Thanks for the advice, I will try that and compare the results.

I am just not happy about rewriting all my apps that use that extension. That'll teach me not to use vendor-specific (or in this case quasi-specific) extensions in the future.
I could replace the old GeForce2s with GeForce4s, but I wanted to skip over the so-called DX8 generation...

Madoc
04-20-2003, 11:52 PM
Aaargh! It's been done (still waiting for my FX from Dawn to Dusk...)! AFAIK there have been plans to drop paletted texture support for some time, but developer feedback prevented it.
I'm surprised to hear of "few applications that use it". I actually thought it was very commonly used for the excellent compression-to-quality ratio it provides for many textures. I certainly know some people who will be very displeased with support for this being dropped.

fresh
04-27-2003, 08:06 AM
Palettized textures certainly have their uses, like normal maps. Try compressing those with DXTC. There are also quite a few cases where a texture looks better with an 8-bit palette than with DXTC. I don't think it's wasting too much silicon real estate; what a bunch of BS. Every video card since the Voodoo has had support for it, so it can't be that hard, considering we have hardware a zillion times as powerful with 100x the transistors. Even the PS2 supports 4- and 8-bit textures.

Korval
04-27-2003, 12:24 PM
First of all, the PS2 is an outdated POS; do not compare it to modern video cards.

Secondly, I agree with the wish to have paletted textures as an option, but I don't agree that it is trivial to implement. Sure, ATi found a way to keep paletted textures around, but the internals of the GeForce FX and the 9500+ series are very different. It wouldn't surprise me if paletted textures took non-trivial room in the texture unit hardware. These two cards perform texturing in very different ways; ATi's texture hardware and pixel pipes are laid out in a relatively conventional fashion, while nVidia opted for the ability to switch from 8 pipes with 1 texture per pipe to 4 pipes with 2 textures. This probably complicates the texturing hardware.

Admittedly, maybe nVidia should have dropped the register combiners instead of paletting, since they could transparently implement RCs with their fragment programs.

cass
04-27-2003, 01:37 PM
Originally posted by Korval:
Sure, ATi found a way to keep paletted textures around...

Really? Not according to Tom Nuydens's page: http://www.delphi3d.net/hardware/extsupport.php?extension=GL_EXT_paletted_texture

Cass

HS
04-27-2003, 02:22 PM
Originally posted by cass:
Really? Not according to Tom Nuydens's page: http://www.delphi3d.net/hardware/extsupport.php?extension=GL_EXT_paletted_texture

Cass

And your point is what? Your cards have the "silicon" but you dropped support for the extension out of the blue.

For me that's in no way better than not having it at all. I am being strong-armed by NVIDIA into rewriting my code. I don't see any point in supporting NVIDIA anymore.

Thanks Cass, if I needed any more reason to drop support for NVIDIA (like they did on me), you just provided it.

A once die-hard NVIDIA believer...

zeckensack
04-27-2003, 03:52 PM
Hell, what was that?

Just for the record, I've used Radeon cards from R100 to R300 and went through all drivers ever officially released since y2k. And I've never seen a Radeon driver exposing EXT_paletted_texture.

And regarding that "they force us into buying Quadros" stuff: find me a QuadroFX on this list (http://www.delphi3d.net/hardware/extsupport.php?extension=GL_EXT_paletted_texture) if you can.

KlausE
04-27-2003, 04:26 PM
ATI never supported paletted_textures.

NVIDIA supported paletted_textures on the FX 5800 and Quadro FX up to the 42.x drivers. No support in the 43.x drivers...

Korval
04-27-2003, 05:29 PM
I could have sworn I saw the extension somewhere in my card's extension list. Granted, I haven't used it yet, but I thought it was there.

cass
04-27-2003, 08:25 PM
Originally posted by HS:
And your point is what? Your cards have the "silicon" but you dropped support for the extension out of the blue.

For me that's in no way better than not having it at all. I am being strong-armed by NVIDIA into rewriting my code. I don't see any point in supporting NVIDIA anymore.

Thanks Cass, if I needed any more reason to drop support for NVIDIA (like they did on me), you just provided it.

A once die-hard NVIDIA believer...

Sorry you feel that way, HS.

mtarini
04-28-2003, 01:02 AM
I _would_ have a use for pre-filtered paletted textures (especially if the palette size were larger than just 256):

With paletted textures it is terribly easy to shade bump maps in a memory-efficient, cheap, easy, extremely-portable-even-on-older-laptops way.

For more details, see
http://vcg.iei.pi.cnr.it/bumpmapping.html

It is an old technique, but not entirely outdated: for example, it makes the still-more-flexible CPU do all the lighting computations rather than the GPU (no, this does not overburden the CPU, as normal-to-RGB color computations are made only for a few palette entries).

So, my vote, for what it is worth, is:

if EXT_paletted_texture, as it seems, is not a big burden to implement, then...

...pleeease Cass don't drop it (anymore)!

Moreover, bigger palette sizes are better! Is 4096 asking for too much?

:)



[This message has been edited by mtarini (edited 04-28-2003).]

zeckensack
04-28-2003, 05:01 AM
Each texture sampler needs to be able to read four palette entries 'for free' each cycle (because they usually do bilinear 2D texture samples 'for free' these days).

Technically that's 8*4 dependent reads per cycle just for what I'd call "legacy support". I'm by no means a circuit engineer, but it seems reasonable that sooner or later this becomes a major obstacle, especially if you want clock speed headroom.

I also think that the palette LUT having to be shared across all texture samplers may be a problem. You either need a small piece of SRAM with a whole freaking lot of fast read ports, or you need to replicate it all over the chip.

V-man
04-28-2003, 09:19 AM
Isn't it funny that this old extension is not in the core?
Maybe it was introduced in the 1.0 or 1.1 days; I thought it was supported by everyone back then.

I don't know about everyone else, but I would like to know the reasons why something isn't getting support. Is it temporary or what?

fresh
04-28-2003, 10:14 AM
Originally posted by Korval:
First of all, the PS2 is an outdated POS; do not compare it to modern video cards.


Yeah, my point exactly. Why does the "outdated POS" PS2 have it, but modern cards don't? You're telling me that hw engineers can implement things like pixel shaders 2.0 with full 128-bit colours, but not full-speed palettized textures? BULLSH*T. As I said, even the ancient 3Dfx Voodoo had it.

And how the hell is the PS2 "outdated"? Last I checked, every publisher is still releasing games for it. There are a hell of a lot more PS2s out there than all of ATI's and NVidia's "modern" cards added together and squared.

Point is, palettized textures do still have their uses.

Korval
04-28-2003, 12:17 PM
You're telling me that hw engineers can implement things like pixel shaders 2.0 with full 128-bit colours, but not full-speed palettized textures? BULLSH*T. As I said, even the ancient 3Dfx Voodoo had it.

It's a cost/benefit tradeoff. The presumably minor benefit of using paletted textures is mitigated by having to implement them in increasingly complex hardware. Considering that nVidia's primary market (i.e., games) doesn't care, it's not terribly surprising that the FX lacks paletting. It's not so much that paletting is hard, but it does cost space. And space dedicated to a relatively fixed-function process that could be emulated via fragment programs (and you do have 1024 opcodes and all the texturing you could want) may not, in nVidia's calculus, warrant the cost.


And how the hell is the PS2 "outdated"? Last I checked, every publisher is still releasing games for it. There are a hell of a lot more PS2s out there than all of ATI's and NVidia's "modern" cards added together and squared.

First of all, the fact that something is still being used doesn't mean it isn't outdated. Look how long it took for motherboards to lose those ISA slots (and for people to stop making ISA cards). Motherboards still have serial ports, even though USB is significantly better.

Secondly, paletting (as a form of texture compression) is something of a legacy left over from the days of 2D graphics. Yes, we can still make use of it today, and we can mourn its passing from dedicated hardware, but with the push to programmability, it was bound to happen.

My main problem with losing paletting is that there is no alternative other than S3TC or 32-bit textures. If S3TC doesn't compress the data very well, you have to move to 32-bit textures, and that is just not a reasonable option. Paletting was always a nice fallback.

fresh
04-28-2003, 01:08 PM
The pixel shader route isn't that great. You have to take into account that if you want to do any kind of filtering, you have to code it up yourself. P8 support couldn't possibly cost that much extra space in this day and age of 100m+ transistors.

I really think they made the wrong decision with dropping support for it. But hey, what can you do...

HS
04-29-2003, 11:44 AM
Originally posted by zeckensack:
Hell, what was that?

Just for the record, I've used Radeon cards from R100 to R300 and went through all drivers ever officially released since y2k. And I've never seen a Radeon driver exposing EXT_paletted_texture.

I am aware that NVIDIA cards were the only ones worth mentioning that supported the extension; that's why I stuck with them (besides the excellent drivers).

For me they dropped the support "out of the blue", which puts me in a very delicate situation. Obviously the hardware has the "silicon" to do what I need.

This is the second time NVIDIA has dropped an extension (I did not care too much about GL_NV_evaluators), but the question is: what do they drop next?

I don't care about games, but I care about software life cycles. I have to...

Have you ever tried to explain to your customer that your software doesn't work anymore because his drivers are too new?


Originally posted by zeckensack:
And regarding that "they force us into buying Quadros" stuff: find me a QuadroFX on this list (http://www.delphi3d.net/hardware/extsupport.php?extension=GL_EXT_paletted_texture) if you can.

Yes, you are right and I stand corrected. Does it change anything? No.

[This message has been edited by HS (edited 04-29-2003).]

zeckensack
04-29-2003, 12:03 PM
HS,
you'll have to find a new solution then, for cards that don't expose EXT_paletted_texture.

There are pretty straightforward ways for the driver to give you that functionality if the silicon is really gone. NVIDIA obviously didn't choose this route (yet?).

(assuming that you really need palettes and can't make do with single-channel textures and/or fragment program tricks)

Both driver 'emulation' and an app-driven solution share a disadvantage: tripled texture memory footprint (and bandwidth requirements).

The app solution goes something like this:
1) Predecode textures at load time (to RGB888).
2) Reupload textures on palette changes.
3) Be lazy. Don't redecode a texture on a palette change if you're going to bind a different texture anyway.

You're losing 66% storage on FX cards. This is not perfect, but it can be made to work without changing your data files.

You may then proceed to blame the performance drop (if any) on NVIDIA's decision, but you'll have to accept that the functionality is gone.
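
For what it's worth, a bare-bones sketch of steps 1-3 (the struct and function names are made up, and error handling is omitted):

#include <GL/gl.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    GLuint        id;
    int           width, height;
    const GLubyte *indices;          /* original 8-bit image, kept in system memory */
    GLubyte       palette[256 * 3];  /* current RGB888 palette                      */
    int           dirty;             /* palette changed since the last upload?      */
} PalTexture;

/* Steps 1 and 2: expand the indices through the current palette and (re)upload. */
static void decode_and_upload(PalTexture *t)
{
    int i, n = t->width * t->height;
    GLubyte *rgb = malloc((size_t)n * 3);
    for (i = 0; i < n; ++i)
        memcpy(rgb + i * 3, t->palette + t->indices[i] * 3, 3);
    glBindTexture(GL_TEXTURE_2D, t->id);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, t->width, t->height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgb);
    free(rgb);
    t->dirty = 0;
}

void paltex_set_palette(PalTexture *t, const GLubyte rgb[256 * 3])
{
    memcpy(t->palette, rgb, sizeof t->palette);
    t->dirty = 1;                    /* step 3: stay lazy, decode only when bound */
}

void paltex_bind(PalTexture *t)
{
    if (t->dirty)
        decode_and_upload(t);
    else
        glBindTexture(GL_TEXTURE_2D, t->id);
}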

HS
04-29-2003, 12:33 PM
Zeckensack, you don't get it (no offense).

I KNOW that I have to change my code (that's why I am so pissed off), and jwatte already showed me the way I need to go. It's working OK so far, but I don't need NVIDIA-based hardware for it...

I can upgrade the lab to 9500s without a problem, but what you don't get is that customers don't understand why they have to upgrade their hardware to something that supports fragment programs when the hardware (GeForce2s) did just fine a month ago.

I don't need any advice on how to fix it with 'current' hardware (I did not even ask for it, though I am grateful for jwatte's advice).

The life cycle of a game is a couple of months; I don't care. The life cycle of scientific software is a couple of years. I can't stand a company that drops extensions like underwear.

[This message has been edited by HS (edited 04-29-2003).]

zeckensack
04-29-2003, 12:52 PM
Hey, sorry. Didn't mean to sound patronizing or anything, just that old "anticipate possible questions and answer them beforehand" habit.

I may sound boneheaded again, so please correct me if I'm wrong, but...
For any GeForce below FX level, Delphi3D indicates that EXT_paletted_texture is still in the 43.45 drivers (the current ones, I think).

It doesn't look like they're dropping it for the GF1/2/3/4 tech level. Just the FXes won't get it. Which would mean you need not worry about having to change your software, as long as you keep the same hardware, right? :)

edit: here (http://www.delphi3d.net/hardware/viewreport.php?report=693)

[This message has been edited by zeckensack (edited 04-29-2003).]

HS
04-29-2003, 01:43 PM
Originally posted by zeckensack:
It doesn't look like they're dropping it for the GF1/2/3/4 tech level. Just the FXes won't get it. Which would mean you need not worry about having to change your software, as long as you keep the same hardware, right? :)


So can I sue you if they don't keep it?

No, as far as I am concerned they are unreliable and I don't trust them anymore.

I will rewrite my code (I have to), but only based on extensions that are available on more than just one vendor's hardware (I'd rather deal with a buggy implementation than with a nonexistent one).

Tom Nuydens
04-29-2003, 11:36 PM
Welcome to the real world.

Most of us can't use any extensions without also providing fallbacks for systems that don't have these extensions. Having control over the HW your app will be run on is a real luxury -- stop whining and count yourself lucky that it's just this one extension that you have to worry about.

-- Tom

dorbie
04-30-2003, 06:16 AM
You should be GLAD hardware vendors are prepared to occasionally drop features from generation to generation. The alternative is a snowball effect where features accrete over time and even features that are rarely used end up slowing development & innovation, wasting die space and increasing the cost of your card.

The biggest loss from this particular feature IMHO is for some scientific applications, but it's already been shown that the extra programmability can be used to replace the extension. I'd rather have the programmability than the extension and get my hands on the card sooner.

V-man
04-30-2003, 10:06 AM
Originally posted by HS:
(I'd rather deal with a buggy implementation than with a nonexistent one).

I think you are going insane over nothing really special.

It's definitely guaranteed that more extensions will be dropped. And don't be surprised if register combiners and texture shaders disappear in the next generation.

Paletted textures are a little different. I think it would be better to have them in the GL core, but I guess some vendors stood in the way, and now they've been lost on the NV platform because they take up too much space (I find that hard to believe).

Korval
04-30-2003, 11:00 AM
they've been lost on the NV platform because they take up too much space (I find that hard to believe).

I don't find that too hard to believe.

In order for palette "decompression" to be reasonably fast, you'll likely need dedicated cache space on-chip (not merely video RAM, but uber-low-latency, expensive, and bulky SRAM) with those palettes in it.

A single 256-entry 32-bit palette requires 1 KB of memory. That's not much for regular video memory, but in SRAM that's 48K+ transistors (six transistors per bit, plus extra for control logic). And this is horrible for multi-textured paletting, as each access would otherwise have to upload a new palette. As such, with hardware that can bind 16 textures, you must have 16 KB worth of palette memory, minimum, which corresponds to roughly 768K transistors just for the palette memory itself.

Note that this 16 KB of memory is very expensive. Also note that this 16 KB of memory can do nothing else. It isn't like an on-chip texture cache; it provides only one function.

You wouldn't want to put palettes in the texture cache either. Let's say you had a 32 KB texture cache. You drop 1 KB for each paletted texture. Also, it complicates the cache lookup logic for the texture fetch unit, as it has to cache 32-bit entries and 8-bit ones (the palette and the paletted texture, respectively).

Imagine being inside nVidia's hardware development facilities. They have a choice between providing 32 KB of texture cache (that all textures can use), or 16 KB of texture cache and the requisite 16 KB of palette entry space. Which is the more reasonable choice, considering nVidia's target market?

Granted, I don't understand why new drivers should remove support for the extension on older cards that actually have the hardware.

dorbie
04-30-2003, 11:01 AM
"Too much space" is a subjective concept that's heavily dependent on the value of a feature.

It's not hard to believe when you consider that it's a pre-filter lookup and therefore potentially has to go somewhere in your texture fetch hardware; mostly legacy software uses it, and that stuff runs like a bat out of hell on new hardware with alternate code paths for RGB texturing. On top of this you have texture compression, boatloads of bandwidth and more texture memory than you can shake a stick at.

You're right, there's no use getting your panties in a wad over this; bigger and better extensions may be deprecated in future. Let's hope so. Lobbing grenades at NVIDIA for having the guts to do the right thing is futile.

HS
05-01-2003, 03:20 PM
If you had read the posts you would know that the FX exposed the extension until the newest drivers.

That doesn't have anything to do with "whining" or "advancement"; it's just "marketing" that you all let happen, no questions asked...

[This message has been edited by HS (edited 05-01-2003).]

dorbie
05-02-2003, 06:03 AM
Marketing? Didn't it occur to you that there's some engineering involved here? Your claim is built on many unsubstantiated suppositions. As for just letting it happen, cass has a lot of credibility IMHO.

There's one terse post with no details that claims it was exposed on the FX, with all other evidence and posts suggesting it wasn't. I don't know for sure it was, and I don't know if it worked, how it was implemented, or what other issues it impacted. What are NVIDIA forcing you to do here, buy a GeForce4? The conspiracy doesn't hold water.

Anyway, I'm not an apologist for NVIDIA; in general OpenGL cannot be allowed to simply accrete features ad nauseam with no legacy stuff being deprecated. Now and then you have to take out the garbage or you end up... well, living in a garbage dump. In some respects this general point has nothing to do with any specific feature.

With available memory and performance I have less sympathy for someone using this for compression than I do for scientific apps like material-classified volume rendering that might want the prefilter LUT and prodigious fill performance.

It's ironic that you blame marketing, when I know that it's the folks in marketing that often lobby to keep features like this in and engineers who want to toss them out. Marketing's function is not as simple as you seem to assume.


[This message has been edited by dorbie (edited 05-02-2003).]

HS
05-02-2003, 12:46 PM
I guess this topic has been stressed enough.

I give up...

[This message has been edited by HS (edited 05-02-2003).]

JD
05-03-2003, 01:35 AM
EXT extensions don't guarantee that the functionality is going to be in the hardware. You need to code an alternative path in case the extension isn't supported. I personally don't like working with paletted textures, so I don't care much about them. Dropping them will save costs and time for IHVs, and that translates to cheaper video cards for me.

mcraighead
05-03-2003, 09:15 PM
Originally posted by fresh:
P8 support couldn't possibly cost that much extra space in this day and age of 100m+ transistors.

You might be surprised at how much it can cost to support some features.

- Matt

PeteBernert
05-04-2003, 12:07 AM
I am not happy about the dropped palettized texture support either.

Sure, you can find workarounds for it, but even clever texture generation/caching (considering the big VRAM available) or a high texture upload speed will _not_ be a real replacement for palettized textures.

I am into PSX (yup, PlayStation 1) emulation on the PC, and this very old console only had 2 MB of VRAM, but it was able to use 4- and 8-bit color look-up tables (palettes).

I've coded an OpenGL-based PSX graphics emulation plugin, and most of the time 32 (or more) MB of VRAM is enough to cache all PSX texture areas. But (big _but_) most of the time doesn't mean _every_ time. Consider a game which is using 100 small (32x32) RGBA textures. Now, to do a cool flashing effect, all those textures change their colors; let's say 256 different palettes are used on each texture.

No problem at all with palettized textures: you simply upload the new palette data, done.

Without the extension, you either upload the complete texture data again and again (slow), or you try to cache each and every (RGBA-converted) texture in VRAM. In my little example you would need over 100 MB of VRAM to do so. Double the number of palettes used, and not even 128 MB will be able to rescue you. Great.

So, in the past, GeForce cards were well suited for PSX emulation (ATI cards, for example, had very choppy gameplay with certain PSX games due to the missing palettized texture support).

But now, hooray, the GF FX will suffer the same problem. Or to take it to the extreme: a GF FX will be beaten by an old GeForce 1 card. Or, even more extreme: an old PSX GPU will beat the FX.

Humus
05-04-2003, 12:47 AM
Personally I'm happy to see it go. It's one of the least worthwhile extensions to support. There are more extensions I hope will be dropped soon too. GL_S3_s3tc, GL_SGIS_multitexture, GL_WIN_swap_hint, GL_EXT_vertex_shader etc.

m2
05-04-2003, 01:48 AM
JD: do you have any idea whatsoever what you are talking about? A cheaper card? The thing already has 100+ million transistors, and you are saying it's going to be significantly cheaper because you are shaving off less than 1% of them? Sure, it reduces complexity and might eventually lead to better process yields, but come on, please, do you really expect the street price to be affected by that? Your run-of-the-mill FX-class card is selling around US$200. How much of that is NVIDIA's cut, in your opinion? How much of that is there because of the price of the chip? Your card might become US$1 cheaper, with a bit of luck.

Humus: is that your take on sarcasm? It didn't come out that well, you know.

Matt: do, by all means, enlighten us. I mean, I can't imagine this sort of thing being that much of an industrial secret, can it? I'm not asking for the secret sauce; I'm just wondering why something that's used has been dumped without a good replacement. Just look at the proceedings from SIGGRAPH, VIS, GHWS and a couple of others from the last, what, four years at least. We have been pushing your company's products even though it treats us like crap most of the time. Please. (This is a personal opinion and it does not necessarily reflect those of my current employer.)

[edit 1: tease Matt]

[This message has been edited by m2 (edited 05-04-2003).]

Humus
05-04-2003, 12:30 PM
I meant exactly what I wrote. The sooner we get rid of left-over legacy extensions, the better. Sometimes it's harder to drop old stuff because of backwards compatibility, but at some point you just have to get rid of it and go forward.

As for the paletted textures, yes, there used to be some cool effects that could be achieved by it, but today you can do all that and way more by using fragment programs. As a form of texture compression it was never that efficient, S3TC not only compresses better but also has better quality and performance. Probably cheaper to implement in hardware too.

As for the other extensions I mentioned: GL_WIN_swap_hint is more or less a no-op on today's hardware anyway; GL_EXT_vertex_shader is completely redundant since there is GL_ARB_vertex_program, and I don't think there are enough applications supporting the EXT to be worth spending driver time maintaining support for it either. The only reason to support GL_SGIS_multitexture is probably Quake2 support. I give it a year or two, then it should go. GL_S3_s3tc, is this extension supported by anything? It's so backwards anyway; it should go immediately, IMO.

C++
05-04-2003, 12:53 PM
Hey, EVERYBODY!
Go to NVIDIA's hot topic!

Korval
05-04-2003, 02:21 PM
As for the paletted textures, yes, there used to be some cool effects that could be achieved by it, but today you can do all that and way more by using fragment programs. As a form of texture compression it was never that efficient, S3TC not only compresses better but also has better quality and performance. Probably cheaper to implement in hardware too.

First of all, S3TC is not a better form of compression. Not categorically, at least. It does compress some textures better, but there are times when it doesn't compress so well. Also, if you use more than 1 bit of alpha, your texture will be 8 bits per pixel, the same as paletted textures.

The others you mention are obviously deprecated and not to be used or retained (except for SGIS_multitexture, as there are still some Quake 2-engine games in common use these days). The difference between paletted textures and the others is that there is still a use for paletted textures. As a compression technique, paletting can give higher image quality than S3TC. As mentioned before, S3TC destroys normal maps, while paletted compression can achieve virtually lossless results.

Also, the argument that you can achieve paletted textures in fragment programs is no excuse for removing them from the texturing hardware. Doing it with fragment shaders makes it prohibitively expensive. You're looking at 8 texture ops (for bilinear filtering), plus the appropriate computations to achieve the appropriate results. Not only that, computing the appropriate blending factors for bilinear filtering is hardly trivial.

Humus
05-04-2003, 06:01 PM
In my experience S3TC pretty much always gives better quality than paletted textures. On standard color maps, that is, though not on normal maps, as you said, where S3TC is useless. Most color maps compressed to S3TC will still subjectively look very good (though you can find the errors if you measure or look for them), while many if not most palette-compressed textures have visible defects, IMO anyway.

I wasn't talking about emulating the actual palette lookups as such. I was talking about the typical palette-rolling effects, like the flashing object or whatever someone mentioned earlier in this thread. Effects where using a palette could be useful. These effects can all be achieved by other means, and are generally fairly simple to implement. I can't even remember the last time I thought a palette would be helpful.

Also, given that paletted textures were never widely supported, I don't think the loss will hurt; people would have had to code a separate path anyway.

JD
05-05-2003, 12:49 AM
m2, every penny saved counts. When you're dealing with high-volume shipments those savings for NVIDIA can be substantial. Because of this they can lower the price on their chips, more video card makers enter the market, causing competition and a further lowering of prices, which translates into savings for me. Some should realize that extensions are not permanent and will be dropped in the future as the API evolves. You need to get your head around the fact that you need to code alternative paths. I have one ARB texture_env path and one register_combiners path. It's even worse under D3D, as you don't have a nice cheat sheet like you do in GL. GL core functions are the only ones you can depend on not changing in the future. That's why it takes eons for the ARB to promote an extension to the core. The extension must be widely used to be promoted to the core, and since paletted textures weren't, NVIDIA dropped them and the ARB didn't want them either. Quake2 should be able to drop to multipass if the SGIS multitexture extension is taken out. Since everyone nowadays has at least a GF2, Quake2 can run fast in multipass on that hardware, I think. By the time they drop that extension everyone will have at least a GF3/4, which can run Q2 really fast. I think my reasoning is sound.

V-man
05-05-2003, 03:44 AM
It won't result in cheaper cards, because the idea is to get rid of these little guys and concentrate on the big concepts.

It's possible that it will come back in the future. I don't think this feature will be forgotten, but vertex_shader, vertex_weighting, ATI_fragment_program, NV_register_combiners and many others have become obsolete and can easily be removed from the extension list in a few years.

Why talk about Q2? Why is it important?

Humus
05-05-2003, 05:42 AM
Well, it was brought up because Quake2 uses the GL_SGIS_multitexture extension, which I proposed should be removed. The game should run fine without the extension anyway.

Mihail121
05-05-2003, 05:49 AM
Well guys, things are clear. We can talk and talk and talk, but nothing will change the dumb (for me personally) fact that the FX doesn't support EXT_paletted_texture with the default drivers. This is maybe one of the most irrational things that NVIDIA has ever done. Why do we have to triple the image size and request huge amounts of memory when we could simply update our indexed image with a 768-byte palette??? I truly hope that NV will add support for this GREAT extension in future drivers...

dorbie
05-05-2003, 11:32 AM
It's not about one feature, it's about the gazillions of other demands that require die space, development and testing and add risk, and the related decisions to include one or exclude another. What's in, what's out, what's the market share, is there a fallback, will the apps work, what's the competition doing, is the benefit really worth the die space or is it just legacy cruft, is this new feature worth including or is it just some academic's ego trip?

As for being treated badly by NVIDIA or ATI: they treat us all great and we don't even know them. Damn, they spend hundreds of millions on R&D, and all we have to do is walk into Fry's and hand over a few hundred bucks to get a product that would have cost $1 million five years ago for just a fraction of the performance and features. I guess there's no pleasing some folks.

Get used to the new environment, enjoy it, get over your loss.

[This message has been edited by dorbie (edited 05-05-2003).]

dorbie
05-05-2003, 11:45 AM
SGIS_multitexture is probably easy to support. It's just multicasting of texture coordinates, AFAIK. It was almost the ARB spec, so essentially the ARB version is a subset of the SGIS one (that could have been the ARB choice anyway). Just imagine for a second that they had included multicasting in the ARB spec in the first place (more reason for it now than ever, if you ask me), and you really don't have an extension there at all ;-)

Mihail121
05-05-2003, 11:50 AM
You are mistaken, my dear friend. We can't just get used to whatever someone says. EXT_paletted_texture is one of the best solutions ever. Not only does it save memory, it's also very fast. Can you imagine programming without multitexturing??? No! Can you imagine programming without memory??? No! And EXT_paletted_texture helps you save memory. Why should we have to store the equivalent of three images when we could store one??? Or in other words: why should we have to triple the amount of memory taken???

PeteBernert
05-05-2003, 12:06 PM
>Get used to the new environment, enjoy it,
>get over your loss.

Sorry, but I don't agree with this 'customers should be grateful sheep and worship the decisions of big companies' point of view.

Of course my little voice will not change the decision to drop this extension on the GF FX in any way, but at least I will not be quiet if I think a useful feature bites the dust.

So probably from now on I will have to answer questions from my users asking whether to get an ATI 9700/9800 or a GF FX for PSX emulation with: "get the ATI card, it's cheaper and less noisy anyway, and NVIDIA dropped an important extension, so feature-wise for PSX emulation they are the same". If NVIDIA can ignore the demands of hundreds of thousands of die-hard PSX emulation users, fine.

dorbie
05-05-2003, 12:17 PM
I guess a better question would be why you need the equivalent of ~384 MB of RGB texture, or why you even care when you consider the very effective color-cell-style compression supported. You can complain that it takes more memory, but in taking this away NVIDIA has doubled the size of available memory in the process (while improving paging performance), and in fact has done so several times since this extension was introduced. You only have so many pixels on screen; at some point you need to get smart about texture usage.

If you're going to say I'm mistaken, let's be specific. I didn't pull your extension, the benefits of it are clear to me; the real question is whether it was worth the space, and that's by no means as clear a decision for the rest of us as it is for you.

As you're trying to beef up your memory interface and increase the prefilter taps used at peak fill for the majority of applications, perhaps one of the things you don't want to do is complicate the fat end of your memory fetch with legacy functionality when you've gone to considerable expense to solve the problem in other ways.

dorbie
05-05-2003, 12:26 PM
Pete et al., you completely misrepresent my posts. I agree with you in this regard: vote with your feet, by all means. I'm sure NVIDIA will miss your $5-$40 contribution to their top line. Where are you going to go for your palette support?

This could hurt them if it's a bad choice, I doubt it is, especially if the benefit is reduced risk, reduced cost and increased performance, or do you think they throw features out for fun?

Swearing against NVIDIA.....now what's the difference between them and ATI? NVIDIA used to support a feature you liked but ATI never have? Classic customer loyalty in the PC business.

I'm not a sheep and I don't worship big companies, but it's amusing watching some of these posts. Yup, I have a different view from you, but I'm intentionally neutral in the NVIDIA vs. ATI wars. May they continue to leapfrog each other, and if they have to put a bullet in your pet feature to do so, so be it. Adios EXT_paletted_texture, R.I.P. Now where's my wallet, I have a graphics card to buy.

I love the smell of progress in the morning, smells like victory.


[This message has been edited by dorbie (edited 05-05-2003).]

PeteBernert
05-05-2003, 12:57 PM
>I'm sure NVIDIA will miss your $5-$40
>contribution to their top line. Where are
>you going to go for your palette support?

Sadly enough, it seems that both of the top-dog consumer gfx card manufacturers have decided to drop it, so basically I cannot change from one to the other to get it back, right.

Anyway, in the past I used GeForce cards as my main development platform, so my emulation plugins were kinda optimized for them, but if I see no more (or at least fewer and fewer) reasons to get my next card from NVIDIA, then this will easily change to ATI's benefit.

>This could hurt them if it's a bad choice, I
>doubt it is, especially if the benefit is
>reduced risk, reduced cost and increased
>performance, or do you think they throw
>features out for fun?

Sure, and if they want to reduce the risk and cost even more they can easily go back to making simple VGA cards; I am sure they have the old blueprints of the ancient Riva cards in some drawer.

>Swearing against NVIDIA.....now what's the
>difference between them and ATI?
>NVIDIA used to support a feature you liked
>but ATI never have?

That's exactly my point. What is now (from my tunnel vision, of course) the difference between them? What is the feature which tips my decision towards buying an NVIDIA card instead of an ATI one? Worse FSAA? More noise? More heat? Tell me.

>Classic customer loyalty in the PC business.

I am only loyal to my needs, not to any company. And if a company I've liked for years makes enough bad decisions (in my eyes), I will look at another company's products. And of course I will tell every user of my Zinc/Impact/PSX plugins which road I will take.

dorbie
05-05-2003, 01:16 PM
Your prerogative; however you may even agree with NVIDIA & ATI if you could know the true cost of your needs, who knows. I can always dream. I don't see this as a single-feature issue; it goes way beyond that. It's the cost of progress, at least from my POV.

PeteBernert
05-05-2003, 08:04 PM
>however you may even agree with NVIDIA & ATI
>if you could know the true cost of your
>needs

I find it interesting that you always come back to one point: that it seems to be hyper-complicated and very expensive for NVIDIA to support that extension. I wonder where exactly you got this information? If it's based simply on your common sense, then you are treating a mere assumption as a fact, just for the sake of arguing.

The only facts I do know:
- NVIDIA was able to support it on GF1, GF2, GF3 and GF4 cards
- NVIDIA isn't supporting it on FX cards (for whatever reason).

So my personal response to that is:
If NVIDIA wants my money and support again, they have to work harder. I know they can do it.

Tom Nuydens
05-05-2003, 11:42 PM
If you don't like what NVIDIA did to your pet extension, then by all means get yourself an ATI. None of us here will stop you or tell you you're wrong. But it's not going to change anything, because you still won't have the damn extension!

I've said it before and I'll say it again: your app should not be dependent on extensions with as little hardware support as EXT_paletted_texture. Your unwillingness to write a second code path for cards that don't have the extension is, IMO, a sign of laziness and/or amateurism.

-- Tom

zeckensack
05-06-2003, 01:47 AM
Pete,
aren't your plugins open source?
I have reasonably high-performance and bulletproof palette emulation going (although in a slightly different context: Glide).
I might be willing to donate some of that work, if you tell me where to look.

PeteBernert
05-06-2003, 02:38 AM
Tom,

we could easily exchange some insults now and start cursing each other, but honestly I don't feel like that right now, sorry.

I, as a professional and experienced coder (harhar), of course have a fallback or workaround for each and every extension I am using, including one for a missing EXT_paletted_texture.

But you, as a professional and experienced coder, also have to admit that every one of those fallbacks tends to be either slower or to produce worse image quality.

So I can only state that by dropping the EXT_paletted_texture extension, NVIDIA has cut the most optimal path in some part of my work. Which means that FX cards will work worse (choppy gameplay) than GF1-GF4 cards in some situations (I estimate that 5% of all PSX games will suffer from this... but hey, 5%, that's only around 100 games, fine).

PeteBernert
05-06-2003, 02:53 AM
zeckensack,

my CD-ROM plugin, my DirectSound/OSS sound plugin, and my software GPU plugin are open source right now, but I haven't found the time yet to release my hardware-accelerated OpenGL/D3D plugins as open source.

Thank you for your offer, but I think I also have a very good solution for emulating palettized textures; my only concern is a few situations where the game is throwing zillions of ever-changing palettes at each and every texture used, frame after frame after frame.

On a side note: why did you need palette emulation with Glide? The only reason I can think of is the missing palette alpha channel with old Voodoo1 cards; otherwise all Glide cards had perfect palettized texture support, mmm?

zeckensack
05-06-2003, 03:24 AM
I need it because ATI cards do not support paletted textures. But these too need to be able to play Glide games :D

Tom Nuydens
05-06-2003, 03:30 AM
Pete, sorry for sounding harsh, but I get the impression that people are getting way too worked up over this. It's very unrealistic to expect NVIDIA (or any other company) to spend time and money supporting something that has absolutely no commercial interest for them. PlayStation emulators do not sell video cards -- Doom3 does.

That said, I'm glad to hear that you do have a fallback. Are you expanding to RGB and uploading a new texture every time the palette changes, or are you using a dependent texture read (in a fragment program)? Surely the latter approach can't be that slow and inefficient compared to the optimal paletted texture path?

-- Tom

V-man
05-06-2003, 04:11 AM
For those who haven't read the thread fully, this was said by Cass:

---------------------
Paletted textures will not be supported on GeForceFX. While the functionality obviously has uses, it consumed a disproportionate amount of real estate relative to the number of applications that made use of it.
-----------------------

There you go. Now shut your freaking pie hole! (Just kidding)

There are a few other posts that you might want to read (Korval's explanation).

Someone mentioned that some previous drivers supported the extension on the FX. Maybe you should do a benchmark, and we will see if it actually has some performance benefits (or not).

Humus
05-06-2003, 04:30 AM
Originally posted by PeteBernert:
I find it interesting that you always come back to one point: that it seems to be hyper-complicated and very expensive for NVIDIA to support that extension. I wonder where exactly you got this information? If it's based simply on your common sense, then you are treating a mere assumption as a fact, just for the sake of arguing.

The only facts I do know:
- NVIDIA was able to support it on GF1, GF2, GF3 and GF4 cards
- NVIDIA isn't supporting it on FX cards (for whatever reason).

To quote Matt from NVIDIA above, "You might be surprised at how much it can cost to support some features". Given the way paletted textures work, it's a fairly reasonable assumption that they can be quite expensive. Yes, it may have worked before, but as you tack new features onto a card or clock it higher, it may become incredibly hard to keep it up to speed. For instance, remember the original Athlon: it began with an L2 cache running at half the clock speed. Then as they ramped up the clock speed they had to scale back the L2 cache speed. I suppose they could have kept it at the same speed if they really wanted to, but it would have been a lot more expensive in terms of transistors.
Consider the work done in a standard texture lookup for a moment, then compare it to a paletted texture lookup. The standard lookup takes the texcoord, grabs the four closest texels from the texture and four texels from the next mipmap level, then blends these together. For a paletted texture, however, it must first look up all those 8 samples as above, then look up the corresponding palette value for each of those 8 samples, and then blend. For this to run at the same speed as normal texture lookups you'd have to have a very low-latency, multi-ported, high-bandwidth cache for the palette. Doesn't sound cheap to me.
Why did it work fine in previous generations? According to some information (which isn't official, but sounds likely), NVIDIA used to expand texel values and store them in the cache in previous-generation hardware. This is what caused DXT1 textures to only get 16-bit interpolation: they were stored as 16 bits in the cache; all kinds of textures were expanded and stored in simple texel formats in the cache. It's a simple scheme that works fairly well, but the bad thing about it is that you're wasting a lot of cache space. The GFFX, however, to my knowledge takes the same approach other vendors have used: let the cache mirror memory instead of expanding texels before storing them in the cache. This way you can either make the cache smaller (and save loads of transistors) or make it more efficient (and improve performance). The bad thing is that it may make features like paletted textures prohibitively expensive to support, or, if they were supported, they would have too low performance and thus not really be worth it.

PeteBernert
05-06-2003, 09:16 AM
OK, OK, I give up :)

Of course I understand the complexity of supporting different texture concepts in the same piece of hardware (good explanations, Humus), but of course I also tend to bitch somewhat when I see hours of my work wasted. It was the same situation when MS dropped stretched surface blit support, and color keying, in DX8. Perfect solutions for special things I had to do... and suddenly gone, sniff.


>Tom:
>Are you expanding to RGB and uploading a new
>texture every time the palette changes, or
>are you using a dependent texture read (in a
>fragment program)?

Tom, no, I don't use shaders/combiners as a fallback. Basically I use an (imho) clever texture caching scheme, which reduces the texture uploads to an absolute minimum by storing as many of the needed palette variations of the used textures as possible in VRAM. To be honest, that works so well that I even do it on cards which do support palettized textures. My plugins only use the extension (if available) in certain situations, whenever my main caching tends to fail (when it's getting flooded with paletted textures by certain game engines). If the extension is not available, well, in such situations a brute-force texture data uploading scheme kicks in.

Major drawbacks I see with a fragment program fallback:
a) I don't like the thought of using another extension (which is even different from vendor to vendor) to fall back on. And I also have to make sure that my DX and OpenGL/Mesa ports stay as similar as possible.
b) It wouldn't mix well with my current GPU plugin engine. I would need to emulate not only the texture palette lookup, but also all of my kinky texture environment setup, something which I have already optimized well (without shaders) for my tasks.

Anyway, in time I will of course need to use a different approach (yeah, most likely shaders), considering that my next projects will again have to emulate texture color look-up tables. But that's a different story, so I don't worry much about the missing palettized textures for that, I have to admit.

DarkCutler
08-15-2003, 05:34 AM
Great, at the moment I'm downloading and testing the new NVIDIA driver 45.23 for my GeForce FX 5800, and I saw that NVIDIA now supports the GL_EXT_paletted_texture and GL_EXT_shared_texture_palette extensions again, which I'm using in my OpenGL-based isometric "2D" engine to minimize the needed texture memory.
I don't know whether it is also supported on the GeForce FX 5200 and 5600.
And the extension GL_IBM_rasterpos_clip is new.

;) thx NVIDIA ;)

JD
08-16-2003, 12:53 PM
The Abducted engine uses it; maybe that's why.

zeckensack
08-16-2003, 01:46 PM
Originally posted by DarkCutler:
Great, at the moment I'm downloading and testing the new NVIDIA driver 45.23 for my GeForce FX 5800, and I saw that NVIDIA now supports the GL_EXT_paletted_texture and GL_EXT_shared_texture_palette extensions again, which I'm using in my OpenGL-based isometric "2D" engine to minimize the needed texture memory.

An NVIDIA employee has publicly stated that it's not in the hardware. Which means that it's emulated in software. Guess what? You won't be saving texture memory by using it.

al_bob
08-16-2003, 02:59 PM
Or maybe, just maybe, it might be using the shader hardware; after all, the code to do that has been posted on these forums multiple times.