High performance Intellisample problem

My app renders the scene first with a texture colour gradient, then reads the frame back and uses the colour to locate the pixel's position on the polygon. It works with Intellisample set to Quality or Performance, but when I set it to High Performance it returns garbage positions. Intellisample appears to be applying some kind of filter to my texture; the colours are being changed slightly. Both AA and AF are switched off.
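For context, the readback side is essentially this (a minimal sketch; the variable names and decode scale are illustrative, not my exact code):

    #include <GL/gl.h>

    GLint x = 100, y = 100;      /* pixel of interest (placeholder values) */
    GLubyte pixel[3];

    /* ... scene already rendered with the gradient texture, filtering off ... */
    glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);

    /* Decode the colour back into a position on the polygon; any driver-side
       filtering or compression that nudges the colour breaks this decode. */
    float u = pixel[0] / 255.0f;
    float v = pixel[1] / 255.0f;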

According to the information I've read, Intellisample does lossless colour and Z compression, so it shouldn't change the colours, but it does.

It makes no sense to me that High Performance is filtering but Quality isn't. IMO neither of them should be touching my pixels when I have AA/AF off, least of all the 'High Performance' mode.

Presumably I will be able to switch Intellisample off using the multisample extension. Does anyone know for sure?
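If it is hooked into multisampling, something like this via ARB_multisample might do it (an untested guess on my part, and the driver may ignore it anyway):

    #ifndef GL_MULTISAMPLE_ARB
    #define GL_MULTISAMPLE_ARB 0x809D
    #endif

    /* Speculative: turn off multisample rasterisation for the gradient pass. */
    glDisable(GL_MULTISAMPLE_ARB);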

Are you getting a 16-bit format instead of a 24-bit format for your pixels, perhaps?

Good point, but I just checked and I am getting a 24-bit colour buffer. Maybe it's changing the texture format, though…
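For anyone wanting to check the same thing, the buffer depths can be queried directly:

    GLint r, g, b;
    glGetIntegerv(GL_RED_BITS,   &r);
    glGetIntegerv(GL_GREEN_BITS, &g);
    glGetIntegerv(GL_BLUE_BITS,  &b);
    /* 8/8/8 here confirms a 24-bit colour buffer */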

Of course "Intellisample" messes with your texture filtering. If not for that purpose (read: winning more benchmarks), why would it exist?

Marketing fluff is just that.

The Intellisample document does mention adaptive texture filtering, but it says that it only applies to AF and trilinear filtering. AF is off and the texture is not mipmapped. I need to override everything on the 'Performance & Quality' panel.
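For reference, the texture is a single base level with no mipmap chain, set up along these lines (a sketch; the name, size and data here are placeholders):

    GLuint gradientTex;          /* placeholder handle */
    GLsizei width = 256, height = 256;
    const GLubyte *pixels = /* gradient data */ 0;

    glBindTexture(GL_TEXTURE_2D, gradientTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* One base level only, explicit uncompressed internal format */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);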

What is "Intellisample"? Obviously it is a marketing term (you can tell from the name, which carries no real information but sounds legitimate), but whose hardware does it belong to? And what effects does it pertain to?

> I need to override everything on the
> 'Performance & Quality' panel.

There is no guaranteed way to do that. If the user decides to override your settings (with a higher or lower quality mode), there's nothing you can do about it if the driver doesn't want to let you.

This means that any app that reads back colour from a rendered texture and uses it as a memory pointer or a location ID will not work when Intellisample is set to High Performance, and there's nothing we can do about it. Great.

Out of curiosity, does ATI have something similar?

This may be forced texture compression.

Think about it: you don't have mipmaps, and you don't have a filter. LOD bias wouldn't change anything, and neither would an approximation of your non-existent filter.

This is either a case of forced texture compression or forced mipmapping, with the former being far more likely.
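One way to test the compression theory, assuming the driver exposes ARB_texture_compression, is to ask it what it actually stored:

    #ifndef GL_TEXTURE_COMPRESSED_ARB
    #define GL_TEXTURE_COMPRESSED_ARB 0x86A1
    #endif

    GLint compressed, internalFormat;
    glBindTexture(GL_TEXTURE_2D, gradientTex);   /* the gradient texture */
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                             GL_TEXTURE_COMPRESSED_ARB, &compressed);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                             GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
    /* compressed == GL_TRUE means the driver compressed it behind your back */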

> This means that any app that reads back
> colour from a rendered texture and uses
> it as a memory pointer

That has never been a requirement nor a guarantee of the OpenGL specification.