NV: supported internal texture formats

The NVIDIA GPU Programming Guide contains a table of supported internal texture formats for the GeForce 6 Series GPUs (page 35, section 4.4). Is there a similar table for older NVIDIA GPUs? What about ATI cards? And what does the “blend” column indicate?

I don’t know about older NV cards or ATI, but the “blend” column indicates that you can enable blending for those pixel formats with glEnable(GL_BLEND).

Blending was previously only supported for ‘displayable’ pixel formats like RGB8, RGBA8, etc.
GF6800 GPUs, for example, also support 16-bit floating-point blending.
This means you can use hardware-accelerated blending on a pbuffer with an fp16 pixel format.
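
For example (a minimal sketch; the blend function here is just a common choice, nothing the guide mandates):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
/* On GeForce 6 class hardware this also works while rendering to an
   fp16 pbuffer; on older chips blending only works with fixed-point
   displayable formats like RGB8/RGBA8. */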

Greetz,

Nico

I suppose that, for texture upload performance, it is best not to depend on knowledge of the internal hardware support and instead determine the best format(s) empirically at runtime.
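
For instance, a rough sketch like this (the 512x512 size, the upload count, and the clock()-based timing are my own choices) could rank candidate internal formats by upload speed at startup:

#include <GL/gl.h>
#include <time.h>

/* Times repeated glTexSubImage2D uploads for one candidate internal
   format. Assumes a current GL context and that 'pixels' points to at
   least 512*512*4 bytes; clock() is coarse, so a platform
   high-resolution timer would be preferable. */
static double time_upload(GLenum internal_format, const void *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internal_format, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glFinish();                     /* complete the initial allocation */

    clock_t t0 = clock();
    for (int i = 0; i < 100; ++i)
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glFinish();                     /* wait for the driver to finish */

    double seconds = (double)(clock() - t0) / CLOCKS_PER_SEC;
    glDeleteTextures(1, &tex);
    return seconds;
}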

However, I too am interested in knowing the supported formats on ATI and older NVIDIA hardware. Is there a definitive table anywhere?

Looking around, I see a reference to a site that now returns a 404 and some DirectX tables, but no comprehensive table of GL formats.

Can anyone point toward more references?

I assume you mean even farther back than the FX series, which is covered in section 5.6?

Yes, I would like to know the supported internal formats going back to the GeForce2 MX and the ATI Rage 128.

In the table on page 46 (section 5.6) there are only N’s for A8. To what internal format will an alpha texture be converted? A8L8?
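
I suppose I could query what the driver actually allocates, something like this sketch (nonzero luminance bits would hint at an A8L8 conversion):

GLint a_bits, l_bits;
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA8, 64, 64, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, NULL);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                         GL_TEXTURE_ALPHA_SIZE, &a_bits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                         GL_TEXTURE_LUMINANCE_SIZE, &l_bits);
/* These report the bits the driver actually allocated, which may
   differ from the requested GL_ALPHA8. */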

Originally posted by -NiCo-:
[b]I don’t know about older NV cards or ATI, but the “blend” column indicates that you can enable blending for those pixel formats with glEnable(GL_BLEND).

Blending was previously only supported for ‘displayable’ pixel formats like RGB8, RGBA8, etc.
GF6800 GPUs, for example, also support 16-bit floating-point blending.
This means you can use hardware-accelerated blending on a pbuffer with an fp16 pixel format.

Greetz,

Nico[/b]
I’ll add two things:
When I create a render-to-texture target I am able to specify 32-bit per-channel precision (I have a 9800 Pro) and use that precision in fragment programs.
The problem is getting data in at that precision when I create the RTT: at most, with GL_RGBA16, data is passed at 16-bit precision, and only afterwards is it processed at 32-bit precision.
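
A sketch of what I mean, assuming the ATI_texture_float extension (the 9800 should expose it) and a hypothetical 'data' buffer:

/* GL_RGBA_FLOAT32_ATI requests 32-bit float storage per channel;
   with GL_RGBA16 the upload is quantized to 16 bits per channel. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI, 256, 256, 0,
             GL_RGBA, GL_FLOAT, data);

With pbuffer-based RTT the pbuffer itself would also need a float pixel format, which on Windows is a separate extension (WGL_ATI_pixel_format_float, as far as I know).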

Byez, Emanem! :smiley: