OT: floating point colors and old games

The R300 and NV30 support floating point colors in 32, 64 and 128 bits, right?

Are the integer versions (i.e. 32-bit integer) gone?

And can you use those new color depths in Q3, for example? (Yeah, I know Q3 would probably not benefit a lot from 128-bit FP colors.)

Thanks!

You might be right, but you have to be a bit more careful about saying it.

The new cards will support 32 bit or 16 bit per RGBA component floating point color buffers and per-pixel math. This would be what many people would call “128” or “64” bit color.
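The naming really is just arithmetic: bits per component times the four RGBA components. A trivial sketch:

```c
#include <assert.h>

/* Total color-buffer bits = bits per component x 4 components (R, G, B, A). */
static int total_color_bits(int bits_per_component)
{
    return bits_per_component * 4;
}

/* total_color_bits(16) -> 64 ("64-bit color"),
   total_color_bits(32) -> 128 ("128-bit color"),
   total_color_bits(8)  -> 32 (today's integer format). */
```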

The old 8-bit per component integer support (32 bit color) is not “gone”, it will still be there in addition to these new formats.

Quake can probably use such color depth if you wish (there is a setting to “use desktop color depth”). However, I don’t think there would be anything to gain as all the source textures are 32 bit and there is little per-pixel math being done where precision could be lost. You’d probably lose a lot of performance, though.

– Zeno

What could ‘use desktop color depth’ gain you? As far as I know, the desktop will still be 8-8-8 (or maybe 10-10-10) even when using 16-16-16-16 and 32-32-32-32 backbuffers.

However, there is enough doubt in my mind that I would like someone to answer definitively for me. Will these new backbuffer formats be possible for front buffers? How will windows handle these?

I do not want any speculation! Would someone who owns a Radeon 9700 or similar please answer me. It would clear things up for me considerably.

EDIT: A large part of my doubt comes from the fact that ATI does not list anything other than 256, 64K, and 16.7 million colors in its list of display modes. This indicates to me that floating point is all behind the scenes stuff and that a 128bit backbuffer is dithered to 32bit.

Quake 3 would most likely choke if you set r_colorbits to 128 or 64, because I believe it assumes that the front and back buffers have the same number of color bits. Since it will not be able to find a 64-bit or 128-bit display mode, it will fail, even if 64-bit and 128-bit pixel formats exist.

[This message has been edited by Nakoruru (edited 08-28-2002).]

Originally posted by Nakoruru:
[b]I do not want any speculation! Would someone who owns a Radeon 9700 or similar please answer me. It would clear things up for me considerably.[/b]

I have one, but I don’t think any of that will work unless you have DX9 installed. Not only that, but I don’t think you can set the desktop color depth that high anyway (thus the need for DX9).

I’m not sure about OpenGL though, that’s still a grey area with my 9700 right now.

Ok, so are we saying we will definitely have to make a code change to use 64- or 128-bit colour depth, and we don’t know what that code change is yet? It’s not an option in display properties, then?

I don’t understand why we have to wait for DX9 (which, incidentally, has been delayed again: http://www.theinquirer.net/?article=5168 ). Can’t we have an extension now?

[This message has been edited by Adrian (edited 08-29-2002).]

The 64- and 128-bit modes are only supported in the backbuffer on the R300 (probably the NV30 too).
The frontbuffers are still 16/32 bits; however, the R300 implements 10-bit RGB components for the frontbuffer (10-10-10-2).
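As a sketch of what a 10-10-10-2 frontbuffer pixel could look like (the bit layout below, with red in the low bits, is one plausible arrangement, not necessarily what the hardware actually uses):

```c
#include <assert.h>
#include <stdint.h>

/* Pack 10-bit R, G, B and 2-bit A into one 32-bit pixel.
 * Hypothetical layout: red in bits 0-9, green 10-19, blue 20-29, alpha 30-31. */
static uint32_t pack_1010102(uint32_t r, uint32_t g, uint32_t b, uint32_t a)
{
    return (r & 0x3FFu)
         | ((g & 0x3FFu) << 10)
         | ((b & 0x3FFu) << 20)
         | ((a & 0x3u)   << 30);
}

/* Extract the 10-bit red component again. */
static uint32_t red_1010102(uint32_t pixel)
{
    return pixel & 0x3FFu;
}
```

Note that the whole pixel still fits in 32 bits, which is exactly why the desktop side of the story gets confusing.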

Even though Q3 doesn’t use a lot of textures, it still uses 2-4 passes (some custom maps use even more), and alpha-blended smoke and fog might benefit from a higher-precision backbuffer.

If you have to write specific code to enable a 64- or 128-bit backbuffer, it’s really sad. I hope they will make the 64/128-bit modes available as standard color modes, so that when you select 64-bit color the backbuffer will be 64-bit FP and the frontbuffer will default to 10-10-10-2 RGBA.

Most applications (business and whatnot) are written for integer buffers. Unless the ATI converts to floating point format in real time, it will probably run like any old card.

This is a little strange if you ask me. What will the range of the numbers be, anyway: 0.0 to 1.0?

I guess that the GL way will be WGL_ARB_pixel_format (oh boy!)

V-man

The reason I do not think that Quake 3 will run with the new buffers is not because there will be any problem just asking wglChoosePixelFormat for 64 or 128 color bits with 16 or 32 bits per component, but because Quake 3 will probably try to find a display depth of 64 bits when it enumerates display modes. There will not be a 64 or 128 bit display mode, so it won’t work. This is because I bet that Quake 3 assumes that you need to set the display mode to match the pixel format color bits.
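A hypothetical sketch of that assumption (the mode list and function name below are made up for illustration, not Quake 3’s actual code): the engine enumerates display-mode depths and only proceeds if one matches the requested color bits exactly.

```c
#include <assert.h>

/* Return 1 if any enumerated display-mode depth matches the requested
 * color bits exactly, 0 otherwise (in which case init would fail). */
static int find_display_mode(const int *mode_depths, int count, int wanted_bits)
{
    for (int i = 0; i < count; i++)
        if (mode_depths[i] == wanted_bits)
            return 1;
    return 0;
}
```

With only 16- and 32-bit modes enumerated, a request for 64 color bits fails at this step, even though a 64-bit pixel format might exist.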

It will not let you set it in windowed mode because it assumes that you want the backbuffer color bits to match the current display depth.

What would be correct would be to have separate display color bits and pixel format color bits. You can specify them separately in both OpenGL and Direct3D 8; it’s just that most code is written to assume they have to be the same. For the most part that assumption holds, but only because the hardware does not allow otherwise. The API itself has never disallowed it.

As for the Matrox Parhelia supporting 10-10-10-2: how do you set the display mode to get this front buffer? The DEVMODE structure in Windows does not give you any way to distinguish between 8-8-8-8 and 10-10-10-2, because it only has BitsPerPel as a member, with no per-component description. It would seem that you could not use ChangeDisplaySettings to set this mode for OpenGL.
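The ambiguity is easy to see: both layouts report the same total bits per pixel, and that single total is the only depth information DEVMODE carries.

```c
#include <assert.h>

/* Total bits per pixel for a given per-component layout; this single
 * number is all that a DEVMODE-style BitsPerPel field records. */
static int bits_per_pel(int r, int g, int b, int x)
{
    return r + g + b + x;
}

/* 8-8-8-8 and 10-10-10-2 both come out to 32, so asking for a
 * "32-bit" mode cannot distinguish between them. */
```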

Also, the Direct3D 8 documentation explicitly says that only X1R5G5B5, R5G6B5, and X8R8G8B8 are accepted as display formats (front buffer formats). Does the Parhelia also accept A2R10G10B10?

Unlike ATI, Matrox does list their best display mode as having ‘billions’ of colors (2^30, I guess). I am just wondering how they use the Windows API to actually set that mode. Maybe it’s just marketdroids talking about the backbuffer.

EDIT: After reading more about the Parhelia, I do not doubt that it has a 10-bit per channel display mode. It’s just that the Windows GDI API has no way to distinguish between two different 32-bit pixel formats, so how do you actually put the Parhelia into 10-10-10-2 mode?

[This message has been edited by Nakoruru (edited 08-29-2002).]

Doesn’t the Parhelia have an option in the display setting to enable 10-10-10-2 (just as a swap out for the standard 8-8-8-8 mode)?

Originally posted by Nakoruru:
[b]What could ‘use desktop color depth’ gain you? As far as I know, the desktop will still be 8-8-8 (or maybe 10-10-10) even when using 16-16-16-16 and 32-32-32-32 backbuffers.[/b]

Ahh, sorry. For some reason I was under the impression that the new cards (NV30, at least) could do a floating point color buffer. I’ll try and find the source of my misinformation.

– Zeno

They can do floating point color buffers, but I do not think that they can ‘display’ them.

If you look at the new OpenGL extensions from nVidia, the floating point buffers are only available as pbuffers.

Since the same documents claim that the nVidia OpenGL drivers are more capable than Direct3D, I would guess the situation is the same there, i.e. you can only create floating point render targets. This means they cannot even be used as backbuffers for displayable rendering contexts.

Nitro,

So, a checkbox on an extended property sheet for the Matrox tells the driver to use 10 bits instead of 8 when switching to 32-bit color?

Most things that use GDI should be fine, but I bet that will screw up some DirectDraw applications.

GDI doesn’t even support 10-10-10-2 formats, I believe… I think it just expects 8888.

Compositing desktops are the future. Every window is a pbuffer and can have its own color depth, gamma control, etc. You probably won’t see “floating-point windows” until this happens.

  • Matt

I have to admit that I do not know enough about GDI to know if it would work. I figured it was extensible enough to handle a new color format, meaning that most applications would not have any more problem blitting device contexts to the screen in 10-10-10-2 than they do blitting to a 1-1-1-1 screen. Guess not.

So, is Matrox’s 10-10-10-2 mode completely useless until the next major version of Windows (or a Mac)? ChangeDisplaySettings is a GDI function.

[This message has been edited by Nakoruru (edited 08-30-2002).]

Originally posted by Nakoruru:
[b]Nitro,

So, a checkbox on an extended property sheet for the Matrox tells the driver to use 10 bits instead of 8 when switching to 32-bit color?[/b]

Yeah, I think so, but it only applies to 3D applications. I don’t have one, so I’m not quite sure.

How would it know it’s a game? I guess it changes the video mode again after you select a pixel format and make the context current. Seems dubious to me.

Don’t forget overlays… it’s quite conceivable to have an 8888 desktop and a 10-10-10-2 overlay. The interesting question is what happens when you hit PrintScreen.

  • Matt

Originally posted by mcraighead:
[b]GDI doesn’t even support 10-10-10-2 formats, I believe… I think it just expects 8888.

Compositing desktops are the future. Every window is a pbuffer and can have its own color depth, gamma control, etc. You probably won’t see “floating-point windows” until this happens.

  • Matt[/b]

Actually, compositing desktops are the present. Jaguar.

-mr. bill

As far as I’m concerned, if Windows doesn’t support it, it doesn’t exist.

It will be a cold day in hell before you’ll get me to use an Apple operating system. (And an even colder day before I’ll touch that Linux thingy. Well, I don’t know. I asked myself at one point in time whether I hate Mac or Unix more, and I couldn’t decide. I am certainly no fan of either.)

  • Matt

> an Apple OS

How about a nice, warm UNIX lovingly crafted by Berkeley hippies, and then just accidentally sold with a colorful fruit on the box? :)

Hate is such a strong word. My Linux mail/web/gateway/name server does things I’d have to pay thousands to do on Windows, and then it’d still do it less well and require more machine to do it. But would I do word processing on X windows? Not a chance.

When it comes to real-time audio, the classic MacOS (not the version lovingly hand-crafted by Berkeley hippies) has an edge, because all the big apps actually run their audio pumps at interrupt time, leading to lower latencies than any Windows setup I’ve seen. Lack of protection has its uses :) Although once you go below 3 ms, the point starts to become moot.

Oh, wait, am I trying to advocate a “to each his own” in a potential OS flame war? What am I thinking? Please, un-read the above comments.