How to do Linear Z buffer



JelloFish
06-05-2003, 07:52 AM
Hi Everyone,

I'm looking to implement a linear Z (depth) buffer, but I see no way to do it other than special commands in a vertex program.

I've attempted to use a projection matrix where w is equal to z. But there seems to be a problem: OpenGL expects z to be in the -1 to 1 range after the vertex program is finished, yet you want w to range from 0 to 1 for those same depths, so it becomes impossible to have equal values for both.
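
For reference, here's a little test program I threw together (just a rough sketch, assuming a standard glFrustum-style projection and the default glDepthRange of 0 to 1) that shows the hyperbolic mapping I'm trying to get away from:

#include <stdio.h>

/* window-space depth produced by a standard perspective projection,
   as a function of eye-space distance ze in [n, f] */
static double window_z(double ze, double n, double f)
{
    double zc = (f + n) / (f - n) * ze - 2.0 * f * n / (f - n); /* clip z */
    double wc = ze;                                             /* clip w */
    double ndc = zc / wc;                                       /* perspective divide */
    return 0.5 * ndc + 0.5;               /* glDepthRange(0,1) mapping */
}

int main(void)
{
    double n = 1.0, f = 1000.0;
    for (double ze = 1.0; ze <= 1000.0; ze *= 10.0)
        printf("eye distance %7.1f -> window z %.6f\n", ze, window_z(ze, n, f));
    /* nearly all of the [0,1] range is used up in the first few units
       past the near plane; that's the non-linearity I mean */
    return 0;
}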

The only way I can think of doing it is through a vertex program where you multiply result.position.z by result.position.w before the end of the vertex program.

I've also heard that D3D supports a linear Z buffer; does OpenGL support some command where the final z/w divide can be avoided?

Thanks, any advice is appreciated.

dorbie
06-05-2003, 10:01 AM
You could try just setting w to 1.0 but it'll screw up your perspective correction.

roffe
06-05-2003, 12:14 PM
Originally posted by JelloFish:

I'm looking to implement a linear Z (depth) buffer, but I see no way to do it other than special commands in a vertex program.



I also, in my own naive way, tried to work this out, though not in a vertex program. I remapped the z-generating row in the projection matrix so that after the perspective division it became a*z+b, by assigning vertices w=z^2 instead of w=1.
The obvious problem is how to deal with translation. It worked if I separated the modelview and projection matrix transformations, but the solution "only" defeated the whole purpose of using homogeneous coordinates. I later learned that there are very good reasons for a non-linear z-buffer, which kind of answered my initial question of why it's non-linear.

My 2 cents.


[This message has been edited by roffe (edited 06-05-2003).]

V-man
06-05-2003, 01:23 PM
Originally posted by dorbie:
You could try just setting w to 1.0 but it'll screw up your perspective correction.


Nah, it's better to counteract w by doing
z*=w;
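
In an ARB vertex program it would look something like this; a rough, untested sketch, and I'm assuming you load the two constants derived from near/far into program.env[0] yourself:

#include <GL/gl.h>
#include <GL/glext.h>   /* assumes ARB_vertex_program entry points are available */
#include <string.h>

/* Writes a clip-space z that is w times a value linear in eye-space depth,
   so after the hardware divide by w the window z ends up linear between
   the near and far planes. */
static const char linear_z_vp[] =
    "!!ARBvp1.0\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "PARAM mv2    = state.matrix.modelview.row[2];\n"
    "PARAM c      = program.env[0];\n"      /* (2/(f-n), -(f+n)/(f-n), 0, 0) */
    "TEMP pos, eyez, lz;\n"
    "DP4 pos.x, mvp[0], vertex.position;\n"
    "DP4 pos.y, mvp[1], vertex.position;\n"
    "DP4 pos.z, mvp[2], vertex.position;\n"
    "DP4 pos.w, mvp[3], vertex.position;\n"
    "DP4 eyez.x, mv2, vertex.position;\n"   /* eye-space z, negative in front */
    "MAD lz.x, -eyez.x, c.x, c.y;\n"        /* remap distance to [-1,1]       */
    "MUL pos.z, lz.x, pos.w;\n"             /* pre-multiply so z/w = lz       */
    "MOV result.position, pos;\n"
    "MOV result.color, vertex.color;\n"
    "END\n";

void install_linear_z_vp(GLuint prog, double n, double f)
{
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(linear_z_vp), linear_z_vp);
    glProgramEnvParameter4dARB(GL_VERTEX_PROGRAM_ARB, 0,
                               2.0 / (f - n), -(f + n) / (f - n), 0.0, 0.0);
    glEnable(GL_VERTEX_PROGRAM_ARB);
}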

dorbie
06-05-2003, 02:51 PM
Ahh, of course. What was I thinking :-)

jwatte
06-05-2003, 06:15 PM
On the subject of the "non-linearity" of Z buffers: Z buffers give you volume areas (depth resolution quanta intersected with pixels) that are uniform size _in projected space_. Thus, there's a decent argument that they're the right solution as-is.

Just like an object needs to get bigger in X and Y to fill a full pixel when receding into the distance, it should also need to get bigger in Z to do the same. The difference is that you usually have 24 bits of Z resolution, but only 10 bits or so of X/Y :-)

dorbie
06-05-2003, 08:31 PM
Hmm, but in practice the non-linearity of Z is determined by the near and far clip planes and depth resolution, whereas the size of an object is determined by the field of view and resolution.

It's the whole ratio of near to far weighting depth resolution that's at the root of the problem and that goes back to your projection matrix.

Up until recently the math has been too convenient and fast to give a darn, but now that's less relevant. You can pull whatever value you like from whatever field you like so long as you can do the fragment arithmetic to make the perspective interpolation correct. This very much changes the rules. It's time to be less clever about the whole thing and pick the values you want, not the values that are convenient for 2D linear interpolation. Smart people have understood this for a while.

All this nonsense is really about one question: given the new capabilities of hardware, what should we stuff in the depth buffer for best effect?

If you ask me I'm thinking floating point exponent (think fractional exponent) with 0 = near and 1 = far linear. But the devil's in the details when you start looking at precision issues when yanking values off the matrix. But this is ONLY where you have less storage precision than you have numerical precision and I'm not really serious about this part :-).

The rules have changed: we have space for fp hardware, and an 8-bits-per-pixel framebuffer is irrelevant. There must be a categorical answer to this question in the new order. Given that you have limited transform precision, you can do whatever fragment arithmetic is within reason, and z-buffer size is not an issue, what's the best value to grab, manipulate and store? Some analyses of depth storage fall short because they assume you have infinite precision until you manipulate & cast to the limited-precision depth buffer. This just isn't the modern paradigm.

Start with what precision comes out of projection, try to preserve as much of it as possible and realize that worrying about depth buffer size is a joke. Assume you have all the depth buffer precision you need, don't try and be a smartass about saving 8 bits of a 32 bit field and let's assume for a sec you have fp compare interpolation and divide at full precision. Now, what's the best you can preserve with perspective correction from your matrix transform.

When you answer that, then THAT'S what your depth buffer should be (if it isn't now, it will be in 2-5 years), and to heck with all the wacky schemes to do anything else. 1-z is the last thing any sane person would do under any of those circumstances.

Just a suggestion :-)

dorbie
06-05-2003, 08:39 PM
P.S. this answer does not change unless you start adding precision to your modelview and projection transformations. Ultimately you're gated by the fp eye-space z coordinates; you can't do better than that. So start with 32-bit IEEE eye-space Z. If you're trying to do better than that, or trying to store fp 0.0 at the far plane, then you're seriously out of luck, at least for resolving one object w.r.t. another after transformation.

Hmm, I think we have an answer, 32 bit IEEE eyespace z.

cass
06-05-2003, 10:16 PM
The original question is not clear to me.

The standard OpenGL z-buffer is an occlusion buffer whose interpolant is linear in window space.

The w-buffer (which OpenGL was never extended to support "natively", AFAIK) is an occlusion buffer whose interpolant is linear in eye space.

[Note that computing parameters that are linear in eye space requires perspective correction - and such parameters generally require a fair amount of per-pixel floating point math.]

You can do any variety of linear/nonlinear/piecewise z you want with depth-replacing fragment programs, but you'll pay a significant performance penalty for using them.

That's because most modern GPUs are optimized around the idea that most geometry is linear in window space z - and there are lots of optimizations you can make around that (good) assumption.

Thanks -
Cass

dorbie
06-05-2003, 11:29 PM
The fundamental observation or at least hope / article of faith with the W buffer was that you could finally afford the perspective correction per pixel.

The schemes to squeeze the last nuance of depth precision into some 24 bit or lesser format are pretty strained and look downright silly with suggestions like 1-z stored in eyespace. My post was just an appeal (or observation) that kind of led full circle all the way back to the obvious, a full precision eyespace Z (not a w buffer (what was I thinking) but true eye z pre projection) interpolated with perspective correction, obviously requiring a few extra bits of precision while doing the math. Any scheme implemented isn't going to do any better for obvious reasons, although maybe you could use your hardware more wisely. Those are the eyespace numbers that come out of the modelview and that (or worse) is all you have to work with.

I was really just drawn into this long-winded monologue, rediscovering the obvious, by this thread and the preamble in another depth-buffer-related thread.

Naively replacing z with a linear depth buffer will cause some serious artifacts without perspective correction; for example, coplanar faces that don't share vertices will be incorrectly occluded. Hmm... a 32-bit 1D floating point texture texgen'd along eye-space Z, fed into the depth buffer output register in a fragment shader. That would do it I think; unfortunately it's a bloody big texture without texture filters, so forget it (some day). At least you can pick a texture format of your choice that has a decent filter. In a 1D texture you could apply any depth ramp you like (even the fractional exponents I mentioned earlier). All 100% perspective correct and totally functional. Makes you glad some clever hardware has been applied to the texture interpolators, even if the same gates haven't been spent on the depth buffer fragment interpolation. The devil is very much in the implementation details, but at least it's clear that the vaunted 'difficult' part of the problem is tractable and has been solved elsewhere in hardware where it's seen as a priority.
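
Something along these lines is what I have in mind; a very rough, untested sketch, and the function names are just mine. A GL_EYE_LINEAR texgen on unit 1 produces s = (distance - near)/(far - near), a 1D ramp texture turns that into whatever depth mapping you want, and a trivial fragment program writes the fetched value out as the fragment's depth, so the texture units do the perspective-correct interpolation for you:

#include <GL/gl.h>
#include <GL/glext.h>   /* assumes the ARB multitexture / fragment program entry points */

/* fragment program: replace depth with the value fetched from the 1D ramp */
static const char depth_from_ramp_fp[] =
    "!!ARBfp1.0\n"
    "TEMP d;\n"
    "TEX d, fragment.texcoord[1], texture[1], 1D;\n"
    "MOV result.depth.z, d.x;\n"
    "MOV result.color, fragment.color;\n"
    "END\n";

/* set up eye-linear texgen so s runs 0..1 from the near to the far plane;
   call with the modelview matrix set to identity so the plane is taken
   as an eye-space plane */
void setup_eye_z_ramp_texgen(double n, double f)
{
    GLfloat plane[4] = { 0.0f, 0.0f,
                         (GLfloat)(-1.0 / (f - n)),
                         (GLfloat)(-n / (f - n)) };
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGenfv(GL_S, GL_EYE_PLANE, plane);
    glEnable(GL_TEXTURE_GEN_S);
    /* bind the 1D ramp texture on unit 1 here, and load depth_from_ramp_fp
       with glProgramStringARB just like a vertex program */
}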


[This message has been edited by dorbie (edited 06-06-2003).]

cass
06-06-2003, 12:41 AM
I'm not sure I was entirely clear on this point before, but lots of occlusion culling and depth buffer compression techniques make use of the assumption that z is linear in window space.

You can remap z however you like in your fragment program, but it will cost you more than just the extra shader computation.

I'm relatively agnostic about the whole this-mapping-is-better-than-that-one debates. I'm happy with 24 bits. :)

Cass

dorbie
06-06-2003, 04:24 AM
Interesting point, but why is 24 bits still the magic number? Surely this is really a legacy value imposed by what was considered to be a generous per-pixel amount on laughable hardware 20 years ago. How about having a whopping 48-bit z? With a fixed point representation generated from a floating point input you could certainly use it. I suppose 'compression' schemes have already done this (probably with less than 48 bits) and gone back to some non-linear storage prior to the compare & store at some smaller precision, or after the compare as long as the depth read does the conversion back to fixed point (no, the compare has to be done with the whacky representation due to rounding).

Here's a crazy idea: how about a depth LUT (or gamma or whatever). You do interpolation at some decently high precision, then you convert to the pseudo representation prior to storage, but allow that conversion algorithm to be weighted with a LUT or exponent (akin to gamma). It's like the above scheme but lets you go with any distribution of precision. I'd like to see you get glPolygonOffset working with that scheme :-).

OTOH it's making a meal of the whole thing, just go with the high precision full depth buffer. The effort ain't worth the memory you're trying to save these days... ah precision = bandwidth.

[This message has been edited by dorbie (edited 06-06-2003).]

V-man
06-06-2003, 04:51 AM
Originally posted by cass:

I'm not sure I was entirely clear on this point before, but lots of occlusion culling and depth buffer compression techniques make use of the assumption that z is linear in window space.


What do you mean? Is the z-buffer stored compressed in memory all the time?
And how do I disable it if I want to?

PS: the idea here is not to store a perspective z into the depth buffer but to store an orthographic z in there. The depth testing will work fine. I have a paper on this.



[This message has been edited by V-man (edited 06-06-2003).]

dorbie
06-06-2003, 05:11 AM
V-man, ortho z will need perspective-correct interpolation. For example, a projected eye-space texture gives you this.

SergeVrt
06-06-2003, 05:36 AM
Originally posted by JelloFish:
I'm looking to implement a linear Z (depth) buffer, but I see no way to do it other than special commands in a vertex program.



try http://developer.nvidia.com/docs/IO/1331/ATT/ZBuffering2.pdf
and
read here http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html

cass
06-06-2003, 06:35 AM
V-man,

There's no way for users to disable buffer compression or early occlusion hardware. They're interesting hw features in that they have no place in the logical operation of OpenGL, but they're really important for increasing effective memory bandwidth and effective fragment and pixel processing rates.

I understand the desire to store a perspective-corrected eye-space Z term in the depth buffer (which is non-linear in window space). That's what w-buffering is. Just be aware that while you will probably get better precision distribution, you lose the nice property of window-space linearity - and that may hurt performance.

Dorbie,

I just mean that 24-bit fixed-point z gives artifact-free depth resolution for all of the [near,far] ranges I ever use. I realize opinions differ on this. Especially flight-sim developers. :)


Thanks -
Cass

dorbie
06-06-2003, 06:46 AM
Hmm... someone's been talking.

I just thought it was pretty interesting to explore the limits of what you could theoretically get, and confuse myself in the process. It would be nice to dispense with all the whacky depth buffer schemes and say you're using all you can get from the modelview (z being pre-projection eye z).

The whole business of directly interpolating and storing a larger fixed point format would be very interesting, but I have no intuition for the hardware costs. Of course it'd just be a larger depth buffer.

Can you tell us what precision is in the interpolators now before the conversion to 'compressed'? Or do you interpolate the 'compressed' representation?

[This message has been edited by dorbie (edited 06-06-2003).]

cass
06-06-2003, 07:19 AM
Originally posted by dorbie:
Can you tell us what precision is in the interpolators now before the conversion to 'compressed'? Or do you interpolate the 'compressed' representation?


There are a lot of factors that affect the precision of actual interpolators. Particularly non-visible factors, like method of interpolation and maximum resolution and sub-pixel precision.

We don't publish buffer compression details, but as long as the compression is lossless, the effects are invisible - except for their performance characteristics.

Thanks -
Cass

dorbie
06-06-2003, 07:33 AM
Hmm.... you imply that this is real compression. My understanding is that these types of compression simply change the distribution of precision using some pseudo floating point representation. So if you took the MSBs of your interpolation result and compared & stored them, that would be an 'uncompressed' Z, but it's lossy, obviously. In a compressed scheme you take the same number and mangle it with some kind of bastardised float and compare / store; it too is lossy, but it preserves the depth information that's more interesting. So it doesn't really make sense to call this lossless in any conventional sense, although it's no more lossy than a traditional approach.

Interpolating the compressed format is no different conceptually w.r.t. lossiness, you just cast sooner.

I appreciate that these things are often dependent on many things, I was just looking for 'register' size more than an absolute metric of meaningful bits.

JelloFish
06-06-2003, 12:42 PM
For clarification,

We are reading in the depth buffer and trying to use it to perform post-scene effects. However, as the distance of an object increases, its z values approach 1 faster and faster. It doesn't seem like the pixel shader math exists to work with the non-linear distance values. Ideally I would want to use a special projection matrix to get z to end up being linear, but it didn't seem possible.
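
For example, this is the remap I'd want to apply to the values read back from the depth buffer (a sketch assuming the standard perspective projection, with the same near/far that built it):

/* window-space depth d in [0,1] -> distance in front of the eye */
double linearize_depth(double d, double n, double f)
{
    return (f * n) / (f - d * (f - n));
}

/* e.g. with n=1, f=1000: d=0.5 gives about 2.0, d=0.9 about 9.9,
   d=0.999 about 500; almost the whole scene is squeezed up near d=1 */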

Also, after making everything have a linear z in the vertex program, most scenes look fine (even better), except for a small number of scenes where the polygons are large and are sometimes off screen. I don't think the resulting z/w values are interpolating the way I would expect them to. Perhaps this is a problem with something not being perspective correct, as some of you have mentioned.

dorbie
06-06-2003, 01:30 PM
I expect you will also find that cases like this are wrong:




\
\
\\ <---*eye
\\
\
\

OR this actually (I think):



\
\
\\ <---*eye
\\
\
\



But for the most part stuff might look passable.

Option 1:
Try a 1D 16 bit texture ramp texgen in eye Z mapped into the depth buffer as I suggested earlier. It costs you a texture unit though.

Option 2:
For your depth effect use render to texture (I suspect you do already) then use a dependent texture read in your effects fragment program to relinearize the depth buffer. Again a 1D texture ramp, this time with the correcting mapping to linearize the readback value to linear eyespace z.

I think both will work; one costs a texture unit & texgen, the other assumes a readback or render-to-texture effects implementation and costs a dependent texture read on your effects pass.

Take your pick.


[This message has been edited by dorbie (edited 06-06-2003).]

JelloFish
06-06-2003, 03:06 PM
Thanks for the suggestions

Option 1 would be difficult since we perform multiple post passes after the scene is finished.

Option 2, this is what I would have preferred, but it seems that dependent textures don't work with depth textures. (http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/009506.html)

One option I've been toying with is the copy depth to color extension. But it seems like this would really slow down the game a lot.

dorbie
06-06-2003, 03:38 PM
I would have thought a depth to color would be possible without breaking the bank. Try 16 bit z to 16 bit luminance (is that supported?).

Another option might be to simply use a variation on Option 1, but write linear z from a texture into a color component or even destination alpha (with 16-bit destination components) in one of your passes, and copy that to a texture. You may be able to use a color component inexpensively if you have something like a depth-buffer pass early on, or alpha if it's available and doable. Depending on the effect you're after, 8 bits linear may even be enough instead of some higher-precision format. I dunno what you're after (depth of field, volume fog or some other cool thing). The required precision would vary with these applications.

[This message has been edited by dorbie (edited 06-06-2003).]

JelloFish
06-06-2003, 04:25 PM
I wouldn't doubt that 8 bits would be more than enough for what we want to use it for, but to loop through our objects and render an extra 150,000 polygons seems way out of the question.

I really should try the depth to color extension; maybe it's not as bad as I would imagine. It just seems like a lot of expensive extra copying. What it needs is the ability to use DEPTH_STENCIL_TO_RGBA_NV as a parameter to glCopyTexSubImage.

I think I'm going to test that extension and post again.

dorbie
06-06-2003, 04:32 PM
Why not just texgen to destination alpha (modified Option 1) if 8 bits is enough, and read it directly back to color? It should be a breeze.

[This message has been edited by dorbie (edited 06-06-2003).]

JelloFish
06-06-2003, 05:00 PM
Sorry, I forgot to post about that: my alpha channel is being used, both in src_alpha operations and in dst_alpha operations.

Can you use a different alpha channel for blending than what you put into the color buffer?

JelloFish
06-06-2003, 05:29 PM
So COPY_DEPTH_TO_COLOR didn't go so well. Doing a CopyPixels seems to be even slower than a glReadPixels operation. Plus I had to enable a 24-bit depth buffer and 8-bit stencil (costing 10% of my FPS) since 16-bit depth copying seems "fairly uninteresting" to the people who wrote the extension.

I have a feeling I must be doing something wrong to be getting such horrible performance, literally taking 10 seconds for each CopyPixels call. Strangely, nothing seems to show up on screen other than the existing frame buffer, so I suppose it is quite likely I am not doing something correctly.

I'm not sure what it might be; I've never tried any CopyPixels operation before. But AFAIK all my pixel zooms and raster positions seem to be set up correctly.

dorbie
06-06-2003, 05:33 PM
No, but if you could hold off on using destination alpha for the first pass you could use it then and clear after your readback.

You could also look into auxiliary buffers, but I haven't used them. I think you can write to one of these puppies in your fragment shader independent of the rest of the stuff you're doing. You'd just send the texture unit output there, then readback. Like I say I've never used this, so I could be talking out my ass on this particular point.

[This message has been edited by dorbie (edited 06-06-2003).]

JelloFish
06-06-2003, 05:38 PM
Originally posted by cass:

I'm relatively agnostic about the whole this-mapping-is-better-than-that-one debates. I'm happy with 24 bits. :)


Even 24 bits breaks down when you don't have nice geometry in the distance. Almost all z-fighting was eliminated with a linear Z; I just wish I could have solved some of the extreme anomalies that were occurring. Most of the small anomalies that existed up close were ignorable considering how much better the stuff in the distance looked. Being able to scale how fast you lose depth precision without changing your near plane is certainly a smart feature to give different applications the control needed to perfect their scenes.

[This message has been edited by JelloFish (edited 06-06-2003).]

JelloFish
06-06-2003, 06:25 PM
Originally posted by dorbie:

You could also look into auxiliary buffers, but I haven't used them. I think you can write to one of these puppies in your fragment shader independent of the rest of the stuff you're doing. You'd just send the texture unit output there, then readback. Like I say I've never used this, so I could be talking out my ass on this particular point.


Yeah, aux buffers sound like the right way to do it. But I really have no idea how to output to one of those using register combiners; time for some research I guess.

JelloFish
06-06-2003, 06:27 PM
Originally posted by dorbie:
No, but if you could hold off on using destination alpha for the first pass you could use it then and clear after your readback.


Yeah, I guess it would be relatively cheap, compared to everything else, to draw only the objects that use the alpha component twice (once to output a linear depth in alpha, and again to do the correct RGBA pass). That might only be 50k polys.


[This message has been edited by JelloFish (edited 06-06-2003).]

dorbie
06-06-2003, 06:32 PM
You can still do some RGB in the first pass AND linear-z alpha at the same time using an additional texture unit; hopefully you can work things out that way.

jwatte
06-06-2003, 06:45 PM
I believe there are two gains in the current Z implementation:

1) Hierarchical Z approaches
2) Compressed Z approaches

I believe they are conceptually orthogonal (though probably can get extra efficiencies by being combined in clever ways).

Here's my mental model of hierarchical Z:

A coarse grid (say, on an 8x8 or 16x16 basis) stores the highest and lowest values found within a block, possibly using some lower precision like 16 bits (with appropriate rounding). At that point, Z testing can be done for many cases in a simple operation that throws away an entire block (64 or 256 pixels -- you'd probably even get decent gains at 4x4).
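
Roughly, in code, something like this (pure speculation on my part about what real hardware does):

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t zmin;   /* lowest z in the tile, rounded down */
    uint16_t zmax;   /* highest z in the tile, rounded up  */
} ZTile;

/* with GL_LESS: the whole tile of incoming fragments can be thrown away
   if the primitive's minimum z over the tile is no closer than anything
   already stored there */
bool tile_reject_less(const ZTile *t, uint16_t prim_zmin)
{
    return prim_zmin >= t->zmax;
}

/* conversely, if the primitive's maximum z is closer than everything in
   the tile, every fragment passes and the per-pixel depth reads can be skipped */
bool tile_accept_less(const ZTile *t, uint16_t prim_zmax)
{
    return prim_zmax < t->zmin;
}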

Here's my mental model of Z compression:

A block of Z values (say, 4x4 or 8x8) is compressed using some mechanism that could be lossless if Z is "well behaved". If lossless compression cannot be accomplished, then uncompressed Z is stored in memory. When the memory controller reads in the data, it decompresses on the fly if the block is compressed. You have to reserve memory for a full, uncompressed block for the entire framebuffer, because the compressibility of each block can change quickly. The win is that the memory controller needs to read much less data if the block is compressed, and thus you get a speed-up as long as actual transfer is your bottleneck.

Possible synergies: Use hierarchical Z values to drive the interpolation for compression, a la DXT5 compression. Use the hierarchical Z data to determine whether the block is compressed or not.

Another possible Z compression model would be to pick one value, and store some number of derivatives off this value, and then store per-pixel some offset from this implied surface, very similar to ADPCM for audio.
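
In code, the plane-plus-delta idea I'm picturing looks something like this (a toy sketch, not any real hardware format):

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t base;         /* z at the tile's top-left pixel        */
    int32_t  dzdx, dzdy;   /* per-pixel steps of the fitted plane   */
    int8_t   residual[4][4];
    bool     compressed;   /* false: read the raw 4x4 block instead */
} ZTile4x4;

/* try to describe a 4x4 tile as a plane plus small residuals; fall back to
   raw storage if any residual doesn't fit the budget (the full raw footprint
   stays reserved either way, so it's a bandwidth win, not a storage win) */
bool try_compress_tile(const uint32_t z[4][4], ZTile4x4 *out)
{
    out->base = z[0][0];
    out->dzdx = (int32_t)(z[0][1] - z[0][0]);
    out->dzdy = (int32_t)(z[1][0] - z[0][0]);
    for (int y = 0; y < 4; y++) {
        for (int x = 0; x < 4; x++) {
            int64_t predicted = (int64_t)out->base
                              + (int64_t)out->dzdx * x
                              + (int64_t)out->dzdy * y;
            int64_t r = (int64_t)z[y][x] - predicted;
            if (r < -128 || r > 127) {
                out->compressed = false;
                return false;
            }
            out->residual[y][x] = (int8_t)r;
        }
    }
    out->compressed = true;
    return true;
}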


I'm pretty sure that I _don't_ have all the details right here, but these models have, so far, served me well in predicting behavior, so I stick to them :-)

dorbie
06-06-2003, 10:54 PM
Jwatte, I think your mental model of 1 is pretty close, and I hope it's tied to some region-based rasterization that rejects blocks of fragments at some resolution; I think you said this. I think the real optimization as it relates to linear screen-space z would come from a linear subdivision and lerp of min & max depth for the coarse z regions on rasterized primitives.

The mental model of 2 is less clear. My mental model is corrupted by knowledge of what SGI called 'compressed' Z. There's also the whole issue of variable-size representation. "Compressed" sounds good to a software guy, but despite my layman's knowledge of hardware issues I've learned at least to be cautious about anything that implies variable-sized representations and possible reallocation. Oh well, I could waffle on more about guaranteeing lossless in a worst-case scenario, but why bother, z is never worst case. Old Chinese proverb say: when your best guess is "chocolate donut", it's time to stop feeling the elephant.

[This message has been edited by dorbie (edited 06-07-2003).]

jwatte
06-07-2003, 07:05 AM
Note that I didn't say variable-size allocation. My intuition is that that would be "exciting" to implement in hardware :-) What I'm envisioning is something where a block either is compressed, or isn't, but you reserve the full size for the block.

However, the Z buffer reader (or writer) circuitry can read only 1/4 or 1/8 of the "reserved" space for the block, in the case that the block is compressed. This is a SPEED gain, but NOT a storage gain, which I think is somewhat unintuitive for someone who has traditionally used "compression" to mean "saves bytes" :-)

Any hardware guys care to comment? I'm fishing for education here!

V-man
06-07-2003, 09:45 AM
What if your application doesn't render from front to back and just throws polygons randomly? Wouldn't the average case of this situation be similar to just storing the z buffer uncompressed?

I imagine that the hierarchical and even all other lossless compression schemes will take a hit when the buffer needs to be updated.

ATI has its HyperZ solution. I read in one NV doc something about a color and z compression unit being faster in such-and-such GPU. I guess everyone is doing it.

jwatte
06-08-2003, 08:58 AM
Even if you throw polygons in a random order, I think the idea is that there will be fairly decent-sized polygons (or areas) of the screen where a plane plus some delta (the "ADPCM" method) could fully represent the block. Each block is probably fairly small (4x4, 8x8, that kind of size). You still get a bandwidth win for the blocks that compress, and there's no change for the blocks that don't (such as along polygon edges, I'd imagine).

V-man
07-06-2003, 08:33 AM
I have uploaded my stuff at
http://snow.prohosting.com/vmelkon/zprecision.html

This thing uses 2 methods to deal with the problem.

1. using ARB_vp as we discussed here

2. my own personal trick --> glhMergedPerspectivef()

I'm wondering if #2 will work on everyone's cards.

dorbie
07-06-2003, 04:33 PM
The problem is that ortho z isn't perspective correct under linear 2D interpolation, so it won't work in hardware. This is a very similar idea to one discussed earlier in this thread, i.e. taking eye space z and passing it straight through: an ortho transform to screen z really just linearly remaps the eye z coordinate to 0.0-1.0 between the clip planes, so basically these are pretty much different shades of the same idea. To make this correct you need perspective-correct fragment interpolation, and cass pointed out that this makes things like fast coarse-z hardware implementations difficult.
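
To see what I mean about the interpolation, here's a toy two-vertex example (just a sketch, the numbers are made up) comparing a screen-space lerp of eye depth against the perspective-correct value at the midpoint of an edge:

#include <stdio.h>

int main(void)
{
    double z0 = 1.0, z1 = 10.0;   /* eye-space depths at the two ends (= clip w) */
    double t  = 0.5;              /* halfway along the edge in window space      */

    /* what linear 2D interpolation of eye z would give */
    double naive = (1.0 - t) * z0 + t * z1;

    /* perspective-correct: interpolate attribute/w and 1/w, then divide */
    double correct = ((1.0 - t) * (z0 / z0) + t * (z1 / z1))
                   / ((1.0 - t) / z0 + t / z1);

    printf("screen-space lerp: %.3f   perspective-correct: %.3f\n",
           naive, correct);       /* 5.500 vs roughly 1.818 */
    return 0;
}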

You need to test with an appropriate scenario (see my ASCII art above) or you risk looking OK in some cases without actually being correct.

My take on this was to do away with all this business of how much precision we have and take whatever comes out of the modelview and interpolate it, since any concept applied later couldn't undo the limitations of that transformation. I deliberately wanted to avoid any scaling & mapping because it loses precision. It was naive but still worth a thought.

The other issue is storage representation if you go for a linear mapping as you have. I was suggesting a float in my straw-man scheme, but if you use fixed point, IMHO that would have undesirable consequences. Non-linearity of depth precision is a good thing for perspective scenes; having it tied to the near and far planes can be a bad thing, especially when far/near is high, but that doesn't mean linear is desirable. So storage becomes important when you consider what you want to do with z in your scenario.

The debate & questions over what representation & precision etc. evaporates if you take the eyespace z from the modelview and simply store it as a float, but apparently it's just not practical.

V-man
07-06-2003, 08:18 PM
Originally posted by dorbie:
taking eye space z and passing it straight through

You can't do that because it will *break* GL. This is because a user is free to apply ANY transform he likes to the projection matrix.

Yes, I know he said there is a major penalty, but it's kind of weird. That would mean you would get worse performance just by switching a scene to an orthographic projection.

I think the method will work fine, just like it does for ortho projections. There should not be artifacts.

The w-buffer, I think, just stores z values (or -z values), and I have no idea if it's float or not. These ideas are not far off from each other, but since we don't have a w-buffer... we can do this instead.



The debate & questions over what representation & precision etc. evaporates if you take the eyespace z from the modelview and simply store it as a float, but apparently it's just not practical.

Let's just say that we can take the window z values mapped from near to far (not remapped 0.0 to 1.0) and store them as floats.

Why not do this instead, and store them as floats? 32 bpp floats or more.
I have never understood why someone would want to convert a float to an integer and store that instead of the float.

/*edit*/ damn quotes!

[This message has been edited by V-man (edited 07-06-2003).]