Deferred Shading



LaBasX2
10-01-2004, 04:02 PM
Hello,

I'm experimenting a bit with deferred shading and I would like to save a fragment's world position in a texture map. One possibility is to save the x, y and z components as floats, but that is pretty memory-expensive.
I have seen in some papers that it is also possible to save only the fragment's z-buffer value. But how can you recover the original world position from it using fragment programs?

Thanks in advance!

Humus
10-01-2004, 05:40 PM
Something like gluUnproject() (http://pyopengl.sourceforge.net/documentation/manual/gluUnProject.3G.html) should do.

LaBasX2
10-03-2004, 05:29 AM
Thanks Humus, I'll look into this.

Another thing. How is it possible to pack data in a floating point buffer using GLSL? I couldn't find any predefined functions.

LaBasX2
10-05-2004, 03:52 AM
Packing 4 unsigned bytes in a float would be easy using bit operations, but those aren't available in GLSL at the moment. Is there no other way to achieve this?

V-man
10-05-2004, 06:24 PM
In GLSL, bit operations are reserved for future use.

On nvidia it's possible to use Cg extensions to pack/unpack (EXT_Cg_shader).

PS: I think there is a method that involves multiplying by a set of numbers to achieve this. Forgot how it went.

LaBasX2
10-06-2004, 07:18 AM
In one of Humus' demos, I found a way to pack a float in [0, 1] into 4 unsigned bytes:


// Pack float
PARAM packFactors = { 1.0, 256.0, 65536.0, 16777216.0 };

MUL float, float, packFactors;
FRC ub, float;

// Unpack float
PARAM extract = { 1.0, 0.00390625, 0.0000152587890625, 0.000000059604644775390625 };

DP4 float, ub, extract;

That's pretty nice, but I still don't know how to do it the other way round: pack 4 unsigned bytes into one float.

Thanks!

zed
10-06-2004, 12:08 PM
num = A | (B << 8) | (C << 16) | (D << 24);

LaBasX2
10-06-2004, 04:21 PM
Thanks zed!
But the problem is that these bit shifting operations are not supported in GLSL.

zed
10-06-2004, 06:35 PM
* 256, * 65536, * 16777216 (2^24)

just wanna add (I know it's obvious, but) these are the inverses of the factors Humus used

Won
10-06-2004, 07:13 PM
So these don't really work all the time. The packing of a float into four bytes assumes that the value is in the range [-1, 1]. Instead, it should probably try to encode/decode the exponent bits.

Packing the bytes into a float using multiplies is only going to use the 23 mantissa bits. And then you have to worry about the implicit leading bit. Losslessly packing 4 bytes into a float is considerably more involved -- you would need to bring the sign bit and exponent bits into play.

In other words: it probably isn't worth it unless you can use the built-in functions, since they almost certainly do it better than you can with arithmetic. Doesn't the NV glslang implementation expose the entire Cg standard library? I don't think you need EXT_Cg_shader unless the shader code itself is written in Cg.

-Won

roffe
10-07-2004, 01:08 AM
Maybe the RGBE format is an alternative.
http://www.graphics.cornell.edu/~bjw/rgbe.html

I don't know about GLSL, but Cg supports frexp/ldexp. The link has sample code.

V-man
10-07-2004, 09:21 AM
Originally posted by Won:
So these don't really work all the time. The packing of a float into four bytes assumes that the value is in the range [-1, 1]. Instead, it should probably try to encode/decode the exponent bits.
Yes, it seems to work only in the range [0.0, 1.0) when I tested it.

With 1.0, you will get 0.0 with those numbers.
Not enough precision maybe?