Deferred Shading

Hello,

I’m experimenting a bit with deferred shading, and I would like to save a fragment’s world position in a texture map. One possibility is to save the x, y and z components as floats, but that is pretty memory-expensive.
I have seen in some papers that it is also possible to save only the fragment’s z-buffer value. But how can you recover the original world position from that in a fragment program?

Thanks in advance!

Something like gluUnProject() should do.
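For reference, the math behind gluUnProject is simple enough to do by hand. A minimal C sketch (the function name is just for illustration; `invVP` is assumed to already hold the inverse of projection × modelview, stored row-major):

```c
#include <math.h>
#include <assert.h>

/* Recover a world-space position from window x/y and a depth-buffer value,
 * the same way gluUnProject does.  invVP is assumed to be the inverse of
 * (projection * modelview), stored row-major. */
static void unproject(float winx, float winy, float winz,
                      const float invVP[16], const int viewport[4],
                      float out[3])
{
    /* window coordinates -> normalized device coordinates in [-1, 1] */
    float ndc[4] = {
        2.0f * (winx - viewport[0]) / viewport[2] - 1.0f,
        2.0f * (winy - viewport[1]) / viewport[3] - 1.0f,
        2.0f * winz - 1.0f,
        1.0f
    };

    /* transform by the inverse view-projection matrix */
    float v[4];
    for (int r = 0; r < 4; ++r)
        v[r] = invVP[r * 4 + 0] * ndc[0] + invVP[r * 4 + 1] * ndc[1]
             + invVP[r * 4 + 2] * ndc[2] + invVP[r * 4 + 3] * ndc[3];

    /* perspective divide back to world space */
    out[0] = v[0] / v[3];
    out[1] = v[1] / v[3];
    out[2] = v[2] / v[3];
}
```

In a fragment program you would do the same transform, with the z value sampled from your depth texture.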

Thanks Humus, I’ll look into this.

Another thing: how is it possible to pack data into a floating-point buffer using GLSL? I couldn’t find any predefined functions.

Packing 4 unsigned bytes into a float would be easy using bit operations, but those aren’t available in GLSL at the moment. Is there no other way to achieve this?

In GLSL, bit operations are reserved for future use.

On NVIDIA it’s possible to use the Cg pack/unpack functions through the EXT_Cg_shader extension.

PS: I think there is a method that involves multiplying by a set of numbers to achieve this. I forget how it went.

In one of Humus’ demos, I found a way to pack a float in [0, 1] into 4 unsigned bytes:

# Pack float (ARB fragment program)
PARAM packFactors = { 1.0, 256.0, 65536.0, 16777216.0 };

MUL	temp, value, packFactors;
FRC	packed, temp;

# Unpack float (the last three constants are 1/256, 1/65536 and 1/16777216)
PARAM extract = { 1.0, 0.00390625, 0.0000152587890625, 0.000000059604644775390625 };

DP4	value, packed, extract;

That’s pretty nice, but I still don’t know how to do it the other way round: pack 4 unsigned bytes into one float.

Thanks!

num = A | (B << 8) | (C << 16) | (D << 24);

Thanks zed!
But the problem is that these bit-shift operations are not supported in GLSL.
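Since a << n is just a * 2^n, the same expression can be written with multiplies, which GLSL does have. A sketch in C, using only float arithmetic the way a shader would have to (note the catch: as soon as the fourth byte is non-zero the sum exceeds 2^24, and a 32-bit float can no longer represent every value):

```c
#include <assert.h>

/* Multiply-based equivalent of A | (B<<8) | (C<<16) | (D<<24), using only
 * float arithmetic as a GLSL shader would have to.  Lossless only while
 * the result stays below 2^24 (i.e. D == 0), because a float has a 24-bit
 * significand. */
static float pack_bytes(int A, int B, int C, int D)
{
    return (float)A + (float)B * 256.0f
         + (float)C * 65536.0f + (float)D * 16777216.0f;
}
```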

  • 256, 65536, 16777216 (2^24)

Just wanted to add (I know it’s obvious, but) these are the inverses of what Humus used.

So these don’t really work all the time. Packing a float into four bytes this way assumes that the value is in the range [-1, 1]. Instead, it should probably encode/decode the exponent bits as well.

Packing the bytes into a float using multiplies is only going to use the 23 mantissa bits. And then you have to worry about the implicit leading bit. Losslessly packing 4 bytes into a float is considerably more involved: you need to bring the sign bit and the exponent bits into play.
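For comparison, on the CPU the lossless version is just a reinterpretation of the 32 bits; this is exactly what arithmetic-only shader code can’t do (illustrative names, not GLSL). One caveat: some of the resulting bit patterns are NaNs or denormals, which a GPU write may not preserve.

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Losslessly pack 4 bytes into a float by reinterpreting the bit pattern,
 * sign and exponent bits included.  Illustrative CPU-side code only. */
static float bytes_to_float_bits(uint32_t A, uint32_t B, uint32_t C, uint32_t D)
{
    uint32_t bits = A | (B << 8) | (C << 16) | (D << 24);
    float f;
    memcpy(&f, &bits, sizeof f);   /* defined-behaviour type pun */
    return f;
}

static void float_bits_to_bytes(float f, uint32_t out[4])
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    out[0] = bits & 0xFF;
    out[1] = (bits >> 8) & 0xFF;
    out[2] = (bits >> 16) & 0xFF;
    out[3] = (bits >> 24) & 0xFF;
}
```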

In other words: it probably isn’t worth it unless you can use the built-in functions, since they almost certainly do it better than you can with arithmetic. Doesn’t the NVIDIA glslang implementation expose the entire Cg standard library? I don’t think you need EXT_Cg_shader unless the shader code itself is written in Cg.

-Won

Maybe the RGBE format is an alternative.
http://www.graphics.cornell.edu/~bjw/rgbe.html

I don’t know about GLSL, but Cg supports frexp/ldexp. The link has sample code.
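For the curious, the core of the RGBE scheme from that page fits in a few lines. This is a simplified sketch of what Bruce Walter’s rgbe.c does (the real code rounds more carefully): three mantissa bytes share one exponent byte, obtained with frexp/ldexp.

```c
#include <math.h>
#include <assert.h>

/* Simplified RGBE encode/decode (shared-exponent Radiance format).
 * The largest of the three components determines the common exponent. */
static void float2rgbe(float r, float g, float b, unsigned char rgbe[4])
{
    float v = r;
    if (g > v) v = g;
    if (b > v) v = b;
    if (v < 1e-32f) {
        rgbe[0] = rgbe[1] = rgbe[2] = rgbe[3] = 0;
    } else {
        int e;
        float scale = frexpf(v, &e);   /* v = scale * 2^e, scale in [0.5, 1) */
        scale = scale * 256.0f / v;
        rgbe[0] = (unsigned char)(r * scale);
        rgbe[1] = (unsigned char)(g * scale);
        rgbe[2] = (unsigned char)(b * scale);
        rgbe[3] = (unsigned char)(e + 128);
    }
}

static void rgbe2float(const unsigned char rgbe[4], float out[3])
{
    if (rgbe[3] == 0) {
        out[0] = out[1] = out[2] = 0.0f;
        return;
    }
    float f = ldexpf(1.0f, (int)rgbe[3] - (128 + 8));  /* 2^(e - 8) */
    out[0] = rgbe[0] * f;
    out[1] = rgbe[1] * f;
    out[2] = rgbe[2] * f;
}
```

You trade per-channel precision for range: 8 bits of mantissa per channel, but a shared exponent covering a huge dynamic range in 32 bits total.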

Originally posted by Won:
So these don’t really work all the time. Packing a float into four bytes this way assumes that the value is in the range [-1, 1]. Instead, it should probably encode/decode the exponent bits as well.

Yes, it seemed to work only in the range [0.0, 1.0) when I tested it.

With 1.0, you get 0.0 with those numbers.
Not enough precision, maybe?