Reading from gl_FragColor?

I need to implement my own custom blend function, since there is no existing blend function that will do what I need.

(I am dealing with input and output normal vectors encoded as RGB pixel values.)

So, within my fragment shader, I need to be able to read the existing fragment’s RGB values, mix them with some input RGB values and then write the result into the fragment.

If memory serves, you are not allowed to read the .r, .g, .b or .a components of gl_FragColor.

Also, if memory serves, it happens to work anyway on Nvidia cards, but ATI cards just return 0.0 for the current values stored in gl_FragColor.

Am I correct in this?

If so, what other alternatives are there?

Make a copy of the src image (the current framebuffer contents) to a texture.

Then it is a matter of doing dst = “incoming blend srctex” in the fragment shader.
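
A minimal sketch of that idea, assuming the copy covers the whole viewport and an RGBA texture of the right size has already been allocated (the name dstCopyTex and the helper are illustrative, not from any particular codebase):

#include <GL/gl.h>

/* Copy the current framebuffer into dstCopyTex before drawing one object,
 * so that object's fragment shader can sample the existing "destination"
 * colours itself.  dstCopyTex and the sizes are illustrative. */
void copyDestinationToTexture(GLuint dstCopyTex, GLsizei width, GLsizei height)
{
    glBindTexture(GL_TEXTURE_2D, dstCopyTex);
    /* copy the region the object will cover (here: the whole viewport) */
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
}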

I was afraid you would suggest that.
It sounds like it would be too expensive.
The copy would have to happen for every object drawn.

Is having an FBO’s texture selected as an input texture as well as the rendering target illegal?

I suspect it is…

It’s not illegal, according to the spec it just produces undefined results.

With the pipelined nature of video cards, you cannot expect to read from and write to the same surface with both performance and correctness.
There have been discussions about possible “blend shaders” that would allow custom code to run at the blending stage, but I guess the hardware is not ready for this yet.

In the meantime you might use render-to-texture (if you have only a few dozen overlapping objects): render each object (sorted by distance or something) to a different FBO, then do the ‘blending’ all in one stage, sampling from as many FBOs as you can in a given pass.

Can you be more precise about the context for this custom blending?

The RGB values are dot3 normals, encoded into the 0.0 - 1.0 range:

R = (X / 2.0) + 0.5;
G = (Y / 2.0) + 0.5;
B = (Z / 2.0) + 0.5;

The blend function would need to decode both sets of input colors back into 3D vectors, add the vectors together into a single vector, and then encode that vector back into RGB values in the output fragment.
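
Roughly, in shader terms, it would look something like the sketch below, assuming the existing framebuffer contents have been copied into a texture first (as suggested above); the uniform names dstTex, srcNormals and screenSize are illustrative:

// Sketch of the desired "blend" done in the fragment shader itself,
// since gl_FragColor cannot be read.  Texture coordinates for the
// incoming normal map come from the vertex stage.
uniform sampler2D dstTex;      // copy of the existing framebuffer
uniform sampler2D srcNormals;  // incoming encoded normals
uniform vec2      screenSize;  // viewport size in pixels

void main()
{
    vec2 screenUV = gl_FragCoord.xy / screenSize;

    // decode both colours back into [-1, 1] vectors
    vec3 dstN = texture2D(dstTex, screenUV).rgb * 2.0 - 1.0;
    vec3 srcN = texture2D(srcNormals, gl_TexCoord[0].st).rgb * 2.0 - 1.0;

    // add the vectors and re-encode into the 0.0 - 1.0 range
    vec3 sum = dstN + srcN;                 // or normalize(dstN + srcN)
    gl_FragColor = vec4(sum * 0.5 + 0.5, 1.0);
}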

For something as simple as that, you can use an FP16 RGBA render target (FBO) and just use additive blending.
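
A sketch of that setup, assuming GL_EXT_framebuffer_object and GL_ARB_texture_float are available (names are illustrative, error checking omitted):

#include <GL/glew.h>

/* Create an FP16 colour attachment and turn on additive blending, so
 * signed, un-encoded normal components can simply be accumulated by
 * the normal blend stage. */
GLuint createFP16Target(GLsizei width, GLsizei height, GLuint *texOut)
{
    GLuint fbo, tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    /* enable when drawing into this target: dst = src + dst */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);

    *texOut = tex;
    return fbo;
}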

For really custom blending, maybe use 2 FBOs (whatever format), doing ping-pong computations and only copying the updated parts by drawing flat textured triangles (one of the FBOs serves as a texture to fill into the other FBO).
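
The ping-pong part might look roughly like this; fbo, tex and drawBlendPass are illustrative placeholders, not real API:

#include <GL/glew.h>

void drawBlendPass(void);  /* hypothetical: draws the flat textured geometry */

/* Two FBOs, each with its own colour texture; one pass reads the previous
 * result as a texture while the other FBO is the render target, then the
 * roles are swapped for the next pass. */
void pingPongPass(GLuint fbo[2], GLuint tex[2], int *current)
{
    int src = *current;   /* last completed result, used as a texture */
    int dst = 1 - src;    /* target for this pass */

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo[dst]);
    glBindTexture(GL_TEXTURE_2D, tex[src]);

    /* the shader combines tex[src] with the incoming data over the
     * updated region only */
    drawBlendPass();

    *current = dst;       /* the next pass reads what we just wrote */
}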

I have never played with those before. Do they store floating point numbers outside the 0.0 - 1.0 range?

How many video cards support them? (In FBOs)

Radeon 9550 and later,
GeForce 6x00 and later AFAIK (on GeForce 7x00 it’s definitely present).
It’s standard on Shader Model 4 cards.

Search for HDR rendering tutorials; 16-bit floats are used there, so they will show how to set up the FBO and render to it.

Blending on an FP16 render target is supported on GeForce 6xxx and above.
On the Radeon 9550 and anything in that generation, blending is not supported, but you can create an FP16 render target and render to it.
I don’t know which ATI cards support FP16 blending.

So, I wouldn’t be able to do additive blending on a Radeon 9550?

I can’t remember whether I implement additive blending using glBlendFunc() or not. I don’t have the code in front of me to refresh my memory.
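
For reference, plain additive blending through the fixed-function blend stage is usually just:

/* dst = src * 1 + dst * 1, i.e. plain additive blending */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);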

Is there a list of which cards support FP16 blending? Including Intel?
