Shared blend channel for multiple color attachments

Consider a deferred renderer with an alpha-blended road being drawn on a terrain, prior to lighting.

There are two color attachments, one for diffuse color and one for the screen-space normal, and the road writes to both. The edges of the road are softly blended out, so I set the road’s blend mode to alpha blending. Now I have to write the alpha value out to both gl_FragData[0] and gl_FragData[1], even though it’s the same value. In fact, I have to write the alpha value out for every single color attachment, meaning that about 25% of my texture bandwidth is being wasted. If I could make every color attachment use the output of gl_FragData[0] for the alpha value, I could store other data in the alpha channels of gl_FragData[1, 2, 3, etc.].

Here’s a video of the usage:
http://vimeo.com/5700110

You mean something like this?

http://www.opengl.org/discussion_boards/…1694#Post251694

Yeah, I think so. I would like more control over blending, so that all color attachments can use a single alpha value for blending and I can store other data in their alpha channels.

Basically, I want to do this with the road shader:

gl_FragData[0] = diffuse;                    // attachment 0: diffuse color, alpha used for blending
gl_FragData[1] = vec4(normal.xyz,specular);  // attachment 1: normal, specular stored in alpha

And have both color attachment 0 and 1 use the “diffuse.a” value for blending. Then I can use the specular value later for lighting, and color attachment 1’s alpha channel isn’t wasted. I’m kind of surprised it wasn’t designed like this in the first place.

Here’s what I have to do right now:
gl_FragData[0] = diffuse;                      // diffuse color, alpha used for blending
gl_FragData[1] = vec4(normal.xyz,diffuse.a);   // alpha channel wasted on a copy of diffuse.a
gl_FragData[2] = vec4(specular,0,0,diffuse.a); // specular pushed out to a third attachment
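For context, a minimal sketch of the blend state behind this workaround (assuming a plain GL context without indexed blending; not from the original post): core GL has a single blend function shared by every draw buffer, and each buffer is blended with its own source alpha, which is why diffuse.a has to be replicated into every attachment.

glEnable(GL_BLEND);
/* One blend function for all MRT outputs; each attachment is blended
   with the alpha written to that same attachment. */
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);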

The method proposed in the thread is actually even more powerful, because it allows full control over the blend function without having to dedicate ANY output channel to it.

With deferred shading, more control over the blend function is extremely important, but so far there is no solution. I suggested this about 6 months ago, but I doubt we will see this or something similar implemented soon :frowning:

Jan.

Jan’s suggestion definitely sounds nice. My only guess as to why it was not really picked up (pure, unforgivable speculation) is that current hardware cannot do it. Also, I think that current hardware cannot even do ARB_draw_buffers_blend; I am not 100% sure that modern hardware can’t, but pretty sure. Something odd: in gl3.h, doing:

grep Blend gl3.h

yields:

/* BlendingFactorDest */
/* BlendingFactorSrc */
GLAPI void APIENTRY glBlendFunc (GLenum, GLenum);
GLAPI void APIENTRY glBlendColor (GLclampf, GLclampf, GLclampf, GLclampf);
GLAPI void APIENTRY glBlendEquation (GLenum);
GLAPI void APIENTRY glBlendFuncSeparate (GLenum, GLenum, GLenum, GLenum);
GLAPI void APIENTRY glBlendEquationSeparate (GLenum, GLenum);
GLAPI void APIENTRY glBlendEquationi (GLuint, GLenum);
GLAPI void APIENTRY glBlendEquationSeparatei (GLuint, GLenum, GLenum);
GLAPI void APIENTRY glBlendFunci (GLuint, GLenum, GLenum);
GLAPI void APIENTRY glBlendFuncSeparatei (GLuint, GLenum, GLenum, GLenum, GLenum);

Note the glSomethingi functions; the GL 3.2 spec does not have such functions. Those functions are in the ARB_draw_buffers_blend extension (with the ARB suffix)… kind of odd that the functions are already there in gl3.h, though.
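To make that concrete, a minimal sketch of how those indexed entry points could be used for the road pass above (assuming ARB_draw_buffers_blend is exposed and the function pointers have been loaded, e.g. through GLEW; not from the original posts). Buffer indices 0 and 1 correspond to gl_FragData[0] and gl_FragData[1]:

/* Per-buffer blend enables already exist via EXT_draw_buffers2 / GL 3.0. */
glEnablei(GL_BLEND, 0);   /* attachment 0: diffuse            */
glEnablei(GL_BLEND, 1);   /* attachment 1: normal + specular  */

/* Attachment 0: ordinary alpha blending. */
glBlendFuncSeparateiARB(0, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                           GL_ONE,       GL_ZERO);

/* Attachment 1: blend the RGB (normals) but write the alpha straight
   through, so a value such as specular stored there is not mangled by
   the blend. The GL_SRC_ALPHA factor is still taken from attachment 1's
   own output alpha, though, so this alone does not give the "use
   gl_FragData[0].a everywhere" behaviour asked for at the top. */
glBlendFuncSeparateiARB(1, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                           GL_ONE,       GL_ZERO);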

Also, I think that current hardware cannot even do ARB_draw_buffers_blend; I am not 100% sure that modern hardware can’t, but pretty sure.

ATI hardware can, which is why the extension exists. Per-draw-buffer blend functions are an explicit feature of DX10.1, so DX10.1 ATI hardware should have no problem with it.

It might be that ATI can but the nVidia GeForce 8 can’t; I remember that when DX 10.1 was released, nVidia did not really support it (but that was with regard to MSAA stuff, I think).

It might be that ATI can but the nVidia GeForce 8 can’t

I know. NVIDIA isn’t the world; they are not “current hardware”.

The fact that ATI already has a DX11 part shipping, and NVIDIA does not, suggests that NVIDIA is pretty far from “current hardware” :wink:

I know. NVIDIA isn’t the world; they are not “current hardware”.

The fact that ATI already has a DX11 part shipping, and NVIDIA does not, suggests that NVIDIA is pretty far from “current hardware”

My nVidious nature will now show through: they are current hardware. How many chaps out there have a new AMD 5xxx card? Next year will be when that generation is “current”. How many have an AMD 4xxx/3xxx or a GeForce 8xxx, 9xxx, or 2xx? Everyone who doesn’t have crap Intel, right? Except for the few of you with a brand new, very pricey AMD 5xxx card.

Even now, most games and CAD software don’t require D3D10/GL3 features; they might ship a version with some visual extras that uses them, but even almost 3 years after D3D10 was released to the masses, it is still not a requirement. I wonder how long it will be before D3D11-capable cards are required? Probably when Windows 10 (Ultimate Chair Throwing edition) ships.

I really don’t care what hardware could do it. If there is any way hardware can do it (and I assume “current” hardware can), then I would like to see an extension!

When there is a proper extension, THEN we can talk about making it core for GL 3.x.

So give me an extension! Even if it only runs on the latest hardware.

Jan.

they are current hardware. How many chaps out there have a new AMD 5xxx card?

The extension is supported by all 3xxx and 4xxx Radeon HD hardware. ATI has had DX10.1 parts on the market for a good year now. So no, NVIDIA isn’t current.

Yeah, the people at ATI must be very happy about supporting APIs no one cares about (the reason). NVIDIA has CUDA and PhysX, both widely used, plus 3D Vision; this is the gaming and high-performance-computing platform, so I don’t understand how ATI could be “current”. :wink:

BTW, if you actually want to know whether ATI R6xx-R7xx hardware supports what Jan suggested in the other thread, the GPU specs are available here:
http://www.x.org/docs/AMD/R6xx_3D_Registers.pdf (start at page 151)
http://www.x.org/docs/AMD/R6xx_R7xx_3D.pdf

My point on the “current” hardware bit is this:

GL3 is supposed to correspond to D3D10 generation cards.

Now “half” of such cards do not support per-draw-buffer blending (and I wish they did; it could be really useful for what I am doing).

As such, making it core in GL3 means that half of them cannot claim GL3 compliance. On the other hand, it makes an excellent feature to put into GL4, which is supposed to correspond to the generation coming out now: the ATI 5xxx and, later, the GeForce 3xx.

As such, making it core in GL3 means that half of them cannot claim GL3 compliance. On the other hand, it makes an excellent feature to put into GL4, which is supposed to correspond to the generation coming out now:

Agreed. But OpenGL is not just a version number. There are extensions. Which is why it’s nice to be able to have extensions like ARB_draw_buffers_blend for the hardware that can handle it.
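For completeness, a minimal sketch of runtime detection on a GL 3.x context (GLEW assumed as the loader; not from the original posts), so a renderer can pick either the per-buffer blend path or the diffuse.a replication fallback:

#include <string.h>
#include <GL/glew.h>

static int has_gl_extension(const char *name)
{
    GLint i, count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

/* Usage: has_gl_extension("GL_ARB_draw_buffers_blend") chooses between the
   indexed blend setup shown earlier and the alpha-replication workaround. */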