Target-FS output component count mismatch

Hi,

My question is whether GLSL allows you to use, e.g., a vec4 output in the fragment shader even though you are attaching a 1-, 2- or 3-component texture as the render target. In such a case I would expect the extra components of the vec4 to be discarded, but I don’t know whether this causes some validation error.

I want to use it this way because I want a fixed vec4 output in the fragment shader, no matter whether I’m attaching, e.g., an RGBA8, an RG16F or an RGB9_E5 format texture as the render target.
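For illustration, a minimal sketch of what I have in mind (the output name Color is just my own choice here):

#version 330 core
out vec4 Color; // fixed four-component output, regardless of the attachment format
void main()
{
    // Expectation: with an RG16F attachment only .rg is kept,
    // with RGB9_E5 only .rgb, and the extra components are dropped.
    Color = vec4(1.0, 0.5, 0.25, 1.0);
}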

gl_FragColor is declared as:

out vec4 gl_FragColor;

and it works with any (non-integer) render target, so I assume (by analogy) that a user-defined fragment shader output can also be a vec4 and must work with any (non-integer) render target.

If this weren’t the case, you would have to write four versions of the same shader with only the output type changed.

Sorry, I wasn’t precise enough. I meant the GL3+ core profile. There you declare the fragment shader output yourself, so I think it should be possible to declare it as, for example, a vec2 or a vec3. Does the same hold in that case?
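For example, something like this (a hypothetical two-component output meant for an RG-format target):

#version 330 core
out vec2 Color; // two components, e.g. to match an RG16F attachment
void main()
{
    Color = vec2(0.0, 1.0);
}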

Yes, I know you meant the core profile.
It would be a huge inconsistency if it weren’t true (but AFAIK the spec doesn’t describe this situation).

I always use:

out vec4 Color;

and it works on NV and AMD hardware with 1-, 2-, 3- and 4-component textures.

Thank you for the clarification. Unfortunately I haven’t tried this yet; to be honest, I was too lazy to try it out, and I usually don’t like to rely on tests anyway, as actual behavior may vary between vendors and driver versions.

AFAIK the spec doesn’t describe this situation

I noticed that as well, but maybe I’m missing something. Can anybody say whether either the GL or the GLSL spec addresses this?

It’s there; it’s just not in a single place.

Remember: after the fragment shader outputs a color, that color goes through a long sequence of per-fragment operations: the pixel ownership test, the scissor test, blending, logical operations, and so on. Only after all of that is the value written to the framebuffer.

If OpenGL could not handle size differences between the fragment output and the framebuffer, then all framebuffers in the fixed-function pipeline would have had to be RGBA.

If OpenGL could not handle size differences between the fragment output and the framebuffer, then all framebuffers in the fixed-function pipeline would have had to be RGBA.

I understand that, and it seems logical; I was rather suspecting a possible GLSL-specific restriction whereby the driver validates that the shader output matches the render target. Obviously, such matching would not only add some runtime overhead but, as you all said, it would also make shaders cumbersome to write.