Multi-layered rendering using GL_TEXTURE_2D_ARRAY

Hello,

I am trying to render to different layers of the same FBO using a GL_TEXTURE_2D_ARRAY. I made the FBO in the following way, which I think is pretty straightforward:

	// FRAMEBUFFER
		glGenFramebuffers(1, &(this->dynamicRenderFBO));
		glBindFramebuffer(GL_FRAMEBUFFER, this->dynamicRenderFBO);
		glDrawBuffers(1, &(this->dynamicRenderFBO_Attachments));

	// TEXTURE ARRAY
		glGenTextures(1, &(this->dynamicRenderTextureID));
		glBindTexture(GL_TEXTURE_2D_ARRAY, this->dynamicRenderTextureID);
		glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA, _MAX_DYNAMIC_RESOLUTION_, _MAX_DYNAMIC_RESOLUTION_, _MAX_DYNAMIC_LAYERS_, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
			glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
			glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
			glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
			glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

	// ATTACH TEXTURE TO FBO'S OUTPUT
		glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, this->dynamicRenderTextureID, 0);
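
After attaching, I also check completeness (a minimal sanity-check sketch, just logging to stderr):

	// OPTIONAL SANITY CHECK: the layered attachment should report complete
	GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
	if (status != GL_FRAMEBUFFER_COMPLETE) {
		fprintf(stderr, "FBO incomplete: 0x%X\n", status);
	}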

However, assigning the proper layer index doesn’t work.

#version 450
#extension GL_NV_viewport_array2 : enable

uniform	int	outputIndex;

// LOCAL OBJECT (DISPLAY PLANE) SPACE 
layout (location = 0) in vec3 vertexPos;
layout (location = 1) in vec2 texCoord;

// PASSED VARIABLES
out vec2 tex_coord;

///////////////////////////////////////////////////////////////////////////////////////////////////////////
void main() {
	tex_coord = texCoord;

	gl_Layer = outputIndex;

	gl_Position = vec4(vertexPos[0], vertexPos[1], 0.1, 1.0);
}
#version 450

in vec2 tex_coord;

// TEXTURES
uniform sampler2D	dynamicLayerColorMap;

layout(location = 0) out vec4 sceneColorMap;

///////////////////////////////////////////////////////////////////////////////////////////////////////////
void main() {
	vec4 colorBuffer = texture(dynamicLayerColorMap, tex_coord);

	if (colorBuffer.a > 0.0) {
		sceneColorMap = vec4(colorBuffer.xyz, colorBuffer.a);
	} else {
		discard;
	}
}

This won't compile, and it gives an error I am not able to decode:

I’ve read a few articles on gl_Layer usage, but they all seem to target old OpenGL (like 1.4 or 1.5), while I use OpenGL 4.3+.

How do I get this to work?

Is there any way to assign the output layer index in the fragment shader?

gl_Layer can only be set in a geometry shader. It applies to a primitive, not a vertex.

There’s really no point in using layered rendering if the layer is going to be specified via a uniform. A uniform can only be changed between draw calls, so you may as well just change the output layer by binding that specific layer to the framebuffer. Layered rendering only makes sense if you’re going to be selecting the layer(s) dynamically in a geometry shader.
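
For example, attaching just one layer of the array as the color target could look like this (a sketch; `layerIndex` is a hypothetical variable, the texture name is taken from the code above):

	// Attach a single layer of the array texture as the color target,
	// then change layerIndex (and re-attach) between draw calls.
	glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
	                          this->dynamicRenderTextureID, 0, layerIndex);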

No.

Is there any way to render to a layered texture at all while avoiding changing either FBOs or FBO attachments?

I will answer my own question.

Yes, it is possible. The solution is to add a passthrough geometry shader whose job is simply to provide the gl_Layer value.

The three shaders now look like this:

#version 450

// LOCAL OBJECT (DISPLAY PLANE) SPACE 
layout (location = 0) in vec3 vertexPos;
layout (location = 1) in vec2 texCoord;

// PASSED VARIABLES
out vec2 tex_coord_vert;

///////////////////////////////////////////////////////////////////////////////////////////////////////////
void main() {
	tex_coord_vert = vec2(texCoord.x, 1.0 - texCoord.y);

	gl_Position = vec4(vertexPos[0], vertexPos[1], 0.1, 1.0);
}
#version 450

layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

uniform int outputIndex;

in	vec2 tex_coord_vert[];
out vec2 tex_coord_geom;

///////////////////////////////////////////////////////////////////////////////////////////////////////////
void main() {
	gl_Layer = outputIndex;

	for (int i = 0; i < gl_in.length(); i++) {
		tex_coord_geom = tex_coord_vert[i];
		gl_Position = gl_in[i].gl_Position;
		EmitVertex();
	}

	EndPrimitive();
}
#version 450

in vec2 tex_coord_geom;

// TEXTURES
uniform sampler2D	dynamicLayerColorMap;

layout(location = 0) out vec4 sceneColorMap;

///////////////////////////////////////////////////////////////////////////////////////////////////////////
void main() {
	vec4 colorBuffer = texture(dynamicLayerColorMap, tex_coord_geom);

	if (colorBuffer.a > 0.0) {
		sceneColorMap = vec4(colorBuffer.xyz, colorBuffer.a);
	} else {
		discard;
	}
}

The scene outputs now get rendered to their own layers within a dynamic rendering output atlas.

See also AMD_vertex_shader_layer (2012), ARB_shader_viewport_layer_array (2015), NV_viewport_array2 (2015).
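
With the first two of those, a geometry shader becomes unnecessary: gl_Layer is writable directly in the vertex shader. A minimal sketch, assuming GL_ARB_shader_viewport_layer_array is supported by your driver:

#version 450
#extension GL_ARB_shader_viewport_layer_array : require

uniform int outputIndex;

// LOCAL OBJECT (DISPLAY PLANE) SPACE
layout (location = 0) in vec3 vertexPos;
layout (location = 1) in vec2 texCoord;

out vec2 tex_coord_vert;

void main() {
	tex_coord_vert = vec2(texCoord.x, 1.0 - texCoord.y);
	gl_Layer = outputIndex;	// legal in the vertex shader with this extension
	gl_Position = vec4(vertexPos[0], vertexPos[1], 0.1, 1.0);
}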

[QUOTE=CaptainSnugglebottom;1292156]
#extension GL_NV_viewport_array2 : enable

This won't compile, and it gives an error I am not able to decode[/QUOTE]

Of course you’ll have to check for extensions before you use them.

And while you’re at it: NV_stereo_view_rendering.

TLDR:

[ul]
[li]NV_viewport_array2 - allows you to broadcast a primitive across multiple viewports and/or layers, without a geometry shader.[/li]
[li]NV_stereo_view_rendering - adds to that the ability to set different gl_Position.x values for each viewport (a fast stereo cheat versus computing completely different lighting+shading+position values for each viewport).[/li]
[/ul]
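
To illustrate the broadcast, a minimal NV_viewport_array2 vertex-shader sketch (assuming the extension is present; gl_ViewportMask is a bitmask, and each set bit sends the primitive to that viewport):

#version 450
#extension GL_NV_viewport_array2 : require

layout (location = 0) in vec3 vertexPos;

void main() {
	gl_ViewportMask[0] = 0x3;	// broadcast this primitive to viewports 0 and 1
	gl_Layer = 0;				// render into layer 0 of the layered FBO
	gl_Position = vec4(vertexPos, 1.0);
}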