Layered rendering

Hi,

OpenGL 4.1’s new viewport features raised my interest in layered rendering.

I started my experiment with the nVidia OpenGL 4.1 drivers, but gl_ViewportIndex isn’t supported yet, so I went back to an OpenGL 4.0 implementation and still had issues; I then fell back to OpenGL 3.3 capabilities, but I still have some issues… :stuck_out_tongue:

On nVidia I get an invalid operation error at the draw call, which is then discarded. On AMD, with the OpenGL 3.3 implementation, color attachments 2 and 3 take the values of color attachment 1.

I suspect there is an error (or several) in my experiment.

Here is my framebuffer object setup:

	glGenFramebuffers(1, &FramebufferName);
	glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
	glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, TextureColorbufferName, 0, 0);
	glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, TextureColorbufferName, 0, 1);
	glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, TextureColorbufferName, 0, 2);
	glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3, TextureColorbufferName, 0, 3);
	GLenum DrawBuffers[4]= {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3};
	glDrawBuffers(4, DrawBuffers);

TextureColorbufferName is a 2D texture array, nothing specific I guess:

	glGenTextures(1, &TextureColorbufferName);

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D_ARRAY, TextureColorbufferName);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_BASE_LEVEL, 0);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAX_LEVEL, 1000);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_SWIZZLE_R, GL_RED);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_SWIZZLE_G, GL_GREEN);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_SWIZZLE_B, GL_BLUE);
	glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_SWIZZLE_A, GL_ALPHA);

	glTexImage3D(
		GL_TEXTURE_2D_ARRAY, 
		0, 
		GL_RGB, 
		GLsizei(FRAMEBUFFER_SIZE.x), 
		GLsizei(FRAMEBUFFER_SIZE.y), 
		GLsizei(4), //depth
		0,  
		GL_RGB, 
		GL_UNSIGNED_BYTE, 
		NULL);

And here is my test shader. It is supposed to fill each layer with a different color.

Vertex shader:

#version 330 core

precision highp int;

// Declare all the semantics
#define ATTR_POSITION	0
#define ATTR_COLOR		3
#define ATTR_TEXCOORD	4
#define FRAG_COLOR		0

layout(location = ATTR_POSITION) in vec2 Position;

void main()
{	
	gl_Position = vec4(Position, 0.0, 1.0);
}

Geometry shader:

#version 330 core

precision highp int;

// Declare all the semantics
#define ATTR_POSITION	0
#define ATTR_COLOR		3
#define ATTR_TEXCOORD	4
#define FRAG_COLOR		0

layout(triangles) in;

flat out int GeomInstance;

uniform mat4 MVP;

void main()
{	
	for(int Layer = 0; Layer < 4; ++Layer)
	{
		gl_Layer = Layer;

		for(int i = 0; i < gl_in.length(); ++i)
		{
			gl_Position = MVP * gl_in[i].gl_Position;
			GeomInstance = Layer;
			EmitVertex();
		}

		EndPrimitive();
	}
}

Fragment shader:

#version 330 core

precision highp int;

// Declare all the semantics
#define ATTR_POSITION	0
#define ATTR_COLOR		3
#define ATTR_TEXCOORD	4
#define FRAG_COLOR		0

const vec4 Color[4] = vec4[]
(
	vec4(1.0, 0.0, 0.0, 1.0),
	vec4(1.0, 1.0, 0.0, 1.0),
	vec4(0.0, 1.0, 0.0, 1.0),
	vec4(0.0, 0.0, 1.0, 1.0)
);

flat in int GeomInstance;

layout(location = FRAG_COLOR, index = 0) out vec4 FragColor;

void main()
{
	FragColor = Color[GeomInstance];
}

Any idea on what could be wrong?
Thanks!

>> glDrawBuffers(1, DrawBuffers);

Shouldn’t this be 4?

Good catch, but that was just the result of an experiment on the AMD implementation, so I guess the problem comes from somewhere else.

don’t you need something like this (in the geometry shader):

layout(triangle_strip, max_vertices = 3) out;

EDIT: Not sure if I’m misunderstanding layered rendering, but what happens if you just call FramebufferTexture() instead of FramebufferTextureLayer() and use one draw buffer?

Otherwise, can’t you just leave out the geometry shader and use 4 fragment output variables (or an array)?

The idea behind layered rendering is to redirect each triangle to a specific color buffer instead of just one.

Why are you arguing with ‘hound’? He is precisely right: your FBO initialization is for MRT, not for layered rendering.

Use FramebufferTexture once; you don’t need FramebufferTextureLayer 4 times.

And specifying the output format for the geometry shader is a good idea too.
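
Roughly like this (an untested sketch, reusing FramebufferName and TextureColorbufferName from the first post):

	// Untested sketch: attach the whole 2D array texture once (a layered
	// attachment) and draw into a single buffer; gl_Layer in the geometry
	// shader then selects the destination layer.
	glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
	glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, TextureColorbufferName, 0);
	GLenum DrawBuffer = GL_COLOR_ATTACHMENT0;
	glDrawBuffers(1, &DrawBuffer);
	// And the geometry shader needs its output declared, e.g.:
	//   layout(triangles) in;
	//   layout(triangle_strip, max_vertices = 12) out; // 4 layers * 3 vertices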

EDIT: oops, I left the browser open with a reply half written and came back later, so I didn’t see Dimitry’s post.

Isn’t it a matter of either:

  1. binding the whole texture array or 3d texture to one attachment point (this is a layered attachment, and will use one colour buffer).
  2. directing to the correct layer in the geometry shader with gl_Layer.
  3. using one fragment output in the fragment shader.

…OR…

  1. binding one layer of the array / 3d texture to each attachment point (non-layered attachments, multiple colour buffers).
  2. no geometry shader necessary (the attachments aren’t layered, we’re using multiple buffers instead).
  3. fragment shader is drawing to multiple buffers, so needs multiple output variables (see the sketch just after this list).

???
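
For the second route, here is a rough, untested sketch of the fragment shader side (FragSrcMRT and FragColor0–3 are made-up names); the FBO side is the per-layer glFramebufferTextureLayer setup from the first post, together with glDrawBuffers(4, DrawBuffers):

	// Untested sketch: with non-layered per-layer attachments and no geometry
	// shader, the fragment shader writes one output per draw buffer.
	const char* FragSrcMRT =
		"#version 330 core\n"
		"layout(location = 0) out vec4 FragColor0;\n"
		"layout(location = 1) out vec4 FragColor1;\n"
		"layout(location = 2) out vec4 FragColor2;\n"
		"layout(location = 3) out vec4 FragColor3;\n"
		"void main()\n"
		"{\n"
		"	FragColor0 = vec4(1.0, 0.0, 0.0, 1.0);\n"
		"	FragColor1 = vec4(1.0, 1.0, 0.0, 1.0);\n"
		"	FragColor2 = vec4(0.0, 1.0, 0.0, 1.0);\n"
		"	FragColor3 = vec4(0.0, 0.0, 1.0, 1.0);\n"
		"}\n";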

4.4.7 Layered Framebuffers
A framebuffer is considered to be layered if it is complete and all of its populated attachments are layered.

@Hound: Layered rendering requires a geometry shader:

The layer number for a fragment is zero if geometry shaders are disabled

So I guess it’s option 1

4.4.7 Layered Framebuffers
A framebuffer is considered to be layered if it is complete and all of its populated attachments are layered.

A layered attachment is NOT one attached with glFramebufferTextureLayer; it’s simply a 2D array/3D texture attached using glFramebufferTexture.
Paragraph 4.4.7 means (in particular!) that if you use a depth texture/renderbuffer, it has to be layered as well.
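
For example, a rough sketch (DepthTextureName is a made-up name for a depth 2D array texture with the same layer count):

	// Untested sketch: with a layered color attachment, the depth attachment
	// must be layered too, i.e. attached as a whole array with
	// glFramebufferTexture rather than glFramebufferTextureLayer.
	glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, TextureColorbufferName, 0);
	glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, DepthTextureName, 0);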

How do you expect to set the layer parameter with glFramebufferTexture?

Or do you mean that we don’t need to specify each layer explicitly, and that with glFramebufferTexture we can access every layer of a 2D array or 3D texture through gl_Layer?

Or do you mean that we don’t need to specify each layer explicitly, and that with glFramebufferTexture we can access every layer of a 2D array or 3D texture through gl_Layer?

Yes. This is what layered rendering is. It can also use cubemaps.
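
As a rough sketch (CubemapTextureName is a made-up name for a complete cubemap texture):

	// Untested sketch: attaching a whole cubemap with glFramebufferTexture
	// also gives a layered attachment, with 6 layers; gl_Layer 0-5 then
	// selects the face.
	glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, CubemapTextureName, 0);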

I finally managed to make it work:

  • glFramebufferTexture
  • Geometry shader output format declared
  • All layers through 1 attachment
    = Works

Thanks!

Could you provide some minimal setup/shader code, please?

I too want to try using layered rendering to speed up my cascaded shadow map rendering.

Thanks!

ahahah exactly what I had in mind! :wink:

I’ll put one on my website soon!

Super cool! :slight_smile: