Performance Issues with Renderbuffers

Hello,

I am currently getting poor performance in my OpenGL program. I know that many of you will want me to quantify “poor”: my program renders a scene with 100 objects at roughly 40 FPS, while the same scene set up in Blender (using Blender Render) runs at over 200 FPS. I am unsure what render method Blender uses, but my program uses deferred rendering. I believe my over-use of renderbuffers is causing the issue; I narrowed this down because, depending on which (and how many) renderbuffers I disable, performance increases to around 140 FPS. Unfortunately, I am unsure how to pass the information to the lighting shader without renderbuffers. I was wondering if someone could take a look at my shaders and help me along.

I am currently combining other shaders with these, which eliminates the use of two renderbuffers.

material.vert


#version 330 core
layout (location = 0) in vec3 vertex;
layout (location = 3) in vec2 texcoord;
out vec3 frag_vertex;
out vec2 frag_texcoord;
uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
void main (void) {
	gl_Position = (P * V * M) * vec4(vertex, 1.0);
	frag_vertex = vertex;
	frag_texcoord = texcoord;
}

material.frag


#version 330 core
in vec3 frag_vertex;
in vec2 frag_texcoord;
layout (location = 0) out vec4 out_texture_diffuse;
layout (location = 1) out vec4 out_texture_normal;
layout (location = 2) out vec4 out_texture_specular;
layout (location = 3) out vec4 out_texture_emissive;
layout (location = 4) out vec4 out_properties;
uniform sampler2D texture_diffuse;
uniform sampler2D texture_normal;
uniform sampler2D texture_specular;
uniform sampler2D texture_emissive;
uniform vec3 diffuse_color;
uniform vec3 specular_color;
uniform float diffuse_intensity;
uniform float specular_intensity;
uniform float specular_hardness;
uniform float translucency;
uniform int has_diffuse_texture;
uniform int has_normal_texture;
uniform int has_specular_texture;
uniform int has_emissive_texture;
void main (void) {
	out_texture_diffuse = ((1 - has_diffuse_texture) * vec4(diffuse_color, 1.0)) + (has_diffuse_texture * texture(texture_diffuse, frag_texcoord));
	if (out_texture_diffuse.a < 0.1) discard;
	out_texture_normal = ((1 - has_normal_texture) * vec4(0.5, 0.5, 1.0, 1.0)) + (has_normal_texture * texture(texture_normal, frag_texcoord));
	out_texture_specular = ((1 - has_specular_texture) * vec4(0.0, 0.0, 0.0, 1.0)) + (has_specular_texture * texture(texture_specular, frag_texcoord));
	out_texture_emissive = ((1 - has_emissive_texture) * vec4(0.0, 0.0, 0.0, 1.0)) + (has_emissive_texture * texture(texture_emissive, frag_texcoord));
	out_texture_specular *= vec4(specular_color, 1.0);
	out_properties = vec4(diffuse_intensity, specular_intensity, specular_hardness, translucency);
}

Thank You

What do you use renderbuffers for? You should see them as write-only FBO attachments, for cases where you know you won’t sample the resulting image afterwards. It intrigues me that you use multiple renderbuffers; how many FBOs do you have? How does your rendering pipeline work?

Have you tried using textures instead of renderbuffers?
For the geometry pass, write everything into separate texture attachments,
then use those textures in the lighting pass.

Multiple render targets, or MRT:
one framebuffer object can have multiple renderbuffers or textures as “color attachments”, but only one depth/stencil attachment.
When using multiple color attachments, the fragment shader should also have multiple outputs:


layout (location = 0) out vec4 out_texture_diffuse;
layout (location = 1) out vec4 out_texture_normal;
layout (location = 2) out vec4 out_texture_specular;
layout (location = 3) out vec4 out_texture_emissive;
layout (location = 4) out vec4 out_properties;

With glDrawBuffers(num, attachments) you can control which color attachments the fragment shader writes to:


GLenum drawbuffers[] = {
	GL_COLOR_ATTACHMENT0, // out_texture_diffuse
	GL_COLOR_ATTACHMENT1, // out_texture_normal
	GL_COLOR_ATTACHMENT4, // out_texture_specular
};
glDrawBuffers(3, drawbuffers);

I meant that using renderbuffers discards some outputs of a rendering pass, outputs that still have to be computed in the fragment shader. Why do you need an attachment if you won’t use the data written to it? I know renderbuffers can be better for depth/stencil attachments (and maybe blending?) if you won’t use the data afterwards, but that seems inefficient in the case of a deferred renderer.

Hello, sorry for the late reply.

I wasn’t sure whether to use renderbuffers or textures, but I found some information (or, more accurately, misinformation) online telling me that renderbuffers were more efficient than textures. It’s hard to explain without giving you all of the source, but my renderer works in “stages.” All of the meshes get rendered in the “material” stage (I know I need to rename it in my code), then all of the lights get rendered in the “lighting” stage. In between each stage I have to manually blit the renderbuffers to textures. Initially, I had actually used textures (rather than renderbuffers) in my program. Do you think I should revert back to that? Also, since I am learning OpenGL, I would like to understand what renderbuffers discard.

Thanks!

[QUOTE=john_connor;1283203]Have you tried using textures instead of renderbuffers?
For the geometry pass, write everything into separate texture attachments,
then use those textures in the lighting pass.

Multiple render targets, or MRT:
one framebuffer object can have multiple renderbuffers or textures as “color attachments”, but only one depth/stencil attachment.
When using multiple color attachments, the fragment shader should also have multiple outputs:


layout (location = 0) out vec4 out_texture_diffuse;
layout (location = 1) out vec4 out_texture_normal;
layout (location = 2) out vec4 out_texture_specular;
layout (location = 3) out vec4 out_texture_emissive;
layout (location = 4) out vec4 out_properties;

With glDrawBuffers(num, attachments) you can control which color attachments the fragment shader writes to:


GLenum drawbuffers[] = {
	GL_COLOR_ATTACHMENT0, // out_texture_diffuse
	GL_COLOR_ATTACHMENT1, // out_texture_normal
	GL_COLOR_ATTACHMENT4, // out_texture_specular
};
glDrawBuffers(3, drawbuffers);

[/QUOTE]

Hello, sorry for the late reply.

I’m going to revert back to textures. Also, by separate texture attachments, do you mean something like this:


enum {
	passMaterial__begin  = 0,
	passMaterial_diffuse = 0,
	passMaterial_normal     ,
	passMaterial_specular   ,
	passMaterial_emissive   ,
	passMaterial_properties ,
	passMaterial__size      ,
};
typedef struct {
	// Width and height of textures
	float _w;
	float _h;
	// Framebuffer object
	GLuint _fbo;
	// Texture attachments
	GLuint _texture_diffuse;
	GLuint _texture_normal;
	GLuint _texture_specular;
	GLuint _texture_emissive;
	GLuint _texture_properties;
	// Depth render buffer
	GLuint _rb_depth;
} passMaterial_t;

// ... I bind attachments like such
GLenum attachments [] = {
	GL_COLOR_ATTACHMENT0 + passMaterial_diffuse   ,
	GL_COLOR_ATTACHMENT0 + passMaterial_normal    ,
	GL_COLOR_ATTACHMENT0 + passMaterial_specular  ,
	GL_COLOR_ATTACHMENT0 + passMaterial_emissive  ,
	GL_COLOR_ATTACHMENT0 + passMaterial_properties
};
glDrawBuffers(5, attachments);
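For completeness, the creation code for one of those attachments might look roughly like this. This is only a sketch, assuming GL_RGBA8 color targets, a GL_DEPTH_COMPONENT24 depth renderbuffer, and a hypothetical `passMaterial_t *pass` pointing at the struct above; repeat the texture block once per color attachment:

```c
/* Sketch: one RGBA8 texture attachment plus the shared depth renderbuffer. */
glGenFramebuffers(1, &pass->_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, pass->_fbo);

glGenTextures(1, &pass->_texture_diffuse);
glBindTexture(GL_TEXTURE_2D, pass->_texture_diffuse);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, (GLsizei)pass->_w, (GLsizei)pass->_h,
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + passMaterial_diffuse,
                       GL_TEXTURE_2D, pass->_texture_diffuse, 0);

/* The depth buffer stays a renderbuffer, since it is never sampled. */
glGenRenderbuffers(1, &pass->_rb_depth);
glBindRenderbuffer(GL_RENDERBUFFER, pass->_rb_depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24,
                      (GLsizei)pass->_w, (GLsizei)pass->_h);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, pass->_rb_depth);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle an incomplete framebuffer here */
}
```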