Some semi-advanced questions

Hi,
still adapting to OpenGL (3.2) after some years on the DX10 bandwagon. I hope you can help me shed some light on a bunch of questions that sit just a step beyond the “basic” stuff, I suppose.

1) Depth/Alpha test
Is it possible that, despite the removal of the fixed-function pipeline, depth and alpha testing are still managed by glEnable/glDisable switches? I am wondering because I can’t find any reference to depth or alpha testing in GLSL.

2) Multiple fragment and vertex shaders
I’ve seen that an arbitrary number of shaders can be attached to a program, so I wonder what that means exactly. Let’s say I need to render meshes in two different ways: opaque and alpha blended. Right now I’d stick with two programs: one with a vertex and fragment shader designed to render solid objects, and one with another vertex and fragment shader designed to render transparent objects. I would then switch between the two programs as needed. Is that right? If so, what’s the use of attaching N fragment and N vertex shaders to a single program? Below is roughly what I have in mind right now.
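
(This is only a sketch: solidProgram/transparentProgram and the two draw calls are placeholders for my own code.)

/* inside the render loop */
glUseProgram(solidProgram);                         /* opaque vertex + fragment shaders */
drawSolidMeshes();                                  /* placeholder for my mesh drawing */

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glUseProgram(transparentProgram);                   /* blended vertex + fragment shaders */
drawTransparentMeshes();                            /* placeholder */
glDisable(GL_BLEND);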

3) Opengl “costs” reference sheet
Something like that would be handy for avoiding performance pitfalls. Has anyone compiled one? I am wondering, for example, how expensive it is to enable/disable certain things, whether a 2D texture array with a single slice performs a lot worse than a classic 2D texture, and so on.

Thanks in advance for your help,

andrea

  1. The depth test has not changed. It is still controlled through glEnable(GL_DEPTH_TEST) and glDepthFunc.

The alpha test does not exist anymore. You can do the test yourself in the fragment shader and use the “discard” keyword to throw away fragments that don’t pass it.
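
For example, a minimal fragment shader that mimics the old alpha test (the sampler and the threshold uniform are just placeholders):

#version 150 core

uniform sampler2D diffuseMap;
uniform float alphaThreshold;    // plays the role of glAlphaFunc's reference value

in vec2 texcoord;
out vec4 fragColour;

void main()
{
  vec4 colour = texture(diffuseMap, texcoord);
  if (colour.a < alphaThreshold)
    discard;                     // fragment is thrown away, like the old alpha test did
  fragColour = colour;
}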

  2. I’ve never used that feature, but AFAIK you can use it to stitch a shader together from multiple sources. So, for example, one source contains the “main” function for your vertex shader, which calls “FooBar”, and then you attach another piece of code that actually contains the “FooBar” function, making the shader complete.

I am pretty sure hardly anyone uses this feature at all.

  3. I don’t know of any such sheet. I don’t think array textures are really any slower than normal textures, so your example shouldn’t be problematic, I think.

The most expensive calls, in my experience, are switching shaders, vertex buffers and textures. Also try to avoid redundant glUniform calls; those can easily get out of hand.
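
For example, a trivial way to avoid redundant updates is to cache the last value you uploaded for each uniform (just a sketch, the wrapper below is made up):

typedef struct {
  GLint location;
  GLint lastValue;
  int   valid;                    /* 0 until the first upload */
} CachedUniform1i;

static void setUniform1iCached(CachedUniform1i *u, GLint value)
{
  if (u->valid && u->lastValue == value)
    return;                       /* redundant glUniform call skipped */
  glUniform1i(u->location, value);
  u->lastValue = value;
  u->valid = 1;
}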

Other than that, of course all readbacks are potential performance-killers.
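
By readbacks I mean anything that pulls results back from the GPU to the CPU. A plain glReadPixels, for example, forces the driver to finish all pending work before it can copy the data (x and y here are just some framebuffer coordinates):

GLubyte pixel[4];
/* synchronous readback: the pipeline stalls until rendering has finished */
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);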

In the end, finding out how to get good performance is the most difficult part of learning any 3D API, but OpenGL is a bit special here, because it offers so many (deprecated, redundant, vendor-specific) ways of doing things.

Jan.

I currently use this, although I’m not sure yet whether it is the best approach or whether putting all functions in a single file (and thus a single shader) would be better.

What you have to do is:

  • only 1 main function is allowed per shader stage
  • you call functions defined in other shader objects from your main function, but they must be declared in the source that contains your main function, so that becomes something like:

// main function of the fragment shader
#version 330 core

uniform int shading_mode;
out vec4 outcolour;

// declare func1 and func2, which are defined in other sources
vec4 func1();
vec4 func2(int arg1, vec4 arg2);

void main(void)
{
  switch (shading_mode)
  {
    case 1:
      outcolour = func1();
      break;
    case 2:
    {
      int i = 3;
      vec4 v4 = vec4(1.0, 2.0, 3.0, 4.0);
      outcolour = func2(i, v4);
      break;
    }
    default:
      outcolour = vec4(1.0);
      break;
  }
}

edit:
Now, when you create your GLSL program, you first create shader objects from the sources of the main function, func1 and func2. Attach all of them to your program and it should link and work properly.
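
For completeness, the C side roughly looks like this (just a sketch, error checking omitted; the ...Src parameters stand for the source strings described above):

static GLuint compileShader(GLenum type, const char *src)
{
  GLuint shader = glCreateShader(type);
  glShaderSource(shader, 1, &src, NULL);
  glCompileShader(shader);                /* check GL_COMPILE_STATUS in real code */
  return shader;
}

GLuint buildProgram(const char *vertexSrc, const char *mainSrc,
                    const char *func1Src, const char *func2Src)
{
  GLuint program = glCreateProgram();
  glAttachShader(program, compileShader(GL_VERTEX_SHADER,   vertexSrc));
  glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, mainSrc));   /* the main() above */
  glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, func1Src));  /* defines func1 */
  glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, func2Src));  /* defines func2 */
  glLinkProgram(program);                 /* check GL_LINK_STATUS in real code */
  return program;
}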

You can combine any number of different shader objects into your program, which gives you a good level of granularity.

I use this feature extensively:

  1. A material can have its own implementation of shading models and of normal/tangent and color ‘getters’. This, for example, allows a particular material to use a specularity map instead of a constant specular factor without any renderer/user-space class being aware of it (see the sketch after this list).

  2. Particle system processors are created by combining a number of behaviors. Each behavior implements some of the functions defined by the particle system contract.
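
For example (only a sketch, the function and uniform names are made up, not from my real code): the shared fragment source declares a getter, and each material’s program links in whichever definition it needs.

// declared in the shared fragment source, defined by the material:
float getSpecularFactor(vec2 uv);

// source attached only for materials that use a specularity map:
uniform sampler2D specularMap;
float getSpecularFactor(vec2 uv)
{
  return texture(specularMap, uv).r;
}

// source attached only for materials that use a constant factor:
uniform float specularFactor;
float getSpecularFactor(vec2 uv)
{
  return specularFactor;
}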

I’d say not using this feature makes your OpenGL-based renderer obsolete.

Thanks everyone for your answers, I am really coming to love how OpenGL has evolved since the fixed-function pipeline era.

The complaint about that system (combining ‘object files’ to make a single program) was always that it offered no advantage over preprocessing your source code yourself in terms of link speed.

Apart from that, it’s a very flexible system to work with for the reasons given. You can calculate all the possible shared variables in an init() function (world transform, transformed position & normal, etc.) and store them in a global structure. If any of these variables aren’t used by the modules you dynamically bind to the program, the linker will throw away the calculations that produced their values. You lose no performance.
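
A rough sketch of what I mean (the names are made up):

// shared vertex-shader source
#version 150 core

uniform mat4 worldMatrix;
in vec3 position;
in vec3 normal;

struct SharedData {
  vec4 worldPosition;
  vec3 worldNormal;
};
SharedData g;                        // global structure holding the shared values

void init()
{
  g.worldPosition = worldMatrix * vec4(position, 1.0);
  g.worldNormal   = mat3(worldMatrix) * normal;
}

// defined by whichever module is bound to the program
vec4 computePosition();

void main()
{
  init();
  // if the bound module never reads g.worldNormal, the linker
  // throws that calculation away, as described above
  gl_Position = computePosition();
}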