How to crash a GLSL shader?

I’ve been missing an assert in GLSL for a long time. Is there a way to write an assert macro? I’ve had something like this in mind:


#define assert(x) (0 / int(x))

then I do, for example,


gl_FragColor = (something complicated) + vec4(assert(0));

This, however, only paints the fragments white.

EDIT:
I’ve tried with this:


#define assert(x) if(!bool(x)) while(true); else

assert(0)
gl_FragColor = ...

Now if assert fails everything freezes and the computer becomes inoperable :slight_smile:

It shouldn’t do that. On Windows 7 and XP here, NVIDIA’s driver has a kind of failsafe: after ~5 seconds without a frame being rendered, it restarts the driver and you lose your context(s).

The whole point of GLSL is that it doesn’t do stuff like asserting; if statements are provided out of necessity in some situations, not because they’re meant to be used heavily.

I can’t find it in the spec right now, but I recall reading somewhere in it that crashing (i.e., program termination) is not permitted in response to runtime GL errors, while what actually must occur in such cases is otherwise left unspecified. So a white fragment resulting from your first assert example is a reasonable response.

In the case of your second assert example, you have it execute an infinite loop. Not surprising that everything will seem to freeze when you enter an infinite loop. There is no interrupt-driven timesharing operating system running on the GPU to handle escaping from infinite loops.

As far as your whole computer becoming unusable: there is probably some place in the GL driver (which runs on your CPU, not on your GPU) that waits for a signal from the GPU after you issue a draw command, and entering that infinite loop on the GPU ensures the signal never arrives. The driver writers could place time limits on the places where they wait for hardware signals, but they’ve probably never created a test case that executes an infinite loop on the GPU. Their rationale would probably be that such code would slow the driver down all the time, under all conditions, and make it more complex, just to handle a situation that should never occur. I can’t say I’d argue against that rationale.

I am not criticizing the driver writers (it was a Linux ATI driver, btw); I’d just like an assert.

I’d just like an assert.

And what exactly would it do? Which vertex/fragment would it stop on, and what would you expect to see on the screen when it did? Since the CPU will almost certainly be far, far away from the site of the call that caused the assert, where would you want a debugger to stop?

GLSL asserts are the kind of thing that sound like a good idea until you stop and think about how they would actually function.

It would stop shader execution. When it would happen does not matter. What I would see on the screen is irrelevant. It would tell me something, at some time, did not go as expected.

It would tell me something, at some time, did not go as expected.

Which you could tell by looking at the screen and seeing unexpected/weird results. You don’t need an assert to know that something, somewhere went wrong.

Worst case, instead of asserting, you could just write white, black, or some other easily detectable nonsense result to the output.

(I love pure 1,0,1 as debug color)

I’ve used 1000,0,0 as a debug color in FP16 framebuffers. It sticks out like a sore thumb if exported to an HDR image format.
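For example, a minimal sketch of that debug-color style of assert (ASSERT_COLOR is a name made up for the example):


// hypothetical poor-man's assert: on failure, write screaming magenta and bail out
#define ASSERT_COLOR(x) if(!bool(x)) { gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0); return; }

void main(void)
{
  ASSERT_COLOR(gl_FragCoord.x >= 0.0);
  gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}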

Chances are what he would like is for the program to crash with a message saying which draw call did it.

Naturally, though that is not going to happen, because the actual drawing and shader execution do not happen when one calls glDrawStuff… But there is a way to get a better handle on which draw call made a shader misbehave; it requires some trickery… and GL_NV_shader_buffer_store.

The basic idea is this: when an assert fails, write something into a buffer object. An example is below.

GLSL:


#ifdef DEBUG

#extension GL_NV_shader_buffer_load : enable
#extension GL_NV_shader_buffer_store : enable

// bindless pointer to a one-int buffer object; set from the C/C++ side
uniform int *buffer;

void
do_assert(bool b, int line)
{
  if(!b)
  {
    // record the line number of the failed assert
    atomicExchange(buffer, line);

    /*
    More complicated schemes are also possible, e.g. storing a list
    of all ASSERTs that failed: increment a counter with atomicAdd,
    then write debug info (line, shader type, vertex for vertex
    shaders, window location for fragment shaders, etc.) into
    another buffer object, i.e.:

     struct assert_failure_type
     {
        int line;
        int shader_type;
     };

     uniform int *number_assert_failures;
     uniform assert_failure_type *assert_failures;

     .
     .
     int location;
     location = atomicAdd(number_assert_failures, 1);
     assert_failures[location].line = line;
     assert_failures[location].shader_type = ...; //shader type
     .
     .

    The C/C++ code would also need to be changed accordingly.
    */
  }
}

#define ASSERT(X) do_assert(X, __LINE__)

#else

#define ASSERT(X)

#endif
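
Used inside a shader it might look like this (just a sketch; DEBUG has to be defined when the shader source is compiled):


varying vec2 texcoord;

void main(void)
{
  // fires if the interpolated coordinate ever exceeds 1.0
  ASSERT(all(lessThanEqual(texcoord, vec2(1.0))));
  gl_FragColor = vec4(texcoord, 0.0, 1.0);
}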

and in your C/C++ code:



#ifdef DEBUG

void
set_gl_assert_source(const char *file, int line);

void
print_gl_assert_error(void);

GLuint
gl_assert_buffer_object(void);

GLuint64
gl_assert_buffer_object_gpu_address(void);

void
do_gl_assert_prepare(const char *file, int line)
{
  GLint glslprogram(0);
  glGetIntegerv(GL_CURRENT_PROGRAM, &glslprogram);

  //remember the call site so print_gl_assert_error() can report it
  set_gl_assert_source(file, line);

  //step one: set the one value stored in gl_assert_buffer_object to -1:
  int minus_one(-1);

  glMemoryBarrierNV(GL_SHADER_GLOBAL_ACCESS_BARRIER_BIT_NV);
  glNamedBufferSubDataEXT(gl_assert_buffer_object(), 0, sizeof(int), &minus_one);

  //step two: set the pointer in the GLSL program:
  if(glslprogram!=0)
  {
    glUniformui64NV(glGetUniformLocation(glslprogram, "buffer"),
                    gl_assert_buffer_object_gpu_address());
  }
}

void
check_gl_assert_error(void)
{
  int value;

  glMemoryBarrierNV(GL_SHADER_GLOBAL_ACCESS_BARRIER_BIT_NV);
  glGetNamedBufferSubDataEXT(gl_assert_buffer_object(), 0, sizeof(int), &value);

  //anything other than -1 means some shader invocation wrote a line number
  if(value!=-1)
  {
     print_gl_assert_error();
  }
}


#define GL_ASSERT_PREPARE do_gl_assert_prepare(__FILE__, __LINE__)
#define GL_ASSERT_CHECK check_gl_assert_error()



and then whenever you do a draw call, you surround it with GL_ASSERT_PREPARE and GL_ASSERT_CHECK… doing that means either doing #define magic on glDrawWhatever or wrapping the draw calls in something that does it, as sketched below.
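For example, a minimal sketch of the wrapping approach (DRAW_ARRAYS_CHECKED is a name invented for the example):


//hypothetical wrapper; a release build could define it to a plain glDrawArrays
#define DRAW_ARRAYS_CHECKED(mode, first, count) \
  do {                                          \
    GL_ASSERT_PREPARE;                          \
    glDrawArrays((mode), (first), (count));     \
    GL_ASSERT_CHECK;                            \
  } while(0)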

On another note, one can do the same idea as above with GL_EXT_shader_image_load_store. That is still NVIDIA only, but the capability is part of DX11, so I would think something like it will eventually appear either in GL 4.x (x>=2) or as an extension that both ATI and NVIDIA implement.
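A rough sketch of the same assert written against image stores might look like this (untested; the size1x32 layout qualifier is the EXT_shader_image_load_store flavor, and assert_buffer is a name invented for the example, backed by a one-int buffer texture bound from the C/C++ side):


#extension GL_EXT_shader_image_load_store : enable

layout(size1x32) uniform iimageBuffer assert_buffer;

void
do_assert(bool b, int line)
{
  if(!b)
  {
    //record the line number of the failed assert at texel 0
    imageAtomicExchange(assert_buffer, 0, line);
  }
}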

Enjoy.

Whoa, brilliant, yeah, I thought some kind of feedback would do it :slight_smile: Too bad I have an ATI card :frowning: Oh well, maybe I’ll just use some GLSL debugger. Thanks for sharing the idea.

I noticed crashing the shader has the added benefit of making the buggy Visual Studio 2010 crash about 50% of the time too :slight_smile: A fun side effect! Not every time, of course, because that would make something about VS2010 deterministic!

Did you use my assert macro or something custom, and which card do you have?

Not sure you were asking me, but, just in case:

my shader was looping:
for (int i = 0; i < NumTex; ++i)

where NumTex was a uniform that I forgot to upload. Hence the crash.

Card: Nvidia GTX 260, Windows 7, VS 2010 !!
