how to crash a GLSL shader?



ugluk
09-25-2010, 02:18 AM
I've been missing an assert in GLSL for a long time. Is there a way to write an assert macro? I've had something like this in mind:


#define assert(x) (0 / int(x))

then I do, for example,


gl_FragColor = (something complicated) + vec4(assert(0));

This, however, only paints the fragments white.

EDIT:
I've tried with this:


#define assert(x) if(!bool(x)) while(true); else

assert(0)
gl_FragColor = ...

Now if assert fails everything freezes and the computer becomes inoperable :)

NeXEkho
09-25-2010, 01:50 PM
It shouldn't. On Windows 7 and XP here, nVidia's driver has some kind of failsafe: after ~5 seconds without a frame being rendered, it resets the driver and you lose your context(s).

The whole point of GLSL is that it doesn't do stuff like asserting; if statements are provided out of necessity in some situations, not because they're meant to be used heavily.

david_f_knight
09-25-2010, 03:03 PM
I can't find it in the spec right now, but it seems that I read in it somewhere that crashing (i.e., program termination) is not permitted in the case of runtime GL errors, but that what actually must occur in the case of runtime GL errors is generally unspecified. So, a white fragment resulting from your first assert example is a reasonable response.

In the case of your second assert example, you have it execute an infinite loop. Not surprising that everything will seem to freeze when you enter an infinite loop. There is no interrupt-driven timesharing operating system running on the GPU to handle escaping from infinite loops.

As far as your whole computer becoming unusable, there is probably some place in the GL driver (which runs on your CPU, not on your GPU) that is waiting for some signal from the GPU after you issue a draw command. Entering that infinite loop on the GPU will ensure that required signal never arrives. For such cases, the driver writers could place time limits on various places where they wait for hardware signals. They've probably never created a test case where they executed an infinite loop on the GPU. Their rationale would probably be that to implement such code will just slow the driver down all the time under all conditions and make it more complex just to handle a situation that should never occur. I can't say I'd argue against that rationale.

ugluk
09-25-2010, 11:04 PM
I am not criticizing the driver writers (it was a linux ATI driver, btw), I'd just like an assert.

Alfonse Reinheart
09-26-2010, 12:05 AM
I'd just like an assert.

And what exactly would it do? Which vertex/fragment would it stop on, and what would you expect to see on the screen when it did? Since the CPU will almost certainly be far, far away from the site of the call that caused the assert, where would you want a debugger to stop?

GLSL asserts are the kind of thing that sounds like a good idea, until you stop and think about how it would actually function.

ugluk
09-27-2010, 12:14 AM
It would stop shader execution. When it would happen does not matter. What I would see on the screen is irrelevant. It would tell me something, at some time, did not go as expected.

Alfonse Reinheart
09-27-2010, 01:32 AM
It would tell me something, at some time, did not go as expected.

Which you could tell by looking at the screen and seeing unexpected/weird results. You don't need an assert to know that something, somewhere went wrong.

Worst case, instead of asserting, you could just write white, black, or some other easily detectable nonsense result to the output.

ZbuffeR
09-27-2010, 06:11 AM
(I love pure 1,0,1 as debug color)

malexander
09-27-2010, 11:34 AM
I've used 1000,0,0 as a debug color in FP16 framebuffers. Sticks out like a sore thumb if exported to a HDR image format.
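
The debug-color trick from the last few posts is easy to wrap in a macro. A minimal sketch (hypothetical macro name, using the pure magenta suggested above as the flag color):


//bail out of the fragment shader with an unmistakable color when a check fails
#define DEBUG_CHECK(x) if(!bool(x)) { gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0); return; }

void main(void)
{
  DEBUG_CHECK(gl_TexCoord[0].x >= 0.0); //example check
  gl_FragColor = vec4(gl_TexCoord[0].xy, 0.0, 1.0);
}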

kRogue
09-27-2010, 11:50 AM
And what exactly would it do? Which vertex/fragment would it stop on, and what would you expect to see on the screen when it did? Since the CPU will almost certainly be far, far away from the site of the call that caused the assert, where would you want a debugger to stop?

Chances are what he would like is for the program to crash with a message saying which draw call did it.

Naturally, that is not going to happen, because the actual drawing and shader execution do not take place when one calls glDrawStuff... But there is a way to get a better handle on which draw call made a shader misbehave. It requires some trickery... and GL_NV_shader_buffer_store.

The basic idea is this: when an assert fails, write something into a buffer object. An example is below.

GLSL:


#ifdef DEBUG

uniform int *buffer;

void
do_assert(bool b, int line)
{
  if(!b)
    {
      atomicExchange(buffer, line);

      /*
        More complicated schemes are also possible, e.g. storing a
        list of all ASSERTs that failed: increment a counter with
        atomicAdd and then write debug info (line, shader type,
        vertex (for vertex shaders), window location (for fragment
        shaders), etc.) into another buffer object, i.e.:

        struct assert_failure_type
        {
          int line;
          int shader_type;
        };

        uniform int *number_assert_failures;
        uniform struct assert_failure_type *assert_failures;

        ...
        int location;
        location = atomicAdd(number_assert_failures, 1);
        assert_failures[location].line = line;
        assert_failures[location].shader_type = ...; //shader type
        ...

        The C/C++ code would also need to be changed accordingly.
      */
    }
}

#define ASSERT(X) do_assert(X, __LINE__)

#else

#define ASSERT(X)

#endif
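
For what it's worth, a hypothetical usage sketch of the ASSERT macro above in a fragment shader (texture and check are made up for illustration, not from the code above):


uniform sampler2D tex;

void main(void)
{
  vec4 c = texture2D(tex, gl_TexCoord[0].xy);

  //records __LINE__ in the buffer if alpha is ever out of range
  ASSERT(c.a >= 0.0 && c.a <= 1.0);

  gl_FragColor = c;
}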


and in your C/C++ code:




#ifdef DEBUG

void
set_gl_assert_source(const char *file, int line);

void
print_gl_assert_error(void);

GLuint
gl_assert_buffer_object(void);

GLuint64
gl_assert_buffer_object_gpu_address(void);

void
do_gl_assert_prepare(const char *file, int line)
{
  GLint glslprogram(0);
  glGetIntegerv(GL_CURRENT_PROGRAM, &glslprogram);

  set_gl_assert_source(file, line);

  //step one: set the one value stored in gl_assert_buffer_object to -1:
  int minus_one(-1);

  glMemoryBarrierNV(GL_SHADER_GLOBAL_ACCESS_BARRIER_BIT_NV);
  glNamedBufferSubDataEXT(gl_assert_buffer_object(), 0, sizeof(int), &minus_one);

  //step two: set the pointer in the GLSL program:
  if(glslprogram!=0)
    {
      glUniformui64NV(glGetUniformLocation(glslprogram, "buffer"),
                      gl_assert_buffer_object_gpu_address());
    }
}

void
check_gl_assert_error(void)
{
  int value;

  glMemoryBarrierNV(GL_SHADER_GLOBAL_ACCESS_BARRIER_BIT_NV);
  glGetNamedBufferSubDataEXT(gl_assert_buffer_object(), 0, sizeof(int), &value);

  if(value!=-1)
    {
      print_gl_assert_error();
    }
}


#define GL_ASSERT_PREPARE do_gl_assert_prepare(__FILE__, __LINE__)
#define GL_ASSERT_CHECK check_gl_assert_error()





and then whenever you do a draw call you surround it with GL_ASSERT_PREPARE and GL_ASSERT_CHECK... doing that means either doing #define magic on glDrawWhatever or wrapping the draw calls in something that does.

On another note, one can implement the same idea with GL_EXT_shader_image_load_store. That is still NVIDIA-only for now too, but the capability is part of DX11, so something like it, I would think, will appear eventually, either in GL 4.x (x>=2) or as an extension that both ATI and NVIDIA support.

Enjoy.

ugluk
09-29-2010, 12:20 PM
Whoa, brilliant, yeah I thought some kind of feedback would do it :) Too bad I have an ATI card :( Oh well, maybe I'll just use some GLSL debugger. Thanks for sharing the idea.

nickels
09-30-2010, 03:39 PM
I noticed crashing the shader has the added benefit of making the buggy Visual Studio 10 core about 50% of the time too :) A fun side effect! Not every time, of course, because that would make something about VS10 deterministic!

ugluk
10-01-2010, 01:23 AM
Did you use my assert macro or something custom and which card do you have?

nickels
10-04-2010, 01:43 PM
Did you use my assert macro or something custom and which card do you have?

Not sure you were asking me, but, just in case:

my shader was looping:
for (int i = 0; i < NumTex; ++i)

where NumTex was a uniform that I forgot to upload. Hence the crash.

Card: Nvidia GTX 260, Windows 7, VS 2010 !!
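
A cheap guard against that class of bug, sketched with an assumed compile-time bound MAX_TEX (not in the original shader): clamp the uniform so an unset or garbage value can never drive the trip count out of range.


const int MAX_TEX = 8; //assumed upper bound, pick to match the app

//integer clamp() needs GLSL 1.30+; older GLSL would need a float workaround
for (int i = 0; i < clamp(NumTex, 0, MAX_TEX); ++i)
{
  // ... per-texture work ...
}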