
View Full Version : Shader/glDrawArrays crash only on nVidia/Win32



ggadwa
01-08-2013, 08:10 AM
I'm the author of an open source, cross-platform freeware 3D engine, dim3 (www.klinksoftware.com). It has two components: the engine and the editor. The editor is all fixed function and works everywhere. The engine is all shader code. It works on iOS (ES2), on OS X with Intel, nVidia, or ATI (AMD) GPUs, and on the PC with ATI (AMD) or Intel GPUs.

On PC/nVidia, it always crashes on the first draw made through a shader (a glDrawArrays call), with pretty much any shader, no matter how simple.

Sadly, I don't have this configuration myself, and sending debug builds to my users isn't getting me anywhere. Has anybody encountered this? Is there a way to get further debug information? I'm sure it's something simple the PC nVidia drivers are doing differently; maybe they require things to be set in a certain order?

Further notes: no built-in variables are used; all vertices, UVs, matrices, etc., are passed in via vertex arrays or uniforms. Everything uses VBOs. The shaders compile correctly, and all the locations have proper integers.

You can find the latest PC build at the url above. Should crash right away if you have a PC/nVidia setup.

Any ideas? Anything for me to try?

[>] Brian

glDan
01-08-2013, 12:35 PM
On nVidia, it always crashes, the first shader call (against a glDrawArrays), with pretty much any shader, no matter how simple. ... [>] Brian

What is the crash message ?

ggadwa
01-08-2013, 09:55 PM
It's an access violation.

As always, the minute I post this, I think I've found the problem, but will need to verify with my users, hopefully by tomorrow. It has to do with glEnableVertexAttribArray and glVertexAttribPointer.

I've got some enables that are leaking into shaders where no AttribPointer call is made -- because, as always, I was getting a bit too aggressive with the optimizations. But here's the interesting part: the IDs aren't hooked up to any attributes in the shader code. It works everywhere except PC/nVidia. The drivers must be doing some kind of pre-flight check, and that's causing the access violation.

I got away with this for a long time, until I ran into a user with that setup.

For instance, I might have A, B, and C all enabled, but only A and B have offsets set into the VBO, and only A and B are used in the shader, or referenced at all.

Is what the driver is doing right or wrong? It's certainly checking data it will never use, but then again, I shouldn't be enabling data without setting a pointer to it. I'll have more later when I know if this is the real reason.

If anything, it's an interesting difference in the drivers.

[>] Brian

ggadwa
01-10-2013, 08:50 AM
Yes, that's what it was.

So, for anybody else that searches and stumbles onto this:

On nVidia's PC drivers only (not OS X): if you enable a vertex attribute array that doesn't exist in the shader, it'll crash with an access violation when you attempt to draw with that shader. Other drivers ignore this, as it's really a no-op.

[>] Brian

Aleksandar
01-10-2013, 02:27 PM
Thanks for the "reference". A few years ago Alfonse said it was nonsense when I said all unused attributes have to be disabled to prevent an application crash. :whistle: I've been using NV hardware for years, and this behavior is quite natural to me.

Alfonse Reinheart
01-10-2013, 08:15 PM
Yes, and it's still in violation of the OpenGL specification. Nowhere does it allow such a thing to end with program termination; therefore, it should not.

Complain to NVIDIA about it, not to me.

arekkusu
01-10-2013, 08:35 PM
The GL spec says:
"These error semantics apply only to GL errors, not to system errors such as memory access errors."

If you pass a pointer to the GL (glVertexAttribPointer) and then ask to dereference the pointer (glEnableVertexAttribArray) and the pointer is invalid, what do you expect to happen?

If your app is written in a C-like language, I expect it to crash.

ggadwa
01-10-2013, 09:32 PM
The GL spec says:
"These error semantics apply only to GL errors, not to system errors such as memory access errors."

If you pass a pointer to the GL (glVertexAttribPointer) and then ask to dereference the pointer (glEnableVertexAttribArray) and the pointer is invalid, what do you expect to happen?

If your app is written in a C-like language, I expect it to crash.



Not to carry this on further than it needs to be, but that's like writing this code:

void call_me(char *str, char *str2)
{
    fprintf(stdout, "%s\n", str);
    /* str2 not referenced */
}

void start_here(void)
{
    char str[256] = "blech";
    char *str2;    /* deliberately left uninitialized */

    call_me(str, str2);
}

This won't cause a crash, even though str2 is pointing who knows where. nVidia's drivers are touching things that aren't referenced. Regardless, it's my problem, as I shouldn't have things enabled that aren't used, but I don't think what nVidia is doing is right either, because they're doing work that doesn't need to be done (it seems).

[>] Brian

Aleksandar
01-11-2013, 04:07 AM
Complain to NVIDIA about it, not to me.

I'm not complaining, just stating. NVIDIA has a lot of optimizations in its drivers. I don't know exactly what they do, but they probably check the state of each active array, and if an array is not defined, that could cause problems.
Programmers are very often unaware of the benefits provided for them. Sometimes that encourages ill programming strategies, but it brings better performance in most cases. Better performance is the predominant goal of NVIDIA's implementation.

kRogue
01-20-2013, 04:05 AM
If an attribute is left enabled with a source that is no longer valid, expect a crash even if the shader does not use that attribute. You see this ALL the freaking time in embedded land. The most natural advice at this point: use VAOs -- they exist basically for the purpose of compartmentalizing attribute source state -- OR get your tracking together and make sure that if an attribute is enabled, its source is still valid. As to why the NVIDIA driver crashes: again, speculation, but it likely sets up a DMA transfer for the attribute (for non-buffer-object code), since quite often the unit that pulls attribute data is separate from the shader; and as for buffer objects, I'd bet that underneath there is a 64-bit pointer into VRAM.