Thread: Shader/glDrawArrays crash only on nVidia/Win32

  1. #1
    Junior Member Newbie (Join Date: Jan 2013, Posts: 4)

    Shader/glDrawArrays crash only on nVidia/Win32

    I'm the author of an open-source, cross-platform freeware 3D engine, dim3 (www.klinksoftware.com). It has two components: the engine and the editor. The editor is all fixed-function and works everywhere. The engine is all shader code; it works on iOS (ES2), on OS X with Intel, nVidia, or ATI (AMD) GPUs, and on the PC with ATI (AMD) or Intel GPUs.

    On PC/nVidia, it always crashes on the first shader call (a glDrawArrays), with pretty much any shader, no matter how simple.

    Sadly, I don't have this configuration, and sending debug code to my users isn't getting me anywhere. Has anybody encountered this? Is there a way to get further debug information? I'm sure it's something simple the PC nVidia drivers do differently; maybe they require things to be set in a certain order?

    Further notes: no built-in variables are used; all vertex positions, UVs, matrices, etc., are passed in via vertex attribute arrays or uniforms. Everything uses VBOs. The shaders compile correctly, and all the locations come back as valid integers.

    You can find the latest PC build at the URL above. It should crash right away if you have a PC/nVidia setup.

    Any ideas? Anything for me to try?

    [>] Brian

  2. #2
    Junior Member Newbie (Join Date: May 2002, Posts: 25)
    Quote Originally Posted by ggadwa:
    I'm the author of an open-source, cross-platform freeware 3D engine, dim3 (www.klinksoftware.com). It has two components: the engine and the editor. The editor is all fixed-function and works everywhere. The engine is all shader code; it works on iOS (ES2), on OS X with Intel, nVidia, or ATI (AMD) GPUs, and on the PC with ATI (AMD) or Intel GPUs.

    On PC/nVidia, it always crashes on the first shader call (a glDrawArrays), with pretty much any shader, no matter how simple. ... [>] Brian
    What is the crash message?

  3. #3
    Junior Member Newbie (Join Date: Jan 2013, Posts: 4)
    It's an access violation.

    As always, the minute I post this, I think I've found the problem, but will need to verify with my users, hopefully by tomorrow. It has to do with glEnableVertexAttribArray and glVertexAttribPointer.

    I've got some attribute enables leaking into shaders where no AttribPointer call is made, because (as it always is) I was getting a bit too aggressive with the optimizations. But here's the interesting part: those IDs aren't hooked up to any attributes in the shader code. It works everywhere except PC/nVidia. The drivers must be doing some kind of pre-flight check, and that's causing the access violation.

    I got away with this for a long time, until I ran into a user with that setup.

    For instance, I might have A, B, and C all enabled, but only A & B have offsets into the VBO set, and only A & B are used in the shader or referenced at all.
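    Roughly, the leaky state looks something like this (the location variables, vbo, stride, and count are made up for illustration, not my actual engine code):

    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    glEnableVertexAttribArray(loc_a);
    glVertexAttribPointer(loc_a, 3, GL_FLOAT, GL_FALSE, stride, (void *)0);

    glEnableVertexAttribArray(loc_b);
    glVertexAttribPointer(loc_b, 2, GL_FLOAT, GL_FALSE, stride, (void *)12);

    glEnableVertexAttribArray(loc_c);      /* leaked enable: no pointer ever set, and C isn't in the shader */

    glDrawArrays(GL_TRIANGLES, 0, count);  /* access violation here on PC/nVidia, fine elsewhere */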

    Is what the driver is doing right or wrong? It's certainly checking data it will never use, but then again, I shouldn't be enabling data without setting a pointer to it. I'll have more later when I know if this is the real reason.

    If anything, it's an interesting difference in the drivers.

    [>] Brian

  4. #4
    Junior Member Newbie (Join Date: Jan 2013, Posts: 4)
    Yes, that's what it was.

    So, for anybody else who searches and stumbles onto this:

    For the nVidia PC drivers only (not OS X): if you enable a vertex attribute array that the shader doesn't use (and no pointer was set for it), it'll crash with an access violation when you attempt to draw with that shader. Other drivers ignore this, as it's really a no-op.
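    In code terms, the fix on my end is just to make the enables track the pointers exactly; anything a given shader doesn't source gets disabled before the draw (locations, stride, and count here are hypothetical):

    /* this shader only sources A and B */
    glEnableVertexAttribArray(loc_a);
    glVertexAttribPointer(loc_a, 3, GL_FLOAT, GL_FALSE, stride, (void *)0);
    glEnableVertexAttribArray(loc_b);
    glVertexAttribPointer(loc_b, 2, GL_FLOAT, GL_FALSE, stride, (void *)12);

    glDisableVertexAttribArray(loc_c);    /* left enabled by a previous draw; turn it off */

    glDrawArrays(GL_TRIANGLES, 0, count); /* no more access violation */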

    [>] Brian

  5. #5
    Aleksandar, Senior Member OpenGL Pro (Join Date: Jul 2009, Posts: 1,158)
    Thanks for the "reference". A few years ago, Alfonse said it was nonsense when I said that all unused attributes have to be disabled to prevent an application crash. I've been using NV hardware for years, and this behavior is quite natural to me.

  6. #6
    Senior Member OpenGL Guru (Join Date: May 2009, Posts: 4,948)
    Yes, and it's still in violation of the OpenGL specification. Nowhere does it allow such a thing to end with program termination; therefore, it should not.

    Complain to NVIDIA about it, not to me.

  7. #7
    arekkusu, Advanced Member Frequent Contributor (Join Date: Nov 2003, Posts: 783)
    The GL spec says:
    "These error semantics apply only to GL errors, not to system errors such as memory access errors."

    If you pass a pointer to the GL (glVertexAttribPointer) and then ask to dereference the pointer (glEnableVertexAttribArray) and the pointer is invalid, what do you expect to happen?

    If your app is written in a C-like language, I expect it to crash.


  8. #8
    Junior Member Newbie (Join Date: Jan 2013, Posts: 4)
    Quote Originally Posted by arekkusu:
    The GL spec says:
    "These error semantics apply only to GL errors, not to system errors such as memory access errors."

    If you pass a pointer to the GL (glVertexAttribPointer) and then ask to dereference the pointer (glEnableVertexAttribArray) and the pointer is invalid, what do you expect to happen?

    If your app is written in a C-like language, I expect it to crash.

    Not to carry this on further than it needs to be, but that's like writing this code:

    #include <stdio.h>

    void call_me(char *str, char *str2)
    {
        fprintf(stdout, "%s\n", str);
        /* str2 is never referenced */
    }

    void start_here(void)
    {
        char str[256] = "blech";
        char *str2;              /* deliberately left uninitialized */

        call_me(str, str2);
    }

    This won't cause a crash, even though str2 is pointing who knows where. nVidia's drivers are touching things that aren't referenced. Regardless, it's my problem, as I shouldn't have things enabled that aren't used, but I don't think what nVidia is doing is right either, because they are (it seems) doing things that don't need to be done.

    [>] Brian

  9. #9
    Aleksandar, Senior Member OpenGL Pro (Join Date: Jul 2009, Posts: 1,158)
    Quote Originally Posted by Alfonse Reinheart:
    Complain to NVIDIA about it, not to me.
    I'm not complaining, just stating. NVIDIA has a lot of optimizations in its drivers. I don't know what they do, but they probably check the state of each active array, and if an array is not defined it could cause problems.
    Programmers are very often unaware of the benefits provided for them. Sometimes that encourages sloppy programming strategies, but it brings better performance in most cases. Better performance is the predominant goal of NVIDIA's implementation.

  10. #10
    Advanced Member Frequent Contributor (Join Date: Apr 2009, Posts: 600)
    If an attribute is left enabled with a source that is no longer in scope, expect a crash even if the shader does not use that attribute. You get this ALL the freaking time in embedded land. The most natural advice at this point in time: use VAOs; they exist basically for the idea of compartmentalizing attribute source state. OR get your tracking together and make sure that if an attribute is enabled, its source is still valid. As to why the NVIDIA driver crashes: again, speculation, but it likely sets up a DMA transfer for each enabled attribute (for non-buffer-object code), since the unit that pulls attribute data is quite often a disjoint bit of hardware from the shader; and for buffer objects, I'd bet that underneath there is a 64-bit pointer into VRAM.
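    For what it's worth, a rough sketch of the VAO route (GL 3.x entry points; vertex_data, vertex_count, and the program object are assumed to already exist, and the attribute locations 0/1 are just examples):

    /* one-time setup: the VAO captures exactly these enables and pointers */
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_data), vertex_data, GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);   /* position: 3 floats */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void *)0);
    glEnableVertexAttribArray(1);   /* uv: 2 floats */
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void *)(3 * sizeof(float)));
    /* nothing else is enabled in this VAO, so stale enables can't leak in */

    glBindVertexArray(0);

    /* per draw: binding the VAO restores that exact attribute state */
    glBindVertexArray(vao);
    glUseProgram(program);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
    glBindVertexArray(0);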
