
Thread: Raspberry Pi OpenGL issue with Kivy screenmanager

  1. #11
    FrankGould (Junior Member)
    Join Date: Jan 2018
    Greetings Dark Photon and others watching this thread. I have spent many hours researching and testing environments to establish the right OS build to isolate this bug, which eventually consumes all graphics memory and crashes the Kivy app.

    Once I determined that all Raspberry Pi Linux builds exhibit this same problem, which did NOT occur on Oracle VM Mint Linux (no glGetError messages), I began adding debug messages in shader.pyx. These allowed me to isolate the actions causing a glGetError 0x502 (invalid operation), which I suspected caused the eventual 0x505 (out of memory) errors.

    What I found was a uniform named “t” being uploaded with a zero value (i.e. 0) immediately before the “opacity” uniform was uploaded. Below is what I found in the logs:

    [DEBUG ] [Shader ] uploading uniform t (loc=2, value=0)
    [DEBUG ] [Shader ] uploading uniform opacity (loc=1, value=1.0)
    glGetError 0x502
    [DEBUG ] [Shader ] -> (gl:1282) opacity

    That prompted me to change shader.pyx to ignore these invalid “t” uniform uploads, and the app stopped generating the glGetError 0x502 errors. However, that did not stop the memory leak: eventually (after about 10 minutes) the app crashed with 0x505 and then 0x506 (invalid framebuffer operation; the same FBO error appears in the Kivy log entries). I think this is also why shader.pyx reports 'Exception: Shader didnt link, check info log': there is no memory left to link.
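As a sketch of this debugging step (the function and callback names here are illustrative, not Kivy's actual shader.pyx internals): wrap each uniform upload, skip uploads to invalid locations, and poll the GL error immediately afterwards so an error code can be attributed to the specific uniform that triggered it.

```python
GL_NO_ERROR = 0
GL_INVALID_OPERATION = 0x0502  # the 0x502 seen in the log above

def upload_uniforms(uniforms, gl_upload, gl_get_error):
    """Upload (name, location, value) tuples; return uploads that raised a GL error.

    Uniforms with an invalid location (< 0) are skipped, mirroring the
    workaround of ignoring the bad "t" uploads."""
    failed = []
    for name, loc, value in uniforms:
        if loc < 0:            # inactive/unresolved uniform: skip the upload
            continue
        gl_upload(loc, value)
        err = gl_get_error()   # check immediately so the error maps to this upload
        if err != GL_NO_ERROR:
            failed.append((name, err))
    return failed
```

Polling glGetError after every upload is slow, so a wrapper like this belongs behind a debug flag rather than in the normal render path.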

    So now I’ve collected several run logs that show memory usage during the period when the 0x505 error messages are displaying on the console. Below are results from running 'vcdbg reloc' while these 0x505 messages were on the console (see the run log named 'vcdbg reloc - Fresh Boot18Mar-crashingwithGC.txt' for the full snapshot).

    9.8M free memory in 9 free block(s)

    Of the many entries (such as the two below, shown with parenthetical values removed), these two appear to be the leading memory-consumption candidates (as Dark Photon mentions above). The amount used varies throughout the log.

    [ 336] 0x375bf740: used 6.3M 'Texture blob'
    [ 840] 0x37c03760: used 1.5M ''
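A small sketch of how entries like the two above can be aggregated to rank the leak candidates. The line format is taken from the entries quoted here; treating the K/M suffixes as KiB/MiB is an assumption about vcdbg's human-readable units.

```python
import re

# Matches the "used <size><unit> '<label>'" tail of a vcdbg reloc entry.
ENTRY_RE = re.compile(r"used\s+([\d.]+)([KM])\s+'([^']*)'")

def rank_allocations(reloc_text):
    """Return (label, total_bytes) pairs from vcdbg reloc output, largest first."""
    totals = {}
    for m in ENTRY_RE.finditer(reloc_text):
        size = float(m.group(1)) * (1024 if m.group(2) == 'K' else 1024 ** 2)
        label = m.group(3) or '<unnamed>'   # the '' entries get a placeholder
        totals[label] = totals.get(label, 0.0) + size
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Running this over successive snapshots and diffing the totals per label shows which allocation type is actually growing.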

    Then, in one section, I found the following error messages.

    0x2c570da0: corrupt trailer (space -256 != 1572961)
    0x2c570da0: corrupt trailer (guards[0] = 0xffffff00)
    0x2c570da0: corrupt trailer (guards[1] = 0xffffff00)
    0x2c570da0: corrupt trailer (guards[2] = 0xffffff00)
    0x2c570da0: corrupt trailer (guards[3] = 0xffffff00)
    0x2c570da0: corrupt trailer (guards[4] = 0xffffff00)
    0x2c6f0e00: corrupt entry (space 0xffffff00)
    resynced at 0x2c6f0fe0 (skipped 480 bytes)
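To make the "corrupt trailer" messages concrete: the firmware allocator appears to place known guard words after each block, and a value like 0xffffff00 (low byte zeroed) suggests a write ran past the end of a block. A minimal sketch of that kind of check, using a placeholder fill pattern since the firmware's real guard value is not shown in the log:

```python
EXPECTED_GUARD = 0xABCDABCD  # hypothetical fill pattern, not the firmware's real value

def corrupt_guards(guards, expected=EXPECTED_GUARD):
    """Return indices of trailer guard words that no longer hold the fill pattern."""
    return [i for i, g in enumerate(guards) if g != expected]
```

All five guards being 0xffffff00 in the dump above is consistent with a buffer overrun by an earlier allocation, which is a stronger symptom than a plain leak.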

    I'm not sure what to do next, other than reading the GL runtime debug log to see if I can find what is going on in OpenGL and how to fix it in Kivy, or trying to figure out what screenmanager is doing when it issues the OpenGL commands (or whatever the uniform upload is receiving). My problem is not knowing how these graphics layers operate under Kivy, or which of them might be suspect.

    Below is a recap of other suggestions from forums:

    1. vcgencmd get_mem gpu always returned 512, the value I set in the RPi config.
    2. Some screenmanager transitions do not exhibit this same 0x502 error but eventually show glGetError 0x505 with low free memory.
    3. I have run these tests with different images, including only two low-resolution jpg files, but the same 0x505 results eventually occur.
    4. I was not able to free the GPU memory (verified using vcdbg reloc) using echo # > /proc/sys/vm/drop_caches.
    5. Got same results after changing RPi graphics to GL Driver (Full KMS) in raspi-config->Advanced->GL Driver.
    6. Got same results after installing gl4es, libgl1-mesa-dri, and xcompmgr.
    7. Generated a huge (>60K-line) realtime OpenGL debug log on the Kivy console that shows many internal GL errors. This was enabled by setting os.environ['KIVY_GL_DEBUG'] = '1' in the Kivy app.
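Item 7 can be sketched as follows: the environment variable must be set before Kivy is imported, and the resulting (very large) log is easier to work with once filtered down to just the error lines. The 'glGetError' substring match reflects the log lines quoted earlier in this thread.

```python
import os

# Must be set before `import kivy`, or the GL debug hooks are not installed.
os.environ['KIVY_GL_DEBUG'] = '1'

def gl_error_lines(log_text):
    """Pull only the glGetError lines out of a Kivy GL debug log."""
    return [line for line in log_text.splitlines() if 'glGetError' in line]
```

On a >60K-line log, filtering like this (or with grep) makes it practical to see which GL calls precede each error.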

    Things to try going forward:
    1. Finish setting up QEMU so others can duplicate the execution and maybe fix it.
    2. Find a way to: 1) free memory, 2) fix shader/screenmanager, 3) get dev support, 4) examine the log to find the module causing the memory leak.

    Any other suggestions will be greatly appreciated.
    Last edited by FrankGould; 03-18-2018 at 11:49 AM. Reason: Adding more details about GL log from Kivy

  2. #12
    FrankGould (Junior Member)
    Join Date: Jan 2018
    Greetings Dark Photon and others watching this thread. I have found a Kivy companion (dolang on #kivy) who is much more familiar with the graphics core in Kivy than I am. He suggested we further isolate the problem to determine which direction to take next. In Kivy, there is a context.pyx module that manages the graphics memory resources; we inserted the following lines of code there to display the texture, canvas, and frame buffer object (FBO) counts it had acquired.

    l_texture_count = get_context().texture_len()
    # canvas_len() and fbo_len() are assumed here by analogy with texture_len()
    l_canvas_count = get_context().canvas_len()
    l_fbo_count = get_context().fbo_len()
    print('l_texture_count, l_canvas_count, l_fbo_count = ' +
          str((l_texture_count, l_canvas_count, l_fbo_count)))

    What we learned from this exercise was that these three counts initialize at (2, 6, 0); then, as images are processed in Kivy, they max out at (16, 11, 2), as shown in the attached Rasp-Stretch-Kivy-Context-log-20Mar.txt file. Eventually, because of something outside Kivy, memory is consumed until glGetError 0x505 messages begin appearing, indicating no more memory.
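The reasoning in that paragraph can be sketched as a simple check: if the (texture, canvas, fbo) samples pulled from context.pyx plateau while GPU free memory keeps falling, the leak is likely below Kivy's resource tracking rather than inside it.

```python
def counts_plateaued(samples, tail=3):
    """True if the last `tail` (texture, canvas, fbo) samples are identical,
    i.e. Kivy's tracked resource counts have stopped growing."""
    return len(samples) >= tail and len(set(samples[-tail:])) == 1
```

With the counts above, the samples plateau at (16, 11, 2) even as vcdbg reports free memory falling, which points the suspicion at the driver side.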

    While these tests were running, we also checked memory using 'sudo vcdbg reloc' and found that the '' and 'Texture blob' allocations were not releasing their resources. Before running the Kivy test app, free memory was 427M; during the 0x505 messages, free memory was 9.8M and declining. This can be seen in the link from my post above on 03-18-2018 at 10:58 AM.
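A tiny sketch of the arithmetic behind that comparison, for turning the human-readable figures from successive 'vcdbg reloc' runs into a leaked-bytes number (treating the K/M/G suffixes as binary units is an assumption about vcdbg's formatting):

```python
def to_bytes(size):
    """Convert a vcdbg-style size string like '427M' or '9.8M' to bytes."""
    units = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3}
    return float(size[:-1]) * units[size[-1]]

def leaked(start, now):
    """Bytes of GPU memory that disappeared between two free-memory snapshots."""
    return to_bytes(start) - to_bytes(now)
```

Going from 427M free to 9.8M free means roughly 417M of GPU memory vanished over the run.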

    We searched through the Kivy code files and were not able to grep KHRN in any of them, but we learned from an online search that it stands for Khronos, the consortium that maintains OpenGL.

    What this appears to indicate is that the memory leak is in the OpenGL driver code. We wanted to report this as a bug for correction, or learn where we should test next to isolate the problem more specifically.

    If possible, is there someone at OpenGL who can help us find a solution to this memory leak? It would be greatly appreciated to learn what we should test or explore next.
    Attached Files
    Last edited by FrankGould; 03-20-2018 at 07:21 AM. Reason: Adding attachment that didn't take originally.

  3. #13
    Dark Photon (Senior Member, OpenGL Guru)
    Join Date: Oct 2004
    A suggestion: You probably should nail down which application is allocating these images via the RPi GPU driver. At least then you know whom to ask for more info or, if you establish that it's a bug, where to report it.

    On that note...

    Clone this GIT repo (source code for the ARM-side libraries for interfacing with the Raspberry Pi GPU):

    Code :
    git clone

    If you search it, you'll find that this contains plenty of references to those "KHRN_IMAGEs". For instance, see egl_khr_image.h, egl_server.h, khrn_image.h, khrn_int_image.h, khrn_int_image.c, etc. It appears to be the underlying representation used for EGL, VG, WFC, etc. images inside the userland system libraries.

    If you look at eglCreateImageKHR() in egl_khr_image_client.c, which is one place that operates on KHRN_IMAGE formats, you can see that there is some logging here:

    Code cpp:
       vcos_log_info("eglCreateImageKHR: dpy %p ctx %p target %x buf %p\n",
                                          dpy, ctx, target, buffer);

    If I were you, I'd search through the source code and figure out how to turn on that logging (see vcos_logging.h and vcos_log_set_level()). Even better would be to get it to log the calling app, so you could tell who was allocating what.
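Once that logging is on, a sketch like the following could walk the log back to the allocator. It parses lines matching the eglCreateImageKHR format string quoted above ("dpy %p ctx %p target %x buf %p") and counts image creations per (display, context) pair; the exact prefix vcos prepends to each log line is an assumption, so the regex matches only the message body.

```python
import re

# Body of the vcos_log_info() call shown above, with the %p/%x fields captured.
IMG_RE = re.compile(r"eglCreateImageKHR: dpy (\S+) ctx (\S+) target (\S+) buf (\S+)")

def count_image_creates(log_text):
    """Count eglCreateImageKHR calls per (display, context) pair in a log."""
    counts = {}
    for m in IMG_RE.finditer(log_text):
        key = (m.group(1), m.group(2))   # (dpy, ctx)
        counts[key] = counts.get(key, 0) + 1
    return counts
```

A context whose creation count keeps climbing without matching destroys is the prime suspect for the KHRN_IMAGE buildup.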

    Also, you might double-check your GPU memory dump and make sure there isn't anything in there that identifies the app, process, and/or thread that allocated or owns the images. That would help you walk this back to the instigating app/process as well.


  4. #14
    FrankGould (Junior Member)
    Join Date: Jan 2018
    THANK YOU Dark Photon for your help! You were right. It turned out to be a problem with a Kivy module, which has since been fixed by the developers. I really appreciate your help isolating, finding, and fixing this problem. You spent a valuable amount of time and effort, and it is greatly appreciated.

  5. #15
    Dark Photon (Senior Member, OpenGL Guru)
    Join Date: Oct 2004
    Glad you got it figured out!
