
Thread: Frame Rate is painfully slow

  1. #1
    Junior Member Newbie
    Join Date
    Jun 2018
    Posts
    4

    Frame Rate is painfully slow

    I recently decided to start using OpenGL to build myself a game engine, and it was going pretty well until I started attempting performance optimizations.
    At first I was using immediate mode, but I quit using it right away and went straight to VBOs. Even after switching to VBOs I still experienced tons of lag,
    about 4-10 FPS. I am using C# with SharpGL as my access to OpenGL.
    My test game is just set up to render chunks of cubes so that I could test performance.
    Here are some factors that might help.

    Triangles being rendered: 76800
    VBO count: 1 (Static)
    Times VBO is updated: 1
    Draw Calls: 1 (glDrawArrays)
    FBOs: Yes
    Shaders: None
    Textures: None
    Finish/Flush calls: None
    Culling: Enabled, CCW, Back

    I'm not sure what is causing the frame rate to be so slow; I've been researching performance optimizations for weeks but have found nothing that fixes the problem.
    I'm also aware that the low frame rate could be entirely the fault of C# and/or SharpGL, and that C++ is generally faster than C#.

    If you need me to post any code I can do that too.
    Last edited by ShadowDev; 06-24-2018 at 04:21 PM.

  2. #2
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,421
    Quote Originally Posted by ShadowDev View Post
    Times VBO is updated: 1
    Is this once per "run" or once per "frame"?

    Your perf makes it sound like you've flipped down into a software rendering path.
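    A rough sanity check on the numbers from the first post backs that up (back-of-envelope arithmetic only; the FPS range is from the thread):

    ```cpp
    #include <cstdint>

    // Back-of-envelope throughput at the reported frame rates.
    constexpr std::uint64_t trianglesPerSecond(std::uint64_t triangles,
                                               std::uint64_t fps) {
        return triangles * fps;
    }
    // 76,800 triangles at 4 FPS  ->  307,200 triangles/s
    // 76,800 triangles at 10 FPS ->  768,000 triangles/s
    // A hardware path drawing a single static VBO sustains hundreds of
    // millions of triangles per second, so rates this low suggest either
    // a software rasterizer or a heavy CPU-side cost paid every frame.
    ```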

    If you need me to post any code I can do that too.
    That would help. Please distinguish between: 1) what's called once on startup, versus 2) what's called every frame (including SwapBuffers). With #1, you might include your context creation code, along with the output of glGetString(GL_RENDERER) and glGetString(GL_VERSION).

  3. #3
    Junior Member Newbie
    Join Date
    Jun 2018
    Posts
    4
    Thanks for the quick response!

    I technically update the VBO when the float array I use to store vertex positions changes.

    I do this (as well as drawing) on a timer that ticks every 1 ms (changing the tick interval has not improved performance). I'm still pretty new to OpenGL, so I'm not completely familiar with what SwapBuffers actually does, but I still call it.

    *Note: GL11 is just a class I made because I used to use LWJGL a bit

    The following code is run once every frame

    First I get the models that will be rendered and add their data to the float array

    Code :
        // Set the current models that should be rendered
        pinnedModals = data;
        // Add vertex data to the float array inside an object called vertexBuffer
        formatModalData(data);

    Next I set buffer data if I have to and I make a draw call

    Code :
        // CURRENT_BUFFER_DATA stores the vertex data used last frame;
        // compare against it to decide whether the buffer should update
        if (!compareArrays(vertexBuffer.buffer(), GL11.CURRENT_BUFFER_DATA))
        {
            // Set buffer data
            SetData(0, vertexBuffer, false, 3, GL11.GL_STATIC_DRAW);
            Debug.WriteLine("Updating buffer with " + vertexBuffer.getSize() + " vertices");
        }

        gl.DrawArrays(GL11.GL_TRIANGLES, 0, 36 * pinnedModals.Length);
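    As an aside for later readers: comparing the full vertex array against last frame's copy is itself an O(n) scan every frame. A cheaper pattern is a dirty flag set at the moment the data is mutated; a minimal C++ sketch (the `VertexBuffer` class and its methods are invented for illustration, not SharpGL's API):

    ```cpp
    #include <vector>
    #include <cstddef>

    // Hypothetical buffer wrapper: instead of diffing the whole array
    // each frame, mark it dirty at the moment it is mutated.
    class VertexBuffer {
    public:
        void set(std::size_t i, float v) {
            if (data_[i] != v) { data_[i] = v; dirty_ = true; }
        }
        void resize(std::size_t n) { data_.resize(n); dirty_ = true; }

        // Called once per frame: returns true only if an upload is
        // needed, then clears the flag. The GL upload (e.g.
        // glBufferSubData) would go where the caller sees `true`.
        bool consumeDirty() {
            bool was = dirty_;
            dirty_ = false;
            return was;
        }

        const std::vector<float>& data() const { return data_; }

    private:
        std::vector<float> data_;
        bool dirty_ = false;
    };
    ```

    With this shape, the per-frame cost of "did anything change?" drops from a full array comparison to a single boolean read.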

    Next I make a call to Blit (a function in sharpgl) which does this

    Code :
    if (deviceContextHandle != IntPtr.Zero)
                {
                    //  Set the read buffer.
                    gl.ReadBuffer(OpenGL.GL_COLOR_ATTACHMENT0_EXT);
     
    			    //	Read the pixels into the DIB section.
    			    gl.ReadPixels(0, 0, Width, Height, OpenGL.GL_BGRA, 
                        OpenGL.GL_UNSIGNED_BYTE, dibSection.Bits);
     
    			    //	Blit the DC (containing the DIB section) to the target DC.
    			    Win32.BitBlt(hdc, 0, 0, Width, Height,
                        dibSectionDeviceContext, 0, 0, Win32.SRCCOPY);
    		    }

    Then I make a call to Win32.SwapBuffers and I'm done with that frame.

    Here are the calls made once when the program starts:

    Code :
    GL11.gl.Create(SharpGL.Version.OpenGLVersion.OpenGL2_1, RenderContextType.FBO, 1, 1, 32, null);
    GL11.gl.ShadeModel(GL11.GL_SMOOTH);
    GL11.gl.ClearColor(0F, 0F, 0F, 0F);
    GL11.gl.ClearDepth(1.0);
    GL11.gl.Enable(GL11.GL_DEPTH_TEST);   // was the raw enum 2929U
    GL11.gl.Enable(GL11.GL_VERTEX_ARRAY);
    GL11.gl.Enable(GL11.GL_CULL_FACE);
    GL11.gl.CullFace(GL11.GL_BACK);
    GL11.gl.FrontFace(GL11.GL_CCW);
    GL11.gl.DepthFunc(GL11.GL_LESS);
    GL11.gl.Hint(GL11.GL_PERSPECTIVE_CORRECTION_HINT, GL11.GL_NICEST);   // was 3152U, 4354U

    Thank you!

    -Shadow


    Edit: Here's the link to the SharpGL source on github: https://github.com/dwmkerr/sharpgl/t...L/Core/SharpGL

  4. #4
    Junior Member Newbie
    Join Date
    Jun 2018
    Posts
    4
    I did a bit of looking around my code and realized that my problem had nothing to do with the performance of my VBO or OpenGL; the problem was the amount of looping I did to gather vertex data. After removing some unneeded loops and optimizing when to update the buffer, I was able to get my frame rate back up to 500 FPS.
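    For anyone landing here with the same symptom, the shape of the fix is to gather all vertex data in a single pass over the model list instead of repeated re-scans. A rough C++ sketch (the `Cube` struct and the corner offsets are placeholders invented for illustration; only the 36-vertices-per-cube figure comes from the thread's draw call):

    ```cpp
    #include <vector>
    #include <cstddef>

    // One cube = 12 triangles = 36 vertices, matching the thread's
    // gl.DrawArrays(GL_TRIANGLES, 0, 36 * modelCount).
    constexpr std::size_t kVertsPerCube = 36;

    struct Cube { float x, y, z; };  // hypothetical model: just an origin

    // Single pass: each cube's vertices are appended exactly once, so
    // the cost is O(cubes * 36) with no repeated scans of the model list.
    std::vector<float> gatherVertices(const std::vector<Cube>& cubes) {
        std::vector<float> out;
        out.reserve(cubes.size() * kVertsPerCube * 3);
        for (const Cube& c : cubes) {
            for (std::size_t v = 0; v < kVertsPerCube; ++v) {
                // Placeholder offsets; a real engine would index a
                // 36-entry table of unit-cube corner positions here.
                out.push_back(c.x + float(v % 2));
                out.push_back(c.y + float((v / 2) % 2));
                out.push_back(c.z + float((v / 4) % 2));
            }
        }
        return out;
    }
    ```

    The key point is the `reserve` plus the single append loop: repeatedly growing or re-walking the array inside nested per-attribute loops is exactly the kind of CPU-side cost that can dominate the frame while the GPU sits idle.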
