
Thread: Problem with glReadPixels using FBO

  1. #1
    Intern Contributor
    Join Date
    Nov 2017
    Posts
    80

    Problem with glReadPixels using FBO

    Hi, All!

    I have a problem with the glReadPixels function: the result is empty.


    My code:
    Code :
     
        // initialization
        depthMapFBO = glGenBuffers();
        glBindBuffer(GL_PIXEL_PACK_BUFFER, depthMapFBO);
        glBufferData(GL_PIXEL_PACK_BUFFER, display.getWidth() * display.getHeight() * 4, GL_DYNAMIC_DRAW);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        ....
     
        // used after the scene has been rendered
        glBindBuffer(GL_PIXEL_PACK_BUFFER, depthMapFBO);
        ByteBuffer buffer = BufferUtils.createByteBuffer(display.getWidth() * display.getHeight() * 4);
        glReadPixels(0, 0, display.getWidth(), display.getHeight(), GL_DEPTH_COMPONENT, GL_FLOAT, buffer);
        ByteBuffer glMapBuffer = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_WRITE, buffer); // not null, so the buffer seems to be initialized OK
        makeScreenShot(display.getWidth(), display.getHeight(), glMapBuffer);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);


    But if I take the screenshot like this, everything is OK:
    Code :
          ByteBuffer buffer = BufferUtils.createByteBuffer(display.getWidth() * display.getHeight() * 4);
          glReadPixels(0, 0, display.getWidth(), display.getHeight(), GL_DEPTH_COMPONENT, GL_FLOAT, buffer);
          makeScreenShot(display.getWidth(), display.getHeight(), buffer);


    What is wrong with the depthMapFBO variant?


    Thanks for any answer!

  2. #2
    Senior Member OpenGL Guru
    Join Date
    Jun 2013
    Posts
    2,999
    Whatever the problem is, it's probably in makeScreenShot(). In particular, I can't see how it would make sense to pass a function pointer (glMapBuffer) in one case and a ByteBuffer object in the other case.

  3. #3
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,567
    Quote Originally Posted by nimelord View Post
    My code:
    If this is supposed to be C++, it's not valid, and it shouldn't even compile.

    You should include GL prototypes and compile with a type-checking C++ compiler. That may help you clear up some of your problems.

  4. #4
    Intern Contributor
    Join Date
    Nov 2017
    Posts
    80
    Quote Originally Posted by Dark Photon View Post
    If this is supposed to be C++, it's not valid, and it shouldn't even compile.

    You should include GL prototypes and compile with a type-checking C++ compiler. That may help you clear up some of your problems.
    It is not C++; it's Java. LWJGL is a thin wrapper over the native OpenGL calls, and the GL function signatures are almost the same.

    I have already implemented a scene with cascaded shadow mapping for the directional light, point-light shadows via a cube texture, and spot-light shadow maps. The only signature differences I've noticed between C++ and LWJGL are glGenBuffers and buffer objects being referenced as int, without the GLuint type.

    I don't know why, but I have solved all my problems by reading this forum, and most of them had already been explained for other OpenGL beginners. So for me this forum is a much more effective place to find solutions than others.

    I'm sorry for the difference between your platform and mine.

    Maybe you see other problems besides the wrong syntax? I'm sure I'm using the GL calls incorrectly, but none of the examples I could find helped me see where the problem is.
    Last edited by nimelord; 01-09-2018 at 03:44 AM.

  5. #5
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,789
    OK, quite a few problems with your code.

    First of all, you are not actually using an FBO - you're using a PBO - a Pixel Buffer Object. That's not a code problem, it's terminology, but it's important to get it correct because if you go looking for help with FBOs you won't actually find any useful help.

    Secondly, you're using the PBO incorrectly, in that you're setting up as if you wish to glReadPixels into a PBO, then you do the actual glReadPixels as if you were reading it into a system memory pointer, then you continue as if you were reading into a PBO.

    The last argument of your glReadPixels call, if a buffer object is currently bound to GL_PIXEL_PACK_BUFFER, is not a system memory pointer but is instead interpreted as an offset into the buffer object's data store. So what you probably actually intend to use here is 0.

    The buffer's data store may then be accessed via glMapBuffer or glGetBufferSubData.
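    As a rough sketch, assuming a current GL context and reusing the poster's own `display` and `makeScreenShot` names (everything else here is illustrative, not a definitive implementation), the corrected readback might look like:

```java
// Sketch only: with a pack buffer bound, glReadPixels takes an OFFSET
// into the buffer's data store, so pass 0 rather than a ByteBuffer.
int pbo = glGenBuffers();
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER,
        display.getWidth() * display.getHeight() * 4L, GL_DYNAMIC_READ);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// ... after the scene has been rendered:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, display.getWidth(), display.getHeight(),
        GL_DEPTH_COMPONENT, GL_FLOAT, 0);
ByteBuffer mapped = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY, null);
if (mapped != null) {
    makeScreenShot(display.getWidth(), display.getHeight(), mapped);
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```

    Note the map access is GL_READ_ONLY rather than GL_READ_WRITE, since the data is only read back, and the buffer is unmapped before unbinding.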

  6. #6
    Intern Contributor
    Join Date
    Nov 2017
    Posts
    80
    Quote Originally Posted by mhagain View Post
    The last argument of your glReadPixels call, if a buffer object is currently bound to GL_PIXEL_PACK_BUFFER, is not a system memory pointer but is instead interpreted as an offset into the buffer object's data store. So what you probably actually intend to use here is 0.
    You are right; it's working fine now.
    That was a crude mistake.

    Thank you a lot!

  7. #7
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,789
    The other thing to watch out for with glReadPixels is that if it needs to do a format conversion during the read it will kill your performance (not that glReadPixels is fast to begin with). We typically see this when people attempt to read GL_RGB or GL_BGR data, which will always require a format conversion.

    With reading the depth buffer you might get better performance by first checking if you actually have a 32-bit floating point depth buffer to begin with, then adjusting the call to match the format you actually do have. If you really need it as floating point it can often be faster to convert in code yourself than it is to let GL do it for you.

    With PBOs the idea is not to read to a PBO then map and access it immediately after. Instead you should wait some arbitrary amount of time - one frame is good to start with - between the read and the map. This is to give the read sufficient time to complete asynchronously before you map. If you absolutely must have the data immediately then not using a PBO at all might be faster (but watch that format conversion).
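    A hedged sketch of that deferred-map pattern with two PBOs alternated per frame (the two-buffer count, the `width`/`height`/`frame` variables, and all names are illustrative assumptions, and a live GL context is required):

```java
// Setup: two PBOs so one can be read into while the other is mapped.
int[] pbos = { glGenBuffers(), glGenBuffers() };
for (int pbo : pbos) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4L, GL_STREAM_READ);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// Per frame:
int writeIdx = frame & 1;        // PBO receiving this frame's read
int mapIdx   = (frame + 1) & 1;  // PBO filled one frame ago

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[writeIdx]);
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, 0);

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[mapIdx]);
ByteBuffer depthData = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY, null);
if (depthData != null) {
    // ... consume LAST frame's depth data here ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```

    The one-frame delay gives the asynchronous read time to complete, so the map call is far less likely to stall the pipeline.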

    I mention these because it seems possible that you're trying to optimize this, but going down the wrong route to do so.

  8. #8
    Intern Contributor
    Join Date
    Nov 2017
    Posts
    80
    I'm trying to implement a 'coverage buffer' for occlusion culling.


    I need to do several simple steps (as I learned earlier):

    1 - copy the depth buffer of the currently rendered frame to a texture.
    2 - render that texture into another, low-resolution texture to downscale it.
    3 - read the low-resolution texture back to the CPU side.
    4 - reproject the depth map from the previous frame to the current one.
    5 - rasterize the resulting depth map with bounding boxes to cull meshes.


    Now I'm working on step 1, and I've begun to think I'm going about it wrong.

    GL_PIXEL_PACK_BUFFER means data movement between the GPU and the CPU. That will be needed later, not now.

    It seems that for step 1 I need glBlitFramebuffer.

    Thank you!


    PS: glBlitFramebuffer also lets me skip step 2.
    Last edited by nimelord; 01-09-2018 at 02:07 PM.
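    For reference, a minimal sketch of such a depth blit between two FBOs (the `sceneFbo`/`smallFbo` identifiers and dimensions are placeholders, a non-multisampled source is assumed, and depth blits must use GL_NEAREST filtering):

```java
// Placeholder names; both FBOs need compatible depth attachments.
glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, smallFbo);
// GL_LINEAR is an error when blitting depth, so use GL_NEAREST.
glBlitFramebuffer(0, 0, fullWidth, fullHeight,
                  0, 0, smallWidth, smallHeight,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```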

  9. #9
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,567
    Quote Originally Posted by nimelord View Post
    1 - copy the depth buffer of the currently rendered frame to a texture.
    2 - render that texture into another, low-resolution texture to downscale it.
    3 - read the low-resolution texture back to the CPU side.
    4 - reproject the depth map from the previous frame to the current one.
    5 - rasterize the resulting depth map with bounding boxes to cull meshes.
    ...
    Now I'm working on step 1, and I've begun to think I'm going about it wrong.
    ...
    It seems that for step 1 I need glBlitFramebuffer.
    For #1, you can bypass this by targeting your scene rendering (step #0) to a Framebuffer Object (FBO), and back the depth buffer with a depth texture. Then you've already got it in a texture.

    For #2, you can use glBlitFramebuffer to do the resize if you're not particular on how the depth values are chosen. Otherwise, do a custom render as you suggested.

    That said, I suspect the whole reason you're doing #2 is so that #3 is faster, right? Be sure to time your result (particularly #1-#3) to make sure you're comfortable with the cost.

    If not, another option to consider (which may very well be cheaper, and will allow you to skip #3 and possibly #2) is to just keep the depth texture on the GPU and use it full-res there. That is:

    0) Render scene to FBO with depth buffer backed by a depth texture
    1) Reproject depth texture on GPU to generate another depth texture
    2) Use depth texture for culling, etc.

    I wouldn't expect perfect results using last-frame's data to render the current frame. But feel free to give it a shot.
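    A minimal sketch of step 0 above, rendering into an FBO whose depth buffer is backed by a depth texture (LWJGL-style; the internal format, filtering, and all names here are illustrative assumptions, and a color attachment is still needed for a complete FBO):

```java
// Depth texture that will back the FBO's depth buffer.
int depthTex = glGenTextures();
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height,
        0, GL_DEPTH_COMPONENT, GL_FLOAT, (ByteBuffer) null);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

int fbo = glGenFramebuffers();
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
        GL_TEXTURE_2D, depthTex, 0);
// ... attach a color texture or renderbuffer as well, then verify:
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    throw new IllegalStateException("FBO incomplete");
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Render the scene with `fbo` bound; depthTex then holds the depth buffer.
```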

  10. #10
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,789
    Yeah, whichever way you do it, getting the depth buffer from the GPU to the CPU is going to involve a pipeline stall. The size of the depth buffer is not going to matter as much as the stall does, so downsizing it won't be as much of a performance optimization as you might think.
