Thread: GL_LUMINANCE and PBO: slow!


  1. #1
    Junior Member Newbie
    Join Date
    May 2011
    Posts
    18

    GL_LUMINANCE and PBO: slow!

    I am trying to play a movie in YUV 420, so I have three textures (one for the luma plane, one for U, one for V), each with a single channel. When I use PBOs it seems slower than playing the same file in YUV 422! Is a GL_LUMINANCE double-PBO setup accelerated on both NVIDIA and AMD cards?

  2. #2
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985
    Considering that neither luminance nor double-precision pixels are directly supported by hardware, I'm not surprised.

    If you want the best performance, use core OpenGL instead of legacy features. That means don't use luminance or intensity formats, and don't use types that have no corresponding texture internal format, such as double.
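
    For example, a single-channel plane can be allocated with a sized core internal format and uploaded as GL_RED. A minimal sketch (the texture size and the use of GL_R8 here are just an illustration, not taken from the original post):

    Code :
    // Allocate one 8-bit single-channel plane with a sized core internal
    // format (GL_R8) instead of GL_LUMINANCE; pixel data is uploaded as GL_RED.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 1280, 720, 0,
                 GL_RED, GL_UNSIGNED_BYTE, NULL);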
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  3. #3
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    Considering that neither luminance nor double-precision pixels are directly supported by hardware, I'm not surprised.
    Luminance is directly supported by the hardware. Double is very much not though.

  4. #4
    Junior Member Regular Contributor
    Join Date
    Dec 2000
    Location
    Madrid, Spain
    Posts
    136
    I did something similar yesterday (three PBOs to draw a YUV 420 video frame). I'm not using double, just normal GL_UNSIGNED_BYTE PBOs/textures, and I'm using GL_RED textures. It is working fine.
    Here is the relevant code:

    Code :
    void CU1VideoPlayer::U1Video_RTInitialize()
    {
        ///--- Create the textures that will be used to render the video
        TESTRENDERTHREAD();
        ...
     
        ///---
        GFXSelectTextureUnit(0);
        m_oFrameTexture_Y.Initialize(GFX_TEXTURE_LINEAR, GFX_TEXTURE_LINEAR, GFX_TEXTURE_CLAMP_TO_EDGE, GFX_TEXTURE_CLAMP_TO_EDGE, 
                GFX_TEXTURE_2D, GFX_TEXTURE_STATIC);
        m_oFrameTexture_Y.m_uWidth=1280;
        m_oFrameTexture_Y.m_uHeight=720;
        m_oFrameTexture_Y.BindTexture(0);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 1280, 720, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
     
     
        ///////
        m_oFrameTexture_U.Initialize(GFX_TEXTURE_LINEAR, GFX_TEXTURE_LINEAR, GFX_TEXTURE_CLAMP_TO_EDGE, GFX_TEXTURE_CLAMP_TO_EDGE, 
                GFX_TEXTURE_2D, GFX_TEXTURE_STATIC);
        m_oFrameTexture_U.m_uWidth=1280>>1;
        m_oFrameTexture_U.m_uHeight=720>>1;
        m_oFrameTexture_U.BindTexture(0);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 1280>>1, 720>>1, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
     
     
        ///////
        m_oFrameTexture_V.Initialize(GFX_TEXTURE_LINEAR, GFX_TEXTURE_LINEAR, GFX_TEXTURE_CLAMP_TO_EDGE, GFX_TEXTURE_CLAMP_TO_EDGE, 
            GFX_TEXTURE_2D, GFX_TEXTURE_STATIC);
        m_oFrameTexture_V.m_uWidth=1280>>1;
        m_oFrameTexture_V.m_uHeight=720>>1;
        m_oFrameTexture_V.BindTexture(0);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 1280>>1, 720>>1, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
     
     
     
     
        //////////////////////////////////////////////////////////////////////////
        ///--- Create the PBOs for the textures
        glGenBuffers(1, &m_uiGLPBOID_Y);
        glGenBuffers(1, &m_uiGLPBOID_U);
        glGenBuffers(1, &m_uiGLPBOID_V);
     
        ...
    }
     
     
    bool CU1VideoPlayer::RTPrivUpdateTexture()
    {
        bool bRet=false;
        ...
     
     
        if(...){
            if(g_bSup_ARB_pixel_buffer_object){
                const int iWidth=1280;
                const int iHeight=720;
                const int iWidth2=1280>>1;
                const int iHeight2=720>>1;
     
     
                ///--- Bind
                glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_uiGLPBOID_Y);
                glBufferData(GL_PIXEL_UNPACK_BUFFER, iWidth*iHeight, NULL, GL_STREAM_DRAW);
                uint8_t *pboMemory;
                pboMemory=(uint8_t*)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
                XXXXTrapIf(pboMemory==NULL);
                if(pboMemory){
                    m_pVDecoder->TransferBitmap(pboMemory, iWidth, iHeight, 0);
                    if(!glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER)) {
                        ///--- Handle error case
                        ...
                    }
                    m_oFrameTexture_Y.BindTexture(0);
                    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, iWidth, iHeight, GL_RED, GL_UNSIGNED_BYTE, NULL);
                }
     
     
                //////////////////////////////////////////////////////////////////////////
                glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_uiGLPBOID_U);
                glBufferData(GL_PIXEL_UNPACK_BUFFER, iWidth2*iHeight2, NULL, GL_STREAM_DRAW);
                pboMemory=(uint8_t*)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
                XXXXTrapIf(pboMemory==NULL);
                if(pboMemory){
                    m_pVDecoder->TransferBitmap(pboMemory, iWidth2, iHeight2, 1);
                    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
                    m_oFrameTexture_U.BindTexture(0);
                    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, iWidth2, iHeight2, GL_RED, GL_UNSIGNED_BYTE, NULL);
                }
     
     
                //////////////////////////////////////////////////////////////////////////
                glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_uiGLPBOID_V);
                glBufferData(GL_PIXEL_UNPACK_BUFFER, iWidth2*iHeight2, NULL, GL_STREAM_DRAW);
                pboMemory=(uint8_t*)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
                XXXXTrapIf(pboMemory==NULL);
                if(pboMemory){
                    m_pVDecoder->TransferBitmap(pboMemory, iWidth2, iHeight2, 2);
                    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
                    m_oFrameTexture_V.BindTexture(0);
                    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, iWidth2, iHeight2, GL_RED, GL_UNSIGNED_BYTE, NULL);
                }
     
     
                ///--- unbind
                glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
     
                bRet=true;
            }
            else
            {
                ...
            }
        }
        return bRet;
    }
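
    For reference, this is roughly the kind of fragment shader the three GL_RED planes get combined with. It is only a sketch; the sampler names and the BT.601 constants are illustrative, not copied from my actual code:

    Code :
    // Sketch: fragment shader source (as a C string) that converts the Y/U/V
    // planes, each sampled from a GL_RED texture, to RGB with BT.601 weights.
    static const char* kYuvToRgbFragSrc =
        "uniform sampler2D uTexY;\n"
        "uniform sampler2D uTexU;\n"
        "uniform sampler2D uTexV;\n"
        "varying vec2 vTexCoord;\n"
        "void main() {\n"
        "    float y = texture2D(uTexY, vTexCoord).r;\n"
        "    float u = texture2D(uTexU, vTexCoord).r - 0.5;\n"
        "    float v = texture2D(uTexV, vTexCoord).r - 0.5;\n"
        "    gl_FragColor = vec4(y + 1.402*v,\n"
        "                        y - 0.344*u - 0.714*v,\n"
        "                        y + 1.772*u, 1.0);\n"
        "}\n";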

    Hope this helps.

    --- Carlos Abril
    --- www.zackzero.com
    Last edited by Cab; 11-06-2012 at 12:10 PM.

  5. #5
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985
    Quote Originally Posted by Alfonse Reinheart View Post
    Luminance is directly supported by the hardware. Double is very much not though.
    No, texture swizzling is directly supported by hardware. Luminance formats are just emulated using swizzling.
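
    For example, the old luminance sampling behavior (red replicated to RGB, alpha forced to 1) can be requested explicitly on a single-channel texture with texture swizzling. A minimal sketch, not tied to any code in this thread; it requires GL 3.3 or ARB_texture_swizzle:

    Code :
    // Make an R8 texture sample like legacy luminance: .rgb all return the
    // red channel and .a returns 1.
    GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);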
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  6. #6
    Junior Member Newbie
    Join Date
    May 2011
    Posts
    18
    Regarding double... I was talking about double PBOs (not a double format).
    I am certain that GL_BGRA is accelerated on NVIDIA and AMD cards, but I am not sure that GL_LUMINANCE is as fast as GL_BGRA. It works, but I am trying to handle very large (4K) movies. How can I use any other accelerated format? I have only one channel, not four!

  7. #7
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    No, texture swizzling is directly supported by hardware. Luminance formats are just emulated using swizzling.
    Remember the "as if" rule: if it gets the same results (with no loss of performance), then it is no different from using luminance. For all useful definitions of the term "supported", it is supported in hardware.

    double pbo
    What does that mean? Do you mean "double buffered pbo"?
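
    If that is what you mean, the usual pattern alternates between two PBOs so that filling one overlaps with the texture upload from the other. A rough sketch; the names (pbo[], FillPlane, width, height) are illustrative and not from this thread:

    Code :
    // Double-buffered ("ping-pong") PBO streaming for one plane. While the
    // driver copies from one PBO into the texture, the CPU fills the other.
    // Assumes the target texture is already bound.
    static unsigned frame = 0;
    const unsigned writeIndex  = frame % 2;        // PBO the CPU fills now
    const unsigned uploadIndex = (frame + 1) % 2;  // PBO uploaded this frame
     
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[uploadIndex]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RED, GL_UNSIGNED_BYTE, NULL); // NULL = offset 0 in PBO
     
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[writeIndex]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, width*height, NULL, GL_STREAM_DRAW);
    uint8_t *ptr = (uint8_t*)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if(ptr){
        FillPlane(ptr, width, height); // decode/copy the next frame's plane
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    ++frame;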

    I am certain that GL_BGRA is accelerated on NVIDIA and AMD cards, but I am not sure that GL_LUMINANCE is as fast as GL_BGRA
    You can't use GL_BGRA on a single-channel texture. More importantly... if you're not sure whether LUMINANCE is faster or slower than something else, how do you know that your program is slow? Are you profiling?

    Also, why don't you actually show us your code instead of making us guess at what you're doing?
