PBO performance & compressed textures



jpchristin
05-05-2008, 02:48 PM
Hi,

I'm seeing some strange performance results when I use a PBO. I'm on an NVIDIA 9800 GTX. I've found a way to get faster results with the PBO, but I don't understand why it is faster.

Here is the fastest way:

---------------
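// Allocate the texture storage first, with no PBO bound and a NULL pixel pointer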
glTexImage2D( GL_TEXTURE_2D, a_nLevel, nInternalFormat, a_nUDim, a_nVDim, bordersize, nExternalFormat, nType, NULL );

glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, unBuf );
glBufferData( GL_PIXEL_UNPACK_BUFFER_ARB, nSize, NULL, GL_STREAM_DRAW );

void* ioMem = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY );
memcpy( ioMem, a_pData, nSize );
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

glTexSubImage2D( GL_TEXTURE_2D, a_nLevel, 0, 0, a_nUDim, a_nVDim, nExternalFormat, nType, NULL );
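// (NULL here is a byte offset of 0 into the bound PBO, not a client pointer)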

glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, 0 );
---------------

But if I use this code instead, it is about two times slower:
---------------
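// Same upload, but the texture is allocated only after the PBO is bound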
glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, unBuf );
glBufferData( GL_PIXEL_UNPACK_BUFFER_ARB, nSize, NULL, GL_STREAM_DRAW );

void* ioMem = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY );
memcpy( ioMem, a_pData, nSize );
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

glTexImage2D( GL_TEXTURE_2D, a_nLevel, nInternalFormat, a_nUDim, a_nVDim, bordersize, nExternalFormat, nType, NULL );

glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, 0 );
---------------

I do not see why it is slower. It looks as if the surface allocation is slower when a PBO is bound...

That causes a problem with compressed textures, since I cannot call glCompressedTexImage2D with a NULL pointer for data.

So if I use:
---------------
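// Compressed path: allocation and copy happen in a single call, with the PBO bound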
glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, unBuf );
glBufferData( GL_PIXEL_UNPACK_BUFFER_ARB, nSize, NULL, GL_STREAM_DRAW );

void* ioMem = glMapBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY );

memcpy( ioMem, a_pData, nSize );

glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

glCompressedTexImage2DARB( GL_TEXTURE_2D, a_nLevel, internalformat, a_nUDim, a_nVDim, bordersize, nSize, NULL );

glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, 0 );
---------------

it is way slower than using:
---------------
glCompressedTexImage2D( GL_TEXTURE_2D, a_nLevel, internalformat, a_nUDim, a_nVDim, bordersize, nSize, a_pData );
---------------

It is really strange to get results like this. It looks like I will not be able to use PBOs with compressed textures, since the performance is much worse...

Thanks in advance,

J-P

Dark Photon
05-09-2008, 06:50 AM
That causes a problem with compressed textures, since I cannot call glCompressedTexImage2D with a NULL pointer for data. ... It looks like I will not be able to use PBOs with compressed textures, since the performance is much worse...
Your point isn't very clear, but the solution to your problem is to use glTexImage2D with a NULL pointer to pre-allocate your GPU textures (even if the internal format is a compressed texture format). That resolves your main question listed above.

Also, you seem unclear about the difference between allocating a GPU texture and copying data into an existing texture. glTexImage*/glCompressedTexImage* allocates a GPU texture, possibly filling it with texel data if you provide a non-NULL pixels pointer (you have to provide a non-NULL pointer for glCompressedTexImage*). By contrast, glTexSubImage*/glCompressedTexSubImage* copies data into an existing GPU texture.

So allocate once, then just copy.
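
For example, something along these lines; GL_COMPRESSED_RGBA_S3TC_DXT1_EXT and the width/height/imageSize/pData names are just placeholders for illustration:

---------------
// Allocate once: a NULL pixel pointer just reserves the storage, even when
// the internal format is a compressed format (no pixel data is read here)
glTexImage2D( GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
              width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );

// Copy as often as needed: the texture already exists, so only the
// compressed texel data moves into it
glCompressedTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, width, height,
                           GL_COMPRESSED_RGBA_S3TC_DXT1_EXT, imageSize, pData );
---------------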

jpchristin
05-09-2008, 01:34 PM
You're right, I've tried glTexImage2D with a NULL pointer and a compressed internal format, and it is really faster.

Thanks for the answer!

blechhirn
06-09-2008, 03:57 AM
So did you finally mix glTexImage2D with glCompressedTexSubImage? I read that this might be slow on several GPUs.

Best Regards,
Manuel

jpchristin
06-16-2008, 08:08 AM
Yes, I have mixed them. It looks like it is the only way to sub-load a compressed texture...

blechhirn
06-16-2008, 08:12 AM
Am I right in guessing that you're using an NVIDIA GPU? I have tried this on different GPUs, and it causes trouble on several ATI cards.

Regards,
Manu

PS: could you post the final code snippet of this mixture? Maybe I am missing something. Thanks.

Dark Photon
06-19-2008, 06:33 AM
Am I right in guessing that you're using an NVIDIA GPU?

Yes. Post your snippet and we'll check for errors.

jpchristin
06-20-2008, 09:00 AM
For the mixed code, it is the same as before, but instead of:
glCompressedTexImage2DARB( GL_TEXTURE_2D, a_nLevel, internalformat, a_nUDim, a_nVDim, bordersize, nSize, NULL );

I call:
glTexImage2D( GL_TEXTURE_2D, a_nLevel, nInternalFormat, a_nUDim, a_nVDim, bordersize, nExternalFormat, nType, NULL );
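
Putting the pieces together, the sequence ends up roughly like this (same variables as in my first post; the sub-load goes through glCompressedTexSubImage2D):

---------------
// Allocate the storage once, with the compressed internal format and no data
glTexImage2D( GL_TEXTURE_2D, a_nLevel, nInternalFormat, a_nUDim, a_nVDim, bordersize, nExternalFormat, nType, NULL );

// Stream the compressed data through the PBO
glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, unBuf );
glBufferData( GL_PIXEL_UNPACK_BUFFER_ARB, nSize, NULL, GL_STREAM_DRAW );

void* ioMem = glMapBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY );
memcpy( ioMem, a_pData, nSize );
glUnmapBuffer( GL_PIXEL_UNPACK_BUFFER_ARB );

// Sub-load from the PBO (NULL is a byte offset of 0 into the buffer)
glCompressedTexSubImage2DARB( GL_TEXTURE_2D, a_nLevel, 0, 0, a_nUDim, a_nVDim, nInternalFormat, nSize, NULL );

glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, 0 );
---------------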

blechhirn, the ATI OpenGL drivers seem really crappy, good luck!!!

Dark Photon
06-23-2008, 07:22 AM
For the mixed code, it is the same as before, but instead of:
glCompressedTexImage2DARB( GL_TEXTURE_2D, a_nLevel, internalformat, a_nUDim, a_nVDim, bordersize, nSize, NULL );

I call:
glTexImage2D( GL_TEXTURE_2D, a_nLevel, nInternalFormat, a_nUDim, a_nVDim, bordersize, nExternalFormat, nType, NULL );

blechhirn, the ATI OpenGL drivers seem really crappy, good luck!!!

Hrm. The second call looks good to me. Try an NVIDIA card?

jpchristin
06-25-2008, 07:52 AM
I am using an NVIDIA card...

In the spec it says: "Undefined results, including abnormal program termination, are generated if data is not encoded in a manner consistent with the extension specification defining the internal compression format."

That could explain why glCompressedTexImage2D crashes with NULL, but it should still work...

Dark Photon
06-26-2008, 06:21 AM
That could explain why glCompressedTexImage2D crashes with NULL, but it should still work...

Related: link (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Board=3&Number=159972).