PBO and Texture Float ...

Hi,

I have read a lot of topics on PBOs in this forum, but I didn’t find a solution to my problem.

I am trying to use a PBO to stream textures from the CPU to the GPU. The textures are mipmapped GL_TEXTURE_2D_ARRAY_EXT textures with the GL_RGB16F_ARB internal format. I use the following code to upload them with a PBO:

int Size = width*height*3;
GLfloat* mytex = new GLfloat[Size]; // then initialized …

glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, aBuf ); // aBuf created with glGenBuffer
glBufferData( GL_PIXEL_UNPACK_BUFFER_ARB, Size*sizeof(float), NULL, GL_STREAM_DRAW );

void* pbomem = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY );
memcpy( pbomem, mytex, Size*sizeof(float) );
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

glTexSubImage3D( GL_TEXTURE_2D_ARRAY_EXT, aLevel, 0, 0, zOffset, width, height, 1, GL_RGB, GL_FLOAT, 0); // texture previously created and bound correctly

glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, 0 );

Without PBO, I use :

int Size = width*height*3;
GLfloat* mytex = new GLfloat[Size];

// mytex init … then,

glTexSubImage3D( GL_TEXTURE_2D_ARRAY_EXT, aLevel, 0, 0, zOffset, width, height, 1, GL_RGB, GL_FLOAT, mytex); // texture previously created and bound correctly

So, when I use the “usual” method without the PBO, it works fine and I can read the texture from a fragment shader. But with the PBO, I see strange texture values. I think my problem comes from this line:

glTexSubImage3D( GL_TEXTURE_2D_ARRAY_EXT, aLevel, 0, 0, zOffset, width, height, 1, GL_RGB, GL_FLOAT, 0);

because the PBO’s data store is sized in bytes while I fill it with floats. I have tried other ways to send the sub-texture using GL_BYTE, but without success.

Do you see another way to load a float texture with a PBO? Or maybe an error in my code?

Thanks a lot for your help !

Regards,
JB

I may be mistaken, but I think that if you want to write to a PBO you need to bind it to the GL_PIXEL_PACK_BUFFER_ARB (write) target and not GL_PIXEL_UNPACK_BUFFER_ARB (read).

At first sight, I don’t see anything wrong with that code…

Are you sure you tested on the same image data when comparing the “usual” and PBO methods? Otherwise it could be related to the image dimensions.

AFAIK, UNPACK is the right enum.


Sure, it is the same data. The image dimensions are relatively small (256x256x128).

And did you use a 1 pixel border intentionally?

I don’t use a border; it is set to 0 when I create the texture.

Oh right, my mistake, I was confusing the TexImage and TexSubImage calls…
What exactly is this “strange behavior” you’re seeing?

It looks like a mix of all the texture data. That is why I thought the texture upload was the problem: the data doesn’t seem to end up in the right place.

AFAIK, UNPACK is the right enum.

YES, my mistake too.

Did you check whether any GL error is thrown?

If the PBO data type is byte, maybe you should do:

glTexSubImage3D( GL_TEXTURE_2D_ARRAY_EXT, aLevel, 0, 0, zOffset, width, height, 1, GL_RGB, GL_UNSIGNED_BYTE, 0);

with GL_UNSIGNED_BYTE as the pixel data type.

Yes, I query the error string … but there is no error :wink:

Strange… If you can create a small glut-based application illustrating this problem I’d be happy to take a look at it for you.

Sorry, I don’t have time to make a minimal application …

I have tried disabling mipmaps with the PBO … and it works! It means that either PBOs don’t work with mipmaps, or I have an error in my code!

I think the second one is true :slight_smile:

If anyone has experience with mipmaps and PBOs, I’d like to know what they think about that!

Thank you.

Perhaps this may help:

(13) What if an application wants to populate an array texture using
separate mipmap chains a layer at a time rather than specifying all
layers of a given mipmap level at once?

  RESOLVED:  For 2D array textures, call TexImage3D once with a NULL image
  pointer for each level to establish the texel array sizes.  Then, call
  TexSubImage3D for each layer/mipmap level to define individual images.

Don’t know if you’re already doing this…

Yes, I do, because it works without the PBO …

256x256x128 @ RGB32F ≈ 96MB. So your app preallocates ~96MB of VRAM for the PBO and as much again for the texture, which is ~200MB. What hardware do you use? Do you have enough video memory? Creating such large PBOs can hit internal limits. Can you try a smaller texture size?

I have found the solution … but it is not obvious !

In my first implementation, the algorithm to initialize the mipmapped 2D texture array with the PBO looked like this:

forall zOffset {
forall mipmapLevel {

 int Size = width*height*3;
 GLfloat* mytex = new GLfloat[Size];

 initTexture(mytex,mipmapLevel,zOffset);

 glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, aBuf );
 glBufferData( GL_PIXEL_UNPACK_BUFFER_ARB, 
               Size*sizeof(float), NULL,
               GL_STREAM_DRAW );

 void* pbomem = glMapBuffer( GL_PIXEL_UNPACK_BUFFER_ARB,
                             GL_WRITE_ONLY );

 memcpy( pbomem, mytex, Size*sizeof(float) );
 glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

 glTexSubImage3D( GL_TEXTURE_2D_ARRAY_EXT,
                  mipmapLevel, 0, 0, zOffset,
                  width, height, 1,
                  GL_RGB, GL_FLOAT, 0);

 glBindBuffer( GL_PIXEL_UNPACK_BUFFER_ARB, 0 );

}
}

So, just change the loop with :

forall mipmapLevel {
forall zOffset {

...

}
}

and it works. It means that switching the mipmap level on every PBO upload doesn’t work correctly (with the latest NVIDIA Linux driver on an 8800GTX)! It looks simple in the pseudo-code, but it was hard to find in a real implementation! :wink:

Thanks for your help. If anyone knows about, or has already tried, updating mipmaps with a PBO, I will be happy to know how they did it!

see you.

It doesn’t need to allocate a PBO of size 256x256x128 :wink: One of 256x256 is enough. However, the problem is not there (768MB on my graphics card).

Thanks for your help.

I have made a test app to try PBOs with mipmaps. If you want to test it, just compile it with: g++ -o pbo main.cpp -lGL -lglut -lGLEW
(on a Linux system, with the code in main.cpp)

Just set the boolean variable “usePBO” to true (resp. false) to enable (resp. disable) PBO texture upload.

On my system, it works well without the PBO and badly with it!

And you ? :slight_smile:

 
#include <GL/glew.h>
#include <GL/glut.h>
#include <GL/glu.h>
#include <GL/gl.h>

#include <iostream>

using namespace std;

#define BUFFER_OFFSET(i) ((char*)NULL + (i))


static bool usePBO = true;


static int width = 600;
static int height = 600;

static GLuint texid;
static GLuint pboid[8];


void
idle ()
{
  glutPostRedisplay ();
}


void
display ()
{
  glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

  glBindTexture(GL_TEXTURE_2D,texid);
  glEnable(GL_TEXTURE_2D);

  glBegin(GL_QUADS);
  glTexCoord2f(0.f,1.f); glVertex3f(-1.f,-1.f,-1.f);
  glTexCoord2f(1.f,1.f); glVertex3f(100.f,-1.f,-100.f);
  glTexCoord2f(1.f,0.f); glVertex3f(100.f,1.f,-100.f);
  glTexCoord2f(0.f,0.f); glVertex3f(-1.f,1.f,-1.f);
  glEnd();

  glutSwapBuffers ();
}


void
initGL ()
{
  glewInit();
  if (glewIsSupported("GL_VERSION_2_0"))
    cerr << "[PBO] Ready for OpenGL 2.0" << endl;
  else {
    cerr << "[PBO] OpenGL 2.0 not supported" << endl;
    exit(1);
  }

  glCullFace( GL_BACK );
  glEnable( GL_DEPTH_TEST );
  glEnable( GL_CULL_FACE );
  glEnable( GL_TEXTURE_2D );
  glDisable( GL_LIGHTING );

  glViewport(0,0,width,height);
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  glFrustum(-1.0,1.0,-1.0,1.0,1.0,100.0);
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
}

void
initTextureAndShader ()
{
  const int w = 256;
  const int h = 256;
  const int z = 5;

  glGenTextures(1,&texid);
  glBindTexture(GL_TEXTURE_2D,texid);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);

  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, 0.f);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, (float) (z-1));
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, (z-1));

  for ( int i = 0; i < z; ++i ) 
    glTexImage2D(GL_TEXTURE_2D,i,GL_RGB,(w>>i),(h>>i),0,GL_RGB,GL_UNSIGNED_BYTE,0);

  cerr << "Texture Allocation OK" << endl;

  if ( usePBO ) {
    // create PBO ...
    glGenBuffers(z,pboid);
  }

  for ( int i = 0; i < z; ++i ) {

    int iw = (w>>i);
    int ih = (h>>i);

    GLubyte* gputex = new GLubyte[3*iw*ih];
    for ( int j = 0; j < iw*ih; ++j ) {
      if ( i == 0 ) {
	gputex[3*j]   = 0;
	gputex[3*j+1] = 255;
	gputex[3*j+2] = 0;
      }
      else if ( i == 1 ) {
	gputex[3*j]   = 255;
	gputex[3*j+1] = 255;
	gputex[3*j+2] = 0;
      }
      else if ( i == 2 ) {
	gputex[3*j]   = 255;
	gputex[3*j+1] = 0;
	gputex[3*j+2] = 0;
      }
      else if ( i == 3 ) {
	gputex[3*j]   = 0;
	gputex[3*j+1] = 0;
	gputex[3*j+2] = 255;
      }
      else if ( i == 4 ) {
	gputex[3*j]   = 255;
	gputex[3*j+1] = 255;
	gputex[3*j+2] = 255;
      }
    }

    if (! usePBO ) 
      glTexSubImage2D( GL_TEXTURE_2D,i,0,0,iw,ih,GL_RGB,GL_UNSIGNED_BYTE,gputex);
    else {
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pboid[i]);
      glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, 3*iw*ih, 0,
		   GL_STREAM_DRAW);
      
      GLvoid* ptr1 = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY);
      
      GLubyte* tmp = (GLubyte*) ptr1;
      for ( int j = 0; j < 3*iw*ih; ++j )
	tmp[j] = gputex[j];
      
      glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);
      
      glTexSubImage2D( GL_TEXTURE_2D,i,0,0,iw,ih,GL_RGB,GL_UNSIGNED_BYTE,BUFFER_OFFSET(0)/*gputex*/);
      
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
    }

    delete [] gputex;
  }

  if ( usePBO ) 
    glDeleteBuffers(z,pboid);

  cerr << "Texture upload OK" << endl;
}


int
main ( int argc, char** argv )
{
  glutInit (&argc, argv);
  glutInitDisplayMode (GLUT_RGBA | GLUT_DEPTH | GLUT_DOUBLE);
  glutInitWindowSize (width,height);
  glutCreateWindow ("PBO");

  glutIdleFunc (idle);
  glutDisplayFunc (display);

  initGL();
  initTextureAndShader();

  glutMainLoop ();

  return EXIT_SUCCESS;
} 


I get the same result with and without the PBO…