Seeking 2D quad texture bitmap example? (RGB888)

I need to display 2D video. My goal is to use the power of the GPU to map video frames as textures onto a 2D image. It sounds like OpenGL can do this. I have Googled for days and found way-cool OpenGL examples with 2D bitmaps mapped to rotating, bouncing cubes that are far too complicated to understand. Can anyone please point me to a code example that simply displays a 2D image?

Background: My 32-bit Linux software decodes streaming video into 24-bit RGB888 frames at 30 fps in Windows bitmap format. I want to use OpenGL for GPU performance because the decoding algorithm already consumes a large chunk of CPU time.

Thanks in advance for your time and any tips or direction,

-Ed

The very basic code (you need an OpenGL framework to make it work):


GLuint textureName;
// some init GL code here

// the texture (2x2, RGB)
GLubyte textureData[] = { 0, 0, 0,       // black
                          255, 0, 0,     // red
                          0, 255, 0,     // green
                          0, 0, 255 };   // blue
GLsizei width = 2;
GLsizei height = 2;


glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glGenTextures(1, &textureName);   // generating a texture name is strongly recommended (mandatory in OpenGL 3.0)
glBindTexture(GL_TEXTURE_2D, textureName); // tell OpenGL that we are using the texture

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*)textureData); // send the texture data

In your draw function:


//some code here
//other code here
glEnable(GL_TEXTURE_2D); // you should use shaders, but for an example the fixed pipeline is ok ;)
glBindTexture(GL_TEXTURE_2D, textureName);
glBegin(GL_TRIANGLE_STRIP);  // draw something with the texture on
glTexCoord2f(0.0, 0.0);
glVertex2f(-1.0, -1.0);

glTexCoord2f(1.0, 0.0);
glVertex2f(1.0, -1.0);

glTexCoord2f(0.0, 1.0);
glVertex2f(-1.0, 1.0);

glTexCoord2f(1.0, 1.0);
glVertex2f(1.0, 1.0);
glEnd();

//other code here
swapBuffer();

I don’t know if I’ve forgotten something, but this should give you a base to start with.

One thing missing from Rosario’s code: the texture update needed for video.
When a new frame is available :
glBindTexture(GL_TEXTURE_2D, textureName); // if not already bound
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB /* or GL_BGR, depending on how your video is decoded */,
GL_UNSIGNED_BYTE, textureData);

You also probably want to avoid messing with mipmaps, so add this after the last line of the init code written by Rosario:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);

Thank you very much! Both Rosario and ZbufferR!

Your example startled me when I first saw it start working. I saw a beautiful image appear that looked like four different-colored quasars. I thought it must be a message from god. Then I realized that it was simply the four pixels of the 2x2 texture stretched out over the 640x480 window I had created, making gradient transitions between each pixel. How cool! :slight_smile:

I had a question about alignment. My code currently aligns bitmap rows to 32-bit words, with padding bytes at the end of each row. I have seen this referred to as “stride”. How would I modify the example to handle this? Or must I change my code to output non-aligned data?

Thank you both again for taking the time to help a newbie get past the struggling and start making progress! I know it took considerable time to post the example and for ZbufferR to read and comment on it. I appreciate that.

-Ed

One more important question for my next step…

Can you please offer advice on how a worker thread should signal OpenGL there is a new texture (frame) to render? Should I have my glutIdleFunc() do a conditional wait on a mutex?

I am still learning how glutMainLoop() and glutMainLoopEvent() should be used.

-Ed

I am not sure of the best answer to this, but have a look at what can be tweaked with glPixelStorei (quite a lot, actually):
http://www.opengl.org/sdk/docs/man/xhtml/glPixelStore.xml

About signals: glut is nice for starters but somewhat old and inflexible, so don’t hesitate to move away from glut if the solutions below don’t fit your needs.

  1. one possibility is indeed to sync in glutIdleFunc(): when a new frame is ready, update the texture with glTexSubImage2D, then call glutPostRedisplay().

  2. another would be to use a timer; glut has glutTimerFunc(), but I am not sure of its accuracy.

>About signals: glut is nice for starters but somewhat old and
>inflexible, so don’t hesitate to move away from glut if the
>solutions below don’t fit your needs.

I do not want to spend the effort learning trailing-edge technology.

What are the modern alternatives to using glut? Or are you saying it is better and more flexible to use OpenGL directly?

Thanks again for your help!

-Ed

Glut is very easy to learn.
Besides, OpenGL is only the graphics side; you already ‘use OpenGL directly’.
All the event handling, window creation, etc. is provided by glut or its alternatives.
Freeglut is a compatible, more modern reimplementation of glut.
Other multi-platform alternatives: SDL, GLFW, Qt, …
You can also use platform-specific APIs such as wgl/glx/…

Thank you for the overview. I have some reading to do…

Can you please review the following code for anything that is abnormal or missing? It is a hybrid of the posted example code and some example code from the OpenGL SuperBible.

The code loads and displays a 720x486 bitmap, but very badly. The screen is divided into a number of blocky areas where I cannot make out details, and data may be duplicated within the blocks. However, the colors and the orientation of the contrasts look about right. I’m thinking it is something to do with size or resolution?

Thanks in advance,

-Ed

void SetupRC(void)
{
StructBitmap* pBitmap = LoadBitMap("frame_720_486.bmp");
glClearColor(0.0f, 0.0f, 0.0f, 1.0f ); // Black background
GLbyte* textureData = (GLbyte*)pBitmap->pData;
GLsizei width = pBitmap->headerBitMapInfo.biWidth;
GLsizei height = pBitmap->headerBitMapInfo.biHeight;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glBindTexture(GL_TEXTURE_2D, m_texture[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, (GLvoid*)textureData);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
}

void TextureRender(void)
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the window with current clearing color
glShadeModel(GL_SMOOTH);
glEnable(GL_NORMALIZE);
glPushMatrix();
glDisable(GL_LIGHTING); // Draw plane that the cube rests on
glEnable(GL_TEXTURE_2D); // should use shader, but for an example fixed pipeline is ok :wink:
glBindTexture(GL_TEXTURE_2D, m_texture[0]);
glBegin(GL_TRIANGLE_STRIP); // draw something with the texture on
glTexCoord2f(0.0, 0.0);
glVertex2f(-1.0, -1.0);

glTexCoord2f(1.0, 0.0);
glVertex2f(1.0, -1.0);

glTexCoord2f(0.0, 1.0);
glVertex2f(-1.0, 1.0);

glTexCoord2f(1.0, 1.0);
glVertex2f(1.0, 1.0);
glEnd();

glPopMatrix();

glutSwapBuffers(); // swap the front and back buffers
}

int main(int argc, char* argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(720,486);
m_window = glutCreateWindow("2D Image Texture 720x486");
glutKeyboardFunc(&TextureKeyPressFunc);
glutDisplayFunc(&TextureRender);
SetupRC();
glutMainLoop();
glDeleteTextures(1,m_texture);
return 0;
}

Did you try with 32-bit alignment?
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ALIGNMENT, 4);

Can you detail how the code below is defined ?
StructBitmap* pBitmap = LoadBitMap("frame_720_486.bmp");

Thank you for your suggestion. Yes, I tried various combinations of pack/unpack alignment, including what you suggested. Coincidentally, I think it did not matter because at 720 x 486 and 3 bytes per pixel, each row (2160 bytes) is divisible by four, so there is no padding needed for alignment?

LoadBitMap reads a frame I captured and saved in Windows bitmap file format. I uploaded a copy of the bitmap file here:

http://www.4shared.com/file/95098519/b225b769/frameOneshot.html

I am developing under Red Hat 5.2.

// pData points to the beginning of the BGR data
GLbyte* textureData = (GLbyte*)pBitmap->pData;

typedef struct
{
unsigned char * pData;
BITMAPFILEHEADER headerBitMapFile;
BITMAPINFOHEADER headerBitMapInfo;
int countBytesPerLine;
} StructBitmap;

StructBitmap* LoadBitMap(char* fileName)
{
memset(&m_bitmap, 0, sizeof(StructBitmap));
FILE* stream = fopen(fileName, "rb");
fread(&m_bitmap.headerBitMapFile, sizeof(unsigned char), sizeof(m_bitmap.headerBitMapFile), stream);
fread(&m_bitmap.headerBitMapInfo, sizeof(unsigned char), sizeof(m_bitmap.headerBitMapInfo), stream);
m_bitmap.pData = new unsigned char[m_bitmap.headerBitMapInfo.biSizeImage];
fread(m_bitmap.pData, sizeof(unsigned char), m_bitmap.headerBitMapInfo.biSizeImage, stream);
fclose(stream);

m_bitmap.countBytesPerLine = 3 * m_bitmap.headerBitMapInfo.biWidth;
m_bitmap.countBytesPerLine = (m_bitmap.countBytesPerLine + 3) & ~3; // round each row up to a multiple of 4
return &m_bitmap;
}

Please let me know if you can think of other ideas to try.

Thanks again for your time,

-Ed

Here’s a link to a screen-shot of the blocky output I am seeing.

http://www.4shared.com/file/95112619/24f27130/Screenshot-2D_Image_Texture_Test.html

Thanks,
-Ed

Dropped your code and the bmp into one of my base projects, and it worked perfectly for me.
One thing to note: you have to check whether you can use NPOT (non-power-of-two) textures, and check the maximum texture size.
If NPOT is not supported, you have to allocate a 1024*512 texture, partially fill it with the video, and draw with adjusted texcoords.
What is your card and the driver you use?
What is your card and the driver you use ?

It works! Thank you so much for taking the time to compile and run my code and staying with me until I had it working.

I had an Nvidia Quadro FX 570 but was using the original ‘nv’ driver that shipped with Red Hat 5.2. As a Linux newbie, I incorrectly assumed that drivers were updated when I ran software update. I installed 173.14.18 and it works great! Coming from Windows, I was amazed; it was quick and required no reboots.

>// you should use shader, but for an example fixed pipeline is ok :wink:
>glEnable(GL_TEXTURE_2D);

Regarding the above comment; is it difficult to replace this with a shader?

Would you recommend I learn how to use framebuffer objects for further CPU reduction in displaying video?

I was curious about the details of the NPOT workaround but hesitant to take any more of your time. It sounded like you would implement a 1024*512*3 array to display a 720*486*3 video.

Anyway thank you very much for your guidance. I hope to have motion video working by this afternoon!

-Ed

Some details about doing NPOT with only POT hardware support :

  • find the smallest POT texture in which the NPOT image still fits. It does not need to be square, only power-of-two; in your case 1024*512 will be enough.
  • you can still map the texture to a full-screen quad, but the texture coordinates have to be adapted, otherwise the unused parts of the POT texture will be visible. Example for the maximum s texture coordinate in your case: 720.0/1024.0 = 0.703125 (instead of 1.0); max t: 486.0/512.0 = 0.94921875.
  • when updating the video frame, take advantage of glTexSubImage2D to update only the 720*486 section.