Large Bitmap running slow

I’m having trouble drawing a fairly large bitmap using glDrawPixels. The bitmap is loaded into a GLubyte array in memory and then drawn every frame. On my desktop it runs perfectly, but when I compile and run on my laptop, it takes at least 10 seconds to load everything (the mouse cursor shows an hourglass while hovering over the OpenGL window…) and then runs very slowly (perhaps 1-2 fps). My laptop is brand new: dual-core Intel, 2 GB RAM… The only thing I can think of is that my desktop has a Radeon HD 4870 with 512 MB of memory, whereas the laptop has an integrated Intel GMA 4500MHD, which I assume relies on main memory for textures etc. Could this be the problem? How can I speed things up? Or can anyone think of another reason it’s being slowed down?

I’ve isolated the problem to the glDrawPixels call, which when commented out solves the problem.

My program is far too huge and messy to post, but here are some important lines:

//the bitmap file is about 3.6MB (2818 x 434 x 3 bytes, plus the 54-byte BMP header)
GLubyte background[3700000];

//this is done once
ifstream inFile;
inFile.open("Textures 1.2/full_level_1.bmp", ios::in | ios::binary);
inFile.read(reinterpret_cast<char*>(background), sizeof(background));
inFile.close();

//this is done every frame
glPixelZoom(pixelZoom, pixelZoom);
glRasterPos2f(-GRIDW/2 + screenPos, -GRIDH/2);
glBitmap (0, 0, 0, 0, -screenPos*248, 0.0f, NULL);
glDrawPixels(2818, 434, GL_BGR_EXT, GL_UNSIGNED_BYTE, &background[54]);

whereas the laptop has an integrated Intel GMA 4500MHD

This is the one thing that makes the difference, especially since it is Intel. Do a search on this forum and see how many people tear their hair out over Intel integrated graphics.

Anyway, you are drawing a quite big bitmap every frame, copying it from system memory with glDrawPixels. Don’t wonder why it is slow, even if it works well on the ATI card (which completely outdoes the Intel one).
If you need fast pixel transfers, I advise you to look into “pixel buffer objects”, if ARB_pixel_buffer_object is supported on your hardware. You could also use a texture, if your hardware supports NPOT textures.

More information:

http://www.opengl.org/registry/specs/ARB/pixel_buffer_object.txt
http://www.songho.ca/opengl/gl_pbo.html
http://www.nvidia.com/dev_content/nvopenglspecs/GL_ARB_texture_non_power_of_two.txt
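To make the PBO suggestion concrete, a transfer through a pixel buffer object might look roughly like the sketch below. This is untested against your program; the name pboId is made up, and on Windows the buffer-object entry points (glGenBuffers etc.) have to be loaded through wglGetProcAddress or a loader such as GLEW.

```cpp
// Sketch: stage the bitmap in a pixel buffer object so the driver can
// keep it on (or DMA it from) its own memory instead of re-reading a
// client array every frame. Assumes a current GL context and that
// ARB_pixel_buffer_object is supported.

GLuint pboId;

// done once
void initBackgroundPBO(const GLubyte* pixels, int sizeInBytes)
{
    glGenBuffers(1, &pboId);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pboId);
    // upload the pixel data once; STATIC_DRAW because it never changes
    glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, sizeInBytes, pixels, GL_STATIC_DRAW);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
}

// done every frame
void drawBackground()
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pboId);
    // with a PBO bound, the last argument to glDrawPixels is an offset
    // into the buffer, not a client pointer -- 0 means start of buffer
    glDrawPixels(2818, 434, GL_BGR_EXT, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
}
```

In your program you would pass &background[54] and 2818 * 434 * 3 for the size, skipping the BMP header the same way your current glDrawPixels call does.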

Thanks for the great reply, this gives me lots to dive into!

glPixelZoom(pixelZoom, pixelZoom);

I think it could be that. I remember using that before and it destroyed the FPS of a program I was writing.

You’d be better off uploading the image as a texture, then just drawing a textured quad.
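Roughly like this (an untested sketch; bgTex and the function names are made up, and it assumes the hardware accepts a 2818×434 NPOT texture via ARB_texture_non_power_of_two, otherwise the image would need padding up to 4096×512):

```cpp
GLuint bgTex;

// done once: upload the bitmap into a texture object
// (pass &background[54] to skip the BMP header)
void initBackgroundTexture(const GLubyte* pixels)
{
    glGenTextures(1, &bgTex);
    glBindTexture(GL_TEXTURE_2D, bgTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2818, 434, 0,
                 GL_BGR_EXT, GL_UNSIGNED_BYTE, pixels);
}

// done every frame: draw one quad instead of streaming ~3.6 MB of pixels
void drawBackground(float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, bgTex);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(x,     y);
        glTexCoord2f(1, 0); glVertex2f(x + w, y);
        glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
        glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```

Scrolling then becomes a matter of moving the quad or the texture coordinates, instead of shifting the raster position with glBitmap each frame.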