glMapBuffer CPU Usage Peaking

I currently use a ping-pong method on two VBOs that contain vertex attribute data. I update the data every frame and specify the usage hint as GL_DYNAMIC_READ in glBufferData.
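Roughly, the ping-pong looks like this (a simplified sketch rather than my actual code; vbo, writeIndex, attributeData, and VertexTotal are illustrative names):

GLuint vbo[2];        // the two VBOs being ping-ponged
int writeIndex = 0;   // which buffer receives this frame's update

void UpdateAndDraw(const float* attributeData, int VertexTotal)
{
    // Upload this frame's attribute data into the "write" buffer.
    glBindBuffer(GL_ARRAY_BUFFER, vbo[writeIndex]);
    glBufferData(GL_ARRAY_BUFFER, VertexTotal * sizeof(float),
                 attributeData, GL_DYNAMIC_READ);

    // Draw from the other buffer, which was filled last frame.
    glBindBuffer(GL_ARRAY_BUFFER, vbo[1 - writeIndex]);
    // ... set attribute pointers and issue the draw call here ...

    writeIndex = 1 - writeIndex;   // swap roles for the next frame
}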

I use each float attribute value in the vertex shader as a lookup coordinate into a 1D texture that holds a color scheme. I then assign the RGBA value from the texture lookup to a varying vec4 colors, which is used in the Phong lighting computation in the fragment shader.

When I use the two VBOs on their own, my CPU usage doesn’t go above 2% and the rendering of my brain model is fine. But when I actually use the float values to assign colors, either through the texture lookup or by assigning each element of the vec4 to the varying vec4 colors in the vertex shader, my CPU shoots up to 80%.

Any ideas?

I’ll post any code needed but I wasn’t sure which is most pertinent at this time.

What hardware are you on? What OS? What drivers? Etc.?

With the info you have given it sounds like something is falling off the fast path and going into software when you use floats.

Intel Mac Mini, Windows XP
NVIDIA GeForce 9400M, driver version 6.14.11.8585
Running Windows XP natively via Boot Camp

They are the newest drivers, to my knowledge. Any clues?

Now, when I am running in debug mode, mapping the buffer is fine. But when I edit the data in the buffer with the for loop, my CPU jumps from 5% to 55%.

float* pData = (float*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
for (int i = 0; i < VertexTotal; i++)
{
    pData[i] = 0.5f;   // writing through the mapped pointer is what spikes the CPU
}
glUnmapBuffer(GL_ARRAY_BUFFER);

I should add that when I just iterate through the for loop and perform multiplication inside it (without writing to the mapped buffer), my CPU only goes to 5-7%, even with VertexTotal at 650,000.
When I perform the

pData[i] = 0.5f;

the larger the value of VertexTotal, the higher my CPU usage goes. Is this normal behavior when ping-ponging two VBOs? Any help is appreciated, since I have no experience with CPU and GPU synchronization.

Based on your update…

Perhaps try calling glBufferData with NULL before mapping it.

This releases the old data on the GPU side (but does not immediately get rid of it), and it should stop the CPU side from stalling, which is what I think may be happening based on your more recent description.

When the GPU is finished with the old buffer data it will clean it up for you, so don’t worry about it. Your new buffer data then becomes the actual buffer; the old one is gone. Rinse and repeat each frame.

This is fundamental to getting this kind of buffer flip flopping to work.
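For example, something along these lines (just a sketch to show the ordering; vboID and VertexTotal are placeholder names, and the size and usage hint should match whatever the buffer was created with):

glBindBuffer(GL_ARRAY_BUFFER, vboID);

// Re-specify the store with NULL first: this "orphans" the old contents, so the
// driver can hand back fresh memory instead of making the CPU wait for the GPU
// to finish reading the previous frame's data.
glBufferData(GL_ARRAY_BUFFER, VertexTotal * sizeof(float), NULL, GL_DYNAMIC_READ);

float* pData = (float*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (pData)
{
    for (int i = 0; i < VertexTotal; i++)
        pData[i] = 0.5f;
    glUnmapBuffer(GL_ARRAY_BUFFER);
}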

I’ll try calling glBufferData with NULL before I use glMapBuffer.
It sounds like a reasonable explanation for the CPU stall, but unfortunately I won’t be near my workstation until Monday. I’ll post an update then; it would be great if you don’t mind taking a look on Monday.

Thanks scratt!

No worries. I’ll keep an eye out. I’d like to know how it goes. :)

scratt,

I just realized I was only calling glBindBuffer prior to glMapBuffer, i.e. glBindBuffer(GL_ARRAY_BUFFER, VBO_ID). Should I be making a glBufferData call each frame? And where would I pass the pointer to the data array?

Assuming you are ping-ponging correctly as you describe at the top of the thread, just make sure you call glBufferData with NULL before you map the next buffer. You are effectively cutting your ties with the old data (which is on the GPU at that point) and letting the GPU do its own thing with it.
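So per frame the sequence would look something like this (a sketch only; bufferSizeInBytes is a placeholder). Note that the data pointer passed to glBufferData stays NULL; you still write the actual values through the mapped pointer:

glBindBuffer(GL_ARRAY_BUFFER, VBO_ID);                                    // bind as you do now
glBufferData(GL_ARRAY_BUFFER, bufferSizeInBytes, NULL, GL_DYNAMIC_READ);  // orphan the old store; no data pointer
float* pData = (float*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
// ... fill pData with this frame's values ...
glUnmapBuffer(GL_ARRAY_BUFFER);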

If you were working on OS X, rather than XP, you could look at some of Apple’s rather sexy extensions like:

http://www.opengl.org/registry/specs/APPLE/flush_buffer_range.txt

You would also perhaps find the OpenGL implementation a little better on OS X. That’s purely subjective from my POV and has nothing to do with your issue now. :)

If you were working on OS X, rather than XP, you could look at some of Apple’s rather sexy extensions like:

http://www.opengl.org/registry/specs/APPLE/flush_buffer_range.txt

Is that anything like the glFlushMappedBufferRange function that comes with ARB_map_buffer_range and is standard in OpenGL 3.x?
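For reference, with ARB_map_buffer_range / GL 3.x I believe it would look roughly like this (untested sketch; rangeBytes is a placeholder for however many bytes you actually touch):

float* p = (float*)glMapBufferRange(GL_ARRAY_BUFFER, 0, rangeBytes,
                                    GL_MAP_WRITE_BIT | GL_MAP_FLUSH_EXPLICIT_BIT);
if (p)
{
    // ... write only the part of the buffer you changed ...
    glFlushMappedBufferRange(GL_ARRAY_BUFFER, 0, rangeBytes);  // tell GL which range was modified
    glUnmapBuffer(GL_ARRAY_BUFFER);
}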

It may be. I have not read that spec yet.

I think the rather nice Apple-only ones are all things that Apple wanted, and at the time they came about, some of the extensions that are now in GL3.x were still on the table with the ARB.

I guess that’s a bit like some of the sexier NVidia ones too.

Looks like Apple has been busy; fresh crop of extensions in the registry…
http://www.opengl.org/registry/specs/APPLE/texture_range.txt
http://www.opengl.org/registry/specs/APPLE/float_pixels.txt
http://www.opengl.org/registry/specs/APPLE/vertex_program_evaluators.txt
http://www.opengl.org/registry/specs/APPLE/aux_depth_stencil.txt
http://www.opengl.org/registry/specs/APPLE/object_purgeable.txt
http://www.opengl.org/registry/specs/APPLE/row_bytes.txt

Cool. Thanks for that… I’ve just finished reading all the new iPhone ones… This is going to make me want to play on a grownup GPU again!!

By God I’m going to break down and buy a Mac one of these days. ;)

Most of those extensions are terrible. float_pixels is nothing more than a worse version of the floating-point behavior GL3.x already has as standard. texture_range is too low-level to do anything with. aux_depth_stencil is a bad extension; if you want multiple depth buffers, expose them in FBO form.

The difference is we have them now available on all platforms that can handle appropriate extensions.

Who is “we”? You only have them if you’re developing for a Mac. Further, you already have the decent ones if you’re developing for PC through GL 3.x or through already existing extensions.

I think Mac has joined forces with Hulu in an evil plot to destroy the world.

I think it must just be your tone, but you are quite irritating, aren’t you?
My point was simply a reaction to your almost childish “Meh, all those extensions are crap!” statement.

GL3.x, AFAIK, is not 100% stable or available to many, many PC users, just as it is not available at all to anyone on OS X yet…

However, a lot of really useful and well-thought-out extensions, which you yourself say are only available in GL3.x, are available on OS X now (across all of Apple’s machines) and will continue to be available when GL3.x becomes part of OS X. Furthermore, they were available some time ago and were designed to work well with OS X and its own graphics systems. That’s a win-win for people who use OpenGL on OS X, and as an OS X dev or user you are guaranteed to have those features if your HW supports them. What’s more, if you want more bleeding-edge drivers (à la more 3.x-type things), there is always the opportunity to become a developer and choose to be exposed to earlier releases. That’s all.

Overall, that’s a better situation for developers on that platform than having to worry about which GL driver / context / version they are running, deal with all the various teething problems that are very well discussed on these forums, and support the hundreds of different flavours of drivers and OSs out there in PC land.

There are a lot of valid criticisms of Apple’s OpenGL implementation and the frequency of updates, which Apple devs know all too well, but yours is not really one of them.

Really Alfonso, the more I see of you here, the more I get this image of a troll in my head. My problem is that you’ve come into a thread where we were actually trying to help someone, taken my comment about OS X / XP totally out of context (I was simply referring to the fact that the OP was using XP on an Intel Mac), and then gone off on some kind of tangent without actually providing anything useful to this thread and its original subject.

That’s absolutely right. ;)