Does anyone know how to tell whether the memory returned by glMapBufferARB is actually memory the graphics board can use directly? I heard someone claim Nvidia's OpenGL just returns system memory. Also, is there any difference between mapping a streaming buffer on AGP versus PCI Express hardware?
My other question: what is generally the best performance, relative to raw triangles/sec, that you can expect from streaming VBOs, assuming triangle lists, 2-byte indices, no lighting/texgen, pixel-sized triangles, etc.? I'm getting 60% of peak on a Radeon 9200 (peak 63M tris/sec), but on a Radeon X300 (peak 162M tris/sec) I only get 9% of peak. Could this low performance have anything to do with PCI Express?
BTW, my benchmark just draws a list of really small identical triangles, with the number of indices per batch equal to the value glGetIntegerv reports for GL_MAX_ELEMENTS_INDICES.
Appreciate your input.