Using CVA and/or VAR

Hi!

My environment is Linux, but I think it's better to ask here.
I'm now using CVA and/or VAR correctly, and there seems to be no problem with them. However, I'd like some information about their environment:

  1. I had successfully rendered a scene (more than 13,000 triangles, not stripped, with texture mipmapping and simple lighting) at more than 160 frames per second. Now it's only about half that, and I don't see why. I reinstalled my system in between. Could it be due to the OpenGL libraries?

  2. It appears my GeForce2 MX cannot use the AGP Fast Writes option. What impact could this have? (Perhaps I can only read data over AGP, but not change it…)

Thank you.

About only half of what?

13000 triangles times 160 frames == 2 million triangles, or 6 million verts (with vertex caching, more like 3 million).

position 3f, normal 3f, texture 2f == 32 bytes per vertex -> 64 MB/s.

I believe you are fill rate bound. Try running your code with glCullFace( GL_FRONT_AND_BACK ).
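The suggested fill-rate test looks like this as a fragment (it assumes you already have a GL context and a working draw loop; this only changes culling state):

```c
/* Cull everything: geometry is still transformed and lit,
   but nothing is rasterized. If the frame rate jumps way up,
   you were fill-rate bound. */
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT_AND_BACK);

/* ... draw the scene exactly as before and compare frame rates ... */
```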

Also, at 160 frames per second, per-frame overhead really hurts. Try quadrupling the complexity (but not fill needs) of your scene.

I get about 90 frames per second now.
So it may be due to my OpenGL libs, or maybe because I was not using materials before.

I'll try to quadruple the complexity: so I just need to set glCullFace(GL_FRONT_AND_BACK) and draw my objects twice.

So, I may draw around 50,000 triangles per scene? That may be hard to sustain…

You seem to type only half of your thoughts. If you don't put your context in writing, nobody but you will actually understand what you're saying (or asking).

Anyway: the more triangles per frame you draw, the lower your frame rate will be. However, as there is overhead per frame, the lower the frame rate, the more triangles per SECOND you’ll be able to pump through. The benchmark numbers (vertices per second etc) for these cards are probably taken at 10 fps or lower.

If you find that vertex size is becoming a problem, you can always submit vertices using shorts and the appropriate scaling matrices. However, given your initial data, you seem to be far away from that.

When I said to quadruple your complexity, I didn't say to submit more vertex arrays. There is overhead per vertex array, too. Instead, tessellate all your geometry finer to measure where the problem is. Or just turn off rasterization.

Thanx,

The big problem for me is that I don't have Internet at home, so I have to wait until I'm at my parents' place, without my computer and without Linux. So sometimes I can't explain things under the best conditions… That's a shame, I know; sometimes you might think I'm a fool or something…
I plan to get Internet at home, which will change a lot, I'm sure. But for now…

So, I did not quite understand: “When I said to quadruple your complexity, I didn't say to submit more vertex arrays. There is overhead per vertex array, too. Instead, tessellate all your geometry finer to measure where the problem is. Or just turn off rasterization.”

I don't understand what the overhead per VA is… could you explain more, please?

Anyway, thanks a lot.