Hey, everyone.
I’m currently integrating the ATI extensions into the CAD engine I’m working on, and I’m running into some performance problems.
~1,000,000 triangles (lit, non-textured, vertex-and-normals only, GL_FILL not GL_LINES, etc.) in my seemingly optimal case gives me about 50 fps. That’s
nice. I’m happy. ATI rules!
However, when I load pretty much the same model (proprietary format, sorry) with a slightly different tristrip layout, I fall out of the “Fast” render path and I end up with 400 ms render times. I’m not so happy.
So I’m trying to analyze what the heck causes this. I even have one 90k-triangle model that runs at 140 ms. It’s absurdly slow.
So, obviously the tristrip layout is not very good in the “bad” cases, and since the models are quite large, it’s tricky to analyze the actual layout by hand (although I will start on that next week; I’m away from the code right now and it’s driving me mad).
What I’m wondering about is: What could cause this?
Since the 90k model is much smaller than the 1M model but takes about seven times longer per frame (140 ms vs. 20 ms — roughly 0.65M triangles/s against 50M triangles/s), something is flaky.
Does anyone know what recommendations ATI has put out? Max tristrip length? Minimum tristrip length? Max number of tristrips? Do collapsed (degenerate) triangles in longer tristrips matter? Max index count? Anything? Or is my Radeon 9700 Pro simply busted?
Since the data format is the same in both cases (vertex and normal array objects in
conjunction with the element array extension, etc.), I doubt that the data format is the problem. (I.e., byte alignment etc. shouldn’t be the root cause of this, right?)
OK, hopefully someone understands what I’m trying to say here. I’m basically falling off the superfast rendering path and end up plodding along the less-trodden paths of the card/driver.
Thanks for any help!
/Henrik