
View Full Version : Want to try my demo?



WhatEver
03-12-2001, 05:43 PM
I've been working on a really cool model library that reads 3ds files. When I'm done I want to give it away for free for people who are not going to make a profit on the game it's used in.

When I'm near completion, I'll list all of the features.

There are 890 polygons in the model.

The top left number is the FPS. The number underneath is the time (left over from a previous demo).

Press the '~' for the console. The console lists a few commands you can try out.

I'd like to know the Video Card and Framerate for the 'uv' and 'none' modes.

Thanks http://www.opengl.org/discussion_boards/ubb/biggrin.gif.

Rm2 Demo 200KB (http://www.gamestead.com/rm2.zip)

Screenshot (http://www.gamestead.com/mangalor.jpg)

[This message has been edited by WhatEver (edited 03-12-2001).]

Nutty
03-13-2001, 05:01 AM
Nice...

I have PIII 800 with 32MB Geforce 256 running Win 2K.

Here are the frame rates I got. V sync was disabled.

UV 650
Color 634
Both 606
None 670

Hope that helps.

Nutty

Tim Stirling
03-13-2001, 10:31 AM
I have a p3 600, 128MB, and a TNT2 m64 32MB, running win98.

UV 410
color 370
both 330
none 440

Oh, and on my computer it was the "'" key (also the "@" key) and not the "~" key that enabled the console. Could this be the difference between US and UK keyboards?

Bob
03-13-2001, 01:12 PM
The program ran fine, but I couldn't get the console to pop up at all. Tried all/most keys, with and without shift/alt gr, but no console. Got a Swedish keyboard here.

Anyways, dual P3 733 and Win2k - no problems.

WhatEver
03-13-2001, 02:44 PM
UV 410
color 370
both 330
none 440

Woooo hoooo. Finally a TNT2 that runs with a more than fine framerate http://www.opengl.org/discussion_boards/ubb/biggrin.gif. All of my previous attempts at drawing models on a TNT2 had low framerates. All I did this time was align all the elements so there weren't so many vertices being transformed. Makes sense, but the GeForce doesn't seem to care.

Bob, my console uses the same key id uses for their console. I'm not familiar with UK KBs so I don't know what to say http://www.opengl.org/discussion_boards/ubb/frown.gif. Try using the keys Tim Stirling mentioned.

Thanks for testing my demo guys. Don't stop testing though 'cause I'd like to see the performance with a few other video cards.

NOTE: if the program doesn't start up, it means your Video Card doesn't support the glLockArraysEXT and glUnlockArraysEXT extensions.

Eric
03-13-2001, 10:47 PM
Originally posted by Nutty:
I have PIII 800 with 32MB Geforce 256 running Win 2K.

Thought you had a brand new Athlon 1Ghz without a gfx card and hence running in VGA mode ???

http://www.opengl.org/discussion_boards/ubb/smile.gif

OK, I am running a Dual P3 600Mhz with an ELSA Erazor X2 (GeForce 256 DDR) under Win2K + Detonator 10.80.

UV 566
color 527
both 479
none 632

By the way, I had to use the "@" key as well !

Regards.

Eric

Bob
03-14-2001, 01:29 AM
>>Bob, my console uses the same key id uses for there console

Yes, you do, but how do you detect that key? I can get the console to work in all of id's games, but that is not because I press the '~' key. To get the console in Quake I press whatever character is mapped to that key (the key left of '1' and below Escape). The same physical key, but different characters mapped to that key.

Maybe you try to detect keys using the character mapped to the key, but this is not so good. To get the same physical key on all keyboards, you should use the actual keyboard scan code: the code generated by your keyboard, which the OS in turn uses to map to different characters. Scan codes are identical on all/most keyboards.

WhatEver
03-14-2001, 04:03 AM
That is what I do. Here's my code showing the scan code, not the ASCII code:

case WM_KEYDOWN:
{
    switch(wParam)
    {
        case 192:
        {
            Console.ToggleDrop();
            break;
        }
    }
    break;
}

People from the UK have tested my demo before and they've never said they were having any problems :/. I wonder what's wrong?

mellow
03-14-2001, 04:16 AM
hi

mines athlon 800mhz GeForce2 GTS

uv 1226
color 942
both 800
none 1405

I also had a problem with the console key (i'm using a UK keyboard), i had to press the ' (Sh-' => @) key for the console.

mellow

Nutty
03-14-2001, 05:03 AM
Yes Eric I do have an athlon.. I ran it on my work machine.

still waiting for them geforce 3's... http://www.opengl.org/discussion_boards/ubb/smile.gif

Nutty

Bob
03-14-2001, 06:35 AM
MSVC 6 documentation says that when a WM_KEYDOWN message is posted, wParam contains the virtual key code of the key pressed. The virtual key code is, as far as I know, the value after the device driver has decoded the scan code, using the selected keymap. So the tilde can be different physical keys on different keymaps.

lParam bit 16 to 23 contains the scan code, i.e. the physical key on the keyboard.

And the application might work on UK keyboards, I'm not gonna argue about that. But as I said, I'm on a Swedish keyboard.

Anyways, to get a tilde, I have to press AltGr + the key left of Enter (the upper one of the two) followed by a space, and that does not bring up the console in your application.

Tim Stirling
03-14-2001, 10:00 AM
Originally posted by WhatEver:

People from the UK have tested my demo before and they've never said they were having any problems :/. I wonder what's wrong?

I didn't think there was much of a difference between US and UK keyboards.

Tim Stirling
03-14-2001, 10:03 AM
Originally posted by mellow:
hi

mines athlon 800mhz GeForce2 GTS

uv 1226
color 942
both 800
none 1405

I also had a problem with the console key (i'm using a UK keyboard), i had to press the ' (Sh-' => @) key for the console.

mellow

Those are scary framerates, http://www.opengl.org/discussion_boards/ubb/eek.gif

WhatEver
03-14-2001, 01:22 PM
I'm glad you told me that Bob. I thought I knew quite a bit, but this goes to show you there are always little details missed.

Thanks for the benchmarks. If anybody has a 3ds loader, and if it isn't too much trouble, I'd like you to benchmark the same model so I can see how my drawing speed differs from yours.

I've been studying OpenGL trying to get the best performance I could get from it, so far I'm pleased, but if one of yours blows mine away, I'll just know there's room for more optimization.

I'm releasing the library to the public, so we wouldn't want a slow model drawing routine, would we http://www.opengl.org/discussion_boards/ubb/wink.gif?

philippe
03-14-2001, 01:36 PM
Just got the message
"Could not create OpenGL context"
and a crash.

Pentium 3 733, W2K SP1, TNT, Detonator 3 (650) driver from NVidia.

Doesn't sound too good, huh ?

Bob
03-14-2001, 01:56 PM
Well, I got stuck in the console-not-working discussion, so I forgot to report my results http://www.opengl.org/discussion_boards/ubb/smile.gif

Could only test the "default setting", for known reasons - 75 fps w/vsync, hehe.

Adrian
03-14-2001, 03:12 PM
Philippe,

I've seen that problem before, I think it is caused by the 6.5 detonators not supporting older versions of GLUT. Try updating your glut32.dll to the one here http://www.xmission.com/~nate/glut.html

HenryR
03-14-2001, 03:12 PM
Hi,

None - 1615
UV - 1460
Color - 1171
Both - 1018

GeForce 2 Ultra on an Athlon 900, running 6.50 drivers on W2K Pro.

Again, English keyboard - problem with console. Pressing the ' key (which on my keyboard is two to the right of the 'L' key) brings down the console. Also, the fact that the keypress that brings down the console gets sent to the console kept making me mistype stuff http://www.opengl.org/discussion_boards/ubb/smile.gif

HTH,

Henry

Heaven
03-14-2001, 05:44 PM
Now for the low end!

Intel Celeron 366 (5.5x66 Mhz bus - I usually run it at 5.5x83 but I'm giving it a break) with a, get this you GeForce geeks, 3dfx Velocity 100, or the equivalent of an 8 Mb Voodoo 3. The following are eyeballed averages.

none (no tex no color)=156
uv (texture)= 137
color+smooth shading = 73
color+flat shading=75
uv+color (tex + color)=66

I've no problems with the tilde key (~). Nice console. http://www.opengl.org/discussion_boards/ubb/smile.gif

On another note, your program has left me in a 640x480 desktop and not the 1024x768 desktop I started with. http://www.opengl.org/discussion_boards/ubb/frown.gif

I've noticed that none of NeHe's tutorials would properly preserve my desktop resolution either. Not even when I tried the "fullscreen fix" tutorial. If anyone could shed some light on this I'd be most appreciative.

Nice demo! A 3ds library would be most useful. Does it have display functions, like GLshow3DS() or does it just read them for you and convert to a list of tris?

Care,
Heaven (heaven@knology.net)

WhatEver
03-15-2001, 04:42 AM
GAH! Bugs http://www.opengl.org/discussion_boards/ubb/eek.gif

I am using glut alright. I use it for the mipmapping and setting up the perspective.

I have a way of getting the original res back, but I'll have to put it here after work 'cause I'm low on time.

Thanks for the compliment on my console http://www.opengl.org/discussion_boards/ubb/biggrin.gif. I'm using raster fonts right now; they're beyond slow so I'll convert it to textures in the near future.

What my library does is read the 3ds file and place all the contents useful to a game model into a ready-to-use class. All you really need to do to open a 3ds file and then draw it is this:

rm_model Rm2Model;

//ignoring the return value for simplicity
Rm2Model.Read3ds("parts.3ds");
TextureNames = Rm2Model.GetTexNames(&nTextureNames);
Rm2Model.Draw(TextureNames);

//some other things you can do
Rm2Model.MakeSmooth();
Rm2Model.MakeFlat();
//set one object to draw an environmental map
Rm2Model.SetEnv("objectname", true);
//to draw a model with an envmap
//I prolly could have overloaded the func :p
Rm2Model.DrawEnv(TextureNames, EnvMapName);

WhatEver
03-15-2001, 02:16 PM
Here's how you make sure the original settings are restored.

//First you create a global DEVMODE variable:
DEVMODE devMode;

//then you save the current mode
EnumDisplaySettings(NULL, ENUM_CURRENT_SETTINGS, &devMode);

//now restore when you're all done
ChangeDisplaySettings(&devMode, 0);

holocaust
03-15-2001, 03:39 PM
Ok, the 'official' quake console key under direct input is 0x29 aka DIK_GRAVE ;)

WhatEver
03-15-2001, 07:14 PM
That's prolly the same one for the lparam too.

Man, I still use windows messaging :/. I need to get a DirectInput lib going on some time.

RandyU
03-15-2001, 08:19 PM
It runs on my W98+Celeron 366 + Riva 128 ZX 8mb with such results:
UV - 121
color - 86
Both - 82
none - 132

billy
03-16-2001, 12:42 AM
Hi,
I could not activate the console because my keyboard is not US or UK. With the default settings I had:

60
9

In the top left corner.

I have a Pentium III 733MHz with 512 Mbytes RAM & an nVidia TNT2 Pro 32 Mb.

[This message has been edited by billy (edited 03-16-2001).]


BwB
03-16-2001, 01:12 AM
PIII 800 Coppermine 320mb RAM GeForce 2 MX 32mb

UV 542
Color 377
Both 325
None 650

Love the console http://www.opengl.org/discussion_boards/ubb/smile.gif You should make a program loop that times all four (or eight if you wanna take into account smooth vs. flat) modes and spits out the result into a text file... But I guess I'm just the lazy type, I actually had to go searching for one of them... whatcha callits.. writing utensils http://www.opengl.org/discussion_boards/ubb/smile.gif

Madoc
03-16-2001, 11:04 PM
pIII 800 cm, geforce DDR

uv 1195
col 770
both 650
none 1400

How come colours are so slow? What format are you using for the colours? Float?
The UV perf is really good. What optimisation methods do you use? CVA? VAR?
Are you using a single directional hardware light?

Madoc
03-16-2001, 11:06 PM
Oh yeah, I tried loading some higher tess models. It works fine with a few k polys but when the count gets high the demo crashes. Any idea why? Have you already tried it with high-polygon models?

WhatEver
03-17-2001, 08:03 AM
Madoc, I looked into the problem. It seems the vertex aligning algorithm is taking quite a bit of time to run through all similar vertices. The program never locked, it's just taking forever to process :/. I guess we'll have to stick to small models for now.

In the meantime, I'll try to improve it.

Wulf
03-17-2001, 09:01 AM
I get "an invalid set of flags was passed in"

WhatEver
03-17-2001, 09:26 AM
What vid card do you have, Wulf? I'm assuming if it can't get past the display change, your card couldn't handle the demo anyway http://www.opengl.org/discussion_boards/ubb/frown.gif.

WhatEver
03-17-2001, 09:37 AM
Originally posted by Madoc:
How come colours are so slow? What format are you using for the colours? Float?
The UV perf is really good. What optimisation methods do you use? CVA? VAR?
Are you using a single directional hardware light?

Whoa! I totally missed these questions http://www.opengl.org/discussion_boards/ubb/eek.gif.

Colors are 4 floats...and your card prolly isn't optimized for colors.

From what I understand about CVAs, they're useless for this demo since there is only one pass for the skin, but they're used anyway.

I am using one directional light for the demo.

I don't know what VAR is.

Here's sample code:

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, Meshes->Vertices);

if(DrawingState & RM_NORMAL_ARRAY)
{
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, 0, Meshes->Normals);
}
else
{
    glDisableClientState(GL_NORMAL_ARRAY);
}

if(DrawingState & RM_TEXTURE_COORD_ARRAY)
{
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, UVcoords);
}
else
{
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}

if(DrawingState & RM_COLOR_ARRAY)
{
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_FLOAT, 0, Colors);
}
else
{
    glDisableClientState(GL_COLOR_ARRAY);
}

if(glLockArraysEXT)
    glLockArraysEXT(0, Meshes->nVertices);

//loop through triangles: loop not shown
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, VertexIndices[i]);

if(glUnlockArraysEXT)
    glUnlockArraysEXT();

No one has explained to me what CVAs and VARs are yet. I've asked, but their descriptions made no sense to me; they just said what it was, not how you accomplish it. Maybe you can explain it to me.

There is one trick I'm using to get the speed up. The GeForce doesn't seem to care about it though. I remove as many repeated elements as I can. This process is really slow when the poly count is high though.

[This message has been edited by WhatEver (edited 03-17-2001).]

Wulf
03-17-2001, 01:26 PM
I've got a voodoo 3 3000

WhatEver
03-17-2001, 02:53 PM
I have a Voodoo3 3000 and it works fine.

Have you run GLsetup?

Madoc
03-17-2001, 03:24 PM
You ought to get better perf (at least on NVIDIA cards) by using unsigned bytes for your colour components. I wouldn't say the geforce is unoptimised for vertex colours.
If you are using vertex colours and lighting, I guess you're using ColorMaterial. That would explain the performance hit.

VAR, or NV_vertex_array_range, allows you to allocate vertex memory in video or uncached AGP memory. It can really speed up vertex transfers.

Have you not tested the effect on performance of CVA?
If you really don't know, CVA is supposed to store a processed form of vertices (mostly T&L OPs) when they are first used so that subsequent calls to DrawElements (or DrawArrays really) may reutilise them without reprocessing.

How do you arrange your triangles? (I think this is the clue to your demo's performance)
What do you mean by remove repeated elements? Welding vertices and uv coords? Is that what takes so long?

WhatEver
03-17-2001, 03:40 PM
Yeah, I guess you could call it welding. Every corner of every polygon tries to share a single vertex, color, normal, and uv.

I'm using color material alright.

The triangles aren't arranged at all :/. I don't know how to. They basically keep the same order they were saved in the 3ds file.

So when I use: glLockArraysEXT(0, Meshes->nVertices);
I'm utilizing Compiled Vertex Arrays?

What does VAR stand for?

What does OPs stand for?

I've tested the performance of the Lock Arrays extension and it doesn't make my models render any faster at all :/, even with a second pass executing between the Lock and Unlock calls.

WhatEver
03-17-2001, 03:43 PM
I'm pretty sure I fixed the crashing problem. One of the variables was overflowing. Another bug was that if the uv coords were not saved in the 3ds file it would lock up. I'll upload the newer version later though.

Wulf
03-17-2001, 06:28 PM
GLsetup? Nope, it crashes during the install.

Madoc
03-18-2001, 02:12 AM
I'm surprised such a simple approach yields such high performance. I wonder what order the triangles in the 3DS file are in. I suspect that on many cards it is not such an optimal approach.

ColorMaterial is certainly what slows the geforce down so much.

I explained what VAR is : Vertex_Array_Range, an nvidia extension.
OPs is simply short for operations.

On NVIDIA hardware only one vertex format is optimised for CVA: t2f_t2f_c4ub_v3f (quake3 format). Unless you are using this format it's unlikely CVA will do anything but slow you down a tiny bit.
What card are you testing on?

WhatEver
03-18-2001, 05:12 AM
VAR or NV_Vertex_Array_range

LOL, call me dense http://www.opengl.org/discussion_boards/ubb/smile.gif. The "or" made me not put two and two together, plus I've been really tired for the past few days.


Unless you are using this format it's unlikely CVA will do anything but slow you down a tiny bit

Yep, by two frames :/.

I'm using a Geforce DDR, Geforce2 and a Voodoo3 3000.

I don't have 3D Studio. I save my models from Rhino3D. Maybe they order their polys efficiently http://www.opengl.org/discussion_boards/ubb/biggrin.gif. Oh yeah, that mangalor (parts2) came with UView, so maybe that particular model was already optimized. I did load it into Rhino3D and explode the model though, so Rhino3D reconstructed the poly order.

I know what a CVA is now.

So what's better; CVA or VAR? I can see VAR being better for NV cards, but CVA would be supported by more cards.

So, is there a better way to color sections of a model other than vertex colors using color material? I'm gonna update my lib to only support GL_UNSIGNED_BYTE since you said it's faster.

WhatEver
03-18-2001, 05:23 AM
Taking things apart: t2f_t2f_c4ub_v3f

//are there two textures because one is for base
//and the other for lightmap?
t2f = texture 2 floats
t2f = texture 2 floats
c4ub = color 4 unsigned bytes
v3f = vertex 3 floats

Where are the normals?

Whoa, I just looked at the red book. It appears that t2f_t2f_c4ub_v3f is only used for interleaved arrays.

WhatEver
03-18-2001, 10:28 AM
I just found out that there are two bottlenecks in my model optimizers.

First, the aligning (or welding) process is very slow.

Second, the smoothing process is even slower.

I've taken out the aligning method from the 3ds loading routine so the programmer can have the choice to run it or not just before he saves his own file format.

I'll upload the latest version later today so you guys can mess with it some more. It loads high poly models really quick now...but be aware that smoothing and aligning are very slow.

Hopefully I'll have a lib ready for any of you interested in using the beta of my model library. A sample program using my lib will have to wait awhile though.

Lars
03-18-2001, 02:01 PM
//loop through triangles: loop not shown
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, VertexIndices[i]);

Do I understand it right: you do a glDrawElements call for each triangle?
You get a large function-call overhead with this; why not build one index list for the whole model, and then draw the complete model with one call to glDrawElements?

And did I understand it right that t2f_t2f_c4ub_v3f is only optimized for interleaved arrays? Because rendering with it works, but my colors are in a separate array.

Lars

WhatEver
03-18-2001, 02:11 PM
lars, you just gave me an idea http://www.opengl.org/discussion_boards/ubb/biggrin.gif. I knew about that, but I just thought of something else to do too http://www.opengl.org/discussion_boards/ubb/biggrin.gif.

I originally did it that way so I could have more than one texture per object. My new idea is to just group the polygons with similar texture names together, and then send the groups through glDrawElements. It'll require a process to group the polygons, but it should be worth it for sure.

WhatEver
03-18-2001, 02:29 PM
With a model consisting of 376 polys drawn 40 times, I get 115 FPS with one single call to glDrawElements, and 87 FPS with nPolygon calls to glDrawElements.

You know what that means http://www.opengl.org/discussion_boards/ubb/smile.gif. That means I'm gonna optimize some more...muahahahaha. Time to put on my thinking cap. I'll think about it at work tomorrow...shouldn't be too hard grouping by texture name. One drawback may occur though: the polygons may be ordered a certain way, so if I draw them out of order, it could eliminate any performance that was already there :/. You win some, you lose some.

[This message has been edited by WhatEver (edited 03-18-2001).]

Lars
03-18-2001, 02:29 PM
Ok, I just tried the model with my own loader and viewer; of course it has optimized the model by smoothing some parts (don't know if it smoothed the right way; it doesn't use the 3ds settings).
But it rendered at about 300 FPS, with texturing and one vertex light, with my GF2MX on a 350 MHz PII.
With your viewer I get 140 with uv, color and both.

I made two Screenshots and put them on my site
Textured Version :
http://userpage.fu-berlin.de/~larswo/parts2.jpg

Untextured Version :
http://userpage.fu-berlin.de/~larswo/parts2%20no%20texture.jpg

I hope it helps you

Lars

WhatEver
03-18-2001, 02:33 PM
To do a more accurate test you'll need to match the video res.

Let me zip up my latest creation, and then get the FPS from it...hold on.

WhatEver
03-18-2001, 02:42 PM
All you need to do to get a comparable framerate is this:

-Render in Full Screen mode at 800x600x32

WhatEver
03-18-2001, 02:44 PM
Oh yeah, dl the latest rm2.zip through the same link I provided for the original.

Lars
03-18-2001, 02:53 PM
Can't change the res in my utility, but my game engine (which uses the same rendering backend) isn't affected by resolution, so it should be the same for my tool, 'cause my P2 350 is too weak to sustain enough triangles to kill the fillrate of my GeForce.

When I load your mesh into my game, performance drops to 150 FPS, but there is much more overhead because of octree building, cockpit drawing and so on.

Lars

Lars
03-18-2001, 02:58 PM
250 FPS in all cases, so this should make rendering performance approximately equivalent, 'cause my loading routine reduced the number of vertices, but I have some overhead through windowing :-)

Keep up the work.

Lars

WhatEver
03-18-2001, 03:04 PM
If you're loading the same model, and keeping the same order of polygons the 3ds file came with, then our routines should be equivalent, except for my multiple calls to glDrawElements.

Madoc
03-19-2001, 12:21 AM
Originally posted by Lars:
But it rendered with about 300 FPS, with texturing and one Vertex Light with my GF2MX on a 350 MHZ PII.
With your viewer i get 140 with uv, color and both.


How can WhatEver's demo give 140 fps on your GF2MX when it runs at 1200 on my geforce 256?

Bob
03-19-2001, 03:35 AM
Come on, framerate is not only based on the graphics board. There are a few more factors, including CPU speed for example.

Madoc
03-19-2001, 10:43 PM
I'm afraid an 800MHz CPU hosting a geforce DDR cannot be 10 times faster than a 350 with a geforce 2 mx. In fact, in this case the CPU certainly makes very little difference; the geforce can do almost as well on a P2 266.
All his framerates are the same - vsync every second frame or something?

Alexei_Z
03-20-2001, 01:56 AM
Originally posted by Lars:
And did i understand it right, that t2f_t2f_c4ub_v3f is only optimized for interleaved arrays. because rendering with it works, but my colors are in a sepperate array.
Lars

Are you really talking about Interleaved Arrays? According to the OGL 1.2.1 specification there is no corresponding (t2f_t2f_c4ub_v3f) enumerant for glInterleavedArrays. The format is actually optimized for use with CVA.
Alexei.

Lars
03-20-2001, 04:17 AM
To the framerate:
It depends on whether he uses a format which is hardware accelerated; if not, my CPU has many disadvantages. For example, a PII doesn't have SSE or 3DNow!, which means all the vector operations are very slow.
3DMark shows this too: everything that is done on the CPU slows everything down dramatically. It just means the program may not be utilizing hardware T&L.

To the format:
No, I'm thinking more of vertex arrays in their base form. The data can be interleaved there too, by putting it in the same memory block and keeping everything together (vertex1, color1, uv1, vertex2, color2...).

I hope this is not necessary to gain hardware acceleration, 'cause it would be incompatible with my lighting engine at the moment.

Lars

[This message has been edited by Lars (edited 03-20-2001).]

Lars
03-20-2001, 04:39 AM
Ok, I just did a little test and commented out all the drawing calls (from glLockArraysEXT to glUnlockArraysEXT inclusive), and my framerate in my game engine goes up from 52 to 54 FPS.

So if my CPU had to transform those approx. 3000 vertices, it would have gone up much more.

Lars