View Full Version : Display Lists



Rodrix
12-11-2006, 10:28 PM
Hey there!
It's been such a long time since my last post!

I am back at programming now :)
One short question:
If I compile commands like glScalef or glRotatef into a display list, are they re-executed every time the list is called, or are the transformations baked in at compile time, so that calling the list draws a pre-scaled/pre-rotated version of the original polygons?

In other words, is it faster to call glScalef outside the display list, just before executing it, or to compile glScalef into the display list itself?


Hope it's clear enough.

cheers! :)
Rod

k_szczech
12-12-2006, 12:48 AM
Both can be true: a driver may optimize the vertex data that way, but in general the cost of glScalef, glRotatef, and glTranslatef is very low compared to the cost of rendering polygons, and multiplying the vertices by the matrix still has to be done, so I doubt anyone has put such an optimization into display lists.
These operations can be measured in nanoseconds, so I don't see a reason for such an optimization, and honestly speaking, I don't see why you would want to know that either :)

Relic
12-12-2006, 04:35 AM
In the end all your matrix calls boil down to a matrix multiplication. If you have multiple matrix calls in succession compiled into a display list, a clever implementation could calculate the resulting matrix and store only that.
Vertices inside display lists cannot be pretransformed, because they must retain the modelspace coordinates the user sent. Think of shaders working in modelspace!
It wouldn't help performance anyway, because in the end every vertex goes through the single matrix transformation defined by the current top-of-stack matrix, and that is what gets loaded into the hardware.
With enough vertices to transform, the cost of the matrix manipulations becomes negligible.

k_szczech
12-12-2006, 07:35 AM
they must retain the modelspace coordinates
Except for some cases (like drawing with no shaders, lighting, clipping, etc.). That's why it's doubtful that such an optimization exists, but theoretically it's possible.

Rodrix
12-12-2006, 11:49 AM
Originally posted by k_szczech:
neither I see the reason why would you want to know that :)
My idea was to apply all the glScale commands during display list compilation with glEnable(GL_NORMALIZE) in effect, and then at runtime call glDisable(GL_NORMALIZE) to gain performance (it wouldn't be needed, since the normals wouldn't be rescaled at runtime if the scaling were hardcoded into the display list at compile time).

Does it sound to you as a good optimization?
Thanks! :)

V-man
12-12-2006, 02:26 PM
If you have a glScale in your display list, then you could just scale the object yourself and get rid of that glScale.

Also, if the scale is uniform (glScale(5.0, 5.0, 5.0))
then glEnable(GL_AUTO_NORMALIZE) is better. I think I have the name wrong.
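Scaling the object yourself, as suggested above, amounts to a one-time pass over the vertex data at load time. A minimal sketch in plain C follows; the function name and the tightly packed x,y,z layout are assumptions for illustration, not anything from the thread. With a uniform scale the unit normals can be left alone, since their direction doesn't change and their length stays 1:

```c
/* Bake a uniform scale into the position data once at load time, so no
 * glScalef (and no normal renormalization) is needed when drawing.
 * Layout assumption: tightly packed x,y,z triples. */
#include <stddef.h>

static void bake_uniform_scale(float *positions, size_t vertex_count, float s)
{
    for (size_t i = 0; i < vertex_count * 3; ++i)
        positions[i] *= s;
    /* Unit normals are intentionally untouched: a uniform scale does not
     * change their direction, and their length remains 1. A non-uniform
     * scale would be different - normals would need the inverse-transpose
     * treatment and a renormalization. */
}
```

Because the scale lives in a small settings file rather than in the large vertex file, this pass can be applied right after loading, keeping one copy of the geometry on disk.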

Rodrix
12-12-2006, 05:09 PM
Originally posted by V-man:
If you have a glScale in your display list, then you could just scale the object yourself and get rid of that glScale.
Actually, I use two files for my 3D objects. The first contains the vertex, texture coordinate, and normal data, and the second contains the material properties and the scale of the object.
The problem is that sometimes I reuse the same 3D object with different textures or scales depending on the scenario. In other words, I have one vertex data file and sometimes many 'material and settings' files associated with each vertex file.
So if I baked the scale in myself (using, for example, 3ds Max), I would need several copies of the first file (the one that contains the vertex data, and which is usually large), one for each different setting I want.

...Anyway... maybe I won't gain that much performance by disabling GL_NORMALIZE, and just keeping the glScale in the compiled display list is a good idea?

What do you guys suggest?
Do you use glEnable(GL_NORMALIZE) in your professional projects?
Thanks! :)

Rod

Relic
12-13-2006, 12:28 AM
Except for some cases (like drawing with no shaders, lighting, clipping etc.) this is why it's doubtful that such optimization exists, but theoretically it's possible.
During display list compilation the implementation doesn't know the OpenGL state that will be current when glCallList(s) runs later on, which means the original user input values must remain available.


My idea was to apply all the glScale commands during display list compilation with glEnable(GL_NORMALIZE) in effect, and then at runtime call glDisable(GL_NORMALIZE) to gain performance (it wouldn't be needed, since the normals wouldn't be rescaled at runtime if the scaling were hardcoded into the display list at compile time).

Does it sound to you as a good optimization?
Not at all, because 1) glEnable doesn't affect the user input data stored in the display list, and 2) if you only compile the display list, that glEnable isn't even executed at compile time.
That doesn't mean you should use GL_COMPILE_AND_EXECUTE; that's one of the OpenGL things to avoid.

glEnable(GL_AUTO_NORMALIZE) doesn't exist and the GL_AUTO_NORMAL thing is for generation of normals in evaluators (glMap), not for normalization.



What do you guys suggest?
Do you use glEnable(GL_NORMALIZE) in your professional projects?
Yes, you should always glEnable(GL_NORMALIZE) when you use scaling together with the fixed-function pipeline's lighting, and let the hardware do the rest. Normalization comes almost for free on current hardware. You should have more important problems to solve in a modeler.

It's probably the best strategy to compile only geometry data into display lists, when it gets reused multiple times, and to leave the scaling and materials in immediate mode.

Rodrix
12-14-2006, 04:19 PM
Thanks guys!!!
All the feedback was really useful! :)

glEnable(GL_NORMALIZE) stays... moving to the next thing... ;)