Material per vertex

Hi all,

I would like to specify material properties (ambient, diffuse, etc…) per vertex, using vertex arrays. Is this possible and how? How about display lists?

Thanks,
GarlicGL

Is there a reason for using materials per-vertex? Doing it like that would be dog slow. Use vertex colors instead because you can use Color Arrays that way.

You can set certain vertex properties to track the color. These properties include Ambient, Diffuse, Ambient&Diffuse, and Specular colors. To do this, you should call glColorMaterial to set what you want the color to track. Then, call glEnable(GL_COLOR_MATERIAL) to activate it.
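
In code, that part is just this (a minimal fixed-function sketch; the ambient+diffuse tracking mode and the triangle are only an example):

```c
#include <GL/gl.h>

/* Minimal sketch: let glColor* drive the ambient+diffuse material term.
 * Assumes a GL context is current and a light is positioned elsewhere. */
static void draw_colored_triangle(void)
{
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);

    /* Pick which material property tracks the current color, then enable it. */
    glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
    glEnable(GL_COLOR_MATERIAL);

    /* From here on, glColor3f() effectively sets the ambient+diffuse material. */
    glBegin(GL_TRIANGLES);
        glNormal3f(0.0f, 0.0f, 1.0f);
        glColor3f(0.8f, 0.2f, 0.2f);   /* "material" for this vertex */
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glColor3f(0.2f, 0.8f, 0.2f);
        glVertex3f(1.0f, -1.0f, 0.0f);
        glColor3f(0.2f, 0.2f, 0.8f);
        glVertex3f(0.0f, 1.0f, 0.0f);
    glEnd();
}
```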

I forgot to mention that you should group like materials together…that way you don’t have to change material properties per vertex.

Thanks for the reply folks.

What I am given in the model is the material properties per facet. I don’t know yet how to use that with vertex arrays or display lists; I do know how to use colors and normals per vertex. I could compute the color per vertex from the light position, normal, eye position, and material properties, but that is computationally expensive (my eye position may change on a per-frame basis). I was wondering whether OpenGL allows me to specify material properties per vertex. If it does, I would specify them along with the normals and use them in vertex array rendering.

I’ll read more on GL_COLOR_MATERIAL though. Maybe that’s the way I want to approach it.

Thanks again,
GarlicGL

Hi all,

I’m wondering if I am doing this right: if I want to include material properties in my code using vertex arrays, I use glColorMaterial when I fill out my color vertex array, right? I don’t see where else to use it, since that’s the only place where I use glColor3f().

Another question: will display lists work when the eye position changes in the scene? Lighting would be fixed, but I fly around in the scene and change the camera position.

Thanks,
GGL

GarlicGL,

AFAIK the material properties will track the color during the glDrawArrays/glDrawElements call, if the color array is enabled.
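
Roughly like this with vertex arrays (just a sketch; verts/norms/cols and the count are placeholders for your own model data):

```c
#include <GL/gl.h>

/* Sketch: per-vertex "materials" via a color array plus GL_COLOR_MATERIAL. */
void draw_mesh(const GLfloat *verts, const GLfloat *norms,
               const GLfloat *cols, GLsizei vertexCount)
{
    glColorMaterial(GL_FRONT, GL_AMBIENT_AND_DIFFUSE);
    glEnable(GL_COLOR_MATERIAL);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glNormalPointer(GL_FLOAT, 0, norms);
    glColorPointer(3, GL_FLOAT, 0, cols);

    /* With the color array enabled, the tracked material property follows
     * the per-vertex colors during this call. */
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```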

And yes, display lists will work with a varying eye position, provided the color information given when creating the display list is eye-position independent. That means: if you calculate the vertex colors yourself based on the eye position, then of course replaying the display list with another eye position will not be correct.

HTH

Jean-Marc.

Ok, maybe I should approach this from another angle. Let me ask you all what is the best way to do the following.

  1. I have a scene with many facets, ~100K at least.

  2. Each facet has diffuse and specular material properties.

  3. Lighting is enabled and the light may shift position.

  4. The camera position changes: the observer moves in the scene.

  5. During each frame, I need to extract depth information (pixel positions in the scene in world coordinates) and intensity information (see the sketch after this list).

  6. Repeat the process for each frame. The camera position may change per frame or stay the same for a few frames, depending on how the user wants to move in the scene.

  7. My scene is rendered at 512x512.
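
(For step 5, what I have in mind is roughly the following readback; only a rough sketch with placeholder names, assuming gluUnProject and the current fixed-function matrices:)

```c
#include <GL/gl.h>
#include <GL/glu.h>

/* Rough sketch: read back depth + color for one pixel and unproject the depth
 * to world coordinates. Doing this for the full 512x512 buffer works the same
 * way, but the readback itself is the slow part. */
void sample_pixel(int x, int y, GLdouble world[3], GLfloat *intensity)
{
    GLfloat depth, rgb[3];
    GLdouble model[16], proj[16];
    GLint view[4];

    glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, rgb);

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    /* Window-space (x, y, depth) back to world coordinates. */
    gluUnProject((GLdouble)x, (GLdouble)y, (GLdouble)depth,
                 model, proj, view, &world[0], &world[1], &world[2]);

    /* Simple luminance as "intensity"; replace with whatever measure I need. */
    *intensity = 0.299f * rgb[0] + 0.587f * rgb[1] + 0.114f * rgb[2];
}
```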

I’m trying to get a good FPS (~30fps) for the simulation. I know this depends on the hardware, but if I get good performance on a GF3, for example, then I know I will get at least as good a performance on better cards.

Your input is appreciated.

TIA,
GGL

I also have encountered this problem.

Furthermore, glMaterial does not work with my vertex array.

Does anyone know of a good tutorial with source code that uses vertex arrays?

GGL,

The best way to render what you want:

Well, firstly, it depends. If your facets collectively use only a few materials, you want to group the facets by material, then (put it in a DL if you like; a sketch follows the list):

  • set vertex and normal pointers
  • for each material:
    • set the material
    • draw the primitives for that material
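
Roughly like this (a sketch only; MaterialGroup and the array names are placeholders, and it assumes the facets have already been sorted into one index range per material):

```c
#include <GL/gl.h>

/* Placeholder grouping: one entry per material, with the vertex indices of
 * all facets that use it. */
typedef struct {
    GLfloat diffuse[4], specular[4], shininess;
    const GLuint *indices;
    GLsizei indexCount;
} MaterialGroup;

void draw_grouped(const GLfloat *verts, const GLfloat *norms,
                  const MaterialGroup *groups, int groupCount)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glNormalPointer(GL_FLOAT, 0, norms);

    for (int i = 0; i < groupCount; ++i) {
        /* One material change per group instead of per facet. */
        glMaterialfv(GL_FRONT, GL_DIFFUSE, groups[i].diffuse);
        glMaterialfv(GL_FRONT, GL_SPECULAR, groups[i].specular);
        glMaterialf(GL_FRONT, GL_SHININESS, groups[i].shininess);

        glDrawElements(GL_TRIANGLES, groups[i].indexCount,
                       GL_UNSIGNED_INT, groups[i].indices);
    }

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```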

If there are very few shared materials (i.e., nearly every facet has its own material), then, again putting it in a DL if you like (sketch after the list):

  • glEnable(GL_COLOR_MATERIAL) and glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE)
  • set vertex, normal and colour pointers
  • Draw primitives
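
For example (again only a sketch with placeholder names, this time compiled into a display list; the color array carries each facet’s diffuse colour duplicated across its vertices):

```c
#include <GL/gl.h>

GLuint build_scene_list(const GLfloat *verts, const GLfloat *norms,
                        const GLfloat *cols, GLsizei vertexCount)
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);

    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
    glEnable(GL_COLOR_MATERIAL);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glNormalPointer(GL_FLOAT, 0, norms);
    glColorPointer(3, GL_FLOAT, 0, cols);

    /* glDrawArrays is compiled into the list and the vertex data is copied
     * at compile time, so the client arrays can be freed afterwards. */
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);

    glEndList();
    return list;
}
```

Each frame you then set the camera and call glCallList(list); lighting is evaluated when the list executes, so this works with a moving light or viewpoint.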

You can move the light and viewpoint as you like.

I can’t help you much with stage 5; reading pixels back is generally slow, however.

The usual rules apply: batching, sending indexed primitives, and so on. Generally, for speed, it is far better to preprocess your incoming mesh/scene to be real-time friendly.

Hope that’s of some use,

Matt