Vertex arrays draw fine inside display lists, but not outside of them ???

Hi!

I use vertex arrays with or without extensions, and I call glVertexPointer(Ext) each time I need to draw a new part.
When I put everything in display lists, the drawing is fine and quite fast.
But if I don't use the display lists, the rendering is wrong; many things are broken, in fact.

Could somebody tell me why ?

Jide

Because you’re calling it wrong.

And, any idea about what ?

Is it because I call gl*Pointer() each time before a new draw ? Or is that fine, and I'm misusing something else ???

I use:

glEnableClientState(…) // possibly with extensions.

gl*Pointer[Ext] to tell GL which array to draw from (I call it each time before a new part of the drawing).

And finally, glDrawRangeElements() to render all of it.

I don't use any NVidia extensions for the drawing itself, but I do allocate my data in AGP memory.

What seems strange is that the rendering is correct with the display lists, and wrong without them. Why would display lists correct the drawing if I were calling things wrong ???
I don't think that's possible…

Anyway, I hope someone can help here.

jide

That was the only answer I could give, given the lack of detail in your question.

Now you added some: you put data in AGP (did you get it from wglAllocateMemory ?), but you say you use no extensions (so I suppose you didn't call glVertexArrayRange).

Try regular malloc() memory.
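Something along these lines (a minimal sketch; `AllocVertexBuffer` and the element count are made-up names for illustration, not code from this thread):

```cpp
#include <cstdlib>

// Fallback allocation: plain system memory instead of
// wglAllocateMemory/glXAllocateMemoryNV. If rendering is correct from
// malloc()ed memory, the bug is in how the AGP memory is used (likely
// CPU/GPU synchronization), not in the vertex array setup itself.
float* AllocVertexBuffer(std::size_t float_count) {
    return static_cast<float*>(std::malloc(float_count * sizeof(float)));
}
```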

Since you don't say WHAT is wrong, it's hard to give more specific advice. Is your texturing state set correctly ? What is the modelview matrix ? Does the data in your vertices get corrupted between successive renders ? (The data gets copied into the display list on compile.)

thanx for your reply

I am under Linux, so I use glXAllocateMemoryNV for such a thing.
I put vertices, normals, (colors) & texels into the VRAM.
Then I call glEnableClientState(…), glVertexArrayRange & glDrawRangeElements for the drawing.

I do things the same way whether I compile them into display lists or try to render directly.

Everything about the textures is right. The modelview matrix depends on what I want to draw.
The data doesn't get corrupted between two renders, but sometimes I get errors while writing into AGP (which are corrected automatically).

Maybe I call gl*PointerExt too often - before every draw.

thanx anyway

jide

Here is some code related to my problem.
I hope this makes it easier to find the cause.

It comes from 3 files that do the rendering using extensions and NVidia.
It's rather huge… so I cut it down to just the renderer.

Here is what I call when I render:
<code>
[…]
inline GLuint Render( void){
    static GLuint *dlists= new GLuint[branch_nb];
    static bool done= false;

    glPushMatrix();
    for( GLuint i=0; i<branch_nb; i++){
        if( done){
            glPushMatrix();
            matrix3D& pos= graphic_branch[i].children->position;
            glTranslatef( pos.x, pos.y, pos.z);
            matrix3D& rot= graphic_branch[i].children->orientation;
            glRotatef( _RAD_TO_DEG(rot.x), 0,1,0);
            glRotatef( _RAD_TO_DEG(rot.y), 1,0,0);
            glRotatef( _RAD_TO_DEG(rot.z), 0,0,1);
            glCallList( dlists[i]);
            glPopMatrix();
        }
        else{
            dlists[i]= glGenLists( 1);
            glNewList( dlists[i], GL_COMPILE);
            glPushMatrix();
            graphic_branch[i].Render();
            glPopMatrix();
            glEndList();
            if( i==branch_nb-1)
                done= true;
        }
    }
    glPopMatrix();

    return 1;
}

</code>
And here is what gets compiled into the display list:
<code>
//! render the part data
inline GLuint Render( void){
ext.glextVertexPointer( 3, GL_FLOAT, 0, 9*faces_nb, &vertices[0]);
ext.glextNormalPointer( GL_FLOAT, 0, faces_nb*9, &normals[0]);
ext.glextColorPointer( 4, GL_FLOAT, 0, faces_nb*12, &colors[0]);
ext.glextTexCoordPointer( 2, GL_FLOAT, 0, faces_nb*6, &texels[0]);
ext.glextIndexPointer( GL_UNSIGNED_INT, 0, 3*faces_nb, &index[0]);
ext.glextDrawRangeElements( GL_TRIANGLES, 0, faces_nb, 3*faces_nb, GL_UNSIGNED_INT, &index[0]);
return 1;
}
</code>
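As a side note, the counts in the pointer calls above follow from one triangle per face. A minimal sketch of that arithmetic (the struct and function names are mine, not from the posted code):

```cpp
#include <cstddef>

// Per-face element counts implied by the gl*PointerEXT calls above:
// each triangular face has 3 vertices.
struct FaceCounts {
    std::size_t vertex_floats;   // 3 vertices * 3 floats (x,y,z)  =  9 per face
    std::size_t normal_floats;   // 3 vertices * 3 floats          =  9 per face
    std::size_t color_floats;    // 3 vertices * 4 floats (RGBA)   = 12 per face
    std::size_t texel_floats;    // 3 vertices * 2 floats (s,t)    =  6 per face
    std::size_t indices;         // 3 indices per triangle         =  3 per face
};

inline FaceCounts CountsFor(std::size_t faces_nb) {
    return { faces_nb * 9, faces_nb * 9, faces_nb * 12,
             faces_nb * 6, faces_nb * 3 };
}
```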

I must call the display lists to get a normal render. Otherwise, I get something like a ball. It seems the indices always start from the first index of what's in the NV memory.

Try to allocate your vertex buffer with the standard malloc() function.
Or, as another test: insert a glFinish() after every glDrawElements() call.

If that works, then I have an explanation for it:
when you compile a display list, your vertex data gets copied into another buffer in AGP/video memory (depending on the driver's decision…). When you render without compiling into a display list, the GPU accesses your vertex data directly from the memory you specified, but - and this is a big "but" - your rendering call returns before the GPU has finished rendering your primitives. And because you render in a loop, you start overwriting the (maybe not yet rendered) vertex data when submitting new vertices.
This could be the problem.
A good solution would be to use the GL_NV_fence extension.
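The GL_NV_fence pattern looks roughly like this (a minimal sketch: the entry points are normally fetched with glXGetProcAddressARB and are left as null pointers here to keep it self-contained; FillVertexData/DrawBuffer are hypothetical helpers, and the two-buffer split is an assumption):

```cpp
typedef unsigned int GLuint;
typedef unsigned int GLenum;
typedef int          GLsizei;

#define GL_ALL_COMPLETED_NV 0x84F2

// In a real program these come from glXGetProcAddressARB() at startup.
void (*glGenFencesNV)(GLsizei n, GLuint* fences)     = nullptr;
void (*glSetFenceNV)(GLuint fence, GLenum condition) = nullptr;
void (*glFinishFenceNV)(GLuint fence)                = nullptr;

const unsigned kBuffers = 2;   // two halves of the AGP/video memory block
GLuint fences[kBuffers];

// Cycle to the other half of the buffer for the next frame.
inline unsigned NextBuffer(unsigned current) { return (current + 1) % kBuffers; }

void RenderFrame(unsigned& current) {
    // Block only until the GPU has finished reading this half,
    // instead of a full glFinish() on the whole pipeline.
    glFinishFenceNV(fences[current]);
    // FillVertexData(current);   // hypothetical: write the new vertices
    // DrawBuffer(current);       // hypothetical: gl*Pointer + DrawRangeElements
    glSetFenceNV(fences[current], GL_ALL_COMPLETED_NV);  // fence after the draw
    current = NextBuffer(current);
}
```

This way only the half of the buffer that is about to be overwritten is waited on, which is far cheaper than a full glFinish() every frame.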

I don't use glVertexArrayRangeNV… isn't it due to this ?? (I thought glVertexPointerExt could do the same work.)

So I would need to call glFinish() after every render ?? I think this will slow the renderer down a lot.

Now, the GL_NV_fence extension is completely unknown to me… what is it ? And where can I find documentation about it ?

jide

Originally posted by jide:
[b]thanx for your regards

I am under Linux, so I use glXAllocateMemoryNV for such a thing.
I put vertices, normals, (colors) & texels into the VRAM.
Then I call glEnableClientState(…), glVertexArrayRange & glDrawRangeElements for the drawing.

jide[/b]

If you are NOT using glVertexArrayRange, then why did you say you were ?

And if you are using glXAllocateMemoryNV, why do you think you are not using any extensions ?

And about glFinish: I said you should do it to check whether it fixes your problem. If the answer is yes -> just take a look at the GL_NV_fence extension…

…and about extensions:
http://oss.sgi.com/projects/ogl-sample/registry/