glDrawElements sucks for strips!! ;)

Hi!! I am using strip primitives & VAR from NVIDIA!
My problem is simple: since the 6.31 drivers, glDrawArrays is buggy (it freezes your computer, but only on the GeForce256 SDR), so I am forced to use the lame glDrawElements with a 'fake indexes table'. What a shame! I can't understand why glDrawArrays is still buggy under the 6.52 drivers!!! Moreover, my timings show that glDrawArrays with 100% stripped primitives is 6% to 10% faster (CPU) and 2% to 4% faster (GPU). Any advice for GeForce chip detection under GL?? (The chip ID sample from NVIDIA uses D3D and is useless, of course…)
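For anyone curious, the 'fake indexes table' is just an identity index list so glDrawElements can stand in for glDrawArrays. A minimal sketch (the array name and size here are made up, and the usual GL headers are assumed):

#define MAX_VERTICES 65536                       /* illustrative upper bound */

static GLuint fakeIndexes[MAX_VERTICES];

void initFakeIndexes(void)
{
	GLuint i;
	for (i = 0; i < MAX_VERTICES; i++)
		fakeIndexes[i] = i;              /* index i simply refers to vertex i */
}

/* then, instead of glDrawArrays(mode, start, count): */
/* glDrawElements(mode, count, GL_UNSIGNED_INT, fakeIndexes + start); */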
thx!

I’m not familiar with this DrawArrays problem. Can you please elaborate?

  • Matt

Well, a call to glDrawArrays freezes the computer when using a GeForce256 SDR board; the primitive type is GL_TRIANGLE_STRIP. What more can I add?? It works on all GeForce boards but not on the GeForce256 SDR! Moreover, if you call glDrawArrays using the VertexArrayRange extension on GeForce-based boards, it will only hang on the GeForce256 SDR!! Thus a call to glDrawArrays to draw stripped primitives doesn't work on the GeForce 256 SDR! ok? understood?

It's a silly thing, because glDrawArrays is faster than glDrawElements, of course (a non-indexed drawing routine is definitely faster than an indexed one, isn't it?)

Please, calm down.

Send me an application that illustrates the hang.

Are you sure it only happens with triangle strips, and not also with independent triangles, or fans, or quads, or other primitives? When you say “GeForce 256 SDR only”, are you saying that you have tested on other GeForce boards and it didn’t happen? I would be very suspicious, for example, of any OpenGL bug that happened on an SDR board but not on a DDR board.

  • Matt

Sounds an odd one alright.

As for speed, I was under the impression that indexed primitives were faster (something to do with the vertex cache??). Also, the lock extensions only work with indexed arrays too, I believe.
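(By the lock extension I mean EXT_compiled_vertex_array. A rough sketch, assuming the entry points have already been grabbed with wglGetProcAddress, and with made-up array names:)

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);            /* verts / indexes are illustrative */

glLockArraysEXT(0, numVerts);                      /* hint that the vertex data won't change */
glDrawElements(GL_TRIANGLE_STRIP, numIndexes, GL_UNSIGNED_SHORT, indexes);
glUnlockArraysEXT();

glDisableClientState(GL_VERTEX_ARRAY);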

P.S. I have a GeForce 256 SDR board, and I wrote a demo recently that uses glDrawArrays, and it worked fine. Dunno what drivers I was using at the time. Probably later than 6.??

Nutty

Sorry Nutty, but it only concerns glDrawArrays calls with VAR!! Not classic vertex arrays (in that case there are no problems, it works).

Now Matt, here is the code, but it probably won't help: I've tested this code on a GeForce2 GTS and the call to glDrawArrays doesn't hang the machine! I'll test on a GeForce 256 DDR as soon as possible, but for sure it doesn't work with the 256 SDR board (since version 6.31).

if(pInfos->bits.locked){
	ptr = (VR_CHAR*) pInfos->plistCache;	/* base of the interleaved vertex data */

	//drawCacheArray:
	glEnableClientState(GL_VERTEX_ARRAY_EXT);
	glVertexPointer(3,GL_SHORT,primSize,ptr);	/* positions: 3 shorts at offset 0 */

	if (pInfos->bits.useColors){
		glEnableClientState(GL_COLOR_ARRAY_EXT);
		glColorPointer(4,GL_UNSIGNED_BYTE,primSize,ptr+0x10);	/* colors: 4 ubytes at offset 0x10 */
	}
	if (pInfos->bits.useNormals){
		glEnableClientState(GL_NORMAL_ARRAY_EXT);
		glNormalPointer(GL_SHORT,primSize,ptr+8);	/* normals: 3 shorts at offset 8 */
	}

	/* clamp the number of UV channels to the available texture units */
	if (pInfos->bits.uvChannels > pPlugin->infos.caps.nbTextureUnits)
		maxChannels = pPlugin->infos.caps.nbTextureUnits;
	else
		maxChannels = pInfos->bits.uvChannels;
	offset = 0x14;	/* first texcoord set starts at offset 0x14 */

	for (channel = 0;channel < maxChannels;channel++){
		glActiveTextureARB(GL_TEXTURE0_ARB+channel);
		glMatrixMode(GL_TEXTURE);
		glLoadMatrixf((float*)&pPlugin->currentMatrix[channel]);

		glClientActiveTextureARB(GL_TEXTURE0_ARB+channel);
		glEnableClientState(GL_TEXTURE_COORD_ARRAY);

		glTexCoordPointer(2,GL_FLOAT,primSize,ptr+offset);	/* 2 floats per channel */
		offset += 8;
	}
}

//glDrawArrays(primtypes[type],start,nb);	/* hangs on GeForce256 SDR with VAR enabled */
glDrawElements(primtypes[type],nb,GL_UNSIGNED_INT,fakeIndexes+start);	/* fake index table workaround */

glDisableClientState(GL_VERTEX_ARRAY_EXT);

if (pInfos->bits.useColors)
	glDisableClientState(GL_COLOR_ARRAY_EXT);

if (pInfos->bits.useNormals)
	glDisableClientState(GL_NORMAL_ARRAY_EXT);

if (pInfos->bits.useTexCoords){
	for (channel = 0;channel < maxChannels;channel++){
		glClientActiveTextureARB(GL_TEXTURE0_ARB+channel);
		glDisableClientState(GL_TEXTURE_COORD_ARRAY);
	}
}

And you know Matt, I think you're right! It hangs with any type of primitive on any type of GeForce 256 board (with the VAR extension).

I'm using a GeForce DDR with the 7.58 drivers.

I've found that using glArrayElement does work, but glDrawArrays and glDrawElements do not, if you have enabled GL_VERTEX_ARRAY_RANGE_NV and are using vertex arrays that are not allocated by wglAllocateMemoryNV.

I think this should work, with the driver detecting that this is not a vertex array that has been allocated properly and handling it accordingly… or should it? I tried reading the extension spec, but it's quite difficult to make out exactly what the behaviour should be.

However, as glDrawElements and glDrawArrays are specified in terms of glArrayElement (as the extension says), this is obviously a discrepancy in driver behaviour and should be corrected.
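Roughly, as I read it, the spec treats glDrawArrays(mode, first, count) as nothing more than this pseudocode:

glBegin(mode);
for (i = first; i < first + count; i++)
	glArrayElement(i);	/* same per-vertex path as the explicit calls */
glEnd();

so all three entry points should hit the same code path.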

I forgot to mention, this is not related to strips, it’s the same with discrete triangles and points as well (i.e, all types of primitives).

And it does not crash on my system, it merely does not draw anything.


I had the same problem with glDrawArrays(GL_QUADS, …) using VAR. I didn't try other primitives, but I also switched from glDrawArrays to glDrawElements and it worked.
But the problem is not the GF DDR; it depends on the computer. The same program works, using VAR, with a GF256, GF DDR & GF2 GTS (all the cards I tested) on some computers, but it freezes other computer models with the same cards/drivers. I tried different AGP configurations on the computers where it didn't work, but had no success.
I also tried W2K, W98 & WME, with the same effect.
I sent a couple of emails about the problem to the NVIDIA people… (The last one at the beginning of January; then I abandoned glDrawArrays…)

Originally posted by macke:
[b]I've found that using glArrayElement does work, but glDrawArrays and glDrawElements do not, if you have enabled GL_VERTEX_ARRAY_RANGE_NV and are using vertex arrays that are not allocated by wglAllocateMemoryNV.

I forgot to mention, this is not related to strips, it’s the same with discrete triangles and points as well (i.e, all types of primitives).

And it does not crash on my system, it merely does not draw anything.
[/b]

I was using only arrays whose vertices were stored in VAR memory (memory allocated using wglAllocateMemoryNV), and I always had GL_VERTEX_ARRAY_RANGE_NV enabled.
I remember that it happened with GL_QUADS & GL_QUAD_STRIP (I didn't try other primitives…).
In my case, it freezes the computer. All you can do is press the reset button (W2K, W98 & WME).

Yes, if you try to draw arrays with vertices in system memory and GL_VERTEX_ARRAY_RANGE_NV enabled, it doesn't draw anything, and it doesn't crash the computer (as I remember, the spec says the results are undefined in this case). But this has nothing to do with this problem.

[b]Yes, if you try to draw arrays with vertices in system memory and GL_VERTEX_ARRAY_RANGE_NV enabled, it doesn't draw anything, and it doesn't crash the computer (as I remember, the spec says the results are undefined in this case). But this has nothing to do with this problem.[/b]

I just figured they were related somehow…

You should be able to use system memory with VAR, so long as you set up the VAR properly. Memory from malloc() should work; it’s just that memory from wglAllocateMemoryNV will be more efficient.
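A rough sketch of what "set up properly" means (the entry points are assumed to have been fetched with wglGetProcAddress, and the size/priority values are only illustrative):

#define VAR_SIZE (1024 * 1024)	/* illustrative size */

void *varMem = wglAllocateMemoryNV(VAR_SIZE, 0.0f, 0.0f, 0.5f);	/* 0.5f priority usually gives AGP memory */
if (!varMem)
	varMem = malloc(VAR_SIZE);	/* plain system memory is legal too, just slower */

glVertexArrayRangeNV(VAR_SIZE, varMem);	/* declare the range to the driver */
glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

/* ... all vertex arrays you draw from should then live inside [varMem, varMem + VAR_SIZE) ... */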

  • Matt

OK, glad to see that the problem comes from the NVIDIA drivers!!! :) I hope they'll fix it really soon because, as I've said before, since I am using triangle strips my performance gain is about 6-10%!!!
Finally, concerning system memory and VAR, I think it is really useless, but it's too early in the morning here!! :) (Anyway, what's the point if it's not in the GeForce cache?)
Moreover, I've just seen another NV extension, something like NV_VERTEX_ARRAY_RANGE2, NV_BLABLA_EXT2000??? ;((
What is all of this ****?? Why are those people doing this D3D coding style??
It's really annoying!!! OK, OK… problems with later extensions?? Then fix them properly before releasing them!!! Grr…

Oops!! Eh, nice testing, Cab!!!

Just one last thing: could anyone send me the GL_RENDERER string for GeForce 256 DDR boards?? I plan to use it for GeForce chip probing for the moment, since glDrawArrays (still with VAR) seems to work with GeForce2 GTS and GeForce2 MX boards! Moreover, if any lucky person has a GeForce3, could you send any info about this glDrawArrays issue?? thx!
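In the meantime, the kind of probing I have in mind is just a substring match on the renderer string (the exact substring below is a guess, please correct it):

#include <string.h>

int isGeForce256(void)
{
	const char *renderer = (const char *) glGetString(GL_RENDERER);	/* needs a current GL context */
	return renderer && strstr(renderer, "GeForce 256") != NULL;	/* assumed substring, to be verified */
}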