VBO troubles :-)



Ozzy
07-06-2003, 08:42 AM
Well, I hate to be the bearer of bad news concerning the VBO extension, but... ;-)

The spec says:

2.8A.1 Vertex Arrays in Buffer Objects
--------------------------------------

Blocks of vertex array data may be stored in buffer objects with the
same format and layout options supported for client-side vertex
arrays. However, it is expected that GL implementations will (at
minimum) be optimized for data with all components represented as
floats, as well as for color data with components represented as
either floats or unsigned bytes.


Blah blah, which sounds like VBO is a sequel to VAO! ;-))
Anyhow, let's imagine I really don't care about using floats and/or bytes where they are intended, and use the following vertex format ->

typedef struct SVR_VERTEX
{
    VR_FLOAT x;               // $00
    VR_FLOAT y;               // $04
    VR_FLOAT z;               // $08

    VR_BYTE r;                // Fantastic colors.. $0c
    VR_BYTE g;                // $0d
    VR_BYTE b;                // $0e
    VR_BYTE a;                // $0f

    VR_CHAR nx;               // non-transformed normal (-128,127) $10
    VR_CHAR ny;               // $11
    VR_CHAR nz;               // $12
    VR_CHAR rien;             // $13

    union {
        VR_UV texCoord[4];    // $14
        struct {
            VR_FLOAT u0, v0;  // $14, $18
            VR_FLOAT u1, v1;  // $1c, $20
            VR_FLOAT u2, v2;  // $24, $28
            VR_FLOAT u3, v3;  // $2c, $30
        };
    };

} VR_VERTEX;

Then, using the 44.03 drivers on Win2k with a GF2 GTS, could this be the reason it causes an abnormal program termination?? :-)

tchOo!

Pop N Fresh
07-06-2003, 07:17 PM
What compiler and options are you using?

Your structure might be larger than you think it is because the compiler might be aligning all the members on 2 or 4 byte boundaries.

Asgard
07-06-2003, 10:36 PM
Whatever you do wrong, you shouldn't get an abnormal program termination. Try to upgrade your drivers.

Relic
07-06-2003, 10:43 PM
How is the layout of the struct supposed to show where the problem in your VBO usage is?
Post some code and explain the crash condition.

Ozzy
07-07-2003, 05:57 AM
Well, I really think it is an implementation error. I've updated to the 44.22 beta and it still occurs.. As I said, it reminds me of a bug from VAO ;-)

more details here -> http://www.orkysquad.org/main.php?id=lire&templ=templCode&nom=ozzy&cat=Code&month=July&year=2003#623

thx for the replies anyway. :-)

Pop N Fresh
07-07-2003, 08:45 AM
glNormalPointer(GL_UNSIGNED_BYTE, sizePrim, BUFFER_OFFSET(0x0c));

GL_UNSIGNED_BYTE isn't a valid type for glNormalPointer - check your OpenGL documentation.

Ozzy
07-07-2003, 09:43 AM
Well done dude! ;-)
That was it! It doesn't crash anymore. :-)
thx!!

Ozzy
07-09-2003, 08:03 AM
OK, I'm now facing rendering problems with normals using GL_BYTE, GL_SHORT and GL_INT while lighting is enabled. :(
Anyone have the same problem?

Moreover, using the same data structures with CVA or another mechanism instead of VBO, there is no problem.. argh.. :)

the shots (same url) http://www.orkysquad.org/main.php?id=lire&templ=templCode&nom=ozzy&cat=Code&month=July&year=2003#623

thx

Korval
07-09-2003, 09:38 AM
Moreover, using the same data structures with CVA or other mechanism instead of VBO there is no problem.. argh.. :)

Of course there's no problem; they have to do a copy to video-card accessible memory anyway.

The hardware cannot recognize most formats for most components. It can recognize floats for components, as well as GL_BYTE for colors. That's it. If you use anything else, it will impose a significant speed hit under VBO. So, don't do it.

Under CVA, since there's a copy anyway, they have the ability to convert any data you store into an appropriate format for the video card.

Ozzy
07-09-2003, 10:32 AM
Originally posted by Korval:
The hardware cannot recognize most formats for most components. It can recognize floats for components, as well as GL_BYTE for colors. That's it. If you use anything else, it will impose a significant speed hit under VBO. So, don't do it.

Well, I agree that there should be some performance penalty when using data components that don't match the hardware.
As far as I know, early GF boards can support GL_SHORT for normals (using VAR), so I expect those to be handled natively.

Moreover, I only have a problem with rendering, which is certainly related to *how* the normal data format is interpreted in the case of GL_BYTE, GL_SHORT and GL_INT ;)
In other words, if it works with CVA (even with an internal conversion by the driver), it should work with VBO too, even with a speed penalty :))

Finally, regarding FLOAT versus other component types, I was amazed to see (using VAR again) that the GPU was faster processing compact structures (GL_SHORT for vertex coordinates, GL_UNSIGNED_BYTE for colors, GL_SHORT for normals) than an all-GL_FLOAT structure ;-)

Ozzy
07-09-2003, 10:54 AM
Korval, you're pointing out what is written in the spec ->

2.8A.1 Vertex Arrays in Buffer Objects
--------------------------------------

...
However, it is expected that GL implementations will (at
minimum) be optimized for data with all components represented as
floats, as well as for color data with components represented as
either floats or unsigned bytes.
...

OK then, speed aside, the other component types should still work as well?

Korval
07-09-2003, 12:50 PM
All formats should still function, regardless. If they don't, then it's an implementation bug.


jwatte
07-09-2003, 12:52 PM
There was a problem in some drivers where mixing numbered attributes for some arrays, and legacy names/bindings for others, would make it not work right. I forget the details, but if you're using both VertexAttribPointer and NormalPointer, this might be the issue.

Ozzy
07-10-2003, 02:19 AM
Alright, waiting for the implementation fix (if there is a bug there.. NV guys, any comment?).

Now, for testing purposes, I would like to switch from VAR to VBO or any other GL mechanism.
Unfortunately it seems to cause problems to have both VAR and VBO initialised. They certainly use the same memory-allocation mechanism (when you need to store data on-board, i.e. static), so at first I tried to only disable GL_VERTEX_ARRAY_RANGE_NV, but then, using VBO, I get no display anymore.. :(
Finally i tried to free VAR memory using:
...
wglFreeMemoryNV(pFastMem);
...
but it seems that VBO then always falls back to vertex arrays, as if there were no more VRAM available, so it switches to VA.

any idea? :-)

LarsMiddendorf
07-10-2003, 02:28 AM
I've got a Radeon 9800 Pro. If I use GL_SHORT as a vertex attrib format, the program becomes incredibly slow. With GL_FLOAT there is no problem. The GF4 had no problems using GL_SHORT. Is this a hardware limitation of the Radeon 9800?

Ozzy
07-10-2003, 03:37 AM
Well, it was already the case using VAO..
(which was much buggier, btw)
The ATI implementation seems rather limited in terms of functionality regarding the available formats. Thus, as suggested by Korval and the spec ;-) you should use floats everywhere to get the best results from one board to another. :((
- as it is written: *this is a minimum* ;(

Anyhow.. what about the speed on your Radeon using GL_SHORT? ;-) Same as CVA, I guess? ;-))

Ozzy
07-15-2003, 01:47 AM
For those interested: NV VBO vs. ATI VBO using
floats everywhere -> http://www.orkysquad.org/main.php?id=lire&templ=templCode&nom=ozzy&cat=Code&month=July&year=2003#623

Conclusion: not really flexible...

ToolTech
07-15-2003, 03:07 AM
jwatte?

Could you explain in more detail which driver had the problem with mixing VertexAttribPointer and NormalPointer?

I have a VBO app that runs many times slower than with normal vertex array usage. Could this be explained by the driver anomaly you mention?

Ozzy
07-15-2003, 07:57 AM
What kind of struct do you use?
Which driver version?
With 44.90, the NV implementation does just like the Radeon implementations: with custom formats it falls back to the vertex-array mechanism.. (well, so it seems.. something similar, btw)