Hello!
Does anyone have a sample program for using vertex buffer objects in C under Linux?
I followed a tutorial but it does not work.
Thanx
Christiane.
First, make a program with traditional vertex arrays.
Then, use the specification to port the traditional VA code to VBO, with the help of the example at the end of the document.
Linux or not doesn't make any difference (except extension loading, maybe).
At which point are you stuck? Getting traditional VAs working? Porting to VBO? Loading the extension? Finding an adequate VBO format? Mapping/unmapping the VBO? Filling a sub-part of the VBO?
As you can see there are many issues with VBO, so please explain your problem more in detail.
I read this tutorial:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=45
and "converted" it to C.
I added the following lines:
#include "glext.h"
PFNGLGENBUFFERSARBPROC glGenBuffersARB = NULL;
PFNGLBINDBUFFERARBPROC glBindBufferARB = NULL;
PFNGLBUFFERDATAARBPROC glBufferDataARB = NULL;
PFNGLDELETEBUFFERSARBPROC glDeleteBuffersARB = NULL;
…
glGenBuffersARB = (PFNGLGENBUFFERSARBPROC) wglGetProcAddress("glGenBuffersARB");
glBindBufferARB = (PFNGLBINDBUFFERARBPROC) wglGetProcAddress("glBindBufferARB");
glBufferDataARB = (PFNGLBUFFERDATAARBPROC) wglGetProcAddress("glBufferDataARB");
glDeleteBuffersARB = (PFNGLDELETEBUFFERSARBPROC) wglGetProcAddress("glDeleteBuffersARB");
…
The first error message I get is a parse error before glGenBuffersARB. The glext.h is there but for some reason the definition of PFNGLGENBUFFERSARBPROC (and all the other definitions) is not found.
I did a work around and copied the following lines into my c file:
typedef void (APIENTRY * PFNGLBINDBUFFERARBPROC) (GLenum target, GLuint buffer);
typedef void (APIENTRY * PFNGLDELETEBUFFERSARBPROC) (GLsizei n, const GLuint *buffers);
typedef void (APIENTRY * PFNGLGENBUFFERSARBPROC) (GLsizei n, GLuint *buffers);
typedef void (APIENTRY * PFNGLBUFFERDATAARBPROC) (GLenum target, int size, const GLvoid *data, GLenum usage);
Now there is no parse error, but a segmentation fault when calling wglGetProcAddress("glGenBuffersARB");
I tried to use the SDL library, this works in some way, but I want to use GLUT, so I wondered, if it is possible.
Any ideas about the problems with glext.h and wglGetProcAddress?
thnx
Christiane
You should learn how to use OpenGL under Linux, and especially how to load extensions. There is documentation about that on this site; just do a search.
Or you can, as you already did, use SDL.
Hope this helps.
A link would help even more
I searched opengl.org before I posted to this forum. The search functionality doesn't work, I get a server error. And Documentation->OpenGL Extensions just leads to SGI. Couldn't find anything in "coding resources".
Christiane.
Instead of using wglGetProc…, use glxGetProcAddress or glxGetProcAddressARB (it seems it depends on your graphics card and drivers).
I don't think you need anything else.
Hope this helps.
Does your program even link and compile under linux when you call wglGetProcAddress ?!
Yes you have to use glXGetProcAddressARB (with upper-case X). It works essentially like wglGetProcAddress, granted that your GL driver supports the glx-get-proc-address feature (almost all drivers do).
Thanx for your help!! It works now.
I'm drawing 1,000,000 points and now get 15 fps (instead of 7 fps without VBOs), but that's not very much, is it? Is this rendered in software?
I use SUSE 9.1 and a GeForce FX 5200. I installed the NVIDIA drivers via YaST Online Update. How can I find out whether the NVIDIA driver is used by the system or not?
Christiane.
cat /proc/drivers/nvidia/agp/status will show you the current state of your system. If it's enabled, then you already have it.
15,000,000 vertices per second sounds good for a GeForce FX 5200.
Is texturing enabled? Lighting? Fog? Vertex colors (do you call glColor per vertex)?
Originally posted by jide:
cat /proc/drivers/nvidia/agp/status will show you the current state of your system. If it's enabled, then you already have it.
Status: Enabled
Driver: AGPGART
AGP Rate: 2x
Fast Writes: Disabled
SBA: Disabled
looks good
thanks a lot
Christiane
btw: it's cat /proc/drivers/nvidia/agp/status
Originally posted by Christiane:
btw: it's cat /proc/drivers/nvidia/agp/status
sorry, I meant
cat /proc/driver/nvidia/agp/status
Originally posted by vincoof:
15,000,000 vertices per second sounds good for a GeForce FX 5200.
Is texturing enabled? Lighting? Fog? Vertex colors (do you call glColor per vertex)?
Of course it's not rendered in software, otherwise there wouldn't be any speedup from using VBOs, right?
I had a look at NVIDIA's site, and they say a GeForce FX 5500 can handle 68 million vertices/sec. So, yes, 15 million doesn't sound that bad for a 5200.
I disabled lighting already and I'll try the other things now.
Christiane.
It's very difficult to reach the performance figures the graphics card vendor quotes.
I tried that some years ago with a GeForce 2 MX. If I remember well (it was discussed a bit here), I got half the maximum performance, or a little over. But I was using colors, lighting and single texturing…
Another factor in your current limit is that you're using AGP 2x instead of 8x. And maybe there are other things that are not optimal.
Hope this helps.
AGP speed shouldn't really matter WRT performance if the vertices are stored on the GPU side. It will matter only while uploading, which is performed once. Then, if the arrays are big enough, you won't notice the performance difference between a PCIe x16 bus and AGP 1x.
This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.