OK, I'm going crazy over loading extensions!

I have set up a 3.3 context using SDL:


        const GLubyte *version  = glGetString(GL_VERSION);
        const GLubyte *vendor   = glGetString(GL_VENDOR);
        const GLubyte *renderer = glGetString(GL_RENDERER);

        cout << "Renderer: " << renderer << endl;
        cout << "Vendor: " << vendor << endl;
        cout << "OpenGL Version: " << version << endl;

        GLint nombre_extensions;

        glGetIntegerv(GL_NUM_EXTENSIONS, &nombre_extensions);
        cout << "Extensions: " << nombre_extensions << endl;

this outputs:


Renderer: GeForce 8800 GTS/PCI/SSE2
Vendor: NVIDIA Corporation
OpenGL Version: 3.3.0
Extensions: 190

First, 190 extensions? Is this correct? When I open the OpenGL Extensions Viewer it tells me that I have 210 :S

Second, I want to use “glGetStringi”… but I don’t know how to load it globally. If I do:


PFNGLGETSTRINGIPROC glGetStringi = (PFNGLGETSTRINGIPROC) SDL_GL_GetProcAddress("glGetStringi");

//AND THEN

const GLubyte *ext2=glGetStringi(GL_EXTENSIONS,0);
cout<<ext2<<endl;

It works OK; the problem is that if I call glGetStringi outside the function, I have to load it again…

It’s probably an easy task, but I’m unable to make it “global”.

Another question: if I need to use this:

glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, 0, 0);

How can I load that function?

Thank you.

> First, 190 extensions? Is this correct? When I open the OpenGL Extensions Viewer it tells me that I have 210 :S

When you say you made a 3.3 context with SDL, do you mean SDL 1.3? If so, is the context core or compatibility, and is it a forward-compatible context? If the answer to either is yes, then that explains the difference: some of those extensions were put into the GL spec before GL3, and at GL3 those functions were taken out.

> It works OK; the problem is that if I call glGetStringi outside the function, I have to load it again…
>
> It’s probably an easy task, but I’m unable to make it “global”.

That is correct: you need to make each of those function pointers global. I suggest making a header file with each function pointer declared as extern, and a source file with each such pointer initialized to NULL. Then you will need to load the function pointers too. The easiest way around this is to just use one of the (many) GL function loaders out there, for example GLee or GLEW, which do this for you automagically by initializing each such function pointer to a stub that loads the real function, sets the pointer, and calls it. Ahh, machine-generated code :whistle:

There’s a library called GLEW which will help greatly. You make one call to glewInit() and it automatically finds out what your system is capable of and gives you a set of simple variables, so you can just say if( GLEW_VERSION_2_0 ) and, if that is non-zero, you can be sure that GL 2.0 is supported in its entirety. It also sorts out the function pointers, so you can use, as in your example, glVertexAttribPointer, as well as just about everything else not available in the standard Win32 GL implementation.
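For reference, a minimal GLEW setup could look roughly like this (a sketch, not from the thread: it assumes GLEW is installed and that the SDL-created context already exists before glewInit() is called; glewExperimental is set because older GLEW versions otherwise miss some core-profile entry points):


#include <GL/glew.h>   // must be included before any other GL header
#include <iostream>

// Call once, right after SDL has created the GL context.
bool init_extensions()
{
    glewExperimental = GL_TRUE;   // helps GLEW fetch all entry points in core profiles
    GLenum err = glewInit();
    if (err != GLEW_OK)
    {
        std::cerr << "glewInit failed: " << glewGetErrorString(err) << std::endl;
        return false;
    }
    if (!GLEW_VERSION_3_3)        // non-zero when all of GL 3.3 is available
    {
        std::cerr << "OpenGL 3.3 is not fully supported" << std::endl;
        return false;
    }
    // From here on glGetStringi, glVertexAttribPointer, etc. can be called directly.
    return true;
}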

Yes I’m using SDL 1.3. I have this right now:


#define GL3_PROTOTYPES 1
#include <gl3/gl3.h>
#include <SDL.h>
..
..

I think GL3_PROTOTYPES 1 is what defines the use of the core profile, no? How should I check whether I get a core profile?

And yes, I used GLEW. It’s very easy, but how can I use GLEW to ONLY load 3.3 CORE functions? I don’t want the deprecated functions to be usable, because I’m an OpenGL noob and I want to start fresh, learning without seeing deprecated functions like glBitmap in my IDE… :frowning:

Thank you

Please, can anyone post an example of what kRogue said?

> That is correct: you need to make each of those function pointers global. I suggest making a header file with each function pointer declared as extern, and a source file with each such pointer initialized to NULL. Then you will need to load the function pointers too.

For example with glGenVertexArrays and glGetStringi.

Thank you :slight_smile:

First question: what OS are you using (Linux or MS-Windows)?

If the answer is Linux, and you don’t care about redistributing your binaries (and possibly rebuilding on updating your GL driver), then you can get away with:


#define GL3_PROTOTYPES 1
#include <gl3/gl3.h>

and in your linker, -lGL.
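A hedged sketch of what that buys you (assuming the same SDL-created 3.3 context as earlier in the thread, and that gl3.h lives at the path shown; header locations differ per distribution): with the prototypes defined and -lGL on the link line, the core entry points can be called directly, with no manual loading at all:


#define GL3_PROTOTYPES 1
#include <gl3/gl3.h>
#include <iostream>

// Assumes a 3.3 context has already been created with SDL.
void print_all_extensions()
{
    GLint n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (GLint i = 0; i < n; ++i)
        std::cout << glGetStringi(GL_EXTENSIONS, (GLuint) i) << std::endl;
}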

If you are on MS-Windows, opengl32.lib does not list the GL3 entry points, and now you come to an issue: what to do?

Myself, I wrote a small blab that takes a gl3.h header file as input, extracts the function names, and then generates the kind of code that GLee and GLEW have, but just for the functions in gl3.h… the blab is ugly and hacky, and the produced header and sources are ugly too, etc.

Another approach, though I don’t think it is a good idea: use the above to get the code to compile (but not link)… then, if it compiles, rebuild it using GLee or GLEW in place of GL3… this will make sure you do not use any functions outside the core profile…

In truth, the correct thing to do is to write a perl/awk/whatever script that reads the contents of the gl.spec file and extracts the GL3 functions and tokens.

The example code is like this:

Header file:


extern PFNSOMEFUNCTIONPROC glSomeFunction;

Source file:


static return_type glSomeFunction_init(arguments)
{
   glSomeFunction = (PFNSOMEFUNCTIONPROC) SDL_GL_GetProcAddress("glSomeFunction");
   assert(glSomeFunction != NULL);
   return glSomeFunction(arguments);
}

PFNSOMEFUNCTIONPROC glSomeFunction = glSomeFunction_init;



If you do this for ALL functions, then you do not even need to link against GL; you can still include gl3.h, as long as you do NOT have GL3_PROTOTYPES defined (otherwise glSomeFunction appears in both gl3.h and your header, which is not a good thing).
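For the two functions asked about above, a concrete sketch of that pattern could look like the following (my own illustration, not kRogue’s exact code: it assumes the PFNGL…PROC typedefs come from gl3.h and that the SDL GL context exists by the time the functions are first called; the file names are hypothetical):

Header file:


// gl_funcs.h (hypothetical file name)
#include <gl3/gl3.h>   // provides the PFNGL...PROC typedefs; do NOT define GL3_PROTOTYPES

extern PFNGLGETSTRINGIPROC      glGetStringi;
extern PFNGLGENVERTEXARRAYSPROC glGenVertexArrays;

Source file:


// gl_funcs.cpp (hypothetical file name)
#include <cassert>
#include <SDL.h>
#include "gl_funcs.h"

// The first call goes through the _init stub: it fetches the real entry point,
// stores it in the global pointer, and forwards the call. APIENTRY matches the
// calling convention used in the PFN typedefs.
static const GLubyte * APIENTRY glGetStringi_init(GLenum name, GLuint index)
{
    glGetStringi = (PFNGLGETSTRINGIPROC) SDL_GL_GetProcAddress("glGetStringi");
    assert(glGetStringi != NULL);
    return glGetStringi(name, index);
}

static void APIENTRY glGenVertexArrays_init(GLsizei n, GLuint *arrays)
{
    glGenVertexArrays = (PFNGLGENVERTEXARRAYSPROC) SDL_GL_GetProcAddress("glGenVertexArrays");
    assert(glGenVertexArrays != NULL);
    glGenVertexArrays(n, arrays);
}

PFNGLGETSTRINGIPROC      glGetStringi      = glGetStringi_init;
PFNGLGENVERTEXARRAYSPROC glGenVertexArrays = glGenVertexArrays_init;


Any translation unit that includes gl_funcs.h can then call glGetStringi(GL_EXTENSIONS, i) or glGenVertexArrays(1, &vao) directly; the first call does the lookup, and later calls go straight through the pointer.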

> If the answer is Linux, and you don’t care about redistributing your binaries (and possibly rebuilding on updating your GL driver), then you can get away with:

Even then, you should still be able to redistribute your binaries, right? But they would just depend on the final system having an accelerated GL driver installed that supports GL 3.0?

At least on my machine, I can link against /usr/lib/libGL.so at compile time, but at runtime, the dynamic linker will use /usr/lib/nvidia-current/libGL.so or presumably the AMD equivalent if that’s what is installed.

So I think: nothing fancy should be required. Just #include the right header, and away you go using GL 2/3 functions, that simple…

Thank you, kRogue. I will use GLEW, but I will have to check the usage of every function.

Thanks!

> Even then, you should still be able to redistribute your binaries, right? But they would just depend on the final system having an accelerated GL driver installed that supports GL 3.0?
>
> At least on my machine, I can link against /usr/lib/libGL.so at compile time, but at runtime, the dynamic linker will use /usr/lib/nvidia-current/libGL.so or presumably the AMD equivalent if that’s what is installed.
>
> So I think: nothing fancy should be required. Just #include the right header, and away you go using GL 2/3 functions, that simple…

I have a memory that I could not get away with this, though it was quite some time ago: I’d build my application using GL/gl.h and GL/glext.h with the prototypes defined (there was some macro to define to get the function prototypes declared in glext.h; GL_GLEXT_PROTOTYPES, if I recall correctly), but when I updated drivers, the application would then fail to start until I re-linked it. I am pretty sure about this… nowadays I don’t link against GL at all and I always fetch the function pointers at runtime… I am also like 99% sure that LGP does the exact same thing.