cross platform opengl code that runs AND compiles

Hi,

I already asked a related question on Stack Overflow, just for reference:
here

Imagine I want to use glGenerateMipmap() on GL version >= 3.0 and glGenerateMipmapEXT() on all earlier versions. What is the common way to do this? First of all I do runtime checks using GLEW, but if I also want to compile on platforms that don’t support certain features, what macros do I have to use? Something like this?

#ifdef GL_VERSION_3_0
    glGenerateMipmap(m_glType);
#elif defined(GL_EXT_framebuffer_object)
    glGenerateMipmapEXT(m_glType);
#endif

According to the link I posted above, #ifdef GL_VERSION_3_0 won’t work if I want to distribute a binary, since it is only evaluated at compile time. Is that true? If so, what should I check for instead? Maybe #ifdef GL_FRAMEBUFFER?

thanks!

if I also want to compile on platforms that don’t support certain features, what macros do I have to use? something like this

It’s a runtime test whether a certain extension or version is supported. The actual function pointers will always be there; what matters is whether those function pointers are NULL or not.

Or to put it another way, you can’t really do this at compile time. Not with OpenGL defines.

I don’t understand; shouldn’t the above code work at compile time? I know I can do it at runtime. I will extend my example to clarify what I mean: I want to do both a compile-time and a runtime check to achieve two things. The compile-time check is to make the source code compile on multiple platforms where certain OpenGL functions/enums might not be available in the first place (a runtime check alone would cause compile errors otherwise). The runtime check is to actually choose the right feature on the platform the program is running on. I.e., in the example above the code would probably rather look like this:

if (!FeatureSupportImpl::instance().useGLExtension(GLSpecific::EXT_framebuffer_object))
{
    // this is just to avoid compile time errors
#ifdef GL_FRAMEBUFFER
    glGenerateMipmap(m_glType);
#endif
}
else
{
    // this is just to avoid compile time errors
#ifdef GL_EXT_framebuffer_object
    glGenerateMipmapEXT(m_glType);
#endif
}

FeatureSupportImpl::instance() is using GLEW internally to decide what to use at runtime. So basically the #ifdefs are only there so the code compiles on a platform where glGenerateMipmap might not exist at all.

Am I totally thinking wrong about this? if so, how can I achieve that the code compiles on platforms where certain features/functions/enums do not exist at all?

There won’t be compile errors if your function pointers are NULL. Just to remind you: the GL functions (at least those coming from extensions and from GL versions later than 1.1) are usually acquired using wglGetProcAddress (or its equivalent on other platforms), and the return value is assigned to a function-pointer variable. So glGenerateMipmap will exist as a symbol even if you don’t have OpenGL 3.0; you just cannot get a valid address for it from wglGetProcAddress, thus, as Alfonse said, it will likely have a NULL value.

So the only thing you need is the runtime check; the code will compile (if you use an appropriate version of glext.h, GLEW, or whatever). You just have to take care not to call invalid function pointers, because those will result in runtime crashes.

I believe that the concept of a compile-time check is bogus. Firstly, what happens if the code is compiled on machine A but run on machine B? How does machine A know what’s available on machine B? It doesn’t. You can only make assumptions that are valid at the time you make them.

This is said quite often and it might be time to say it again. OpenGL is not software. What is or isn’t available does not depend on the platform it’s running on. It depends on the hardware and drivers. What happens if you assume that “platform B doesn’t support glBuildMeABufferWorthyOfMordorARB” and two weeks later the video hardware vendor ships a new driver that actually does support it? A compile time check can’t deal with that. What happens if in a month’s time the user buys a new video card that supports OpenGL 6.0 whereas their previous one only supported 1.4? Can a compile time check deal with that?

So in short - just stick with runtime checks. Big-name programs have been doing it this way for over 15 years, some of them with explicit multi-platform support; it’s proven and it works.

@mhagain:
that’s why I mixed both.

@agnuep & Alfonso:
Ah okay, I understand now. I didn’t know that the function pointers are always defined even when the functions aren’t actually available.

What about enums, though? They are not always declared; how would you deal with that?

Are you using GLEW? This is something important to mention.
According to this doc http://glew.sourceforge.net/basic.html the check must be performed differently, only involving the runtime part, as the compile-time part is handled by GLEW itself.

Yes, I am using GLEW. Anyway, that does not solve the problem of undefined OpenGL enums, e.g.:


uint32 chooseGlInternalFormat(uint32 _pixelType, uint32 _pixelFormat)
{
    uint32 ret;
    //...
    if (GLEW_EXT_texture_integer || GLEW_VERSION_3_0)
    {
        bool bUseExt = !GLEW_VERSION_3_0; // if only the extension is available but not GL 3.0, fall back
        ret = bUseExt ? GL_LUMINANCE8UI_EXT : GL_R8UI;
    }
    //...
    return ret;
}

On versions prior to OpenGL 3.0, GL_R8UI is not defined, thus causing a compile-time error. Is there any alternative to defining all the enums myself (i.e. based on the newest specs) to make sure they exist? Any other ideas?

On top of GLEW (which is not always up to date with the OpenGL declarations), you can use your own declarations or use the latest glext.h from opengl.org (http://www.opengl.org/registry/api/glext.h).

This way you’ll have what you need at compile time.

Ah! Thanks, that was the thing I was missing. An up-to-date glext.h should fix this.