Guide on transitioning from core OpenGL to nVidia OpenGL?

Hi,

I was wondering if there are docs that cover the details of transitioning from standard OpenGL to nVidia OpenGL, for developers who are getting started with programming for nVidia hardware?
For starters, how do I specify the library/header settings etc. for building nVidia OpenGL apps as compared to standard OpenGL apps?
For example, I couldn't find the standard "opengl32.lib" or "gl.h" files in the nVidia OpenGL SDK package. Am I supposed to use the ones supplied with VC++? What library/header options do I have to specify if I want to access the NV extensions to OpenGL?
(I would have thought details like this would be exhaustively documented… but unfortunately I can't seem to find anything of the sort.)

Regards,
Melvin Eng.

nVidia doesn't supply opengl32.lib or gl.h - that's M$'s job. Look in the SDK for glext.h and wglext.h - those have the declarations for most of the functions you'll need.

Start slowly: maybe change a vertex array app to VAR (vertex array range) or something to get the hang of it. Look at old whitepapers/demos on nVidia's developer site. Look through the SDK demos too, although they may be a little advanced to understand since they all use the NVEffectBrowser now.

Do a search on this board and the advanced board for other relevant posts, using keywords like "extensions" or the name of any of the extensions you want to use (search link at the top right of the page). Older links on nVidia's site and here will probably be easier for you to understand. It's really quite easy once you get the hang of it.

Hope that helps.

nVidia doesn't supply opengl32.lib or gl.h - that's M$'s job. Look in the SDK for glext.h and wglext.h - those have the declarations for most of the functions you'll need.
=>But I keep wondering where those nVidia-specific functions are implemented… it couldn't possibly be in MS's opengl32.lib/dll, so where are they implemented? (i.e. exactly how is access to nVidia-specific hardware functionality achieved from the user-app level?)
=>I'm just befuddled as to why there isn't any "official" doc/guide on nVidia's dev site explaining exactly how to configure one's project settings to take advantage of the various nVidia features… I did plunge into the header files for a quick look at some point, but quickly got discouraged by the confusion :~<
=>Emails to the dev relations helpline returned nothing, as did applications to join the developers' program (just how on earth is a newbie nVidia OGL programmer supposed to get started?)

btw…thanks for the advice! (really appreciate it ;^)

The functions are implemented in nVidia’s nvOpenGL32.dll, along with OpenGL 1.2 (and soon OpenGL 1.3) functions. To use the functions, you have to get a pointer to them, hence you will often see references on these boards to wglGetProcAddress(). Using the pointers gives you the correct function at run-time from the dll.

There's no official nVidia guide, I guess, because it started off being an easy technique in the early days of extensions. That's why I suggested looking at earlier extension demos. Bear in mind that with extensions you have to do the same thing no matter what the hardware, so you could also check out ATI's dev site - maybe they have some info.

Lots of people here probably have demos of how to use extensions. Check out Nutty’s demos at http://www.nutty.org for some good ones. LordKronos wrote the per-pixel lighting demo on nVidia’s site - maybe you could look at that one since he included lots of docs with it.

Dev relations have been quiet recently - they've ignored a couple of e-mails from me too. Maybe they're getting ready to release the Det4's. If you want to join the registered dev program, be prepared for a long wait - all I can suggest is to keep trying, e.g. once a month. You may need a good reason as well, e.g. developing a game, since they have had problems with reg'd devs leaking drivers in the past (check out all the leaked drivers on the web).
Here’s a quick example of using an OpenGL 1.2 function which is declared in glext.h:

#include <windows.h>   // must come before the GL headers on Win32; also declares wglGetProcAddress
#include <GL/gl.h>
#include <GL/glext.h>
#include <assert.h>
...
PFNGLDRAWRANGEELEMENTSPROC glDrawRangeElements =
    (PFNGLDRAWRANGEELEMENTSPROC) wglGetProcAddress("glDrawRangeElements");
assert(glDrawRangeElements);  // NULL if the driver doesn't export it
...
// Use it down here somewhere.

The procedure is the same for nVidia extension functions. You may also want to look at Cass' wrapper glh_init_extensions(char *str), which makes life easier for extensions that define multiple functions.

Also, search the boards for “wglGetProcAddress” for lots of examples like the one I just gave.
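One thing the snippet above skips, and worth doing in real code: before grabbing any pointers, check that the driver actually advertises the extension in the glGetString(GL_EXTENSIONS) string. A naive strstr() can match a prefix of a longer extension name, so here's a minimal sketch of a safer check (the has_extension helper is my own name, not something from the SDK):

```c
#include <string.h>

/* Return 1 if `name` appears as a complete, space-delimited token in
 * `ext_list` (the format of the string glGetString(GL_EXTENSIONS)
 * returns). A plain strstr() would wrongly report e.g.
 * "GL_NV_texture_shader" as present when the driver only exports
 * "GL_NV_texture_shader2". */
static int has_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;

    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == ext_list) || (p[-1] == ' ');
        int ends_ok   = (p[len] == ' ') || (p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;
        p += len;  /* skip past this partial match and keep looking */
    }
    return 0;
}
```

In a real app you'd pass (const char *)glGetString(GL_EXTENSIONS) - which needs a current GL context - and only call wglGetProcAddress once the check passes.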

Hope that helps.

The functions are implemented in nVidia’s nvOpenGL32.dll, along with OpenGL 1.2 (and soon OpenGL 1.3) functions.
=>Strange… I have an "nvopengl.dll" instead of an "nvopengl32.dll"… I wonder if this spells trouble, or is it just an older version? (btw, is this file installed by the nVidia ref driver or by the SDK? If it's the SDK, then it can't be a deprecated version, since I've installed the latest SDK.)

To use the functions, you have to get a pointer to them, hence you will often see references on these boards to wglGetProcAddress(). Using the pointers gives you the correct function at run-time from the dll.
=>Thanks for the ptr… but then there's this "glh_extensions.h" that seems to do all this for the user… am I right in presuming so?
=>btw, do you know of any docs on the "glh" helper toolkit? (Things like this kinda worry me, coz there's absolutely no mention of it whatsoever on the dev site… I simply wouldn't know what I was missing if I hadn't chanced upon it.)

There’s no official nVidia guide, I guess, because it started off being an easy technique in the early days of extensions. That’s why I suggested looking at earlier extension demos. Bear in mind that using extensions you have to do the same thing no matter what the hardware so you could check out ATI’s dev site - maybe they have some info.

Lots of people here probably have demos of how to use extensions. Check out Nutty’s demos at http://www.nutty.org for some good ones.
=>Excellent website, I must say… I'm trying to solicit some advice from this cool chap.
LordKronos wrote the per-pixel lighting demo on nVidia’s site - maybe you could look at that one since he included lots of docs with it.

The procedure is the same for nVidia extension functions. You may also want to look at Cass’ wrapper glh_init_extensions(char *str) which makes life easier for extensions which define multiple functions.
=>Again, do you have any docs on that? (I've seen it appear in the sample app source code… but no explanation of what it does at all.)
=>I guess what I'm looking for is a more systematic approach to tackling all this nVidia-OGL-and-extensions business… sort of a beginner-to-intermediate-to-advanced progression, instead of diving headfirst into the deep end and then frantically grabbing onto anything that floats (which pretty much sums up my sentiments for now) :^(
=>Any advice you can offer along these lines would be great!

Also, search the boards for “wglGetProcAddress” for lots of examples like the one I just gave.

Hope that helps.
=>Thanks… any advice/tips you can give certainly help!

You're right about nvOpenGL.dll vs nvOpenGL32.dll. I noticed after posting that I'd misspelled it, but didn't correct it because sometimes when I edit long posts it stuffs up the repost. I also hoped you wouldn't notice.

You're also right about glh_extensions.h providing a function that helps out. That's the glh_init_extensions function I mentioned. For example, I have a file that includes:

if (!glh_init_extensions("GL_NV_vertex_program "
                         "GL_NV_texture_shader "
                         "GL_NV_register_combiners "
                         "GL_ARB_multitexture ")) {
    // bail out here - at least one extension wasn't initialised
}

and that initialises a whole bunch of functions for me (note the trailing space after each name - it's a single space-separated string). Internally, though, it's probably just doing a few wglGetProcAddress calls. The reason nVidia hasn't documented it is probably that there's really only the one function to use in glh_extensions.h, and it's quite easy to use.
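To make "a few wglGetProcAddress calls" concrete, here's a rough sketch of the bookkeeping a helper like that has to do - walk the space-separated list and run a loader once per name. This is a guess at the shape, not the actual glh code; for_each_extension and the loader callback are hypothetical names:

```c
#include <string.h>

/* Hypothetical sketch: walk a space-separated extension list (the same
 * format glh_init_extensions() takes) and call `load` once per name.
 * Returns 0 if any loader call fails, mirroring the all-or-nothing
 * boolean result of the real helper. */
static int for_each_extension(const char *list,
                              int (*load)(const char *name))
{
    char buf[64];
    int ok = 1;

    while (*list) {
        while (*list == ' ') list++;      /* skip separators */
        size_t len = strcspn(list, " ");  /* length of the next token */
        if (len == 0) break;
        if (len >= sizeof buf) return 0;  /* name too long for the buffer */
        memcpy(buf, list, len);
        buf[len] = '\0';
        if (!load(buf)) ok = 0;  /* real code: wglGetProcAddress per entry point */
        list += len;
    }
    return ok;
}
```

The real helper would, for each recognised extension name, fetch every entry point that extension defines with wglGetProcAddress and stash the pointers; this sketch is only the string-walking around that.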

Nutty posts regularly on the advanced discussion board. That's also where a lot of the discussion about extensions happens, so it pays to check there for information. You could always ask Cass to write some documentation - he's also a regular on the advanced board, and writes a lot of nVidia's demos. What you're looking for has probably been covered in one or two of nVidia's whitepapers anyway. I presume you've also downloaded the nVidia OpenGL SDK docs? They're all available as individual docs as well.

My best advice would be to search the advanced discussion board. There will be heaps of information for you to read there. It's really a simple concept once you've done it once or twice. Try starting off with a simple example like multitexturing: you should be able to find docs that explain it, and it's not much different from using a single texture. If you own the red book, 3rd edition, there's an example in there on multitexturing. I would send you an example file, but all my programs use huge amounts of textures and I don't have the bandwidth to handle them. I'd also advise getting the nVidia extension specs and reading up on the extensions you want to use before trying them, so you know what they do.

In summary, you only need to know how to use glext.h and wglGetProcAddress (or, alternatively, glh_init_extensions), so it's not that much - plus, of course, how to use the extensions you're interested in. You'll pick it up in no time. BTW, maybe there's some info in the FAQ on this site regarding extensions?

Hope that helps.