Minimum calls for GL context creation

Hello everyone,

I’m designing an API where two processes are running: one performs computations with GL (off-screen rendering) and the other performs visualization. I can’t modify the visualization process.
I’d like to know the minimum code needed to set up a valid OpenGL context so that I can run my off-screen computations (based on GL). I’m running under Windows 2000, but cross-platform code would be even better…
It’s odd: I’ve been writing GL code for several years, but since I was always using GLUT I never had to think about this…

Thanks for any suggestions,

and a little side question: given that I’ll have two valid OpenGL contexts alive at the same time (one for visualization, the other for computation), won’t that be bad for performance? Would I be better off reusing the GL context of the visualization process?

Many thanks again !

GlOrbOfCoding()

For off-screen rendering, use a pbuffer. The specs are in the extension registry, and tutorials on how to set one up can be found on vendor sites (ATI, NVIDIA).

EDIT: Remember that OpenGL is not thread-safe. If you have a multithreaded app, it is suggested that you do all GL interfacing from one thread. And don’t even think about sharing a context between two processes.


You’ll have to perform the following steps to create a rendering context under Win32:

  1. obtain the device context of the target window by calling GetDC()
  2. create a PIXELFORMATDESCRIPTOR structure and fill it in with appropriate content (see MSDN)
  3. call ChoosePixelFormat()
  4. call SetPixelFormat()
  5. create an OpenGL rendering context by calling wglCreateContext()
  6. attach the OpenGL rendering context to the device context by calling wglMakeCurrent()
  7. do your own OpenGL operations
  8. detach the OpenGL rendering context from the device context by calling wglMakeCurrent(NULL, NULL)
  9. repeat steps 6-8 as many times as you need
  10. destroy the OpenGL rendering context by calling wglDeleteContext()

It’s recommended to execute steps 1-5 in response to the WM_CREATE message, steps 6-9 in response to WM_PAINT (or other user-interface messages), and step 10 in response to WM_DESTROY.

This describes using OpenGL in a plain Win32 environment; using it from MFC is similar.
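In code, those steps look roughly like this (a minimal sketch: error handling is omitted, and the window handle hwnd is assumed to have been created already):

    // Minimal sketch of steps 1-6: create a WGL rendering context for a window.
    // Assumes 'hwnd' is a valid window handle; error checks are omitted.
    #include <windows.h>

    HGLRC CreateGLContext(HWND hwnd, HDC* outDC)
    {
        HDC dc = GetDC(hwnd);                       /* step 1: device context  */

        PIXELFORMATDESCRIPTOR pfd = {0};            /* step 2: describe format */
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 24;
        pfd.cDepthBits = 24;

        int format = ChoosePixelFormat(dc, &pfd);   /* step 3: closest match   */
        SetPixelFormat(dc, format, &pfd);           /* step 4: bind to the DC  */

        HGLRC rc = wglCreateContext(dc);            /* step 5: create context  */
        wglMakeCurrent(dc, rc);                     /* step 6: make it current */

        *outDC = dc;
        return rc;  /* later: wglMakeCurrent(NULL, NULL); wglDeleteContext(rc); */
    }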

But if I understand correctly, you need to render the OpenGL scene to memory first. In that case I recommend setting the dwFlags member of PIXELFORMATDESCRIPTOR to PFD_DRAW_TO_BITMAP, creating a so-called memory device context by calling CreateCompatibleDC(), creating and selecting an appropriate bitmap into that context, and then performing all the steps described above with that memory device context.
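A rough sketch of that memory-DC setup (assuming a 24-bit RGB target; note that PFD_DRAW_TO_BITMAP formats are typically handled by Microsoft’s unaccelerated generic implementation):

    // Sketch: set up a DIB-backed memory DC for off-line OpenGL rendering.
    #include <windows.h>

    HDC CreateMemoryGLDC(int width, int height)
    {
        BITMAPINFO bmi = {0};
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = width;
        bmi.bmiHeader.biHeight      = height;
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 24;
        bmi.bmiHeader.biCompression = BI_RGB;

        void* bits  = NULL;
        HDC memDC   = CreateCompatibleDC(NULL);   /* memory device context */
        HBITMAP bmp = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
        SelectObject(memDC, bmp);                 /* select bitmap into DC */

        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 24;
        pfd.cDepthBits = 24;

        SetPixelFormat(memDC, ChoosePixelFormat(memDC, &pfd), &pfd);
        return memDC;   /* now wglCreateContext(memDC) etc. as above */
    }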

Good Luck.

The above is for an on-screen buffer. He wanted an off-screen buffer. That is very different.

Thanks for all the suggestions…

Roffe, all the coding for off-screen rendering is already written and fully working. My problem is that in my previous usage I had my own visualization attached, and hence had no context problem.

Now I’d like to set up a rendering context for off-line rendering, ideally a minimal one. The fact is, I don’t fully understand the “why” of the different functions (ChoosePixelFormat, GetDC, etc…), and so I don’t know which ones are needed and which ones are not…

Thanks for your time,
any explanations or hints welcome!

glOrbOfCoding()

Originally posted by glOrbOfCoding():
Roffe, all the coding for off-screen rendering is already written and fully working.

Rendering commands are the same whether they target an on-screen or an off-screen buffer; that part is trivial. It is not what I meant.


My problem is that in my previous usage I had my own visualization attached, and hence had no context problem.

I have no idea what you are trying to say.


I’d like to know the minimum code needed to set up a valid OpenGL context so that I can run my off-screen computations (based on GL).


Now I’d like to set up a rendering context for off-line rendering, ideally a minimal one.

I’m confused. What do you want: an off-screen buffer (pbuffer) or an off-line buffer (bitmap/DIB)?

And what do you mean by “ideally a minimal one”? Every render buffer/target needs a rendering context.

Basic steps for off-screen (pbuffer) setup (a code sketch follows these lists):
- find a suitable pixel format
- create the pbuffer
- get the pbuffer’s DC
- create a new rendering context, or share one with another DC
- switch to that context with wglMakeCurrent

For on-screen setup:
- see the post above

For off-line (bitmap/DIB) setup:
- search Microsoft’s website
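Condensed, those pbuffer steps look something like this (a sketch only: it assumes a dummy GL context is already current, which is required before wglGetProcAddress will return the ARB entry points, and that <GL/wglext.h> is available):

    // Sketch: create a pbuffer and a rendering context for it.
    // Assumes a dummy GL context is already current and that
    // <GL/wglext.h> declares the WGL_ARB_pbuffer/pixel_format types.
    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>

    HGLRC CreatePbufferContext(HDC windowDC, int w, int h)
    {
        /* fetch the extension entry points */
        PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
            (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
        PFNWGLCREATEPBUFFERARBPROC wglCreatePbufferARB =
            (PFNWGLCREATEPBUFFERARBPROC)wglGetProcAddress("wglCreatePbufferARB");
        PFNWGLGETPBUFFERDCARBPROC wglGetPbufferDCARB =
            (PFNWGLGETPBUFFERDCARBPROC)wglGetProcAddress("wglGetPbufferDCARB");

        /* 1. find a pixel format that can back a pbuffer */
        const int attribs[] = {
            WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
            WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
            WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
            WGL_COLOR_BITS_ARB,      24,
            0 };
        int format; UINT count;
        wglChoosePixelFormatARB(windowDC, attribs, NULL, 1, &format, &count);

        /* 2./3. create the pbuffer and get its DC */
        HPBUFFERARB pbuffer = wglCreatePbufferARB(windowDC, format, w, h, NULL);
        HDC pbufferDC = wglGetPbufferDCARB(pbuffer);

        /* 4./5. create a rendering context for it and switch to it */
        HGLRC rc = wglCreateContext(pbufferDC);
        wglMakeCurrent(pbufferDC, rc);
        return rc;
    }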


Originally posted by roffe:
The above is for an on-screen buffer. He wanted an off-screen buffer. That is very different.

roffe, read the last paragraph once more carefully; there I described how to render into a memory context, not on screen.

Ah, sorry, my bad.

Personally, I will never understand why people are interested in doing bitmap/DIB/MS rendering when there are so many better alternatives for software rendering.

Oops… it seems I was not clear at all in this thread.

I have an algorithm that uses the GPU to compute things that are NOT to be displayed (with render-to-texture and fragment programs). This is what I call off-screen rendering (sorry if I misused the word).
In my previous usage I was displaying part of the result (and therefore had a valid GL context). Now I want these computations to be standalone.
What I mean by “minimum context” is the minimum number of calls needed to set up a valid rendering context (one that won’t be used for on-line rendering). I already tried, for instance, calling:

  • glutInit
  • glutInitDisplayMode
  • glutCreateWindow

and then running the code. This solution works, but I want to know whether there is a simpler way to define a context, with calls to only the functions I really need.
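For reference, that GLUT approach boils down to something like this minimal sketch (the window exists only to obtain a context; glutHideWindow() merely requests hiding, which GLUT processes in its event loop):

    // Minimal GLUT setup whose only purpose is to obtain a GL context.
    #include <GL/glut.h>

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGBA | GLUT_SINGLE);
        glutCreateWindow("dummy");   /* creates a window and a current GL context */
        glutHideWindow();            /* we only want the context, not the window  */

        /* ... run the GPU computation here ... */
        return 0;
    }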

Thanks for your time, guys!

glOrbOfCoding()

Originally posted by glOrbOfCoding():
What I mean by “minimum context” is the minimum number of calls needed to set up a valid rendering context (one that won’t be used for on-line rendering). I already tried, for instance, calling:

  • glutInit
  • glutInitDisplayMode
  • glutCreateWindow

Well, GLUT is the easiest way to create an on-screen rendering context.

But again, for rendering that isn’t supposed to be displayed, use a pbuffer.

Roffe, thanks for your patience, but I still don’t get the point…

Right now I’m using NVIDIA’s pbuffer class to perform my computations on the GPU. On Init, those pbuffers ask for a DC and a valid context. It seems to me that I have to create that data first in order to use the pbuffer. So my question remains… But does my question make sense?

glOrbOfCoding()

Originally posted by glOrbOfCoding():
Right now I’m using NVIDIA’s pbuffer class to perform my computations on the GPU. On Init, those pbuffers ask for a DC and a valid context. It seems to me that I have to create that data first in order to use the pbuffer. So my question remains… But does my question make sense?

Much more sense.

But it’s strange that NVIDIA’s pbuffer init code requires a context. Are you sure this isn’t optional?

Yes, it is true: to create a pbuffer you need a valid DC. Not just any DC, but one that represents the video card.

So how do you find the correct DC to use?
You need to create an invisible dummy window with the correct pixel format. For best results, iterate through DescribePixelFormat() (1, 2, 3… until it fails) and pay special attention to the dwFlags member of the PFDs. You definitely want PFD_DRAW_TO_WINDOW and PFD_SUPPORT_OPENGL. Look up the PFD specs for more info on this; it’s tricky. When you think you have a pixel format that is hardware accelerated, call SetPixelFormat(), create a rendering context, and switch to it. Now that you have an RC, check GL_VENDOR and make sure you didn’t get the MS generic implementation. If you did, pick another pixel format and start over.

When you finally have a valid DC, use it to create a pbuffer and, optionally, a new RC. After that you can destroy the dummy window to free resources. Getting this to work on all platforms is tough. The Q3 engine does a pretty good job, but still, lots of people have problems (see the User forum). A sketch of the dummy-window approach follows below.
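Put together, the dummy-window approach looks roughly like this (a sketch under the assumptions above: the class name "glDummy" is made up for this example, error handling is omitted, and the PFD_GENERIC_FORMAT test is one common way to skip unaccelerated formats):

    // Sketch: dummy window, pixel-format enumeration, and a GL_VENDOR check.
    #include <windows.h>
    #include <GL/gl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        WNDCLASSA wc = {0};
        wc.lpfnWndProc   = DefWindowProcA;
        wc.hInstance     = GetModuleHandle(NULL);
        wc.lpszClassName = "glDummy";        /* made-up name for this sketch */
        RegisterClassA(&wc);

        /* invisible dummy window, used only to obtain a DC */
        HWND hwnd = CreateWindowA("glDummy", "", WS_POPUP, 0, 0, 8, 8,
                                  NULL, NULL, wc.hInstance, NULL);
        HDC dc = GetDC(hwnd);

        /* iterate pixel formats; DescribePixelFormat returns the max index */
        PIXELFORMATDESCRIPTOR pfd;
        int chosen = 0;
        int count = DescribePixelFormat(dc, 1, sizeof(pfd), &pfd);
        for (int i = 1; i <= count; ++i) {
            DescribePixelFormat(dc, i, sizeof(pfd), &pfd);
            DWORD want = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
            /* PFD_GENERIC_FORMAT marks Microsoft's software formats */
            if ((pfd.dwFlags & want) == want && !(pfd.dwFlags & PFD_GENERIC_FORMAT)) {
                chosen = i;
                break;
            }
        }

        SetPixelFormat(dc, chosen, &pfd);
        HGLRC rc = wglCreateContext(dc);
        wglMakeCurrent(dc, rc);

        /* double-check we didn't get the MS generic implementation */
        const char* vendor = (const char*)glGetString(GL_VENDOR);
        printf("GL_VENDOR: %s\n", vendor);
        if (strstr(vendor, "Microsoft"))
            printf("Generic implementation - pick another pixel format.\n");

        /* this DC is now suitable for pbuffer creation (see earlier sketch);
           afterwards the dummy window can be destroyed */
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(rc);
        ReleaseDC(hwnd, dc);
        DestroyWindow(hwnd);
        return 0;
    }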