off-screen rendering

Hi all,

I am trying to grab some random memory, let GL render into it, and then get that memory for use by another thread. No window is necessary or wanted. I have tried using FBConfigs, XPixmaps, and GLXPixmaps, to no avail. So how can I render off-screen efficiently?

Is it possible to create a GLX context and link it to a memory segment of my choice?
Since I do not care about hardware acceleration, but I do care about screen depth and size, I do not like being restricted to the visuals glxinfo -t can give me… is it possible to get an arbitrary visual into which I can render?

Thanks,
Robert

You want to do OpenGL display outside of a window? That sounds very interesting, but I have no idea how to do it.

I would like to know.

JD

If you are restricted to OpenGL and GLX, then you are restricted to the visuals the GLX driver supports (and glxinfo returns). The specification says that not all X visuals have to be supported (there is a required minimum, but I cannot remember it…).
Furthermore, you can only render to drawables, i.e. windows and pixmaps. There are also pbuffers, introduced in GLX 1.3 and supported by the NVIDIA drivers via extensions. Those are off-screen “drawables”, as far as I know intended specifically for rendering directly to off-screen memory.
What are the difficulties with the pixmap approach?

I’ve tried pbuffers, but as far as I can tell they only work with an FBConfig, which just returns NULL in my case. I don’t seem to have framebuffer support enabled.

My problem with pixmaps has been “solved” since yesterday. I first created a small window with the visual I had chosen; then I tried the same with a pixmap. No problem until glXMakeCurrent, which failed. As it turns out, RGBA is supported in a window but not when I use a pixmap. So this doesn’t really solve my problem, since I’m now stuck with an 8-bit indexed-color GLXPixmap, while I need 24-bit RGBA.

Any ideas?

Next to buying a new video card that is.

PS: Can anyone tell me why, for off-screen rendering, you are stuck with the visuals GLX provides?

If you consider hardware rendering on X, you must consider the underlying client/server architecture. Rendering does not occur in your (client) process space, but on the server side. That’s all a pixmap is: it lives on the server, and you cannot access its framebuffer memory unless you use the X shared-memory extension (which shares the framebuffer memory between client and server). XImages, on the contrary, are off-screen images that reside on the client side. Thus you can only use server-side operations (CopyArea, PutPixel, GetPixel) on a GLXPixmap, for instance. As for the pbuffer extension, I’m not sure what it offers.

Only if you’re using Mesa as a library (and thus a software rendering library; I’m neglecting the 3DFX case) will rendering occur in your process space. The OSMesa interface will let you easily access the framebuffer memory. DRI won’t.

Originally posted by kaas:
I’ve tried pbuffers, but they only work as far as I can tell with a FBConfig, which just returns null in my case.

That’s because glXChooseFBConfig is a GLX 1.3 function, which is not supported by the NVIDIA driver. You should be able to get pbuffers to work without using this call.

– Niels


If you consider hardware rendering on X, you must consider the underlying client/server architecture.


I have an IMAP (parallel video processor) installed and an HP Visualize fx which I cannot seem to get working. I don’t need it; I’m not looking for speed, just software that works on any OpenGL system we run, which is usually not hardware-accelerated.


XImages are off-screen images that reside on the client side. Thus you can only use server-side operations (CopyArea, PutPixel, GetPixel) on a GLXPixmap, for instance. As for the pbuffer extension, I’m not sure what it offers.


Does this mean I could use XImages to render to an arbitrary-size image with GL?


Only if you’re using Mesa as a library


I am, 4.0.1. But I don’t use the Mesa X calls.


(and thus software rendering library, I’m neglecting the 3DFX case), rendering will occur in your process space. The OSMesa interface will let you easily access the framebuffer memory. DRI won’t.


But I still can’t get any visuals with a depth greater than 8… help!


That’s because glXChooseFBConfig is a GLX 1.3 function, which is not supported by the NVIDIA driver. You should be able to get pbuffers to work without using this call.


No NVIDIA. No hardware at all. Just Mesa 4.0.1, which installs client- and server-side GLX 1.4.

I figure that, especially with software-only rendering, I should be able to render into some piece of memory with arbitrary size and depth. Shouldn’t I?

Originally posted by Niels Husted Kjaer:
[b] That’s because glXChooseFBConfig is a GLX 1.3 function, which is not supported by the NVIDIA driver. You should be able to get pbuffers to work without using this call.

– Niels[/b]

As far as I can recall, the CreatePbuffer call only takes an FBConfig as input, not just a visual. How can I get it to work without the FBConfig call?

I guess Niels is referring to the NVIDIA drivers, which support pbuffers via an SGI extension. Correct me if I am wrong.

I figure that especially with just software I should be able to render into some piece of mem with an arbitrary size and depth. Shouldn’t I?

Yes, maybe you’re right. But when you use OpenGL with X, you have to use GLX (you are not using Mesa-specific X calls, as you said). And AFAIK there is no way to render to memory other than to a window, a pixmap, or a pbuffer (since GLX 1.3). That is the specification; you cannot change it.

But the problem with the visual and the pixmap is interesting. I will try it at home and see if I encounter the same problem…

Originally posted by plastichead:
I guess Niels is referring to the nvidia drivers, which support pbuffers via an SGI extension. Correct me, if i am wrong.

No, you were right. I was referring to the NVIDIA drivers - sorry…

– Niels

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.