Correctly use hardware OpenGL rendering

Does anyone know how to do this?
How can I determine whether my rendering context uses hardware rendering or not?

I’m using a DIB (memory DC) in Windows.
Can I force OpenGL to render into the DIB using hardware acceleration?

Is there any way to render somewhere other than the visible part of the screen, while using a regular DC and without using DirectX?

Additionally, if anyone has good samples showing how to use OpenGL hardware rendering + DirectX, please send them to me.

Can’t answer about the DIB, but about detecting hardware rendering: when you choose a pixel format, you can use DescribePixelFormat to get the format’s description and see whether it will use an ICD or not. If it has the PFD_GENERIC_FORMAT flag, then it uses the Microsoft software renderer. If it has then it uses an MCD (an old OpenGL acceleration architecture nobody uses these days). Otherwise it uses an ICD.

The problem is, you can’t tell whether the ICD will render in software or in hardware. However, if you do have an ICD mode available, sticking to a minimal set of capabilities (mainly no stencil) should give you hardware acceleration.

I believe ET3D meant to say “if it has PFD_GENERIC_ACCELERATED then it uses an MCD” instead of “if it has then it uses an MCD”

j
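Putting the two posts together, a minimal sketch of the check (assuming hdc is the device context and iPixelFormat the chosen format index; names are illustrative):

```c
#include <windows.h>

/* Classify a pixel format: 2 = ICD (full hardware driver),
   1 = MCD (generic format, but accelerated),
   0 = Microsoft generic software renderer. */
int ClassifyPixelFormat(HDC hdc, int iPixelFormat)
{
    PIXELFORMATDESCRIPTOR pfd;
    DescribePixelFormat(hdc, iPixelFormat, sizeof(pfd), &pfd);

    if (pfd.dwFlags & PFD_GENERIC_FORMAT)
    {
        if (pfd.dwFlags & PFD_GENERIC_ACCELERATED)
            return 1;  /* MCD */
        return 0;      /* pure software */
    }
    return 2;          /* ICD */
}
```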

Thanks for answering my question. I’m already using the DescribePixelFormat function. As ET3D wrote, there’s still a chance that rendering is performed in software instead of hardware.

The problem is that I need the rendered data for further processing.
For this I’m using my own pixel format (24bpp) and a memory device context. This means that all rendering is performed without hardware acceleration :(
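For illustration, a minimal sketch of the memory DC + DIB setup being described (illustrative names, no error handling; PFD_DRAW_TO_BITMAP formats come only from the generic software renderer, which is why this path isn’t accelerated):

```c
#include <windows.h>
#include <GL/gl.h>

/* Create a 24bpp DIB, select it into a memory DC, and make a GL
   context current on it. All rendering here is software. */
HDC CreateDibGLTarget(int width, int height, void **bits)
{
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = height;
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 24;        /* 24bpp RGB */
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC     memDC = CreateCompatibleDC(NULL);
    HBITMAP dib   = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, bits, NULL, 0);
    SelectObject(memDC, dib);

    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;

    SetPixelFormat(memDC, ChoosePixelFormat(memDC, &pfd), &pfd);
    wglMakeCurrent(memDC, wglCreateContext(memDC));
    return memDC;
}
```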

I know that hardware-accelerated rendering depends strongly on the current video mode.

But why is that?
What if I don’t want to depend on the current video mode, but still want hardware rendering?

Is there any way to tell the GPU to render in a 24bpp format to an offscreen frame buffer, instead of at the current video mode’s color depth (supposing the GPU supports a 24bpp pixel format)?

Does anyone know whether there are any cards/drivers that support this feature, and what I would need to use it?

Rendering offscreen is always software, as it happens in main memory and not in the gfx-card’s memory.

Originally posted by Kilam Malik:
Rendering offscreen is always software, as it happens in main memory and not in the gfx-card’s memory.

No, by offscreen I meant video memory which is not visible to the user.

pbuffers are ‘offscreen windows’ that are hardware accelerated
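For reference, a rough sketch of a pbuffer setup via the WGL_ARB_pbuffer and WGL_ARB_pixel_format extensions (a dummy GL context must already be current so wglGetProcAddress works; error checks omitted, and not every driver exposes these extensions):

```c
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"  /* WGL_ARB_pbuffer / WGL_ARB_pixel_format tokens */

/* Create a hardware-accelerated offscreen drawable and return a DC
   for it; pass the DC to wglCreateContext/wglMakeCurrent as usual. */
HDC CreatePbufferDC(HDC hdc, int width, int height)
{
    PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
        (PFNWGLCHOOSEPIXELFORMATARBPROC)wglGetProcAddress("wglChoosePixelFormatARB");
    PFNWGLCREATEPBUFFERARBPROC wglCreatePbufferARB =
        (PFNWGLCREATEPBUFFERARBPROC)wglGetProcAddress("wglCreatePbufferARB");
    PFNWGLGETPBUFFERDCARBPROC wglGetPbufferDCARB =
        (PFNWGLGETPBUFFERDCARBPROC)wglGetProcAddress("wglGetPbufferDCARB");

    const int attribs[] = {
        WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
        WGL_ACCELERATION_ARB,    WGL_FULL_ACCELERATION_ARB,
        WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,      32,  /* 32bpp: accelerated depths are 16/32, see below */
        0
    };
    int  format;
    UINT count;
    wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count);

    HPBUFFERARB pbuffer = wglCreatePbufferARB(hdc, format, width, height, NULL);
    return wglGetPbufferDCARB(pbuffer);
}
```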

Originally posted by zed:
pbuffers are ‘offscreen windows’ that are hardware accelerated

Yes, that seems to be what I’m looking for.

Just one question: does that mean I can use a pixel format different from the current video mode’s format with them?


j - oops, I guess I didn’t really paste when I thought I pasted

DrDeath, consider that regardless of how you do it, 24-bit will never be accelerated. All current accelerators (and those of the past few years) can accelerate only 16-bit or 32-bit modes.

Originally posted by ET3D:

DrDeath, consider that regardless of how you do it, 24-bit will never be accelerated. All current accelerators (and those of the past few years) can accelerate only 16-bit or 32-bit modes.

Well, 24 bits means an RGB triplet: each color component has its own byte. 32bpp is fine too, but not 16bpp!
I hate 16bpp because of its color compression.
With a 24/32bpp image I can do whatever I want: antialiasing/smoothing/alpha blending and other stuff that I can write in asm. But even the simplest image processing in 16bpp can become hell if you need really fast image FX.

Then just use 32-bit and ignore the alpha channel.
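A minimal sketch of that approach, assuming an accelerated 32-bit context is current: asking glReadPixels for GL_RGB with GL_UNSIGNED_BYTE returns tightly packed 24bpp triplets, so the alpha channel never enters the readback:

```c
#include <GL/gl.h>
#include <stdlib.h>

/* Read the current framebuffer back as packed 24bpp RGB.
   Rows come bottom-up; the caller frees the returned buffer. */
unsigned char *ReadBack24bpp(int width, int height)
{
    unsigned char *pixels = malloc((size_t)width * height * 3);

    glPixelStorei(GL_PACK_ALIGNMENT, 1);  /* no row padding */
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    return pixels;
}
```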