How to know if OpenGL is Rendering in HW or SW

Hi all,
I posted a question a while back on whether one can tell OpenGL to render in software or in hardware. I guess the answer was that one cannot do that. I’m just wondering:

  1. How does OpenGL know when to do things in software and when to do them in hardware?
  2. Is there a way to know which method it is using?

Thanks,
GG

PS. I have a GeForce3 and have the drivers installed for both Linux and W2K. I’m writing a program that simulates what a pilot sees as he flies over a scene, so it could be considered a real-time application.

  1. The drivers know whether your OpenGL calls are executed in hardware.
  2. There might be a flag somewhere, I’m not sure, but if a simple app is running VERY VERY slowly, then it’s probably in software.

-SirKnight

Get the vendor/renderer name, and if it is Microsoft, then you know it is software driven… If it is anything else, then the calls are going to hardware (well, usually anyway).

Use (after you get a valid context):
const GLubyte *vendor   = glGetString(GL_VENDOR);     /* "Microsoft Corporation" for the generic software driver */
const GLubyte *renderer = glGetString(GL_RENDERER);   /* "GDI Generic" for the generic software driver */
const GLubyte *version  = glGetString(GL_VERSION);
const GLubyte *exts     = glGetString(GL_EXTENSIONS);
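
For example, a minimal check along those lines (the generic Microsoft implementation reports "Microsoft Corporation" / "GDI Generic"; the helper name here is just illustrative):

/* Illustrative sketch: non-zero means you ended up on the generic MS software path. */
#include <string.h>
#include <GL/gl.h>

int using_generic_ms_renderer(void)
{
    const char *renderer = (const char *) glGetString(GL_RENDERER);
    return renderer != NULL && strstr(renderer, "GDI Generic") != NULL;
}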

Hardware vendors want their kit to run as fast as possible, and special-purpose gfx hardware is generally (but not always) faster than generalist hardware (i.e. the CPU) doing the same thing, so it’s pretty safe to assume that a driver will use hardware whenever it possibly can.

There is no practical way to tell which it’s doing other than by looking to see how fast it goes and making an educated guess. There’s certainly no flag. Bear in mind that there isn’t a hard-and-fast line between hardware and software; there’s a whole bunch of stages in a graphics pipeline, and at least SOME of them will be in software. Older cards rasterized in hardware but still did transform in software; newer cards do transform in hardware too but still use software for things like glRotate.

Thanks guys, I appreciate your input. One more thing: do extensions speed things up, or are they just there to make graphics look nicer? I’m still reading about the subject and don’t have a grasp on its functionality yet.

Thanks again,
GG

Not all features on specialised graphics hardware are run via hardware acceleration.

For instance, vertex programs exist as an extension on the GF1 and GF2, yet are run in software emulation.

I would really like to see a feature whereby we can query whether a feature will be hardware accelerated by the card or not.

In OpenGL 2.0, aren’t there timing methods, so you can query the driver for how long something would take? The distinction between hardware and software is blurring with programmability; now we use software pipelines but run them on a different type of CPU.

At the end of the day you have several options.

  1. Ask the driver whether a feature is hardware accelerated, or fast enough for use in real-time. Currently not available in OpenGL.

  2. Perform a series of benchmarks at configuration time on the host machine to find which features are acceptable in performance (see the sketch after this list).

  3. Allow the user to turn off certain features, letting them trade visual quality for speed.
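
A rough sketch of option 2 (the draw callback and the 30 ms/frame budget are just placeholders for whatever your app actually does):

/* Time a fixed number of frames with the candidate feature enabled and decide
   at configuration time whether it is fast enough to keep. clock() is used
   here for portability; substitute a higher-resolution wall-clock timer if
   you have one. */
#include <time.h>
#include <GL/gl.h>

int feature_is_fast_enough(void (*drawFrameWithFeature)(void))
{
    const int frames = 100;
    clock_t start, end;
    double ms_per_frame;
    int i;

    glFinish();                       /* finish any pending work before timing */
    start = clock();
    for (i = 0; i < frames; ++i)
        drawFrameWithFeature();
    glFinish();                       /* make sure all submitted work is done  */
    end = clock();

    ms_per_frame = 1000.0 * (double)(end - start) / CLOCKS_PER_SEC / frames;
    return ms_per_frame < 30.0;       /* example budget: ~30 ms per frame */
}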

Nutty

The trick of using glGetString(GL_WHATEVER) to ask for the renderer, vendor, and all that stuff might not always work. Say, for example, I use my GeForce2 to render stuff. Everything is fine (hardware acceleration). Then all of a sudden I start using 3D textures. The driver will fall back to software rendering, but glGetString(GL_RENDERER) will still report my GeForce (how can MS’s implementation take over when it doesn’t even know about 3D textures?).

Indeed. In that situation, M$'s renderer has nothing to do with it; it’s nvidia’s driver doing it in software, so you can’t tell from any strings that anything has changed.

This is what I mean about being able to query the driver to ask whether something is a native hardware feature of the installed board.

Yeah, I know that, but that is why I said “(well, usually anyway)”

It would be nice if a flag were set (something like glGetString(function, mode)).

Actually, I wonder if Nvidia/ATI already do this someplace and just don’t tell us about it?

hmm


Under Windows there IS a flag that you can check - but that only tells you whether the driver is a custom driver or a generic (M$ software) driver:

PIXELFORMATDESCRIPTOR pfd;
int iPixelFormat, accelerated;

iPixelFormat = GetPixelFormat( DC );    /* DC is your window's device context */
DescribePixelFormat( DC, iPixelFormat, sizeof(PIXELFORMATDESCRIPTOR), &pfd );

/* accelerated if it is an MCD (generic + accelerated) or a full ICD (not generic) */
accelerated = ((pfd.dwFlags & PFD_GENERIC_ACCELERATED) ||
               !(pfd.dwFlags & PFD_GENERIC_FORMAT)) ? 1 : 0;

or something similar…

Yes, that tells you whether the driver is hardware accelerated; it does not tell you whether a particular feature of that driver is done in hardware.

Revealing whether features are “done in HW” or “done in SW” is quite contrary to the design of OpenGL.

This issue was debated to death at least as early as 5 years ago on places like rec.games.programmer (I know, I was there).

  • Matt

Indeed, there were some important discussions in the ARB seven years ago. Look here to revisit a 7+ year old issue:
http://www.opengl.org/developers/about/arb/notes/minutes_2_95.txt

Last I saw, the ‘official’ policy is to use something like the isfast check here:
http://www.berkelium.com/OpenGL/isfast.html

This is probably out of date but still the ‘blessed’ approach; the best PC hardware started getting fast at everything, so the goalposts moved. It used to be that a texture filter tweak or visual change could easily slow your application to a crawl on the best available hardware, and with OpenGL it’s wrong to ignore the slow paths and force the wrong trade-off between performance and quality.

Originally posted by Bob:
The trick of using glGetString(GL_WHATEVER) to ask for the renderer, vendor, and all that stuff might not always work. Say, for example, I use my GeForce2 to render stuff. Everything is fine (hardware acceleration). Then all of a sudden I start using 3D textures. The driver will fall back to software rendering, but glGetString(GL_RENDERER) will still report my GeForce (how can MS’s implementation take over when it doesn’t even know about 3D textures?).

nVidia seems to have the following way to describe the 1.2 and 1.3 features: if they are marked as extensions in glGetString(GL_EXTENSIONS), then they are in HW, but if you just query for the DLL functions and still get a valid pointer, then you have that function in software.
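
In code, that heuristic would look roughly like this (Windows, with 3D textures as the example; the names are just for illustration):

/* Sketch of the heuristic described above. */
#include <windows.h>
#include <GL/gl.h>
#include <string.h>

void check_texture3d_heuristic(void)
{
    const char *exts = (const char *) glGetString(GL_EXTENSIONS);
    int listed_as_extension = (exts != NULL && strstr(exts, "GL_EXT_texture3D") != NULL);
    int entry_point_present = (wglGetProcAddress("glTexImage3D") != NULL);

    /* By this heuristic (which may well be unreliable): listed_as_extension
       would mean a HW path, while entry_point_present without the extension
       string would mean the GL 1.2 entry point exists but runs in software. */
}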

So are you saying that my GeForce2 MX can do the imaging subset from OpenGL 1.2, 3D texturing and vertex programs in hardware? They are all exposed by glGetString(GL_EXTENSIONS).

And by the way, it’s not always individual features that determine whether it’s software or hardware rendering.

Multitexturing with two textures and polygon stipple are each supported in hardware (talking GeForce again). But when using both of them at the same time (stipple and MT with 2 textures), the driver will fall back to software rendering, because polygon stipple requires a texture unit.

So, as I said, it’s not always the individual features; it’s also combinations of features that determine hardware/software rendering.
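
Concretely, the combination is roughly this (pattern, tex0 and tex1 being whatever you already have defined):

/* Polygon stipple plus two-texture multitexturing at the same time. */
glEnable(GL_POLYGON_STIPPLE);
glPolygonStipple(pattern);

glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, tex0);
glEnable(GL_TEXTURE_2D);

glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D, tex1);
glEnable(GL_TEXTURE_2D);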

Originally posted by Bob:
it’s not always the individual features; it’s also combinations of features that determine hardware/software rendering.

Yes. I have seen this issue discussed several times in various forums/groups, and as I recall, this is always the one overriding issue that seems to come up. Hardware is not always symmetric. Just because it can do A, and it can do B, does not mean it can do A+B. Therefore, a simple “is this feature HW accelerated?” interface isn’t sufficient.

Another example is the Radeon. It supports 3 texture units, but if you use a 3D texture, that actually consumes 2 texture units, so you can only use 1 other texture. Plus, I think the 3D texture has to be loaded into a particular texture unit. If you don’t follow these restrictions, I’m not sure whether it falls back to software or just completely fails, but either way it makes things very complex to develop for.

One possible solution is to implement something like Direct3D’s ValidateDevice. For those unfamiliar, ValidateDevice allows you to determine whether or not a specific render pipeline setup is supported. You set up the hardware as if you were about to draw, then call ValidateDevice and it tells you if it will work. If it won’t, you can fall back to a backup setting (ex: drop the third texture and just fall back to 2). This is great for determining what does and doesn’t work (let’s just pretend for a moment that there aren’t drivers that lie about what they support… and there are). But bringing this back to the main topic here (“How to know if OpenGL is Rendering in HW or SW”), this still can’t deal with the case where rendering is half-and-half. We could go a step further and create an interface where you can set up the whole pipeline and ask “given the current configuration, is THIS feature hardware accelerated?”. Of course, this is starting to get to be a bit complex/messy, and I wouldn’t doubt that there might even be situations where even something like this becomes infeasible/impractical.
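
For those who have not used it, a rough sketch of the idea (Direct3D 8 style; device, tex0 and tex1 are assumed to already exist, and the fallback shown is just one possibility):

/* Set the pipeline up exactly as you intend to draw with it. */
device->SetTexture(0, tex0);
device->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);
device->SetTexture(1, tex1);
device->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_MODULATE);

/* Ask the driver whether this setup can actually be rendered (and in how many passes). */
DWORD numPasses = 0;
if (FAILED(device->ValidateDevice(&numPasses)))
{
    /* Not supported: fall back, e.g. drop the second texture stage. */
    device->SetTexture(1, NULL);
    device->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_DISABLE);
}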

But the whole underlying thing (to me) is: why do you really need to know which features are hardware accelerated? Either just give your user some toggles for different features, or create an intelligent analyzer that examines how fast the app is running and disables features based on that.

It’s actually much, much worse than that.

If you think about it, you’ll find that it is nearly impossible to define the term “hardware” in a way that applies to certain OpenGL implementations. (Start thinking about embedded CPUs to implement features and the like.)

Since “hardware” and “software” have no meaning for certain implementations, what you really want to say is, is some feature “fast” or “slow”? (Note that “hardware” is not necessarily “fast” and “software” is not necessarily “slow”. This is another problem with saying whether something is done in hardware or software. You don’t care how it’s implemented, only how fast!)

Of course, now you can’t just use one bit to describe performance; performance is a whole spectrum. How fast, how slow? And often the driver doesn’t even know how fast something will go; it depends on a litany of factors out of its control, so it couldn’t even return a very accurate number.

Given these considerations, a performance test is the way to go, and distinctions like “hardware”/“software” and “fast”/“slow” should stay out of the API.

The standard we’ve adopted is to not expose the extensions for features that will only run in SW, if the OpenGL version number indicates that we support the emulation. In the case of, say, ARB_imaging, there’s only one way to expose it (the extension string), so we expose it across the board. But for EXT_texture3D, we choose not to expose the extension unless the HW can do 3D textures, since a GL version of 1.2 or higher already indicates support.

This only works for a few features in the API, and other vendors may not follow this standard, so it may not be a good idea for you to rely on this in your app.

  • Matt