Querying the hardware
09-03-2000, 11:20 PM
I don't know how this pertains to workstations as I'm from a PC games background, but what seems to be lacking from OGL (IMHO) is some generic way of finding out what the hardware is capable of, i.e. how the blending pipeline can be set up.
Take the D3D TSS pipeline for example. With this mechanism it is possible to find out something about the hardware, e.g. the number of blend stages, the number of simultaneous textures, the available blend modes, etc. Armed with this information it is possible to make some reasonable guesses about how the pipeline should be set up for the effect you wish to achieve. There is still the need for a ValidateDevice call to ensure that the effect will work, but it's not too much hassle to run through a number of candidate effects to work out which ones will.
Now, with OpenGL the developer is forced to go about things in a rather different way. Instead of having a single general mechanism for setting up effects, we have to know about numerous (sometimes proprietary) extensions in order to get the best from the hardware. This problem is only going to get worse with the new DX8-capable cards (assuming they even get OGL ICDs), which are capable of even more texture stages, more blending modes (EMBM, Dot3 etc.), and even fancier stuff like pixel shaders.
Forgive me if I've gotten any details about OGL wrong; it's been a while and I've only just returned to it. But having read through the 1.2 reference docs, it seems that I, and many other developers who want to use the latest features on the latest cards, will be forced to stick solely to D3D.
It would be nice to write something that could run on a number of systems from the start, but if the project would suffer by doing this (i.e. effects that should be available on a card are not because the graphics API is getting in the way) then I can imagine more and more developers sticking to Windows and D3D.
what a downer :(
09-03-2000, 11:37 PM
Remember that an ideal OpenGL implementation must implement (stupid sentence formation ;) ) _all_ features of the OpenGL version it claims to implement.
So, do I get you right when I say you want some functions to tell you what a certain implementation of OpenGL can do?
If so, the answer is pretty simple. Download the OpenGL specifications and have a look there.
What do I mean then? The specifications say what functionality a certain version of OpenGL MUST be able to perform to call itself OpenGL. So, if you want to know what kinds of blending modes are available, just look in the OpenGL specifications, because IF it's called OpenGL, it MUST support these blending modes. And if certain hardware wants to support even more features, there are extensions, and extensions are easily queried with glGetString(GL_EXTENSIONS). And if you ever want to use a certain extension, just look in the extension registry (http://oss.sgi.com/projects/ogl-sample/registry/) for details.
So, asking for features is redundant; you are always guaranteed to have the features in the specification (unless you have an MCD installed).
09-04-2000, 12:26 AM
This really is to do with games programming I guess. In games we want to implement the nicest looking features with the lowest cost (in terms of transforms & rasterisation), therefore we need to know what the graphics hardware can do. It'd be nice to have a generic way of finding out stuff about the hardware and setting things up.
I know that the true definition of OpenGL is that an implementation does everything (if not in hw then in sw), but for games this is not acceptable. Imagine (if it were possible in OpenGL) setting up a grand multitexture effect that used 6 textures, did Dot3 bump mapping, used some wacky blending modes, etc., then finding out that on most people's computers it renders at 1 fps because it has to be done in software. This is not at all acceptable.
The only conclusion I can draw is that, with the emergence of newer graphics hardware capable of a lot more than the OGL spec encompasses, OpenGL is not a viable API for a developer to write a game with. Supporting numerous extensions is not acceptable within the timeframe. It takes long enough to write a game, let alone a whole load of versions just to get at the different features of different cards.
And anyway, doesn't the ARB_Multitexture extension only support 2 textures? That's going to be really good on a Radeon. Cue the proprietary extensions. ;)
09-04-2000, 12:34 AM
on the subject of extensions : imagine explaining to your boss that you can get a D3D version done in N months with all these whizzy features, or you can do an OpenGL version in N months, with less features, but add a couple more months on and you can implement some stuff to use some of the extensions, but it still won't be as fully featured as the D3D version...
It's obvious which one they'll go for. And I'd have to agree with them, which is something I don't often do! ;)
Guess we are talking about different things here. I mean that certain functions HAVE to be implemented, but are not required to be hardware accelerated. What you probably mean is asking which functions can be done in hardware. Completely different...
ARB_multitexture is REQUIRED to support AT LEAST 8 textures. Certain implementations support up to 24. And there is no theoretical limit on how many textures an implementation can support, but 8 is the MINIMUM. Same with lights and similar stuff: 8 lights is the minimum; GeForce supports 8 in hardware (I think), but when you add more, it will fall back to software. Some cards support 24 lights in hardware.
Can you ask D3D how many hardware-accelerated texture units can be used? If it doesn't support 6 textures and Dot3 in hardware, you are still going to get software rendering. And if you can ask, and it doesn't support 6 texture units, then you still have to write extra functions that use multipass rendering to get all 6 textures drawn (= add extra months here), or stick to fewer textures (= less visual quality). So I don't agree. You still have to ask for functionality in D3D, and deal with the problems that might occur.
And extensions are used to "temporarily" add features. They exist because the philosophy behind OpenGL doesn't allow a manufacturer to just add a feature to the core. But sooner or later, they might end up in the OpenGL core.
A while ago, SGI started the multitexturing extension (SGI_multitexture or whatever it was called). Later on, a few more vendors started using it, and it became EXT_multitexture. Then it was approved by the ARB and became ARB_multitexture. And now it's a part of the core. A similar story lies behind most of the functions added since earlier versions.
This is why ARB_multitexture is no longer supported as an extension, because it's implemented in the OpenGL 1.2 core :)
I do see your problem here, and I can see why such a query would be useful, but as has been said before on this board: adding new features is good, but we have to make sure we REALLY need them. Adding too many features too fast is not good.
09-04-2000, 08:46 AM
Exactly, I want to know what can be done on the hardware when I render stuff. It doesn't do to have a game that runs at 5 frames per second! ;)
OK, good, ARB_multitexture supports 8 textures. But again, how many can I use before I get stuck with software? Is there any way I can find out?
Yes, you can query D3D for how many texture stages the hardware supports, this is one of the methods you can use to decide how you will ultimately implement a technique. And yes, you will have to write fallback methods for doing multipass, that's not the issue. I'm OK with writing fallback shaders for cards that don't support as many texture stages or the newer blend modes. The point is, at least with D3D you can work out what the hardware is capable of and scale your content accordingly. I can't *easily* write fallback shaders in OGL because I can't find out (in a nice generic manner) what the hardware is capable of.
We're back to the old problem of having to mess around with extensions. OK, they aren't hard, but I'm lazy. Plus they make for ugly-looking code :(
So, adding too many features too soon is bad...
adding enough features to keep up with the competition is essential...
perhaps now is a bad time to mention DX8 pixel shaders :(
09-04-2000, 10:37 AM
'Powered by DirectX'
On the subject of Direct3D vs OpenGL, it amazes me that Microsoft are promoting DirectX with the X-Box: it has a fixed hardware feature set, which means that a custom OpenGL implementation would be the absolute best way to go. What is the point of having all that redundant 'flexibility'? If it does get a good OpenGL implementation, I think you will see OpenGL software outstrip its Direct3D counterparts on the X-Box, which should convert a lot of developers to OpenGL. What does anyone else think?
09-04-2000, 11:16 AM
Well, you wouldn't expect MS to support a rival, would you???
Besides, there are a hell of a lot more games in existence that use D3D, and therefore a lot of games that can be ported quickly and fairly painlessly to XBox. As we all know, it's the software that makes or breaks a console.
09-04-2000, 12:31 PM
Incidentally, I've just found MAX_TEXTURE_UNITS_ARB (with reference to glGetIntegerv). I've read through the relevant bits of the 1.2 spec and I didn't notice a *required* number of texture stages; it just kept saying that it was implementation dependent.
Does this mean I can actually find out the number of hardware accelerated texture stages??
(and shall I take this to one of the programming forums?)
09-05-2000, 11:30 PM
I've been following your discussion, and there is a way to query the number of available texture units on a particular platform at run time, as well as the number of available lights and register combiners (on GeForce cards), for example. You just have to call a glGet function with the right constant. It should be something like glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &nbtexunits), or something approaching it, for the number of texture units. Have a look at the extension specifications.