How are you supposed to be able to detect whether a card can run a shader or not?

GLSL works okay if you are developing with a high-end card, but how the hell am I supposed to write code that will detect whether a card can handle a shader without crashing? There’s a limited number of instructions, variables, and funny looks you can give the card, and if you exceed any, it crashes with no explanation.

How am I supposed to write a program that works on all machines, if I can’t even tell if a card will arbitrarily crash? Why do graphics card manufacturers hate 3D developers?

GLSL is by far the most jury-rigged system I have ever dealt with. :eek:

With correctly working drivers, a shader should never cause a crash.
What can happen is:
- The shader will fail to compile. This happens if you exceed the number of supported uniform variables or some other queryable quantity such as the number of varyings, attributes, or texture image units. (Detecting this is sketched below.)
- A rendering command will report an error. This should happen if you use a GLSL vertex shader together with assembly-level fragment programs or the fixed-function pipeline and exceed the number of available texture units.
- Rendering will fall back to SW emulation. This happens if you use a feature that is not supported in HW (sometimes this causes a compilation failure; I do not know whether failing is correct in that case according to the specification) or if you hit some unqueryable limit like the number of instructions.
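For the first two cases the API does tell you what happened. Here is a minimal sketch of checking it, assuming a GL 2.0 context and matching headers (or an extension loader) are already set up; the ARB_shader_objects entry points work the same way.

```cpp
#include <GL/gl.h>
#include <cstdio>

// Compile a shader and report failure instead of guessing.
// "source" is whatever GLSL text you load yourself.
GLuint compileShader(GLenum type, const char* source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, 0);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE)
    {
        char log[4096];
        glGetShaderInfoLog(shader, sizeof(log), 0, log);
        std::printf("shader failed to compile:\n%s\n", log);
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}

// Link a program and report failure; over-budget uniform/varying usage
// often shows up here rather than at compile time.
bool linkProgram(GLuint program)
{
    glLinkProgram(program);
    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    if (ok != GL_TRUE)
    {
        char log[4096];
        glGetProgramInfoLog(program, sizeof(log), 0, log);
        std::printf("program failed to link:\n%s\n", log);
        return false;
    }
    return true;
}
```

For the second case (the rendering-command error), check glGetError() after the draw call that mixes the GLSL vertex shader with the fixed-function or assembly fragment path.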
We live in the real world, so the above facts are not always true. I have seen shaders that crashed older versions of GLSL compilers. Some valid shaders failed to compile. There were problems with the efficiency of varying/uniform allocation. SW emulation sometimes generates incorrect results. However, those are only bugs that will be solved in future drivers. Unlike C/C++ compilers, GLSL compilers have been with us for only a short time, and they are aiming at a rapidly moving target.

The worst problem with GLSL on current HW is that, unlike the assembler interface, there is no standardized way to detect whether the shader is running through SW emulation or in HW. You have to measure its performance or, for some vendors, use logfile hacks.
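A crude version of that performance probe might look like the following. This is only a sketch: the drawTestBatch callback is a placeholder for whatever small geometry you choose, and the threshold for "too slow" is something you have to calibrate against a known-hardware path yourself.

```cpp
#include <GL/gl.h>
#include <chrono>

// Rough heuristic: time a fixed amount of drawing with the program bound.
// A shader that should be cheap but suddenly costs hundreds of milliseconds
// per batch is almost certainly being emulated in software.
double secondsPerBatch(GLuint program, void (*drawTestBatch)())
{
    glUseProgram(program);
    glFinish();                                   // drain pending work first

    const int runs = 10;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < runs; ++i)
        drawTestBatch();
    glFinish();                                   // wait until the GPU is done
    auto end = std::chrono::steady_clock::now();

    glUseProgram(0);
    return std::chrono::duration<double>(end - start).count() / runs;
}
```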

Like most of OGL, GLSL was designed to hide many HW limits and to provide the user with guaranteed functionality, implemented through emulation if the HW is not capable enough. The bad side of this is that if you wish to push the current generation of HW to its borders, in many cases you cannot automatically determine within the API where those borders are.
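The limits that are queryable can at least be read up front, as in this sketch (GL 2.0 enum names, assuming matching headers or an extension loader). Staying under all of them still does not guarantee hardware execution, which is exactly the problem described above.

```cpp
#include <GL/gl.h>
#include <cstdio>

// Query the GLSL-related limits the API does expose.  Exceeding any of
// these should give a clean compile/link failure; staying under them is
// no guarantee that the shader runs in HW.
void printShaderLimits()
{
    GLint v = 0;
    glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &v);
    std::printf("max vertex attribs:             %d\n", v);
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &v);
    std::printf("max vertex uniform components:  %d\n", v);
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &v);
    std::printf("max fragment uniform components:%d\n", v);
    glGetIntegerv(GL_MAX_VARYING_FLOATS, &v);
    std::printf("max varying floats:             %d\n", v);
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &v);
    std::printf("max texture image units:        %d\n", v);
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &v);
    std::printf("max vertex texture image units: %d\n", v);
}
```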

I was getting memory exceptions compiling a vertex shader on a Radeon 9550. Then I changed some nested conditionals into a flat list of independent conditionals, and it compiled fine.
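For what it's worth, the kind of rewrite meant here looks roughly like the following. This is a hypothetical illustration only (the actual shader isn't posted in this thread), with the two GLSL variants shown as C++ string constants:

```cpp
// Hypothetical example -- not the original shader.  Some compilers of
// that era handled a flat list of independent conditionals better than
// a nested if/else tree.
const char* nestedVersion =
    "void main() {\n"
    "    vec4 c;\n"
    "    if (gl_Color.r > 0.5) {\n"
    "        if (gl_Color.g > 0.5) c = vec4(1.0);\n"
    "        else                  c = vec4(0.5);\n"
    "    } else {\n"
    "        c = vec4(0.0);\n"
    "    }\n"
    "    gl_FrontColor = c;\n"
    "    gl_Position = ftransform();\n"
    "}\n";

const char* flattenedVersion =
    "void main() {\n"
    "    vec4 c = vec4(0.0);\n"
    "    if (gl_Color.r > 0.5)                     c = vec4(0.5);\n"
    "    if (gl_Color.r > 0.5 && gl_Color.g > 0.5) c = vec4(1.0);\n"
    "    gl_FrontColor = c;\n"
    "    gl_Position = ftransform();\n"
    "}\n";
```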

I’m glad it works now, but it doesn’t fill me with a lot of confidence when I release my engine. :frowning:

I was getting memory exceptions compiling a vertex shader on a Radeon 9550. Then I changed some nested conditionals into a flat list of independent conditionals, and it compiled fine.
Did you submit a bug report, with an appropriate test case? If not, then it’s not ATi’s fault for not fixing bugs they don’t know about.

NVidia doesn’t seem to have a problem. :stuck_out_tongue:

Besides, ATI never answered my application for their Developer Program, despite the fact that I sell a popular world editor with thousands of users, and have a U.S. corporation. NVidia takes good care of me.

How am I supposed to write a program that works on all machines?
Only by testing it. My game works on everything starting from TNT2. I wouldn’t be able to achieve this without testing on actual hardware.
Besides, getting an application to work is one thing, but actually finding minor differences in its behavior is another story.
For example, on a GeForce 7800GT everything worked nicely, but on a GeForce 6600GT all my textures were 16-bit. It was caused by relying on the default texture color depth.
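The fix is to request a sized internal format explicitly instead of letting the driver pick. Something along these lines (a sketch, with width/height/pixels as placeholders, operating on the currently bound texture):

```cpp
#include <GL/gl.h>

// Passing the generic GL_RGBA internal format lets the driver choose the
// precision (here the 6600GT apparently chose 16-bit).  Asking for
// GL_RGBA8 explicitly requests 8 bits per channel.
void uploadTexture(int width, int height, const unsigned char* pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_RGBA8,            // sized internal format, not GL_RGBA
                 width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```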
Another thing is testing against GPU/driver-specific bugs. I had one of my shaders working perfectly on GeForce but not working on ATI, although compilation and linking were successful (bug report submitted, thanks to Humus).

So even if you write a “perfect” application that detects what features are supported, you have no guarantee that it will work at all until you actually test it on the target platform. I actually started writing my current framework by the book, fully compliant with the GLSL specs; it worked on neither GeForce nor Radeon :slight_smile: . I had to bypass a driver weakness on GeForce and a driver bug on ATI.

So find people with different GPUs and test your application. You have no alternative.
