Longs Peak Hardware

What exactly is the hardware “level” that the Longs Peak API will support? I know we’re not talking about the new geometry shader stuff (that’s for Mt Evans). I’m wondering about some of the more mundane differences between nVidia and ATi. Things like:

- Blend Equation Separate (no ATi card as yet supports this)
- Float textures (with blending)
- sRGB textures
Will implementations be required to expose support for these and other similar minutiae? Or will there be some way to detect whether certain formats are available? I know there has been some talk (or a hint, at least) about having a format object that, upon creation, either works or doesn’t, and the user can be told why the format doesn’t work.

LP is essentially a recasting of OpenGL 2.1 into the new object model. Mt. Evans will follow fairly shortly thereafter, adding lots of new core features including many new image formats.

Radeon 8500 and newer support both ATI_blend_equation_separate and EXT_blend_equation_separate, on Mac OS X.

That’s just a guess, but I think everything that supports GLSL will support Longs Peak. This probably means anything down to GFFX / Radeon 9xxx.

It will probably be similar to OpenGL 2.0 support now. Many cards exposing GL 2.0 don’t support NPOT textures, but they provide a software fallback. I expect it to be similar with Longs Peak.

Many cards exposing GL 2.0 don’t support NPOT textures, but they provide a software fallback.
I guess it would be nice to have rendering contexts without software fallbacks.
That’s just one more flag at context creation, so I guess it wouldn’t hurt to add it to the specs. It would be up to the driver to decide whether it reports any pure HW context at all.
One could create both contexts and compare the GL version and extension lists to determine which features may have limitations, and then use the full context (with SW fallbacks).
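Something like this for the comparison step (just a sketch: the "pure HW" creation flag doesn’t exist today, so hw_only_ctx merely stands in for a context created with that hypothetical flag, and both contexts are assumed to have been created elsewhere):

```c
/* Sketch only: hw_only_ctx stands for a context created with the
 * hypothetical no-software-fallback flag proposed above; no such
 * flag exists in WGL today. */
#include <stdio.h>
#include <string.h>
#include <windows.h>
#include <GL/gl.h>

/* Print extensions the full context exposes but the HW-only one doesn't;
 * those are the candidates for software fallbacks. */
void diff_extensions(HDC dc, HGLRC full_ctx, HGLRC hw_only_ctx)
{
    static char full[16384], hw[16384];  /* static => zero-initialised */
    char *tok;

    wglMakeCurrent(dc, full_ctx);
    strncpy(full, (const char *)glGetString(GL_EXTENSIONS), sizeof(full) - 1);

    wglMakeCurrent(dc, hw_only_ctx);
    strncpy(hw, (const char *)glGetString(GL_EXTENSIONS), sizeof(hw) - 1);

    for (tok = strtok(full, " "); tok != NULL; tok = strtok(NULL, " "))
        if (strstr(hw, tok) == NULL)     /* crude substring match */
            printf("possible SW fallback: %s\n", tok);
}
```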

That’s one thing that I find wrong with OpenGL implementations. Features are supposed to be orthogonal, but they’re not (filtering of float textures, for example).
Many GPUs report GLSL or OpenGL 2.0 and above while, to be honest, they shouldn’t (gl_FrontFace and gl_ClipVertex, for example) - try writing a GLSL vertex shader that works with clip planes on both a Radeon X800 and a GeForce 6800 - an #ifdef is a must.
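A rough sketch of that workaround (the HAS_CLIP_VERTEX define and helper name are invented here; it assumes GL 2.0 entry points are loaded, e.g. via GLEW): prepend a #define from the application and only write gl_ClipVertex where the driver accepts it, since the X800 compiler rejects it while the 6800 needs it for user clip planes to behave.

```c
#include <GL/glew.h>

/* Same GLSL source, compiled two ways. */
static const char *vs_body =
    "void main()\n"
    "{\n"
    "    gl_Position = ftransform();\n"
    "#ifdef HAS_CLIP_VERTEX\n"
    "    /* NVIDIA path: eye-space position so user clip planes work */\n"
    "    gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;\n"
    "#endif\n"
    "}\n";

GLuint build_vertex_shader(int use_clip_vertex)
{
    const char *sources[2];
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);

    sources[0] = use_clip_vertex ? "#define HAS_CLIP_VERTEX\n" : "";
    sources[1] = vs_body;
    glShaderSource(vs, 2, sources, NULL);   /* prefix + body */
    glCompileShader(vs);
    return vs;
}
```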

I understand it’s competition, and having OpenGL 2.1 in the feature list is better for marketing than 1.5, but that can be painful for developers, so I hope this issue will be given more attention in the future.

That’s true. It would be nice if the specs were a bit stricter about what needs to be supported (and how well it is supported!) before a vendor can call its card OpenGL x.y compatible.

I mean, we are using DirectX terms (Shader Model 2, 3, 4, etc.) to make clear what kind of hardware we are talking about, because OpenGL version numbers are just useless nowadays!

Jan.

I’d like to see a separate lib being released by the ARB that exposes a standard set of short benchmarks - ones that can practically be run at init time - instead of me having my own set. That way you have your caps checking without polluting GL with the caps madness of D3D.

Well, the specs are strict about what needs to be supported :slight_smile:

But then both NVIDIA and ATI release new GPUs that report OpenGL x.x, along with some PDFs explaining what doesn’t work…
Or sometimes it works, but in SW - which is allowed.

That’s bad because it’s of no use to the application. Let’s assume I want to implement HDR, and the GeForce 6 / Radeon X series are the best GPUs currently on the market.
How can I detect whether the GPU can handle both FP16 blending and FP16 filtering?!? OK, on NVIDIA I can test for GL_NV_vertex_program3, for example. So I test for this extension and release my application - and I will have to write a new path as soon as a new ATI GPU becomes available. The situation is similar right now with vertex textures.
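That proxy test is nothing more than an extension-string search - roughly like this (just a sketch; a real check should match whole space-separated tokens instead of substrings):

```c
#include <string.h>
#include <GL/gl.h>

/* Vendor-specific proxy check as described above: infer FP16 blending/
 * filtering support from an unrelated NVIDIA extension. Fragile by
 * design - which is exactly the problem. */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;  /* crude substring match */
}

/* usage:
 *   if (has_extension("GL_NV_vertex_program3"))
 *       ;  // assume NV40-class FP16 blending and filtering
 */
```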

The only 100% sure way at this point is to test every feature (and combination of features!) your application intends to use (except for the obvious ones, of course).
Assume we are not satisfied with a feature if:
-it’s software emulated or slow
-it’s not working as expected
-it crashes

So a test should include:
-performance test (measure time)
-correctness test (rendered image analysis)
-stability test (try/catch :slight_smile: )

I do that in my game. For example, I try using FLOAT16 filtering:
Radeon X800:
-performance: passed
-correctness: failed
-stability: passed
That’s because filtering is neither supported nor emulated (so I get GL_NEAREST).
GeForce FX:
-performance: failed
-correctness: passed
-stability: passed
The test indicates that it’s either emulated or the GPU is simply too slow.

FLOAT16 blending:
Radeon X800:
-performance: N/A
-correctness: N/A
-stability: failed (exception thrown by driver)

Vertex texture fetch:
Radeon X800:
-performance: N/A
-correctness: failed (the shader failed to link)
-stability: passed

I do this kind of test for some of the “unsafe” features I use.
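For the FLOAT16 filtering case such a probe might look roughly like this (just a sketch: the thresholds, sizes and use of clock() are arbitrary, it assumes a current GL context and GLEW or similar for the ARB_texture_float enum, and the try/catch “stability” part is left out):

```c
#include <GL/glew.h>
#include <time.h>

typedef struct { int perf_ok, correct_ok; } probe_result;

probe_result probe_fp16_filtering(void)
{
    probe_result r = { 0, 0 };
    /* Two texels, black and white: GL_LINEAR sampled between them should
     * give ~50% grey, while a silent GL_NEAREST fallback gives 0 or 255. */
    GLfloat texels[8] = { 0,0,0,1,  1,1,1,1 };
    unsigned char pixel[4];
    GLuint tex;
    clock_t t0, t1;
    int i;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 2, 1, 0,
                 GL_RGBA, GL_FLOAT, texels);
    glEnable(GL_TEXTURE_2D);

    t0 = clock();                            /* crude timing, good enough here */
    for (i = 0; i < 10; ++i) {               /* SW fallbacks show up in the timing */
        glBegin(GL_QUADS);
        glTexCoord2f(0.5f, 0.5f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(0.5f, 0.5f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(0.5f, 0.5f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.5f, 0.5f); glVertex2f(-1.0f,  1.0f);
        glEnd();
    }
    glFinish();
    t1 = clock();

    /* correctness: a filtered result is grey, an unfiltered one is not */
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    r.correct_ok = (pixel[0] > 64 && pixel[0] < 192);

    /* performance: arbitrary threshold, SW paths are orders of magnitude slower */
    r.perf_ok = ((t1 - t0) * 1000 / CLOCKS_PER_SEC) < 30;

    glDisable(GL_TEXTURE_2D);
    glDeleteTextures(1, &tex);
    return r;
}
```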

I’d like to see a separate lib being released by the ARB that exposes a standard set of short benchmarks
Me too. But I think this would rather be a suggestion for the authors of extension loading libraries :slight_smile:
I’d rather have the ARB work on tests that enforce compliance with the specs upon vendors than on such a library. There was even a poll recently. I voted for such tests, but many more people voted for an SDK, which I will rarely use (I only need the specs and the extension registry).
Let’s just hope no one gets to the point where they release an app with advanced effects and, after lots of reports like “not working on my GPU”, can only say: “I should have voted for the test suite…” :slight_smile:

Maybe we can start an open-source project that tests OpenGL implementations… a conformance/compliance test. In this test we could check every supported feature and output results for each HW/OS/driver combination.

3Dlabs has something like that for GLSL, but we clearly need something to check the other features.

I’m sure that the IHVs’ QA departments have such tools, but it seems that’s not enough… some driver bugs still survive.

I don’t think that would help. Even if a GPU failed the OpenGL 2.0 test - let’s say because of missing support for gl_ClipVertex - do you really think the GPU vendor would change its driver to report OpenGL 1.5 in that case?

But an open-source test to use in applications would be nice.
Well, GLSLvalidate checks whether a shader is compatible with the spec, but what we actually need to check is whether the GPU/driver is compatible with the spec.
Many shaders that pass GLSLvalidate will not run on some hardware.
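Even a small helper like this (just a sketch, assuming GL 2.0 entry points are loaded, e.g. via GLEW) already catches many of those cases, because it is the driver’s own compiler being asked:

```c
#include <stdio.h>
#include <GL/glew.h>

/* Compile a shader on the user's actual driver and report its verdict.
 * A link check on the whole program is needed as well - see the
 * vertex-texture example above, which only failed at link time. */
int compiles_on_this_driver(GLenum type, const char *src)
{
    GLint ok = GL_FALSE;
    char log[4096];
    GLuint sh = glCreateShader(type);

    glShaderSource(sh, 1, &src, NULL);
    glCompileShader(sh);
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        glGetShaderInfoLog(sh, sizeof(log), NULL, log);
        fprintf(stderr, "driver rejected shader:\n%s\n", log);
    }
    glDeleteShader(sh);
    return ok == GL_TRUE;
}
```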