Reference Implementation



glfreak
12-09-2010, 10:18 AM
Hello,

I was thinking that a full reference implementation of the current OpenGL specification, with compatibility profiles supported, would be a great idea: it could serve as a sample implementation for hardware driver developers. It could also be used for conformance and quality testing by both driver and application developers, so each side can determine whether a bug lies in the driver or in the application.

Mesa3D is behind the current versions of the OpenGL API and the GLSL.

Why not have the ARB/Khronos take care of this and provide it on all major platforms: Linux/Unix, Mac, and Windows?

aqnuep
12-10-2010, 03:32 AM
Nice idea. I could also make good use of a reference implementation of the current spec version.

Eosie
12-10-2010, 07:12 AM
Mesa3D is behind the current versions of the OpenGL API and the GLSL.
Mesa3D cannot support OpenGL 3 because floating-point rasterization and renderbuffers are patented; see the IP status in ARB_texture_float. We've been through this many times before, and the current status is that floating-point rendering is implemented, but nobody has the guts to merge it to master. This message from Brian Paul makes it clear:
http://lists.freedesktop.org/archives/mesa-dev/2010-September/002674.html

It's a shame because Mesa has one of the fastest software rasterizers out there (if not the fastest) based on LLVM.

I wonder what ARB has to say on this matter.

glfreak
12-10-2010, 09:55 AM
Interesting. Then the ARB should go ahead and buy a license. They should be the ones who provide the reference implementation.

arekkusu
12-10-2010, 12:45 PM
How many man-years of work do you think it is to implement a reference?

Eosie
12-10-2010, 07:47 PM
Interesting. Then the ARB should go ahead and buy a license. They should be the ones who provide the reference implementation.

The ARB just writes the OpenGL specification; it doesn't produce any software per se. ARB members don't work for the ARB full-time; they work for their respective companies (NVIDIA, AMD, Intel, Apple..). They produce a new specification once in a while, as time allows, and that's it. You can't expect the ARB to write software; Khronos is NOT a software company.

aqnuep
12-11-2010, 01:24 AM
Even though Khronos is not a software company and the ARB members only work on the spec part-time, the Khronos Group makes standards, and checking whether an implementation is compliant with a standard requires either a conformance test or a reference implementation (or both).
This is their responsibility, and they are actually working on it (at least on the conformance test), so the Khronos Group does deal with some of the software required to maintain its specifications; most probably they outsource it.
Also, please don't confuse the ARB with the Khronos Group: one is just a core team working on the GL spec, the other is the whole consortium, with an ecosystem and infrastructure behind it.

Groovounet
12-11-2010, 10:36 AM
another hippie idea

glfreak
12-12-2010, 02:00 PM
My suggestion is based on the fact that OpenGL is always advertised as the de facto standard for professional graphics programming, from CAD applications to academia to, sometimes, portable games. This automatically implies a software implementation that is up to date with every version, on every platform OpenGL is supposed to be portable to.
It does not make sense to leave this to a third-party open-source project with limited resources, or to an IHV that treats OpenGL as a secondary API. The ARB, or Khronos, or whoever creates the specification, has no excuse not to provide a reference implementation, unless they can say "we dunno how to make software because we dunno what we are talkin about in dee specification." ;)

One more thing I want to ask about the specification :) How has the deprecation of so many features helped OpenGL driver quality and support for the current version? At least on integrated graphics, which already support Direct3D 9 or above...

Alfonse Reinheart
12-12-2010, 02:14 PM
The ARB, or Khronos, or whoever creates the specification, has no excuse not to provide a reference implementation, unless they can say "we dunno how to make software because we dunno what we are talkin about in dee specification."

By this logic, it is the job of the C++ standardization body to create a 100% conformant implementation of C++.

No. No it isn't. The job of a standardization body is to create a standard. Implementing that standard is the job of people who want to implement it.

The ARB has no software development resources. Khronos has only the software development resources it hires via contracts. None of these bodies are companies with real resources; they do what they can with what they have. So the fact that they have nobody to write such a reference implementation is how they justify not having one.


how has the deprecation of many features helped out the driver quality of OpenGL and support of the current version?

NVIDIA sabotaged the removal (stop calling it deprecation!) with ARB_compatibility. Thanks to NVIDIA's tireless efforts in making OpenGL implementation writing as hard as possible, the ARB has effectively redacted the entire thing at this point, as they continue to publish compatibility specifications and write all of their extensions against the compatibility specs.

aqnuep
12-12-2010, 03:30 PM
NVIDIA sabotaged the removal (stop calling it deprecation!) with ARB_compatibility. Thanks to NVIDIA's tireless efforts in making OpenGL implementation writing as hard as possible, the ARB has effectively redacted the entire thing at this point, as they continue to publish compatibility specifications and write all of their extensions against the compatibility specs.

I agree completely with this one. When I first saw one of NVIDIA's slides showing which deprecated functionalities are hardware accelerated, it made me laugh. They know best that those functionalities are actually emulated on top of core functionality: from fixed-function lighting and transform, which is compiled to GPU code the same way shaders are, to stippled lines, which are actually emulated with texturing and consume one of the texture units even if you don't know about it.

Having deprecation boycotted made both sides unhappy:
1. Fans of legacy GL keep complaining that the ARB tried to take the old features away (even though nothing was removed and they still have everything they had in the old days).
2. People who want a modern GL complain that the deprecation didn't happen the way they wanted (actually, it didn't happen at all).

So I agree with Alfonse that you cannot expect better driver quality while even the latest released extensions still define interactions with legacy, deprecated features. Just check the famous NVIDIA bindless extensions: e.g., NV_vertex_buffer_unified_memory still advertises conventional vertex attributes.
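aqnuep's point that fixed-function lighting is just compiled to GPU code can be made concrete: the per-vertex diffuse term that GL_LIGHTING specifies is plain arithmetic, the kind of thing a driver turns into a generated shader behind the application's back. A minimal C sketch of that term (the names here are illustrative, not any real driver's internals):

```c
#include <assert.h>

/* Illustrative sketch only: the clamped diffuse term of classic
 * GL_LIGHTING, max(N . L, 0) * material_diffuse * light_diffuse.
 * A driver emits equivalent code in its generated shader when an
 * application uses the "fixed function" path. */
typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* One color channel of the per-vertex diffuse contribution. */
static float ffp_diffuse(vec3 normal, vec3 light_dir,
                         float mat_diffuse, float light_diffuse)
{
    float ndotl = dot3(normal, light_dir);
    if (ndotl < 0.0f)
        ndotl = 0.0f;   /* the fixed pipeline clamps N.L at zero */
    return ndotl * mat_diffuse * light_diffuse;
}
```

A surface facing the light gets the full diffuse term; one facing away gets zero, which is what the deprecated lighting path computes before any user GLSL is involved.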

Dark Photon
12-12-2010, 05:45 PM
NVIDIA sabotaged the removal (stop calling it deprecation!) with ARB_compatibility.
2. People who are for a modern GL complain about the fact that the deprecation didn't really happened as they wanted ... you cannot expect better driver quality even if the latest released extensions
Come on, guys. Before we rekindle this pining yet again: how exactly is it going to help GL?

This was all said and done almost two years ago, and there never was vendor or user agreement over the move. Nobody is holding a gun to the purists' heads and making them call glTexEnv or enable GL_LIGHTING (or use the compatibility profile at all, for that matter!). And, similarly, nobody is holding a gun to the practical users' heads and making them "not" call glTexEnv or enable GL_LIGHTING.

Let's just get along and respect each other for having a different opinion, shall we?

glfreak
12-12-2010, 08:40 PM
Then what about this: can't we add the legacy stuff to GLU, so we can have something like this:

gluBegin(...)

gluVertex3f(...)

gluTexEnv(...)

and maybe gluEnable(GLU_LEGACY)

since the "removed" features can be implemented in terms of the new core streamlined functionality.

This way the drivers implement the core profile and offload the legacy stuff to GLU.
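As a sketch of how such a GLU layer could work: gluBegin/gluVertex3f would simply batch vertices on the client side, and gluEnd would submit the whole batch with one core-profile draw call. The glu* entry points below are hypothetical (they do not exist in the real GLU library), and the actual draw call is stubbed out:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical GLU legacy layer: immediate-mode calls accumulate
 * vertices client-side; gluEnd would upload them to a VBO and issue
 * a single core-profile glDrawArrays (stubbed out here). */
#define GLU_MAX_BATCH 4096

static float  glu_batch[GLU_MAX_BATCH * 3];
static size_t glu_count;  /* vertices accumulated since gluBegin */
static int    glu_mode;   /* primitive type passed to gluBegin */

void gluBegin(int mode)
{
    glu_mode  = mode;
    glu_count = 0;
}

void gluVertex3f(float x, float y, float z)
{
    if (glu_count < GLU_MAX_BATCH) {
        glu_batch[glu_count * 3 + 0] = x;
        glu_batch[glu_count * 3 + 1] = y;
        glu_batch[glu_count * 3 + 2] = z;
        glu_count++;
    }
}

/* Returns the number of vertices that would be drawn; a real
 * implementation would call glDrawArrays(glu_mode, 0, glu_count). */
size_t gluEnd(void)
{
    (void)glu_mode;  /* unused in this stub */
    return glu_count;
}
```

This is why the idea is plausible in principle: everything the legacy entry points do can be expressed in terms of buffers and draw calls that the core profile already provides.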

Alfonse Reinheart
12-12-2010, 11:53 PM
Then what about this. Cannot we add legacy stuff to glu so we can have something like this:

That only runs into the same problem as conformance tests and reference implementations: who's going to write and maintain it? The ARB/Khronos doesn't have software development resources.

kRogue
12-13-2010, 05:35 AM
NVIDIA sabotaged the removal (stop calling it deprecation!) with ARB_compatibility. Thanks to NVIDIA's tireless efforts in making OpenGL implementation writing as hard as possible, the ARB has effectively redacted the entire thing at this point, as they continue to publish compatibility specifications and write all of their extensions against the compatibility specs.


Epic BS from Alfonse. Let's look at the technical issues with deprecation: it cut out useful functionality, such as display lists and wide lines (which is bizarre: deprecated, yet still in core GL, go figure), and more. Additionally, there is a great deal of CAD software out there that uses the old-school GL interface. Gee, imagine that: backwards compatibility. As for compatibility making GL harder to implement, I think that's pretty thin ice and a BS argument. The idea that "NVIDIA wanted compatibility there to make it hard for others to implement GL" is complete, utter tin-foil claptrap.

At any rate, the compatibility profile is optional, and for new devices bringing up GL (not GLES), the EGL standard recommends exposing GL 3.2 (or higher) core, not the compatibility profile.

On the desktop, GL is going to need backwards compatibility, simply because so much existing software uses it, and because so many man-years have gone into software that is still actively maintained and gaining features. However, and in my opinion wisely, that software is being maintained, not rewritten, which a core-only profile would likely require.



By this logic, it is the job of the C++ standardization body to create a 100% conformant implementation of C++.


Ahem. Depends. Khronos does provide sample implementations for some specs, such as OpenWF. At the end of the day, though, a GL implementation is a monster, so you're not likely to see that.

glfreak
12-13-2010, 08:32 AM
Then what about NVIDIA taking care of this? I believe they like OpenGL a lot. :)

kRogue
12-13-2010, 11:27 AM
Then what about NVIDIA taking care of this? I believe they like OpenGL a lot.

If you are asking about getting compatibility-like functionality (fixed-function pipeline, display lists, glBegin/glEnd, client-side vertex arrays, etc.), it is already taken care of: you create a compatibility-profile context. If you use the old-style context-creation functions, this happens automatically. Under both ATI and NVIDIA, creating a context the old-fashioned way does not give you an "older" GL version than creating a core-profile context the new way (i.e., the GL version number is the same either way). There is only one piece of functionality (I think) where the profiles differ: in a compatibility profile, your vertex shader must have an attribute at index 0, whereas in a core profile this is not required. If you do not set the attribute locations yourself (via layout in the GLSL or glBindAttribLocation), one of your active attributes will end up at index 0 anyway.

glfreak
12-13-2010, 04:18 PM
Understood. What about:

EXT_GL_reference_implementation

which would allow any feature/call/path not supported (or not accelerated) by the driver or hardware to run in software emulation mode.

It would define a flag that forces full software mode, i.e.,

glEnable(EXT_GL_SOFTWARE_ONLY);

And let it, for now, cover only the core profile: no compatibility/legacy stuff.

mfort
12-14-2010, 08:59 AM
Is everybody crazy here? Why are you wasting your time?

V-man
12-17-2010, 11:55 AM
Wait, wait, wait.
Floating-point render targets are patented?
That doesn't make any sense. GPUs execute their dingies in floating point and just write to a memory area. What is there to patent?

I could understand if the invention of floating point itself (16-bit, 32-bit, 64-bit) were patented, because that involved actually working on the problem.


Is everybody crazy here? Why are you wasting your time?
Relax, man. We are just discussing.

Alfonse Reinheart
12-17-2010, 12:48 PM
That doesn't make any sense.

Welcome to patent law. Enjoy your stay.

Eosie
12-17-2010, 02:09 PM
Wait, wait, wait.
Floating-point render targets are patented?
That doesn't make any sense. GPUs execute their dingies in floating point and just write to a memory area. What is there to patent?

I could understand if the invention of floating point itself (16-bit, 32-bit, 64-bit) were patented, because that involved actually working on the problem.

Here's some info I was told by various people in the OSS community. Feel free to use Google to get more precise information.

The patent in question is 6,650,327.
http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=6,650,327.PN.&OS=PN/6,650,327&RS=PN/6,650,327

It covers floating-point rasterization and rendering. The patent used to be owned by SGI, who sold their whole patent portfolio to Microsoft, who then sold the floating-point patent to some patent-holding company. I don't have more info.

In the past, SGI sued ATI for infringing on the patent.
http://www.patentlit.com/2008/05/02/silicon-graphics-v-ati-is-a-draw/

There are some unresolved issues I'm seeking answers to:
1) Does the patent cover only hardware implementations, or both hardware and software?
2) If it's a hardware-only patent, does a driver (which is software) that uses hardware for floating-point rendering infringe on the patent?
3) Do software rasterizers running on a CPU infringe on the patent?

kRogue
12-18-2010, 12:27 PM
There are even more patents listed at the GL registry: https://www.khronos.org/files/ip-disclosures/opengl/. You will find the following under patents:

Portions of the OpenGL ES 1 specification: https://www.khronos.org/files/ip-disclosures/opengl/Apple%20OpenGL%20Disclosure%20Jun06_clean.pdf
Tessellation as found in GL4.x: https://www.khronos.org/files/ip-disclosures/opengl/Matrox_OpenGL_IP_Disclosure_01MAR2010-clean.pdf
Anisotropic filtering (most people are already aware of this): https://www.khronos.org/files/ip-disclosures/opengl/NVIDIA_IP_Disclosure_Certificate-clean.pdf
S3 texture compression (most people are already aware of this): https://www.khronos.org/files/ip-disclosures/opengl/S3_IP_Cert_clean.pdf
Floating-point render targets: https://www.khronos.org/files/ip-disclosures/opengl/SGI%20IP%20Disclosure%20Mar05_clean.pdf

The idea that portions of OpenGL ES 1.x are potentially under patent threw me for a loop.

Alfonse Reinheart
12-18-2010, 01:09 PM
It's rather surprising to see current ARB members holding patents on OpenGL.

glfreak
12-22-2010, 10:42 PM
Let's all say loudly: "Patent my a****s," "who cares?"

OpenGL is the priority, and it's the GOD of graphics APIs.