I was wondering the same thing...
I didn't try whether it really works (maybe it is accidentally exposed by the driver in the extension string); I just compared the ARB extension lists of the two drivers. I have only tried GL_ARB_uniform_buffer_object, and it works (at least the things that I tried).

Huh? An ancient extension like that? Crossbar hasn't been relevant since the R300/NV30 days.
I think the Crossbar extension was always supported by Nvidia through their own NV extension (it used the same tokens and everything); I think the only difference was the behaviour when accessing a texture stage that did not have a texture bound.
From the ARB spec:
"If a texture environment for a given texture unit references a texture unit that is disabled or does not have a valid texture object bound to it, then it is as if texture blending is disabled for the given texture unit."
I think the Nvidia spec said the behaviour was undefined when accessing an unbound texture stage.
It always seemed to me that Nvidia was being really pedantic about the spec, and since it already had an extension that did exactly the same thing (in non-error cases), it was impossible for them to support it fully.
The Crossbar extension is part of OpenGL 1.4 and has been removed from OpenGL 3.1. Now it's back again.
(usually just hobbyist) OpenGL driver developer
But it's part of ARB_compatibility, since it was once part of core; you don't need to advertise it explicitly.
Original from: http://www.opengl.org/discussion_boa...316#Post263316
A feature request about precision.
Bundle pipelines/stream processors so they act as wider, more precise pipelines/stream processors, enabling dynamic, programmable precision.
I have seen that with GLSL, shaders can be coupled in series to apply multiple effects.
What if you could couple pipelines in parallel for enhanced precision?
Not for parallel processing, just adding precision.
e.g. couple eight pipelines with full 32-bit accuracy per component into one combined pipeline with 8*32 bit = 256-bit accuracy.
With a good specification, this has the advantage of being very scalable.
If there is only one pipeline, it will simply have to take more time over the calculations and store intermediate data in cache memory.
If some pipelines are left over because of the size of the combined pipeline (e.g. 5*32 bit on 17 stream processors/pipelines leaves 17 mod 5 = 2 unused),
no problem; so be it.
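A rough software analogue of such bundling is multi-limb integer arithmetic: each 32-bit limb plays the role of one pipeline, and the carry chain is what couples them into one wider unit. A minimal sketch (the function name and limb layout are my own illustration, not from any spec):

```python
MASK32 = (1 << 32) - 1

def add_limbs(a, b):
    """Add two multi-precision numbers stored as little-endian 32-bit limbs.

    Each limb corresponds to one 32-bit pipeline; the carry propagated
    between limbs is the coupling between bundled pipelines.
    """
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & MASK32)
        carry = s >> 32
    return out, carry

# 8 limbs * 32 bits = one 256-bit accumulator, as in the example above
a = [MASK32] * 8      # 2**256 - 1
b = [1] + [0] * 7     # 1
total, carry = add_limbs(a, b)
# (2**256 - 1) + 1 wraps: all limbs become 0 and the carry leaves the 256-bit value
```

The carry dependency between limbs is also the catch: it serializes the limbs, which is why bundling trades throughput for precision.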
This would allow dynamic, programmable precision, which is useful, welcome (essential?) in several areas.
Physics simulations for instance.
But also in more mainstream applications.
Position calculations in games on huge maps without running into precision issues. (It will be slower than using lower precision, but at least good animation and movement become possible.)
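The huge-map precision issue is easy to demonstrate. A minimal sketch using NumPy's float32 (the floating-origin workaround shown is a common technique today, not part of this proposal):

```python
import numpy as np

# Far from the origin, a 1 mm step vanishes in 32-bit floats:
# at magnitude 100000 the float32 spacing is ~0.0078, so 0.001 is
# below half a ULP and the addition rounds back to the same value.
world = np.float32(100000.0) + np.float32(0.001)
# world == 100000.0 exactly; the movement is lost

# One workaround: keep a coarse origin in higher precision and only
# the small local offset in float32, recombining when needed.
origin = 100000.0                      # Python float = double precision
local = np.float32(0.0) + np.float32(0.001)
position = origin + float(local)       # ~100000.001, step preserved
```

With programmable precision as proposed, the absolute position could simply be computed at a wider width instead of restructuring the coordinates.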
I got this idea after thinking about precision problems in physics simulations, the fact that shaders can be chained one after another, and that each pipeline has a certain precision while current graphics processors have a lot of them (ATI ships graphics cards with 800 stream processors/pipelines).
Sounds good, doesn't it?
Why not bundle/couple the stream processors to enhance precision when needed?
Please don't cross-post.
The enumext.spec file has an error in it. There is a line:
use VERSION_3_1 R_SNORM
in the EXT_texture_snorm extension. It should be this:
use VERSION_3_1 RED_SNORM
as "R_SNORM" doesn't exist.
Additionally, in gl.tm, there is no entry for Int64, though it is used in the gl.spec file. There is an entry for Int64Ext.