OT: rumors

For some reason I recalled there was a thread here recently about GL2, where someone mentioned that a GL2 implementation is present in the current ATI drivers.

Well, I opened the driver DLL in an editor, looked for it, and actually found some interesting things:

GL_GL2_fragment_shader, GL_GL2_vertex_shader, and GL_GL2_shader_objects are present, along with their entry points. OK, this is old news, but there is also

GL_ATI_uber_buffers

with the following entry points (I believe these entry points belong to the extension; a speculative usage sketch follows the list):

glIsFramebufferATI
glGetFramebufferATI
glBindFramebufferATI
glDeleteFramebufferATI
glNewFramebufferATI
glAttachMemATI
glSwapBuffersATI
glGetMemSubImage1DATI
glGetMemSubImage2DATI
glGetMemSubImage3DATI
glGetMemImageATI
glMemCopy1DATI
glMemCopy2DATI
glMemCopy3DATI
glCopyMemSubImage1DATI
glCopyMemSubImage2DATI
glMemSubImage1DATI
glMemSubImage2DATI
glMemSubImage3DATI
glCopyMemImage1DATI
glCopyMemImage2DATI
glMemImageATI
glGetMemATI
glGetBaseMemATI
glGetSubMemATI
glGetMemPropertiesATI
glGetMemPropertyATI
glDeleteMemATI
glCloneMemATI
glAllocMem1DATI
glAllocMem2DATI
glAllocMem3DATI
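
Judging purely from the names (the spec and signatures aren't public, so every type, token, and parameter list below is guesswork on my part), the usage model might look roughly like this:

    /* Purely speculative sketch of GL_ATI_uber_buffers usage. Only the
     * function names come from the driver DLL; every type, token, and
     * signature below is invented for illustration. */
    #include <GL/gl.h>

    typedef GLuint GLmemATI;                /* assumed: memory objects are plain handles */
    #define GL_COLOR_ATTACHMENT_ATI 0x0001  /* invented placeholder token */

    /* guessed prototypes (in reality you'd fetch these via wglGetProcAddress) */
    extern GLmemATI glAllocMem2DATI(GLenum internalFormat, GLsizei width, GLsizei height);
    extern void glMemImageATI(GLmemATI mem, GLenum format, GLenum type, const void *data);
    extern GLuint glNewFramebufferATI(void);
    extern void glBindFramebufferATI(GLuint fb);
    extern void glAttachMemATI(GLenum attachment, GLmemATI mem);

    void render_to_memory_object(const void *pixels)
    {
        GLmemATI img = glAllocMem2DATI(GL_RGBA8, 256, 256);    /* allocate a 2D memory object */
        glMemImageATI(img, GL_RGBA, GL_UNSIGNED_BYTE, pixels); /* fill it with data */

        GLuint fb = glNewFramebufferATI();                     /* create a framebuffer object */
        glBindFramebufferATI(fb);
        glAttachMemATI(GL_COLOR_ATTACHMENT_ATI, img);          /* render into the memory object */
        /* ...draw, then presumably rebind img as a texture... */
    }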

So it seems the superbuffer thing is on its way; now what we need is the spec.

Another thing, whose purpose I don't know, is GL_ATIX_il_fshader.

This is kinda OT, just wanted to let you know.

Regards
-Lev

il fshader... Infinite Length Fragment Shader?

The R350’s F-Buffer is supposed to allow practically infinite shader program lengths via the uber-buffers. It’d be kinda disappointing if the only way for this to be exposed was through a vendor-specific extension, though. It’d be nice if it were handled in the background in ARB_fragment_program or whatever future ARB fragment shader extension appears. Of course, the ATIX_il_fshader extension may just give us a token or two to enable and disable F-Buffer support, which would certainly simplify the matter of supporting it in an application.

Are the publicly available specs on the glslang-binding extensions what these entry points implement? Or have these specs been updated behind closed doors, and these extensions reflect those changes?

I think I preferred the uber-buffer extension when it referred to a generic “free-standing memory object” rather than a frame buffer.

It’d be kinda disappointing if the only way for this to be exposed was through a vendor-specific extension, though. It’d be nice if it were handled in the background in ARB_fragment_program or whatever future ARB fragment shader extension appears.

First, I seriously doubt there will be a future ARB_fragment_program extension… outside of glslang versions, that is.

Secondly, I think they are going down the “switch to turn it on/off” route, rather than making a whole new language. It would make sense, considering how much ATi is pushing glslang; they’d want this extension to alias with both ARB_fragment_program and glslang.

Thirdly, I prefer this method to having a driver that suddenly allows ridiculously long shaders (at a non-trivial performance penalty). That way, when I go to compile/link my glslang fragment shaders, I would have to actually request this functionality, rather than simply getting it. I prefer knowing where my performance is going.

Lev, I guess that was CybeRUS talking about those things being present in the latest drivers. We could ask him about it, but he also said that he doesn’t have the right to say anything concrete about the GL 2.0 implementation until it’s officially released.

BTW, Lev, since this is OT, can I ask an OT question: do you speak Russian? And also, I gotta say extgl is COOL!

Originally posted by Korval:

Secondly, I think they are going down the “switch to turn it on/off” route, rather than making a whole new language. It would make sense, considering how much ATi is pushing glslang; they’d want this extension to alias with both ARB_fragment_program and glslang.

There are indeed no entry points for GL_ATIX_il_fshader (there are no entry points with “ATIX” at all, actually), so it must be an on/off feature.
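
If so, using it would presumably be nothing more than an extension-string check and a toggle. A hypothetical sketch, with an invented token name and value, since no spec exists:

    #include <string.h>
    #include <GL/gl.h>

    /* Hypothetical: assumes ATIX_il_fshader is a plain enable/disable switch.
     * The token name and value are invented; they appear in no real header. */
    #define GL_IL_FRAGMENT_SHADER_ATIX 0x6AF0  /* placeholder enum */

    void enable_long_fragment_shaders(void)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        if (ext && strstr(ext, "GL_ATIX_il_fshader"))
            glEnable(GL_IL_FRAGMENT_SHADER_ATIX);  /* opt in to F-Buffer-length shaders */
    }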

It is quite interesting to look through the driver DLL. I just found some debug info (I think it’s debug info), things like:

Optimization : Value numbering -> %d instruction(s) marked as CSE
Optimization : Dead code elimination -> %d instruction(s) removed
Optimization : BalanceVectorScalar() called
Optimization : Rewrite() called extra edge removed from instruction #%d
Optimization : RemoveUnnecessaryDependencies() called
Optimization : Fold of Phi Nodes -> %d phi node(s) marked useless
SSA : Phi Replace -> %d added temps to break cycles
Optimization : Copy folding -> %d copy(s) removed

or other strings mentioning loops.
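
Those are all classic optimizing-compiler passes. As a rough illustration of the first two (value numbering spotting a common subexpression, then dead code elimination), and not anything from ATI’s actual compiler:

    /* Not ATI's code; just showing what two of those messages refer to. */
    float shade(float x, float y)
    {
        float a = x * y + 1.0f;
        float b = x * y + 1.0f;  /* same value number as 'a': marked as a common subexpression */
        float c = x - y;         /* result never used: removed by dead code elimination */
        return a + b;
        /* after both passes, effectively:
         *   float t = x * y + 1.0f;
         *   return t + t;
         */
    }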

matt_weird: I do speak Russian.

Lev: how are you? (originally in Russian: “как дела?”)

Optimization : Value numbering -> %d instruction(s) marked as CSE
Optimization : Dead code elimination -> %d instruction(s) removed
Optimization : BalanceVectorScalar() called
Optimization : Rewrite() called extra edge removed from instruction #%d
Optimization : RemoveUnnecessaryDependencies() called
Optimization : Fold of Phi Nodes -> %d phi node(s) marked useless
SSA : Phi Replace -> %d added temps to break cycles
Optimization : Copy folding -> %d copy(s) removed
This prompted me to examine nVidia’s latest driver DLL, and I found the following strings:

Optimization : FSAA turned off in benchmark
Optimization : %d hard-coded clip planes added
Optimization : Benchmark shader detected and replaced
Optimization : %d glClear calls ignored

Any idea what they could mean?

Aaron:

I find it amazing that nobody is following up on these two very significant extensions. By now, I would have expected someone to find out what the signatures of the ATI_uber_buffers functions were.

BTW, nobody answered my question on this subject, so I’ll ask again: “Are the publicly available specs on the glslang-binding extensions what these entry points implement? Or have these specs been updated behind closed doors, and these extensions reflect those changes?”

Optimization : Value numbering -> %d instruction(s) marked as CSE
Optimization : Dead code elimination -> %d instruction(s) removed
Optimization : BalanceVectorScalar() called
Optimization : Rewrite() called extra edge removed from instruction #%d
Optimization : RemoveUnnecessaryDependencies() called
Optimization : Fold of Phi Nodes -> %d phi node(s) marked useless
SSA : Phi Replace -> %d added temps to break cycles
Optimization : Copy folding -> %d copy(s) removed

This is all very typical compiler-type stuff. Especially when compiling a C-esque language involving loops and so forth. I’m not quite sure what a “Phi Node” is, though.
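
For what it’s worth, a phi node is standard SSA (static single assignment) bookkeeping: when two control-flow paths that each assign a variable merge, a phi selects which definition reaches the join. A minimal illustration:

    /* SSA form gives every variable exactly one assignment, so a value set
     * on two branches needs a phi node where the branches merge. */
    int pick(int cond)
    {
        int x;
        if (cond)
            x = 1;   /* SSA: x1 = 1 */
        else
            x = 2;   /* SSA: x2 = 2 */
        return x;    /* SSA: x3 = phi(x1, x2); return x3 */
    }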

Optimization : FSAA turned off in benchmark
Optimization : %d hard-coded clip planes added
Optimization : Benchmark shader detected and replaced
Optimization : %d glClear calls ignored

Tell me you’re kidding. Please tell me that nVidia didn’t leave such blatant evidence of their cheating in their drivers. Please tell me they were smarter than that.

Korval, please tell me you’re kidding us and don’t actually believe aaron’s joke. Re-read the post and send it down your humour path for a second, and see what the output is.

The ATI_uber_buffers functions look very similar to what was presented at GDC03 in the ARB superbuffers presentation:
http://www.opengl.org/developers/code/gdc2003/GDC03_ARBSuperbuffers.ppt

-Lev