ATI fragment shader hardware details revealed

I found this documentation a few days ago, and believe it is of interest to OpenGL programmers too:
http://ati.amd.com/companyinfo/researcher/Documents.html

Additionally, there’s the GPU Shader Analyser that we released just recently, which reveals the exact hardware shader generated from an HLSL or assembly shader. No GLSL support yet, but it’s on the to-do list.

Could you clarify a matter for me, please? On which commercial hardware does this CTM approach work?
Would it work on an X1950 Pro? Will it work on future ATI cards, like the R600? Of course, only if it’s not confidential…

generated from an HLSL or assembly shader. No GLSL support yet, but it’s on the to-do list.
Is there some specific reason why ATi treats glslang like an “also-ran” in terms of shading languages?

Originally posted by Korval:
Is there some specific reason why ATi treats glslang like an “also-ran” in terms of shading languages?
I assume that this is caused by the fact that the number of game developers using OpenGL is much smaller than the number using DX. Since it is important from a marketing point of view to score highly in gaming benchmarks, they put most of their support work into HLSL.

I assume that this is caused by the fact that the number of game developers using OpenGL is much smaller than the number using DX.
But nVidia is perfectly capable of doing both equally well.

Originally posted by Korval:
But nVidia is perfectly capable of doing both equally well.
Not exactly equally. NVPerfHUD is D3D-only, and there is no OpenGL equivalent of FX Composer.

There is no OpenGL equivalent of FX Composer.
That’s because there’s no OpenGL equivalent of FX. That’s a D3D-only feature, though Collada FX looks to be an attempt to replicate this functionality.

Originally posted by Korval:
[quote]I assume that this is caused by the fact that the number of game developers using OpenGL is much smaller than the number using DX.[/quote]
But nVidia is perfectly capable of doing both equally well.
Not really; see <cough> nVPerfHUD </cough>… It’s a great tool, yet it’s D3D-only. :frowning:

Originally posted by Korval:
That’s because there’s no OpenGL equivalent of FX. That’s a D3D-only feature, though Collada FX looks to be an attempt to replicate this functionality.
In the first versions of the tool, it used its own syntax instead of the FX format. At that time it would have been entirely possible to integrate GLSL support into that syntax; however, they later moved to FX files.

Not really; see <cough> nVPerfHUD </cough>… It’s a great tool, yet it’s D3D-only.
I believe gDEBugger is similar (I’ve never used it, though, nor nVPerfHUD).

Check out NVPerfKit 2.1:
http://developer.nvidia.com/object/nvperfkit_home.html

From the looks of it, nVPerfHUD sits on top of NVPerfSDK, which is OpenGL / D3D.

In defense of ATI (well, maybe apply a binary ! to that):
they really don’t support D3D developers either.
Look at what’s on their developer website, both D3D and OpenGL; it’s pathetic compared to NVIDIA’s. Their source code examples are less than 10% of what you find on NVIDIA’s site (Humus’s personal site has more material than ATI’s company one).

Personally, I’ve always had the feeling (tainted by marketing, perhaps) that NVIDIA cares more about developers, whereas at ATI the managers go, “let’s cut back on software expenditure and instead stick the dollars into marketing”. This is nothing personal against the ATI software guys; they just haven’t been given the resources from above that NVIDIA’s have, which must be very frustrating.

Personally, I wasn’t surprised by ATI getting taken over; I would be very surprised if the same thing happened to NVIDIA.
Remember, corporations are people too!

Originally posted by Tzupy:
Could you clarify a matter for me, please? On which commercial hardware does this CTM approach work?
Would it work on an X1950 Pro? Will it work on future ATI cards, like the R600? Of course, only if it’s not confidential…

I’m not 100% sure, but I believe it’s only the X1K series at this point.

Wow, this is really interesting :slight_smile: This way, it could be possible to write your own graphics driver for a custom API :slight_smile: I want it!

I still have my old system: Winnie 3200, 1 GB DDR, 6600 GT. I was planning to sell it cheaply.
If I knew for sure that CTM works on the X1950 Pro - not just the X1900XT(X) and X1950XTX - I would keep my old system, except replace the 6600 GT with an X1950 Pro.
I am interested in best-quality antialiasing of large triangle strip meshes, and maybe vertex coordinate computation, which is currently done on the CPU.
Is there any public information on the implementation of triangle strip rendering, with antialiasing, that could be used with CTM?
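
For reference, the conventional (non-CTM) OpenGL path I use today is just a multisampled framebuffer plus GL_TRIANGLE_STRIP. Here is a minimal GLUT sketch; the vertex layout and window setup are only illustrative, and it assumes the driver can provide a multisample-capable visual:

```c
#include <GL/glut.h>

/* Older gl.h headers may lack the GL 1.3 name. */
#ifndef GL_MULTISAMPLE
#define GL_MULTISAMPLE 0x809D
#endif

static void display(void)
{
    int i;
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_MULTISAMPLE);      /* antialias the strip's edges */
    glBegin(GL_TRIANGLE_STRIP);    /* 2*N vertices -> 2*N-2 triangles */
    for (i = 0; i < 16; ++i) {
        float x = -0.9f + 0.12f * (float)i;
        glVertex2f(x, -0.3f);      /* bottom edge of the band */
        glVertex2f(x,  0.3f);      /* top edge of the band */
    }
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    /* GLUT_MULTISAMPLE requests a multisampled framebuffer; if no such
       visual is available, GLUT falls back to a non-multisampled one
       and the strip still draws, just without antialiasing. */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_MULTISAMPLE);
    glutCreateWindow("triangle strip + MSAA");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```

That gets me hardware multisampling today; the open question is whether CTM exposes anything comparable, ideally with the per-vertex computation moved onto the GPU as well.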

I read your post wrong. An X1950 Pro should definitely work with CTM; I read that as a Radeon 9500, which I’m not so sure about.

Thank you for the information. So, in principle, it should work on any card that has X1K fragment shader hardware.
If I understand correctly, the number of inputs (textures) would be reduced from 16 to 12 on an X1950 Pro.

The number of physical units is reduced, but not the number of logical units.
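
In other words, the driver should still report the full logical sampler count to applications even though the chip has fewer physical texture units. A quick way to check this on real hardware is to query the limit; a minimal sketch, assuming a GLUT-created context (the window exists only to get a current context):

```c
#include <stdio.h>
#include <GL/glut.h>

/* GL 2.0 name; older headers only define the ARB-suffixed version. */
#ifndef GL_MAX_TEXTURE_IMAGE_UNITS
#define GL_MAX_TEXTURE_IMAGE_UNITS 0x8872
#endif

int main(int argc, char **argv)
{
    GLint units = 0;

    /* A current context is required before glGetIntegerv is meaningful. */
    glutInit(&argc, argv);
    glutCreateWindow("caps query");

    /* The logical limit: how many samplers a fragment shader may use.
       Per the post above, this should stay at 16 on an X1950 Pro even
       though the chip has fewer physical texture units. */
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &units);
    printf("GL_MAX_TEXTURE_IMAGE_UNITS = %d\n", (int)units);
    return 0;
}
```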