PDA

View Full Version : Ati fragment shader hardware details revealed



Tzupy
11-24-2006, 10:37 AM
I found these documents a few days ago, and believe they are of interest to OpenGL programmers too:
http://ati.amd.com/companyinfo/researcher/Documents.html

Humus
11-24-2006, 11:23 AM
Additionally, there's the GPU Shader Analyser (http://ati.amd.com/developer/gpusa/index.html) that we released just recently which reveals the exact hardware shader given an HLSL or assembly shader. No GLSL support yet, but it's on the todo list.

Tzupy
11-24-2006, 12:11 PM
Could you clarify a matter for me, please? On which commercial hardware does this CTM approach work?
Would it work on a 1950 Pro? Will it work on future Ati cards, R600 stuff? Of course, if it's not confidential...

Korval
11-24-2006, 01:51 PM
Originally posted by Humus:
given an HLSL or assembly shader. No GLSL support yet, but it's on the todo list.
Is there some specific reason why ATi treats glslang like an "also-ran" in terms of shading languages?

Komat
11-24-2006, 02:31 PM
Originally posted by Korval:
Is there some specific reason why ATi treats glslang like an "also-ran" in terms of shading languages?
I assume that this is caused by the fact that the number of game developers using OpenGL is much smaller than the number using DX. Since it is important from a marketing point of view to score highly in gaming tests, they put most of the support work into HLSL.

Korval
11-25-2006, 06:41 AM
I assume that this is caused by the fact that the number of game developers using OpenGL is much smaller than number of game developers using DX.
But nVidia is perfectly capable of doing both equally well.

Komat
11-25-2006, 08:40 AM
Originally posted by Korval:
But nVidia is perfectly capable of doing both equally well.
Not exactly equally. The NVPerfHUD (http://developer.nvidia.com/object/nvperfhud_home.html) is DX only. There is no OGL equivalent of FX Composer (http://developer.nvidia.com/object/fx_composer_home.html).

Korval
11-25-2006, 08:47 AM
There is no OGL equivalent of FX Composer.
That's because there's no OpenGL equivalent of FX. That's a D3D-only feature, though Collada FX looks to be an attempt to replicate this functionality.

andras
11-25-2006, 08:49 AM
Originally posted by Korval:
But nVidia is perfectly capable of doing both equally well.
Not really, see <cough> nVPerfHUD </cough>... It's a great tool, yet it's D3D only. :(

Komat
11-25-2006, 08:52 AM
Originally posted by Korval:
That's because there's no OpenGL equivalent of FX. That's a D3D-only feature, though Collada FX looks to be an attempt to replicate this functionality.
In the first versions of the tool, it used its own syntax instead of the FX format. At that time it would have been entirely possible to integrate GLSL support into that syntax; however, they later moved to FX files.

zed
11-25-2006, 11:33 AM
Not really, see <cough> nVPerfHUD </cough>... It's a great tool, yet it's D3D only.
I believe gDEBugger is similar (I've never used it, though, or NVPerfHUD).

Check out NVPerfKit 2.1:
http://developer.nvidia.com/object/nvperfkit_home.html

From the looks of it, NVPerfHUD sits on top of NVPerfSDK, which supports both OpenGL and D3D.

In defense of ATI (well, maybe apply a binary NOT to that): they don't really support D3D developers either.
Look at what's on their developer website, both D3D and OpenGL; it's pathetic compared to nVidia's. Their source code examples are less than 10% of what you find on nVidia's site (Humus's personal site has more material than ATI's company one).

Personally, I've always had the feeling (tainted by marketing, perhaps) that nVidia cares more about developers, whereas at ATI the managers go, 'let's cut back on software expenditure and instead put the dollars into marketing'. This is nothing personal against the ATI software guys; they just haven't been given the resources from above that nVidia's have.
That must be very frustrating.

Personally, I wasn't surprised by ATI getting taken over.
I *would* be very surprised if the same thing happened to nVidia.
Remember, corporations are people too!

Humus
11-27-2006, 05:46 PM
Originally posted by Tzupy:
Could you clarify a matter for me, please? On which commercial hardware does this CTM approach work?
Would it work on a 1950 Pro? Will it work on future Ati cards, R600 stuff? Of course, if it's not confidential...
I'm not 100% sure, but I believe it's only the X1K series at this point.

Zengar
11-27-2006, 09:42 PM
Wow, this is really interesting :-) This way, it could be possible to write your own graphics driver for a custom API :-) I want it!

Tzupy
11-28-2006, 01:54 AM
I still have my old system: Winnie 3200, 1 GB DDR, 6600 GT. I was planning to sell it cheaply.
If I knew for sure that CTM works on the X1950 Pro - not just the X1900XT(X) and X1950XTX - I would keep my old system, except replace the 6600 GT with an X1950 Pro.
I am interested in best-quality antialiasing of large triangle strip meshes, and maybe vertex coordinate computation - which is currently done on the CPU.
Is there any public information on the implementation of triangle strip rendering, with antialiasing, that could be used with CTM?
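For context, the CPU-side vertex work mentioned above typically includes building the index order for a triangle-strip mesh. A minimal sketch (not from this thread; the function name and grid layout are illustrative assumptions) of generating strip indices for a regular grid, with degenerate triangles stitching the rows together:

```python
def grid_strip_indices(cols, rows):
    """Index order for drawing a cols x rows vertex grid as a single
    triangle strip, one row pair at a time. Repeated indices between
    row pairs create degenerate (zero-area) triangles that let the
    whole mesh be drawn in one strip call."""
    indices = []
    for r in range(rows - 1):
        for c in range(cols):
            indices.append(r * cols + c)        # vertex on the upper row
            indices.append((r + 1) * cols + c)  # vertex on the lower row
        if r < rows - 2:
            # Repeat the last and first indices of the next row pair
            # to stitch consecutive strips with degenerate triangles.
            indices.append((r + 1) * cols + (cols - 1))
            indices.append((r + 1) * cols)
    return indices

# A 2x2 grid is a single quad: one strip of four indices.
print(grid_strip_indices(2, 2))  # → [0, 2, 1, 3]
```

Offloading this kind of per-mesh computation to the GPU is exactly what low-level access like CTM would make practical.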

Humus
11-28-2006, 04:04 PM
I read your post wrong. An X1950 Pro should definitely work with CTM. I read it as a Radeon 9500, which I'm not so sure about.

Tzupy
11-29-2006, 01:18 AM
Thank you for the information. So in principle it should work on any card that has X1K fragment shader hardware.
If I understand correctly, the number of inputs (textures) would be reduced from 16 to 12 on an X1950 Pro.

Humus
11-29-2006, 08:17 AM
The number of physical units is reduced, but not the number of logical units.