
Catalysts 5.3 released.



^Fishman
03-09-2005, 03:37 PM
The CATALYST™ Software Suite introduces OpenGL Version 2.0 support. This version of OpenGL will be in line with the latest GL ARB approved specifications. New to this specification are GLSL, non-power-of-two textures, separate stencil, multiple render targets, and point sprites. CATALYST™ Release Notes. (http://www2.ati.com/drivers/Catalyst_53_release_notes.html)
CATALYST™ Downloads (https://support.ati.com/ics/support/KBAnswer.asp?questionID=640)

Korval
03-09-2005, 05:32 PM
I'm pretty shocked at the bald-faced lie they've produced here. I suppose if their drivers emulate NPOT, then they'll be 2.0 compliant.

But then nobody will care. They don't support NPOT in hardware. All this will do is make people afraid to check for 2.0 support or use NPOTs.

Two years ago, ATi was a company with integrity and a superior product based on clever engineering and some good design choices. Now, they're releasing garbage (feature-wise) hardware and telling horrible lies.

Humus
03-09-2005, 09:36 PM
Did you download and test NPOT at all?
We support NPOT textures in HW, but not 100% orthogonally. You get acceleration if you don't use mipmaps and don't use GL_REPEAT.
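A minimal sketch of a texture setup that stays on that accelerated path (the size and the 'pixels' pointer below are just placeholders):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);    // no mipmapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // avoid GL_REPEAT
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0,                // 640x480: an arbitrary NPOT size
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);                     // 'pixels' is hypothetical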

What's your proposed solution btw? Not expose GL2.0 at all? Has any hardware ever fully supported any GL version?

Edit: And what the heck was the lie???

Gong
03-09-2005, 09:42 PM
Where is EXT_framebuffer_object? I think it's the most important extension that should be supported.

zed
03-09-2005, 10:17 PM
What's the shading language version, 1.00 or 1.10?

bobvodka
03-09-2005, 10:23 PM
Originally posted by Gong:
Where is EXT_framebuffer_object? I think it's the most important extension that should be supported.
Yeah, I was kinda hoping to see that included as well, but it looks like we are going to be waiting another month :)
(I guess the driver pipeline is a little deeper than I thought)

Although, a thought occurs: would it be at all possible to get the OGL component released separately from the drivers? I know they are WHQL tested, but afaik the OGL part isn't tested for, so it shouldn't upset things too much? (heck, even if it only went to registered devs I wouldn't be unhappy, as I am one, heh)
Just an idea anyways :)

OneSadCookie
03-09-2005, 10:35 PM
The issue of whether to report extension support in the case of partial acceleration is always tricky, and I don't think the criteria for doing so are the same for all extensions.

For example, ARB_vertex_program can be implemented efficiently on the CPU. It'll never be as fast as GPU support of course, but for "most" purposes, it's good enough.

SGIS_generate_mipmap is closer to the line -- for Tex(Sub)Image2D calls, it probably doesn't matter if the mipmaps are generated in software, but for CopyTex(Sub)Image2D, it might matter a great deal.

I think I'd personally place ARB_texture_non_power_of_two over the line -- applications are used to working around the POT restriction, so the main reason to use ARB_texture_non_power_of_two is to remove the ARB_texture_rectangle restrictions. So yes, I think Korval is right, ATI has just foiled the ARB_texture_non_power_of_two extension check. We now have to explicitly test the vendor and card to know whether it's really usable.
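In practice that explicit check boils down to string matching on GL_VENDOR / GL_RENDERER against an app-maintained table; the lookup helper below is made up for illustration:

const char *vendor   = (const char *)glGetString(GL_VENDOR);
const char *renderer = (const char *)glGetString(GL_RENDERER);
// npot_whitelist() is a hypothetical app-side lookup of vendor/renderer
// combinations known to accelerate full NPOT textures
int npot_really_usable = npot_whitelist(vendor, renderer);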

Of course, then there's ARB_shading_language_100 -- currently shipped without even a software fallback AFAIK. In a lot of ways that's even less acceptable.

In conclusion, this is a general problem, and not one with an easy solution. Perhaps the right thing is to add a new API:


new enums:
FULLY_SUPPORTED
PARTIALLY_SUPPORTED
NOT_SUPPORTED

FULLY_ACCELERATED
PARTIALLY_ACCELERATED
NOT_ACCELERATED

new functions:
QueryFeatureSupport(const char *feature, GLenum *supported, GLenum *accelerated);

Then each individual app can make a personalized decision over how to treat each individual extension, possibly referring to vendor and model in the partial support/acceleration situations.
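For example (a sketch against the hypothetical API above, using an extension name as the feature string):

GLenum supported, accelerated;
QueryFeatureSupport("GL_ARB_texture_non_power_of_two", &supported, &accelerated);
if (supported != NOT_SUPPORTED && accelerated != FULLY_ACCELERATED)
{
    // partial support/acceleration: decide per vendor/model,
    // or fall back to power-of-two or texture rectangle paths
}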

To go with this, I think it would be required to define feature names (fake extensions, if you will) covering the basic OpenGL features.

What do y'all think?

stanlylee
03-09-2005, 10:41 PM
Please tell us: why is there no FBO in the extension list? I miss it very much ~_~

Korval
03-09-2005, 11:14 PM
We support NPOT textures in HW, but not 100% orthogonally. You get acceleration if you don't use mipmaps and don't use GL_REPEAT.
Ahh. So you're effectively using texture rectangles, except that your hardware can handle (and potentially always could handle) normalized texture coordinates.

And you don't consider these unspecified limitations to be lying? Or, at least, intended to deceive would-be customers into thinking that they were getting a full 2.0 implementation?


What's your proposed solution btw? Not expose GL2.0 at all? Has any hardware ever fully supported any GL version?
Well, technically, my proposed solution would be for ATi to get back to making great hardware; I liked them better when they actually upgraded their hardware. But, given that they didn't this go-around, yes, they should not expose GL 2.0. They should expose the extensions that their hardware can handle, and that is all.

Wouldn't it be false advertising for a TNT2 to expose GL 2.0? Or, at least, false implication?


What do y'all think?
On the surface, it sounds like a good idea. But, past that surface, it collapses.

Sometimes, whether a feature is "fully implemented" depends on other features. Let's say that some hardware supported the accumulation buffer and fragment programs, but not simultaneously. Is that a "full" implementation of either? Well, if you never use accum buffers, you'll never run into any limitations, so someone who is only looking for full fragment program support would be misinformed by seeing a "partial". But, then again, someone using them both who found a software fallback would be quite upset, since they both said "full".

There's no simple answer to this. Especially when version number increases of GL bundle all kinds of features together.


Please tell us: why is there no FBO in the extension list? I miss it very much ~_~
It's probably like bobvodka said; ATi's driver development pipeline is deeper than we would like to think. That, or implementing it isn't trivial for them.

Besides, even nVidia's implementation isn't final. So it's not like they've lost the race or anything.

stanlylee
03-10-2005, 12:19 AM
All I can do is wait.
Wait for ATI's driver developers to implement FBO (maybe they'll change their minds ~_~),
and
wait for NVIDIA's Linux driver to support FBO.

ATI has an extension named GLX_ATI_Render_to_Texture, but NV does not, so I must use glCopy(Sub)TexImage under Linux... it's very inconvenient.

spasi
03-10-2005, 12:51 AM
Korval,

I've lately been disappointed with ATI too, but I don't think you're being fair. We have an FX here with the 75.90 driver and the GL version is 2.0. That doesn't make it suddenly support all of ARB_tex_npot's features. This has been done before (e.g. 3D textures) and it was never a serious problem.

PsychoLns
03-10-2005, 03:26 AM
Originally posted by OneSadCookie:
ATI has just foiled the ARB_texture_non_power_of_two extension check. We now have to explicitly test the vendor and card to know whether it's really usable.
Several of the other GL2-related extensions are not exposed in the 5.3 drivers (draw buffers, float textures), but it's still a 2.0 driver as the core functions are available.
But you don't see the arb_npot extension either - so yes, it no longer returns an error when uploading NPOT textures, but it will probably be forced into software rendering (I didn't notice any hw rendering when I tried it, but that's hard to spot when the rendering times are several seconds anyway ;) ).
I think this is the right way to handle such a "marker extension" which doesn't add to the interface - it will "work" (in sw) as the core specification requires, but the extension won't be exposed if it's not handled in hw.

execom_rt
03-10-2005, 03:53 AM
Apparently these new drivers export GL_VERSION as 2.0, so 'GL_ARB_texture_non_power_of_two' is implied according to the spec.

It seems that their implementation works like the Direct3D D3DPTEXTURECAPS_NONPOW2CONDITIONAL feature.

So, finally, we don't need texture coordinates in pixels but in the 0.0-1.0 range, which is fair enough.

Well, if there's no 'wrap mode', it's OK for me.

Also, we have separate two-sided stencil (finally I can drop the GL_ATI_separate_stencil code; I hate having two implementations for the same feature, it makes the code messy).

Point sprites, finally, but since you cannot specify a vertex array for the point size (something glVertexPointer-like for the point size), they're unusable for me (Direct3D can do it, called D3DFVFCAPS_PSIZE).

But, the end of the road is nowhere close:

- Still waiting for depth component PBuffer support: where is that WGL_ATI_render_depth_texture? Will it ever be supported? (Anyway, there is no DST support in Direct3D on ATI video cards either.)

- The poor rendering quality of GL_ARB_shadow (unless you are using fragment shaders for the whole thing).

- The bug where GLSL silently goes to software rendering AFTER linking while reporting that the shader goes to hardware (see the rough sketch after this list).

- And an official list of all the limitations of the OpenGL implementation (like limitations on texture indirections etc. Do you know of an official document about that?)
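For that software-fallback point, the usual (imperfect) check is to scan the link info log; 'program' below is whatever glCreateProgram returned, and the wording searched for is driver-specific guesswork, which is exactly why the bug above hurts:

GLint linked = 0;
char log[4096];
glGetProgramiv(program, GL_LINK_STATUS, &linked);
glGetProgramInfoLog(program, sizeof(log), NULL, log);
if (!linked || strstr(log, "software") != NULL)
{
    // assume a software fallback and switch to a cheaper shader
}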

Humus
03-10-2005, 06:15 AM
Originally posted by zed:
What's the shading language version, 1.00 or 1.10?
It's 1.10. To support OpenGL 2.0 you need to support 1.10.
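(Apps can also just query it at runtime; the GL_SHADING_LANGUAGE_VERSION enum is part of GL 2.0:)

const char *glsl_version = (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION); // e.g. "1.10"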

Humus
03-10-2005, 06:21 AM
Originally posted by OneSadCookie:
So yes, I think Korval is right, ATI has just foiled the ARB_texture_non_power_of_two extension check. We now have to explicitly test the vendor and card to know whether it's really usable.
No, we don't expose GL_ARB_texture_non_power_of_two. We expose it only through GL2.0. Korval was arguing that we shouldn't have exposed GL2.0 at all, since our NPOT support is limited. Arguing like that, we might just as well scrap GLSL support, or heck, GL 1.1 too.


Originally posted by OneSadCookie:
Of course, then there's ARB_shading_language_100 -- currently shipped without even a software fallback AFAIK.
That's not true at all.

Humus
03-10-2005, 06:35 AM
Originally posted by Korval:
Ahh. So you're effectively using texture rectangles, except that your hardware can handle (and potentially always could handle) normalized texture coordinates.
Actually the other way around. We only support normalized coordinates, so for texture rectangles (whose limitations originate from NV hardware) we AFAIK actually have to patch the shader to normalize the texture coordinates.


Originally posted by Korval:
And you don't consider these unspecified limitations to be lying? Or, at least, intended to deceive would-be customers into thinking that they were getting a full 2.0 implementation?
No. It's a full implementation. Some stuff runs in software though. That's the nature of GL.

I think your criticism is highly unmotivated in this situation. There have been times in the past where our PR team screwed up (like when GL 2.0 ended up on our boxes before there was even a ratified spec), but this time I don't see anything that's even remotely close to unfair to consumers or developers. No hype, no value-carrying words like "complete" or "full", and it follows the usual unofficial convention of only exposing hardware-accelerated features in the extension string, and so on. I see nothing that was done wrong this time.


But, given that they didn't this go-around, yes, they should not expose GL 2.0. They should expose the extensions that their hardware can handle, and that is all.
Should we expose GLSL then? We don't support noise(), very long shaders, gl_FrontFacing on R300 etc.


Wouldn't it be false advertising for a TNT2 to expose GL 2.0? Or, at least, false implication?
What about the same TNT2 exposing GL 1.5? Are nVidia lying too?

Humus
03-10-2005, 06:38 AM
Originally posted by execom_rt:
- Still waiting for depth component PBuffer support: where is that WGL_ATI_render_depth_texture? Will it ever be supported?
We don't support binding a depth buffer as a texture directly, so it would be pointless.

spasi
03-10-2005, 06:53 AM
Humus,

Will render-to-depth-texture be supported with ATI's EXT_fbo implementation?

Korval
03-10-2005, 10:46 AM
Point sprites, finally, but since you cannot specify a vertex array for the point size (something glVertexPointer-like for the point size), they're unusable for me (Direct3D can do it, called D3DFVFCAPS_PSIZE).
I'm pretty sure it's possible, though I don't remember the extension that exposes this. Even if I'm wrong, use a generic attribute and have a vertex shader pass the size along.
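Roughly like this, assuming GLSL under GL 2.0 (the attribute name and the surrounding compile/link code are made up and left out):

// vertex shader: read the size from a generic attribute, write gl_PointSize
const char *vs_source =
    "attribute float pointSize;\n"
    "void main() {\n"
    "    gl_PointSize = pointSize;\n"
    "    gl_Position  = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";
// ... compile/link, bind 'pointSize' to a generic attribute fed by glVertexAttribPointer ...
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE); // take the point size from the shader
glEnable(GL_POINT_SPRITE);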


No. It's a full implementation. Some stuff runs in software though. That's the nature of GL.
Yes, but we were working through that kind of stuff. All the useful features of GL, the ones you could imagine sane hardware actually implementing, work in hardware. So, when you see a modern card supporting 1.5, you're pretty sure that it really supports 1.5.

There's no need to regress back to the bad old days where you had to guess what was going to be usable and what wasn't.


this time I don't see anything that's even remotely close to unfair to consumers or developers.
So, the assumption should be that, if the card doesn't expose NPOT, but does expose GL 2.0, their NPOT functionality should be considered incomplete? This is hardly a documented feature.


Should we expose GLSL then? We don't support noise(), very long shaders, gl_FrontFacing on R300 etc.
Be advised you're talking to someone who would rather the ARB extend their assembly languages when they add functionality.


What about the same TNT2 exposing GL 1.5? Are nVidia lying too?
Yes. OpenGL versions should not be marketing slogans; they should refer to a relative level of hardware/fast functionality.

V-man
03-10-2005, 11:44 AM
Yes. OpenGL versions should not be marketing slogans; they should refer to a relative level of hardware/fast functionality.
There isn't any rule about this.
The GL version is not the same thing as hw capability.
It's like installing DX9c while having DX8-level hw. D3D has caps, but GL doesn't.

Whose fault is that?

Besides, ATI is doing what Nvidia is doing. They don't expose the extension because the hw can't do it.
It's good enough for everyone.

Whether ATI's driver is bug free is another matter. I still see the temp register overflow error in my GLSL vertex shader.

Guardian
03-11-2005, 10:25 AM
One year ago, people here preferred buying a Radeon 9500/9700/9800 (Pro). They were the best cards, and ATi was in people's hearts: because NVIDIA lied, and because of the performance results.

Today, the same people have changed their minds :) ATI hardware is crappy, misses features, ...

hey ... life is life, so is marketing. Everybody is lying :D

tsz
03-11-2005, 10:50 AM
Today, the same people have changed their minds :) ATI hardware is crappy, misses features, ...
I disagree. My programs run about 3 times faster on an X800 than on a GeForce 6800. ATI hardware is not crappy, for sure.

Anyway, I have problems with the new 5.3 driver.
The performance dropped about 10% compared to 4.12, and there must be a bug in the shader compiler.

Debug output says something like "Driver was not able to install pixel shader" and the D3D debugger says "probably unsupported shader profile".

That does not make sense. I am using shader 2 profiles only, and it used to work in older driver versions. And it works on GeForce 6.

Anyone have similar experiences?

Korval
03-11-2005, 12:00 PM
My programs run about 3 times faster on an X800 than on a GeForce 6800. ATI hardware is not crappy, for sure.
It depends on how you define "crappy". If what you care about is performance, ATi makes reasonable hardware. Though your specific case of a 3x performance improvement is probably down to something you're doing that your GeForce doesn't like.

But, if you care about advanced features, ATi doesn't make reasonable hardware. Indeed, in the need to support both, ATi is actually slowing the progress of graphics by making feature-incomplete hardware.

Stephen_H
03-11-2005, 01:09 PM
I would like to see a list of the limitations of each vendor's GL implementation instead of discovering them after I have implemented something that uses a particular extension and executes something off the 'fast' path.

A good example of the information I'd like to see is the tables that came with ATI's 9700 devkit, which listed exactly which vertex and texture formats were hardware accelerated and how many cycles it took to execute each arb_fp/arb_vp instruction.

I would like to see information like this for all of ATI's (and NVIDIA's) cards. I would like to see more details about which features are emulated and which are fully hardware accelerated, instead of having to guess whether it's a driver bug, my code, the emulation, or my bad usage of the API slowing things down. Stuff like the expected bandwidth for various methods of uploading/downloading textures and vertex data on various cards and on AGP/PCI, so I know what kind of performance speedup I should expect by switching my uploading over to PBOs (for example).

Btw, does this version of Catalyst support rectangular texture sampling in GLSL? (And I do agree with the other posters, the most anticipated ext is EXT_framebuffer_object)

Elixer
03-12-2005, 01:03 PM
There isn't a conformance test application for 2.0, is there?

I know there was some talk about this in last year's meeting notes, but I don't recall anything that came of it.

Hopefully all the ARB members will stop the infighting and get something done in the March meeting.

Korval
03-12-2005, 02:38 PM
Hopefully all the ARB members will stop the infighting and get something done in the March meeting.
"Get something done" along what lines? EXT_FBO is finished; it's not up to the ARB to implement it. Sure, there are some extra extensions that they need to work on to add a number of different capabilities (selecting texture formats, render to vertex array, etc). However, outside of these, there's not a whole lot for them to do. If another programmable domain is going to be opened up, such an extension shouldn't take too much work. Otherwise, there's just not much left to do.

Trenki
03-17-2005, 07:26 AM
Originally posted by PsychoLns:
Several of the other GL2-related extensions are not exposed in the 5.3 drivers (draw buffers, float textures), but it's still a 2.0 driver as the core functions are available.
Reading the OpenGL2.0 spec and comparing it with the extension string reported by my new ATI X800 XT, the following extensions are missing even though they were promoted to GL2.0:
ARB_texture_non_power_of_two
ARB_draw_buffers
EXT_stencil_two_side

The issue with NPOT textures has already been discussed but I am interested in the issues with the other two.

ARB_draw_buffers is not exposed even though ATI_draw_buffers is. Taking just a quick look at both specs, one difference that I could spot was that ARB_draw_buffers lists ARB_fragment_shader as an additional dependency. So, what does it mean when ARB_draw_buffers is not exposed but ATI_draw_buffers is? Does it mean that there is no hardware support for multiple render targets from a GLSL fragment shader, but only from a fragment program?

EXT_stencil_two_side is based on the ATI extension ATI_separate_stencil with additional state. It is not exposed through the extension string. If I use the equivalent OpenGL2.0 core functions, does it work at reduced speed?

And there is also the ARB_texture_rectangle extension, which is not exposed. The spec for this extension says it is identical to EXT_texture_rectangle, which is present in the extension string. Why then is ARB_texture_rectangle missing?

PsychoLns
03-17-2005, 02:00 PM
I don't think there's any real reason for those extensions not being there yet. But it IS confusing, considering the usual practice of not exposing extensions that aren't hw supported.

But afaik those extensions don't really have anything to do with GL2 - GL2 just incorporates the same functionality as these extensions. And instead of querying the extensions you should simply look for the 2.x version string and then get the function pointers (glDrawBuffers instead of glDrawBuffersARB etc).
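Something like this, for instance (Windows-style loading; error handling left out):

const char *version = (const char *)glGetString(GL_VERSION);
int major = 0, minor = 0;
sscanf(version, "%d.%d", &major, &minor);
if (major >= 2)
{
    // core entry point instead of the ARB-suffixed one; PFNGLDRAWBUFFERSPROC comes from glext.h
    PFNGLDRAWBUFFERSPROC glDrawBuffers =
        (PFNGLDRAWBUFFERSPROC)wglGetProcAddress("glDrawBuffers");
    // ... use glDrawBuffers(count, buffers) where glDrawBuffersARB was used before ...
}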

EXT_stencil_two_side in particular is not GL2; it's just nvidia's "version" of ATI_separate_stencil (the reasons are described in the EXT_ spec), and neither of them is exactly the same as the functionality in GL2. I haven't tried the new one yet in place of the annoying double implementation (no need to yet, as the old version is working and we probably won't have (official) NV GL2 drivers anytime soon).

Humus
03-18-2005, 05:22 PM
EXT_stencil_two_side is slightly more flexible than our hardware. ARB_texture_rectangle and ARB_draw_buffers interact with GLSL, whereas EXT_texture_rectangle and ATI_draw_buffers do not. Both should be possible to implement on our hardware, though. Rectangles in GLSL haven't been implemented yet; not sure why ARB_draw_buffers is missing, as the same functionality should be available through GL2. I guess it could also be a matter of something not being fully implemented yet.