Catalyst 5.3 released.

The CATALYST™ Software Suite introduces OpenGL Version 2.0 support. This version of OpenGL will be in line with the latest GL ARB approved specifications. New to this specification are GLSL, non-power-of-two-textures, separate stencil, multiple render targets, and point sprites.
CATALYST™ Release Notes.
CATALYST™ Downloads

I’m pretty shocked at this bald-faced lie they’ve produced here. I suppose if their drivers emulate NPOT, then they’ll be 2.0 compliant.

But then nobody will care. They don’t support NPOT in hardware. All this will do is make people afraid to check for 2.0 support or use NPOTs.

Two years ago, ATi was a company with integrity and a superior product based on clever engineering and some good design choices. Now they’re releasing garbage (feature-wise) hardware and telling horrible lies.

Did you download and test NPOT at all?
We support NPOT textures in HW, but not 100% orthogonally. You get acceleration if you don’t use mipmaps and don’t use GL_REPEAT.
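
To illustrate, a minimal sketch of a texture setup that should stay on that accelerated path as described (no mipmaps, no GL_REPEAT); ‘pixels’ is assumed to point to valid image data, and 640x480 is just an example of an NPOT size:

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* no mipmaps: use a non-mipmapped minification filter */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/* no GL_REPEAT: clamp both wrap modes */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
/* 640x480 is a non-power-of-two size */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);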

What’s your proposed solution btw? Not expose GL2.0 at all? Has any hardware ever fully supported any GL version?

Edit: And what the heck was the lie???

Where is EXT_framebuffer_object? I think it’s the most important extension that should be supported.

What’s the shading language version, 1.00 or 1.10?

Originally posted by Gong:
Where is EXT_framebuffer_object? I think it’s the most important extension that should be supported.
Yeah, I was kinda hoping to see that included as well, but it looks like we are going to be waiting another month :)
(I guess the driver pipeline is a little deeper than I thought)

Although, a thought occurs: would it be at all possible to get the OGL component released separately from the drivers? I know they are WHQL tested, but AFAIK the OGL part isn’t tested for, so it shouldn’t upset things too much. (Heck, even if it only went to registered devs I wouldn’t be unhappy, as I am one, heh.)
Just an idea anyways :)

The issue of whether to report extension support in the case of partial acceleration is always tricky, and I don’t think the criteria for doing so are the same for all extensions.

For example, ARB_vertex_program can be implemented efficiently on the CPU. It’ll never be as fast as GPU support of course, but for “most” purposes, it’s good enough.

SGIS_generate_mipmap is closer to the line – for Tex(Sub)Image2D calls, it probably doesn’t matter if the mipmaps are generated in software, but for CopyTex(Sub)Image2D, it might matter a great deal.

I think I’d personally place ARB_texture_non_power_of_two over the line – applications are used to working around the POT restriction, so the main reason to use ARB_texture_non_power_of_two is to remove the ARB_texture_rectangle restrictions. So yes, I think Korval is right, ATI has just foiled the ARB_texture_non_power_of_two extension check. We now have to explicitly test the vendor and card to know whether it’s really usable.
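
As a rough sketch of what that explicit test ends up looking like; the version parsing and the decision to distrust NPOT on ATI parts are illustrative assumptions on my part, not anything the spec mandates:

#include <stdlib.h>
#include <string.h>
#include <GL/gl.h>

/* Returns non-zero only if NPOT textures look safe to use in hardware.
   GL 2.0 promises NPOT in core, but the core version alone no longer
   tells us whether it is actually accelerated. */
int npot_usable(void)
{
    const char *ver    = (const char *)glGetString(GL_VERSION);
    const char *vendor = (const char *)glGetString(GL_VENDOR);
    int core_npot = (ver != NULL) && (atof(ver) >= 2.0);   /* GL 2.0 requires NPOT in core */
    if (!core_npot)
        return 0;
    /* Assumption for illustration: distrust NPOT on ATI parts until the
       partial-acceleration situation is sorted out. */
    if (vendor != NULL && strstr(vendor, "ATI") != NULL)
        return 0;
    return 1;
}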

Of course, then there’s ARB_shading_language_100 – currently shipped without even a software fallback AFAIK. In a lot of ways that’s even less acceptable.

In conclusion, this is a general problem, and not one with an easy solution. Perhaps the right thing is to add a new API:

new enums:
FULLY_SUPPORTED
PARTIALLY_SUPPORTED
NOT_SUPPORTED

FULLY_ACCELERATED
PARTIALLY_ACCELERATED
NOT_ACCELERATED

new functions:
QueryFeatureSupport(const char *feature, GLenum *supported, GLenum *accelerated);

Then each individual app can make a personalized decision over how to treat each individual extension, possibly referring to vendor and model in the partial support/acceleration situations.

To go with this, I think it would be required to define feature names (fake extensions, if you will) covering the basic OpenGL features.
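
For example, an app might then do something like this (purely hypothetical, of course; none of these enums or entry points exist today, and the feature name string is made up for illustration):

GLenum supported, accelerated;
QueryFeatureSupport("ARB_texture_non_power_of_two", &supported, &accelerated);
if (supported != NOT_SUPPORTED && accelerated == FULLY_ACCELERATED)
{
    /* use NPOT textures freely */
}
else if (supported != NOT_SUPPORTED && accelerated == PARTIALLY_ACCELERATED)
{
    /* check GL_VENDOR / GL_RENDERER before deciding how far to trust it */
}
else
{
    /* fall back to padding textures to power-of-two sizes */
}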

What do y’all think?

Please tell us: why is there no FBO in the extension list? I miss it very much ~_~

We support NPOT textures in HW, but not 100% orthogonally. You get acceleration if you don’t use mipmaps and don’t use GL_REPEAT.
Ahh. So you’re effectively using texture rectangles, except that your hardware can handle (and potentially always could handle) normalized texture coordinates.

And you don’t consider these unspecified limitations to be lying? Or, at least, intended to deceive would-be customers into thinking that they were getting a full 2.0 implementation?

What’s your proposed solution btw? Not expose GL2.0 at all? Has any hardware ever fully supported any GL version?
Well, technically, my proposed solution would be for ATi to get back to making great hardware; I liked them better when they actually upgraded their hardware. But given that they didn’t this time around, yes, they should not expose GL 2.0. They should expose the extensions that their hardware can handle, and that is all.

Wouldn’t it be false advertising for a TNT2 to expose GL 2.0? Or, at least, false implication?

What do y’all think?
On the surface, it sounds like a good idea. But, past that surface, it collapses.

Sometimes, whether a feature is “fully implemented” depends on other features. Let’s say that some hardware supported the accumulation buffer and fragment programs, but not simultaneously. Is that a “full” implementation of either? Well, if you never use accum buffers, you’ll never run into any limitations, so someone who is only looking for full fragment program support would be misinformed by seeing a “partial”. But, then again, someone using them both who found a software fallback would be quite upset, since they both said “full”.

There’s no simple answer to this, especially when GL version increases bundle all kinds of features together.

Please tell us: why is there no FBO in the extension list? I miss it very much ~_~
It’s probably like bobvodka said; ATi’s driver development pipeline is deeper than we would like to think. That, or implementing it isn’t trivial for them.

Besides, even nVidia’s implementation isn’t final. So it’s not like they’ve lost the race or anything.

All I can do is wait:
wait for ATI’s driver developers to implement FBO (maybe they’ll change their minds ~_~),
and
wait for NVIDIA’s Linux driver to support FBO.

ATI has an extension named GLX_ATI_Render_to_Texture, but NV does not… so I must use glCopy(Sub)TexImage under Linux… it’s very inconvenient.
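
For reference, the copy-based path I’m stuck with looks roughly like this (‘draw_scene’, ‘color_tex’, ‘width’ and ‘height’ are placeholders; the texture is assumed to have been created with glTexImage2D beforehand):

/* Render to the normal framebuffer, then copy the result into a texture. */
draw_scene();   /* placeholder for the actual rendering */
glBindTexture(GL_TEXTURE_2D, color_tex);
/* copy the lower-left width x height rectangle of the framebuffer
   into level 0 of the bound texture, starting at offset (0, 0) */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);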

Korval,

I’ve lately been disappointed with ATI too, but I don’t think you’re being fair. We have an FX here with the 75.90 driver and the GL version is 2.0. That doesn’t make it suddenly support all of ARB_tex_npot’s features. This has been done before (e.g. 3D textures) and it was never a serious problem.

Originally posted by OneSadCookie:
ATI has just foiled the ARB_texture_non_power_of_two extension check. We now have to explicitly test the vendor and card to know whether it’s really usable.

Several of the other GL2-related extensions are not defined in the 5.3 drivers (draw buffers, float textures), but it’s still a 2.0 driver as the core functions are available.
But you don’t see the ARB_npot extension either - so yes, it no longer returns an error when uploading NPOT textures, but it will probably be forced into software rendering (I didn’t notice any hw rendering when I tried it, but that’s hard to spot when the rendering times are several seconds anyway ;) ).
I think this is the right way to handle such a “marker extension” which doesn’t add to the interface - it will “work” (in sw) as the core specification requires, but the extension won’t be exposed if it’s not handled in hw.

Apparently these new drivers export GL_VERSION as 2.0, so ‘GL_ARB_texture_non_power_of_two’ functionality is implied according to the spec.

It seems that their implementation is working like the Direct3D D3DPTEXTURECAPS_NONPOW2CONDITIONAL feature.

So, finally, texture coordinates don’t need to be in pixels but between 0 and 1.0, which is fair enough.

Well, if there’s no ‘wrap mode’, it’s OK for me.

Also, we have separate two-sided stencil (finally I can drop the GL_ATI_separate_stencil code; I hate having two implementations of the same feature, it makes the code messy).
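
For anyone making the same switch, the GL 2.0 core calls look roughly like this; the specific ops are just an example (the usual z-fail shadow volume setup):

glEnable(GL_STENCIL_TEST);
/* front and back faces get their own func/op state in core GL 2.0 */
glStencilFuncSeparate(GL_FRONT, GL_ALWAYS, 0, ~0u);
glStencilFuncSeparate(GL_BACK,  GL_ALWAYS, 0, ~0u);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);  /* depth fail: decrement */
glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);  /* depth fail: increment */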

Point sprites, finally, but since you cannot specify a vertex array for the point size (something glVertexPointer-like for the point size), it’s unusable for me (Direct3D can do it; the cap is D3DFVFCAPS_PSIZE).

But, the end of the road is nowhere close:

  • Still waiting for depth component pbuffer support: where is that WGL_ATI_render_depth_texture? Will it ever be supported? (Anyway, there is no DST support in Direct3D on ATI video cards either.)

  • The poor rendering quality of GL_ARB_shadow (unless you are using fragment shaders for the whole thing)

  • The bug where GLSL silently falls back to software rendering AFTER linking, even though the link log says the shader will run in hardware… (see the workaround sketch after this list)

  • And an official list of all the limitations of the OpenGL implementation (like the limits on texture indirections, etc. Does anyone know of an official document about that?)
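
About the software-fallback bug above: the only workaround I know of is a heuristic, scanning the info log after linking (‘program’ is an already-linked GLSL program object; the “software” wording is an ATI-driver-specific hint I’m assuming here, not anything the spec guarantees):

GLchar  info_log[4096];
GLsizei length = 0;
glGetProgramInfoLog(program, sizeof(info_log), &length, info_log);
/* the driver's log tends to mention "software" when the program won't run
   in hw; this is a driver-specific hint, not specified behavior */
if (length > 0 && strstr(info_log, "software") != NULL)
{
    /* expect a software fallback at draw time */
}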

Originally posted by zed:
What’s the shading language version, 1.00 or 1.10?
It’s 1.10. To support OpenGL 2.0 you need to support 1.10.
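
You can also check it at runtime with the core GL 2.0 query; this should print something like “1.10” on these drivers:

const char *glsl_ver = (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION);
printf("GLSL version: %s\n", glsl_ver ? glsl_ver : "(not available)");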

Originally posted by OneSadCookie:
So yes, I think Korval is right, ATI has just foiled the ARB_texture_non_power_of_two extension check. We now have to explicitly test the vendor and card to know whether it’s really usable.
No, we don’t expose GL_ARB_texture_non_power_of_two. We expose it only through GL2.0. Korval was arguing that we shouldn’t have exposed GL2.0 at all, since our NPOT support is limited. Arguing like that we might just as well scrap GLSL support, or heck, GL 1.1 too.

Originally posted by OneSadCookie:
Of course, then there’s ARB_shading_language_100 – currently shipped without even a software fallback AFAIK.
That’s not true at all.

Originally posted by Korval:
Ahh. So you’re effectively using texture rectangles, except that your hardware can handle (and potentially always could handle) normalized texture coordinates.
Actually the other way around. We only support normalized coordinates, so for texture rectangles (whose limitations originated from NV hardware) we AFAIK actually have to patch the shader to normalize the texture coordinates.

Originally posted by Korval:
And you don’t consider these unspecificed limitations to be lying? Or, at least, intended to deceive wouldbe customers into thinking that they were getting a full 2.0 implementation?
No. It’s a full implementation. Some stuff runs in software though. That’s the nature of GL.

I think your criticism is highly unjustified in this situation. There have been times in the past where our PR team screwed up (like when GL 2.0 ended up on our boxes before there was even a ratified spec), but this time I don’t see anything that’s even remotely close to unfair against consumers or developers. No hype, no value-carrying words like “complete” or “full”; it follows the regular unofficial standard of only exposing hardware-accelerated features in the extension string, and so on. I see nothing that was done wrong this time.

But given that they didn’t this time around, yes, they should not expose GL 2.0. They should expose the extensions that their hardware can handle, and that is all.
Should we expose GLSL then? We don’t support noise(), very long shaders, gl_FrontFacing on R300 etc.

Wouldn’t it be false advertising for a TNT2 to expose GL 2.0? Or, at least, false implication?
What about the same TNT2 exposing GL 1.5? Are nVidia lying too?

Originally posted by execom_rt:
- Still waiting for depth component pbuffer support: where is that WGL_ATI_render_depth_texture? Will it ever be supported?
We don’t support binding a depth buffer as a texture directly, so it would be pointless.

Humus,

Will render-to-depth-texture be supported with ATI’s EXT_fbo implementation?

Point sprites, finally, but since you cannot specify a vertex array for the point size (something glVertexPointer-like for the point size), it’s unusable for me (Direct3D can do it; the cap is D3DFVFCAPS_PSIZE).
I’m pretty sure it’s possible, though I don’t remember the extension that exposes this. Even if I’m wrong, use a generic attribute and have a vertex shader pass the size along.
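
A rough sketch of that route, in case it helps; the attribute name, the location 1, and the ‘prog’/‘sizes’ variables are invented for the example. Don’t forget GL_VERTEX_PROGRAM_POINT_SIZE, or the shader-written size is ignored:

/* GLSL 1.10 vertex shader: per-vertex point size from a generic attribute */
static const char *vs_src =
    "attribute float pointSize;\n"
    "void main()\n"
    "{\n"
    "    gl_PointSize = pointSize;\n"
    "    gl_Position  = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";

/* C side (compiling and attaching the shader to 'prog' omitted): */
glBindAttribLocation(prog, 1, "pointSize");   /* must happen before linking */
glLinkProgram(prog);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, 0, sizes);  /* one float per vertex */
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);       /* let the shader control point size */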

No. It’s a full implementation. Some stuff runs in software though. That’s the nature of GL.
Yes, but we were working through that kind of stuff. All the useful features of GL, the ones you could imagine sane hardware actually implementing, work in hardware. So, when you see a modern card supporting 1.5, you’re pretty sure that it really supports 1.5.

There’s no need to regress back to the bad old days where you had to guess what was going to be usable and what wasn’t.

this time I don’t see anything that’s even remotely close to unfair against consumers or developers.
So the assumption should be that, if the card doesn’t expose NPOT but does expose GL 2.0, its NPOT functionality should be considered incomplete? This is hardly documented behavior.

Should we expose GLSL then? We don’t support noise(), very long shaders, gl_FrontFacing on R300 etc.
Be advised you’re talking to someone who would rather the ARB extend their assembly languages when they add functionality.

What about the same TNT2 exposing GL 1.5? Are nVidia lying too?
Yes. OpenGL versions should not be marketing slogans; they should refer to a relative level of hardware/fast functionality.

Yes. OpenGL versions should not be marketing slogans; they should refer to a relative level of hardware/fast functionality.
There isn’t any rule about this.
The GL version is not the same thing as hw capability.
It’s like installing DX9c while having DX8-level hardware. D3D has caps, but GL doesn’t.

Whose fault is that?

Besides, ATI is doing what Nvidia is doing. They don’t expose the extension because the hw can’t do it.
It’s good enough for everyone.

Whether ATI’s driver is bug-free is another matter. I still see the temp register overflow error in my GLSL vertex shader.