OpenGL 3.2 updated?

I’m quite surprised by this news:

http://www.opengl.org/news/permalink/arb…ngl-shading-lan

Updating specifications to fix “bugs” is an old practice, so I really wonder why this is news. Checking the changes between OpenGL 3.2 core 20090803 and OpenGL 3.2 core 20091207 … the least I can say is that it’s only slightly updated.

I mainly noticed the update to the “Texture Completeness” section, which is really much clearer now.

PS: keep your old glspec32.core.20090803.withchanges.pdf to see the differences between OpenGL 3.1 and OpenGL 3.2.

I also thought the news was sort of odd; did we get programmable blending?

No no, it’s mostly spell checking… but maybe I missed something.

I’ve been waiting for that for so long… not sure why they haven’t added it yet.

I’m sure it’s on schedule for OpenGL 3.3 or 4!

Does the current hardware support programmable blending?

Ditto! And I’m sure it’s on its way… :whistle:

/N

The hardware doesn’t support it completely, but it does in some fashion.

On the nVidia side:
http://www.opengl.org/registry/specs/NV/texture_barrier.txt
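
Roughly, the kind of “custom blend” you can fake with that extension looks like this on the shader side. This is just a sketch with made-up names: the sampler is assumed to be bound to the very texture that is attached as the current color buffer, and the application has to call glTextureBarrierNV() between the pass that wrote the destination and this one.

    #version 150

    // Assumed to be bound to the same texture that is attached as the
    // current color buffer. NV_texture_barrier makes reading it defined,
    // provided glTextureBarrierNV() was issued after the previous writes
    // and each texel is read/written by at most one fragment in this pass.
    uniform sampler2D uDestination;   // hypothetical name

    in  vec4 vSrcColor;               // whatever this pass computes as "source"
    out vec4 oColor;

    void main()
    {
        vec4 dst = texelFetch(uDestination, ivec2(gl_FragCoord.xy), 0);

        // Any blend equation you like instead of the fixed-function ones,
        // e.g. a plain premultiplied "over" operator:
        oColor = vSrcColor + (1.0 - vSrcColor.a) * dst;
    }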

On the ATI side, I think the hardware has even more flexibility, but not that much more. I mean, what I would really like is a GLSL program at that stage, and this is really not how GPUs work these days.

Hardware doesn’t have a programmable blend stage, really. The blending units on today’s ATI hardware are no different from the ones on the Radeon 9500, except that they’re per render target now and support floating-point renderbuffers and multisampling. And frankly I don’t believe they will make this stage programmable in the foreseeable future.

IMHO there is no need for a separate type of shader anyway. Once the hardware can do it, we’ll just get read-modify-write access to the framebuffer in the fragment shader.

I actually think it would be really useful to have this stage programmable, especially for deferred rendering: working on pixels rather than fragments, and with a dedicated architecture the graphics card could save the G-buffer writes and reads. It might be a dream, though, because Microsoft hasn’t shown any interest in this, and nVidia seems to prefer wasting transistors on computing doubles at half the rate of floats. Maybe ATI, but with their small-chip policy I’m not sure they would really be fans of the big cache this idea might require.
Well, it’s still a great idea for tile-based GPUs :smiley:
Dream, dream, dream!

Once the hardware can do it, we’ll just get read-modify-write access to the framebuffer in the fragment shader.

Unless of course you want to be able to separate how fragments get blended from the rest of fragment computation, so that you can use the same fragment shader with different blend shaders.

Well, at least on the project I’m currently working on, that would actually be a bit more practical than having a separate full shader program for each kind of blending.

/N

I think the point is that once you can perform arbitrary read/write operations on generic buffers a blend stage will make about as much sense as a special stage to perform… arbitrary read/write operations on generic buffers.

Btw you can already do this sort of thing on SM5 hw.
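
For reference, the GLSL side of that (image load/store, the GL exposure of those SM5 read/write capabilities) looks roughly like the sketch below. The names are mine, the image has to be bound with glBindImageTexture on the application side, and, notably, there is still no ordering guarantee between overlapping fragments, which is exactly why it isn’t a drop-in replacement for the blend stage.

    #version 420
    // (or #version 400 plus GL_EXT_shader_image_load_store on earlier drivers)

    // Bound by the application with glBindImageTexture to the texture that
    // backs the "framebuffer"; a format qualifier is required for imageLoad.
    layout(binding = 0, rgba8) coherent uniform image2D uColorBuf;

    in vec4 vSrcColor;

    void main()
    {
        ivec2 p   = ivec2(gl_FragCoord.xy);
        vec4  dst = imageLoad(uColorBuf, p);                 // read
        vec4  res = vSrcColor + (1.0 - vSrcColor.a) * dst;   // "blend"
        imageStore(uColorBuf, p, res);                       // write back

        // Regular color output is unused here; the app would typically mask
        // color writes. Caveat: nothing orders two overlapping fragments
        // with respect to each other, unlike the fixed-function blend stage.
    }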