OpenGL 3.2 updated?



Groovounet
12-10-2009, 04:20 AM
I'm quite surprised by this news:

http://www.opengl.org/news/permalink/arb-working-group-approves-updates-to-the-opengl-3.2-and-opengl-shading-lan

Updating specifications to fix "bugs" is an old practice, so I really wonder why this made the news. Checking the changes between OpenGL 3.2 core 20090803 and OpenGL 3.2 core 20091207... the least I can say is that it's only slightly updated.

I mainly noticed the update to the "Texture Completeness" section, which is really much clearer now.

PS: keep your old glspec32.core.20090803.withchanges.pdf to see the differences between OpenGL 3.1 and OpenGL 3.2.

ruysch
12-10-2009, 05:55 AM
I also thought the news was sort of odd. Did we get programmable blending?

Groovounet
12-10-2009, 06:52 AM
No no, it's mostly spell checking... but maybe I missed something.

soconne
12-10-2009, 07:45 AM
I also thought the news was sort of odd. Did we get programmable blending?

I've been waiting for that for so long... not sure why they haven't added it yet.

Groovounet
12-10-2009, 08:46 AM
I'm sure it's on schedule for OpenGL 3.3 or 4!

Godlike
12-10-2009, 09:17 AM
Does the current hardware support programmable blending?

niko
12-10-2009, 09:25 AM
I've been waiting for that for so long... not sure why they haven't added it yet.

Ditto! And I'm sure it's on its way... :whistle:

/N

Groovounet
12-10-2009, 09:48 AM
The hardware doesn't support it completely, but it does to some extent.

On nVidia side:
http://www.opengl.org/registry/specs/NV/texture_barrier.txt

On the ATI side, I think the hardware has even more flexibility, but not that much. I mean, what I would really like is a GLSL program at that stage, and that is really not how GPUs work these days.
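
For anyone curious, here is a minimal sketch of how NV_texture_barrier can approximate this: bind the render target texture as a sampler too, have the fragment shader fetch the "destination" color itself (e.g. texelFetch at gl_FragCoord.xy) and blend manually, and call glTextureBarrierNV() between overlapping passes. The names (draw_blended_passes, fbo, colorTex, program) are made up for illustration; this is a sketch, not tested code.

/* Sketch: approximating programmable blending with NV_texture_barrier.
 * Assumes colorTex is attached to fbo as COLOR_ATTACHMENT0 and that the
 * fragment shader reads the same texture to get the "destination" color
 * (texelFetch at gl_FragCoord.xy) and blends it manually with its own
 * result. */
#include <GL/glew.h>

void draw_blended_passes(GLuint fbo, GLuint colorTex, GLuint program,
                         int num_passes)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glUseProgram(program);

    /* The render target is also bound as a texture; this is only legal
     * under the rules laid out by NV_texture_barrier. */
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, colorTex);

    for (int i = 0; i < num_passes; ++i) {
        /* Flush texture caches so this pass sees what the previous one
         * wrote to the attachment. */
        glTextureBarrierNV();
        glDrawArrays(GL_TRIANGLES, 0, 6); /* e.g. a fullscreen quad */
    }
}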

Eosie
12-10-2009, 10:46 AM
Hardware doesn't really have a programmable blend stage. The blending units on today's ATI hardware are no different from the ones on the Radeon 9500, except that they're per render target now and support floating-point renderbuffers and multisampling. And frankly I don't believe they will make this stage programmable in the foreseeable future.
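
For context, the per-render-target part looks roughly like this on the API side; the per-buffer enable is OpenGL 3.0, while separate blend functions per draw buffer come from ARB_draw_buffers_blend (core in OpenGL 4.0). A rough, untested sketch:

/* Sketch: configuring the fixed-function blend units per render target.
 * Indices refer to the draw buffers of the currently bound FBO. */
#include <GL/glew.h>

void setup_per_target_blending(void)
{
    /* Draw buffer 0: classic alpha blending. */
    glEnablei(GL_BLEND, 0);
    glBlendFunci(0, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    /* Draw buffer 1: additive accumulation. */
    glEnablei(GL_BLEND, 1);
    glBlendFunci(1, GL_ONE, GL_ONE);

    /* Draw buffer 2: no blending, plain overwrite. */
    glDisablei(GL_BLEND, 2);
}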

skynet
12-10-2009, 04:16 PM
IMHO there is no need for a separate type of shader anyway. Once the hardware can do it, we'll just get read-modify-write access to the framebuffer in the fragment shader.
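
Purely as a sketch of what that might look like from GLSL: the inout fragment output below is hypothetical, nothing in GLSL as it stands exposes it.

/* Sketch: hypothetical read-modify-write of the framebuffer from the
 * fragment shader. The "inout" qualifier on a fragment output is invented
 * here for illustration and is not valid GLSL as specified. */
static const char *blend_in_shader_src =
    "#version 150\n"
    "in vec4 srcColor;\n"
    "/* hypothetical: declared inout so the previous framebuffer value\n"
    "   can be read before being overwritten */\n"
    "inout vec4 fragColor;\n"
    "void main() {\n"
    "    /* custom blend done in the shader: premultiplied-alpha over */\n"
    "    fragColor = srcColor + (1.0 - srcColor.a) * fragColor;\n"
    "}\n";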

Groovounet
12-10-2009, 04:46 PM
I actually think it would be really useful to have this stage programmable, especially with deferred rendering: working on pixels rather than fragments, and with a dedicated architecture the graphics card could save the G-buffer writes and reads. It might be a dream, though: Microsoft hasn't shown any interest in this, and nVidia would rather waste transistors computing doubles at half the rate of floats. Maybe ATI, but with their small-chip policy I'm not sure they would be fans of the big cache this idea might require.
Well still a great idea for tiled based GPUs :D
Dream dream dream!

Alfonse Reinheart
12-10-2009, 05:20 PM
Once the hardware can do it, we'll just get read-modify-write access to the framebuffer in the fragment shader.

Unless of course you want to be able to separate how fragments get blended from the rest of fragment computation, so that you can use the same fragment shader with different blend shaders.

niko
12-11-2009, 01:57 AM
IMHO there is no need for a separate type of shader anyway. Once the hardware can do it, we'll just get read-modify-write access to the framebuffer in the fragment shader.

Well, at least on the project I'm currently working on, that would actually be a bit more practical than having separate shader programs for blending.

/N

Brolingstanz
12-11-2009, 09:19 AM
I think the point is that once you can perform arbitrary read/write operations on generic buffers, a blend stage will make about as much sense as a special stage to perform... arbitrary read/write operations on generic buffers.

Btw you can already do this sort of thing on SM5 hw.
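
For example, on SM5-class hardware with image load/store (exposed by ARB_shader_image_load_store), a fragment shader can read, modify and write an image directly. A rough sketch; note it is completely unsynchronized, so overlapping fragments would race without further care, and the name colorImage and the binding are made up for illustration.

/* Sketch: arbitrary read/modify/write of an image from a fragment shader
 * using ARB_shader_image_load_store (here via GLSL 4.20 syntax). */
static const char *rmw_image_shader_src =
    "#version 420\n"
    "layout(rgba8, binding = 0) coherent uniform image2D colorImage;\n"
    "in vec4 srcColor;\n"
    "void main() {\n"
    "    ivec2 coord = ivec2(gl_FragCoord.xy);\n"
    "    vec4 dst = imageLoad(colorImage, coord);              /* read */\n"
    "    vec4 blended = srcColor + (1.0 - srcColor.a) * dst;   /* modify */\n"
    "    imageStore(colorImage, coord, blended);               /* write */\n"
    "}\n";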