future of shaders

I would simply like to know whether some of you expect the future of shaders to become more or less like RenderMan’s shaders.

Currently we have:

. vertex shader
. fragment shader
(. geometry shader)

Can we expect them to become more like this:

. light source shaders
. surface shaders
. displacement shaders
. volume shaders
. transformation shaders
. imager shaders

If I remember correctly, some people here (years ago) thought vertex and fragment shaders would be enough to make the world go round. But with geometry shaders, vendors now give us an extra degree of granularity. So maybe future shaders will split vertex and fragment shaders into several parts in order to look more like RenderMan’s?

What do you think of this?

I think the only new shader in the pipeline is the blend shader (which takes over the functionality of glBlendFunc); other than that I can only see perhaps an object shader and maybe other extra stuff like ray testing.
But that’s about it; it’s hard to add more stuff without changing the rendering method from rasterising to something else.
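To make that concrete, a blend shader might look something like the sketch below (entirely hypothetical; the blend() entry point and the src/dst inputs are invented, since no such stage exists):

// Hypothetical blend shader reimplementing classic alpha blending,
// i.e. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
// 'src' is the incoming fragment, 'dst' the current framebuffer value.
vec4 blend(vec4 src, vec4 dst)
{
    return mix(dst, src, src.a);  // dst * (1 - src.a) + src * src.a
}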

About RenderMan’s shaders: no, that won’t happen. It works for RenderMan but perhaps not that well for a GPU.
You could possibly simulate it by merging code at compile time, but I wouldn’t recommend doing that too much.
The reason GPUs went the way they did is that it’s more flexible and can be optimized a lot more than RenderMan’s shaders.
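Merging code at compile time works because glShaderSource accepts several source strings and concatenates them before compilation. A minimal sketch, assuming a hypothetical pluggable surface() function:

// --- source string 1: a pluggable "surface shader" variant ---
// (hypothetical example; swap this string out per material)
vec4 surface(vec2 coord)
{
    return vec4(coord, 0.0, 1.0);  // simple procedural pattern
}

// --- source string 2: a fixed main() shared by all variants ---
varying vec2 texcoord;
void main()
{
    gl_FragColor = surface(texcoord);
}

Pass both strings in a single glShaderSource call and the driver compiles them as one fragment shader.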

I’d love to see something like a pre-vertex shader where you have access to the index value of a vertex that is about to be sent to the graphics card. There you could decide not to send it, index into the vertex buffer and send a different one, or actually modify the values in the vertex buffer. That would be awesome!

Originally posted by soconne:
I’d love to see something like a pre-vertex shader where you have access to the index value of a vertex that is about to be sent to the graphics card. There you could decide not to send it, index into the vertex buffer and send a different one, or actually modify the values in the vertex buffer.
The new GL_EXT_gpu_shader4 and related extensions give the vertex shader the ability to determine the index of the current vertex within the vertex array, and also the ability to sample from various buffers, so it might be possible to do what you need.
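Roughly like this (a sketch, assuming GL_EXT_gpu_shader4 plus buffer textures from GL_EXT_texture_buffer_object; the 'positions' uniform is an assumption about how you would bind your data):

#extension GL_EXT_gpu_shader4 : require

// Hypothetical buffer texture holding replacement positions.
uniform samplerBuffer positions;

void main()
{
    // gl_VertexID is the index of the current vertex within the
    // arrays, so the shader can look up arbitrary data keyed on it.
    vec4 pos = texelFetchBuffer(positions, gl_VertexID);
    gl_Position = gl_ModelViewProjectionMatrix * pos;
}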

What do you think of this?
I think it has nothing to do with the hardware.

You can build a shading system as a layer on top of GL that allows you to talk about semantic shaders of the kind you describe. So OpenGL doesn’t really need to implement them; better to have the IHVs implement an appropriate hardware abstraction than spend their time doing your work for you.

And personally, I don’t like this particular breakdown of shader functionality.

I could use an antialiasing shader and a blend shader too!

Programmable blending and AA modes would be nice (doesn’t programmable AA exist already anyway?).

Besides that I’m with Korval; I’m not liking the breakdown either.

So it seems safe to say that the current shaders (VS, FS, GS) will remain, at least for a certain amount of time, and that we can expect other shaders: blending (already quite available in fixed function) and AA.

I’m not in favor of further splitting up the shaders either. But as you all know, things can evolve quite quickly and can even change abruptly (cf. new releases of GL). So I just wanted some reassurance on that point.

Thanks.

I don’t think there will ever be a separate AA shader; some AA stuff will make it into the fragment shader and possibly into the blend shader, but not as a stand-alone shader.

I guess we can expect shaders in every part of the pipeline where data somehow changes in appearance. Vertex, fragment and geometry shaders already cover the most “important” parts of the pipeline: vertex processing, primitive assembly and fragment processing. I would therefore also expect something like texture filter shaders (most likely implemented through fragment shaders), which 3Dlabs proposed as image format shaders (image read/write), and blend stage shaders… What do we have left in the pipeline?
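For illustration, this is roughly what a custom texture filter looks like when emulated in a fragment shader today: a manual bilinear filter over a GL_NEAREST texture (a sketch; the tex and texSize uniforms are assumptions supplied by the application):

uniform sampler2D tex;    // sampled with GL_NEAREST filtering
uniform vec2 texSize;     // texture dimensions in texels

// Manual bilinear filtering: fetch the four surrounding texels
// and blend them by the fractional texel position.
vec4 bilinear(vec2 uv)
{
    vec2 st = uv * texSize - 0.5;
    vec2 i = floor(st);
    vec2 f = st - i;
    vec4 a = texture2D(tex, (i + vec2(0.5, 0.5)) / texSize);
    vec4 b = texture2D(tex, (i + vec2(1.5, 0.5)) / texSize);
    vec4 c = texture2D(tex, (i + vec2(0.5, 1.5)) / texSize);
    vec4 d = texture2D(tex, (i + vec2(1.5, 1.5)) / texSize);
    return mix(mix(a, b, f.x), mix(c, d, f.x), f.y);
}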

How distant a future are we talking about?

If the discussion is focused on the near term, I think Zengar summed it up nicely. For the distant future, I envision sublime coolness, though I’m not really sure what that means.

I was speaking about 40 years from now, just before I retire from my profession… :slight_smile: No, I’m just kidding. I was asking about the fairly near future (the next few years or so).

In that case I’ll respectfully reserve my dubious prognostications for another occasion.

I can see at least 3 more shaders beyond the current 3.

Texture shaders: replace an image-based texture, allowing procedural textures to be created with ease, e.g.

// simple 2D texture shader that is completely white
vec4 main(vec2 coord) { return vec4(1.0); }

Sample shaders: hinted at in the OpenGL BOF a few months ago (or somewhere at least; I can’t find it at the moment). Replace alpha testing, depth testing/writing, stencil testing, etc.
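As a rough sketch of what such a stage might look like (entirely hypothetical; the signature and inputs are invented), a sample shader reimplementing the fixed-function alpha and depth tests could be:

// Hypothetical sample shader: returns whether the fragment survives.
// Reimplements glAlphaFunc(GL_GREATER, 0.5) plus a GL_LEQUAL depth test.
bool main(vec4 color, float fragDepth, float storedDepth)
{
    if (color.a <= 0.5)
        return false;                 // alpha test
    return fragDepth <= storedDepth;  // depth test
}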

Blend shaders: replace alpha blending/blend ops.

Regards
elFarto

Texture shaders: replace an image-based texture, allowing procedural textures to be created with ease
And what stops you from just writing something like this in the fragment shader?

vec4 my_proc_texture(vec2 coord) {
    return vec4(1.0);
}

Originally posted by Overmind:
And what stops you from just writing something like this in the fragment shader?

Nothing. However, if the number of such procedural textures is high and they are used by several shaders (e.g. for various light types), the number of shader combinations that need to be compiled increases very fast (five procedural textures used with four light types already means twenty compiled programs), and until GLSL gets the ability to retrieve and re-upload a compiled shader, this becomes impractical even for relatively small numbers.

Originally posted by Komat:
[quote]Originally posted by Overmind:
And what stops you from just writing something like this in the fragment shader?

Nothing. However, if the number of such procedural textures is high and they are used by several shaders (e.g. for various light types), the number of shader combinations that need to be compiled increases very fast, and until GLSL gets the ability to retrieve and re-upload a compiled shader, this becomes impractical even for relatively small numbers.
[/quote]
What he said; also, the hardware could possibly cache/optimise that shader better than a fragment shader.

Regards
elFarto

Why would the hardware cache/optimize a texture shader better than a fragment shader?

Ah, something else just came to mind (after I read the discussion about index buffer offsets) :slight_smile: One rather useful feature would be a “vertex source” shader: a unit that simply generates vertices. It would have full read access to memory (buffer objects) and would replace the vertex array functionality.
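Perhaps something along these lines (completely hypothetical; the emitSourceVertex() function and the gl_SourceVertexID built-in are invented purely to illustrate the idea):

// Hypothetical "vertex source" shader: run once per output vertex,
// it fetches its own data from buffer objects instead of relying
// on fixed-function vertex arrays.
uniform isamplerBuffer indices;   // raw index buffer
uniform samplerBuffer positions;  // raw vertex positions

void main()
{
    int i = texelFetchBuffer(indices, gl_SourceVertexID).x;  // invented built-in
    emitSourceVertex(texelFetchBuffer(positions, i));        // invented function
}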

I guess fragment shaders can already do almost all of that. But something is still unclear to me: a full texture shader would allow better code reusability than the current fragment shader does. But maybe that alone isn’t sufficient for texture shaders to exist in their own right?

About the vertex source shader, I’m surely not the best person to speak about it, but I don’t really see how it could work, nor why it would help. From my point of view the CPU side is best for such work (if I understood what you meant correctly). Otherwise, things like BSPs would simply be of no use, wouldn’t they?