We need your input on.....

I have an OpenGL 2.0 question for all of you. Your answers will help us iron out part of the OpenGL 2.0 spec. Here it is:

What would you do if you were able to write a fragment shader that can read the color, depth, and stencil values from the framebuffer for the fragment you are currently processing? What kind of cool fragment shaders would you be able to write, if any?

I am interested in hearing about applications for this beyond what OpenGL already gives you today with blending modes, depth testing, alpha testing, and so on.

Thanks!

Barthold
3Dlabs

One thing I have had problems with, and which I think this would solve, is when I have two scenes, both containing alpha, and I want to blend between them using the destination alpha as the blend factor. That seems impossible now, but it would be possible with this improvement…

Do you understand this very weird explanation?
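Roughly, something like this is what I mean. It is only a sketch; it assumes a hypothetical gl_FBColor built-in that exposes the destination pixel (scene A, already in the framebuffer) while scene B is being drawn:

    // Sketch only: gl_FBColor is the proposed/hypothetical readable framebuffer color.
    void main()
    {
        vec4 sceneA = gl_FBColor;                     // already in the framebuffer, alpha included
        vec4 sceneB = gl_Color;                       // whatever this pass computes for scene B
        gl_FragColor = mix(sceneB, sceneA, sceneA.a); // blend weight = destination alpha
    }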

Maybe this would help with the stuff Decimal Dave was talking about here.

Having color, depth, and stencil values accessible in fragment shaders means to me, first of all, that I wouldn’t have to worry about depending on an extension to handle some obscure blending mode that I wish to use; I can just program it myself.

Another thing that immediately comes to mind is that per-pixel volumetric fog and similar techniques would be much simpler to implement. Storing a pixel’s depth in the alpha component of the framebuffer would no longer be necessary, which in some cases could get rid of an extra pass over geometry.
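For example (just a sketch, assuming hypothetical readable gl_FBColor/gl_FBDepth built-ins and a standard perspective depth buffer in [0, 1]), a full-screen fog pass could look something like this:

    uniform float zNear;     // projection near plane (assumed known)
    uniform float zFar;      // projection far plane
    uniform float density;   // fog density
    uniform vec3  fogColor;

    void main()
    {
        // recover eye-space distance from the stored depth value
        float eyeZ = (zNear * zFar) / (zFar - gl_FBDepth * (zFar - zNear));
        float fog  = 1.0 - exp(-density * eyeZ);      // simple exponential falloff
        gl_FragColor = vec4(mix(gl_FBColor.rgb, fogColor, fog), gl_FBColor.a);
    }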

Another idea, which I think could be much more useful, is one that Tim Sweeney had and that was mentioned in the original white papers: multipass rendering could rely on data stored in the frame buffer, so that the geometry required for each pass is simply a quad over the whole viewport, and data that would otherwise have to be recalculated in a fragment shader for every pass need only be calculated once. I don't know how often multipass techniques would be required, considering all that can be done with fragment shaders, but in situations requiring many passes over geometry, it could be a very large performance win.

Finally, I think I could implement a limited single pass order-independent alpha blending scheme in fragment shaders if I had access to the frame buffer. Using only the depth buffer, I could do order independent transparency for fragments at 2 different depths in one pass. With auxiliary buffers, I think I could do more layers in one pass.

j

Another nice thing that would be possible if we could read the current color value is implementing a fast B&W, sepia, or whatever rendering without modifying the scene's shaders. Render the scene once with the normal full-color shaders, then render a quad the size of the viewport over it all and turn all the pixels grayscale, sepia, or whatever!
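Just a sketch of what that full-screen pass might look like, again assuming a hypothetical gl_FBColor built-in that exposes the pixel already in the framebuffer:

    // Post-process pass drawn as a viewport-sized quad (sketch only).
    void main()
    {
        float lum   = dot(gl_FBColor.rgb, vec3(0.299, 0.587, 0.114)); // luminance
        vec3  sepia = lum * vec3(1.0, 0.85, 0.6);                     // use vec3(lum) for plain B&W
        gl_FragColor = vec4(sepia, gl_FBColor.a);
    }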

And this (http://www.delphi3d.net/articles/viewarticle.php?article=onebit.htm) could be done without rendering to a texture…

And I have often wondered why we have so many extensions that enable the most fantastic ways to apply a texture to a polygon, yet still only have the common blending modes to apply the fragment to the buffer.

Why not also give WRITE access to it? Let people implement whatever update function they want for the stencil buffer and depth buffer.
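As a purely hypothetical sketch of what such an update function could look like, if the shader could read gl_FBDepth/gl_FBStencil and write a made-up gl_FragStencil output next to the standard gl_FragDepth:

    // Hypothetical: gl_FBDepth, gl_FBStencil and gl_FragStencil are not real built-ins.
    void main()
    {
        // custom depth rule: keep the farther of the stored and incoming depths
        gl_FragDepth = max(gl_FBDepth, gl_FragCoord.z);

        // custom stencil rule: count fragments that land very close to the stored depth
        if (abs(gl_FBDepth - gl_FragCoord.z) < 0.001)
            gl_FragStencil = gl_FBStencil + 1;
        else
            gl_FragStencil = gl_FBStencil;

        gl_FragColor = gl_Color;
    }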

I was reading the ObjectBuffer section and I was a bit disappointed that it is still very static.

For real programmability in pure OGL 2.0 I would have expected something like this:

You simply create a generic buffer with X bits.

All your generic buffers are accessible from the fragment shader for reading and writing.

Now, if you want to bypass the fragment shader and use or mimic the standard pipeline, for example, there would be hook functions like:

UseAsDepthBuffer( genericBufferHandle );
UseAsStencilBuffer( genericBufferHandle );
UseAsColorBuffer( genericBufferHandle, pixelDescription );
UseAsLeftBuffer( genericBufferHandle );
UseAsRightBuffer( genericBufferHandle );
etc…

I am not sure how complex it would be to implement this in hardware, but I feel that is true programmability.

So if someone wants to create an accumulation buffer, he simply needs to create a generic buffer, use it as a color buffer, copy things into it, and do whatever he wants.

I must admit I am not fully sure why I would use those generic buffers, but I am sure plenty of clever people will find cool uses for them.

I understand there are aux buffers for that, but why not put everything in the same basket?

[This message has been edited by Gorg (edited 03-18-2002).]

Originally posted by Gorg:
You simply create a generic buffer with X bits.

Well, for one, the color buffer is special, as it is the one that will finally be used by the RAMDAC to convert to a video signal. But OK, let's say you could specify which buffer to use as the color buffer. You'd still have the same problem for the depth buffer if you want the vendor to implement Z compression and fast Z-test optimisations transparently (or would this be programmable?). You'd have to explicitly tell OGL that "this is a depth buffer". So, if you finally add up all these constraints, you end up with what is suggested in OGL 2.0: fixed-function buffers (color, depth, stencil, accum) plus general auxiliary buffers, which I think is the right thing to do.

my suggestion:

  1. Stay with the fixed blend/alpha/depth/stencil pipeline, and leave readable gl_FBxxxx variables as an extension to GL2.0.

  2. Extend the fixed blending stage to also operate on output to AUX buffers (in the current 2.0 proposal, blending operates
    only on the color buffer).
    Consider separate blend equations for each output buffer (color, aux0, …, auxN).

  3. Include EXT_blend_func_separate in 2.0.

I could possibly implement a few special-effects features in hardware, but it depends on how we can access the fragments.

If I can access any pixel in the buffer (any number of pixels), do a few calculations, and have the result output to a certain location in the framebuffer, that would be sweet.

This is not absolutely necessary, since I don't plan on using anything like this in real time.

I think the possibility should be offered nonetheless.

V-man

One effect that I want to get from custom shaders is realistic cloud generation.

A couple of years back I read a master's paper on real-time photorealistic cloud generation on a VAX (or DEC or HP or some such dinosaur). I can track it down if anyone is interested.

The method is based on Andrew Glassner's work, and I can't quite remember exactly how the paper went. The basic approach is to create some ellipsoids and then create view-dependent fractal texture maps (I think they were Perlin-noise based, but I'm not sure), sampling cloud colour or background colour to provide the illusion of clouds. Ellipsoid surface data is used to establish the orientation of the 'cloud point', so that the bottom of the cloud can reflect sunset colours while the top of the cloud stays wispy; it is also used to perform sun lighting calculations.

So I guess what I want is to create dynamic, view-dependent Perlin noise functions in a fragment shader.
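Something along these lines is what I have in mind; this is only a sketch using a simple hash-based value noise plus a few fractal octaves, not true Perlin noise, with skyColor/cloudColor as assumed uniforms:

    float hash(vec2 p)
    {
        // cheap pseudo-random hash; any reasonable hash would do here
        return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
    }

    float valueNoise(vec2 p)
    {
        vec2 i = floor(p);
        vec2 f = fract(p);
        vec2 u = f * f * (3.0 - 2.0 * f);        // smooth interpolation weights
        float a = hash(i);
        float b = hash(i + vec2(1.0, 0.0));
        float c = hash(i + vec2(0.0, 1.0));
        float d = hash(i + vec2(1.0, 1.0));
        return mix(mix(a, b, u.x), mix(c, d, u.x), u.y);
    }

    float fbm(vec2 p)
    {
        float sum = 0.0;
        float amp = 0.5;
        for (int i = 0; i < 4; i++)              // 4 octaves of fractal detail
        {
            sum += amp * valueNoise(p);
            p   *= 2.0;
            amp *= 0.5;
        }
        return sum;
    }

    uniform vec3 skyColor;
    uniform vec3 cloudColor;

    void main()
    {
        // "view dependent" here just means the coordinates fed to fbm come from
        // whatever the application maps onto the sky geometry (texcoords here)
        float n = fbm(gl_TexCoord[0].xy * 8.0);
        gl_FragColor = vec4(mix(skyColor, cloudColor, n), 1.0);
    }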

Do you think you will be able to provide this kind of functionality? (Hope so!)

Leo

V-man:
Being able to access any pixel in the framebuffer sounds like a great idea, but there are a couple of problems with it. First of all, it means that the order in which the graphics card draws the pixels becomes important: depending on what order the pixels are drawn in, you might get a value from a pixel in the triangle you are currently drawing, instead of the pixel value from the scene just before you drew the triangle, which is probably what you wanted.

This could be fixed by having two buffers. One would contain the scene as it was just before the current triangle has been drawn, and the other could be written to using the pixel values sampled from the first buffer.

It would work, but then it would be exactly like rendering to a texture and using dependent reads to do your image processing in a separate pass, which already can be done on current hardware.

So I think that being able to access any pixel at all in the framebuffer is functionality that we won’t be seeing in OpenGL 2.0.

Which leads me to think: being able to access only the current pixel in the framebuffer from a fragment shader can also be done using render-to-texture functionality. So what benefits, besides convenience and a possible speed increase, does access to framebuffer values from fragment programs give?

j

This could be fixed by having two buffers. One would contain the scene as it was just before the current triangle has been drawn, and the other could be written to using the pixel values sampled from the first buffer.

That's just fine and dandy… right until you start talking about a tile-based rendering system, which doesn't actually draw triangle by triangle. It draws tiles, and within a tile, triangles are drawn. Unlike conventional z-buffer renderers, you are not guaranteed that the order you submit your triangles is the order in which they will be drawn.

Which is why I was saying that having two buffers is not a good solution to the problem.

j

Wasn't the question >>What kind of cool fragment shaders would you be able to write, if any?<<

* shimmer, e.g. heat haze, ghost, Predator effect
* looking through a lens, e.g. distorted glass, water
* other distortions
* line-of-sight tricks, e.g. for shadowing
* halos (i.e. glowing objects)
* advanced lens flares

I would like the ability to move towards a more formalised image composition approach.

such as

Stage 1 - Pre-process image composition

Use custom shaders to create a dynamic Star Field

Stage 2 - Render

Render 3D geometry

Stage 3 - Post-process image composition

Depth of Field image process

Stages 1 and 3 would be low-bus-bandwidth stages with high card-side processing overhead. Ideally they would be threaded off on their own, so from a CPU-usage perspective they should come at very little cost to the application's overall performance.

Originally posted by Leo:
I would like the ability to move towards a more formalised image composition approach.

Agreed!

Plus, it gives more freedom when doing multipass shaders (e.g. multiple colored shadow maps, etc…)

I don't think it's an absolutely needed feature, because most effects can be done without it, but in some cases it could allow fewer rendering passes, etc…

[This message has been edited by GPSnoopy (edited 03-22-2002).]

I could do per-pixel lighting and volumetric shadows in 2 passes… that's sort of nice enough for me…
I want a programmable pixel pipeline, and programmable framebuffer access…

Look at the advanced forums; there is the question of whether we can get (A dot B)*C + D in 2 passes on a GF2 MX.
No, we can't.
Why?
(A dot B) is calculated in the first pass and stored in the framebuffer.
Next pass: multiply the framebuffer by C, then add D to it…
That's where the blending function sits, at the end of the pipeline… and that's where a little programmable combiner should move in: a programmable color combiner (replacing alpha blending and alpha testing), possibly a programmable depth test…, and a programmable stencil buffer (some bitwise operators… it could be a second alpha buffer in fact, no one really cares about this )
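With readable framebuffer values, the second pass would collapse to something like this sketch (again assuming the proposed gl_FBColor built-in; C and D are just uniforms here):

    uniform vec3 C;
    uniform vec3 D;

    void main()
    {
        vec3 ab = gl_FBColor.rgb;                 // (A dot B) written by the first pass
        gl_FragColor = vec4(ab * C + D, 1.0);     // (A dot B) * C + D without fixed blending
    }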

Just wanted to say thank you for all the replies so far. This is useful information for us!

Barthold
3Dlabs

Obviously procedural textures, with the following properties:

A three-dimensional, generated (and regenerable) texture, infinitely detailed and sizable without change.