pixel shader question

I plan on purchasing a GF3 in the near future, and while I consider myself relatively familiar with OpenGL coding, I have yet to begin experimenting with the newer hardware features of the nVidia boards, namely the vertex and pixel shaders. It may be a bit premature, but I’m curious to know whether a pixel shader can be applied to the entire display, or whether it is limited to geometry. Ideally, I’d like to send the final rasterized image through an algorithm or something similar before the page flip.

I guess what I’m asking is whether a pixel shader can be written so that it behaves like a post-production process. Examples would be simulating over-exposed film or rendering fully coloured scenes in black and white. Traditionally, I would use a combination of billboards and masks, as well as modify the texture images themselves. I’m hoping that with the new hardware features I can develop a process that is more portable and does not require editing the underlying scene.

Of course, I may just be completely off base in my expectations, but that’s exactly why I’m posting here.

Thanks in advance,
Dave

There are texture shaders, which I’m guessing let you manipulate the textures (I haven’t tried them yet); there are vertex programs (I don’t know what a “vertex shader” is), which let you transform, light, and texture vertices however you wish; and then there are the register combiners, which let you do some general effects with the colour output.
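For example, a minimal vertex program that just reproduces the fixed-function transform looks something like this. Rough sketch off the top of my head, assuming NV_vertex_program and that you’ve already fetched the extension entry points:

```c
/* Minimal NV_vertex_program: transform the position by the tracked
   modelview-projection matrix and pass the vertex colour through.
   This mimics the fixed pipeline; your own per-vertex math goes here. */
const GLubyte prog[] =
    "!!VP1.0\n"
    "DP4 o[HPOS].x, c[0], v[OPOS];\n"
    "DP4 o[HPOS].y, c[1], v[OPOS];\n"
    "DP4 o[HPOS].z, c[2], v[OPOS];\n"
    "DP4 o[HPOS].w, c[3], v[OPOS];\n"
    "MOV o[COL0], v[COL0];\n"
    "END";

GLuint id;
glGenProgramsNV(1, &id);
glBindProgramNV(GL_VERTEX_PROGRAM_NV, id);
glLoadProgramNV(GL_VERTEX_PROGRAM_NV, id, sizeof(prog) - 1, prog);

/* Track the combined modelview-projection matrix into c[0]..c[3]. */
glTrackMatrixNV(GL_VERTEX_PROGRAM_NV, 0,
                GL_MODELVIEW_PROJECTION_NV, GL_IDENTITY_NV);
glEnable(GL_VERTEX_PROGRAM_NV);
```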

Those features basically override the fixed OpenGL pipeline, and you get to make OpenGL do what you want. You can’t render the scene and then process it: the GPU processes the scene as you send it geometry, and the end result is the same. That also means some things can’t be done, such as effects that depend on previously rendered pixels.

B&W scene can easily be done.
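For instance, instead of a plain average you can dot the colour against the usual luminance weights (0.299, 0.587, 0.114) in one general combiner stage. Untested sketch, assuming the scene colour arrives as the texel from unit 0 and the NV_register_combiners entry points are fetched:

```c
/* Greyscale via register combiners: stage 0 computes
   dot(texel, lum) into spare0, and the final combiner
   routes that grey value straight to the output. */
const GLfloat lum[4] = { 0.299f, 0.587f, 0.114f, 0.0f };

glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);
glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, lum);

/* A = texel from unit 0, B = luminance weights, spare0 = A . B */
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_CONSTANT_COLOR0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                   GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                   GL_NONE, GL_NONE,
                   GL_TRUE /* AB is a dot product */, GL_FALSE, GL_FALSE);

/* Final combiner output = A*B + (1-A)*C + D; with B = 1 and
   C = D = 0 that is just spare0, replicated across RGB. */
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                       GL_UNSIGNED_INVERT_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);

glEnable(GL_REGISTER_COMBINERS_NV);
```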

V-man

Thanks for the input.

I erroneously assumed that the terms “vertex shader” and “vertex program” were interchangeable.

After reading your response, I believe the general answer is “not really” for the effects I am interested in. It would require a render-to-texture operation and then further manipulation; in the end, the entire scene would be drawn as a single textured quad. From what I’ve read on these forums, rendering to texture does not yet seem to be a viable solution for real-time interactivity.
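For the record, the copy-based version of what I had in mind would look roughly like this (sceneTex, winW/winH, and texW/texH are placeholder names of mine; the texture has to be a power-of-two size at least as large as the window). Whether the copy is fast enough per frame is exactly the part I’m unsure about:

```c
/* One-time setup: an empty power-of-two texture big enough
   to hold the window contents (example sizes). */
GLuint sceneTex;
int winW = 640, winH = 480;
int texW = 1024, texH = 512;

glGenTextures(1, &sceneTex);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texW, texH, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* Each frame: render the scene normally, then pull the back buffer
   into the texture and redraw it as one screen-aligned quad, with
   the combiner/shader setup doing the post-processing. */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winW, winH);

GLfloat s = (GLfloat)winW / texW;   /* used portion of the texture */
GLfloat t = (GLfloat)winH / texH;

glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
glTexCoord2f(s,    0.0f); glVertex2f(1.0f, 0.0f);
glTexCoord2f(s,    t);    glVertex2f(1.0f, 1.0f);
glTexCoord2f(0.0f, t);    glVertex2f(0.0f, 1.0f);
glEnd();

glPopMatrix();
glMatrixMode(GL_PROJECTION); glPopMatrix();
glMatrixMode(GL_MODELVIEW);
```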

“B&W scene can easily be done.”
By averaging the RGB inputs of each pixel and replacing them?

All in all, I think what I really need is to do more research.

Thanks again,
Dave