Fragment shader: use gl_FragCoord.z to show depth

Hey all, I’m new to GLSL. I’ve got a single quad on the screen and am aiming to do everything in the fragment shader to get max performance. Basically, I’m writing a pixel shader. I’m trying to use gl_FragCoord.z / gl_FragDepth to actually represent depth in the render. That is, I would like a pixel with a depth of 0.98 to appear higher than a pixel with a depth of 0.24.

Is this the wrong approach? Please note that I’m only writing a pixel shader – I already did everything with vertex shaders and was not happy with the performance.

My first assumption was that I’d simply have to adjust the shade based on distance, but that does not create a realistic effect. Any idea how I can use my lighting, etc., along with some extra math, to represent height/depth in just the frag shader?
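For reference, the naive version is basically just height-as-brightness. Something like this (heightMap/texCoord are placeholders for however the height actually reaches the shader):

    // Naive attempt: shade each pixel purely by its height value.
    uniform sampler2D heightMap;
    varying vec2 texCoord;

    void main()
    {
        float h = texture2D(heightMap, texCoord).r; // height in [0,1]
        gl_FragColor = vec4(vec3(h), 1.0);          // brighter = higher
        gl_FragDepth = h;                           // write it as depth too
    }

It reads as a flat gradient, not as actual relief.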

I’ve got a single quad on the screen and am aiming to do everything in the fragment shader to get max performance.

What is “everything”, and why do you think this is necessary for “max performance”?

That is, I would like a pixel with a depth of 0.98 to appear higher than a pixel with a depth of 0.24.

How do you know which fragment has a depth of 0.98 and which fragment has a depth of 0.24?

Basically, I’m rendering a single surface. Think terrain with a lot of minute detail at 1680 x 1050. The performance I got from geometry, though it looked correct, was unacceptable. Pixel shaders are much faster.

How do you know which fragment has a depth of 0.98 and which fragment has a depth of 0.24?

I’m setting uniforms from C++. I’m working on creating a texture to work as a depth map (I think that’s how you do this), so the fragment shader can look at neighbors to create a ripple effect.

So my question is, how can I use this depth to create “shadows” in my pixel shader?

Strike that – the math for the rendering is right there in the water tutorial I was using:
http://freespace.virgin.net/hugo.elias/graphics/x_water.htm
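In shader terms, the update step from that page comes out to something like this, ping-ponging between two height textures (my rough translation, untested – all the names are placeholders):

    // One simulation step from the tutorial, run over a full-screen quad.
    uniform sampler2D current;   // heights at time t
    uniform sampler2D previous;  // heights at time t-1 (the render target after the swap)
    uniform vec2 texelSize;      // 1.0 / resolution
    uniform float damping;       // e.g. 0.99

    varying vec2 texCoord;

    void main()
    {
        float sum =
            texture2D(current, texCoord + vec2(texelSize.x, 0.0)).r +
            texture2D(current, texCoord - vec2(texelSize.x, 0.0)).r +
            texture2D(current, texCoord + vec2(0.0, texelSize.y)).r +
            texture2D(current, texCoord - vec2(0.0, texelSize.y)).r;

        // new height = half the neighbour sum minus the height one step ago
        float h = sum * 0.5 - texture2D(previous, texCoord).r;
        gl_FragColor = vec4(h * damping); // swap the two textures after drawing
    }

Since the heights go negative, the target presumably has to be a float texture rather than a plain 8-bit one.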

But I’m lost on how to create this depth-texture thing.
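My best guess, pieced together from the framebuffer object docs, is roughly this (unverified – and one pair of these would be needed for the ping-pong):

    // Host-side sketch: a float texture the shader can render heights
    // into via an FBO, then bind as input on the next pass.
    // Assumes a GL 3.0-capable context with extensions loaded.
    const int width = 1680, height = 1050;
    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Float format so heights aren't clamped to [0,1]
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    // Each frame: bind one FBO as the target, the other texture as
    // input, draw the quad with the update shader, then swap roles.

Is that the right track?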

The performance I got from geometry, though it looked correct, was unacceptable. Pixel shaders are much faster.

And yet, games and other programs that use a lot of “minute detail” are perfectly fine with using geometry. Maybe you were simply doing something wrong.

So my question is, how can I use this depth to create “shadows” in my pixel shader?

You can’t. With the way you are sending data to the shader, the fragment simply does not have sufficient information. There are no normals, and without adjacency information there is no way to compute normals. And self-shadowing is right out.

Maybe you should look into doing this the way everyone else does. With, you know, actual geometry.

Ha!

http://vimeo.com/8071647

No geometry.

Hence the depth texture.
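With the neighbors sitting in a texture, normals fall out of finite differences, so lighting should be doable in the frag shader alone. Roughly (untested sketch; lightDir etc. are placeholders):

    // Light the heightfield from the texture alone: build a normal
    // from neighbouring height differences, then plain Lambert.
    uniform sampler2D heightMap;
    uniform vec2 texelSize;
    uniform vec3 lightDir;   // normalized, in the heightfield's space

    varying vec2 texCoord;

    void main()
    {
        float hl = texture2D(heightMap, texCoord - vec2(texelSize.x, 0.0)).r;
        float hr = texture2D(heightMap, texCoord + vec2(texelSize.x, 0.0)).r;
        float hd = texture2D(heightMap, texCoord - vec2(0.0, texelSize.y)).r;
        float hu = texture2D(heightMap, texCoord + vec2(0.0, texelSize.y)).r;

        // Central differences give the slope; the z term scales the relief.
        vec3 normal = normalize(vec3(hl - hr, hd - hu, 0.1));
        float diffuse = max(dot(normal, lightDir), 0.0);
        gl_FragColor = vec4(vec3(diffuse), 1.0);
    }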

http://vimeo.com/8071647

No geometry.

And no interaction with anything, and no viewing from different angles.

My point is that you haven’t said what it is you’re trying to achieve.