order independent demo

I succeeded in compiling the nvidia demo for hardware-assisted rendering of …

There is a slight problem: the “good” image doesn’t look right. Then I noticed the console displaying an error message:
“Shader stage 3 is inconsistent!”

The code that generates this is
GLint consistent;
// Check whether texture shader stage 3 (GL_TEXTURE3) is consistent, i.e. usable
glActiveTexture(GL_TEXTURE3);
glGetTexEnviv(GL_TEXTURE_SHADER_NV, GL_SHADER_CONSISTENT_NV, &consistent);
if (consistent == GL_FALSE)
    cerr << "Shader stage 3 is inconsistent!" << endl;

I haven’t completely absorbed the code and the technique behind it, but I don’t think this error should be occurring.

Can someone enlighten me about this error message and why it occurs?

I have emulation turned on because I don’t have support for NV_texture_shader (and a few other extensions) on my GeForce2.

Another question: has this method been implemented on ATI-based cards?
NV_texture_shader has a special operation that overrides the GL pipeline for computing and writing the fragment’s z value, if I understood correctly. How can the same be achieved on ATI?

[This message has been edited by V-man (edited 01-13-2003).]

The 8500 could do it, but not in OpenGL (no support for depth replace); the 9500/9700 can do it no problem, though, because ARB_fragment_program allows depth replacement.

In fact, I was thinking of making a demo of it, but I just don’t think it’s really worth it (as I recall, this method is REALLY slow…).
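For what it’s worth, depth output in ARB_fragment_program is just a write to result.depth, along these lines (a rough sketch of mine, not code from the demo; it assumes a depth value is already stored in a 2D texture on unit 0, and extension loading / error checking are omitted):

// Sketch: replace the fragment's depth with a value fetched from a texture.
static const char *depthReplaceFp =
    "!!ARBfp1.0\n"
    "TEMP d;\n"
    "TEX d, fragment.texcoord[0], texture[0], 2D;\n" // fetch the stored depth value
    "MOV result.depth.z, d.x;\n"                     // write it as this fragment's depth
    "MOV result.color, fragment.color;\n"
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(depthReplaceFp), depthReplaceFp);
glEnable(GL_FRAGMENT_PROGRAM_ARB);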

Depth replace should be avoided if possible. It will disable any kind of HyperZ / occlusion culling optimisations, so it’ll significantly reduce rendering speed.

Humus is right: depth replace isn’t necessary anyway, because you can simply use your fragment’s window coordinates to look up the secondary depth value. No texture coordinate interpolation is needed, so there are no invariance issues.
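Roughly, one peeling pass looks something like this (just a sketch, not code from an actual demo; it assumes the previous layer’s depth has been copied into a texture rectangle on unit 0 so the fragment’s window position can index it directly, and it rejects the previous layer itself with a small depth bias):

// Sketch only: keep a fragment only if it lies behind the previously
// peeled layer.  Load with glProgramStringARB() as usual.
static const char *peelFp =
    "!!ARBfp1.0\n"
    "PARAM eps = { 0.00001, 0.0, 0.0, 0.0 };\n"
    "TEMP prev, diff;\n"
    "TEX prev, fragment.position, texture[0], RECT;\n" // previous layer's depth at this pixel
    "SUB diff.x, fragment.position.z, prev.x;\n"       // distance behind the previous layer
    "SUB diff.x, diff.x, eps.x;\n"                     // bias so the previous layer itself is rejected
    "KIL diff.x;\n"                                    // kill fragments at or in front of it
    "MOV result.color, fragment.color;\n"
    "END\n";

Combined with the normal GL_LESS depth test, each pass then extracts the next-nearest layer.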

I’ve also done it without using any pixel shaders at all, by varying the polygon offset for each pass. Polygon offset seems far too unreliable to get consistently good results on different cards/drivers, though.

– Tom

Originally posted by Tom Nuydens:
[b]I’ve also done it without using any pixel shaders at all, by varying the polygon offset for each pass. Polygon offset seems far too unreliable to get consistently good results on different cards/drivers, though.

– Tom[/b]

That may be because of the number of depth bits. I think the Red Book recommends some way of figuring out the r value used by polygon offset.

Anyway, I don’t want to use polygon offset in any way.

Personally, I think it sucks not having an easy solution to this polygon sorting issue. It’s entirely a graphics issue, so why waste CPU time sorting polygons?

So no one has a demo of alternative methods?

Originally posted by V-man:
So no one has a demo of alternative methods?

I certainly don’t know of any other hardware-based methods. Going forward, I think fragment program depth peeling is definitely the way to go.

Going backwards, it was less suitable for DX8-generation cards because you needed three texture units just to do the depth replace thing, which didn’t leave you much room for interesting shading.

The DX9-class cards can do it without using depth replace, so you just need one texture unit for the depth map. You can combine it with occlusion query to dynamically balance the number of “peels”.
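Something like this, roughly (an untested sketch using ARB_occlusion_query; renderPeelPass() and blendLayer() are placeholders for your own drawing/compositing code, and MAX_LAYERS is just a safety cap):

// Sketch: keep peeling until a pass produces no visible fragments.
GLuint query;
glGenQueriesARB(1, &query);

for (int layer = 0; layer < MAX_LAYERS; ++layer)
{
    glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
    renderPeelPass(layer);   // placeholder: draw the scene, peeling against the previous layer's depth
    glEndQueryARB(GL_SAMPLES_PASSED_ARB);

    GLuint samples = 0;
    glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &samples); // waits for the result
    if (samples == 0)        // nothing left behind the last layer, so stop
        break;

    blendLayer(layer);       // placeholder: composite this layer into the final image
}

glDeleteQueriesARB(1, &query);

Reading GL_QUERY_RESULT_ARB synchronously like that stalls the pipeline, but it keeps the idea simple.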

– Tom

Tom,

yes, unfortunately the method requires a significant number of texture units. The nvidia demo actually uses all four, and like I said, I was getting an error message about the fourth one being inconsistent, plus GL errors, plus the blending didn’t look right.

Now, when I tried “Force Software” with NVemulate, it was still giving error messages, but for some reason the blending looks correct (I’m not sure, because I can’t rotate the scene to get a better perspective since software mode is extremely slow; I think I’m getting one frame rendered every 10 seconds in a 500 x 200 window).