Edge-on polys and Texture Stacks

Hi all, I’m new here, please bear with me.

I have a question that Clusty and Google haven't been able to answer so far, which is this:

If a texture-mapped polygon is drawn edge-on - that is, with its plane parallel to the viewing direction and perpendicular to the view plane - so that in theory it is 'infinitely thin', how does modern texturing hardware deal with this case?

I.e. does the polygon disappear (not drawn), or is it drawn (presumably as a thin line)? And if it is drawn, how is the clockwise/anticlockwise winding 'sense' of the polygon used for texturing in this case - is it honoured?

I ask because I am researching real-time volume rendering using 2D and 3D textures (I have done a lot more graphics reading and thinking than actual programming, so bear with me).
Even though some modern hardware is starting to support 3D textures, I am not yet convinced that 2D textures are as bad as they are made out to be.

Specifically, I am not convinced that it is necessary to store 3 copies of the texture stack - one for each of the major axes - as I think this requirement is based on the needs of the good folks trying to render giant datasets (mainly for medical imaging, it would appear).

If the stack is sufficiently dense (effectively making it a cell decomposition, while exploiting the hardware's ability to draw textured quads very, very fast), then surely it can be rendered from any angle, provided the sorting order and clockwise/anticlockwise texturing sense are worked out prior to drawing. The one proviso is that when the stack is at right-angles to the viewing direction (so its slices are edge-on to the viewer), the quads that hit that case are actually drawn, as thin lines, and don't disappear.
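To make the idea concrete, here is a rough, untested sketch of what I have in mind for a single Z-aligned stack. The function name, the tex_id array, and the [-0.5, 0.5] cube are all just my own assumptions for illustration:

```c
#include <GL/gl.h>

/* Draw one Z-aligned stack of textured slice quads.  view_z is the
 * z component of the viewing direction in the stack's model space;
 * its sign decides the back-to-front draw order.  When view_z is
 * exactly 0 the quads are edge-on - the case I'm asking about. */
void draw_stack(const GLuint *tex_id, int num_slices, float view_z)
{
    int i;

    glDisable(GL_CULL_FACE);   /* apparent winding flips when viewed from behind */
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (i = 0; i < num_slices; ++i) {
        /* Looking toward +Z the far slices have the largest z, so
         * walk the stack from the far end for back-to-front blending. */
        int s = (view_z > 0.0f) ? (num_slices - 1 - i) : i;
        float z = -0.5f + (float)s / (float)(num_slices - 1);

        glBindTexture(GL_TEXTURE_2D, tex_id[s]);
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, z);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, z);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 0.5f,  0.5f, z);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-0.5f,  0.5f, z);
        glEnd();
    }

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
}
```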

Can anyone help with an answer to this, e.g. regarding the OpenGL spec and/or NVIDIA, ATI, or other chipsets?

Have I made myself clear or muddy? :)

Best Regards
Jon

I didn't look into the spec, so I might be wrong, but I don't think polygons that get projected down to a single line are rasterized at all (texturing would be quite difficult if, for example, alpha testing were enabled).

Edge-on polygons have zero area when projected and are not drawn; the rasterizer only generates fragments for pixels whose sample points fall inside the projected area, and a zero-area polygon covers none, so they certainly don't generate any pixels.

As a polygon approaches edge-on, MIP mapping can be used to reduce the resolution of the texture sampling, using the texture-coordinate derivatives to calculate the desired level of detail. This can blur the texture excessively if, for example, one axis derivative is much larger than the other; anisotropic filtering is used to correct that blurring.
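For reference, the usual OpenGL 1.x setup for this: gluBuild2DMipmaps builds the full level chain and GL_LINEAR_MIPMAP_LINEAR gives trilinear filtering between the two nearest levels. The function wrapper and parameter names are just for illustration:

```c
#include <GL/glu.h>

/* Minimal trilinear mipmapping setup for the currently bound 2D
 * texture; width, height and pixels are assumed to come from the
 * caller's image loading code. */
void setup_trilinear(int width, int height, const void *pixels)
{
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                      GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```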

RIP mapping is an old method that stores extra pre-filtered copies of the texture, downsampled independently along each axis. It only works well when the anisotropy is axis-aligned; it still fails and blurs for arbitrary orientations. Generally it isn't used, and anisotropic filtering using multiple probes is the most popular method.

Google some of the terms in this post.

Anisotropic filtering is probably what you’re looking for.

http://oss.sgi.com/projects/ogl-sample/registry/EXT/texture_filter_anisotropic.txt
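A minimal way to use it, assuming the usual extension-string check and an already-bound 2D texture. The token #defines come from the spec linked above, and the function name is my own:

```c
#include <string.h>
#include <GL/gl.h>

/* Token values from the EXT_texture_filter_anisotropic spec, in case
 * the local headers don't define them. */
#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT     0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF
#endif

/* Enable the maximum supported anisotropy on the currently bound
 * 2D texture, if the extension is present. */
void enable_aniso(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    float max_aniso = 1.0f;

    if (ext && strstr(ext, "GL_EXT_texture_filter_anisotropic")) {
        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT,
                        max_aniso);
    }
}
```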

If anisotropic isn't good enough, try a negative texture LOD bias
(slower, but it can give a sharper image).
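E.g. something like this via the EXT_texture_lod_bias extension (untested; the function wrapper is just an example, and your driver has to expose the extension):

```c
#include <GL/gl.h>

/* Token values from the EXT_texture_lod_bias spec, in case the local
 * headers don't define them. */
#ifndef GL_TEXTURE_FILTER_CONTROL_EXT
#define GL_TEXTURE_FILTER_CONTROL_EXT 0x8500
#define GL_TEXTURE_LOD_BIAS_EXT       0x8501
#endif

/* Bias mip-level selection on the active texture unit; negative values
 * pick finer (sharper) levels, positive values pick blurrier ones. */
void set_lod_bias(float bias)
{
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, bias);
}
```

Something small like set_lod_bias(-0.5f) is probably where you'd start; large negative values alias badly, as noted further down the thread.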

Thanks heaps, folks - helpful stuff.

Jon

If you mean LOD bias, sharpening actually requires a negative bias (a positive bias selects blurrier, lower-resolution levels), and it isn't higher quality: it aliases like crazy and is slow because it hoses your texture cache. The key is to get hardware that supports a high maximum anisotropic filtering ratio for best quality, e.g. 8:1 should be pretty spiffy.

Just forcing anisotropic filtering in the driver control panel will also do this trivially, since the driver can override any application settings.