dreddlox

01-17-2004, 09:03 PM

Disclaimer: This algorithm requires dedicated silicon and is virtually impossible right now without software rendering, so don't bother reading this if you're looking for something you can do in real time.

The concept is simple: instead of implementing N-patches (aka PN triangles) early in the pipeline (i.e. splitting triangles and displacing them before they even reach the graphics card), implement them as late as possible, after everything has been transformed, while the triangle is being drawn.

This can happen in 3 easy steps:

1) For each triangle: Substitute the 2D normals and positions into the N-patch equation (see http://www.gamasutra.com/features/20020715/mollerhaines_02.htm).

Make 3 more equations representing the edges of the triangle.

Differentiate all 4 equations with respect to Y and find the roots to get the maximum and minimum Y points on the screen that satisfy all 4 equations,

then pass all raster lines between these values on to the next step.

2) For each raster line: Differentiate the equations with respect to X and find their roots to locate all pixels of the triangle on that raster line, then pass them to the next step.

3) For each pixel: Place the x and y coords of the pixel into the original equation, find all values of z that satisfy it, then discard any values of z that don't satisfy all 3 edge equations. Pass the z values in back-to-front order to the pixel shader (so that alpha blending works properly).

This algorithm would mean perfect N-patches and heavily reduced vertex requirements; however, it would REQUIRE that correct and appropriate normals are passed from the vertex shader to the pixel shader.

It may even be possible on current hardware, but as I know of no open source drivers, I can't tell.

I intend to attempt this in a software renderer; if anyone knows of an open source Win32 renderer with lighting and a C/C++ raster loop, please e-mail me.
