Fragment program texture filtering

I’m wondering how a texture gets filtered (mipmapped) when doing a dependent read.

Actually, I don’t even understand very well how an independent read gets mipmapped. I know the basic concept, but I could never get through the maths behind it.

Thanks for enlightening me,
SeskaPeel.

Filtering settings are taken from the texture object state, as usual.

Not sure about the mipmap selection atm, but I guess interpolated vertex w is involved (just as it is for direct texture samples or fixed function texturing).

How is the mipmap lambda parameter computed for dependent texture fetches?

  RESOLUTION:  Very carefully.  NVIDIA's implementation details are
  NVIDIA proprietary, but mipmapping of dependent texture fetches
  is supported.

From the NV_texture_shader extension.
Doesn’t seem they are willing to give you the full details.

I know this isn’t really of any help, but I just love the way they answered.

Joel.

Mipmapping is taken from the partial derivatives (du, dv).
Actually, you can’t explicitly specify your own partial derivatives when sampling textures in a fragment program. At best, you can give a bias. The specification states that a new instruction that takes partial derivatives as parameters could be exposed in a future extension.
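For the independent case, the maths behind the LOD (lambda) selection is roughly what the GL spec describes: scale the screen-space texcoord derivatives into texel units, take the larger footprint, and log2 it. Here's a simplified Python sketch (isotropic footprint only, clamping simplified; real hardware differs in the details):

```python
import math

def mip_lambda(dudx, dvdx, dudy, dvdy, tex_w, tex_h, bias=0.0, max_level=0):
    """LOD parameter lambda from screen-space texcoord derivatives.
    du/dx etc. are in normalized texcoords per pixel."""
    # Scale derivatives into texel units.
    ddx = math.hypot(dudx * tex_w, dvdx * tex_h)  # footprint along x
    ddy = math.hypot(dudy * tex_w, dvdy * tex_h)  # footprint along y
    # rho = size of one pixel's footprint in texels; lambda = log2(rho) + bias
    rho = max(ddx, ddy)
    lam = math.log2(max(rho, 1e-12)) + bias
    # Clamp to the available mip levels.
    return min(max(lam, 0.0), float(max_level))
```

So for a 256x256 texture, when one screen pixel covers four texels, lambda comes out as log2(4) = 2, i.e. mip level 2. The bias the fragment program supplies is just added to lambda before clamping.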

You can specify your partials directly in NV_fragment_program and in D3D PS3.0.

Obviously you need some method of computing derivatives of dependent quantities for proper mipmapping. Finite difference is one attractive way of doing that, but different vendors are likely to implement this functionality in different ways.

Cass: Correct me if I’m wrong, but I think what you’re saying is something like this?

A typical hardware implementation for a 4-fragment pipe could conceivably be to rasterize four fragments that live in a 2x2 square. You can then get the derivatives of S and T from the differences between the coordinates of those four fragments. (This is “finite differences”).

That’s my internal model, and I’ll stick to it until it’s proven to no longer predict reality well :slight_smile:

Note that this would mean that the MIP map selection would be the same for all four pixels, so MIP map level/filtering would be selected on a per-2x2-block level, rather than on a per-pixel level.

I suppose other ways of doing this include using store-and-forward and rasterizing in some space-filling-curve manner. This would allow you to do per-pixel MIP map level selection (again, using finite differences).

[This message has been edited by jwatte (edited 10-10-2003).]
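To make the per-quad version concrete, here's a toy Python sketch of that mental model (an illustration, not any vendor's actual hardware): one forward difference per quad, reused by all four fragments, which is exactly why LOD would get picked per 2x2 block:

```python
def quad_derivatives(u):
    """One texcoord component u sampled at the four fragments of a
    2x2 quad, laid out [[u00, u01], [u10, u11]] (row, column).
    Returns a single (ddx, ddy) forward difference shared by the
    whole quad -- the coarsest version of the scheme."""
    (u00, u01), (u10, u11) = u
    ddx = u01 - u00  # horizontal difference along the top row
    ddy = u10 - u00  # vertical difference along the left column
    return ddx, ddy
```

Since this works on whatever values the four fragments computed, it handles dependent reads for free: the differences are taken on the *computed* texcoords, not the interpolated ones.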

Not exactly the same for all 2x2… the top two get the same ddx, the bottom two get the same ddx, the left two the same ddy, the right two the same ddy…

So they all get different combinations of ddx and ddy; any two of them always share one component with each other…
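In other words, something like this (again just an illustrative Python sketch, assuming the same [[u00, u01], [u10, u11]] quad layout as above):

```python
def per_pixel_derivatives(u):
    """2x2 quad: ddx is shared along each row, ddy along each column,
    so each of the four pixels gets a distinct (ddx, ddy) pair."""
    (u00, u01), (u10, u11) = u
    ddx_top = u01 - u00    # shared by the two top pixels
    ddx_bot = u11 - u10    # shared by the two bottom pixels
    ddy_left = u10 - u00   # shared by the two left pixels
    ddy_right = u11 - u01  # shared by the two right pixels
    return {
        (0, 0): (ddx_top, ddy_left),
        (0, 1): (ddx_top, ddy_right),
        (1, 0): (ddx_bot, ddy_left),
        (1, 1): (ddx_bot, ddy_right),
    }
```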

But yes, you lose some precision…

The finite differences are not really the way to do it anyway, IMHO… at least together with branching they will get messy.