Technology - TMUs, what do they do

Does anybody know precisely what the TMUs do, what variables they hold?

Are they mini-CPUs that fetch a texture (over a 256-bit bus or something?) and do some computation based on parameters and the texture environment?

In that case, what part of the GPU tells these TMUs what to fetch? Which mipmap? Is anisotropy done in the TMUs?
Does the result go to the back buffer, or does the GPU do the rasterizing with the result of the TMU series?

V-man

I would guess this varies between hardware implementations.

Typically, I’d assume there’s one or more “texture fetch” units, which feed into one or more “texture filtering” units, whose outputs in turn feed one or more “fragment shading” units. Texture fetch is where you’d put your caching of swizzled texture data blocks. Filtering is where you take texels from the surrounding area (obtained through fetch) and spit out an actual filtered texel value. Fragment shading is where you apply combine modes etc.
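To make that concrete, here’s a rough C++ sketch of those three stages. All the type and function names are mine and purely illustrative; real hardware obviously doesn’t look like this:

#include <algorithm>
#include <cmath>

struct RGBA { float r, g, b, a; };

// Hypothetical texture image: a flat array of texels plus dimensions.
struct Texture {
    const RGBA* texels;
    int width, height;
};

// Stage 1: texture fetch -- pull one texel out of (possibly swizzled,
// cached) texture memory. Here it is just a clamped array lookup.
RGBA fetchTexel(const Texture& tex, int u, int v) {
    u = std::clamp(u, 0, tex.width - 1);
    v = std::clamp(v, 0, tex.height - 1);
    return tex.texels[v * tex.width + u];
}

RGBA lerp(const RGBA& a, const RGBA& b, float t) {
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

// Stage 2: texture filtering -- bilinear blend of the 2x2 texel
// neighborhood around a fractional texel coordinate.
RGBA filterBilinear(const Texture& tex, float u, float v) {
    float x = u - 0.5f, y = v - 0.5f;   // texel centers sit at .5
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;
    RGBA t00 = fetchTexel(tex, x0,     y0);
    RGBA t10 = fetchTexel(tex, x0 + 1, y0);
    RGBA t01 = fetchTexel(tex, x0,     y0 + 1);
    RGBA t11 = fetchTexel(tex, x0 + 1, y0 + 1);
    return lerp(lerp(t00, t10, fx), lerp(t01, t11, fx), fy);
}

// Stage 3: fragment shading -- apply a combine mode (GL_MODULATE here)
// between the filtered texel and the interpolated fragment color.
RGBA shadeModulate(const RGBA& texel, const RGBA& fragColor) {
    return { texel.r * fragColor.r, texel.g * fragColor.g,
             texel.b * fragColor.b, texel.a * fragColor.a };
}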

Before the texture fetch, there have to be “interpolators” whose state is set up based on the texture coordinates and possible anisotropy values from the vertex processor; these are what tell each texture fetch unit which texels to fetch and the filtering unit how to blend them. The state of the interpolators would be the du/dv for your pixel stride at the current scanline, and probably some number of derivatives thereof to do perspective-correct interpolation.
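And a guess at what part of that interpolator state boils down to for mipmap selection: take the texel-space footprint of one pixel step and turn it into a (fractional) level index. The derivative inputs here are assumed, not from any spec:

#include <algorithm>
#include <cmath>

// Pick a (fractional) mipmap level from the texel-space footprint of one
// pixel step. dudx/dvdx are the strides along the scanline, dudy/dvdy
// the strides between scanlines, all in texel units.
float computeLod(float dudx, float dvdx, float dudy, float dvdy) {
    float lenX = std::sqrt(dudx * dudx + dvdx * dvdx); // footprint along x
    float lenY = std::sqrt(dudy * dudy + dvdy * dvdy); // footprint along y
    float rho  = std::max(lenX, lenY);                 // larger axis wins
    return std::max(0.0f, std::log2(rho));             // 0 = base level
}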

Make sense?

Originally posted by jwatte:
I would guess this varies between hardware implementations.

Probably. Each company does its implementation without talking to the other companies, I think.

Originally posted by jwatte:
Make sense?

Not quite.

The vertex processor is what? I’m assuming you mean the transformation unit (the one that transforms vertices into window coordinates). Along with each vertex, there are texture coordinates. The texture coordinates are used to fetch regions of textures (for efficiency). Right here, some external unit might tell each fetch unit which mipmap is needed. The TMUs process the selected texture blocks and store the output in temporary memory. The GPU takes the result and rasterizes the triangle. Something in the rest of the GPU does the texture filtering. Anisotropy and perspective correction, too, are done by that something in the rest of the GPU.

One thing that remains is that multiple mipmap levels may need to be processed in the case of GL_LINEAR_MIPMAP_LINEAR. It could be that each TMU is a dual texture processor, since there would then be no difference between handling that and GL_LINEAR_MIPMAP_NEAREST and the others.

…One more issue remains. The selected mipmap level may be different for each TMU. How is the computation performed?

The specs for these extensions don’t talk about these details!

V-man

how is bilinear done? mipmapping is the same… you have the z or w coordinate and some transformation matrix for it to get the mipmap layer index (as floating point). you then blend between the two nearest layers linearly.
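a sketch of that blend, reusing the made-up Texture / filterBilinear / lerp helpers from the sketch earlier in the thread (the per-level texture array is assumed too):

// GL_LINEAR_MIPMAP_LINEAR: bilinear sample from the two nearest levels,
// then a linear blend by the fractional part of the lod. 'levels' is an
// assumed array of per-level Texture images, finest first.
RGBA filterTrilinear(const Texture* levels, int numLevels,
                     float u, float v, float lod) {
    int   l0   = std::clamp((int)std::floor(lod), 0, numLevels - 1);
    int   l1   = std::min(l0 + 1, numLevels - 1);
    float frac = std::clamp(lod - (float)l0, 0.0f, 1.0f);
    // u/v are in level-0 texels; halve them once per level stepped down.
    return lerp(filterBilinear(levels[l0], std::ldexp(u, -l0), std::ldexp(v, -l0)),
                filterBilinear(levels[l1], std::ldexp(u, -l1), std::ldexp(v, -l1)),
                frac);
}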

the specs don’t need to define how a hw implementation has to do it. just the results, and the interface.

For a single TMU it’s simple enough to design this stuff, but when you have two of them and the results need to be added together (or multiplied), how do you do it if TMU0 is working on level 3 of texture 20 and TMU1 is working on level 0 of texture 21?

There is a resolution difference, so information about the level must be used to generate the output of TMU1.
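If each TMU resolves its own filtering down to a single RGBA before the combine stage, the resolution difference would already be gone by then. A sketch of what I imagine that combine stage doing (the names and the RGBA struct come from the earlier made-up sketch, not from any spec):

#include <algorithm>

// Guesswork: by this point each TMU would have reduced its texture/level
// to one filtered RGBA for the fragment, so resolution no longer matters.
RGBA combineTmus(const RGBA& tmu0, const RGBA& tmu1, bool multiply) {
    if (multiply)  // GL_MODULATE-style second stage
        return { tmu0.r * tmu1.r, tmu0.g * tmu1.g,
                 tmu0.b * tmu1.b, tmu0.a * tmu1.a };
    // GL_ADD-style second stage, clamped to [0,1]
    return { std::min(tmu0.r + tmu1.r, 1.0f), std::min(tmu0.g + tmu1.g, 1.0f),
             std::min(tmu0.b + tmu1.b, 1.0f), std::min(tmu0.a + tmu1.a, 1.0f) };
}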

There is also the problem of clamping and repeating. I’m thinking that temporary buffers are used by the TMUs.
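For what it’s worth, here is a guess at how wrap modes could be applied to each integer texel coordinate right before the fetch:

#include <algorithm>

// Sketch: wrap/clamp one integer texel coordinate just before fetching.
// 'repeat' selects GL_REPEAT vs. clamp-to-edge behavior.
int wrapCoord(int c, int size, bool repeat) {
    if (repeat) {
        int m = c % size;             // GL_REPEAT: wrap around
        return m < 0 ? m + size : m;  // keep the result non-negative
    }
    return std::clamp(c, 0, size - 1); // clamp-to-edge behavior
}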

If those temporary buffers are preserved, then multipass hardware should be possible. Branching and looping should not be a problem.

V-man