Texture Sampler and Texture Cache

Hi,

Suppose I have to access the same texture at different locations (better to say: regions) inside a shader. Would it be faster to use just one sampler, or one sampler for each sampled region?
Are samplers and the texture cache somehow coupled or completely independent from each other?

Are samplers and the texture cache somehow coupled
No. You may use a sampler any number of times you want, up to an implementation-defined limit on the number of samplers (Radeon cards only let you have 4).

Radeon cards only let you have 4.
This is true on the Radeon 9500 through Xxxx series; the X1K series allows a much larger number. However, this assumes you’re using GLSL. AFAIK, the drivers for assembly shaders still only allow 4.

Also, texture cache behavior is very implementation-dependent. On NVidia GeForce 6/7 cards, it appears to me that there is a texture cache per sampler: for the GeForce 6, the cache appears to be 8K, and for the GeForce 7, 16K. On Radeon X1K cards, I can’t tell how big the cache is or how it’s structured, because the cards seem very good at hiding memory latency. That’s probably a byproduct of how they schedule their shader units.

Kevin B

Keep in mind that there’s a texture-units-per-pipeline factor, and it’s usually 1 because that makes the GPU more flexible.
I’d stick to standard approaches if you want to benefit from driver/GPU optimizations :)

Originally posted by skynet:

Are samplers and the texture cache somehow coupled or completely independent from each other?

Given the great addressing flexibility of current shaders and the small size of texture caches on current hardware, it is likely that the cache is shared by all active textures and samplers.

So, to sum it up:

Since the texture cache is shared by all samplers, it doesn’t matter how many samplers are used (only the locality of their accesses matters).

In my case it’s better to stick with one sampler that gets used 4 times to sample all 4 regions, because it’s easier to code/manage.

Right?
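
For illustration, a minimal GLSL sketch of the one-sampler approach (the four region offsets and the final combine are made up; assume the regions are the four quadrants of the texture):

uniform sampler2D tex;  // one sampler, reused for all four regions

void main()
{
    vec2 uv = gl_TexCoord[0].st * 0.5;             // scale into one quadrant
    vec4 a = texture2D(tex, uv);                   // lower-left region
    vec4 b = texture2D(tex, uv + vec2(0.5, 0.0));  // lower-right region
    vec4 c = texture2D(tex, uv + vec2(0.0, 0.5));  // upper-left region
    vec4 d = texture2D(tex, uv + vec2(0.5, 0.5));  // upper-right region
    gl_FragColor = 0.25 * (a + b + c + d);         // arbitrary combine
}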

I’m a bit curious about why there’s a binding of the sampler and texture at all. Why don’t we have shader entry points like:

sample2D(samplerStateObj, texObj, texCoord);

That is to say, why couple the texture and sampler state, as is currently the practice? Not that there’s anything wrong with that, mind you, just a bit curious.
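
Spelled out as a purely hypothetical GLSL fragment shader (the samplerState and texture2Dobj types are invented here, and sample2D does not exist, so none of this compiles today):

uniform samplerState samplerStateObj;  // hypothetical: filter/wrap state only
uniform texture2Dobj texObj;           // hypothetical: just the image data

void main()
{
    // Decoupled form: sampling state and image passed separately,
    // instead of the fused sampler2D we have today.
    gl_FragColor = sample2D(samplerStateObj, texObj, gl_TexCoord[0].st);
}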

Originally posted by skynet:

Right?

I think so.

Originally posted by Flavious:

That is to say, why couple the texture and sampler state, as is currently the practice?

Probably for historical, backward-compatibility, and API-cleanliness reasons.

The first OGL version did not have texture objects at all. Texture objects were added in version 1.1 by defining a currently bound texture on which all the older operations operate. Support for multiple texture units followed in 1.2.1. So it is likely that at the time 1.1 was created, binding the texture object and the sampling state together was a reasonable thing to do.

When GLSL was added, reusing the existing API, instead of creating completely new entry points to do the same work on separate sampler state objects, reduced the cost and complexity of the implementation.

Before shaders were introduced, there was always a one-to-one mapping between texture images, texture units, combiner stages, and texture coordinate sets, so the API did not need to distinguish between them.

In Longs Peak this will be corrected. There is going to be an image object for the texture data itself and a separate sampler state object for the sampling parameters. Those two will be bound to the sampler uniform directly, instead of having the texture stage in between.

Yes, that makes perfect sense.

The reason I ask is that I’ve been tinkering with HLSL10, which is riddled with new, object-like syntactic sugar. You can sample a texture in a shader like this:

myTextureObj.Sample(mySamplerState, myTexCoord);

which led me to believe for a moment that there was something new under the hood in hardware, but I now think that’s unlikely to be the case. Given two different sampler states, one could simply create two different sampler state objects bound to the same texture behind the scenes (not that you’d want to).

So I’m assuming that the binding of these parameters will take place in the application, not in the shader itself, via some as-yet-unseen additions to GLSL in Mt. Evans. Or perhaps this is the stuff of glFX (e.g. defining sampler states, etc.) ;)

Cheers

So I’m assuming that the binding of these parameters will take place in the application, not in the shader itself, via some as-yet-unseen additions to GLSL in Mt. Evans.
No.

It’s been pretty clearly described how this dichotomy works in Longs Peak.

An image is an image. You can use it for whatever you want.

A sampler is another kind of object that is fundamentally separate from an image or a shader. A sampler can be bound to a particular named sampler in a shader.

An image can also be bound to a particular named sampler in a shader. Thus, when the named sampler in the shader is accessed, it will use the parameters from the bound sampler object to access the bound image.
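
Shader-side, nothing would need to change under that model. A sketch, assuming plain GLSL 1.x (the pairing described in the comments is my reading of the description above, not a published Longs Peak API):

uniform sampler2D diffuse;  // the shader still just sees a named sampler

void main()
{
    // Under the Longs Peak model, the application would bind BOTH an
    // image object and a sampler (state) object to "diffuse", instead
    // of a texture object that carries its own filtering state.
    gl_FragColor = texture2D(diffuse, gl_TexCoord[0].st);
}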