I can appreciate aqnuep's point of view that swizzling is not sampler state, but I utterly disagree. Call me stubborn, or something else. I generally view sampling as how to interpret and filter the data: how the data is filtered specifies how the texel values are interpolated, which in one sense is specifying how to approximate a function. A nice paper, which I would bet many here have read, really hammers that point home: A Pixel Is Not a Little Square (PDF warning) http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
But then, it might just be me... I view a "texture" as just the bytes of the texels, and the interpretation of those bytes as sampling. That hardware treats "texture state" as essentially format plus data source, and sampling as function expansion, is not awful, but it smells fishy to me. I highly doubt that my request would have any serious impact on performance: it is not like one is binding zillions of samplers and textures, so the extra if-checks are not going to hammer anything. Moreover, since the usual pattern is that a program's samplers rarely change state, a GL implementation can latch sampler state and skip redundant work anyway.
It is just a request: from a user's point of view I can achieve the same thing more or less anyway, but only with an extra step (a one-time cost per texture), which seems silly and a nuisance.
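To make the asymmetry concrete, here is a minimal sketch of the situation, assuming a GL 3.3+ context and an already-created texture object `tex` (a hypothetical name). Swizzling can only be set as texture state, once per texture object, while filtering lives in a freely rebindable sampler object:

```c
/* Illustrative fragment only; requires a live GL 3.3+ context, so it
   is not runnable standalone. `tex` is assumed to exist already. */

/* Swizzle is texture state: e.g. replicate an R8 texture's red channel
   into RGB and force alpha to one. Must be repeated per texture. */
GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);

/* Filtering, by contrast, is sampler-object state (ARB_sampler_objects):
   set once, then swap between textures at will. glSamplerParameteri does
   NOT accept the swizzle pname, which is exactly the complaint above. */
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindSampler(0, sampler); /* attach to texture unit 0 */
```

So the workaround is just the one `glTexParameteriv` call per texture; the annoyance is that it cannot ride along with the sampler object.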
On the other hand, if GLSL had an interface to choose the sampler in the shader, then the hardware argument would win big time. If there were an extension (or core functionality) for just that, I'd happily accept the current inconvenience in exchange for it... but right now, it seems silly.



