optimization using textureCube instead of normalize



glAren
07-26-2006, 12:19 AM
I replaced three calls to normalize() with textureCube() lookups in my pixel shader, e.g.
normal = normalize(normal) became normal = textureCube(samplerCube, normal). But I saw no speed-up in rendering, even though the textureCube instruction should be about three times faster than normalize().
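
Roughly, the change looks like this (a minimal sketch; normCube is a placeholder name for the normalization-cubemap sampler, and the cubemap is assumed to store normalize(v) remapped to [0,1], hence the decode step — a signed or float texture format would skip it):

    uniform samplerCube normCube;  // placeholder name: normalization cubemap,
                                   // texel at direction v holds normalize(v)*0.5+0.5
    varying vec3 normal;           // interpolated, therefore denormalized

    void main()
    {
        // Before: arithmetic normalization.
        // vec3 n = normalize(normal);

        // After: a single cubemap fetch plus a decode back to [-1,1].
        vec3 n = textureCube(normCube, normal).rgb * 2.0 - 1.0;

        gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);  // visualize the normal
    }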

Relic
07-26-2006, 01:02 AM
What's your question?
Did you analyze whether your performance is fragment-shader limited at all?

glAren
07-26-2006, 01:16 AM
The question is why there is no speed-up. I haven't done bottleneck tests yet; I should have done them first. But when I add just one line of a z-reconstruction formula to the pixel shader, the framerate drops by about 25%, so I guess I am pixel-shader limited.
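
(For reference, the kind of line I mean — a sketch assuming eye-space z is recovered from a depth texture under a standard perspective projection; depthTex, zNear and zFar are placeholder names:)

    uniform sampler2D depthTex;   // placeholder: scene depth texture
    uniform float zNear, zFar;    // perspective near/far plane distances
    varying vec2 texCoord;

    void main()
    {
        float depth = texture2D(depthTex, texCoord).r; // window-space depth in [0,1]
        float zNdc  = depth * 2.0 - 1.0;               // back to NDC [-1,1]
        // eye-space z (negative in front of the camera):
        float zEye  = (2.0 * zFar * zNear) / ((zFar - zNear) * zNdc - (zFar + zNear));
        gl_FragColor = vec4(vec3(-zEye / zFar), 1.0);  // visualize linearized depth
    }

Per-fragment math like that divide is a cheap way to probe the bottleneck: if adding it changes the framerate, the fragment stage is at least partly limiting.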

Komat
07-26-2006, 03:02 AM
It is possible that the driver scheduled the instructions in such a way that the latencies of the normalization are hidden, so the normalization was never the bottleneck.

Additionally, you cannot count on the texture sampling being faster. Its cost depends on whether the data are in the cache and on the number of hardware texture units available. For example, the Radeon X1900 hardware has three times as many pixel shader processors as texture units.

glAren
07-26-2006, 05:04 AM
Hmm, it could be. Do you have any tips for a shader performance analysis tool for ATI, like NVIDIA's NVShaderPerf?

glAren
07-26-2006, 06:01 AM
Probably I should try RenderMonkey...