Scaled object faster than 'big' object.. Why?

Sorry if this is a dumb question, but I don’t understand something I am seeing…

I have an object I am drawing which is effectively lots of spheres, each slightly bigger than the last…

Shaders apply textures in various ways to get a blended ball.

If I draw the spheres at roughly unit size and scale them up to the size I want in my environment, it runs considerably faster than creating the spheres at the size I actually want them in the environment…

There is no appreciable difference in the way the textures are applied between the two methods, and I cannot see any reason why they should perform differently. The number of vertices, the screen real estate used, etc. are all the same…

Can anyone explain to me why that is, or point me at some documentation I should have read?

Thanks.

Could you describe, in steps, both approaches?
From what you’ve written so far it’s hard to tell what could be the cause (early z-out, depth test, vertex caching, etc.).

My theory is that either something is wrong with the code in one of the methods, or, depending a little on how you create and draw these spheres, there could be some optimization the driver does when you scale them, since with the scaling method you don’t have to rebuild the data.
But that depends entirely on how it’s created and rendered.

Thanks for the replies…

Basically it’s a gluSphere. Always 100 slices by 100 stacks.
I draw it about 10 times around the same origin getting progressively bigger by a small scale factor.

I actually want the sphere to be about 100 units in diameter.

Each iteration of the sphere has a shader wrapping an NPOT texture around it, alpha blended with the previous sphere… and so on.
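
Roughly what the draw loop looks like, as a sketch (the blend function, shader binding, texture target and the layer/scale parameters here are placeholders, not my exact code):

```c
/* Sketch of the layered-sphere draw described above.
 * Assumes GL 2.0 shader entry points are available (e.g. via GLEW),
 * and that the NPOT texture works via ARB_texture_non_power_of_two. */
#include <GL/glew.h>
#include <GL/glu.h>

#define NUM_LAYERS 10

void drawLayeredSphere(GLUquadric *quad, GLuint program, GLuint texture,
                       double baseRadius, double layerScale)
{
    double radius = baseRadius;

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glUseProgram(program);               /* shader that wraps the NPOT texture */
    glBindTexture(GL_TEXTURE_2D, texture);

    for (int i = 0; i < NUM_LAYERS; ++i) {
        /* 100 slices x 100 stacks, as described above */
        gluSphere(quad, radius, 100, 100);
        radius *= layerScale;            /* each shell slightly larger than the last */
    }

    glUseProgram(0);
    glDisable(GL_BLEND);
}
```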

If I create a gluSphere which is unit size and scale my matrix so that the sphere ends up scaled by 100 in all dimensions, it is very, very fast to render.

If I create a gluSphere which is 100 in size and do not scale, it gets very slow very quickly, seemingly exponentially as it grows in size on the screen.
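
In rough code, the two paths I am comparing look something like this (illustrative only; the numbers are taken from my description above and the setup is simplified):

```c
/* Sketch of the two approaches being compared. */
#include <GL/gl.h>
#include <GL/glu.h>

/* Path A: build the geometry at unit size and scale the modelview matrix. */
void drawScaledUnitSphere(GLUquadric *quad)
{
    glPushMatrix();
    glScaled(100.0, 100.0, 100.0);       /* scale by 100 in all dimensions */
    gluSphere(quad, 1.0, 100, 100);      /* unit-size sphere */
    glPopMatrix();
}

/* Path B: build the geometry at the final size, with no matrix scale. */
void drawFullSizeSphere(GLUquadric *quad)
{
    gluSphere(quad, 100.0, 100, 100);    /* sphere created at size 100 directly */
}
```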

It’s not a problem as either method suits my requirements… I am just really curious as I cannot see a reason for it.

My two theories are that either there is something I don’t understand about gluSphere, or the texture operations get more complex with the ‘big’ sphere. With the latter I can’t see how, as there is no visible difference when you put the two side by side… Both spheres look the same…

Might be a stupid post, but did you check wireframe mode? Is it tessellated the same? I’m curious myself.
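
E.g. something like this, just a quick sketch, to render both spheres as wireframe and compare the tessellation:

```c
/* Quick way to compare tessellation of the two spheres in wireframe. */
#include <GL/gl.h>
#include <GL/glu.h>

void drawWireframeSphere(GLUquadric *quad, double radius)
{
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);   /* wireframe for everything drawn */
    gluSphere(quad, radius, 100, 100);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);   /* restore filled rendering */
}

/* Alternatively, GLU can emit lines directly: */
/*   gluQuadricDrawStyle(quad, GLU_LINE);       */
```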