Sampling from a 2D texture as if it were 3D

I would like to sample from a 2D texture as if it were a 3D one. I want to do this because the algorithms I want to use are easier to implement when rendering to a 2D texture, but it would be better for reading the generated data back if I could use a 3D sampler. Is there a way in OpenGL to do this?
Otherwise I would end up writing the 3D lookup in the shader myself, which would be bad for performance.
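For illustration, this is roughly what such a manual lookup would look like if the volume were stored as a "flat 3D texture", i.e. the depth slices tiled into one 2D texture. It is only a sketch; flatVolume, volumeSize, tilesX and tilesY are placeholder names for whatever layout is actually used:

```glsl
// Sketch: emulate a 3D lookup on a 2D atlas that stores the depth slices
// tiled left-to-right, top-to-bottom ("flat 3D texture"). All uniform names
// here are made up for illustration.
uniform sampler2D flatVolume;   // 2D atlas holding all slices
uniform vec3  volumeSize;       // (width, height, depth) of the virtual 3D texture
uniform float tilesX;           // number of slices per atlas row
uniform float tilesY;           // number of slices per atlas column

// Map a slice index plus a 2D coordinate inside that slice to atlas coordinates.
// Note: with GL_LINEAR filtering, neighbouring tiles can bleed at the borders.
vec2 sliceCoord(float slice, vec2 uv)
{
    float col = mod(slice, tilesX);
    float row = floor(slice / tilesX);
    return (vec2(col, row) + uv) / vec2(tilesX, tilesY);
}

// Emulated trilinear lookup: hardware bilinear filtering inside each slice
// (the 2D sampler must use GL_LINEAR), manual interpolation between slices.
vec4 sampleAs3D(vec3 coord)
{
    float slice  = coord.z * volumeSize.z - 0.5;
    float slice0 = clamp(floor(slice), 0.0, volumeSize.z - 1.0);
    float slice1 = min(slice0 + 1.0, volumeSize.z - 1.0);
    float f      = clamp(slice - slice0, 0.0, 1.0);
    vec4 a = texture2D(flatVolume, sliceCoord(slice0, coord.xy));
    vec4 b = texture2D(flatVolume, sliceCoord(slice1, coord.xy));
    return mix(a, b, f);
}
```

So the hardware still does the bilinear filtering inside each slice for free; only the interpolation between the two neighbouring slices has to be done by hand, at the cost of two texture fetches per lookup instead of one.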

It is hard to give advice on such a vague request as "sample 2D like 3D".
And the performance should be OK even if you do that in the shader.
You should benefit from better cache behaviour by using a 3D texture directly, but that depends a lot on what you "have in mind".

So, what do you have in mind exactly, both for the 2D texture generation and the 3D sampling of it?

Well, when I render to the texture I want to do it in a 2D way, and when I'm sampling from it in a later step I want to treat it as a 3D texture. So basically I want the same data to be a 2D texture in one step and a 3D texture in another step. So I need some kind of "cast" of the data from 2D to 3D.

Well, when I render to the texture I want to do it in a 2D way, and when I'm sampling from it in a later step I want to treat it as a 3D texture.

What does that mean? What does it mean to "treat it as a 3D texture"?

Ingrater, you did not provide any new information.
Do you want to do some kind of spatial hash?

Not to start a new topic, but the better cache behaviour of 3D textures is a false assumption. We benchmark volume rendering, and thus basic 3D texturing performance, under different viewing conditions. We found that the caches are still optimized for 2D texturing and that 3D textures are still packed in memory like a stack of 2D images. The effect is a performance drop of 60% from the optimal texture orientation to the worst one (or an increase of the draw time of 200%), which is a serious issue in volume rendering. NV40 and G70 used a space-filling layout and did not suffer that much from this issue, but with every chip after that the problem got worse (ATI and NVIDIA alike).

-chris

I guess you want to take advantage of the built-in interpolation of the 3D sampler, which would be faster than doing it in shader code with a 2D sampler?

Have you tried copying the data from the 2D texture to a 3D texture?
Using PBOs you should be able to keep the data on the GPU side. I would expect the data layout of a 3D texture to be that of a sequence of 2D slices… can anybody confirm this?
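Something along these lines (just a sketch; tex2D, tex3D, W, H and D are placeholders, and it assumes the 2D texture is W texels wide and H*D texels tall with the slices stacked on top of each other, RGBA8 format, and that tex3D was already allocated with glTexImage3D):

```c
/* Sketch of the PBO round trip: pack the 2D texture into a pixel buffer,
 * then unpack the same buffer into the 3D texture. The data never has to
 * pass through client memory, though whether the driver really keeps the
 * buffer in video memory is up to the implementation. */
GLuint pbo;
glGenBuffers(1, &pbo);

/* 1. Read the 2D texture (W x (H*D) texels) back into the PBO. */
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, (GLsizeiptr)W * H * D * 4, NULL, GL_STREAM_COPY);
glBindTexture(GL_TEXTURE_2D, tex2D);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0); /* offset 0 into the PBO */
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

/* 2. Upload the same buffer as the W x H x D slices of the 3D texture. */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBindTexture(GL_TEXTURE_3D, tex3D);
glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, 0, W, H, D, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
```

The stacked layout is what makes this work without any repacking: the rows of the tall 2D texture read back in exactly the order glTexSubImage3D expects for the slices of the volume.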

Yes, that's exactly what I want to do. I don't know how to describe my problem other than the way I already did; English is not my native language.

I would prefer a solution where I don't have to copy the data; I just want to reuse it.