Cube map access in fragment shader

I have a world with lots of different objects. Each object is defined in object space, and I transform it directly to eye space using glTranslatef(…) and so on.

Some of my objects use a reflection material, and their fragment program samples a cube map passed in by the application for the reflection texel lookup. The reflection vector is calculated per-vertex and the interpolated value is used in the fragment shader.

The problem is that the cube map lookup vector must be in world space (at least that's what is documented, since the cube map is defined in world space), but my input vertex positions are in object space and the output positions are in clip space. I need world-space vertices to do proper cube map lookups, and world space simply isn't available in OpenGL. Is there a way to do correct cube map lookups with a reflection vector in eye space (since that's what is available in GL), or do I have to maintain my own world-space matrix logic in the application? Doing the latter would shift work onto the CPU, which I want to avoid.
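For reference, the kind of per-vertex code I mean looks roughly like this (just a sketch, the variable names are placeholders):

// Sketch of the per-vertex setup described above. The reflection vector
// ends up in EYE space, because gl_ModelViewMatrix already folds the
// glTranslatef(...) calls and the camera transform together.
varying vec3 reflectDir;

void main()
{
    vec3 eyePos    = vec3(gl_ModelViewMatrix * gl_Vertex);   // object -> eye space
    vec3 eyeNormal = normalize(gl_NormalMatrix * gl_Normal); // normal in eye space
    vec3 viewDir   = normalize(eyePos);                      // camera sits at the origin in eye space
    reflectDir     = reflect(viewDir, eyeNormal);            // eye-space reflection vector
    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
}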

Are there any other workarounds? Cube maps are fairly common these days, and lots of games use them. Thanks in advance.

Here's the basic method for doing a reflection lookup in GLSL. Cube maps don't have a position; they just store directions:

vec3 reflectColor = textureCube(environmentMap, reflect(view_dir, normal)).rgb;

That's the whole problem! Cube maps are in world coordinates, so they only return the proper texel if the lookup vector is also in world coordinates!

I know how to do cube map lookups in a fragment shader. The snippet you gave returns the proper texel if "view_dir" is in world space, but the lookup is incorrect if it's in eye space. That's precisely what I want to know: is it possible to do proper cube map lookups in eye space too? If yes, please tell me how.

I am terribly sorry for posting this on the Advanced Coding forum; I meant to post it on the GLSL forum. It was a mistake, so consider this thread closed here. I am posting a new thread on the GLSL forum (moving/closing a thread is reserved for admins).

You can use the texture matrix (or corresponding vertex shader code) to transform your reflection vector back to world space. Your app only needs to determine the appropriate matrix for this once per frame.
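For example, the vertex shader could look something like this (just a sketch; "eyeToWorld" is assumed to be the rotation part of the inverse view matrix, which the application uploads once per frame):

// Sketch: rotate the eye-space reflection vector back into world space.
// eyeToWorld is assumed to be the upper-left 3x3 of the inverse view
// matrix (the camera's rotation), set by the application once per frame.
uniform mat3 eyeToWorld;
varying vec3 reflectDir;

void main()
{
    vec3 eyePos    = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 eyeNormal = normalize(gl_NormalMatrix * gl_Normal);
    vec3 eyeRefl   = reflect(normalize(eyePos), eyeNormal); // reflection in eye space
    reflectDir     = eyeToWorld * eyeRefl;                   // now in world space
    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
}

The fragment shader then samples the cube map with the interpolated reflectDir, exactly as in the snippet earlier in the thread.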

Yeah, I know that; it's mentioned in NVIDIA's white paper on cube mapping. However, I'd have to apply the inverse view transform to get back to world space, and that's something I want to avoid, because I would then have to maintain separate world and view matrices. If I have to maintain a separate world matrix anyway, I would be better off passing that matrix to the vertex shader, transforming the object-space input vertex to world space there, and calculating the reflection vector from the world-space vertex. Thanks for the reply.
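i.e. something like this (again just a sketch; "worldMatrix" and "cameraPosWorld" are the extra uniforms my application would have to maintain):

// Sketch of the world-space alternative: transform the object-space vertex
// to world space and compute the reflection vector there directly.
// worldMatrix and cameraPosWorld are hypothetical uniforms the application
// would have to keep in sync with its modelview state.
uniform mat4 worldMatrix;      // object -> world
uniform vec3 cameraPosWorld;   // camera position in world space
varying vec3 reflectDir;

void main()
{
    vec3 worldPos    = vec3(worldMatrix * gl_Vertex);
    vec3 worldNormal = normalize(vec3(worldMatrix * vec4(gl_Normal, 0.0))); // assumes no non-uniform scale
    vec3 viewDir     = normalize(worldPos - cameraPosWorld);
    reflectDir       = reflect(viewDir, worldNormal);   // already in world space
    gl_Position      = gl_ModelViewProjectionMatrix * gl_Vertex;
}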

Surely your application has some sort of “camera” concept? Well, that’s your world->view matrix right there. I can’t imagine why obtaining this matrix would pose any significant problem.

Hahaha, it's not that there's a problem keeping such a matrix around; I just wanted to know whether there was a workaround :) . Thanks a lot.
