environment mapping

I would like to draw a totally reflective sphere such that I can see
other objects in a scene (say, other spheres) reflected in it, using environment mapping.
I would also like to change the viewpoint and/or the sphere and have the changes reflect correctly on the sphere.

I know I need to use a sphere map at some point, but I am not sure how to build and update the env map.

Any details would be appreciated

Thanks

I’d do it with a cube map instead of a sphere map. Look in the NVIDIA SDK for demos that show how to do it. Basically you render the scene 6 times (for each cube map face) with a 90° FOV with the camera positioned at the center of the reflective sphere and then use the generated cube map when you render the scene again from the viewer’s point of view.

If you really want to use a sphere map, then search on google for “spherical environment mapping”.

Thanks. Unfortunately I need to use a sphere map.
I did a Google search but everything seems to be high end.
My only difficulty is how to generate the texture of the environment. I have a scene with cubes, spheres, and cylinders, and want it to reflect properly in a sphere located slightly far away. My question is: how do I render the scene into a texture to be mapped onto the sphere using the OpenGL sphere map?
Do I render the scene normally and then write it to some sort of array or file that is then bound as a texture? Also, how do I take into account the position of the reflective sphere when making the texture?

Help appreciated

spheremaps are supposed to be pictures of a small reflective sphere taken from a camera with an infinite focal length, an infinite distance away (i think).

so i dunno where you put the camera - a long way away? i have never seen a spheremap that was rendered - i’ve only seen ones made from photos taken with fish-eye lenses. you cannot render a fish-eye lens scene easily - i think you need to approximate it by rendering each portion of the screen with a particular projection matrix.

OT: there is a Quake1 mod called pan quake that does this - slices the screen up into areas with different projection matrixes to make fish-eye view.

OR look in the DirectX 8 SDK - i think there is a fish-eye lens sample in there.

for cube maps it is easy, you put the camera @ the sphere centre.

as for the texture - you can render, then glReadPixels() into memory, then glTexImage2D() off that (be careful with your data formats - GL_RGB format GL_UNSIGNED_BYTE data is probably your easiest bet).

or you can do glCopyTexSubImage2D() direct from the framebuffer into a (previously created) texture. this is faster - use it for realtime. use glReadPixels() if you want to keep the image on disk for later.

also you could draw to and read from a pbuffer instead of the framebuffer if you need that.
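The two readback paths above, as a fragment (an untested sketch, not a complete program: it assumes a current GL context, an existing texture object `tex`, and that the scene was just rendered at `texSize` × `texSize`):

```c
/* Path 1: read back to client memory, then re-upload as a texture.
   Keep the pixels around if you want to write the image to disk later. */
glPixelStorei(GL_PACK_ALIGNMENT, 1);  /* in case texSize*3 isn't a multiple of 4 */
GLubyte *pixels = (GLubyte *) malloc(texSize * texSize * 3);
glReadPixels(0, 0, texSize, texSize, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texSize, texSize, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
free(pixels);

/* Path 2: copy straight from the framebuffer into an existing texture.
   Faster (no round trip through client memory) - use this for realtime. */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, texSize, texSize);
```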

i have never seen a spheremap that was rendered - i’ve only seen ones made from photos taken with fish-eye lenses.

Actually, there’s a special lib in GLUT that creates sphere maps. The way it works is described in detail here http://www.opengl.org/developers/code/sig99/advanced99/notes/node180.html

Basically you render the 6 cube map faces as you would when using cubic environment mapping, and then render a sphere divided into 6 sub-meshes, applying each of the generated textures to the corresponding sub-mesh using spherical texture coordinates. That textured sphere is rendered into a 2D texture, which becomes the final sphere map.

So dynamic spherical environment mapping in OpenGL requires additional steps compared to cubic environment mapping and is thus slower. The advantage is that it works on older hardware with no hardware-accelerated cube map support.
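For reference, the mapping those spherical texture coordinates implement can be written down directly. Given a unit eye-space reflection vector R, OpenGL's GL_SPHERE_MAP texgen computes the following (the helper name below is mine; the formula itself is the standard one from the GL spec and the Siggraph notes):

```cpp
#include <cmath>

// Standard OpenGL sphere-map texture coordinates for an eye-space
// reflection vector (rx, ry, rz). The viewer looks down -z, so the
// direction directly behind the viewer, R = (0, 0, -1), makes m = 0:
// that is the singularity that maps to the outer rim of the disc.
void sphereMapCoords(float rx, float ry, float rz, float &s, float &t) {
    float m = 2.0f * std::sqrt(rx*rx + ry*ry + (rz + 1.0f)*(rz + 1.0f));
    s = rx / m + 0.5f;
    t = ry / m + 0.5f;
}
```

For example, the reflection vector pointing straight back at the viewer, (0, 0, 1), lands at the center of the map, (0.5, 0.5).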

Thanks guys. I believe you have answered my questions.

Ok seems I will need some basic guidance.
Here is how they do the intersection/color computation for the sphere map in the notes:

They do cube mapping by putting the camera at the center of the perfectly reflective sphere. So they get six scenes (one for each face). These six rendered images are then copied into six texture objects.
I have a question here: how do I keep these six images from
actually being drawn to the screen? (since I only want to save them, not display them)
Also they say, “Be sure to align the first cube face view frustum to directly face the viewer. Then the other five cube faces should be aligned with respect to the first.”
What’s the code to do this?

Thanks

I have a question here: how do I keep these six images from
actually being drawn to the screen?

There are various ways; vshader has already mentioned a few posts up. One option is to render them to the framebuffer and copy the images from there to a texture (note: the framebuffer only becomes visible in the window after you call SwapBuffers, so only call it after you’re done with the 6 environment textures and have rendered the final scene).

Another option is to use pbuffers. Those are off-screen rendering buffers that can be used for render-to-texture.

As usual, google is an invaluable resource. Also the NVIDIA SDK has demos with code showing how to do all this (although I think they all do cubic environment mapping, but since the first 6 steps are the same you can’t go wrong with them).

Also they say, “Be sure to align the first cube face view frustum to directly face the viewer. Then the other five cube faces should be aligned with respect to the first.”
What’s the code to do this?

You’ll have to set an appropriate view matrix, e.g. using gluLookAt. Also note that you’ll need to change the projection matrix to a 90-degree field of view. Again, see the NVIDIA cube mapping demos or google.

Paul Bourke has a very good page on this : http://astronomy.swin.edu.au/~pbourke/projection/spheretexture/
BTW, his site is incredible. Every time I go there, I’m amazed by the amount of valuable information gathered in a single place.

that paul bourke site sure is excellent - but i can’t see how the mapping he describes could be used for GL style sphere-map environment mapping. am i missing something?

it doesn’t look like it will map the point opposite the viewer into a singularity around the outer edge of the sphere like the one in the Siggraph paper Asgard pointed out.
it looks like he’s describing a way to map an environment to a sphere that surrounds the camera in 3D - like you might use for a skydome or something. you would need to do another remapping to make a sphere-map from it - probably by just using the inverse of the transformation done to sphere-map texture coords. ie work out what texture-coord will be fetched by a given vector from the centre of the sphere, and render the pixel on the sphere that hits that vector at that texture-coord. make sense?

btw - mikemor - if u can’t use cubemaps, could u perhaps use dual-paraboloid maps? they can be done on non-cubemap hardware, but it takes two rendering passes to draw a paraboloid environment-mapped object (instead of 1 for sphere mapped).

Dual-Paraboloid environment maps are view independent, so you don’t need a new map every time the camera moves. there is a demo or paper about them on the nVIDIA site (i think?), also if you look for a thesis named “High-Quality Shading and Lighting for Hardware Accelerated Rendering” by a guy called Wolfgang Heidrich you will find a detailed description of the technique and how to implement it.

if you have no luck finding the thesis (u should be able to with google), mail me and i will email u the .pdf.
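For the curious, the dual-paraboloid parameterization from Heidrich's work boils down to a few lines. A sketch (the function name and the exact front/back convention are my choice; check the thesis for the authoritative version):

```cpp
#include <cmath>

// Dual-paraboloid texture coordinates (after Heidrich): the environment
// is split into a front map covering directions with rz >= 0 and a back
// map covering rz < 0. Returns 0 for the front map, 1 for the back map.
int paraboloidCoords(float rx, float ry, float rz, float &s, float &t) {
    if (rz >= 0.0f) {                            // front paraboloid
        s = rx / (2.0f * (1.0f + rz)) + 0.5f;
        t = ry / (2.0f * (1.0f + rz)) + 0.5f;
        return 0;
    } else {                                     // back paraboloid
        s = rx / (2.0f * (1.0f - rz)) + 0.5f;
        t = ry / (2.0f * (1.0f - rz)) + 0.5f;
        return 1;
    }
}
```

Unlike the sphere map there is no singularity in either hemisphere's map, which is why the maps stay valid as the viewer moves; the cost is the two passes mentioned above.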

Originally posted by Asgard:
You’ll have to set an appropriate view matrix, e.g. using gluLookAt. Also note that you’ll need to change the projection matrix to a 90-degree field of view. Again, see the NVIDIA cube mapping demos or google.

Struggling to write a function that renders the six faces of the cube and puts them into texture memory. Here is what I, in my ignorance, came up with. Your expert comments are deeply appreciated.

-Suppose my camera is characterized by (eye,lookat,up)
-Suppose the sphere onto which I want to paste the sphere map is at the point s_c.

Then the function to render the scene as viewed from a camera facing successively the six faces of the cube centered at the position s_c and saving the result into texture memory is

void render_cube (void) {
    glMatrixMode(GL_PROJECTION);
    glPushMatrix(); //save projection mat
    glLoadIdentity();
    gluPerspective(90,1,1,20); //sets proj. with 90 degree fov
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix(); //save modelview mat
    glLoadIdentity();
    gluLookAt(s_c,eye-s_c,up); //move camera to center of sphere
    //and facing viewer. Is this right??
    for (int i=1;i<=6;i++) {
        render_scene();
        glCopyTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,x,y,
                         32,32,0); //save into texture memory
        glRotatef(-90,up); //move to next cube face
    }
    glPopMatrix(); //restore the Modelview matrix
    glPopMatrix(); //restore projection matrix
}

What are the x,y coordinates we pass to glCopyTexImage2D?
How do we use these textures afterwards? (I.e. how do I refer to them?)
Are the push and pop right?

Many thanks

Originally posted by vshader:
[b]that paul bourke site sure is excellent - but i can’t see how the mapping he describes could be used for GL style sphere-map environment mapping. am i missing something?

it doesn’t look like it will map the point opposite the viewer into a singularity around the outer edge of the sphere like the one in the Siggraph paper Asgard pointed out.
it looks like he’s describing a way to map an environment to a sphere that surrounds the camera in 3D - like you might use for a skydome or something. you would need to do another remapping to make a sphere-map from it - probably by just using the inverse of the transformation done to sphere-map texture coords. ie work out what texture-coord will be fetched by a given vector from the centre of the sphere, and render the pixel on the sphere that hits that vector at that texture-coord. make sense?

btw - mikemor - if u can’t use cubemaps, could u perhaps use dual-paraboloid maps? they can be done on non-cubemap hardware, but it takes two rendering passes to draw a paraboloid environment-mapped object (instead of 1 for sphere mapped).

Dual-Paraboloid environment maps are view independent, so you don’t need a new map every time the camera moves. there is a demo or paper about them on the nVIDIA site (i think?), also if you look for a thesis named “High-Quality Shading and Lighting for Hardware Accelerated Rendering” by a guy called Wolfgang Heidrich you will find a detailed description of the technique and how to implement it.

if you have no luck finding the thesis (u should be able to with google), mail me and i will email u the .pdf.[/b]

I agree that Paul Bourke’s code might not be exactly the easiest way to do sphere mapping, but it is quite good for pasting a texture onto a sphere. Thanks for the thesis pointer; I did locate a PDF version of it. I might follow your suggestion in a few weeks and use dual-paraboloid env maps, but right now I am in a race against time and will probably stick with the sphere map.
Thanks

Don’t use the projection matrix to change the orientation of what you’re rendering; it won’t come out right. Instead, think of rendering each cube face as rendering the same scene with a camera pointing in different directions; i.e. change the “VIEW” part of the “MODELVIEW” matrix.

Also, rotating 6 times around the up axis isn’t going to give you all six faces; you need a table of orientations to get it all to come out correctly. Also, you leave the PROJECTION matrix as the current matrix in your render-scene loop; that’s probably not what you want.
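To make the orientation-table point concrete, here is one possible set of face orientations (a sketch; the struct names, the up-vector convention, and the `sc`/`tex` variables in the commented loop are my own, and other consistent conventions work too). Rotating about a single axis can only sweep out four faces; +Y and -Y are never reached, which is why a table is needed:

```cpp
struct Vec3 { float x, y, z; };

// One entry per cube face: the direction the camera looks,
// and which way is up for that face.
struct Face { Vec3 dir; Vec3 up; };

static const Face kFaces[6] = {
    { { 1, 0, 0}, {0, -1, 0} },  // +X
    { {-1, 0, 0}, {0, -1, 0} },  // -X
    { { 0, 1, 0}, {0,  0, 1} },  // +Y
    { { 0,-1, 0}, {0,  0,-1} },  // -Y
    { { 0, 0, 1}, {0, -1, 0} },  // +Z
    { { 0, 0,-1}, {0, -1, 0} },  // -Z
};

// In the render loop (GL calls shown as comments since they need a
// context; sc is the sphere centre, tex[] six texture object names):
// for (int i = 0; i < 6; ++i) {
//     glLoadIdentity();
//     gluLookAt(sc.x, sc.y, sc.z,                  // camera at sphere centre
//               sc.x + kFaces[i].dir.x,
//               sc.y + kFaces[i].dir.y,
//               sc.z + kFaces[i].dir.z,            // looking along the face
//               kFaces[i].up.x, kFaces[i].up.y, kFaces[i].up.z);
//     render_scene();
//     glBindTexture(GL_TEXTURE_2D, tex[i]);
//     glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, 64, 64, 0);
// }
```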

Originally posted by jwatte:
[b]Don’t use the projection matrix to change the orientation of what you’re rendering; it won’t come out right. Instead, think of rendering each cube face as rendering the same scene with a camera pointing in different directions; i.e. change the “VIEW” part of the “MODELVIEW” matrix.

Also, rotating 6 times around the up axis isn’t going to give you all six faces; you need a table of orientations to get it all to come out correctly. Also, you leave the PROJECTION matrix as the current matrix in your render-scene loop; that’s probably not what you want.[/b]

It doesn’t seem to me that PROJECTION is active during the loop; rather, it is the modelview matrix that is active. I don’t understand why rotating about the up vector will not give me the 6 faces of the cube. Can somebody take a look at the code and give me some details?
Thanks


-Suppose my camera is characterized by (eye,lookat,up)
-Suppose the sphere onto which I want to paste the sphere map is at the point s_c.

    Then the function to render the scene as viewed from a camera facing sucessively the
    six faces of the cube centered at the position s_c and saving the result into texture
    memory is
    -------------------------------------------------------------------------------------
    void render_cube (void) {
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();//save projection mat
    glLoadIdentity();
    gluPerspective(90,1,1,20);//Sets proj. with 90 degree fov
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();//save modelview mat
    glLoadIdentity();
    gluLookAt(s_c,eye-s_c,up);//move camera to center of sphere
    //and facing viewer. Is this right??
    for (int i=1;i<=6;i++) {
    render_scene();
    glBindTexture(GL_TEXTURE_2D,i);
    glCopyTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,x,y,
    32,32,0);//save into texture memory
    glRotatef(-90,up);//move to next cube face
    }
    glPopMatrix();//restore the Modelview matrix
    glPopMatrix();//restore projection matrix
    }

Look here:
http://www.dorbie.com/envmap.html

Thanks dorbie!! Nice stuff you have there

Finally figured out how to do cubic env mapping.
The last piece is to map the cube onto a sphere and put it into a texture.
I’m not sure how to do this following all the advice I got from Asgard and Dorbie. This basically involves mapping each side of the cube onto one submesh of the sphere, with special care for the back face, indexing the texture in the process.
Does anybody know how to implement this?
Thanks

The last piece is to map the cube onto a sphere and put it into a texture.

How to do that is pretty well described in the link I posted before (the Siggraph 99 advanced graphics course).
Also you can grab the source code of GLUT (e.g. here http://www.xmission.com/~nate/glut.html) and look in the /lib/glsmap directory, which has complete source code for generating the sphere mesh and mapping the 6 cube textures onto it.
Cheers.

[This message has been edited by Asgard (edited 10-06-2002).]

Thanks Asgard! I finally (?) got it.

I draw my scene in a 700x700 window, but to produce each cube face I draw it in a 64x64 window. I then noticed a weird thing: when I run the program it displays my scene (inverted) in a 64x64 window, and the rest of the 700x700 window is empty other than the sphere with the sphere map. Does anyone have a clue what might be wrong?