Dynamic Cube Mapping

I’ve noticed that when rendering the six maps needed for dynamic cube mapping, the pixels come out reversed relative to the format the hardware expects on my GeForce 2 MX. At the moment I use glScale to flip the coordinates of the objects and disable backface culling, since glDrawPixels and glReadPixels are too slow for interactive use. Does anyone know of a better way? I find this method a bit ugly. I also know the layout is meant to be consistent with the RenderMan cube mapping format, but why is it like this? It makes no sense to me.

glCopyTexSubImage2D
The glCopyTexSubImage2D function copies a sub-image of a two-dimensional texture image from the frame buffer.

void glCopyTexSubImage2D(
GLenum target,
GLint level,
GLint xoffset,
GLint yoffset,
GLint x,
GLint y,
GLsizei width,
GLsizei height
);

Parameters
target
The target texture to which the image data is copied. This value must be GL_TEXTURE_2D.
level
The level-of-detail number. Level 0 is the base image. Level n is the nth mipmap reduction image.
xoffset
The texel offset in the x direction within the texture array.
yoffset
The texel offset in the y direction within the texture array.
x, y
The window coordinates of the lower-left corner of the row of pixels to be copied.
width
The width of the sub-image of the texture image. Specifying a texture sub-image with zero width has no effect.
height
The height of the sub-image of the texture image. Specifying a texture sub-image with zero height has no effect.
Remarks
The glCopyTexSubImage2D function replaces a rectangular portion of a two-dimensional texture image with pixels from the current frame buffer, rather than from main memory as is the case for glTexSubImage2D.

A rectangle of pixels beginning at the window coordinates x and y and having the dimensions width and height replaces the portion of the texture array with x indexes xoffset through xoffset + (width – 1) and y indexes yoffset through yoffset + (height – 1), at the mipmap level specified by level. The destination rectangle in the texture array cannot include any texels outside the originally specified texture array.

The glCopyTexSubImage2D function processes the pixels in a row in the same way as glCopyPixels except that before the final conversion of the pixels, all pixel component values are clamped to the range [0, 1] and converted to the texture’s internal format for storage in the texture array. Pixel ordering is determined with lower x coordinates corresponding to lower texture coordinates. If any of the pixels within a specified row of the current frame buffer are outside the window associated with the current rendering context, then their values are undefined.

If any of the pixels within the specified rectangle of the current frame buffer are outside the read window associated with the current rendering context, then the values obtained for those pixels are undefined. No change is made to the internalFormat, width, height, or border parameter of the specified texture array or to texel values outside the specified texture sub-image.

You cannot include calls to glCopyTexSubImage2D in display lists.

Note The glCopyTexSubImage2D function is only available in OpenGL version 1.1 or later.

Texturing has no effect in color-index mode. The glPixelStore and glPixelTransfer functions affect texture images in exactly the same way that they affect pixels drawn with glDrawPixels.

The following functions retrieve information related to glCopyTexSubImage2D:

glGetTexImage

glIsEnabled with argument GL_TEXTURE_2D.

Error Codes
The following are the error codes generated and their conditions.

Error Code Condition
GL_INVALID_ENUM target was not an accepted value.
GL_INVALID_VALUE level was less than zero or greater than log2(max), where max is the returned value of GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE xoffset was less than border, (xoffset + width) was greater than (w + border), yoffset was less than border, or (yoffset + height) was greater than (h + border), where w is GL_TEXTURE_WIDTH, h is GL_TEXTURE_HEIGHT, and border is GL_TEXTURE_BORDER. Note that w includes twice the border width.
GL_INVALID_VALUE width was less than border or y was less than border, where border is the border width of the texture array.
GL_INVALID_OPERATION The texture array was not defined by a previous glTexImage2D operation.
GL_INVALID_OPERATION glCopyTexSubImage2D was called between a call to glBegin and the corresponding call to glEnd.

See Also
glBegin, glCopyPixels, glCopyTexSubImage1D, glDrawPixels, glEnd, glFog, glPixelStore, glPixelTransfer, glTexEnv, glTexGen, glTexImage2D, glTexSubImage2D, glTexParameter
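
For reference, a minimal usage sketch, assuming a 256x256 GL_TEXTURE_2D named tex has already been defined with glTexImage2D (the name and size are only for illustration):

glBindTexture(GL_TEXTURE_2D, tex);
glReadBuffer(GL_BACK);                /* source is the current read buffer */
glCopyTexSubImage2D(GL_TEXTURE_2D,    /* target */
                    0,                /* mipmap level 0 */
                    0, 0,             /* xoffset, yoffset within the texture */
                    0, 0,             /* x, y window coordinates of the copy source */
                    256, 256);        /* width, height */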

glCopyTexSubImage2D is much faster than glCopyTexImage2D because the texels don’t need to be freed and reallocated, just overwritten. glCopyTexImage2D frees any previous texels used by the current texture, allocates the new texels, then copies the pixels from the read buffer. glCopyTexSubImage2D doesn’t free or allocate texels; it simply copies them from the read buffer. So the first time you dynamically generate a texture, you’ll need to call glCopyTexImage2D to initialize the texture’s dimensions, format and border. But subsequent updates of that dynamic texture should use glCopyTexSubImage2D, unless the dimensions, format or border change. But beware: there was a bug in nVidia drivers that would update only the first face of the cube map when using glCopyTexSubImage2D. The latest Windows drivers fix this, but I’m not aware of a fix for Mac OS 9 or X (for the GeForce 2 MX). I have no idea if other vendors have had similar issues.
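
A minimal sketch of that init-once / update-often pattern (the size, the firstFrame flag and the loop index i are illustrative; the cube-map face targets need GL_ARB_texture_cube_map or OpenGL 1.3, since the plain GL 1.1 entry point documented above only accepts GL_TEXTURE_2D):

/* First frame: glCopyTexImage2D defines each face (allocates texels and
 * copies from the read buffer).  Later frames: glCopyTexSubImage2D only
 * overwrites the existing texels, which is the cheaper path. */
GLenum face = GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + i;   /* i = 0..5, hypothetical */

if (firstFrame)
    glCopyTexImage2D(face, 0, GL_RGB, 0, 0, 128, 128, 0);
else
    glCopyTexSubImage2D(face, 0, 0, 0, 0, 0, 128, 128);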

I’m not sure if this has answered my question or not.

My problem is this: say I render the left face of the cube map, and in view there is an object with the word “HELLO” written on it. Because I want it to look like a reflection, I should see the word backwards on the cube-mapped object, but I don’t; it still reads “HELLO”. That is why I use glScale and disable backface culling to get the mirror effect.

Are you saying I can put a mirror image of the frame buffer into a texture using glCopyTexSubImage2D? If so, how?

hm… all I did was answer this part: “as glDrawPixels and glReadPixels are too slow for interactive use”

render the image and just use glCopyTexSubImage2D instead of reading and drawing etc… it’s MUCH faster like that…

You seem to be slightly mistaken about how cube mapping works. You shouldn’t mirror the framebuffer and use it as a cube map.

What you should do is render your scene from the point of view of the reflective object. You should render everything except the reflective object itself. Also, you need to render six views (6!), facing in both directions along each of the three primary axes. You need to copy all these views to textures, as explained in abundant detail by davepermen (and also a few times by maxuser), and use those textures as a cube map.
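
A rough sketch of that six-view loop (assumptions for illustration only: a 128x128 cube map named cubeTex was created earlier, the reflective object’s centre is at ox, oy, oz, DrawSceneWithoutReflector() is a hypothetical helper, and the ARB_texture_cube_map tokens of that era are used; the per-face up vectors follow the orientation convention discussed further down the thread):

static const GLfloat dir[6][3] = {
    { 1, 0, 0}, {-1, 0, 0},   /* +X, -X */
    { 0, 1, 0}, { 0,-1, 0},   /* +Y, -Y */
    { 0, 0, 1}, { 0, 0,-1},   /* +Z, -Z */
};
static const GLfloat up[6][3] = {
    { 0,-1, 0}, { 0,-1, 0},   /* +X, -X */
    { 0, 0, 1}, { 0, 0,-1},   /* +Y, -Y */
    { 0,-1, 0}, { 0,-1, 0},   /* +Z, -Z */
};
int i;

glViewport(0, 0, 128, 128);               /* face resolution */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(90.0, 1.0, 0.1, 1000.0);   /* 90 degree FOV, square aspect */

glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubeTex);
for (i = 0; i < 6; i++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(ox, oy, oz,                                    /* eye at the reflector */
              ox + dir[i][0], oy + dir[i][1], oz + dir[i][2],
              up[i][0], up[i][1], up[i][2]);
    DrawSceneWithoutReflector();                             /* everything but the reflector */
    glCopyTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + i,
                        0, 0, 0, 0, 0, 128, 128);
}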

Note that you can probably backface-cull three of the six views, and not update those sides of the cubemap.

  • Tom

[This message has been edited by Tom Nuydens (edited 03-29-2001).]

Been reading some Nvidia GDC stuff about pbuffers. They sound ideal for creating dynamic cube maps in.

I believe it may be faster than using your normal back buffer.

Tom Nuydens wrote:
> You need to copy all these views to textures

It is my understanding that a cube map is one (1) texture which contains all six faces in a sort of texture sheet layout. Am I mistaken?

> Note that you can probably backface-cull three of the six views

I do not think you can. For example, if I’m looking at a sphere, I can clearly see parts of all six faces of a surrounding box, because the reflected rays around the perimeter of the sphere (as seen from the eye) approach the actual eye rays. I.e., where the incoming eye ray is oblique to the surface, the reflection is going to come from behind the object at a similarly oblique angle.

I generally understand how cube mapping works and I have my code working at a nice level of performance.

I do render my scene from the point of view of the reflective object (I translate the view to the x, y, z position of the centre of the reflective object), and I do render everything except the reflective object itself. I also render six views, facing in both directions along each of the three primary axes. I do copy all these views to textures, in the same way as explained by davepermen and maxuser, and I use those textures as a cube map (using glCopyTexSubImage2D). I don’t mean to sound arsey, I’m just trying to explain the problem.

If I do all this, each of the cube map faces is the mirror of what it should be, which is why I have to use glScale and disable backface culling.

for left and right I use
glScalef(1.0f,1.0f,-1.0f);

for front and back I use
glScalef(-1.0f,1.0f,1.0f);

for top and bottom I use
glScalef(1.0f,-1.0f,1.0f);

before I render any objects (the above scales may be wrong because I’m doing this off the top of my head), so the resulting image in the frame buffer is actually the mirror of what it would have been. This method must work, because I get the correct results, so unless this is something you are MEANT to do, there must be some other problem, and I don’t know what it is.
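
Roughly, what I do per face looks like this (a sketch only: the flip axis shown is the one for the left/right faces, the 128x128 size is just an example, and RenderScene() stands in for my actual drawing code; culling is disabled because the flip reverses polygon winding):

glDisable(GL_CULL_FACE);           /* flipped winding would otherwise cull everything */
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glScalef(1.0f, 1.0f, -1.0f);       /* mirror the scene for this face */
RenderScene();                     /* everything except the reflective object */
glPopMatrix();
glCopyTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB,   /* this face */
                    0, 0, 0, 0, 0, 128, 128);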

Am I forgetting something that could cause this effect?

Could it be a driver problem?

I don’t know how to explain this any better, short of posting all my code (which I’m sure nobody cares to see).

Also, I may not be the only one with the problem; Kronos seems to have the same thing (see the “problem with cube mapping” discussion).

PS. I never actually used glReadPixels and glDrawPixels; I was just using them as an example to say I could read the buffer into an array, modify the array, and write it back to the buffer to give the mirror effect, but obviously I would never actually try this.

Cube map image orientation is a bit strange. If you look at the table in the extension spec, the images along the X and Z axes have an “up” vector of (0, -1, 0).

There is a ppt in the OpenGL SDK documentation from NVIDIA that may help: cubemap_image_orientation.ppt. It’s in the “misc” directory.

Once you understand the orientation, it’s very easy to render dynamic cubemaps.
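
For completeness, a hedged sketch of the application side once the faces are filled in, using reflection-map texgen (ARB tokens; DrawReflectiveObject() is a hypothetical helper, and the texture matrix may additionally need the inverse of the view rotation so the reflection vector ends up in the cube map's space):

glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubeTex);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
DrawReflectiveObject();            /* the object that receives the reflection */
glDisable(GL_TEXTURE_CUBE_MAP_ARB);
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
glDisable(GL_TEXTURE_GEN_R);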

Hope this helps -
Cass

Cass,
Is this why a lot of cube maps supplied in the nVidia demos are upside down?

I found something weird about cube maps. Using fixed function pipeline cubemapping seemed okay… but when I leeched the algorithm out of the vertex shader demo, and used it in OpenGL, I had to flip the images back the right way to make it work… very odd.

or maybe I was being lame or something…

What is this upside down cube map malarky all about anyway?

Cheers,
Nutty