I want better cubemap edges

It is a known issue that each side of a cubemap is treated as a separate texture when filtering and mipmapping, causing a hard edge to appear in reflections and other effects. Also, there are a lot of nice lighting tricks that could be achieved with shaders using mipmap-biased cubemap lookups, but these hard edges become even more apparent when artificially raising the mipmap level.

Has anyone out there given any thought to (or done any work on)
a) graphics hardware that interpolates between sides on cubemap lookups, or
b) a cubemap mipmap extension that averages edge pixels together at each mipmap level in order to hide the edges (this would cause duplicate pixels at the edges, but that would be a lot more difficult to notice than a hard edge), or
c) any other fix that I haven’t thought of?
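To make option (b) concrete, here is a rough sketch of averaging the texels along one shared edge so both faces store the same value there. The single-channel float format and the assumption that one face’s right column meets the other’s left column are illustrative only; real face adjacency and edge orientation depend on the cube map layout.

```c
#include <stddef.h>

/* Average the texels along a shared seam so both faces hold the same
 * color there (option (b)): faceA's rightmost column is assumed to meet
 * faceB's leftmost column; n is the face size at this mip level. */
static void average_seam(float *faceA, float *faceB, size_t n)
{
    for (size_t row = 0; row < n; ++row) {
        float *a = &faceA[row * n + (n - 1)]; /* right edge of A */
        float *b = &faceB[row * n + 0];       /* left edge of B  */
        float avg = 0.5f * (*a + *b);
        *a = avg; /* duplicated texels, but no hard seam */
        *b = avg;
    }
}
```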

a) graphics hardware that interpolates between sides on cubemap lookups, or
I believe the newer 3Dlabs boards do this.

b) a cubemap mipmap extension that averages edge pixels together at each mipmap level in order to hide the edges (this would cause duplicate pixels at the edges, but that would be a lot more difficult to notice than a hard edge),
This is basically a texture border. Bet you never thought that would be useful, huh? I’m not sure if any hardware accelerates texture borders on cubemap faces, but I know that Nvidia cards accelerate texture borders on “regular” 2D textures, so it might be worth a shot.
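As a rough illustration of what building such a bordered face looks like on the CPU before uploading it with glTexImage2D(…, border = 1, …): the straight copies below ignore the per-face edge orientations a real cube map layout requires, and the single-channel format and corner averaging are simplifying assumptions.

```c
#include <stddef.h>

/* Build an (n+2) x (n+2) bordered image from an n x n face plus the
 * adjacent edge texels of its four neighbouring faces.  Which neighbour
 * edge maps to which border side (and in which direction) depends on the
 * cube map face layout; the straight copies here are a simplification. */
static void make_bordered_face(const float *face,   /* n x n texels   */
                               const float *top,    /* n edge texels  */
                               const float *bottom,
                               const float *left,
                               const float *right,
                               float *out, size_t n) /* (n+2) squared */
{
    size_t w = n + 2;
    for (size_t r = 0; r < n; ++r)
        for (size_t c = 0; c < n; ++c)
            out[(r + 1) * w + (c + 1)] = face[r * n + c];
    for (size_t i = 0; i < n; ++i) {
        out[0 * w + (i + 1)]       = top[i];
        out[(n + 1) * w + (i + 1)] = bottom[i];
        out[(i + 1) * w + 0]       = left[i];
        out[(i + 1) * w + (n + 1)] = right[i];
    }
    /* Corner texels: average the two adjoining border edges. */
    out[0]                       = 0.5f * (top[0] + left[0]);
    out[w - 1]                   = 0.5f * (top[n - 1] + right[0]);
    out[(n + 1) * w]             = 0.5f * (bottom[0] + left[n - 1]);
    out[(n + 1) * w + (n + 1)]   = 0.5f * (bottom[n - 1] + right[n - 1]);
}
```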

How are cubemap seams handled when doing omnidirectional shadow-buffer shadows?

John Carmack mentioned this problem a few months ago, and also mentioned that future hardware should fix it. (http://www.armadilloaerospace.com/n.x/johnc/recent%20updates/archive?news_id=290)

If you need to do it right now, and border texels don’t cut it, I would think you could do your own mipmap generation at load time, manually sampling the other faces and feeding those results into the border texels at each mipmap LOD. It wouldn’t really work for dynamic cube maps, though.
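The per-level half of that load-time approach might look like the sketch below: a 2x2 box filter produces each successive mip of one face. After each level you would also fetch the matching edge texels from the adjacent faces (downsampled the same way) and write them into this level’s border ring; that lookup depends on your face layout and is omitted here.

```c
#include <stddef.h>

/* Produce the next mip level of one n x n cube face with a 2x2 box
 * filter (n assumed even, single-channel floats for illustration).
 * dst must hold (n/2) * (n/2) texels. */
static void downsample_face(const float *src, float *dst, size_t n)
{
    size_t half = n / 2;
    for (size_t r = 0; r < half; ++r)
        for (size_t c = 0; c < half; ++c)
            dst[r * half + c] = 0.25f *
                (src[(2 * r) * n + 2 * c]     + src[(2 * r) * n + 2 * c + 1] +
                 src[(2 * r + 1) * n + 2 * c] + src[(2 * r + 1) * n + 2 * c + 1]);
}
```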

P.S. - Ames, IA? Are you somebody I know?

I believe the latest hardware supports cube map edge texels, and next generation stuff will likely be able to filter between the actual maps. Thus, if you upload border texels for your cube map faces (on all MIP levels) most of the problem is likely to go away.

A useful tool for (static) cube maps:

ATI CubeMapGen

Thanks for the tips everyone. And yes, Jeff, I’m Terry. We met at the coffee shop a couple weeks ago.

NVIDIA supports borders on cubemaps. This solves the problem you’re talking about, but does use quite a lot of additional memory because of the way cubemaps are arranged in memory.

The code to fill in the borders from the adjacent faces is a bit tricky, but Cass has already written it for you:
http://cvs1.nvidia.com/DEMOS/OpenGL/src/shared/cubemap_borders.cpp

Originally posted by simongreen:
[b]NVIDIA supports borders on cubemaps. This solves the problem you’re talking about, but does use quite a lot of additional memory because of the way cubemaps are arranged in memory.

The code to fill in the borders from the adjacent faces is a bit tricky, but Cass has already written it for you:
http://cvs1.nvidia.com/DEMOS/OpenGL/src/shared/cubemap_borders.cpp [/b]
There’s a subtle bug in that code: note that there are two places that set the texels on the bottom border of face 5, but none that set the top (i.e. bimg[5][B_BOT_ROW(i)+c] is assigned to twice).

One of those should presumably be B_TOP_ROW instead. (Not sure which, trying to figure cube map addressing out always makes my head hurt.)
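For anyone following along without the source: the definitions below are hypothetical reconstructions of what those macros plausibly compute, not the actual NVIDIA code, just to show why assigning through B_BOT_ROW twice leaves the top border row of that face untouched.

```c
#include <stddef.h>

/* Hypothetical reconstructions of the row-index macros referenced above
 * (the real definitions live in cubemap_borders.cpp).  For a bordered
 * face image of width w, they give the index of the first texel of the
 * top and bottom border rows.  If both writes to face 5 use b_bot_row,
 * the row at b_top_row is never filled in. */
static size_t b_top_row(size_t w) { (void)w; return 0; }
static size_t b_bot_row(size_t w) { return (w - 1) * w; }
```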

-matt

Originally posted by idr:
I know that Nvidia cards accelerate texture borders on “regular” 2D textures
On what specific hardware? Just curious, I thought this caused a software fallback.

Originally posted by simongreen:
NVIDIA supports borders on cubemaps.
Are they accelerated? On what specific hardware?

Cass or Mark would know for sure, but, AFAIK, texture borders are hardware accelerated at least as far back as the original Geforce. They may even be accelerated on the TNT family, but I’m not sure.