GL_CLAMP*... is there another/better way?

Okay, first off, I don’t want to use GL_CLAMP_TO_EDGE because it’s not widely supported, and I’m writing a mass-market app that I want to work without problems on the widest range of hardware possible.

Here’s my issue:
On some hardware (like Voodoos), textures above 256x256 aren’t supported. So in some cases I’m taking larger textures (just bitmap data), cutting them up into smaller chunks, and making individual textures out of those chunks. Then I render quads tiled together with these textures to reproduce what would normally have been a single large quad with a single large texture.
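Roughly, the chopping looks like this (a simplified sketch, not my exact code; names like upload_tiles and texIDs are made up, and it assumes a tightly packed RGB image whose dimensions are multiples of 256):

[code]
#include <GL/gl.h>

#define TILE 256

/* Sketch of the carving step (made-up names; assumes a tightly packed
 * RGB image whose width and height are multiples of TILE). The unpack
 * state lets GL lift each sub-rectangle straight out of the big image,
 * so no manual copying is needed. */
void upload_tiles(const unsigned char *pixels, int imgW, int imgH,
                  const GLuint *texIDs /* (imgW/TILE)*(imgH/TILE) ids */)
{
    int cols = imgW / TILE, rows = imgH / TILE;

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, imgW);  /* stride of the big image */

    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            glPixelStorei(GL_UNPACK_SKIP_ROWS,   r * TILE);
            glPixelStorei(GL_UNPACK_SKIP_PIXELS, c * TILE);

            glBindTexture(GL_TEXTURE_2D, texIDs[r * cols + c]);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TILE, TILE, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, pixels);
        }
    }

    /* restore unpack defaults */
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
}
[/code]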

Naturally, I’m running into the problem of border lines appearing in my texture. I once solved this problem without GL_CLAMP_TO_EDGE on my sky box by tweaking the texture coords so that the outer pixel of each texture is dropped. But when I take that approach in this app, it clearly drops needed pixels, and it’s quite noticeable. If I leave the texture coords at 0-1, it looks just fine except for the border lines running through my image.
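(For reference, the tex-coord tweak I mean looks roughly like this, except that I was insetting a whole pixel. Insetting by half a texel is the gentler variant I’ve seen suggested: it keeps the bilinear sample centers on the outermost texels so the border color is never sampled, though adjacent tiles still won’t filter into each other. Sketch only; the function name and parameters are made up.)

[code]
#include <GL/gl.h>

/* Sketch: draw one 256x256 tile with texture coords inset by half a
 * texel on each side, so bilinear sample centers sit exactly on the
 * outermost texels and the border color never bleeds in. */
void draw_tile_quad(float x0, float y0, float x1, float y1)
{
    const float h = 0.5f / 256.0f;   /* half a texel */

    glBegin(GL_QUADS);
        glTexCoord2f(h,        h);        glVertex2f(x0, y0);
        glTexCoord2f(1.0f - h, h);        glVertex2f(x1, y0);
        glTexCoord2f(1.0f - h, 1.0f - h); glVertex2f(x1, y1);
        glTexCoord2f(h,        1.0f - h); glVertex2f(x0, y1);
    glEnd();
}
[/code]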

So what is a solution? The last post in this thread gave a possible solution: http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/007306.html

But this seems a bit much… there’s got to be a better way. I just can’t believe that OpenGL would FORCE you to have SOME kind of border and blend it into your texture.

Any help would be very appreciated!

Can’t you just use clamp to edge when it’s available and fall back to clamp when it isn’t?

The only mass-market HW I can think of that doesn’t support the clamp-to-edge extension implements clamp as clamp to edge anyway.
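Something along these lines (a rough sketch; both the version test and the substring match are deliberately crude, but it shows the idea):

[code]
#include <GL/gl.h>
#include <string.h>

/* Core in OpenGL 1.2; older headers may not define it. */
#ifndef GL_CLAMP_TO_EDGE
#define GL_CLAMP_TO_EDGE 0x812F
#endif

/* Prefer clamp-to-edge when the driver advertises it (GL 1.2+ or the
 * edge-clamp extension), otherwise fall back to plain GL_CLAMP.
 * Needs a current GL context. */
GLenum choose_clamp_mode(void)
{
    const char *ver = (const char *)glGetString(GL_VERSION);
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);

    if ((ver && strcmp(ver, "1.2") >= 0) ||
        (ext && strstr(ext, "texture_edge_clamp")))  /* SGIS or EXT */
        return GL_CLAMP_TO_EDGE;

    return GL_CLAMP;
}
[/code]

Then just pass the result to glTexParameteri for GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T.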

-Evan

Nope. It’s actually the opposite. GL_CLAMP_TO_EDGE works like GL_CLAMP, even on nVidia hardware, unless you change the setting in the registry or in the driver config. I have tested on several video cards and GL_CLAMP_TO_EDGE has yet to solve this problem for me. Nothing I’ve tested it on (and I haven’t tested GF3 and above) has worked; it still shows the border line. VERY frustrating!!!

Actually, on nVidia it’s GL_CLAMP that works as GL_CLAMP_TO_EDGE should, not the other way around. GL_CLAMP_TO_EDGE should be supported on pretty much all hardware worth mentioning. If GL_CLAMP_TO_EDGE doesn’t solve your problem, though, that’s another matter. As far as I can tell it really should: there should be no border line with GL_CLAMP_TO_EDGE.

Well, without manually changing the driver setting in the nVidia drivers, using the enum GL_CLAMP_TO_EDGE doesn’t do the trick. Plus, I know for sure that doing the same on Matrox hardware doesn’t work either… And for non-accelerated hardware, who knows. Or would that fall back on the MS software implementation? If so, would it support it or not?

I just can’t believe that OpenGL would FORCE you to have SOME kind of border and blend it into your texture.

OpenGL is very old. The base spec does and expects some things that don’t really make sense these days but probably made sense back when it was created.

Plus, doing the same on Matrox hardware I know for sure doesn’t work either

He said all hardware worth mentioning. Matrox has poor GL support.

As for Microsoft’s implementation, are you developing an application that actually requires performance above 2fps? If so, I wouldn’t worry about whether or not CLAMP_TO_EDGE is supported on it.

Originally posted by Korval:
[b] He said all hardware worth mentioning. Matrox has poor GL support.

As for Microsoft’s implementation, are you developing an application that actually requires performance above 2fps? If so, I wouldn’t worry about whether or not CLAMP_TO_EDGE is supported on it.[/b]

The key is that I’m using OpenGL as a simple means of displaying an image on screen with texture filtering, resizing, etc. My app doesn’t even have to be animated, so in a very real sense FPS is not an issue: as long as I get more than about one frame every 10-15 seconds, I’ll be doing fine. However, it CAN be animated; that’s an option for those with the luxury of more modern hardware.

But the bottom line is that I want this to work on just about ANYTHING out there. Even old, unaccelerated cards.

So, back to the issue at hand: is there no alternative to GL_CLAMP_TO_EDGE? And does the solution described in the last post of the thread I linked above sound viable? I’ve heard that texture borders aren’t accelerated on a lot of hardware, so would the method described in that thread mean that even my users who DO have modern hardware and want to animate my app would hit performance problems because of the texture borders?

As far as I can tell (this is not official by any means), some of this was oversight, or lack of understanding. The whole clamp-with-borders scheme was trying to solve texture tiling issues, and in fact implementors intuited that clamp_to_edge was what was actually desired for most other things. As someone observed, most implementations actually implemented clamp to edge for clamp.

There are a few areas in OpenGL with similar legacy implementation-vs-spec issues; polygon offset is one that springs to mind.

Most notoriously, texture binds were removed for OpenGL 1.0 (they were in IrisGL) to keep it a clean state machine, where display lists were intended to encapsulate stuff like this if you needed it. But the omission is understandable when you consider that subloading was only added at the very end of IrisGL’s lifespan and almost nobody used it. You used to define and then bind everything from materials to texture environments in IrisGL; you don’t in OpenGL.

Okay I just tried the solution mentioned in that thread… but it appears to kill hardware acceleration! CRAP! There has GOT to be a solution!!! This is such a simple thing I’m trying to do…

Actually even Matrox does support texture edge clamp. It depends on which Matrox card, though.

You can emulate GL_CLAMP_TO_EDGE with GL_CLAMP by filling texture borders.
For instance, if your texture image is 256x256, create a 258x258 image, fill the first and last row and column (copying the nearest row/column), and pass it to glTexImage2D specifying a one-pixel border.
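In code, the idea is roughly this (a sketch with a made-up function name; it assumes a tightly packed RGB source):

[code]
#include <GL/gl.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: emulate clamp-to-edge by building a 258x258 copy of a
 * 256x256 RGB image whose outer ring duplicates the nearest edge
 * pixels, then uploading it with a one-pixel border. */
void upload_with_border(const unsigned char *src /* 256*256*3 bytes */)
{
    const int N = 256, B = N + 2;
    unsigned char *dst = malloc((size_t)B * B * 3);
    int y;

    /* interior rows, flanked by duplicated first/last pixels */
    for (y = 0; y < N; ++y) {
        unsigned char *row = dst + ((y + 1) * B + 1) * 3;
        memcpy(row, src + y * N * 3, N * 3);
        memcpy(row - 3, row, 3);                    /* left border pixel  */
        memcpy(row + N * 3, row + (N - 1) * 3, 3);  /* right border pixel */
    }
    /* top and bottom border rows copy the adjacent interior rows,
     * corners included */
    memcpy(dst, dst + B * 3, B * 3);
    memcpy(dst + (B - 1) * B * 3, dst + (B - 2) * B * 3, B * 3);

    /* 258*3 bytes per row isn't a multiple of 4, so drop the default
     * unpack alignment or the rows will be skewed */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    /* width/height include the border; the border argument is 1 */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, B, B, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, dst);
    free(dst);
}
[/code]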

Thanks, but I tried that, and everything suddenly ground to a screeching halt. Apparently, the hardware doesn’t like that. And I’ve heard before that most hardware doesn’t accelerate texture borders.

Any other ideas? I’m running short… :)

Don’t fill your borders; it will kill performance, waste gobs of texture memory, or both.

What’s the problem? Just use clamp to edge if it’s supported and clamp if it’s not. Most of the time both will be correct anyway.

The problem is that neither of these (clamp or clamp to edge) fixes the problem. Neither gets rid of the border lines. I’ve tried both and neither works.

What’s more, even on cards where it is supported, it doesn’t actually work unless you have a registry key set right.

Okay, I just realized I wasn’t setting the parameter while the texture was bound. DOH!!! So… never mind… :)
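For anyone who finds this thread later: glTexParameteri only affects the texture object that is currently bound, so the bind has to come first. A minimal sketch (tex is a placeholder name):

[code]
#include <GL/gl.h>

#ifndef GL_CLAMP_TO_EDGE
#define GL_CLAMP_TO_EDGE 0x812F   /* core in 1.2; older headers omit it */
#endif

/* glTexParameteri only affects the currently bound texture, so
 * bind before setting the wrap mode. */
void set_clamp(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
[/code]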