2D Seams

Edit: New Question below :smiley:

I am in the process of coming up with a way to get rid of the use of glBegin and glEnd in my program.

Now in my game, each map has multiple layers (top, middle, bottom and players), and each map may even have layers below layers, but ignore that for now. These all need to be rendered. Before, when I was using glBegin, it was as easy as figuring out which part of the map I was rendering and then calling glVertex for each tile that is visible.

I have tried glDrawArrays, but the problem is that my program then uses a lot of memory, since I keep a vertex array and a texture-coordinate array for each layer. This is a lot more than just storing the X, Y and texture ID of each tile in memory and rendering from that each frame.
As I am using quads, that is 8 floats per tile for the positions, plus another 8 for the texture coordinates as well.

So I guess my question is: what would be the best way to store the data? I know that vertex data can be stored on the graphics card using a VBO, but I'm not entirely sure how to go about it. Ideally I'd like to be able to render just parts of the map.

Also, the map will not be static; users will be able to modify the tiles as they play.

There are all kinds of ways to go about it. If your data were static AND it fit easily on the GPU, you could upload your data to the GPU in static VBOs and then just issue draw calls on it. But your data is dynamic.

Another approach at the complete opposite end of the spectrum is to store nothing on the card and just stream what you need dynamically from CPU memory. There are several ways to do this. The traditional (deprecated) way is to use client arrays. This is simple and documented well all over the place. However, the forward looking approach is to stream your data to the GPU in dynamic VBOs, using an approach like what’s described in this thread.
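As a rough illustration of that streaming approach (not the exact code from the linked thread), here is a minimal sketch in plain C, assuming headers that expose GL 1.5 buffer objects (e.g. via GLEW on Windows), fixed-function vertex arrays, interleaved x,y,u,v floats and GL_QUADS to match the rest of the thread. The MAX_TILES constant and function names are made up for the example.

```c
#include <GL/gl.h>

#define MAX_TILES       4096
#define FLOATS_PER_TILE (4 * 4)   /* 4 corners * (x, y, u, v) */

static GLuint streamVBO;

void init_stream_vbo(void)
{
    glGenBuffers(1, &streamVBO);
    glBindBuffer(GL_ARRAY_BUFFER, streamVBO);
    /* Allocate storage once; GL_STREAM_DRAW hints that it is rewritten often. */
    glBufferData(GL_ARRAY_BUFFER, MAX_TILES * FLOATS_PER_TILE * sizeof(GLfloat),
                 NULL, GL_STREAM_DRAW);
}

void draw_tiles(const GLfloat *tileData, int tileCount)
{
    GLsizeiptr bytes = (GLsizeiptr)tileCount * FLOATS_PER_TILE * sizeof(GLfloat);

    glBindBuffer(GL_ARRAY_BUFFER, streamVBO);
    /* Orphan the old storage so the driver does not stall on in-flight draws... */
    glBufferData(GL_ARRAY_BUFFER, MAX_TILES * FLOATS_PER_TILE * sizeof(GLfloat),
                 NULL, GL_STREAM_DRAW);
    /* ...then upload just this frame's data. */
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, tileData);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (const void *)0);
    glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat),
                      (const void *)(2 * sizeof(GLfloat)));

    glDrawArrays(GL_QUADS, 0, tileCount * 4);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```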

And in the middle, if say your vertex data is static but just which vertices you draw is dynamic, you could upload your vertex attributes to static VBOs on the GPU, and then just periodically change your index VBOs (used for DrawElements) to change which vertices/triangles you draw.
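A minimal sketch of that middle-ground layout might look like the following, again assuming fixed-function GL and quads. The static VBO is filled once, and only the small index buffer is re-uploaded when the visible set changes; the helper names are hypothetical.

```c
#include <GL/gl.h>

static GLuint tileVBO, tileIBO;

/* Upload all tile vertices (x, y, u, v per corner) once, as static data. */
void upload_static_tiles(const GLfloat *allTileVerts, int totalTiles)
{
    glGenBuffers(1, &tileVBO);
    glBindBuffer(GL_ARRAY_BUFFER, tileVBO);
    glBufferData(GL_ARRAY_BUFFER,
                 (GLsizeiptr)totalTiles * 4 * 4 * sizeof(GLfloat),
                 allTileVerts, GL_STATIC_DRAW);

    glGenBuffers(1, &tileIBO);
}

/* Draw only the quads whose corner indices are listed (4 indices per quad). */
void draw_selected_tiles(const GLuint *indices, int indexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, tileVBO);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (const void *)0);
    glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat),
                      (const void *)(2 * sizeof(GLfloat)));

    /* Re-upload just the indices of the visible quads. */
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, tileIBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 (GLsizeiptr)indexCount * sizeof(GLuint),
                 indices, GL_DYNAMIC_DRAW);

    glDrawElements(GL_QUADS, indexCount, GL_UNSIGNED_INT, (const void *)0);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```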

Lots of options. Which one is right for you depends on the details of your problem.

Well, I came up with an idea: the camera moves, and as it moves, the player's tile location changes. At the start of the program a vertex array holding around 40 tiles is created.

It is updated when the player moves out of the area the current vertex array covers. It is also updated manually when an object/tile changes. This means I don't have to render everything; I only keep a smaller set of vertices at a time, but it is a bit harder to manage (making sure everything gets updated).
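Something along these lines is presumably what is meant; this is only a sketch, and the tile size, view dimensions and the map_tile_at/tile_uv helpers are assumptions, not code from the thread.

```c
#include <math.h>

#define TILE_SIZE 32
#define VIEW_W    8      /* tiles covered horizontally */
#define VIEW_H    5      /* tiles covered vertically   */

typedef struct { float x, y, u, v; } Vertex;

static Vertex visible[VIEW_W * VIEW_H * 4];
static int    originX = -9999, originY = -9999;  /* tile coords of cached region */

/* Hypothetical helpers: map lookup and atlas coordinate lookup. */
extern int  map_tile_at(int tx, int ty);
extern void tile_uv(int tileIndex, float *u0, float *v0, float *u1, float *v1);

void update_visible_tiles(float camX, float camY)
{
    int newOx = (int)floorf(camX / TILE_SIZE);
    int newOy = (int)floorf(camY / TILE_SIZE);

    /* Only rebuild when the camera has crossed into a new tile. */
    if (newOx == originX && newOy == originY)
        return;

    originX = newOx;
    originY = newOy;

    int n = 0;
    for (int ty = 0; ty < VIEW_H; ++ty) {
        for (int tx = 0; tx < VIEW_W; ++tx) {
            float x = (float)(originX + tx) * TILE_SIZE;
            float y = (float)(originY + ty) * TILE_SIZE;
            float u0, v0, u1, v1;
            tile_uv(map_tile_at(originX + tx, originY + ty), &u0, &v0, &u1, &v1);

            visible[n++] = (Vertex){ x,             y,             u0, v0 };
            visible[n++] = (Vertex){ x + TILE_SIZE, y,             u1, v0 };
            visible[n++] = (Vertex){ x + TILE_SIZE, y + TILE_SIZE, u1, v1 };
            visible[n++] = (Vertex){ x,             y + TILE_SIZE, u0, v1 };
        }
    }
    /* The rebuilt array can then be streamed to the GPU as in the VBO sketch. */
}
```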

Now I have a new problem. I have set my game's viewport based on the current window width and height, and I have set the ortho projection to a fixed value that never changes.

Basically this means that when I make the window smaller it stretches, and expands when I make it bigger.

When I originally rendered my textured quads, each texture was split out of the original tileset and then bound to its own texture ID.

In the new system using Vertex Arrays, I have used texture coordinates so that I only use the one 512x512 texture sheet.

Now the problem is that if I don't use ClampToEdge I get seams, but if I do, I get what looks like shimmering all over the screen.

I guess my new question is: is this a limitation of using one big texture instead of splitting the tiles up? Here's a picture of the seams; I would show you a picture of the second problem, but it is hard to screenshot. It's almost like there are lines going through all my tiles, but they are coloured the same as the textures.

I've been scouring the internet for a while, and some people say that you need to put a 1-pixel border around your tiles, but this seems a bit annoying to have to do.
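One commonly suggested alternative to the 1-pixel border (no guarantee it fits this exact case) is to inset each tile's texture coordinates by half a texel, so the sampler never reads into the neighbouring tile. A sketch, using the 512x512 sheet mentioned above and an assumed 32-pixel tile size; it matches the tile_uv helper shape used in the earlier sketch:

```c
#define ATLAS_SIZE 512.0f   /* size of the texture sheet, from the thread */
#define TILE_PX     32.0f   /* assumed tile size, for illustration only   */

/* Compute normalized atlas coordinates for a tile, inset by half a texel. */
void tile_uv(int tileIndex, float *u0, float *v0, float *u1, float *v1)
{
    int tilesPerRow = (int)(ATLAS_SIZE / TILE_PX);
    int tx = tileIndex % tilesPerRow;
    int ty = tileIndex / tilesPerRow;

    float half = 0.5f / ATLAS_SIZE;   /* half a texel in normalized coords */

    *u0 = (tx * TILE_PX) / ATLAS_SIZE + half;
    *v0 = (ty * TILE_PX) / ATLAS_SIZE + half;
    *u1 = ((tx + 1) * TILE_PX) / ATLAS_SIZE - half;
    *v1 = ((ty + 1) * TILE_PX) / ATLAS_SIZE - half;
}
```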

CLAMP_TO_EDGE is almost always the right solution. I don’t understand the shimmering thing. Never seen that, over many years and many CLAMP_TO_EDGE textures.

Are you using linear filtering?

When I load in the image I set it to Clamp to Edge and also to Nearest.
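For reference, that setup corresponds roughly to the raw GL calls below (the OpenTK ClampToEdge/Nearest enums map to the same constants). The mipmap filter suggested later in the thread would replace GL_NEAREST for minification; the texture handle name here is hypothetical.

```c
#include <GL/gl.h>
/* GL_CLAMP_TO_EDGE is core since GL 1.2; older gl.h headers may need glext.h. */

void set_tile_texture_params(GLuint atlasTexture)
{
    glBindTexture(GL_TEXTURE_2D, atlasTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
```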

The shimmering I see only occurs when two things happen: when I translate the map, and when the window is either small or at a size that probably does not divide evenly.

I'd post my code, but it's in C# and OpenTK so I don't know how much use it would be. Here's a demo you can try…

http://impacttechnique.com/VertexTest.rar

That was using Linear filtering and a 1 pixel border.

Run the program and make the window smaller. Press G to start scrolling.

Maybe I’m just being pedantic and expecting it to look right in every resolution is impossible.

Edit: Here is a demo using the proper texture coordinates and nearest filtering. It is really apparent in this demo when you scroll and change the window size.

http://impacttechnique.com/VertexTestNearest.rar

The former (texture minification) is when you want MIPmapping. Otherwise, it becomes a crapshoot which texels you end up seeing represented in pixels, when you really want an average of all of the texels underneath that pixel (or as close as we can get to that in real time).

Run the program and make the window smaller. Press G to start scrolling.

Maybe I’m just being pedantic and expecting it to look right in every resolution is impossible.

No, you’re not. Same issue. Use MIPmaps and LINEAR_MIPMAP_LINEAR filtering. You might also find you want multisampling or other edge-AA technique.
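A sketch of that suggestion, assuming a context/loader (e.g. GLEW) that exposes glGenerateMipmap; on older contexts gluBuild2DMipmaps or the GL_GENERATE_MIPMAP texture parameter would do the same job. The texture handle name is hypothetical.

```c
#include <GL/gl.h>

void enable_trilinear_mipmaps(GLuint atlasTexture)
{
    glBindTexture(GL_TEXTURE_2D, atlasTexture);
    glGenerateMipmap(GL_TEXTURE_2D);   /* build the full mip chain from level 0 */

    /* Trilinear filtering when the texture is minified (window shrunk)... */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    /* ...and plain linear when magnified. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```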

Alright, I have enabled mipmaps, and there isn't any sort of full-screen shimmering any more, but when scrolling, transparent textures shimmer. Would it be smarter to generate the mipmaps manually and put them into a DDS file?

By transparent textures, do you mean alpha-blended?
Then make sure you use premultiplied alpha, as explained by Tom Forsyth:
http://home.comcast.net/~tom_forsyth/blog.wiki.html#[[Premultiplied%20alpha]]

Sorry, I wasn't clear. I mean textures where some parts have 255 alpha and other parts have 0 alpha, so that when they are rendered, whatever is behind them is only visible through the transparent parts.

I don't want to render a quad as transparent, but rather render a quad with a partially transparent bitmap (which I can do, but it shimmers).

So it is alpha-tested and not alpha-blended?
Then add alpha blending, in premultiplied alpha mode, to soften the edges.
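A sketch of what that looks like in practice: multiply the RGB channels by alpha once at load time, then blend with GL_ONE / GL_ONE_MINUS_SRC_ALPHA. The function names are made up, and the pixel data is assumed to be tightly packed RGBA8.

```c
#include <GL/gl.h>

/* Convert straight-alpha RGBA8 pixels to premultiplied alpha in place. */
void premultiply_alpha(unsigned char *pixels, int width, int height)
{
    for (int i = 0; i < width * height; ++i) {
        unsigned char a = pixels[i * 4 + 3];
        pixels[i * 4 + 0] = (unsigned char)(pixels[i * 4 + 0] * a / 255);
        pixels[i * 4 + 1] = (unsigned char)(pixels[i * 4 + 1] * a / 255);
        pixels[i * 4 + 2] = (unsigned char)(pixels[i * 4 + 2] * a / 255);
    }
}

void enable_premultiplied_blending(void)
{
    glEnable(GL_BLEND);
    /* The source colour is already multiplied by its alpha, so just add it
     * on top of the destination scaled by (1 - source alpha). */
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}
```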

You should post a screenshot if you cannot describe your problem better than “shimmering”.