What's more costly?

Hi there,

I just wanted to ask which approach is best for drawing a 2D tilemap.

  1. Have one big buffer holding vertices for all tiles of the entire map.
    Draw the entire buffer each frame, even though many tiles will fall outside the clipping area.

  2. Have one small buffer which can only hold as many tiles as can be seen at any one time.
    Constantly update the buffer contents to show the tiles currently visible from the player's perspective (see the sketch below).
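
To make option 2 concrete, here is roughly the per-frame rebuild I have in mind. This is only a sketch: tile_at(), the constants and the atlas layout are stand-ins for my real code, map-edge clamping is left out, and on GL 1.4 you would use the ARB-suffixed VBO entry points instead.

[CODE]
/* Sketch of option 2: refill one small dynamic VBO with only the visible
 * tiles each frame. Assumes vertex/texcoord client arrays are enabled. */
#include <GL/gl.h>
#include <GL/glext.h>

#define TILE_W 16
#define TILE_H 16
#define VIS_W  41          /* screen tiles across, plus one for scrolling */
#define VIS_H  26
#define ATLAS_COLS 16      /* tiles per row in a square texture atlas */

typedef struct { float x, y, u, v; } Vertex;
static Vertex verts[VIS_W * VIS_H * 4];

extern int tile_at(int tx, int ty);   /* map lookup, defined elsewhere */

void draw_visible_tiles(int cam_x, int cam_y, GLuint vbo)
{
    int first_tx = cam_x / TILE_W, first_ty = cam_y / TILE_H;
    int n = 0;

    for (int ty = 0; ty < VIS_H; ++ty) {
        for (int tx = 0; tx < VIS_W; ++tx) {
            int tile = tile_at(first_tx + tx, first_ty + ty);
            float x = (float)((first_tx + tx) * TILE_W - cam_x);
            float y = (float)((first_ty + ty) * TILE_H - cam_y);
            float u = (tile % ATLAS_COLS) / (float)ATLAS_COLS;
            float v = (tile / ATLAS_COLS) / (float)ATLAS_COLS;
            float d = 1.0f / ATLAS_COLS;

            /* one quad per tile, texcoords from the atlas cell */
            verts[n++] = (Vertex){ x,          y,          u,     v     };
            verts[n++] = (Vertex){ x + TILE_W, y,          u + d, v     };
            verts[n++] = (Vertex){ x + TILE_W, y + TILE_H, u + d, v + d };
            verts[n++] = (Vertex){ x,          y + TILE_H, u,     v + d };
        }
    }

    /* stream the rebuilt data into the pre-allocated dynamic buffer */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, (GLsizeiptr)(n * sizeof(Vertex)), verts);
    glVertexPointer(2, GL_FLOAT, sizeof(Vertex), (void *)0);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (void *)(2 * sizeof(float)));
    glDrawArrays(GL_QUADS, 0, n);
}
[/CODE]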

I know I should benchmark this myself, but I don't have different kinds of hardware to test on.
I would like an answer that holds for the majority of setups, ideally a little pro/con list for each method.

Thank you all in advance.

P.S. I am currently going with the second option because it is independent of map size.

I think the best solution is to store the map in chunks, say 32x32 tiles, and render those only when they are visible.
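
Something along these lines, as a sketch. Chunk, chunk_at() and the constants are stand-ins for whatever your map storage looks like; the chunk VBOs are assumed to hold world-space vertices built once at load time, so no per-chunk transform is needed, and client arrays are assumed to be enabled.

[CODE]
#include <GL/gl.h>

#define CHUNK_TILES 32
#define TILE_W 16
#define TILE_H 16
#define CHUNK_W (CHUNK_TILES * TILE_W)
#define CHUNK_H (CHUNK_TILES * TILE_H)

typedef struct {
    GLuint  vbo;          /* static buffer, filled when the map loads */
    GLsizei vert_count;
} Chunk;

extern Chunk *chunk_at(int cx, int cy);   /* NULL outside the map */

void draw_visible_chunks(int cam_x, int cam_y, int screen_w, int screen_h)
{
    /* range of chunks overlapping the camera rectangle */
    int first_cx = cam_x / CHUNK_W, last_cx = (cam_x + screen_w - 1) / CHUNK_W;
    int first_cy = cam_y / CHUNK_H, last_cy = (cam_y + screen_h - 1) / CHUNK_H;

    for (int cy = first_cy; cy <= last_cy; ++cy) {
        for (int cx = first_cx; cx <= last_cx; ++cx) {
            Chunk *c = chunk_at(cx, cy);
            if (!c)
                continue;
            glBindBuffer(GL_ARRAY_BUFFER, c->vbo);
            glVertexPointer(2, GL_FLOAT, 4 * sizeof(float), (void *)0);
            glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(float),
                              (void *)(2 * sizeof(float)));
            glDrawArrays(GL_QUADS, 0, c->vert_count);
        }
    }
}
[/CODE]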

Another option is to create a grid which just covers the screen regardless of alignment (so e.g. if you were using 16x16 tiles with a 640x400 screen, the grid would be 41x26 tiles) and set all of the vertex attributes in the vertex shader based upon the scroll offset.
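
For illustration, the vertex shader could look something like this (all names are made up; the grid geometry never changes, and the per-tile atlas texcoords would only need refreshing when the view crosses a tile boundary):

[CODE]
/* Illustrative GLSL for the fixed-grid idea, embedded as a C string.
 * Per frame only the "scroll" uniform changes. */
static const char *grid_vertex_shader =
    "uniform vec2 scroll;      // camera offset in pixels\n"
    "uniform vec2 tileSize;    // e.g. vec2(16.0, 16.0)\n"
    "attribute vec2 cell;      // fixed integer grid cell\n"
    "attribute vec2 corner;    // quad corner, 0.0 or 1.0\n"
    "void main() {\n"
    "    // shift the whole grid by the sub-tile part of the scroll\n"
    "    vec2 pos = (cell + corner) * tileSize - mod(scroll, tileSize);\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 0.0, 1.0);\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;  // atlas coords for this tile\n"
    "}\n";
[/CODE]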

That's a really smart idea, I didn't even think of it.
The only problem is that I want it to run on hardware as weak as little netbooks, so I cannot use custom shaders.

[QUOTE=Cornix;1257079]That's a really smart idea, I didn't even think of it.
The only problem is that I want it to run on hardware as weak as little netbooks, so I cannot use custom shaders.[/QUOTE]

Yes you can, even “weak little netbooks” support shaders these days.

Maybe that is true for the models currently on the market, but netbooks only two years old with an Intel GMA 3150 certainly do not (even GL 1.5 is not fully supported: there is no occlusion query, and the driver reports GL 1.4).
Also, for any Intel GPU older than the HD 2000 series, you practically have to be an enthusiast to get it to support GL 2.0.

I am sorry, but the model I am testing with does not.
Still, given the two options above (or maybe another one I didn't think of), what would you say is more cost effective?

It will support GL_ARB_vertex_program, however (and this hardware does support SM2 HLSL under D3D, so it's a lot more capable than what's exposed via Intel's GL driver).
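
For example, a minimal pass-through program for that path could look like the following sketch (on Windows the ARB entry points have to be fetched with wglGetProcAddress first, and a scroll offset would be fed in through glProgramEnvParameter4fARB):

[CODE]
/* Minimal ARB_vertex_program example: plain MVP transform plus
 * texcoord/color pass-through. */
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

static const char *vp_src =
    "!!ARBvp1.0\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "DP4 result.position.x, mvp[0], vertex.position;\n"
    "DP4 result.position.y, mvp[1], vertex.position;\n"
    "DP4 result.position.z, mvp[2], vertex.position;\n"
    "DP4 result.position.w, mvp[3], vertex.position;\n"
    "MOV result.texcoord[0], vertex.texcoord[0];\n"
    "MOV result.color, vertex.color;\n"
    "END\n";

GLuint load_vertex_program(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
    glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(vp_src), vp_src);
    glEnable(GL_VERTEX_PROGRAM_ARB);
    return prog;
}
[/CODE]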

Really, if the OP is aiming for this kind of hardware level, then he’s going to be fighting Intel driver bugs every step of the way; the only realistic options I’d consider would be (1) revising the hardware requirements, or (2) porting to D3D.