
Thread: What's more costly?

  1. #1
    Intern Contributor
    Join Date
    Sep 2013
    Posts
    81

    What's more costly?

    Hi there,

    I just wanted to ask which approach is best for drawing a 2D tilemap.
    1) Have one big buffer with vertices for all tiles of the entire map.
    Draw the entire buffer each frame, even though many tiles will probably be outside the clipping area.

    2) Have one small buffer which can only hold as many tiles as can be seen at any one time.
    Constantly update the buffer contents to hold the tiles which are currently visible from the player's perspective.

    I know I should benchmark this myself, but I don't have different kinds of hardware to test on.
    I would like an answer that covers the majority of setups, like a little pro/con list for each method.

    Thank you all in advance.

    P.S. I am currently going with the second option because it is independent of map size.
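    For reference, the per-frame update I have in mind looks roughly like this (untested sketch; emit_tile_quad() is a made-up helper that writes the six vertices of one tile into the scratch array):

        /* Rough sketch of option 2: re-upload only the visible tiles each frame.
           Vertex layout is assumed to be interleaved x,y,u,v floats. */
        static void update_visible_tiles(GLuint vbo, float *scratch,
                                         int first_col, int first_row,
                                         int visible_cols, int visible_rows)
        {
            int floats = 0;
            for (int r = 0; r < visible_rows; ++r)
                for (int c = 0; c < visible_cols; ++c)
                    /* emit_tile_quad() is hypothetical: writes 6 * 4 floats for one tile */
                    floats += emit_tile_quad(scratch + floats,
                                             first_col + c, first_row + r);

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            /* orphan the old storage, then upload just the visible region */
            glBufferData(GL_ARRAY_BUFFER, floats * sizeof(float), NULL, GL_STREAM_DRAW);
            glBufferSubData(GL_ARRAY_BUFFER, 0, floats * sizeof(float), scratch);
        }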

  2. #2
    Junior Member Regular Contributor
    Join Date
    Mar 2012
    Posts
    129
    I think the best solution is to store the map in chunks, say 32x32 tiles each, and render only the chunks that are visible.
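    Roughly like this (untested sketch; the Chunk struct and the fixed-function client arrays are just for illustration):

        /* One static VBO per 32x32 block of tiles; draw only the chunks whose
           world-space bounds intersect the camera rectangle. */
        typedef struct {
            GLuint vbo;
            int    vertex_count;
            float  min_x, min_y, max_x, max_y;   /* world-space bounds of the chunk */
        } Chunk;

        static void draw_visible_chunks(const Chunk *chunks, int chunk_count,
                                        float cam_left, float cam_bottom,
                                        float cam_right, float cam_top)
        {
            for (int i = 0; i < chunk_count; ++i) {
                const Chunk *c = &chunks[i];
                if (c->max_x < cam_left || c->min_x > cam_right ||
                    c->max_y < cam_bottom || c->min_y > cam_top)
                    continue;                      /* chunk is entirely off-screen */
                glBindBuffer(GL_ARRAY_BUFFER, c->vbo);
                glVertexPointer(2, GL_FLOAT, 4 * sizeof(float), (const void *)0);
                glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(float),
                                  (const void *)(2 * sizeof(float)));
                glDrawArrays(GL_TRIANGLES, 0, c->vertex_count);
            }
        }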

  3. #3
    Member Regular Contributor
    Join Date
    Jun 2013
    Posts
    490
    Another option is to create a grid which just covers the screen regardless of alignment (so e.g. if you were using 16x16 tiles with a 640x400 screen, the grid would be 41x26 tiles) and set all of the vertex attributes in the vertex shader based upon the scroll offset.
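    Something along these lines for the vertex shader (untested sketch; the fragment shader would then use the interpolated tile coordinate to look up a tilemap texture, and the attribute/uniform names are made up):

        /* The grid geometry is static; only the scroll uniform changes per frame. */
        static const char *tile_vs =
            "#version 110\n"
            "attribute vec2 a_grid;        /* vertex position within the grid, in tiles */\n"
            "uniform vec2 u_scroll;        /* camera position, in tiles */\n"
            "uniform vec2 u_tile_size;     /* size of one tile in clip-space units */\n"
            "varying vec2 v_map_tile;      /* which map tile this grid cell shows */\n"
            "void main() {\n"
            "    vec2 shifted = a_grid - fract(u_scroll);   /* sub-tile scroll offset */\n"
            "    v_map_tile   = a_grid + floor(u_scroll);   /* integer map coordinate */\n"
            "    gl_Position  = vec4(shifted * u_tile_size - 1.0, 0.0, 1.0);\n"
            "}\n";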

  4. #4
    Intern Contributor
    Join Date
    Sep 2013
    Posts
    81
    Quote Originally Posted by GClements View Post
    Another option is to create a grid which just covers the screen regardless of alignment (so e.g. if you were using 16x16 tiles with a 640x400 screen, the grid would be 41x26 tiles) and set all of the vertex attributes in the vertex shader based upon the scroll offset.
    That's a really smart idea, I didn't even think of it.
    The only problem is that I want it to run on things as simple as weak little netbooks, so I cannot use custom shaders.

  5. #5
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,201
    Quote Originally Posted by Cornix View Post
    That's a really smart idea, I didn't even think of it.
    The only problem is that I want it to run on things as simple as weak little netbooks, so I cannot use custom shaders.
    Yes you can, even "weak little netbooks" support shaders these days.

  6. #6
    Senior Member OpenGL Pro
    Join Date
    Jul 2009
    Posts
    1,144
    Quote Originally Posted by mhagain View Post
    Yes you can, even "weak little netbooks" support shaders these days.
    Maybe that is true for the models currently on the market, but netbooks only two years old with an Intel GMA 3150 certainly don't (even GL 1.5 is not fully supported: there is no occlusion query, and the driver reports GL 1.4).
    Also, for any Intel GPU older than the HD 2000 you need to be an enthusiast to get it to support GL 2.0.

  7. #7
    Intern Contributor
    Join Date
    Sep 2013
    Posts
    81
    Quote Originally Posted by mhagain View Post
    Yes you can, even "weak little netbooks" support shaders these days.
    I am sorry, but the model I am testing with does not.
    Still, given the two options above (or maybe another one I didn't think of), what would you say is more cost-effective?

  8. #8
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,201
    Quote Originally Posted by Aleksandar View Post
    Maybe that is true for the models currently on the market, but netbooks only two years old with an Intel GMA 3150 certainly don't (even GL 1.5 is not fully supported: there is no occlusion query, and the driver reports GL 1.4).
    It will support GL_ARB_vertex_program, however (and this hardware does support SM2 HLSL under D3D, so it's a lot more capable than what's exposed via Intel's GL driver).
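    GClements' scroll-offset idea carries over to it almost directly, something like this (untested sketch; the env parameter slot is arbitrary):

        /* Shift the static grid by the scroll passed in program.env[0],
           then transform by the fixed-function modelview-projection matrix. */
        static const char *tile_vp =
            "!!ARBvp1.0\n"
            "PARAM mvp[4] = { state.matrix.mvp };\n"
            "PARAM scroll = program.env[0];\n"
            "TEMP  pos;\n"
            "ADD pos, vertex.position, scroll;\n"
            "DP4 result.position.x, mvp[0], pos;\n"
            "DP4 result.position.y, mvp[1], pos;\n"
            "DP4 result.position.z, mvp[2], pos;\n"
            "DP4 result.position.w, mvp[3], pos;\n"
            "MOV result.texcoord[0], vertex.texcoord[0];\n"
            "MOV result.color, vertex.color;\n"
            "END\n";

        /* Loaded with glProgramStringARB(GL_VERTEX_PROGRAM_ARB,
           GL_PROGRAM_FORMAT_ASCII_ARB, ...); the scroll is updated per frame with
           glProgramEnvParameter4fARB(GL_VERTEX_PROGRAM_ARB, 0, sx, sy, 0.0, 0.0). */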

    Really, if the OP is aiming for this kind of hardware level, then he's going to be fighting Intel driver bugs every step of the way; the only realistic options I'd consider would be (1) revising the hardware requirements, or (2) porting to D3D.
