Part of the Khronos Group
OpenGL.org


Thread: instancing with geometry shader

  1. #1
    Junior Member Regular Contributor
    Join Date
    Apr 2010
    Posts
    127

    instancing with geometry shader

    Let me reformulate my question.
    Hello. I have a lot of 2D shapes with 4 points each; they are always parallelograms.
    After extrusion each shape has 8 points. My idea is to do the extrusion on the CPU, save all the extruded vertices in a texture, and then do instanced drawing: I would batch-draw many one-point primitives, and in the geometry shader read the points from the texture and emit the vertices.
    I'm trying this because all my geometry is similar and I have a lot of meshes.
    Another approach is to instance with a scale matrix (effectively a 3D vector) and then apply a single transformation to all points. But I can't build a scale matrix that maps a base rectangle to a parallelogram when the two are not similar; for rectangular shapes with different side lengths I think it can be done.
    Naturally I would have a texture holding the scale matrices or the extruded vertices, and an instance ID to fetch the vertex or the matrix from the texture with a sampler.
    Thanks

  2. #2
    Junior Member Regular Contributor
    Join Date
    Apr 2010
    Posts
    127
    Naturally the problem is the size of the texture, but with Vulkan can I upload more memory?

  3. #3
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,475
    Quote Originally Posted by giuseppe500 View Post
    Let me reformulate my question.
    What was your question? I didn't see a question mark.

    Also, I have a few questions.

    What are your goal(s) here?
    What are your constraint(s)?
    Exactly how many extruded parallelograms (parallelepipeds) are we talking about here?
    Is the shape data static or dynamic?
    What kind of lighting/shading do you want to do on each shape?
    What is the vertex format of each vertex (e.g. vec3 positions, vec3 colors)?

    Naturally the problem is the size of texture but with vulkan can upload more memory?
    I don't understand this at all. In OpenGL you can create massive textures that can consume virtually all of your GPU memory. Though it's unclear at this point whether this is a reasonable approach to your problem.

  4. #4
    Member Regular Contributor
    Join Date
    Jul 2012
    Posts
    459
    This sounds related to this.

  5. #5
    Junior Member Regular Contributor
    Join Date
    Apr 2010
    Posts
    127
    Quote Originally Posted by Dark Photon View Post
    What was your question? I didn't see a question mark.

    Also, I have a few questions.

    What are your goal(s) here?
    What are your constraint(s)?
    Exactly how many extruded parallelograms (parallelepipeds) are we talking about here?
    Is the shape data static or dynamic?
    What kind of lighting/shading do you want to do on each shape?
    What is the vertex format of each vertex (e.g. vec3 positions, vec3 colors)?


    I don't understand this at all. In OpenGL you can create massive textures that can consume virtually all of your GPU memory. Though it's unclear at this point whether this is a reasonable approach to your problem.

    What are your goal(s) here?
    My goal is to limit the number of draw calls.


    Exactly how many extruded parallelograms (parallelepipeds) are we talking about here?
    From 100,000 to 300,000.


    Is the shape data static or dynamic?
    The shape data is entirely static.


    What kind of lighting/shading do you want to do on each shape?
    A simple directional light, without shadows.


    What is the vertex format of each vertex (e.g. vec3 positions, vec3 colors)?
    Only a vec3 position and a vec3 normal.


    I'm trying to limit draw calls, and I also need a secondary texture for states: selected/unselected color and only a few other state variables.


    My questions are:
    1) Can I draw all of my meshes from one huge buffer in only one draw call?
    2) I'm also thinking of instanced rendering, because I would have an instance ID that can index into the secondary state texture. Is that a possible solution?
    3) If I use one huge buffer, how can I bind the shapes to the secondary state texture?


    Sorry, Dark Photon, and sorry for my English.
    Bye
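    For reference, that vertex format can be written down as a plain struct (a sketch; the name Vertex is hypothetical), which also pins down the per-vertex size used in the memory estimates later in the thread:

    ```cpp
    #include <iostream>

    // Hypothetical layout for the stated format: vec3 position + vec3 normal,
    // each component a 32-bit float.
    struct Vertex {
        float position[3];
        float normal[3];
    };

    int main() {
        // 2 attributes * 3 components * 4 bytes = 24 bytes, tightly packed.
        static_assert(sizeof(Vertex) == 24, "expected a tightly packed vertex");
        std::cout << sizeof(Vertex) << "\n";
        return 0;
    }
    ```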

  6. #6
    Senior Member OpenGL Guru Dark Photon's Avatar
    Join Date
    Oct 2004
    Location
    Druidia
    Posts
    4,475
    Quote Originally Posted by giuseppe500 View Post
    What are your goal(s) here?
    My goal is to [limit] the number of draw calls.
    Ok. We'll go based on that.

    (Side-question: is your goal really to minimize draw calls, or to minimize frame time?)

    My questions are:
    1) Can I draw all of my meshes from one huge buffer in only one draw call?
    Probably. Doing some back-of-the-napkin estimates, it looks like it'd take ~206 MB of VBO memory for vertex and index lists, worst case (estimate details below).

    You can reduce this worst-case estimate quite a bit just by applying some simple tricks (e.g. don't explicitly store normals, use smaller vertex format than vec3, etc.), and even that is assuming a simple rendering approach without GPU instancing and without a geometry shader. Though more complex, applying geometry instancing could save you a great deal more memory and bandwidth.

    As a first cut, you might just try the simple non-instanced approach because it's easy, and then extend to instancing as needed. If you're a more advanced OpenGL user, feel free to skip directly to geometry instancing (without a geometry shader).

    2) I'm also thinking of instanced rendering, because I would have an instance ID that can index into the secondary state texture. Is that a possible solution?
    A geometry instanced solution definitely seems practical here. Though I'm unclear what you mean by states.

    If it's something with per-instance data you want to pull into the shaders (based on the instance ID) or push into the shaders (via instanced arrays), then sure.

    3) If I use one huge buffer, how can I bind the shapes to the secondary state texture?
    I'd suggest pushing your vertex and index data into the shaders via buffer objects rather than pulling them into the shaders via textures.

    If/when you apply geometry instancing, I'd still recommend this for the basic instance definition (e.g. the parallelepiped). For your per-instance data, you have a choice of how to feed it in: 1) push it the same way using Instanced Arrays, or 2) pull it in via texture lookups based on gl_InstanceID.


    Code :
    WORST CASE VERTEX SIZE:
    -----------------------
     
             12   tris/obj
             24   verts/obj  (4*6)
      *      24   bytes/vert (2*3*4)
      ---------
            576   bytes vertex data/obj (24*24)
      * 300,000   objects/dataset
      ---------
    172,800,000   bytes vertex data/dataset
     
    WORST CASE INDEX SIZE:
    ----------------------
     
             12   tris/obj
     *        3   indices/tri
      ---------
             36   indices/obj
      * 300,000   objects/dataset
      ---------
     10,800,000   indices/dataset
      *       4   bytes/index
      ---------
     43,200,000   bytes index data/dataset
     
     
    WORST CASE VERTEX+INDEX SIZE
    ----------------------------
    172,800,000   bytes vertex data/dataset
     43,200,000   bytes index data/dataset
    -----------
    216,000,000   bytes/dataset
     
    ~= 206 MB
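    The napkin math above can be checked mechanically; a minimal sketch using the same constants:

    ```cpp
    #include <cstdint>
    #include <iostream>

    int main() {
        const std::int64_t objects       = 300000;    // worst case from the thread
        const std::int64_t vertsPerObj   = 24;        // 4 verts * 6 faces
        const std::int64_t bytesPerVert  = 2 * 3 * 4; // pos + normal, vec3 floats
        const std::int64_t trisPerObj    = 12;
        const std::int64_t bytesPerIndex = 4;         // 32-bit indices

        std::int64_t vertexBytes = objects * vertsPerObj * bytesPerVert;    // 172,800,000
        std::int64_t indexBytes  = objects * trisPerObj * 3 * bytesPerIndex; // 43,200,000
        std::int64_t total = vertexBytes + indexBytes; // 216,000,000 bytes ~= 206 MB

        std::cout << total << "\n";
        return 0;
    }
    ```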
    Last edited by Dark Photon; 12-19-2016 at 07:22 PM.

  7. #7
    Junior Member Regular Contributor
    Join Date
    Apr 2010
    Posts
    127
    Thanks, Dark Photon.
    Many thanks.
    I'll start with normal (non-instanced) drawing to get practice with Vulkan; later, if I get a low frame rate, I'll try geometry instancing. I saw many examples on the NVIDIA site.
    1) By "state" in the second texture I mean visible/invisible, color, etc. But that is not my problem; my problem is that the parallelograms in my case are not all equal.
    My idea was to send, via a texture and the instance ID, a scale matrix for each rectangle that changes it from a rectangle into a parallelogram, as in the attached image.
    But I'm very poor at math and don't know how to generate this scale matrix.
    If I manage to generate this matrix, the vertex shader would fetch it from the texture and multiply it with each vertex of the rectangle.
    So I would use only 3 vec3 (the diagonal of the matrix) for each shape.
    If I instead send all the vertices, I have redundancy and the per-shape data grows to 8 vec3.
    But these may be speculations for now.
    I only ask: what would you do in the same situation?
    Now I'll start writing code. I'm not a professional Vulkan programmer, so I'll start with the easy approach and write code.
    If you have time, I'm very interested in geometry instancing with a scale matrix or offsets.
    Many thanks for now.
    Giuseppe.
    Image of different parallelograms
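    One note on the math here: a matrix that maps a rectangle onto a general parallelogram is a shear combined with a scale, not a diagonal scale matrix, so 3 diagonal values per shape are not enough. One way to build the full matrix (a sketch; the helper names are hypothetical) is to put the parallelogram's edge vectors into the matrix columns:

    ```cpp
    #include <cmath>
    #include <cassert>
    #include <iostream>

    struct Vec2 { double x, y; };

    // Column-major 2x2 matrix: m = { e1.x, e1.y, e2.x, e2.y }.
    struct Mat2 { double m[4]; };

    // Hypothetical helper: build the matrix mapping the unit square with
    // corners (0,0),(1,0),(0,1),(1,1) onto the parallelogram spanned by edge
    // vectors e1 and e2. Its columns are simply e1 and e2; for an
    // axis-aligned rectangle this degenerates to a diagonal scale.
    Mat2 unitSquareToParallelogram(Vec2 e1, Vec2 e2) {
        return { { e1.x, e1.y, e2.x, e2.y } };
    }

    Vec2 apply(const Mat2& M, Vec2 p) {
        return { M.m[0] * p.x + M.m[2] * p.y,
                 M.m[1] * p.x + M.m[3] * p.y };
    }

    int main() {
        // Sheared parallelogram: base edge (2,0), slanted edge (1,1).
        Mat2 M = unitSquareToParallelogram({2, 0}, {1, 1});
        Vec2 c = apply(M, {1, 1});  // far corner of the unit square
        assert(std::fabs(c.x - 3) < 1e-12 && std::fabs(c.y - 1) < 1e-12);
        std::cout << c.x << " " << c.y << "\n";
        return 0;
    }
    ```

    In a shader this generalizes to a per-instance mat3/mat4 fetched via gl_InstanceID; being non-diagonal, it costs 4 values per 2D shape (9 in 3D) rather than 3.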

  8. #8
    Member Regular Contributor
    Join Date
    Jul 2012
    Posts
    459
    Quote Originally Posted by giuseppe500 View Post
    Now i start to write code i'm not a professional programmer on vulkan then start with the easy approach and write code.
    My two cents: do it in OpenGL instead of Vulkan. The Vulkan API is still in development, it is subject to a lot of changes, and programming with Vulkan is, in my opinion, less easy than programming with OpenGL. Plus, you can do instancing with OpenGL too.

    Quote Originally Posted by giuseppe500 View Post
    If you have time i'm very interessed in geometry instancing with scale matrix or offset.
    The main issue with scaling is lighting: you'll generally have to recalculate all the normals. If you have a homothety (uniform scale), it's just a matter of scaling all the normals by the same factor, since the angles between edges are preserved.
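    Concretely, under a non-uniform scale or shear the normals must be transformed by the inverse-transpose of the model matrix to stay perpendicular to the surface. A minimal 2D sketch of that rule:

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <iostream>

    // 2x2 matrix stored row-major for readability: [[a b], [c d]].
    struct Mat2 { double a, b, c, d; };

    Mat2 inverseTranspose(const Mat2& M) {
        double det = M.a * M.d - M.b * M.c;
        // inverse = 1/det * [[d -b], [-c a]]; transposing swaps the off-diagonals.
        return { M.d / det, -M.c / det, -M.b / det, M.a / det };
    }

    int main() {
        Mat2 shear = { 1, 1, 0, 1 };     // maps (x, y) -> (x + y, y)
        double edge[2]   = { 0, 1 };     // a vertical edge of the rectangle
        double normal[2] = { 1, 0 };     // its outward normal

        // Transform the edge with the shear itself.
        double e2[2] = { shear.a * edge[0] + shear.b * edge[1],
                         shear.c * edge[0] + shear.d * edge[1] };  // (1, 1)

        // Transform the normal with the inverse-transpose.
        Mat2 it = inverseTranspose(shear);
        double n2[2] = { it.a * normal[0] + it.b * normal[1],
                         it.c * normal[0] + it.d * normal[1] };    // (1, -1)

        // Still perpendicular after the shear.
        double dot = e2[0] * n2[0] + e2[1] * n2[1];
        assert(std::fabs(dot) < 1e-12);
        std::cout << dot << "\n";
        return 0;
    }
    ```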

  9. #9
    Member Regular Contributor
    Join Date
    May 2016
    Posts
    467
    I'd try instanced rendering, but without the geometry shader.

    How can you describe a parallelogram?
    -- a vec3 position attribute
    -- a mat4 matrix describing global position / rotation / scale / camera

    If you used simple "ambient" lighting, you wouldn't need a normal (and you could avoid deferred shading).


    300,000 x (sizeof(mat4) + 24 * sizeof(vec4)) ~= 134 MByte (array buffer)
    --> so memory shouldn't be a big problem

    Before you render anything, sort the different shapes; you should end up with:
    std::vector<Vertex> shape1;
    std::vector<Vertex> shape2;
    std::vector<Vertex> shape3;

    std::vector<mat4> allshapes1;
    std::vector<mat4> allshapes2;
    std::vector<mat4> allshapes3;

    To render all shape1 parallelograms:
    --> put shape1 into the vertex buffer
    --> put allshapes1 into the instance buffer
    --> glDrawArraysInstanced(GL_QUADS, 0, 24, allshapes1.size());
    (GL_QUADS requires the compatibility profile; in the core profile use GL_TRIANGLES with 36 vertices per box.)

    That way you reduce the number of draw calls to the number of different shapes you have.
    Try it with a lower number of instances first (like 10,000) and check whether it accomplishes the task in acceptable time.
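    The sort-by-shape step can be sketched on the CPU side (hypothetical types; the buffer uploads and draw calls are omitted):

    ```cpp
    #include <cassert>
    #include <iostream>
    #include <map>
    #include <vector>

    // Hypothetical per-instance record: which base shape, and its transform.
    struct Instance { int shapeId; float transform[16]; };

    // Group instances by base shape so each group becomes one instanced draw.
    std::map<int, std::vector<Instance>> groupByShape(const std::vector<Instance>& all) {
        std::map<int, std::vector<Instance>> groups;
        for (const Instance& inst : all)
            groups[inst.shapeId].push_back(inst);
        return groups;
    }

    int main() {
        std::vector<Instance> all = { {1, {}}, {2, {}}, {1, {}}, {1, {}} };
        auto groups = groupByShape(all);
        assert(groups.size() == 2);     // two distinct shapes -> two draw calls
        assert(groups[1].size() == 3);  // three instances of shape 1 in one call
        std::cout << groups.size() << "\n";
        return 0;
    }
    ```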


    Why try to do it with Vulkan?
    You'd have to learn and understand a completely new API that is more complicated than OpenGL.
    If you can't do it with OpenGL in reasonable time, it is likely that Vulkan can't do it in reasonable time either.
    Last edited by john_connor; 12-20-2016 at 02:48 AM.

  10. #10
    Junior Member Regular Contributor
    Join Date
    Apr 2010
    Posts
    127
    Quote Originally Posted by Silence View Post
    My two cents are, do that in OpenGL instead of Vulkan. The API is still in development, it is subject to a lot of changes, and programming on vulkan is, to my opinion, less easy that programming on OpenGL. Plus, you can do instancing with OpenGL too.



    Quote Originally Posted by Silence View Post
    The main issue with scaling is lighting: you'll generally have to recalculate all the normals. If you have a homothety (uniform scale), it's just a matter of scaling all the normals by the same factor, since the angles between edges are preserved.
    My idea is to find a scale matrix and do the extrusion on the CPU; then I can compute the normals once, also on the CPU. Everything is static, so I never need to recalculate the normals. Is that wrong?
    But my problem is creating this scale matrix.
    The scale matrix is always the same; nothing changes.
    I have only an orbit camera interacting with the model; all the meshes are static, there is no animation.
    What is it that you mean I don't understand?
    Thanks.

    john_connor: no, the shapes are all different; I can't group them by family of shape. Thanks.
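    The one-time CPU extrusion described here can be sketched like this (hypothetical names; a full version would also build the six faces and their normals once):

    ```cpp
    #include <cassert>
    #include <iostream>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // Extrude a 4-point base polygon along 'dir': the result is the 4 base
    // points followed by the 4 translated top points (8 vertices total).
    std::vector<Vec3> extrude(const std::vector<Vec3>& base, Vec3 dir) {
        std::vector<Vec3> out = base;
        for (const Vec3& p : base)
            out.push_back({ p.x + dir.x, p.y + dir.y, p.z + dir.z });
        return out;
    }

    int main() {
        // A parallelogram in the z = 0 plane, extruded one unit along +z.
        std::vector<Vec3> quad = { {0,0,0}, {2,0,0}, {3,1,0}, {1,1,0} };
        auto box = extrude(quad, {0, 0, 1});
        assert(box.size() == 8);   // 8 points, as described in the thread
        assert(box[4].z == 1.0);   // top face offset along the extrusion dir
        std::cout << box.size() << "\n";
        return 0;
    }
    ```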
    Last edited by giuseppe500; 12-20-2016 at 06:59 AM.
