Instanced Drawing with a texture atlas results in weird output

Hello,

In short: I am trying to draw a tile map with instanced drawing, where all the different tiles live in one texture atlas.

My original idea was to create a tile object, draw it instanced, and move each instance by adding a translation from a vector array to its position.
The different tiles would then result from manipulating the texture coordinates.
As each tile I want to draw right now sits directly to the right of the original in the atlas, I only manipulate the x-coordinate of the texture.

Here is the source code. I first create the translation vectors & then create the texture offset vectors (to manipulate the texture coordinates):



for(signed int x = -64; x < 64; x++)
{
    for(signed int y = -64; y < 64; y++)
    {
        glm::vec3 translation;
        translation.x = (float)x / 64.0f;
        translation.y = (float)y / 64.0f;
        translation.z = 0.0f;
        //translation = glm::normalize(translation);
        translations[transIndex] = translation;
        transIndex += 1;
    }
}

glm::vec2 TexOffset[16384];
int texIndex = 0;
for(unsigned int x = 0; x < giv_GroundTileData.size()+1; x++)
{
    for(unsigned int y = 0; y < giv_GroundTileData.size()+1; y++)
    {
        float xCoord = giv_vStartingPointOnTileMap.x + giv_GroundTileData[x][y] / 10.0f;
        glm::vec2 TexCoordinates = glm::vec2(xCoord, giv_vStartingPointOnTileMap.y);
        TexOffset[texIndex++] = TexCoordinates;
    }
}


My vertex shader:



#version 330 core
layout (location = 0) in vec3 Position; 
layout (location = 1) in vec2 tex;
layout (location = 2) in vec3 pOffset;
layout (location = 3) in vec2 tOffset;

out vec2 TexCoords;

uniform mat4 model;
uniform mat4 projection;
uniform mat4 view;


void main()
{
    TexCoords = tex + tOffset;
    gl_Position = projection * view * model * vec4(Position + pOffset, 1.0); 
}


My “Render-Data function”:



//Create the instance Buffers
glGenBuffers(1, &this->m_posVBO);
glGenBuffers(1, &this->m_texVBO);

//Bind the position Buffer (Offset)
glGenBuffers(1, &this->m_posVBO);
glBindBuffer(GL_ARRAY_BUFFER, m_posVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3) * 16384, &translations[0], GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

//Bind the texture offset buffer
glGenBuffers(1, &this->m_texVBO);
glBindBuffer(GL_ARRAY_BUFFER, m_texVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec2) * 16384, &TexOffset[0], GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

glGenVertexArrays(1, &this->m_VAO_std);
glGenBuffers(1, &this->VBO_std);

glBindVertexArray(this->m_VAO_std);

glBindBuffer(GL_ARRAY_BUFFER, this->VBO_std);
glBufferData(GL_ARRAY_BUFFER, sizeof(VertexArray), VertexArray, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5*sizeof(GLfloat), (GLvoid*)0);

glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5*sizeof(GLfloat), (GLvoid*)(3*sizeof(GLfloat)));

glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, this->m_posVBO);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexAttribDivisor(2, 1);

glEnableVertexAttribArray(3);
glBindBuffer(GL_ARRAY_BUFFER, this->m_texVBO);
glVertexAttribPointer(3, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexAttribDivisor(3, 1);

glBindVertexArray(0);
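
The draw call itself isn't shown in the post; for a quad made of two triangles it would be a single instanced call along these lines (a sketch - the vertex count of 6 and the instance count of 16384 are assumptions based on the buffers above):

    // One instanced draw emits all 16384 tiles; attributes 2 and 3 advance
    // once per instance because of the glVertexAttribDivisor(..., 1) calls.
    glBindVertexArray(this->m_VAO_std);
    glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 16384);
    glBindVertexArray(0);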


My thinking was that this would add the translation vectors (m_posVBO) to the position vector and the texture offsets (m_texVBO) to the texture coordinates that are passed to the fragment shader.

However, it leads to this result:

[screenshot attachment]

I experimented a little bit with the translation vectors, but they have to be normalized vectors, don't they?
Each of the tiles is 64x64 px and I am trying to render a 128x128 tile map.
In my head that should lead to the vector-creating loop I coded, but obviously there seems to be something wrong in my understanding.

The offset between the tiles should be 64 px - the width/height of a tile. With 1 being equal to 128*64 (128 tiles, each 64 px?), the offset would be 0.015625.

I also tried increasing the offset, which led to three "stuffed" blocks of tiles, each roughly the value of the added offset away from the others.

So my question:
Where am I wrong here? Which part of the instancing concept do I get wrong?

No, they don't have to be normalized. The scale factor is entirely up to you.

What matters is the coordinate system set up by your matrices (and if you aren’t using a perspective projection, you only need one matrix, not three).

The instancing appears fine, but your offsets are wrong.
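
To make that concrete with one possible setup (a sketch, not the code from the question; it assumes the base quad is one unit wide and high, and "program" stands in for your shader program handle):

    // Sketch: one unit = one tile, with a 128x128-tile map in view.
    // Under this projection a per-instance translation of 1.0 moves an
    // instance over by exactly one tile, whatever the tile's size in pixels.
    glm::mat4 projection = glm::ortho(0.0f, 128.0f, 0.0f, 128.0f, -1.0f, 1.0f);
    glUniformMatrix4fv(glGetUniformLocation(program, "projection"),
                       1, GL_FALSE, glm::value_ptr(projection));   // glm::value_ptr needs <glm/gtc/type_ptr.hpp>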

Thank you very much for the reply!

What matters is the coordinate system set up by your matrices (and if you aren’t using a perspective projection, you only need one matrix, not three).

Yep, that did it. Thanks for the hint!

Now I have found the right offset for most of the tiles.
The weird thing is that the translation for the outer "line" of the tile map seems to be different from the rest of it:

[two screenshot attachments]

I'm not sure if this still belongs here - forgive me - as I guess it is not directly related to OpenGL, but I cannot spot my error.

I still have the same double-looped algorithm to fill the translation data - now with the right/working offset.

Why does it make every three tiles stick together?

These are my texture params:


    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, this->Wrap_S);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, this->Wrap_T);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, this->Filter_Min);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, this->Filter_Max);

PS:

(and if you aren’t using a perspective projection, you only need one matrix, not three).

Right now I use an ortho matrix for projection… the view matrix scales and translates my "original" tile (the one I am instancing).
In my thinking I need to translate it to move around via keyboard, because that way I can simply move the map rather than the camera.

I am still new to this, so sorry if it sounds dumb: why is it better to stuff the model matrix, the ortho matrix and the "camera/view" matrix (basically a glm::lookAt) into one?

PPS:
I found out that my quad was a rectangle because my viewport was 4:3. Is it better to adapt the viewport or to adapt the vertex data?
I'd say it's better to keep the viewport bound to the size of the window in general, but while I'm already asking nooby questions - why not one more? :stuck_out_tongue:

It boils down to what you need in your shader. If you only need to transform things to clip coordinates, just pass in a single composite MVP matrix. Then transforming each point or vector in the shader is cheaper because you only have one matrix-vector multiply instead of 3 (i.e. 3-4 assembly language dot product instructions instead of 9-12), and you pass less uniform data into the shader as well (1 matrix instead of 3).

I found out that my quad was a rectangle because my viewport was 4:3. Is it better to adapt the viewport or to adapt the vertex data?

Not the latter. And the former is only one option. It’s easy to change the projection transform so that square objects end up square (or any aspect ratio you’d prefer), regardless of the aspect ratio you choose for your viewport.

I’d suspect an issue with the loops which populate the vertex arrays.

Because it avoids performing two additional matrix multiplies for every vertex.

Your vertex shader is only using one matrix: projection * view * model. So you can multiply those matrices together in the client code and send the final product to the shader as a single matrix.
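
As a minimal sketch of that client-side change (assuming GLM and a uniform named MVP; "program" is a placeholder for the shader program handle):

    // Multiply the three matrices once on the CPU...
    glm::mat4 mvp = projection * view * model;

    // ...and upload the product as a single uniform. The vertex shader then
    // only needs: gl_Position = MVP * vec4(Position + pOffset, 1.0);
    glUniformMatrix4fv(glGetUniformLocation(program, "MVP"),
                       1, GL_FALSE, glm::value_ptr(mvp));   // glm::value_ptr needs <glm/gtc/type_ptr.hpp>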

Legacy (fixed-function) OpenGL used two matrices: model-view and projection (the model and view matrices are combined). The reason that the projection is kept separate is because the lighting calculations need to be performed in a space which is affine to “reality”, and a perspective projection breaks that.

If you look at shaders which implement the Phong lighting model (either per-vertex or per-fragment), you’ll note that they produce two transformed vertex positions (one transformed into eye space by the model-view matrix, another further transformed into clip space by the projection matrix) and a transformed vertex normal (transformed into eye-space by the inverse-transpose of the model-view matrix). The lighting calculations only use the eye-space vectors; the clip-space position is only used by the hardware for rasterisation.

If you aren’t performing lighting (or similar) calculations, or if the projection is orthographic, there’s no need to separate the projection transformation. You can just go directly from object space to clip space in one step (and perform lighting calculations, if needed, in clip space).

The projection transformation should handle the aspect ratio.
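
For instance (a sketch; windowWidth and windowHeight stand in for however you track the window size):

    // Fold the window's aspect ratio into the orthographic projection so a
    // square quad stays square, instead of resizing the viewport or the vertices.
    float aspect = (float)windowWidth / (float)windowHeight;   // e.g. 4:3 -> 1.333
    glm::mat4 projection = glm::ortho(-aspect, aspect,   // widen x by the aspect ratio
                                      -1.0f,   1.0f,     // keep y at [-1, 1]
                                      -1.0f,   1.0f);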

Thanks a lot for the quick and helpful answers!

That helped a lot and will keep me busy for the weekend! :slight_smile:

First off, thanks again!
I got it working with only one matrix and adjusted the positioning so that the tile is a square now, without touching the vertex data or the viewport.

However, my coordinates are still broken.
I logged the result of my translation-filling loop to see whether it gives strange results, but it does not.

It consistently adds the offset in exactly the way I want.

But there are two problems:

I fill the translations y-dominated (I don't know another way to say this), so for every y I give it 128 x coordinates to get a 128*128 map.
Starting with y(0), I assign the x-offset by adding +1 each time. This gives 128 tiles in a perfect row.
The same goes for x(0): every tile is instanced at exactly the offset of one tile.

I then reset xoffset to 0 and start the loop for y(1). The first tile (0/1) is drawn exactly where I want it to be drawn. However, every following tile now has an offset of two tiles - meaning there are blank spots in the map:

[screenshot attachment]

I checked the logged translation vectors and they are fine: for every 128 vectors the x coordinate increments by 1, then the y coordinate increments by one, and so on.

If I set up my loop so that the first row has an x-offset of 1 and the following ones only 0.5f, everything fits perfectly except the x(0) column.
The first tile is drawn at (0/0) but the second one at (0/0.5), giving each following tile the y coordinate of (y-0.5):
[screenshot attachment]

In each scenario I tried, it also seems that the 0 row/column ((0/y) & (x/0)) has more tiles than the others, and additionally their offset increases the higher the index gets:
[screenshot attachment]

As I said, in every case I tried (I tried some more variations) I logged the translation vectors, and every vector shows exactly the intended increment of the x & y values.
That means for every row there are 128 translation vectors for the corresponding x value that always have the same offset.
But it still instances them differently - especially the 0 column/row.

Here is what my translation loop looks like (I just experimented a little with the offset values in the ways stated above; otherwise it stays the same):


glm::vec3 translations[16384];
unsigned int transIndex = 0;
GLfloat xoffset, yoffset = 0.0f;

for(unsigned int y = 0; y < 128; y++)
{
    xoffset = 0.0f;
    for(unsigned int x = 0; x < 128; x++)
    {
        glm::vec3 translation;
        translation.x = xoffset;
        translation.y = yoffset;
        translation.z = 0.0f;
        translations[transIndex] = translation;
        transIndex += 1;
        xoffset += 1.0f;
    }
    yoffset += 1.0f;
}

My shaders are the same except that I only have one matrix in the vertex shader now, so instead of three uniforms it's just one :slight_smile:

I know that in pretty much all such cases it is a flaw in my logic, but I really don't get what I am doing wrong with the translation vectors here.
As far as I understood it, as long as I set the divisor to 1, it takes the next vector out of the array for each new instanced tile.
So in theory every tile row should be drawn like the first one, or am I wrong?

I missed this part the first time; I don't know if you've fixed it since:

glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);

The stride should be 3 * sizeof(float), not 2.

Or you could change translations to glm::vec2, as you aren't using the Z component (you'd need to change the second parameter of glVertexAttribPointer() to 3 if you want the Z component to be used).
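
Applied to the attribute setup from the question, the first option would look roughly like this (a sketch keeping translations as glm::vec3):

    // Per-instance translation: three floats per glm::vec3, so both the
    // component count and the stride have to be 3, not 2.
    glEnableVertexAttribArray(2);
    glBindBuffer(GL_ARRAY_BUFFER, this->m_posVBO);
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glVertexAttribDivisor(2, 1);   // advance to the next translation once per instance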

The stride should be 3 * sizeof(float), not 2.

Or you could change translations to glm::vec2, as you aren’t using the Z component (you’d need to change the second parameter of glVertexAttribPointer() to 3 if you want the Z component to be used).

That did it. I don't know why I missed it myself; I read over that code so many times to check whether everything was set correctly -.-

Thanks a lot! Now everything works as it is supposed to!