glVertexAttribPointer with separate vertex and texture coordinate vectors

Hello

I generated a terrain and I want to texture it.

My vertex shader looks like this:


#version 330 core

layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
layout (location = 2) in vec2 texCoord;

out vec2 TexCoord;
out vec3 ourColor;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(position, 1.0f);
    ourColor = color;
    TexCoord = vec2(texCoord.x, 1.0 - texCoord.y);
}

My fragment shader looks like this:


#version 330 core

out vec4 color;

uniform sampler2D ourTexture1;
uniform sampler2D ourTexture2;

in vec2 TexCoord;

void main()
{
    color = texture(ourTexture1, TexCoord);
} 

I have all the terrain’s vertices in a vector.

I don't know how to store the texture coordinates. I guess they go in another vector, but I think there will be an issue with the glVertexAttribPointer call.

I mean, since there are no texture coordinates in the vertices vector, how should I call glVertexAttribPointer when I just want to render the terrain? And when I just want to render the texture?

Here is my code for the case where the vertices, colors and texture coordinates are all in the same vector and VBO:


glVertexAttribPointer(0, 3, GL_FLOAT, GL_TRUE, 8 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);

// Color attribute
glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE, 8 * sizeof(GLfloat), (GLvoid*)(6 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);

// TexCoord attribute
glVertexAttribPointer(2, 2, GL_FLOAT, GL_TRUE, vertices_terrain.size() * sizeof(GLfloat), (GLvoid*)(6 * sizeof(GLfloat)));
glEnableVertexAttribArray(2);

In summary: I want to keep the vertices separate from the colors and texture coordinates.

Because of my shaders and my architecture, there will not be colors, texture and position at every call. Am I right?

If you don't understand, tell me!

Thanks a lot

Your vertex shader has three input variables: position, color and texCoord. In order to render anything with that shader, you have to provide values for all three.

If they are in separate VBOs, then you need to call glBindBuffer(GL_ARRAY_BUFFER) before each call to glVertexAttribPointer().

A call to glVertexAttribPointer() stores the information about how to obtain the data for a given attribute. The data stored for each attribute includes the parameters to glVertexAttribPointer(), the GL_ARRAY_BUFFER binding which was current at the time of the last glVertexAttribPointer() call, the enabled state (glEnableVertexAttribArray() or glDisableVertexAttribArray()) and the divisor (glVertexAttribDivisor()).

Alternatively, you can store the data for all of the attributes in a single VBO, but in separate regions rather than interleaved. Each possibility (interleaved, non-interleaved or multiple VBOs) has advantages and disadvantages. Interleaved attributes are optimal from the GPU side. Separate regions facilitate replacing the data for some attributes while leaving others unchanged. Separate buffer objects allow different usage flags to be specified for different buffers and allow you to discard the entire data store and replace it with a new one; they may also be more efficient when using glMapBuffer() if you don’t have (or can’t rely upon having) glMapBufferRange().

In OpenGL 4.3 and later, the VBO can be separated from the other attribute parameters by using glVertexAttribBinding() and glVertexAttribFormat() rather than glVertexAttribPointer().

Thanks for your reply !

I understand now. :)

Maybe you can help me one more time.

In my vector I have the vertices of a terrain, and I want to texture all of it with the same texture.

How do I find the texture color?

I think I'm going to put them all in the same vector, laid out like this:

vertices | color | coordinates

The vertices describe triangles, which is why the first call has a size of 3 and a stride of 3 floats. Am I right?

The color is RGB, so the second call has a size of 3 and a stride of 3 floats, with an offset of b (where b is the vector size without the colors and coordinates). Am I right?

The coords describe a triangle, so the third call has a size of 2 (because they are 2D coordinates) and a stride of 2 floats, with an offset of c (where c is the vector size without the coordinates). Am I right?


glVertexAttribPointer(0, 3, GL_FLOAT, GL_TRUE, 3 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);

// Color attribute
glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE, 3 * sizeof(GLfloat), (GLvoid*)(b * sizeof(GLfloat)));
glEnableVertexAttribArray(1);

// TexCoord attribute
glVertexAttribPointer(2, 2, GL_FLOAT, GL_TRUE, 2 * sizeof(GLfloat), (GLvoid*)(c * sizeof(GLfloat)));
glEnableVertexAttribArray(2);

EDIT:

My program doesn't crash anymore, but no texture is displayed. The rest of my code should be fine, because it comes from a tutorial.

Is it possible that the texture is only applied to the first triangle, because of my glVertexAttribPointer calls?

No. The size is 3 and the stride is 3*sizeof(GLfloat) because each vertex position has 3 components (X, Y and Z).

The stride parameter is the offset in bytes between consecutive values. If the data is packed (as is the case here), you can pass 0 for stride, and the correct stride will be deduced from the size and type parameters. E.g. if size is 3 and type is GL_FLOAT, then the stride will be 3*sizeof(GLfloat).

The offset parameter is the offset in bytes of the first value from the start of the buffer. If the position, colour and texture coordinates are stored as separate regions with no space between them, and all components are of type GLfloat, the offsets should be 0, N*3*sizeof(GLfloat) and N*6*sizeof(GLfloat), where N is the number of vertices.

Have you stored the index of the appropriate texture unit in the uniform variable ourTexture1? E.g. for texture unit 0 (GL_TEXTURE0):


glUniform1i(glGetUniformLocation(program, "ourTexture1"), 0);

Does that texture unit contain a valid texture? If the minification filter uses mipmaps (the default setting is GL_NEAREST_MIPMAP_LINEAR, which uses mipmaps), all mipmap levels must have been created, either by calling glTexImage2D() for each mipmap level or by calling glGenerateMipmap() to generate the other mipmap levels from the base level.

OK, so in all the calls I replaced the stride with 0 * sizeof(GLfloat).

Yes

glUniform1i(glGetUniformLocation(program, "ourTexture1"), 0);
and
glGenerateMipmap()
are called.


GLuint texture1;
GLuint texture2;

glGenTextures(1, &texture1);
glBindTexture(GL_TEXTURE_2D, texture1);
	
// Set our texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

// Set texture filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	
// Load, create texture and generate mipmaps
int width, height;
unsigned char* image = SOIL_load_image("../imgp0913.jpg", &width, &height, 0, SOIL_LOAD_RGB);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
glGenerateMipmap(GL_TEXTURE_2D);
SOIL_free_image_data(image);
glBindTexture(GL_TEXTURE_2D, 0);


while (!glfwWindowShouldClose(window))
{
	glfwPollEvents();
	glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
	glClear(GL_COLOR_BUFFER_BIT);

	glm::mat4 view;
	glm::mat4 model;

	// Model and view transforms
	model = glm::scale(model, glm::vec3(0.9f));
	model = glm::rotate(model, -20.0f, glm::vec3(1.0f, 0.0f, 0.0f));
	view = glm::translate(view, glm::vec3(0.0f, 0.0f, -10.0f));

	glm::mat4 projection;
	projection = glm::perspective(20.0f, (GLfloat)WINDOW_WIDTH / (GLfloat)WINDOW_HEIGHT, 0.1f, 100.0f);

	// Bind the program before setting its uniforms
	shader.Use();

	// Get the uniform locations
	GLint modelLoc = glGetUniformLocation(shader.Program, "model");
	GLint viewLoc = glGetUniformLocation(shader.Program, "view");
	GLint projLoc = glGetUniformLocation(shader.Program, "projection");

	// Pass the matrices to the shader
	glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
	glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, texture1);
	glUniform1i(glGetUniformLocation(shader.Program, "ourTexture1"), 0);

	glBindVertexArray(VAO);

	// Position attribute
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_TRUE, 0 * sizeof(GLfloat), (GLvoid*)0);
	glEnableVertexAttribArray(0);

	// Color attribute
	glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE, 0 * sizeof(GLfloat), (GLvoid*)(a * sizeof(GLfloat)));
	glEnableVertexAttribArray(1);

	// TexCoord attribute
	glVertexAttribPointer(2, 2, GL_FLOAT, GL_TRUE, 0 * sizeof(GLfloat), (GLvoid*)(b * sizeof(GLfloat)));
	glEnableVertexAttribArray(2);

	glDrawElements(GL_TRIANGLES, faces_terrain.size(), GL_UNSIGNED_INT, 0);

	glBindVertexArray(0);

	glfwSwapBuffers(window);
}

The terrain mesh is generated correctly, but there is no texture… :( The image is also loaded correctly, because image is not null.

vertices_terrain is filled like that :


//Before that there is an obj loader

// Color set to 0, but it is not used in the shader
vertices_terrain.push_back(0.0f);
vertices_terrain.push_back(0.0f);
vertices_terrain.push_back(0.0f);
vertices_terrain.push_back(0.0f);
vertices_terrain.push_back(0.0f);
vertices_terrain.push_back(0.0f);
vertices_terrain.push_back(0.0f);
vertices_terrain.push_back(0.0f);
vertices_terrain.push_back(0.0f);

// Texture coordinates describe a triangle
vertices_terrain.push_back(0.0f);
vertices_terrain.push_back(0.0f);
vertices_terrain.push_back(0.5f);
vertices_terrain.push_back(1.0f);
vertices_terrain.push_back(1.0f);
vertices_terrain.push_back(1.0f);

The above code is calling glVertexAttribPointer() but not calling glBindBuffer(GL_ARRAY_BUFFER) first. If it works, it’s because the buffer happens to still be bound.

When using VAOs, you’d normally have the glVertexAttribPointer() and glEnableVertexAttribArray() calls in the initialisation routine. During the rendering routine you’d just call glBindVertexArray(). That’s the point of VAOs: they collect all of the attribute-related state in one place so that you can set it with a single call.

Your entire mesh is just 3 vertices?

If you have more than 3, you have to provide a colour and texture coordinates for every vertex.

glBindBuffer() is called above that routine; I just did not copy-paste it.

Thanks to your advice, I moved the glVertexAttribPointer() calls out of the rendering routine and it still works.

[QUOTE=GClements;1279669]

Your entire mesh is just 3 vertices?

If you have more than 3, you have to provide a colour and texture coordinates for every vertex.[/QUOTE]

No, I have thousands of vertices. How can I provide a colour and texture coordinates for every vertex?

I thought the way I'm calling glVertexAttribPointer() was the right way. :(

Do I need to push_back() a colour and texture coordinates as many times as there are vertices?

Exactly the same way as you provide a position for each vertex.

Yes.

If you’re obtaining the data from an OBJ file, it may already contain texture coordinates. However, if it does, and it has separate indices for the position and texture coordinates, you’ll need to de-index (or re-index) the data before feeding it to OpenGL (OBJ allows each vertex to have different indices for position, texture coordinates and normal, while OpenGL uses a single index for all attributes).

Alright !

I added two basic (and very awful) for loops, and something appeared, but it looks very strange…


for (int i = 0; i < a; i += 3)
{
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(0.0f);
}	

long int b = vertices_terrain.size(); // Offset

for (int i = 0; i < a; i += 3)
{
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(1.0f);
	vertices_terrain.push_back(0.0f);
	vertices_terrain.push_back(0.5f);
	vertices_terrain.push_back(1.0f);
}

The texture looks very stretched; here is a screenshot.

[ATTACH=CONFIG]1285[/ATTACH]

If you want the texture to be mapped correctly, you need to assign meaningful texture coordinates.

For a terrain mesh, it should suffice to use the position’s horizontal coordinates (typically either X,Y or X,Z depending upon whether the vertical coordinate is Z or Y) with an appropriate scale and offset determined from the bounding box.

In the more general case, texture coordinates are normally assigned as part of the modelling process and stored in the model file.

The coordinates I provided describe a triangle in the image: 0,0 / 0.5,1 / 0,1. Aren't they meaningful?

These coordinates are the texture mapping, and when I change them, the color and the shape change as well (but remain stretched). It looks like the problem comes from somewhere else, don't you think?

Could it be coming from the perspective/projection/view setup?

[QUOTE=j_sb22;1279680]The coordinates I provided describe a triangle in the image: 0,0 / 0.5,1 / 0,1. Aren't they meaningful?
[/QUOTE]
Judging from the posted image, no.

No.

You need to bear in mind that the topology is defined by the indices in the faces_terrain vector. Assuming that this comes from the face descriptions in the OBJ file, it probably won't define a disjoint set of triangles (where triangle n is defined by the vertices 3*n, 3*n+1 and 3*n+2), so providing texture coordinates which are appropriate for disjoint triangles won't produce a meaningful result.

Even if you were rendering disjoint triangles, your texture coordinates would map the same portion of the texture to each individual triangle, rather than mapping the mesh as a whole to the texture as a whole.

So… the coordinates 0,0 / 0.5,1 / 1,0 need to change according to the mesh and the texture?

Texture coordinates determine how the texture is mapped to the mesh, i.e. which point on the texture is mapped to a given vertex.