# Thread: How to calculate texture coordinates dynamically on the vertex shader?

1. ## How to calculate texture coordinates dynamically on the vertex shader?

This is my current vertex shader:

Code :
```
#version 410

layout(location = 0) in vec3 position;           //(x,y,z) coordinates of a vertex
layout(location = 1) in vec3 norm;               //the normal to the vertex
layout(location = 2) in vec2 texture_coordinate;

out vec3 normal;
out vec3 vertexPos; //projected vertex
out vec2 texture_coord;

uniform mat4 model = mat4(1); //Position and orientation of the current object
uniform mat4 view = mat4(1);  //Camera orientation and position
uniform mat4 proj = mat4(1);  //the projection parameters (FOV, viewport dimensions)

uniform uint face_type[2048];

void main()
{
    uint orientation = face_type[gl_InstanceID];
    vec2 face_texture = vec2(orientation*(1/6.f), 0);

    vec2 fml = texture_coordinate + face_texture;

    gl_Position = proj*view*model*vec4(position, 1.0);
    normal = vec3(model*vec4(norm, 1.0));
    vertexPos = vec3(model*vec4(position, 1.0)); //calculate the transformed pos
    texture_coord = fml;
}
```

All I am trying to do is pass the texture coordinate, `texture_coordinate + vec2(orientation*(1/6.f), 0)`, on to the fragment shader.

The reason for this is that it is a texture for a cube: the texture is 192x32 pixels, and each 32x32 region of it is one face of the cube. I need to dynamically select which face of the cube is being drawn, so I need to calculate the texture coordinates dynamically based on the face type. However, OpenGL doesn't let me do this.

Any help is appreciated.

2. Originally Posted by Makogan
Code :
`uniform uint face_type[2048];`
This may exceed the value of GL_MAX_VERTEX_UNIFORM_COMPONENTS, which is only required to be at least 1024. If you need a uniform array which exceeds implementation limits, you can use a texture instead.

Originally Posted by Makogan
Code :
`    uint orientation=face_type[gl_InstanceID];`
You're rendering each face as a separate instance? In that case, you should probably make the face type a per-instance vertex attribute. But I suspect that such small instances may be inefficient. Even without instancing, I'd be inclined to make the face type an integer vertex attribute (you only need a byte per value).

3. Originally Posted by GClements
This may exceed the value of GL_MAX_VERTEX_UNIFORM_COMPONENTS, which is only required to be at least 1024. If you need a uniform array which exceeds implementation limits, you can use a texture instead.

You're rendering each face as a separate instance? In that case, you should probably make the face type a per-instance vertex attribute. But I believe that such small instances may be inefficient. Even without instancing, I'd be inclined to make the face type an integer vertex attribute (you only need to use a byte per value).
You are right, the issue definitely comes from the face_type array being too big. How can I go about passing at least 2048 indexed bytes?

EDIT:

I have rewritten my code to pass face_type as a vertex attribute, and then used glVertexAttribDivisor to specify that this attribute advances once per instance instead of once per vertex.

The issue that I am having right now, however, is that the face types (which are integers from 0 to 5) are not reaching the shader as they should (i.e. they are not numbers from 0 to 5).

This is how I am passing the face types into the vertex shader:

Code :
```
vector<Face> face_types = {Top, Bottom, Left,};
glEnableVertexAttribArray(3);
glBindBuffer(GL_ARRAY_BUFFER, experiment);
glVertexAttribPointer(3, 1, GL_INT, GL_FALSE, sizeof(Face), (void*)0);
glBufferData(GL_ARRAY_BUFFER, face_types.size()*sizeof(Face),
             face_types.data(), GL_DYNAMIC_DRAW);
glVertexAttribDivisor(3, 1);
```

And this is the vertex shader that processes it:

Code :
```
#version 450

#define PI 3.1415926535897932384626433832795

layout(location = 0) in vec3 position;           //(x,y,z) coordinates of a vertex
layout(location = 1) in vec3 norm;               //the normal to the vertex
layout(location = 2) in vec2 texture_coordinate;
layout(location = 3) in int face_type;

out vec3 normal;
out vec3 vertexPos; //projected vertex
out vec2 texture_coord;

uniform mat4 model = mat4(1); //Position and orientation of the current object
uniform mat4 view = mat4(1);  //Camera orientation and position
uniform mat4 proj = mat4(1);  //the projection parameters (FOV, viewport dimensions)

//Taken from: https://gist.github.com/neilmendoza/4512992
mat4 rotationMatrix(vec3 axis, float angle)
{
    axis = normalize(axis);
    float s = sin(angle);
    float c = cos(angle);
    float oc = 1.0 - c;

    return mat4(oc * axis.x * axis.x + c,           oc * axis.x * axis.y - axis.z * s,  oc * axis.z * axis.x + axis.y * s,  0.0,
                oc * axis.x * axis.y + axis.z * s,  oc * axis.y * axis.y + c,           oc * axis.y * axis.z - axis.x * s,  0.0,
                oc * axis.z * axis.x - axis.y * s,  oc * axis.y * axis.z + axis.x * s,  oc * axis.z * axis.z + c,           0.0,
                0.0,                                0.0,                                0.0,                                1.0);
}

void main()
{
    mat4 rotation = mat4(1);
    switch(face_type)
    {
        case 0:
            rotation = mat4(1);
            break;
        case 1:
            rotation = rotationMatrix(vec3(0,0,1), -PI/2.f);
            break;
        case 2:
            rotation = rotationMatrix(vec3(0,0,1), PI/2.f);
            break;
        case 3:
            rotation = rotationMatrix(vec3(0,0,1), PI);
            break;
        case 4:
            rotation = rotationMatrix(vec3(1,0,0), PI/2.f);
            break;
        case 5:
            rotation = rotationMatrix(vec3(1,0,0), -PI/2.f);
            break;
        default: //debug: encode the unexpected face_type in the matrix
            rotation = mat4(vec4(face_type,0,0,0),
                            vec4(0,1,0,0),
                            vec4(0,0,1,0),
                            vec4(0,0,0,1));
            break;
    }

    /*rotation = mat4(vec4(5,0,0,0),
                      vec4(0,5,0,0),
                      vec4(0,0,5,0),
                      vec4(0,0,0,1));*/

    gl_Position = proj*view*model*rotation*vec4(position, 1.0);

    normal = vec3(model*vec4(norm, 1.0));
    vertexPos = vec3(model*vec4(position, 1.0)); //calculate the transformed pos
    texture_coord = texture_coordinate + vec2(face_type*(1/6.f), 0);
}
```

4. Originally Posted by Makogan
I have rewritten my code to pass face_type as a vertex attribute, and then used glVertexAttribDivisor to specify that this attribute advances once per instance instead of once per vertex.

The issue that I am having right now, however, is that the face types (which are integers from 0 to 5) are not reaching the shader as they should (i.e. they are not numbers from 0 to 5).
Without even seeing your code, I'm 95% certain of the problem ...

Originally Posted by Makogan
This is how I am passing the face types into the vertex shader:

Code :
```
vector<Face> face_types = {Top, Bottom, Left,};
glEnableVertexAttribArray(3);
glBindBuffer(GL_ARRAY_BUFFER, experiment);
glVertexAttribPointer(3, 1, GL_INT, GL_FALSE, sizeof(Face), (void*)0);
```
Yup. You need to use glVertexAttribIPointer() (note the "I") to pass integer attributes. Otherwise, the values will be converted to floats: e.g. 5 will become 5.0 if the normalized parameter is GL_FALSE, or 5.0/(2^31 - 1) if it's GL_TRUE. Integer vertex attributes didn't exist prior to OpenGL 3.0, hence the need for a different function.

Also, if the values are bytes, the type parameter needs to be GL_BYTE (or GL_UNSIGNED_BYTE) rather than GL_INT.
