Referencing different VBOs

Hello,

I’m trying to set up a scene consisting of different models. The corners of the surfaces can appear several times in different surfaces, so I decided to use indexed rendering. Even across different models, the same points of a common “point grid” are always used, so one single VBO is enough for all the points, and every model defines its own IBO.
BUT: the texture coordinates differ between models, even if the vertex references point to the same vertex. Additionally, always specifying the whole texture coordinates (2 floats) costs a lot of memory; references would be better for that purpose, too.

So I’d actually like one reference buffer pointing to all the vertices and another one pointing to all the texture coordinates. When rendering, a point would be composed of a reference in one IBO to the vertex and a reference in another IBO to the texture coordinate. Is this possible, and if not, what would be an alternative?
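
Roughly, what I have in mind (just a C++-style sketch, glm types assumed, all names made up):

#include <cstdint>
#include <vector>
#include <glm/glm.hpp>

// one shared pool of grid points, uploaded once into a single VBO
std::vector<glm::vec3> gridPoints;

// every model only stores indices into that pool (its own IBO)
struct Model {
    std::vector<uint32_t> pointIndices;     // references into gridPoints
    std::vector<uint32_t> texcoordIndices;  // what I'd like: references into a shared texcoord pool
};

// shared texture coordinate pool, also uploaded only once
std::vector<glm::vec2> texcoordPool;
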
Thx for your help!:slight_smile:

The corners of the surfaces can appear several times in different surfaces, so I decided to use indexed rendering

This might be helpful for setting the texture coordinates as well.

I’d suggest having a look at instancing.

Additionally, you can read this topic. For instance, you can set up one VBO for the vertices and one for the texture coordinates, all within a single VAO. It might also be useful to set up different VAOs for your different kinds of rendering.
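
For example, a rough sketch of that single-VAO setup (untested; attribute locations 0/1 are just examples, and the two VBOs are assumed to be created and filled already):

// build one VAO that pulls positions from one VBO and texture coordinates from another
GLuint buildVao(GLuint positionVbo, GLuint texcoordVbo)
{
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
    glEnableVertexAttribArray(0);                                   // location 0: position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    glBindBuffer(GL_ARRAY_BUFFER, texcoordVbo);
    glEnableVertexAttribArray(1);                                   // location 1: texcoord
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

    glBindVertexArray(0);
    return vao;
}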

Thx for the reply :slight_smile:

Ok, so my current idea is to use shaders (similar to your instancing proposal). But I can’t get all of my required data out of a single instance. Maybe I could simply fill the VBO with a lot of “vertices” that aren’t really used as vertices; it’s a trick to store the references. For example, I could store a 2D integer vector, with the x-coordinate representing the index of the actual vertex to use and the y-coordinate referencing the texture coordinates to use. Finally, the shader needs access to the elements referenced by this “fake vector”. I think there should be data types in the shader that make this possible.
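
Roughly what I mean (an untested sketch; indexPairVbo is a hypothetical GL_ARRAY_BUFFER already filled with the index pairs):

// each "fake vertex" is really a pair of indices
struct IndexPair {
    GLuint positionIndex;   // index into the shared point grid
    GLuint texcoordIndex;   // index into the texture coordinate table
};

glBindBuffer(GL_ARRAY_BUFFER, indexPairVbo);
glEnableVertexAttribArray(0);
glVertexAttribIPointer(0, 2, GL_UNSIGNED_INT, sizeof(IndexPair), (void*)0);
// in the vertex shader this would arrive as: layout (location = 0) in uvec2 in_refs;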

Is this a common strategy or is it even possible? :smiley:

No, you can’t use “vector<type>” (the C++ standard library type) in shaders.
You can’t use pointers either, nor references (the C++ feature “&”).

To access data in shaders, you have these options:
– you send the data via “vertex attributes” to the vertex shader
– you bind a “uniform block” and index into its data dynamically in the shader (e.g. with an index passed as a vertex attribute)
– you bind a “shader storage block” and index into its data dynamically in the shader
– textures would be another way, however …

examples:
https://sites.google.com/site/john87connor/home/tutorial-11-1-uniform-block
https://sites.google.com/site/john87connor/home/tutorial-11-2-shader-storage-block

“Indexed rendering” can use the same vertex data multiple times.
But I disagree: 2 floats are not really “much” memory consumption: sizeof(float) * 2 = 8 bytes

example: you have a (typical) vertex like this

struct Vertex {
    vec3 position;
    vec3 normal;
    vec2 texcoord;
};

That makes 8 floats = 32 bytes.
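
(A quick sanity check in C++, assuming glm types without forced alignment:)

#include <glm/glm.hpp>

struct Vertex {
    glm::vec3 position;   // 12 bytes
    glm::vec3 normal;     // 12 bytes
    glm::vec2 texcoord;   //  8 bytes
};
static_assert(sizeof(Vertex) == 32, "8 floats * 4 bytes each");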

A model that uses 100k vertices thus consumes 3.2 MByte, and with 100k vertices you can draw “really detailed” models like this one:
http://tf3dm.com/3d-model/uh-60-blackhawk-helicopter-93546.html

And 3.2 MByte isn’t that much for recent graphics cards; e.g. my NVIDIA GeForce GT 640 (3 years old) has about 2 GByte of memory.

It’s still likely to be excessive for texture coordinates, for which 16-bit normalised values are almost certainly sufficient.
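
For example (a sketch; texcoordVbo is assumed to hold pairs of GLushort where 65535 maps to 1.0, and attribute location 1 is just an example):

glBindBuffer(GL_ARRAY_BUFFER, texcoordVbo);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_UNSIGNED_SHORT, GL_TRUE, 0, (void*)0); // normalised to [0,1]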

[QUOTE=john_connor;1285007]No, you can’t use “vector<type>” (the C++ standard library type) in shaders.
You can’t use pointers either, nor references (the C++ feature “&”).[/QUOTE]

I’m not quite sure if I explained the idea well enough:rolleyes:. At first you could initialize a VBO and make OpenGL interpret it as a list of 2D integer vertices. But then you write a shader and use the coordinates in a completely different way: you use the x-coordinate to access a separate vertex buffer and the y-coordinate to access a separate texture coordinate buffer… Would that be possible?

But I disagree: 2 floats are not really “much” memory consumption: sizeof(float) * 2 = 8 bytes

Oook, maybe I’m a little bit too exact concerning the memory issue. But there’s no harm in that…:smiley:

sure, but …

That vertex buffer has to be bound somewhere, so that you can access it in your shader.
As I said before:
– binding it to a “uniform block”, you get fast, read-only access, but the storage amount is (very?) limited
– binding it to a “shader storage block”, you get slower, read-and-write access, but the storage amount is huge

example using uniform block:

#version 450 core

// MAX_VERTICES must be #define'd or replaced with an integral constant
layout (location = 0) in uint in_index;

layout (std140, binding = 5) uniform MyVertexBuffer {
    vec4 vertices[MAX_VERTICES];
};

void main()
{
    vec4 vertex = vertices[in_index];
    gl_Position = vec4(vertex.xyz, 1);
}

Then in your C(++) application, you do this:

GLuint uniformbuffer = 0;
glGenBuffers(1, &uniformbuffer);
glBindBufferBase(GL_UNIFORM_BUFFER, 5, uniformbuffer);
glBufferData(GL_UNIFORM_BUFFER, ... here your data ...);
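
For comparison, the “shader storage block” variant would declare layout (std430, binding = 5) buffer MyVertexBuffer { vec4 vertices[]; }; in the shader; on the application side it looks almost the same (rough sketch, size/data stand for your actual vertex data):

GLuint storagebuffer = 0;
glGenBuffers(1, &storagebuffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 5, storagebuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER, size, data, GL_STATIC_DRAW); // your vertex data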

Thanks for the effort again! :slight_smile:

Yeah ok, I didn’t think of how to access the memory. I assumed there should be a possibility…
- Hm, theoretically the storage of a uniform block should be enough (if I didn’t do anything wrong, 64 kB on my computer).
- So will a shader storage block end up being slower than “normal” rendering?

Thx for the example code. I’ll very likely use it :slight_smile:

Why I’m so ‘curious’ about the referencing method: I calculated that with 4 GB of graphics memory you could increase the sight distance by a factor of 2 to 3 km in the world of my application when using the referencing model instead of the “normal” way. Of course that’s only theoretical, because rendering wouldn’t be fast enough anyway…:whistle:

Btw: in your example, really only indices are sent to the rendering pipeline, and not something that pretends to be vertices for the shader. I don’t know much about this strategy, so how would you store the index data in a VBO and render it?

As usual: put the attribute data (in this case: the indices) into array buffers (GL_ARRAY_BUFFER),
then invoke the vertex shader by calling:

glDrawArrays(…);

Use glVertexAttribIPointer(…) instead of glVertexAttribPointer(…) to set up the attribute “pipe” for integer-type variables.

example:

GLuint arraybuffer = 0;
glGenBuffers(1, &arraybuffer);
glBindBuffer(GL_ARRAY_BUFFER, arraybuffer);
glBufferData(GL_ARRAY_BUFFER, ... here the indices you want to stream as attributes ...);
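
A possible continuation of that sketch (attribute location 0 matching the “in_index” attribute from the shader example above; vertexCount is a placeholder for the number of streamed indices):

glEnableVertexAttribArray(0);
glVertexAttribIPointer(0, 1, GL_UNSIGNED_INT, 0, (void*)0);

glDrawArrays(GL_TRIANGLES, 0, vertexCount);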

Ok I’ll try at the weekend. Thanks! :slight_smile:

OK, after hours of fixing infinite problems, I think there’s only one thing left that doesn’t work: uniform blocks containing arrays. If I use a uniform block like this…


uniform vertexBuffer {
   float a;
   float b;
   float c;
};

… it works, but when I try to use an array…


uniform vertexBuffer {
   vec3 vertices[3];
};

… it doesn’t. I assumed that I could fill a FloatBuffer with nine values to initialize this uniform buffer; maybe that’s the problem. Any suggestions? :slight_smile:

You need to either query the array stride or use the std140 layout. Either way, there is likely to be padding between the elements (std140 guarantees it, but it’s also likely with an implementation-dependent layout).

https://www.khronos.org/opengl/wiki/Interface_Block_(GLSL)#Memory_layout

Warning: Implementations sometimes get the std140 layout wrong for vec3 components. You are advised to manually pad your structures/arrays out and avoid using vec3 at all.
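
If you stay with the implementation-dependent layout, you can query the array stride, e.g. (a sketch; assuming “program” is your linked program object and the block member is called “vertices”):

GLuint index = GL_INVALID_INDEX;
const char* name = "vertices[0]";
glGetUniformIndices(program, 1, &name, &index);

GLint strideBytes = 0;   // byte offset between consecutive array elements
glGetActiveUniformsiv(program, 1, &index, GL_UNIFORM_ARRAY_STRIDE, &strideBytes);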

Try using vec4 instead of vec3; declare the uniform block like this:

layout (std140, binding = 0) uniform MyVertexBuffer {
    vec4 Vertices[MAX_VERTICES];
};
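
With std140 and vec4, each array element occupies 16 bytes, so the upload needs 4 floats per point (the 4th can stay unused). Roughly, in C++ (the same applies to a Java FloatBuffer: 4 floats per element; gridPoints is a placeholder for your point data):

std::vector<float> data;
for (const glm::vec3& p : gridPoints) {
    data.insert(data.end(), { p.x, p.y, p.z, 1.0f });   // x, y, z, padding/w
}
glBufferData(GL_UNIFORM_BUFFER, data.size() * sizeof(float), data.data(), GL_STATIC_DRAW);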

Thank you, now it works :slight_smile:

Next question :smiley: :
Is it possible to define an input variable of type unsigned integer for a shader, which gets its values from a vertex buffer object that stores short values? So something like an automatic conversion between the buffer’s short content and the integer variable in the shader?

By the way: does the padding problem also exist for vec2 arrays? Should I use vec4s there too? But isn’t that again a waste of memory…?:whistle:

Do you mean a “vertex attribute”? If so: yes.
https://www.opengl.org/sdk/docs/man4/html/glVertexAttribPointer.xhtml

It all depends on the “memory layout”; read the section I’ve linked previously.
There are 10 rules for the “standard memory layout” (std140); you can read them on pages 137/138.

Ok, I’ll have a look at those pages. :slight_smile:

Back to the vertex attributes: so when I, for example, define a uint in the shader because there’s no better-suited integer type, could I pass GL_SHORT to the corresponding vertex-attrib call? Will the value be converted automatically and correctly? :confused:

I’d try glVertexAttribIPointer(…, GL_UNSIGNED_SHORT, …).
Since there is no 2-byte integer type in GLSL, I assume it will be converted into a 4-byte integer type (int or uint).
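
For example (sketch):

// the buffer holds GLushort values; the shader declares "in uint in_index"
glEnableVertexAttribArray(0);
glVertexAttribIPointer(0, 1, GL_UNSIGNED_SHORT, 0, (void*)0);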

Yeah works.

Thank you :slight_smile: