2D Texture Data Storage Per Vertex

My problem is that I need to give each vertex a certain amount of constant data. For example, 4 integers, where each integer has a float between 0 and 1 associated with it.

I also need to associate per-frame data with each vertex that corresponds to those constant integers. For example, integer 1 has an updating value of 0.5, which can vary per frame between 0 and 1.

I believe 2D textures are the only way to achieve this if I want to stay away from Texture Buffer Objects. In a former post, I was informed TBOs are for newer hardware, and I'm trying to make this cross-platform but, more importantly, able to run on most graphics cards.

Are 2D textures the way to proceed with this? If so, is it possible to assign the x-coordinate as the vertex index, so I could essentially index them, and use the y-coordinates as the list of values needed? If this is possible, how would I pass in the data so it is not normalized to 0…1?

I want to thank everyone in advance; I have gained so much knowledge from my previous posts. I'm glad this information can be accumulated for people to find in the future.

My problem is that I need to give each vertex a certain amount of constant data. For example, 4 integers, where each integer has a float between 0 and 1 associated with it.

I also need to associate per-frame data with each vertex that corresponds to those constant integers. For example, integer 1 has an updating value of 0.5, which can vary per frame between 0 and 1.

You mean here that integer vertex attributes are used as an index into an array of floats?

I believe 2D textures are the only way to achieve this if I want to stay away from Texture Buffer Objects. In a former post, I was informed TBOs are for newer hardware, and I'm trying to make this cross-platform but, more importantly, able to run on most graphics cards.

Yes, VTF is more widely supported, but if I am not mistaken, only on unified shader architectures, which correspond to quite recent graphics cards.
You can get more information about this by googling and also on the OpenGL wiki:

http://www.opengl.org/wiki/Vertex_Texture_Fetch

Can you elaborate a bit more on what you are intending to do with your shader, so that we can be sure that VTF is “the only way”?

Note that if you need several scalar attributes per vertex, you can pack them in a vec4 since the maximum number of components per attribute is 4.
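For instance, something like this, just as a sketch (the attribute name "packedData" and the arrays g_shaderProgram and packedScalars are made-up names):

// Hypothetical sketch: four per-vertex scalars packed into one vec4 attribute.
GLint loc = glGetAttribLocation(g_shaderProgram, "packedData");
glEnableVertexAttribArray(loc);
// 4 components per vertex, float type, not normalized, tightly packed client array
glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, 0, packedScalars);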

Now, for textures, it depends on the nature of the attribute data you need in your vertex shader.
For example, you can use a 2D texture where each row corresponds to a vertex (indexed using an integer attribute per vertex) and the texels of that row hold the vertex's attribute list.
You will also need to pass the list size as another vertex attribute so that you fetch only the relevant data.
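As a rough sketch of that idea (assuming the ARB_texture_float extension and made-up names attribData, maxListSize and vertexCount), a 32-bit float internal format also keeps the values from being normalized to 0…1:

// Hypothetical sketch: one texture row per vertex, one RGBA texel per list entry.
GLuint attribTex;
glGenTextures(1, &attribTex);
glBindTexture(GL_TEXTURE_2D, attribTex);
// exact texel fetches are wanted, so disable filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// GL_RGBA32F_ARB (ARB_texture_float) stores unnormalized floats
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, maxListSize, vertexCount,
             0, GL_RGBA, GL_FLOAT, attribData);

On the GLSL side, the vertex shader would then typically fetch the row with texture2DLod, using the per-vertex integer index to compute the y texture coordinate.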

You mean here that integer vertex attributes are used as an index into an array of floats?

Precisely! I need to be able to assign each vertex some number n of these integer indices, each of which corresponds to a float; not an array of floats, but rather int/float pairs. The floats will range from 0…1.

Can you elaborate a bit more on what you are intending to do with your shader, so that we can be sure that VTF is “the only way”?

What I have coded is a program and shader that will find the vertices surrounding any given set of n vertices on a model. So I end up with, for example, 10 search vertices, and each search vertex has some number of “found” vertices. For each found vertex, I need to store which search vertices it corresponds to. If a certain vertex was within the radii of the 1st and 2nd search vertices, then I would want to store the integers 1 and 2 with the “found” vertex. Then, with each stored integer, I need to store a float between 0…1.

Finally, each render/frame I need an array that contains these 10, or however many, search vertices, since their “value” will change each frame. The index (which element it is) stays constant, but the corresponding float value for each search vertex will vary. By correlating these values per vertex, I will be able to perform the necessary transforms in real time.

I hope I described this clearly, if not let me know!

I need to be able to assign each vertex some number n of these integer indices, each of which corresponds to a float; not an array of floats, but rather int/float pairs. The floats will range from 0…1.

The more I read it, the less I understand… :slight_smile:

Are float values stored in an array?
You need several integers to determine this float value?
I do not understand the pair thing.
If only one float corresponds to each integer, why not pass floats instead of integers as vertex attributes?

The searching pass that finds the ten vertex neighbors is preprocessed by your application program (assuming that the vertex position data is static), isn't it?

The more I read it, the less I understand…

I am trying! I’ll attempt to make my problems and implementation clearer.

Are float values stored in an array?

If you're referring to my main application, they're stored in a map now, I believe. If you mean in the shader itself, I am trying to have each vertex access its specific integer/float pair list.

You need several integers to determine this float value?

When I perform calculations before runtime, I determine all vertices within each search vertex's radius. Prior to rendering, I want to assign each vertex the integer ID of the search vertex, and then that integer will have a corresponding floating-point value: the distance between the two vertices. That way, during rendering, each vertex can access the list of search-vertex IDs that found it and the distance from each one. The int/float pair.
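For illustration only, the CPU-side data I have in mind looks something like this (the names are hypothetical):

// Hypothetical sketch of the per-vertex preprocessed data.
typedef struct {
    int   searchID;   // ID of the search vertex whose radius contains this vertex
    float distance;   // distance to that search vertex, kept in the 0..1 range
} SearchPair;
// each model vertex then owns a small list of SearchPair entries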

I do not understand the pair thing.
If only one float corresponds to each integer, why not pass floats instead of integers as vertex attributes?

The number of search vertices can go up into the hundreds, which means that each search vertex can have hundreds of corresponding vertices within its radius. Each time a vertex is “found”, I need to store which search point it is within range of and the distance from that point.

The searching pass that finds the ten vertex neighbors is preprocessed by your application program (assuming that the vertex position data is static), isn't it?

Yep, as mentioned before.

Thanks for helping out dletozeun.

I think I understand your problem now.

I think that the integer attributes you want to give to your vertex shader are useless; you would be better off passing the float values directly, since, at least from the programmer's point of view, they occupy the same amount of memory.

Anyway, IMO, you have to rethink the arrangement of the float attribute data so that it fits hardware requirements and limitations.
I am not sure that, even with VTF, your hardware would handle such amounts of attribute data.
I mean, almost every vertex has a combination of a certain number (the neighborhood size) of float values. You simply can't create a 2D texture containing, for example, one vertex's attribute list per row: it is too tightly tied to the vertex count and would end up with huge textures.

Perhaps you have to preprocess the vertex distance lists again in a way that requires much less attribute information for each vertex. What are you doing with these distances?

I think that the integer attributes you want to give to your vertex shader are useless; you would be better off passing the float values directly, since, at least from the programmer's point of view, they occupy the same amount of memory.

I am not sure how else I could refer directly to which “search” vertex value is changing.
Example: I have 24 search vertices. Each search vertex finds 450 vertices within its radius. For each of those vertices that is within any search vertex's radius, I have to store the search vertex number and the distance between the two vertices. Each vertex on the model could, therefore, be within the radius of many search vertices. Each pair has to be stored so that at runtime, when “search” vertex 1 has a float value of 0.5, any vertex that stores that integer will multiply its distance by the 0.5 value.
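In other words, per vertex and per frame, something like the following (a sketch only, reusing the hypothetical int/float pair layout from earlier; contribution[] is a made-up array holding this frame's value for each search vertex):

// Hypothetical sketch: accumulate this vertex's weight for the current frame.
float weight = 0.0f;
for (int i = 0; i < pairCount; ++i)
    weight += pairs[i].distance * contribution[pairs[i].searchID];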

…you have to rethink the arrangement of the float attribute data so that it fits hardware requirements and limitations.

If each vertex has approximately 20 integers and each integer has a float value, that would be approximately 40 values total: 20 ints and 20 floats. I hope this isn't too large a data set, since I can't fathom how powerful shaders and graphics effects are done without such a feature.

…almost every vertex has a combination of a certain number (the neighborhood size) of float values. You simply can't create a 2D texture containing, for example, one vertex's attribute list per row: it is too tightly tied to the vertex count and would end up with huge textures.

First, are you stating that each vertex would have roughly the same amount of float data?
Second, why could I not store the list of 20 ints and 20 floats per row of a texture?
Finally, I thought large textures would be processed fine? I thought GPUs were designed to use texturing to process data/texture information.

Perhaps you have to preprocess the vertex distance lists again in a way that requires much less attribute information for each vertex. What are you doing with these distances?

I can't see any way I could cut down the total number of ints and floats needed per vertex. I have to have each vertex access the changing per-frame data of the “search” vertices and multiply it by the individual distance between the two points.

Thanks again for helping me out with this. I hope we can figure this out!

I have to store the search vertex number and the distance between the two vertices

OK, I did not understand it before this post. I thought that your vertex shader only required the distance data and not the vertex IDs.

First, are you stating that each vertex would have roughly the same amount of float data?

No, that's why I was referring to the “neighborhood size”. I meant the number of vertices within the radii of a particular vertex.

Second, why could I not store the list of 20 ints and 20 floats per row of a texture?

I was implying that the number of rows in such a texture would correspond to the number of vertices. I do not know how many vertices you have to process, but assuming you have 1000 of them and a neighborhood size of up to 20, you would already end up with a 1000x20 texture! As an example, high-end graphics hardware supports up to 4096x4096 2D textures.
The solution depends on the scale of the data you have to process. If the vertex count is really too much for the hardware, you may have to split your vertex data into multiple batches, each with its own texture.
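As a side note, the actual limit of the card you are running on can be queried at runtime, for example:

// Query the largest 2D texture dimension the current GL context supports.
GLint maxTexSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
// maxTexSize is e.g. 4096 or 8192 on recent hardware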

Your problem really sounds like GPGPU programming. Do you use the graphics card for rendering or just to process data? In the latter case, you may be interested in OpenCL or CUDA, which would perhaps better fit your needs.

8192x8192 is standard on modern GPUs, so there is lots of texture room. That could be sufficient for about 10 million vertices if you are creative enough. All that is needed would be a vertex ID passed as a vertex attribute.

The question is, how much of this data is static and how much dynamic? Updating an 8k texture in real time would be a different story…

As an example, high-end graphics hardware supports up to 4096x4096 2D textures.

I was looking for that information earlier today; it's kind of limiting for what I was trying to use it for.

Do you use the graphics card for rendering or just to process data?

Right now for example here is a screenshot:

Right now I pass a uniform array, which holds the contribution values, into the vertex shader; the data is updated per frame. For now I only use one of these values for each vertex. Each vertex stores in its UV coordinate a total distance float value from the search vertices. This is not how I want it to work, but for the purposes of illustration it shows the outcome. Right now, I multiply the float value in the UV coordinate by the activation to determine a final color.
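For reference, the per-frame upload I describe is along these lines (a sketch only; the uniform name "contribution" and the arrays numSearchVertices and contributionValues are placeholders):

// Hypothetical sketch: upload this frame's contribution values as a uniform float array.
GLint loc = glGetUniformLocation(g_shaderProgram, "contribution");
glUseProgram(g_shaderProgram);
glUniform1fv(loc, numSearchVertices, contributionValues);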

What I need is for each of the activation points shown to have its own unique contribution that varies per frame. Therefore, any vertex within range will need to be multiplied by the contributions of one or more search vertices in order to determine its final color, which is looked up in a 1D texture.

So since it's not purely computational and rendering is involved, I can't use OpenCL or CUDA, I'm assuming?

Since you are already determining on the CPU, for each vertex, the neighborhood that may contribute to its final color, you do not seem far from also computing on the CPU side the final vertex color resulting from the contributions of all its activated vertex neighbors.

About OpenCL and CUDA, I have not had the chance to play with them yet, but AFAIK OpenCL (which is cross-platform, BTW) can be combined with OpenGL. So it may be worth trying. :slight_smile:

dletozeun,

Right now I have the vertices that are to be colored and their associated information on the CPU side. The problem is that I want to minimize the CPU load on the system, so I used glCallList to optimize the rendering of the brain model. I use a 1D texture in the vertex shader to find the associated color and pass that to the fragment shader.

Is there a way you can think of where I could pass the calculated value per vertex, using a call list, without overloading the CPU?

Also, if I can store even a 2048x2048 texture, with all that memory it seems as if I could store enough values for my purposes?

Edit: OpenCL is not supported on Intel Macs, which is what I'm programming on currently, and only on newer graphics cards. Unfortunate :frowning:

Is there a way you can think of where I could pass the calculated value per vertex, using a call list, without overloading the CPU?

When you create a display list, all the vertex attributes specified will be passed to the vertex shader; however, you can't change these attributes without rebuilding the display list, which is obviously not interesting for performance reasons.

To use dynamic vertex attributes, you need to create a vertex attribute array and render your vertices using a VBO.

Also, if I can store even a 2048x2048 texture, with all that memory it seems as if I could store enough values for my purposes?

Probably… the problem is how to store your data efficiently in a way that you can easily fetch it in a vertex shader.

Thanks to your help, I think I've realized a solution to this problem! If I can use a dynamic vertex attribute and change its value per frame, then I can simply use the values on the CPU side to calculate the final color value and merely pass this into the shader, which uses the 1D texture to find the appropriate color.

Any way you could point me in the right direction on how to use dynamic vertex attributes with a VBO? Will I have to do away with the call list entirely? If so, will using a VBO provide the same or better performance compared to a call list?

Thanks again!

From one of your old posts, dletozeun, on VBO Question:

I found that X300 supports the vertex_buffer_object extension from the realtech-vr glView database

I think you are just misusing VBOs here. If you need to update the entire vertex buffer data every frame, or almost every frame, without stalling the application (depending on how much data needs to be uploaded), you can:

  • Simply call glBufferData with the new vertex data to trash the current buffer object's storage and set its new data for the next drawing call. This way you are sure not to stall the program even if the hardware is still using your BO. But take care of memory consumption.
  • If you need to do sophisticated things like writing little chunks of data, you can mirror your data in two vertex buffers: the first one used for rendering, the second one mapped for data updates. Once the second buffer can be unmapped, switch between them and use the second one for rendering. Ping-pong between the VBOs each time you need to update vertex data.

Note that with these two approaches, the vertex buffers are not guaranteed to be updated every frame, but as soon as possible, almost without affecting the framerate.

I understand this applies to my situation, but is there an efficient way to update just the per-vertex attribute each frame using a VBO? I realize computing the single float value on the CPU will not cause much of a performance hit; then all I need to figure out is how to update that data per vertex each frame.

// position vertex attribute
GLint index = glGetAttribLocation(g_shaderProgram, "position");
glEnableVertexAttribArray(index);
// second argument is the component count per vertex (e.g. 3 for an xyz position),
// followed by the type, normalized flag, stride, and offset into the bound VBO
glVertexAttribPointer(index, 3, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));

If I create two VBOs, one for the list of vertex positions and another bound to the attribute location as in the above code, will this work? Could I update the vertex attribute array each render but keep the VBO for the positions static?

Yes, you can use distinct input streams. This will likely yield better performance in this case.

The buffer object setup is missing in the above code. You need to create two VBOs, one for vertex positions and one for vertex attributes, and fill them by calling glBufferData.

You may also need another VBO for vertex normals.

So, before the above piece of code, you at least have to bind the vertex buffer filled with your custom attributes, and that will be fine.
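Something along these lines, purely as a sketch (positionVBO, attribVBO, numVerts, positions are placeholder names):

// Hypothetical sketch of the missing buffer object setup.
GLuint positionVBO, attribVBO;
glGenBuffers(1, &positionVBO);
glGenBuffers(1, &attribVBO);

// static vertex positions, uploaded once
glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
glBufferData(GL_ARRAY_BUFFER, numVerts * 3 * sizeof(GLfloat), positions, GL_STATIC_DRAW);

// per-vertex custom attribute, updated every frame
glBindBuffer(GL_ARRAY_BUFFER, attribVBO);
glBufferData(GL_ARRAY_BUFFER, numVerts * sizeof(GLfloat), NULL, GL_STREAM_DRAW);

// each glVertexAttribPointer call must be issued while the corresponding buffer
// is bound; its last argument is then an offset into that VBO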

Could I update the vertex attribute array each render but keep the VBO for the positions static?

Yes, you can; you simply need to specify the buffer object usage hint when calling glBufferData on a bound VBO. This way, the driver can optimize your buffer objects and, for example, put a static one directly in video memory.

For VBO updates every frame, or almost every frame, consider using multiple VBOs as I said in the “ping-pong” section to avoid application stalls.
To update the data, you may map the buffer by calling glMapBuffer, or use glBufferSubData if you don't want to do sophisticated things with the buffer data.
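Putting those two points together, a minimal per-frame update could look like this (again just a sketch with placeholder names):

// Hypothetical sketch: refresh the dynamic attribute VBO each frame.
glBindBuffer(GL_ARRAY_BUFFER, attribVBO);
// re-specifying the storage "orphans" the old data so the driver need not stall
glBufferData(GL_ARRAY_BUFFER, numVerts * sizeof(GLfloat), NULL, GL_STREAM_DRAW);
// then upload this frame's per-vertex float values
glBufferSubData(GL_ARRAY_BUFFER, 0, numVerts * sizeof(GLfloat), frameValues);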

dletozeun solved this. I wish I could close the topic somehow. For anyone who references this in the future: the data is stored in a vertex attribute buffer object (VBO). This way it can be dynamically updated per frame, and VBOs are more widely used than call lists; the OpenGL 3.0 spec states that display lists are deprecated.

Using a VBO with GL_STATIC_DRAW usage allows for optimized rendering, since there is no need to change the model.

Closed