Hi all. I have data where there are many ‘z’ values for each (x, y). This typically happens with soil samples that get a chemical analysis (so: x, y, z, chem1, chem2, … chem9999 …).
PS: Sorry for the crazy C-like code, but posting real C code doesn’t work in some places here.
So we store this data as

struct xy {
    double x, y;
};

struct xy data[large_number];

and then

double chemical_reading_1[large_number];
double chemical_reading_2[large_number];
…
double chemical_reading_9999[large_number];
I cannot see any way to use vertex buffers with this “distributed” format. I’d like to load the xy values, then each “z” property, and finally render each “surface” without manually looping:
for (int i = 0; i < NNNN; i++) {
    glVertex3d(data[i].x, data[i].y, chemical_reading_1[i]);
}
Actually, my “real” problem is more difficult because I really have
struct xyz {
    double x, y, z;
};
and I’d really like to be able to tell OpenGL to access data at any offset from the beginning of a structure: that is, to pass in the size of the structure and the byte offset of a particular property within it. So:
struct xyz *array;
double *chem7;

/* offsetof is standard C, from <stddef.h> */
size_t stride_xyz = sizeof(struct xyz);
size_t offset_x   = offsetof(struct xyz, x);
size_t offset_y   = offsetof(struct xyz, y);
size_t stride_z   = sizeof(double);
size_t offset_z   = 0;

glSetXBuffer(array, stride_xyz, offset_x);
glSetYBuffer(array, stride_xyz, offset_y);
glSetZBuffer(chem7, stride_z, offset_z);
so:
param 1 is the address of the buffer,
param 2 is the stride length in bytes,
param 3 is the offset within each stride, in bytes.
Hoping for too much?
The other reason I hope for this is that I can have tens or hundreds of millions of points, so having to call glVertex per point is really slow.
If the new standard can handle this, fantastic. If not can it be considered?
Finally, so many people store their data in so many different ways that if we could program OpenGL like this, everyone would get much better performance.