Buffer reads to shader

So I have set up a buffer which I use for instanced data. It is filled with a vector’s data; one vector element contains 1 GLuint and 1 GLushort, making its size 6 bytes. I then set up the vertex attribute correctly: 3 components of type GL_UNSIGNED_SHORT, stride 0, offset 0 and divisor 1. The location is hard-coded, so it is definitely not the problem. The array pointer function used is glVertexAttribIPointer.
In the shader I use a uvec3 to receive the values. My understanding is that OpenGL reads each pair of bytes, casts it to an unsigned integer, and places it into the uvec3: the first 2 bytes go into uvec3.x, the second pair into uvec3.y and the third into uvec3.z.
The data in the buffer seems to be intact, since I have read it back and checked that it is in the right order and holds the right values.
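
For reference, this is roughly how the attribute is set up (the buffer and location names here are just placeholders):

    // Assumed: instanceBuffer already holds the tightly packed data, 6 bytes per instance.
    glBindBuffer(GL_ARRAY_BUFFER, instanceBuffer);

    // 3 components of GL_UNSIGNED_SHORT, tightly packed (stride 0), offset 0,
    // advancing once per instance.
    glVertexAttribIPointer(attribLocation, 3, GL_UNSIGNED_SHORT, 0, (void*)0);
    glVertexAttribDivisor(attribLocation, 1);

    // Shader side: layout(location = N) in uvec3 voxelData;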

What makes this even weirder is that the first instance’s uvec3.x value should be 0, but it’s not.
Shader version is 430.

So help would be appreciated.

EDIT:
The uvec3.x value is 16383, which is 2^14 - 1, and that is the GLuint’s value when it is uploaded to the buffer. So instead of uvec3.x being 0, which is what the first 2 bytes should combine to, the read starts from the beginning of the GLuint.

It is filled with a vector’s data; one vector element contains 1 GLuint and 1 GLushort, making its size 6 bytes.

Stop. Don’t do that.

Vertex attributes should be at least 4-byte aligned. In order to maintain that alignment, you’ll have to throw away 2 bytes anyway. So just make it 2 GLuints and your problem is solved.
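
A minimal sketch of that padded layout, with placeholder names (presumably the attribute then becomes 2 × GL_UNSIGNED_INT and a uvec2 in the shader):

    // Two full GLuints per instance: 8 bytes, 4-byte aligned.
    // The second uint carries what used to be the lone GLushort;
    // its upper 16 bits are simply unused.
    struct InstanceData
    {
        GLuint packed;   // the original 32-bit value
        GLuint material; // only the low 16 bits are meaningful
    };

    // Attribute setup then reads two unsigned ints per instance:
    //   glVertexAttribIPointer(loc, 2, GL_UNSIGNED_INT, 0, (void*)0);
    //   glVertexAttribDivisor(loc, 1);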

However, if you absolutely insist on doing this:

In the shader I use a uvec3 to receive the values. My understanding is that OpenGL reads each pair of bytes, casts it to an unsigned integer, and places it into the uvec3: the first 2 bytes go into uvec3.x, the second pair into uvec3.y and the third into uvec3.z.

You need to account for endian issues. When you wrote that GLuint to memory, you probably wrote it as a GLuint, not as a struct containing two GLshorts. Which means that your GLuint will be written as a 32-bit value. But it’s being read as two 16-bit values.

If your CPU is little-endian, then a 32-bit value of 0xFFFEFDFC will look like this in memory:


| byte 1 | byte 2 | byte 3 | byte 4 |
|  0xFC  |  0xFD  |  0xFE  |  0xFF  |

However, if you interpret this as two 16-bit little-endian values, then you will get value 1 as 0xFDFC and value 2 as 0xFFFE.
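
A small self-contained check of that byte layout (plain C++, nothing GL-specific):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        std::uint32_t value = 0xFFFEFDFCu;

        // Look at the raw bytes as they sit in memory.
        unsigned char bytes[4];
        std::memcpy(bytes, &value, sizeof(bytes));
        std::printf("bytes:  %02X %02X %02X %02X\n",
                    bytes[0], bytes[1], bytes[2], bytes[3]); // FC FD FE FF on little-endian

        // Reinterpret the same 4 bytes as two 16-bit values, which is what a
        // GL_UNSIGNED_SHORT attribute fetch effectively does.
        std::uint16_t halves[2];
        std::memcpy(halves, &value, sizeof(halves));
        std::printf("halves: %04X %04X\n", halves[0], halves[1]); // FDFC FFFE on little-endian
    }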

[QUOTE=Alfonse Reinheart;1264018]
You need to account for endian issues. When you wrote that GLuint to memory, you probably wrote it as a GLuint, not as a struct containing two GLshorts. Which means that your GLuint will be written as a 32-bit value. But it’s being read as two 16-bit values.

If your CPU is little-endian, then a 32-bit value of 0xFFFEFDFC will look like this in memory:


| byte 1 | byte 2 | byte 3 | byte 4 |
|  0xFC  |  0xFD  |  0xFE  |  0xFF  |

However, if you interpret this as two 16-bit little-endian values, then you will get value 1 as 0xFDFC and value 2 as 0xFFFE.[/QUOTE]

So I can just write the values as

union intShort
{
    GLuint   ints;
    GLushort shorts[2];
};

struct values
{
    intShort state;
    GLushort shorty;
};

And that reads them right?

EDIT: The thing is that these values represent a voxel’s status, id, stress and material, so if I can save memory in any way then I should aim for that. 64*64*64*6 is 1.5 MB worth of data, and filling the GPU with this multiple times will reserve quite a lot of memory.

And that reads them right?

There’s no specification to guarantee that (neither C nor C++ says what the result of that kind of type punning through a union is), but generally speaking, yes, that should work.
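
For what it’s worth, one way to sidestep the question entirely is to copy the bytes out explicitly with memcpy, which is well-defined in both C and C++; a sketch, keeping your names:

    #include <cstring>

    // Write one 6-byte instance into a raw byte buffer at 'dst',
    // without relying on union type punning or on struct layout.
    inline void writeInstance(unsigned char* dst, GLuint state, GLushort shorty)
    {
        std::memcpy(dst,     &state,  sizeof(state));  // bytes 0..3
        std::memcpy(dst + 4, &shorty, sizeof(shorty)); // bytes 4..5
    }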

64*64*64*6 is 1.5 MB worth of data

No, it’s 0.196608 MegaBytes. 6 bits per voxel, and 8 bits per byte. So if you use 8 bits per voxel, it takes up 0.262144 MB.

Also, please tell me you’re not trying to render a Minecraft clone through instanced rendering of cubes…

[QUOTE=Alfonse Reinheart;1264021]There’s no specification to guarantee that (neither C nor C++ will say what the result of such implicit casting is), but generally speaking, yes, that should work.
[/QUOTE]
Ok will test it, thanks.
EDIT: Didn’t work. Shame. I guess I have to create 3 GLushorts and combine 2 of them when necessary.
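
A sketch of that split/combine, assuming the low half of the 32-bit value goes into the first GLushort (which, on little-endian, matches the byte order the attribute read expects):

    // Per-instance data as three GLushorts: sizeof is exactly 6, no padding,
    // unlike a struct holding a GLuint, which typically gets padded to 8 bytes.
    struct VoxelShorts
    {
        GLushort lo;       // low 16 bits of the packed value
        GLushort hi;       // high 16 bits of the packed value
        GLushort material;
    };

    inline void splitPacked(GLuint packed, VoxelShorts& out)
    {
        out.lo = static_cast<GLushort>(packed & 0xFFFFu);
        out.hi = static_cast<GLushort>(packed >> 16);
    }

    // Recombine on the CPU when the full 32-bit value is needed
    // (in GLSL: uint packed = voxelData.x | (voxelData.y << 16);).
    inline GLuint combinePacked(const VoxelShorts& v)
    {
        return static_cast<GLuint>(v.lo) | (static_cast<GLuint>(v.hi) << 16);
    }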

No, it’s 0.196608 MegaBytes. 6 bits per voxel, and 8 bits per byte. So if you use 8 bits per voxel, it takes up 0.262144 MB.

6 bytes per voxel, not 6 bits: 18 bits for ID (64 * 64 * 64 = 2⁶ * 2⁶ * 2⁶ = 262144 positions), 6 bits for state, 8 bits for stress and 16 bits for material, which adds up to 48 bits, i.e. 6 bytes.
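
For concreteness, one way those fields could be packed into the GLuint (the bit order here is just an example of the layout, not necessarily the exact one I use):

    // bits  0..17  id      (18 bits, 0..262143)
    // bits 18..23  state   ( 6 bits)
    // bits 24..31  stress  ( 8 bits)
    // material lives in the separate 16-bit value.
    inline GLuint packVoxel(GLuint id, GLuint state, GLuint stress)
    {
        return  (id     & 0x3FFFFu)
             | ((state  & 0x3Fu)  << 18)
             | ((stress & 0xFFu)  << 24);
    }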

Also, please tell me you’re not trying to render a Minecraft clone through instanced rendering of cubes…

I promise that the instancing is only temporary, and I’m not rendering whole blocks since that is an insane performance drop (tested it, not good). I have read that people should build batched meshes from the voxel data, but my main target is to use OpenCL, which shares everything with OpenGL, and I use voxels for this since they are easy to scale when measuring performance at different sizes.