Is there a better way to send data to a shader?

I’m using transform feedback to perform physics calculations on a set of thousands of particles. I’m sending particle data to the shader in the form of a 256x256 texture but my platform only supports 16-bit floating point textures. Is there a more efficient way to send this amount of data to the shader?

I have read about a particle system that also uses transform feedback: it captures the “world geometry” (some objects in the scene) and applies particle <-> world-geometry collision detection.
It uses a “uniform samplerBuffer” in the shader to access the whole geometry of the world, and “streams” the particle data (position, velocity, …) through as regular vertex attributes.
It also uses “buffer flip-flopping”: every frame the system writes the particle data into one buffer (“flop”) while reading the data from the other buffer (“flip”).
Therefore (I assume) it switches the transform feedback buffer each frame, or maybe it uses 2 different transform feedback objects… no clue.
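That flip-flop scheme is easy to sketch on the CPU. A minimal sketch (plain C, hypothetical names, no OpenGL): each frame reads particle state from one buffer, writes the integrated result into the other, then the roles swap, which is what alternating transform feedback buffers does on the GPU:

```c
#include <assert.h>

#define N 4  /* particles; 1D position/velocity for brevity */

typedef struct { float pos[N]; float vel[N]; } Buffer;

/* One simulated frame: read particle state from "flip", write the integrated
   state into "flop". On the GPU this would be the transform feedback pass. */
static void step(const Buffer *flip, Buffer *flop, float dt) {
    for (int i = 0; i < N; ++i) {
        flop->vel[i] = flip->vel[i];                  /* no forces in this sketch */
        flop->pos[i] = flip->pos[i] + flip->vel[i] * dt;
    }
}

/* Run several frames, swapping which buffer is read ("flip") and which is
   written ("flop") each frame; returns the index holding the latest state. */
static int run_frames(Buffer buf[2], int frames, float dt) {
    int read = 0;
    for (int f = 0; f < frames; ++f) {
        step(&buf[read], &buf[1 - read], dt);
        read = 1 - read;                              /* swap roles */
    }
    return read;
}
```

The swap itself is just flipping an index; on the GL side it would mean binding the other buffer as the transform feedback target (or using a second transform feedback object) each frame.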

You can read about that in “OpenGL Programming Guide (8th Edition)”.

I managed to send a few values to the shader as a

uniform vec4 data[255];

But the shader crashes if I declare the array to have more than 255 values. Any hints?

You want to access a bunch of particle data in your vertex shader, right?
As I’ve said, declare a uniform samplerBuffer MyParticleData; in your vertex shader and bind the buffer to GL_TEXTURE_BUFFER:
https://www.opengl.org/wiki/Buffer_Texture


int location = glGetUniformLocation(program, "MyParticleData");
glUniform1i(location, 0);                             // sampler reads from texture unit 0
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_BUFFER, m_texturebuffer);    // bind the texture, not the buffer
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, m_particlebuffer_flip);	// and next frame: m_particlebuffer_flop

m_texturebuffer is a texture object (empty: just generated, nothing allocated, not yet bound to any target).
m_particlebuffer_flip contains your particle data; by calling glTexBuffer(…) you make it accessible via the texture m_texturebuffer.
Then bind m_texturebuffer to GL_TEXTURE_BUFFER, and you can access your data as floating-point RGBA texels in your shader:


uniform samplerBuffer MyParticleData;   // 1D texture, consider it as an array of texels

void main()
{
    int index = 0;     // e.g. gl_VertexID, or any index into the buffer

    vec4 data_in_texel = texelFetch(MyParticleData, index);
    // ... use data_in_texel ...
}

EDIT:

If you can only access 16-bit textures, try using another internal format:
https://www.opengl.org/wiki/Buffer_Texture#Image_formats

Why? If you’re using transform feedback, presumably you’re using a vertex shader where each particle is a vertex. In which case, it would seem more logical to use vertex attributes.

What if the vertex shader is calculating (for example) gravity?
To do that, it must access all other particles in the scene, summing F = G·m1·m2 / distance² over every other particle.

I would guess that if you’re limited to FP16 textures, and limited to 255 uniforms, then your target platform is just not gonna support any potentially more efficient ways.

I’m using a uniform buffer object and now it seems I’m limited to 1024 vec4s.

layout (std140) uniform Data
{
    vec4 data[1024];
};

[QUOTE=john_connor;1283137]
If you can only access 16-bit textures, try using another internal format:
https://www.opengl.org/wiki/Buffer_Texture#Image_formats[/QUOTE]

I can do 32-bit ints but not floats.

[QUOTE=john_connor;1283139]What if the vertex shader is calculating (for example) gravity?
To do that, it must access all other particles in the scene, summing F = G·m1·m2 / distance² over every other particle.[/QUOTE]

If you’re doing computations where every particle needs to know the location of every other particle, then shaders are probably not going to be a particularly good solution for you. Not unless compute shaders are available.

Wait a minute. What OpenGL implementation are you running on that supports transform feedback (primarily a GL 3.x feature), but doesn’t support 32-bit floating point textures? They’re required texture formats for GL 3.2 implementations.

ES 3.0 on iOS.

All iOS devices (starting with the SGX 535 in iOS4) support sampling from 32-bit floating point textures, via OES_texture_float in ES2 and inherently in ES3.
They do not, however, support filtering 32-bit floating point textures or rendering to them. Both filtering and rendering are supported for 16-bit float (starting with the SGX 543 in iOS5.)

[QUOTE=arekkusu;1283149]All iOS devices (starting with the SGX 535 in iOS4) support sampling from 32-bit floating point textures, via OES_texture_float in ES2 and inherently in ES3.
They do not, however, support filtering 32-bit floating point textures or rendering to them. Both filtering and rendering are supported for 16-bit float (starting with the SGX 543 in iOS5.)[/QUOTE]

Ok, I set up a 32-bit floating point texture and it’s still reading it as a half float.

Input: 0.001000, 0.000100, 0.000010, 0.000001
Output: 0.000999, 0.000100, 0.000000, 0.000000

So I’m losing quite a few bits of precision for some reason.

I tried

CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBufferCompute, NULL, GL_TEXTURE_2D, GL_RGBA, computeTextureSize, computeTextureSize, GL_BGRA, GL_FLOAT, 0, &videoTextures[VIDTEX_COMPUTE]);

and

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 128, 128, 0, GL_RGBA, GL_FLOAT, particleData);

And both of those are giving me half floats, so apparently iOS only supports half floats?

What makes you think that reading it is the problem? You’re using OpenGL ES. And ES has a number of precision qualifiers for variables. So what you read in a shader is only the first step of what you write.

Show the actual shader that generated the output. The full source code for it.

What Alfonse means is: the ES3 GLSL spec says that sampler2D defaults to “lowp” precision, so unless you explicitly declare it “highp”, you shouldn’t expect to get a 32-bit float.

(And if you’re using 16-bit float textures, you should explicitly declare the sampler “mediump”, and not depend on internal details of the compiler / GPU architecture to do that. SGX & A7/A8/A9 will behave differently.)

(((Of course, this would be obvious if Apple showed you the compiled ISA of your shader, and let you single-step through it.)))
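For what it’s worth, a hedged sketch of what the explicit declaration looks like in an ES 3.0 vertex shader, embedded as a C string (identifier names here are made up):

```c
#include <assert.h>
#include <string.h>

/* ES 3.0 vertex shader source as a C string. Without the explicit "highp",
   the sampler defaults to low precision and the fetched values get
   quantized, regardless of the texture's internal format. */
static const char *vs_source =
    "#version 300 es\n"
    "uniform highp sampler2D ParticleData;\n"
    "out vec4 v_data;\n"
    "void main() {\n"
    "    v_data = texelFetch(ParticleData, ivec2(0, 0), 0);\n"
    "    gl_Position = vec4(0.0);\n"
    "}\n";
```

The string would be handed to glShaderSource as usual; the only point of interest is the highp qualifier on the sampler declaration.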

[QUOTE=Alfonse Reinheart;1283163]What makes you think that reading it is the problem? You’re using OpenGL ES. And ES has a number of precision qualifiers for variables. So what you read in a shader is only the first step of what you write.

Show the actual shader that generated the output. The full source code for it.[/QUOTE]

It’s the vertex shader. There are no precision qualifiers.

But I got it working on my iPad. My iPhone gives the precision error. Could be either a hardware or software issue or maybe it’s intended.

Thanks, everyone.