View Full Version : glsl array access index - ATI vs NVIDIA

01-12-2005, 02:07 PM
Does someone know which of ATI or NVIDIA is right here?
For array access within a fragment shader (at least, I didn't try in a vertex shader), the index needs to be known at compile time on NVIDIA cards (5700 (61.77) and 6800 (66.93)) but not on an ATI 9800XT?

01-13-2005, 01:44 AM
I have been experiencing problems with array access on a GeForce FX5900. In my case it only seems to work when the array indices are known at compile time. This is very frustrating, since I want to index arrays procedurally (a very basic piece of functionality?).

01-13-2005, 02:51 AM
The spec allows it, of course, but the hardware doesn't support it (there is no address register in fragment programs; vertex programs have A0). The workaround would be an if-statement per allowed index, and that seems futile.
uto, does the shader run in hardware on the ATI 9800XT?
thinks, you could put your arrays in a nearest filtered 1D texture and use dependent texture reads.
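A minimal sketch of that 1D-texture workaround, assuming the array values have already been uploaded into a GL_NEAREST-filtered 1D texture (the names `arrayTex` and `arraySize` are illustrative, not from the thread):

```glsl
// Fragment shader: fetch "array[i]" from a 1D texture instead of a real
// array, since fragment hardware of this era has no address register.
// Assumes the array was uploaded as a GL_NEAREST-filtered 1D texture with
// arraySize texels; texel centers sit at (i + 0.5) / arraySize.
uniform sampler1D arrayTex;
uniform float arraySize;

float lookup(float i)
{
    return texture1D(arrayTex, (i + 0.5) / arraySize).r;
}
```

GL_NEAREST filtering matters here: with linear filtering, a lookup between texel centers would blend two array elements together.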

01-13-2005, 02:54 AM
My arrays are in a vertex program. I don't think I have access to textures from a vertex program?

Seems like I'll be using a lot of else-ifs... =)
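That else-if chain might look something like this, sketched for a hypothetical four-element array packed into a vec4 (the names are made up for illustration):

```glsl
// Workaround for hardware without dynamic indexing: select the element
// with a chain of comparisons instead of writing elements[index].
uniform vec4 elements; // the "array", packed into a vec4

float pick(int index)
{
    if (index == 0)
        return elements.x;
    else if (index == 1)
        return elements.y;
    else if (index == 2)
        return elements.z;
    else
        return elements.w;
}
```

This scales poorly: every added element adds another branch, which is exactly why it feels futile for anything but tiny arrays.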

01-13-2005, 05:03 AM
But as I said, array indexing with uniforms works in a vertex shader.
Check the vertex noise example here:

Texture reads in a vertex shader would work if you were on a GeForce 6, and on any hardware in a fragment shader.
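Dynamic indexing of a uniform array in a vertex shader might look like the sketch below, assuming a small gradient-style lookup table in the spirit of the vertex noise example (the names `gradTable` and `tableIndex` are illustrative):

```glsl
// Vertex shader: indexing a uniform array with a value not known at
// compile time, which vertex hardware supports via its address
// register (A0 in ARB assembly terms).
uniform vec3 gradTable[16];   // e.g. a noise gradient table
attribute float tableIndex;   // per-vertex index, computed at run time
varying vec3 grad;

void main()
{
    grad = gradTable[int(tableIndex)];
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

The same shader with the table declared as a local (non-uniform) array is what tends to fail on the hardware discussed in this thread.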

01-13-2005, 09:33 PM
Array index access should be hardware accelerated in the vertex shader. In the fragment shader you can implement similar functionality by looking up in a 1D texture.

01-14-2005, 04:08 AM
The problem was solved by sending the arrays as uniforms instead of declaring them in the vertex shader. Thanks for your time! It really would be nice to
have texture access in vertex programs... :-)

/ T