DrawElements index access in vertex shaders

My indices are going to be the same as the texcoords I need. That means I have to send the same data twice - once as glTexCoordPointer data and once as the indices in glDraw(Range)Elements. It'd be nice to only need to send my glVertexPointer data and the glDraw(Range)Elements indices, and be able to look up into my texture based on those indices. I don't think this is a one-off case - I can imagine it applies to pretty much any vertex texture displacement mapping approach. One vertex == one vertex texture texel, same index for both.
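To make the duplication concrete, here is a rough sketch of the setup being described (array names, sizes and the GL_LINES mode are placeholders, not taken from the thread): the per-vertex index information travels once as 1D texcoords and once again as element indices.

#define NUM_VERTS   1024              /* placeholder sizes */
#define NUM_INDICES 2048

GLfloat positions[NUM_VERTS * 3];     /* xyz per vertex, filled elsewhere   */
GLfloat texcoords[NUM_VERTS];         /* texcoord of vertex i is simply i   */
GLuint  indices[NUM_INDICES];         /* connectivity, values 0..NUM_VERTS-1 */

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, positions);

glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(1, GL_FLOAT, 0, texcoords);   /* per-vertex index sent again, as a texcoord */

glDrawRangeElements(GL_LINES, 0, NUM_VERTS - 1,
                    NUM_INDICES, GL_UNSIGNED_INT, indices);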

You could try to actually make them both the same data. That is, put it in one VBO, bind the texcoords and the indices with the same offset, and try rendering with it. It might work…
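For reference, a minimal sketch of that single-VBO idea (untested, as noted; it assumes GL 1.5-style buffer calls, and since glTexCoordPointer has no unsigned-int type the same bytes would have to be read as GL_INT):

/* Untested sketch: one buffer object holding the index values, bound both
 * as the texcoord source and as the element source, at offset 0.
 * NUM_INDICES as in the earlier sketch. */
GLuint buf;
GLuint indices[NUM_INDICES];            /* the index values, filled elsewhere */

glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

glEnableClientState(GL_TEXTURE_COORD_ARRAY);
/* glTexCoordPointer accepts GL_INT but not GL_UNSIGNED_INT, so the same
 * bytes are reinterpreted here (fine while indices stay below 2^31). */
glTexCoordPointer(1, GL_INT, 0, (const GLvoid *)0);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buf);
glDrawElements(GL_LINES, NUM_INDICES, GL_UNSIGNED_INT, (const GLvoid *)0);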

Originally posted by Korval:
You could try to actually make them both the same data. That is, put it in one VBO, bind the texcoords and the indices with the same offset, and try rendering with it. It might work…
Ummm … it would work if array buffers and element buffers shared the same namespace. But I don’t think they do.

ffish,
a “workaround” would be to fill an extra VBO with linearly ascending values (single floats), i.e. a one-to-one mapping between index and fetched value. If it’s large enough for all the cases you’re going to throw at it, you would never need to update it, and could reuse it for everything.
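Presumably something along these lines (sketch only; MAX_VERTS and the choice of texcoord slot are placeholders):

#define MAX_VERTS 65536                 /* size it for the worst case you expect */

static GLfloat ramp[MAX_VERTS];
GLuint rampBuf;
int i;

for (i = 0; i < MAX_VERTS; ++i)
    ramp[i] = (GLfloat)i;               /* value at slot i is simply i */

glGenBuffers(1, &rampBuf);
glBindBuffer(GL_ARRAY_BUFFER, rampBuf);
glBufferData(GL_ARRAY_BUFFER, sizeof(ramp), ramp, GL_STATIC_DRAW);

/* Bind it once as a 1-component texcoord array and reuse it for every
 * draw; after indexing by glDraw(Range)Elements, vertex i sees the
 * value i, i.e. its own index. */
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(1, GL_FLOAT, 0, (const GLvoid *)0);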

Korval, maybe I’ll try it.

zeckensack, in this case the workaround doesn’t work as-is, since I’m rendering a bunch of wireframe cubes and each one needs its own correctly mapped indices, not just linearly increasing ones. My “workaround” is to send the linearly increasing values in a vertex attribute slot, access them with the indices, and then use the fetched values to address the texture. Which is what bugs me - I’m using indices to access indices. Not that it doesn’t work - it does - but it would be more elegant and intuitive to send the indices once.
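For completeness, the shader side of that arrangement might look roughly like the sketch below. The uniform names, the 2D float displacement texture and the displace-along-normal step are my assumptions for illustration, not necessarily what ffish is actually doing.

/* Hypothetical GLSL 1.10 vertex shader, embedded as a C string. The
 * linearly increasing attribute arrives in gl_MultiTexCoord0.x, is turned
 * into a texel-centre coordinate, and drives a vertex texture fetch
 * (explicit LOD is required in a vertex shader). */
static const char *displace_vs =
    "uniform sampler2D displacementTex;                            \n"
    "uniform vec2 texSize;          /* texture width and height */ \n"
    "void main()                                                   \n"
    "{                                                             \n"
    "    float idx = gl_MultiTexCoord0.x;        /* the index */   \n"
    "    vec2 coord;                                               \n"
    "    coord.x = (mod(idx, texSize.x) + 0.5) / texSize.x;        \n"
    "    coord.y = (floor(idx / texSize.x) + 0.5) / texSize.y;     \n"
    "    float d = texture2DLod(displacementTex, coord, 0.0).r;    \n"
    "    vec4 pos = gl_Vertex + vec4(gl_Normal * d, 0.0);          \n"
    "    gl_Position = gl_ModelViewProjectionMatrix * pos;         \n"
    "}\n";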

Anyway, I crossposted to NVIDIA devrel and Simon Green pointed me to an article saying that this feature may be introduced in the new Windows Graphics Foundation stuff. I haven’t read the article yet, though.

If I understand you correctly, your texture coordinates are 1D?

You could send your texcoords as the 4th component of your vertices, since you only need xyz for the position.
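My reading of that suggestion, as a sketch (the interleaved layout and the GLSL fragment in the comment are illustrative): the 1D texcoord rides along in w via a 4-component glVertexPointer.

/* Sketch: pack the per-vertex index/texcoord into the vertex's w.
 * NUM_VERTS as in the earlier sketches. */
GLfloat verts[NUM_VERTS * 4];           /* x, y, z, index per vertex */

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, verts);

/* In the vertex shader, peel the index off w and restore w = 1.0
 * before transforming, e.g.:
 *
 *     float idx = gl_Vertex.w;
 *     vec4  pos = vec4(gl_Vertex.xyz, 1.0);
 */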

Now that’s a good idea. Especially since I’m currently sending BGRA data with a == 1.0, trading bandwidth for speed (an untested belief on my part). Very nice solution to my problem, V-man. Thanks.

I moved to a bunch of 1D textures. Before, I was storing various data in one rectangular texture, but I’ve now separated it out into 1D textures. One “node” of the mesh is associated with one displacement, so the displacement texture and the vertices are both n×1. The other textures, representing other parameters, are mapped the same way.
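For anyone following along, one n×1 displacement texture of that kind could be created roughly as below. The float luminance format and nearest filtering are my assumptions (one texel per node, no interpolation wanted); numNodes and displacements are placeholders for the per-node data.

/* Sketch of an n x 1 displacement texture: one texel per mesh node.
 * numNodes and displacements are assumed to exist. */
GLuint dispTex;
glGenTextures(1, &dispTex);
glBindTexture(GL_TEXTURE_1D, dispTex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage1D(GL_TEXTURE_1D, 0, GL_LUMINANCE32F_ARB,
             numNodes, 0, GL_LUMINANCE, GL_FLOAT, displacements);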