
accessing texture data in a vertex shader



g0l3m
12-19-2006, 04:49 AM
Hello all,

I have a texture (that I bind to tex unit 1), which contains some data I'd like to use in the vertex shader (e.g. some rand number sequences or normals, etc.)

To my knowledge, I can do this in the vertex shader:

gl_TexCoord[1] = gl_MultiTexCoord1;

and then from the frag shader I can get to my data by doing:

vec4 texel = texture2D(tex, gl_TexCoord[1].st);

(where tex is a sampler2D that points to the right place).

BUT, I need the data in the vertex shader.
(For example in the case of normal map info, I need to set my normal in the vert shader).

Any ideas on how I can achieve this?

cheers,
g.

Relic
12-19-2006, 05:25 AM
There are some restrictions you need to know, but it's not too difficult:
- First, you need a GeForce 6 or later; no other boards support this today.
- Check the value for GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS.
- Use a 32-bit floating-point texture format with either one or four components (GL_RGBA32F-like types) for your case.
- Use nearest filtering.
- In your vertex shader just write
vec4 texel = texture2D(vertexTexture, gl_MultiTexCoord1.st);
and you look up the texel at the position you send in texture coordinate attribute 1 (or whichever you chose).
(If you have mipmaps, use an explicit level of detail in the texture lookup!)

Done.

Read this:
http://developer.nvidia.com/object/using_vertex_textures.html
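The data side of the checklist above can be sketched in C: fill a four-component 32-bit float buffer and upload it. This assumes the GL_ARB_texture_float internal format GL_RGBA32F_ARB; the function name and the toy "straight up" normals are purely illustrative:

```c
#include <stdlib.h>

/* Fill a width x height, four-component, 32-bit float buffer with
 * per-texel data for a vertex texture.  Here every texel gets a toy
 * "straight up" normal; real code would write normals, random number
 * sequences, or whatever the vertex shader needs to fetch. */
float *make_rgba32f_data(int width, int height)
{
    float *data = malloc((size_t)width * (size_t)height * 4 * sizeof(float));
    if (!data)
        return NULL;
    for (int i = 0; i < width * height; ++i) {
        data[4 * i + 0] = 0.0f; /* x */
        data[4 * i + 1] = 0.0f; /* y */
        data[4 * i + 2] = 1.0f; /* z */
        data[4 * i + 3] = 1.0f; /* unused padding */
    }
    return data;
}
```

The returned buffer would then go to glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height, 0, GL_RGBA, GL_FLOAT, data), with GL_NEAREST min/mag filters as noted above.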

For normal maps you don't actually do that; those work in the fragment shader.
Vertex textures are nice for displacement mapping.
How many vertices do you plan to send?

g0l3m
12-19-2006, 06:22 AM
Ouch! Need to support older hardware (Geforce 4 I think is the min spec).

Still useful to know...thx for the info.

I'm sending around 90K verts btw.

Relic
12-19-2006, 07:03 AM
BUT, I need the data in the vertex shader.
(For example in the case of normal map info, I need to set my normal in the vert shader).
Any ideas on how I can achieve this?

Not sure why you need that in the vertex shader if you use bump mapping in the fragment shader, but one way would be to actually look up the data on the CPU and send it as the texcoord attribute, if that's all you do anyway. Support multiple render paths in your app, so that newer HW is used appropriately.
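That CPU fallback amounts to a nearest-filtered lookup done per vertex before the result is sent down as a texcoord attribute. A sketch, assuming the same RGBA float data the texture would hold; sample_nearest is an illustrative name, and the clamping mimics edge-clamped wrapping:

```c
/* Nearest-neighbour lookup into a width x height RGBA float image,
 * mimicking what texture2D with GL_NEAREST filtering would return
 * for (s, t) in [0, 1].  'out' receives the four texel components. */
void sample_nearest(const float *data, int width, int height,
                    float s, float t, float out[4])
{
    int x = (int)(s * (float)width);
    int y = (int)(t * (float)height);
    if (x >= width)  x = width - 1;   /* clamp s == 1.0 */
    if (y >= height) y = height - 1;  /* clamp t == 1.0 */
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    const float *texel = data + 4 * (y * width + x);
    for (int i = 0; i < 4; ++i)
        out[i] = texel[i];
}
```

Per vertex you would call this once and feed the result to something like glMultiTexCoord4fv (or a texcoord vertex array) on attribute 1.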


Ouch! Need to support older hardware (Geforce 4 I think is the min spec).

What type of app is that?
GeForce 4 doesn't even support fragment shaders, and don't even think about the GF4 MX.
If I were starting an app today, I would forget about everything below OpenGL 2.0-capable HW. That starts with GF5, better yet GF6, because of the non-power-of-two texture support in the OGL 2.0 core.
Read this as well: http://developer.nvidia.com/object/nv_ogl2_support.html

g0l3m
12-19-2006, 08:05 AM
I can cope with GF5 I guess...
It's for a game, and it needs to support low spec graphics cards (but I can do with GF5 as a min spec).

I definitely don't want to look at anything below OGL2.0 ;-)

Once I've established that OGL2.0 is supported, do I then still need to run tests for support for say

glUseProgram (and all the other shader stuff), etc.?

In the past when I was using ARB extensions I had to pretty much test for everything, e.g. multi-texturing support, etc.

I'm assuming if OGL2.0 is supported I can take certain things for granted, right?

thx again,
g.

Relic
12-19-2006, 09:03 AM
Not really. Core OpenGL functionality is guaranteed to be there, but you can't predict its performance.

Read the documents I linked and you'll see that, for example, NPOT textures are core functionality, but GeForce 5 can't do them on the GL_TEXTURE_2D target. It can do GL_TEXTURE_RECTANGLE, though, which works differently.
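The "works differently" part is mainly addressing: GL_TEXTURE_RECTANGLE takes unnormalized coordinates in [0, width] x [0, height] rather than [0, 1] (and has no mipmaps). If you keep both code paths, the coordinate conversion is a one-liner; the helper name here is illustrative:

```c
/* Convert normalized GL_TEXTURE_2D coordinates to the unnormalized
 * texel coordinates a rectangle texture target expects. */
void st_to_rect(float s, float t, int width, int height,
                float *rs, float *rt)
{
    *rs = s * (float)width;
    *rt = t * (float)height;
}
```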

Differences don't stop there. The GL Shading Language is abstract enough to allow constructs current HW cannot fulfill.

You still need to develop and test your stuff on various hardware.
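For the version question above, the usual pattern is to parse the string returned by glGetString(GL_VERSION) once at startup (e.g. "2.0.2 NVIDIA 87.62") and pick a render path from it, while still testing the performance-sensitive features on real hardware. A minimal sketch; the function names are illustrative:

```c
#include <stdio.h>

/* Parse a GL_VERSION string such as "2.0.2 NVIDIA 87.62" into
 * major/minor numbers.  Returns 1 on success, 0 on a malformed string. */
int parse_gl_version(const char *version, int *major, int *minor)
{
    if (!version || sscanf(version, "%d.%d", major, minor) != 2)
        return 0;
    return 1;
}

/* True if the reported version is at least the requested one. */
int gl_version_at_least(const char *version, int req_major, int req_minor)
{
    int major, minor;
    if (!parse_gl_version(version, &major, &minor))
        return 0;
    return major > req_major ||
           (major == req_major && minor >= req_minor);
}
```

With that, glUseProgram and friends are safe to call on a path gated by gl_version_at_least(version, 2, 0), but the extension checks for optional fast paths stay.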