I have a texture (that I bind to tex unit 1), which contains some data I’d like to use in the vertex shader (e.g. some rand number sequences or normals, etc.)
To my knowledge, I can do this in the vertex shader:
gl_TexCoord[1] = gl_MultiTexCoord1;
and then from the frag shader I can get to my data by doing:
vec4 texel = texture2D(tex, gl_TexCoord[1].st);
(where tex is a sampler2D that points to the right place).
BUT, I need the data in the vertex shader.
(For example in the case of normal map info, I need to set my normal in the vert shader).
There are some restrictions you need to know about, but it’s not too difficult:
First, you need a GeForce 6 or later; no other boards support this today.
Check the value for GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS.
Use a 32-bit floating-point texture format, either four-component (GL_RGBA32F_ARB) or one-component (GL_LUMINANCE32F_ARB), depending on your data.
Use nearest filtering (GL_NEAREST); float vertex textures are not filtered in hardware.
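As a sketch, setting up such a texture on the C side might look like this (assuming an active GL context, ARB_texture_float support, and a hypothetical `data` array of 256*256*4 floats):

```c
/* Check how many vertex texture image units are available;
   0 means the hardware can't do vertex texture fetch at all. */
GLint maxVertexTextures = 0;
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &maxVertexTextures);

GLuint tex;
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE1);          /* texture unit 1, as in the question */
glBindTexture(GL_TEXTURE_2D, tex);
/* nearest filtering only -- float vertex textures are not filtered */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, 256, 256, 0,
             GL_RGBA, GL_FLOAT, data);
```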
In your vertex shader just write
vec4 texel = texture2D(vertexTexture, gl_MultiTexCoord1.st);
and you look up the texel at the position you send via texture coordinate set 1 (or whichever set you chose).
(If you have mipmaps, use an explicit level of detail in the texture lookup, e.g. texture2DLod, since there are no derivatives in the vertex shader!)
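Putting it together, a minimal vertex shader might look like this (the sampler name `vertexTexture` is an assumption; set it to 1 with glUniform1i so it reads from texture unit 1):

```glsl
uniform sampler2D vertexTexture;  // bound to texture unit 1 via glUniform1i

void main()
{
    // fetch one texel at the coordinates passed in texcoord set 1;
    // explicit LOD 0 avoids undefined mipmap selection in the vertex stage
    vec4 texel = texture2DLod(vertexTexture, gl_MultiTexCoord1.st, 0.0);

    // use the fetched data however you like, e.g. pass it on as a color
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = vec4(texel.xyz, 1.0);
}
```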
For normal maps you don’t actually do that; normal mapping is done in the fragment shader.
Vertex textures are nice for displacement mapping.
How many vertices do you plan to send?
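A displacement-mapping vertex shader, for example, is just a fetch plus an offset along the normal. A sketch (the `heightMap` and `scale` uniforms are assumptions):

```glsl
uniform sampler2D heightMap;  // one-component float texture, GL_NEAREST
uniform float scale;          // displacement strength

void main()
{
    // fetch the height at this vertex and push the vertex along its normal
    float h = texture2DLod(heightMap, gl_MultiTexCoord0.st, 0.0).x;
    vec4 displaced = gl_Vertex + vec4(gl_Normal * h * scale, 0.0);
    gl_Position = gl_ModelViewProjectionMatrix * displaced;
}
```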
Any ideas on how I can achieve this?
Not sure why you need that in the vertex shader if you do bump mapping in the fragment shader, but one option is to look up the data on the CPU and send it as a texcoord attribute, if that’s all you do anyway. Support multiple render paths in your app, so that newer hardware is used appropriately.
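That CPU fallback could be sketched like this: sample the image in system memory yourself and pass the result as a texture-coordinate attribute (immediate mode just for illustration; `image`, `width`, and `height` are assumed to describe your float RGBA data):

```c
/* Sketch of the CPU fallback: look the texel up ourselves and send it
   as texcoord attribute 1. `image` is a hypothetical float RGBA array. */
float s = 0.25f, t = 0.75f;                 /* this vertex's texcoords */
int x = (int)(s * (width  - 1));
int y = (int)(t * (height - 1));
const float *texel = &image[(y * width + x) * 4];

glMultiTexCoord4fv(GL_TEXTURE1, texel);     /* read in the shader as
                                               gl_MultiTexCoord1 */
glVertex3f(px, py, pz);
```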
Ouch! I need to support older hardware (GeForce 4, I think, is the minimum spec).
What type of app is that?
GeForce 4 doesn’t even support fragment shaders. And don’t even think about the GF4MX.
I would forget about everything below OpenGL 2.0-capable hardware if I were starting an app today. That starts with the GeForce FX (GF5), better yet the GeForce 6, because of the non-power-of-two texture support in the OpenGL 2.0 core.
Read this as well: http://developer.nvidia.com/object/nv_ogl2_support.html
Not really. Core OpenGL functionality is guaranteed, but you can’t predict its performance.
Read the documents I linked and you’ll see that, for example, NPOT textures are core functionality, but GeForce 5 can’t do them on the GL_TEXTURE_2D target. It can do GL_TEXTURE_RECTANGLE, though, which works differently.
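The “works differently” part is mainly the addressing: rectangle textures are sampled with unnormalized texel coordinates rather than [0,1], and they have no mipmaps. A fragment-shader sketch (requires the ARB_texture_rectangle extension):

```glsl
#extension GL_ARB_texture_rectangle : require
uniform sampler2DRect tex;

void main()
{
    // coordinates are in texels, e.g. (256.0, 128.0), not in [0,1]
    gl_FragColor = texture2DRect(tex, gl_TexCoord[0].st);
}
```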
Differences don’t stop there. The GL Shading Language is abstract enough to allow constructs that current hardware cannot fulfill.
You still need to develop and test your stuff on various hardware.