GL_RGBA textures

I have a 512x512 grayscale GL_RGBA noise texture that I look up in the vertex program:


struct vertex_input
{
  float4 position  : POSITION;
  float2 uv        : TEXCOORD0;
  float4 normal    : NORMAL;
};


struct vertex_output
{
  float4 position : POSITION;
  float3 normal   : TEXCOORD1;
  float change    : TEXCOORD2;
};

vertex_output main(  
        vertex_input       IN 
      , uniform float4x4   ModelViewProj
      , uniform float4x4   ModelViewIT      
      , uniform sampler2D  height_texture
)
{
  vertex_output OUT;

  // Look up the height value stored in the texture's alpha channel.
  float h = tex2D(height_texture, IN.uv).a;

  OUT.position = mul(ModelViewProj, IN.position);
  OUT.normal   = normalize(mul(ModelViewIT, IN.normal).xyz);
  OUT.change   = h;
  return OUT;
}

I would like to know the following:

  1. What are the dimensions of the ‘height_texture’? In the application it was 512x512, but in the OpenGL manual I have read that texture coordinates are in the range

u ∈ [0,1], v ∈ [0,1]

How do I verify what the valid ranges are when looking up in the texture?

  2. The texture is grayscale, so I assume what tex2D returns is something like:

(0.0, 0.0, 0.0, a)

where ‘a’ is the intensity of the texel, ranging from black (0) to white (1). Is this correct, and how do I verify it?

  3. What is the range of the texture coordinates IN.uv? From the application I have:

        // Byte offsets of each attribute within the interleaved vertex.
        static int offset_coord     = 0;
        static int offset_normal    = sizeof(vector3_type) + offset_coord;
        static int offset_color     = sizeof(vector3_type) + offset_normal;
        static int offset_tex0      = sizeof(vector4_type) + offset_color;
        static int offset_tex1      = sizeof(vector2_type) + offset_tex0;
        static int offset_tex2      = sizeof(vector2_type) + offset_tex1;

        GLsizei stride         = sizeof(vertex_type);
// Turns a byte offset into the pointer expected by the gl*Pointer calls.
#define OFFSET(x) ((char *)NULL+(x))
        glClientActiveTexture(GL_TEXTURE0);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer( 2, GL_FLOAT, stride, OFFSET(offset_tex0)   );


But I don’t see anything about the range of the texture coordinates.

  1. [0,1] is guaranteed. The test is simple: sample from -10 to 10 across a full-screen quad and check how it behaves.
  2. This is false.
    You should know how you defined your texture. If it is RGBA, you will get (r, g, b, a), with r, g, b all at the same level between 0 and 1 and a the alpha value (see the sketch after this list).
    If it is RGB: (r, g, b, 1).
    If it is LUMINANCE: (i, i, i, 1).
    If it is LUMINANCE_ALPHA: (i, i, i, a).
  3. See 1).
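For illustration, here is a minimal upload sketch in C (the pixels buffer and upload_heightmap name are hypothetical; it assumes the 512x512 image is tightly packed 8-bit grayscale) showing how the format chosen at upload time determines what tex2D later returns:

#include <GL/gl.h>

/* Hypothetical: 512x512 8-bit grayscale image already loaded from disk. */
extern const unsigned char *pixels;

void upload_heightmap(GLuint tex)
{
  glBindTexture(GL_TEXTURE_2D, tex);

  /* Uploaded as LUMINANCE, the sampler returns (i, i, i, 1). */
  glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 512, 512, 0,
               GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);

  /* Uploaded as ALPHA instead, the sampler returns (0, 0, 0, a),
     which matches the tex2D(height_texture, IN.uv).a lookup above. */
  /*
  glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 512, 512, 0,
               GL_ALPHA, GL_UNSIGNED_BYTE, pixels);
  */
}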

The texture is RGBA. You write that r = g = b, each with a value between 0 and 1, so I assume I can use any of those components as the intensity of the looked-up texel (0 = black, 1 = white). I don’t know what the alpha value ‘a’ is, or why it should be used when the texture is a heightmap.

  1. The dimensions of the texture are always what you specify them to be (in your case 512x512). However, texture coordinates are specified in the range 0…1, to be texture-size independent (a small example follows after this list).

  2. I don’t understand why you use RGBA for grayscale textures, when there are one-channel textures in the first place. Anyway, the internal values in the texture depend on how you uploaded the data (and what the data contained), so we can’t answer this question.
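For concreteness, a sketch of the texel-to-coordinate mapping (texel_to_u and texel_to_v are hypothetical helpers; this assumes the common convention of addressing texel centers):

/* Maps integer texel indices to normalized texture coordinates.
   The +0.5 addresses the texel center rather than its corner. */
float texel_to_u(int x, int width)  { return (x + 0.5f) / (float)width;  }
float texel_to_v(int y, int height) { return (y + 0.5f) / (float)height; }

/* For a 512x512 texture, texel (0, 0) maps to u = v = 0.5/512,
   and texel (511, 511) maps to u = v = 511.5/512. */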

Also, please post such questions in the beginner forums in the future.

Yes, you might be able to read from either component; test it to find out. The alpha value is normally used for transparency, but in your case you could use it to store extra data about the heightmap, like the slope gradient.
Another idea: you could also put the normal data in the green and blue components (a sketch follows below).
But if not, disregard it and just use a LUMINANCE texture.
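As a rough sketch of that packing idea in C (pack_texel is a hypothetical helper; it assumes the height is already in [0, 1] and the normal components in [-1, 1]):

/* Packs one RGBA texel: alpha keeps the height (matching the shader's
   tex2D(height_texture, IN.uv).a lookup), green/blue carry the normal's
   x/y remapped from [-1, 1] to [0, 255]. The shader can rebuild
   normal.z as sqrt(1 - x*x - y*y) for an upward-facing normal. */
void pack_texel(unsigned char *texel, float height, float nx, float ny)
{
  texel[0] = 0;                                            /* R: free for extra data */
  texel[1] = (unsigned char)((nx * 0.5f + 0.5f) * 255.0f); /* G: normal.x */
  texel[2] = (unsigned char)((ny * 0.5f + 0.5f) * 255.0f); /* B: normal.y */
  texel[3] = (unsigned char)(height * 255.0f);             /* A: height   */
}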

I will remember that in the future. You write that the values in the texture depend on how I “upload” the data. I just read the grayscale image from disk, and since the image is grayscale the data should be in the range 0 to 255. Have I missed some detail when reading the image?

By uploading he means putting the data where it is accessible to the GPU (it is sometimes also called unpacking).

In general, OpenGL can perform a number of operations on the pixel data you supply from application memory. For instance, you can use an internal format that compresses the data on the fly, in which case you would (probably) lose data. I guess what Zengar could also be referring to is the (external) format that you specify, as well as the current pixel transfer state (see http://www.opengl.org/sdk/docs/man/xhtml/glPixelTransfer.xml for details). If you generate the pixel data in application memory (as opposed to, say, reading it from disk), you as the programmer should also pay attention to the signed or unsigned version of the data type. AFAIK, all of these operations are done at the driver level (i.e. they’re not hardware accelerated); they’re merely a convenience for you as the programmer!
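A small sketch of the upload-time state the above refers to (set_transfer_state is a hypothetical name; the values are illustrative only):

#include <GL/gl.h>

void set_transfer_state(void)
{
  /* Unpack state: how OpenGL walks the pixel data in client memory.
     An alignment of 1 avoids row-padding surprises with tightly packed
     8-bit images whose width is not a multiple of 4. */
  glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

  /* Pixel transfer state: applied per component during upload. Here each
     red value would be scaled by 0.5 and offset by 0.25, so what lands in
     the texture differs from what was read from disk. */
  glPixelTransferf(GL_RED_SCALE, 0.5f);
  glPixelTransferf(GL_RED_BIAS, 0.25f);
}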

Well, greyscale images could also be in the range 0 to 65535, or something else, if you use an internal format with more than 8 bits. When you sample a texture in a shader these details are usually abstracted away by always returning a value between 0.0 and 1.0. This is more elegant than having to rewrite your shader whenever you change from 8 bit to 16 bit. But again, what you actually receive from the sampler depends on whether you use fixed-point or floating-point internal texture formats.
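For example, switching the sized internal format changes the stored precision without changing the [0, 1] range the shader sees (a sketch; pixels and pixels16 are hypothetical source buffers):

extern const unsigned char  *pixels;    /* hypothetical 8-bit grayscale data  */
extern const unsigned short *pixels16;  /* hypothetical 16-bit grayscale data */

/* 8 bits per texel: stored values 0..255, sampled as [0.0, 1.0]. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, 512, 512, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);

/* 16 bits per texel: stored values 0..65535, still sampled as
   [0.0, 1.0] -- the shader needs no change. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, 512, 512, 0,
             GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels16);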

So there are many ways to make it easier on yourself, but at the same time there’s plenty of opportunity for shooting yourself in the foot :)

Maybe you want to read chapters 8 and 9 in the Red Book (the OpenGL Programming Guide).

http://www.opengl.org/wiki/index.php/Texture_Sampling