[0,1] is guaranteed. The test is simple: sample values from -10 to 10 across a full-screen quad and check how it behaves.
is false.
You should know how you defined your texture. If it is RGBA, you will get r,g,b,a, with r,g,b at the same level (between 0 and 1) and a the alpha value.
if it is RGB : r,g,b,1
if it is LUMINANCE : i,i,i,1
if it is LUMINANCE_ALPHA : i,i,i,a
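The expansion rules above can be sketched in plain C (no actual GL calls; the function names are made up for illustration of how a sampler conceptually fills in the missing components):

```c
#include <assert.h>

/* Illustrative expansion of each external format to a 4-component
 * RGBA result, with values already normalized to the 0..1 range. */
typedef struct { float r, g, b, a; } vec4;

vec4 expand_rgb(float r, float g, float b)    { vec4 v = { r, g, b, 1.0f }; return v; }
vec4 expand_luminance(float i)                { vec4 v = { i, i, i, 1.0f }; return v; }
vec4 expand_luminance_alpha(float i, float a) { vec4 v = { i, i, i, a    }; return v; }
```

So for a LUMINANCE heightmap, sampling any of r, g or b gives you the same intensity, and a is always 1.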
The texture is RGBA. You write that r=g=b and that each has a value between 0 and 1. So I assume I can use any of those components as the intensity of the looked-up texel (0 = black, 1 = white). I don’t know what the alpha value ‘a’ is, or why it should be used when the texture is a heightmap.
The dimensions of the texture are always what you specify them to be (in your case 512x512). However, texture coordinates are specified in the range 0…1 (to be texture-size independent).
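As a small sketch of that size independence: to sample the center of a given texel you divide its integer index (plus a half-texel offset) by the texture size, whatever that size is. The helper name here is made up:

```c
#include <assert.h>

/* Convert an integer texel index into a normalized texture
 * coordinate that hits that texel's center. */
float texel_to_coord(int index, int size)
{
    return ((float)index + 0.5f) / (float)size;
}
```

The same shader then works unchanged whether the heightmap is 512x512 or 1024x1024.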
I don’t understand why you use RGBA for grayscale textures when there are one-channel textures in the first place. Anyway, the internal values in the texture depend on how you uploaded the data (and what the data contained), so we can’t answer this question.
Also, please post such questions in the beginner forums in the future.
Yes, you might be able to read from either component; test it to find out. The alpha value is normally used for transparency, but in your case you could use it to store extra data about the heightmap, like the slope gradient.
Another idea: you could also put the normal data in the green and blue components.
But if you don’t need that, disregard the alpha channel and just use a LUMINANCE texture.
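A hedged sketch of the packing idea: height stays in the red channel, and a signed normal component in [-1,1] gets remapped into a byte for green or blue. This is the common normal-map encoding, not something specific to any library; the function names are illustrative:

```c
#include <assert.h>

/* Map a signed normal component in [-1,1] to an unsigned byte
 * (with rounding), and back. 0 -> -1.0, 128 -> ~0.0, 255 -> 1.0. */
unsigned char pack_component(float n)
{
    return (unsigned char)((n * 0.5f + 0.5f) * 255.0f + 0.5f);
}

float unpack_component(unsigned char b)
{
    return ((float)b / 255.0f) * 2.0f - 1.0f;
}
```

In the shader you would then reverse the mapping (`n = texel.g * 2.0 - 1.0`) to recover the signed value.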
I will remember that in the future. You write that the values in the texture depend on how I “upload” the data. I just read the grayscale image from disk, and since the image is grayscale the data should be in the range 0 to 255. Have I missed some detail when reading the image?
By uploading he means putting the data where it is accessible to the GPU (this is sometimes also called unpacking).
In general, OpenGL can perform a number of operations on the pixel data you supply from application memory. For instance, you can use an internal format that compresses the data on the fly, in which case you would (probably) lose data. I guess what Zengar could also be referring to is the (external) format that you specify, as well as the current Pixel Transfer state (see http://www.opengl.org/sdk/docs/man/xhtml/glPixelTransfer.xml for details). If you generate the pixel data in application memory (in contrast to, say, reading it from disk), you as the programmer should also pay attention to the signed or unsigned version of the data type. AFAIK, all these operations are done at the driver level (i.e. they’re not hardware accelerated); they’re merely a convenience for you as the programmer!
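Conceptually, the pixel transfer state applies a per-component scale and bias during upload, with the result clamped to [0,1]. A minimal sketch of that math (plain C, assuming default clamping; not an actual GL implementation):

```c
#include <assert.h>

/* Conceptual per-component pixel transfer: value * SCALE + BIAS,
 * clamped to [0,1] (compare glPixelTransfer's *_SCALE and *_BIAS). */
float pixel_transfer(float value, float scale, float bias)
{
    float v = value * scale + bias;
    if (v < 0.0f) v = 0.0f;
    if (v > 1.0f) v = 1.0f;
    return v;
}
```

With the defaults (scale 1, bias 0) your data passes through unchanged, which is why most people never notice this state exists.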
Well, greyscale images could also be in the range 0 to 65535, or something else, if you use an internal format with more than 8 bits. When you sample a texture in a shader these details are usually abstracted away by always returning a value between 0.0 and 1.0. This is more elegant than having to rewrite your shader when you change from 8 bit to 16 bit. But again, what you actually receive from the sampler depends on whether you use a fixed-point or floating-point internal texture format.
So there are many ways to make things easier on yourself, but at the same time there’s plenty of opportunity for shooting yourself in the foot.