Texture memory on the GPU

Hi guys,

I know that OpenGL converts images with one byte per channel into normalized floats for display. I have a floating-point gray-scale image. Is there any advantage to storing it as a single-byte gray-scale, or should I leave it as floating point, since it will be converted to float anyway? Would they both occupy the same amount of GPU memory?

Its GPU memory usage is determined by the target format of the texture you specify with glTexImage, so it should be the same size on the GPU either way. The only difference you may see is the conversion time when creating the image.

By target format, are you referring to the internal format, i.e., the third parameter of glTexImage2D?

Yep.
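
For reference, here’s a minimal sketch of the call in question (width, height, and pixels are placeholder names, not from your code). The third argument tells the driver how to store the texels on the GPU; the last three only describe the client-side data you hand it:

    glTexImage2D(GL_TEXTURE_2D,  /* target                                 */
                 0,              /* mip level                              */
                 GL_R8,          /* internalFormat: GPU-side storage       */
                 width, height,  /* texture dimensions                     */
                 0,              /* border (must be 0 in modern GL)        */
                 GL_RED,         /* format of the client-side source data  */
                 GL_FLOAT,       /* type of the client-side source data    */
                 pixels);        /* pointer to your float gray-scale image */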

So GL_R32F_ARB and GL_RED should take the same amount of space in memory, because both are converted to normalized floats internally? I would have thought GL_R32F_ARB would take four times the memory, since that would be 32 bits, while GL_RED would be 8.

The conversion is done when the GPU reads the texture memory. It’s stored as one byte per channel if that’s what you ask it to do.

It doesn’t just pre-expand the data and store your texture as floats in GPU memory, unless you ask it to.

I have a floating-point gray-scale image. Is there any advantage to storing it as a single-byte gray-scale, or should I leave it as floating point, since it will be converted to float anyway?

Refer to the above. In one case you’ll be consuming 1 byte/texel of GPU memory; in the other, 4 bytes/texel.
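
Concretely (an untested sketch; w, h, and img are placeholder names):

    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8,   w, h, 0, GL_RED, GL_FLOAT, img);  /* ~w*h bytes on the GPU   */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, w, h, 0, GL_RED, GL_FLOAT, img);  /* ~w*h*4 bytes on the GPU */

Same source data both times; only the internalFormat changes the footprint (approximate, since drivers are free to pad or tile).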

It’s only when you sample the texture in the shader that the GPU hot-converts the ubyte representation to float, and only for that specific texture lookup!
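
The shader side looks identical either way. A minimal fragment-shader sketch (written here as a C string; grayTex, uv, and fragColor are made-up names):

    const char *fs_src =
        "#version 330 core\n"
        "uniform sampler2D grayTex;\n"
        "in vec2 uv;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        /* texture() hands you a float either way: normalized to [0,1]
           for GL_R8 storage, full float range for GL_R32F storage. */
        "    float g = texture(grayTex, uv).r;\n"
        "    fragColor = vec4(g, g, g, 1.0);\n"
        "}\n";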

So GL_R32F_ARB and GL_RED should take the same amount of space in memory, because both are converted to normalized floats internally?

No, not unless the driver defaults GL_RED to GL_R32F. GL_RED is not a specific internal texture format. GL_R8 is. GL_LUMINANCE8 is. GL_R32F is. Those sized formats specify the exact storage of the texel data, not just the number of channels.

I would have thought GL_R32F_ARB would take four times the memory, since that would be 32 bits, while GL_RED would be 8.

That’s what I’d guess too, but it depends on what internalFormat your driver comes up with when you say “GL_RED”. Better to tell it GL_R8 or GL_R32F if you have something specific in mind (or GL_LUMINANCE8/GL_LUMINANCE32F).
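
If you’re curious what your driver actually chose for GL_RED, you can query it back after the upload (a sketch; assumes the texture is still bound):

    GLint fmt = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &fmt);
    /* Compare fmt against GL_R8, GL_R32F, etc. to see the real storage. */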

Perfect! I always thought the GPU should do the normalization upon sampling (and it does). Pre-expansion wouldn’t have made any sense whatsoever.

Thanks, Dark Photon and Stratton.