
OES extensions in NVIDIA 280.xx Beta



kRogue
07-31-2011, 09:02 PM
The 280.xx beta release lists a number of OES extensions; the ones I am staring at are GL_OES_texture_float and
GL_OES_texture_half_float. The question I have is that in OpenGL ES2, the "type storage" of a texture (i.e. uint8, float, etc.) is determined by the format in which the texture data is specified [i.e. the 7th and 8th arguments, format and type, of glTexImage2D(GLenum target, GLint level, GLint internalformat, GLsizei width, GLsizei height, GLint border, GLenum format, GLenum type, const GLvoid *data)], whereas in OpenGL the "type storage" is determined by the 3rd parameter, internalformat. What is the expected interaction behavior of the floating-point OES extensions in non-ES OpenGL? [Also note that GL_HALF_FLOAT_OES is 0x8D61 whereas GL_HALF_FLOAT is 0x140B.]

Xmas
08-01-2011, 04:30 AM
The question I have is that in OpenGL ES2, the "type storage" of a texture (i.e. uint8, float, etc) is determined by the format in which the texture data is specified
Actually, that's not quite true (even though it's what most implementations do):

The GL stores the resulting texture with internal component resolutions of its own choosing. The allocation of internal component resolution may vary based on any TexImage2D parameter (except target), but the allocation must not be a function of any other state and cannot be changed once established.

kRogue
08-01-2011, 10:19 AM
True, very true. Also, horribly, in GLES2 internalformat and format must be identical and be one of GL_RGBA, GL_RGB, GL_LUMINANCE, GL_ALPHA or GL_LUMINANCE_ALPHA. However, there is this little nugget from the man pages of glTexImage2D [http://www.khronos.org/opengles/sdk/docs/man/]:



internalformat must match format. No conversion between formats is supported during texture image processing. type may be used as a hint to specify how much precision is desired, but a GL implementation may choose to store the texture array at any internal resolution it chooses.


So basically, a GLES2 implementation deserves to be frowned upon deeply if it chooses an internal resolution that does not match type. Oh well; back to my question about GL_OES_texture_float and GL_OES_texture_half_float in the NVIDIA driver: if one passes GL_FLOAT (or GL_HALF_FLOAT_OES) for the type parameter and GL_RGBA for both internalformat and format, does that make the texture a floating-point (respectively half-float) texture?