Render to texture - texture formats and parameters?

When I render to a texture (stored in a bound framebuffer object), do any of the following texture parameters matter?

  • GL_TEXTURE_WRAP_S
  • GL_TEXTURE_WRAP_T
  • GL_TEXTURE_MIN_FILTER
  • GL_TEXTURE_MAG_FILTER
It’s also redundant to generate mipmaps, right? (Might be a stupid question, but I’m just making sure!)

What about data types? (the “type” parameter)
Does type have to be GL_FLOAT? If not, what’s the difference between specifying type as GL_FLOAT and GL_UNSIGNED_BYTE?

Also, every doc I find on the web regarding Texture2D (e.g. https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml) is missing some info (namely the GL_DEPTH_COMPONENT16/24/32 and GL_RGB16 formats).
Is there a source for complete info on this stuff? (preferably specialized for the render-to-texture technique)

[QUOTE=Pilpel;1281261]When I render to a texture (stored in a bound framebuffer object), do any of the following texture parameters matter?

  • GL_TEXTURE_WRAP_S
  • GL_TEXTURE_WRAP_T
  • GL_TEXTURE_MIN_FILTER
  • GL_TEXTURE_MAG_FILTER
[/QUOTE]
No.

The mipmap level you attach to the framebuffer needs to exist. Other levels don’t.
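For reference, a complete chain for a W×H texture has floor(log2(max(W, H))) + 1 levels, but only the attached level (usually 0) has to be allocated. The level count can be sketched without any GL context at all:

```c
/* Number of levels in a complete mipmap chain for a W x H texture:
   floor(log2(max(W, H))) + 1, per the GL spec's definition. */
static int mip_level_count(int w, int h)
{
    int max_dim = w > h ? w : h;
    int levels = 1;
    while (max_dim > 1) {
        max_dim /= 2;   /* each level halves the larger dimension */
        levels++;
    }
    return levels;
}
```

So a 256×256 texture has 9 levels (256 down to 1), and a 640×480 one has 10.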

Which function? glTexImage2D() (or whatever you use to allocate the storage)? The format and type parameters don’t matter if data is null (which is fairly typical if you’re going to generate the texture’s contents by rendering into it).
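For example, a typical render-to-texture allocation might look like this sketch (assuming a current GL 3.x context; the texture size, `tex` and `fbo` names are illustrative):

```c
/* Sketch: allocate an empty texture and attach it to an FBO.
   Assumes a current OpenGL 3.x context; identifiers are illustrative. */
GLuint tex, fbo;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* data == NULL: format/type are only validated, never used for conversion */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 768, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
/* Only level 0 exists; for later sampling either restrict the level range
   or use a non-mipmapped MIN_FILTER (the default needs a full chain). */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle incomplete framebuffer */
}
```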

If you provide data, you can typically use any format, and the data will be converted accordingly. Conversions between fixed-point (normalised) and floating-point are specified in section 2.3.5 of the specification. Conversion between fixed-point representations is as if the values are converted from one fixed-point representation to floating-point then to the other fixed-point representation.

[QUOTE=Pilpel;1281261]
Also, every doc I find on the web regarding Texture2D (e.g. https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml) is missing some info (namely the GL_DEPTH_COMPONENT16/24/32 and GL_RGB16 formats).
Is there a source for complete info on this stuff? (preferably specialized for the render-to-texture technique)[/QUOTE]
The source for complete information is the specification. Table 8.13 (which is missing from the reference page you linked) lists the depth and stencil texture formats. Table 8.12 lists the uncompressed colour texture formats and whether they are renderable (can be used as a framebuffer attachment). GL_RGB16 isn’t required to be renderable (use GL_RGBA16 instead; most 3-component formats aren’t renderable).

In older OpenGL you could set a texture parameter (GL_GENERATE_MIPMAP) that told the driver to auto-generate the mipmaps whenever the texture data changed. That is not used anymore.
Now you initiate the generation yourself after rendering, if you need it. You can use one of these functions; they all do basically the same thing.

Core since 3.0
glGenerateMipmap(target);

GL_EXT_direct_state_access
glGenerateTextureMipmapEXT(id, target);

GL_ARB_direct_state_access (Core since 4.5)
glGenerateTextureMipmap(id);
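If you do want mipmaps of the rendered image, the usual pattern is a sketch like this (assuming a core 3.0+ context; `tex` is a placeholder for the colour attachment’s texture id, and the full chain must have been allocated, e.g. via glTexStorage2D):

```c
/* Sketch: regenerate the mip chain after rendering into level 0. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* done rendering to the FBO     */
glBindTexture(GL_TEXTURE_2D, tex);      /* 'tex' = the colour attachment */
glGenerateMipmap(GL_TEXTURE_2D);        /* fills levels 1..N from level 0 */
```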

These two parameters (format and type) must still be “valid” values even when data is null. Setting them to something like GL_RED and GL_UNSIGNED_BYTE will do.

When setting internalformat to a “Base Internal Format” (unsized, e.g. GL_RGB or GL_DEPTH_COMPONENT), to what size does it set each component?

It sets it to whatever size it wants. Which is why you shouldn’t use them; always use sized internal formats.

Are you sure? It doesn’t set the internal format to some GPU-friendly type or something? :confused:

Unsized formats are deprecated. The texture storage functions (glTexStorage2D etc.), for example, only take sized formats.

I did, however, get framebuffer incomplete errors when calling glTexImage2D with arbitrary “type” and “format” parameters.
It worked when I set “type” to GL_FLOAT and “format” to something that matches “internalformat”. (e.g. GL_RGB when internalformat is GL_RGB8)

Can you explain that?

Hard to give a meaningful answer with that little information. For all I know, your “arbitrary” values are not valid in the way I mentioned, and therefore the texture creation, and subsequently the framebuffer object, are not fully set up.

[QUOTE=Pilpel;1281269]I did, however, get framebuffer incomplete errors when calling glTexImage2D with arbitrary “type” and “format” parameters.
It worked when I set “type” to GL_FLOAT and “format” to something that matches “internalformat”. (e.g. GL_RGB when internalformat is GL_RGB8)
[/QUOTE]
Did glTexImage2D() generate an error (use glGetError() to check)? If the call failed, then the texture will still be in its initial state, which won’t be valid as a framebuffer attachment.
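To make that concrete, here is a small sketch of such a check. The helper isn’t from this thread; the hex constants are the enum values fixed by the OpenGL specification, so the lookup itself needs no GL headers:

```c
#include <stdio.h>

/* GLenum error values as fixed by the OpenGL specification. */
#define GL_NO_ERROR          0x0000
#define GL_INVALID_ENUM      0x0500
#define GL_INVALID_VALUE     0x0501
#define GL_INVALID_OPERATION 0x0502
#define GL_OUT_OF_MEMORY     0x0505

/* Map a glGetError() result to a printable name. */
static const char *gl_error_name(unsigned int err)
{
    switch (err) {
    case GL_NO_ERROR:          return "GL_NO_ERROR";
    case GL_INVALID_ENUM:      return "GL_INVALID_ENUM";
    case GL_INVALID_VALUE:     return "GL_INVALID_VALUE";
    case GL_INVALID_OPERATION: return "GL_INVALID_OPERATION";
    case GL_OUT_OF_MEMORY:     return "GL_OUT_OF_MEMORY";
    default:                   return "unknown";
    }
}

/* Usage, right after the glTexImage2D() call:
     GLenum err = glGetError();
     if (err != GL_NO_ERROR)
         fprintf(stderr, "glTexImage2D: %s\n", gl_error_name(err));  */
```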

There are some constraints on the relationship between format and internalformat, specifically:

An INVALID_OPERATION error is generated if the internal format is integer and format is not one of the integer formats listed in table 8.3, or if the internal format is not integer and format is an integer format.

An INVALID_OPERATION error is generated if one of the base internal format and format is DEPTH_COMPONENT or DEPTH_STENCIL, and the other is neither of these values.

An INVALID_OPERATION error is generated if format is STENCIL_INDEX and the base internal format is not STENCIL_INDEX.

If both format and the base internal format are colour formats, the number of components doesn’t need to match; excess components are ignored, missing components are set to 0 or 0.0 for red, green or blue and 1 or 1.0 for alpha.
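As a sketch of that fill rule (a hypothetical helper, just mirroring the spec’s behaviour for the common case of supplying GL_RED data to an RGBA internal format):

```c
/* Sketch of the spec's component-fill rule: when the supplied format has
   fewer components than the internal format, missing R/G/B become 0 and
   missing alpha becomes 1 (shown here as normalised floats). */
typedef struct { float r, g, b, a; } rgba;

static rgba expand_red_to_rgba(float red)
{
    rgba out;
    out.r = red;   /* taken from the supplied data   */
    out.g = 0.0f;  /* missing colour components -> 0 */
    out.b = 0.0f;
    out.a = 1.0f;  /* missing alpha -> 1             */
    return out;
}
```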

I’m reasonably sure that type doesn’t matter beyond the case of “packed” formats needing to have the correct number of components.

No, unsized formats are not deprecated. It may be unwise to use them, but they’re still a perfectly valid part of core OpenGL.