Framebuffer texture creation question

So I have a piece of code that decides whether it should create a renderbuffer or a texture for a framebuffer attachment, and my question focuses on this part of it:

glTexImage2D(GL_TEXTURE_2D, 0, m_internalFormat, m_width, m_height, 0, format(), type(), nullptr);

My question is: does specifying a format/type actually do anything when the data is NULL and I'm basically just allocating memory? I already found out that I can't use 0L for them, so it seems like they must do something.

I thought they just had to match the internal format, but the wiki page says GL_R11F_G11F_B10F is required to work for both renderbuffers and textures, and there's no corresponding type for it. Does the type not really matter as long as the format matches? Or what's the deal?

This, for example:

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, m_width, m_height, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr);

will generate an error, because that conversion isn't allowed. So it seems like I can't always use the same format/type values for every internal format; or is it safe to just ignore that error code?

[QUOTE=GeatMaster;1291781]So I have a piece of code that decides whether it should create a renderbuffer or a texture for a framebuffer attachment, and my question focuses on this part of it:

glTexImage2D(GL_TEXTURE_2D, 0, m_internalFormat, m_width, m_height, 0, format(), type(), nullptr);

My question is: does specifying a format/type actually do anything when the data is NULL and I'm basically just allocating memory?[/QUOTE]

No. They describe the format of the pixel data you're providing, and if you're not providing any, they aren't really used for anything. They still need to be a valid combination for the error checking, though, as you found out.

I thought they just had to match the internal format

No. The original idea was that you could have your texture be one format on the GPU (the "internal format"), potentially populate it with texel data of another format (given by format + type), and the driver would, in some cases, do the conversion on the fly (e.g. populating compressed GPU textures with uncompressed texel data). However, if you're writing a high-performance app that strives for maximum visual quality, you'll almost never want to let the driver do a runtime conversion. You'd do the conversion beforehand using high-quality conversion/compression code and just load the data in the GPU-native format.
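To tie that back to your GL_R11F_G11F_B10F example, here's a minimal sketch (not code from your program; m_width/m_height are borrowed from your snippet). GL_RGB + GL_UNSIGNED_INT_10F_11F_11F_REV is the client format/type pair that matches that internal format directly, but since the data pointer is null, any other valid pair the driver could in principle convert from (e.g. GL_RGB + GL_FLOAT) is accepted as well:

  // Storage only, no pixel data uploaded; format/type just have to be a valid pair.
  glTexImage2D(GL_TEXTURE_2D, 0, GL_R11F_G11F_B10F, m_width, m_height, 0,
               GL_RGB, GL_UNSIGNED_INT_10F_11F_11F_REV, nullptr);

  // An equally acceptable pair for the same internal format:
  glTexImage2D(GL_TEXTURE_2D, 0, GL_R11F_G11F_B10F, m_width, m_height, 0,
               GL_RGB, GL_FLOAT, nullptr);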

will generate an error, because that conversion isn't allowed. So it seems like I can't always use the same format/type values for every internal format; or is it safe to just ignore that error code?

I wouldn’t get in the habit of ignoring error codes. It’ll bite you further down the road. Just make the API call happy by providing a reasonable format/type.
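For your depth example specifically, the issue is that depth internal formats require the client format to also be GL_DEPTH_COMPONENT, which is why GL_RED gets rejected. A sketch of a combination that keeps the API happy (GL_DEPTH_COMPONENT24 is just one sized choice; m_width/m_height are from your snippet):

  // Depth internal formats want a GL_DEPTH_COMPONENT client format.
  glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_width, m_height, 0,
               GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);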

If you don't want to have to just "know" a reasonable format/type for every internal format, you can query what the driver thinks is a good format/type to provide using:


  GLint format = 0, type = 0;
  glGetInternalformativ(target, int_format, GL_TEXTURE_IMAGE_FORMAT, 1, &format);
  glGetInternalformativ(target, int_format, GL_TEXTURE_IMAGE_TYPE,   1, &type);

You might want to browse the glGetInternalformat reference page to see what other goodies you can query. Some of them can be pretty useful.
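Putting the two together, here's a sketch of how you might fold the query into your allocation path (allocEmptyTex2D is just a hypothetical wrapper name, and there's no error handling):

  // Hypothetical helper: ask the driver for its preferred client format/type
  // for this internal format, then allocate empty storage using them.
  void allocEmptyTex2D(GLenum internalFormat, GLsizei w, GLsizei h)
  {
      GLint fmt = 0, type = 0;
      glGetInternalformativ(GL_TEXTURE_2D, internalFormat, GL_TEXTURE_IMAGE_FORMAT, 1, &fmt);
      glGetInternalformativ(GL_TEXTURE_2D, internalFormat, GL_TEXTURE_IMAGE_TYPE,   1, &type);
      glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, w, h, 0,
                   static_cast<GLenum>(fmt), static_cast<GLenum>(type), nullptr);
  }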

That's perfect, thanks!

Unless my program is running on a card that doesn't support OpenGL 4.2, that is; I don't know how probable that is. Do you know?

If you want to allocate storage for a texture, and not supply any pixel data, then use glTexStorage.
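For example, with the 2D variant (a sketch; GL_R11F_G11F_B10F and the single mip level are just placeholders):

  // Immutable storage: no client data, and no format/type parameters to get wrong.
  glTexStorage2D(GL_TEXTURE_2D, 1, GL_R11F_G11F_B10F, m_width, m_height);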

Just do what arekkusu said. He’s got the better solution.

To your question though, you can look at recent reports here: https://opengl.gpuinfo.org/ But I suspect many of the submitters are developers, not end users.
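If you'd rather guard this at runtime than guess about the installed base, a small sketch (not from the thread) that checks the context version before using the query:

  // glGetInternalformativ needs a GL 4.2+ context (as mentioned above),
  // so check the version reported by the context first.
  GLint major = 0, minor = 0;
  glGetIntegerv(GL_MAJOR_VERSION, &major);
  glGetIntegerv(GL_MINOR_VERSION, &minor);
  bool canQueryInternalformat = (major > 4) || (major == 4 && minor >= 2);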