Reading integer textures in GLSL



Rakehell
07-06-2016, 01:59 PM
I declare the texture as a highp usampler2D. When I sample it, the red, green, and blue channels are 0 but the alpha channel is some huge number.

I've tried GL_LUMINANCE, GL_RED, and GL_RGBA32I and it gives the same result. How can I read my texture as a series of 32-bit unsigned ints?

Alfonse Reinheart
07-06-2016, 03:32 PM
I've tried GL_LUMINANCE, GL_RED, and GL_RGBA32I

You've tried those where? If that was the internal format, then it should never have worked for the first two, since those are 1-channel formats. Furthermore, you haven't shown us the code uploading the data. Are you uploading 4 channels worth of data?

Rakehell
07-06-2016, 03:53 PM
You've tried those where? If that was the internal format, then it should never have worked for the first two, since those are 1-channel formats. Furthermore, you haven't shown us the code uploading the data. Are you uploading 4 channels worth of data?

I can't have a 1-channel texture? That's what I want.

Alfonse Reinheart
07-06-2016, 05:14 PM
Yes, you absolutely can have a one channel texture. But that means it only has one channel. So why are you looking at the G, B, and A components of a texture, when you only have one component?

If you want a one-channel texture, you're supposed to read it like it's a one-channel texture. To be fair, if your texture is GL_R32UI, then the G, B, and A components should read back as 0, 0, and 1 respectively.

That being said, GL_LUMINANCE and GL_RED are not integer formats. They're unsigned, normalized formats. The only OpenGL internal formats that store unsigned integers are those that end in UI.

So if you tried to read from them with a `usampler2D`, then you get undefined behavior.
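To make that concrete, here is a sketch of a GLSL ES 3.00 fragment shader that reads a genuinely unsigned-integer texture (say, one with a GL_R32UI internal format). The uniform name is made up for illustration; `texelFetch` is used because filtering is not defined for integer textures:

```glsl
#version 300 es
precision highp float;

// Assumes the texture was allocated with an unsigned-integer internal
// format such as GL_R32UI; using a float sampler2D here would be
// undefined behavior, as would the reverse.
uniform highp usampler2D uIndexTex;

out vec4 fragColor;

void main() {
    // texelFetch reads one exact texel with integer coordinates;
    // integer textures must not be filtered.
    uint value = texelFetch(uIndexTex, ivec2(gl_FragCoord.xy), 0).r;

    // For a one-channel format only .r is meaningful; .g/.b/.a read
    // back as 0, 0, 1. Convert to float only if you need to display it.
    fragColor = vec4(float(value & 255u) / 255.0);
}
```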

GClements
07-06-2016, 08:18 PM
I declare the texture as a highp usampler2D. When I sample it, the red, green, and blue channels are 0 but the alpha channel is some huge number.

I've tried GL_LUMINANCE, GL_RED, and GL_RGBA32I and it gives the same result. How can I read my texture as a series of 32-bit unsigned ints?
First, the texture has to actually contain 32-bit unsigned integers, and none of those formats do. The first two contain unsigned normalised values of unspecified size, the last one contains 32-bit signed integers.

For 32-bit unsigned integers, use GL_R32UI, GL_RG32UI, GL_RGB32UI or GL_RGBA32UI, depending upon whether you want 1, 2, 3 or 4 channels.
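In plain GL (outside the CVPixelBuffer path discussed below), allocating such a texture would look roughly like this sketch; `width`, `height`, and `pixels` are placeholders for your own data:

```c
/* Sketch: creating a one-channel, 32-bit unsigned integer texture.
 * Assumes a current GL context and a pixels buffer of uint32_t values. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* Integer textures cannot be filtered; NEAREST is required. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* The internal format is GL_R32UI. The pixel transfer format must be
 * GL_RED_INTEGER (not GL_RED), with type GL_UNSIGNED_INT, to match
 * the unsigned integer source data. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_INT, pixels);
```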

Rakehell
07-07-2016, 11:12 AM
First, the texture has to actually contain 32-bit unsigned integers, and none of those formats do. The first two contain unsigned normalised values of unspecified size, the last one contains 32-bit signed integers.

For 32-bit unsigned integers, use GL_R32UI, GL_RG32UI, GL_RGB32UI or GL_RGBA32UI, depending upon whether you want 1, 2, 3 or 4 channels.

I tried GL_R32UI and the texture just shows up blank. The only thing that works is an internal format of GL_RGBA with a type of GL_FLOAT and the integers are copied correctly except I'm getting one value per channel.

Alfonse Reinheart
07-07-2016, 11:16 AM
Nobody can tell you what's wrong if you just say "I tried this and it didn't work." Describing what you do is not the same as showing your actual code. It's time you stopped posting text and started posting code.


The only thing that works is an internal format of GL_RGBA with a type of GL_FLOAT and the integers are copied correctly except I'm getting one value per channel.

... that doesn't even begin to make sense. You may have managed to invoke multiple forms of undefined behavior that, when taken as an aggregate, just so happen to achieve the effect you want.

Rakehell
07-09-2016, 11:11 AM
I'm just copying one 32-bit buffer to the next so I'm not surprised it works.


I'm on iOS, creating an OpenGL texture from a CVPixelBuffer. This is the setup that currently works:


CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBufferIsoIndices, NULL, GL_TEXTURE_2D, GL_R32F, canvasNormalsTextureSize, canvasNormalsTextureSize, GL_RED, GL_FLOAT, 0, &videoTextures[VIDTEX_ISO_INDICES]);

Manual page:


CVReturn CVOpenGLESTextureCacheCreateTextureFromImage(
    CFAllocatorRef allocator,
    CVOpenGLESTextureCacheRef textureCache,
    CVImageBufferRef sourceImage,
    CFDictionaryRef textureAttributes,
    GLenum target,
    GLint internalFormat,
    GLsizei width,
    GLsizei height,
    GLenum format,
    GLenum type,
    size_t planeIndex,
    CVOpenGLESTextureRef _Nullable *textureOut
);
Description
Creates a CVOpenGLESTextureRef object from an existing CVImageBufferRef.
Upon successful creation of the texture, this function returns kCVReturnSuccess.
Parameters
allocator
The CFAllocatorRef to use for allocating the texture object. This parameter can be NULL.
textureCache
The texture cache object that will manage the texture.
sourceImage
The CVImageBufferRef that you want to create a texture from.
textureAttributes
A CFDictionaryRef containing the attributes to be used for creating the CVOpenGLESTextureRef objects. This parameter can be NULL.
target
The target texture. GL_TEXTURE_2D and GL_RENDERBUFFER are the only targets currently supported.
internalFormat
The number of color components in the texture. Examples are GL_RGBA, GL_LUMINANCE, GL_RGBA8_OES, GL_RED, and GL_RG.
width
The width of the texture image.
height
The height of the texture image.
format
The format of the pixel data. Examples are GL_RGBA and GL_LUMINANCE.
type
The data type of the pixel data. One example is GL_UNSIGNED_BYTE.
planeIndex
The plane of the CVImageBufferRef to bind. Ignored for non-planar CVImageBufferRefs.
textureOut
A pointer to a CVOpenGLESTextureRef where the newly created texture object will be placed.

Alfonse Reinheart
07-09-2016, 11:26 AM
This is the setup that currently works:

Yes, that should work. But what you described as "working" was this:


The only thing that works is an internal format of GL_RGBA with a type of GL_FLOAT and the integers are copied correctly except I'm getting one value per channel.

That doesn't describe the code you posted. The posted code's internal format was `GL_R32F`, which is a single-channel, 32-bit floating point format.

If you want a single-channel, 32-bit unsigned integer format, you should use `GL_R32UI` for the internal format. However, you must also adjust your pixel transfer format (https://www.opengl.org/wiki/Pixel_Transfer_Format) and types (https://www.opengl.org/wiki/Pixel_Transfer_Type) to match your data:


CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBufferIsoIndices, NULL, GL_TEXTURE_2D, GL_R32UI, canvasNormalsTextureSize, canvasNormalsTextureSize, GL_RED_INTEGER, GL_UNSIGNED_INT, 0, &videoTextures[VIDTEX_ISO_INDICES]);

Now, whether this function will do the job correctly, I cannot say. This is some iOS wrapper around genuine OpenGL functionality, so it may not be able to handle integer textures.