
View Full Version : Windows Imaging Component & glTexImage2D()



Kind_9
02-25-2013, 10:08 AM
Hello. I'm trying to load a 32-bit image with the Windows Imaging Component and use it as an OpenGL texture. The problem is that my code generates a runtime error from nvoglv32.dll (presumably the NVIDIA OpenGL driver).

Here is the code to load the image with WIC:


VOID GetImageFromFile(LPCWSTR file, IWICBitmap** bitmap)
{
    IWICImagingFactory* factory = NULL;
    IWICBitmapDecoder* decoder = NULL;
    IWICBitmapFrameDecode* frame = NULL;
    IWICFormatConverter* converter = NULL;

    CoCreateInstance(CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&factory));
    factory->CreateDecoderFromFilename(file, NULL, GENERIC_READ | GENERIC_WRITE, WICDecodeMetadataCacheOnDemand, &decoder);
    decoder->GetFrame(0, &frame);
    factory->CreateFormatConverter(&converter);
    converter->Initialize(frame, GUID_WICPixelFormat32bppBGRA, WICBitmapDitherTypeNone, NULL, 0.0, WICBitmapPaletteTypeCustom);
    factory->CreateBitmapFromSource(frame, WICBitmapNoCache, bitmap);

    SafeRelease(factory);
    SafeRelease(decoder);
    SafeRelease(frame);
    SafeRelease(converter);
}


And here is the code that calls the above function and creates the texture:


BOOL Create(LPCWSTR file)
{
    if (!glIsEnabled(GL_TEXTURE_2D))
        glEnable(GL_TEXTURE_2D);

    IWICBitmap* bitmap = NULL;
    IWICBitmapLock* lock = NULL;

    GetImageFromFile(file, &bitmap);
    if (!bitmap)
        return FALSE;

    WICRect r = {0};
    bitmap->GetSize((UINT*)&r.Width, (UINT*)&r.Height);

    BYTE* data = NULL;
    UINT len = 0;

    bitmap->Lock(&r, WICBitmapLockRead, &lock);
    lock->GetDataPointer(&len, &data);

    UINT glid = 0;
    glGenTextures(1, &glid);
    glBindTexture(GL_TEXTURE_2D, glid);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glTexImage2D(GL_TEXTURE_2D, 0, 4, r.Width, r.Height, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, data);

    lock->Release();
    bitmap->Release();

    return (GL_NO_ERROR == glGetError());
}

If I change GL_BGRA_EXT in the call to glTexImage2D() to GL_BGR_EXT, everything works fine. Any ideas why this is happening? If I'm stuck using GL_BGR_EXT, will the texture still be created with 4 pixel components (RGBA) as specified? I'm really getting burnt out on experimenting here, and all I want is to transfer the alpha channel from the file on disk to the OpenGL texture.

PS. I know the code is ugly with very little error checking, but after all the experimenting I've done, I'm certain that the problem lies in the call to glTexImage2D().

Nowhere-01
02-25-2013, 11:02 AM
Yes, the problem is in your glTexImage2D call. You should've read about it before using it: http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml

The 3rd parameter is internalFormat. I don't know what you meant by "4", but you can find the accepted values at the link I gave you above. In your case it should be GL_RGBA. The value 4 probably corresponds to some 3-component internal format, so in the case with GL_BGRA it didn't match. Also, GL_BGRA_EXT has the same value as GL_BGRA.

Kind_9
02-25-2013, 11:23 AM
According to the Win32 API documentation:


internalformat
The number of color components in the texture. Must be 1, 2, 3, or 4, or one of the following symbolic constants: GL_ALPHA, GL_ALPHA4, GL_ALPHA8, GL_ALPHA12, GL_ALPHA16, GL_LUMINANCE, GL_LUMINANCE4, GL_LUMINANCE8, GL_LUMINANCE12, GL_LUMINANCE16, GL_LUMINANCE_ALPHA, GL_LUMINANCE4_ALPHA4, GL_LUMINANCE6_ALPHA2, GL_LUMINANCE8_ALPHA8, GL_LUMINANCE12_ALPHA4, GL_LUMINANCE12_ALPHA12, GL_LUMINANCE16_ALPHA16, GL_INTENSITY, GL_INTENSITY4, GL_INTENSITY8, GL_INTENSITY12, GL_INTENSITY16, GL_R3_G3_B2, GL_RGB, GL_RGB4, GL_RGB5, GL_RGB8, GL_RGB10, GL_RGB12, GL_RGB16, GL_RGBA, GL_RGBA2, GL_RGBA4, GL_RGB5_A1, GL_RGBA8, GL_RGB10_A2, GL_RGBA12, or GL_RGBA16.

Besides, I've already tried GL_RGB, GL_RGBA, GL_BGR, and GL_BGRA there and still get the error unless I change format to GL_RGB or GL_BGR_EXT. GL_BGRA and GL_BGR are undefined.

Nowhere-01
02-25-2013, 07:39 PM
That's interesting; I can't find the opengl.org spec mentioning using digits as internalFormat. It says: "internalFormat - Specifies the number of color components in the texture. Must be one of base internal formats given in Table 1, one of the sized internal formats given in Table 2, or one of the compressed internal formats given in Table 3, below." And I've just tested with working code: using 3 and 4 instead of GL_RGB and GL_RGBA doesn't work... actually, it does work, but I wouldn't recommend it. GL_BGRA is defined in "glew.h".

So, if changing internalFormat doesn't help, it most certainly crashes because the "data" buffer is smaller than width * height * 4. Is your image actually 4-component? Are you sure? Re-check your image-loading code. I have no idea what's going on in there, and I'd like to avoid spending my time reading docs for this awful API. Maybe someone else here is familiar with WIC.

Kind_9
02-26-2013, 04:22 AM
I guess the number constants are Microsoft-specific then. I will avoid using them. Yes, I think the problem was that I was loading an image without an alpha channel; I was assuming that WIC would add the alpha channel when converting. Anyway, I used a different program to save the image with alpha and that seemed to fix it, so thank you.

Nowhere-01
02-26-2013, 04:39 AM
I guess the number constants are Microsoft specific then.

What do you mean by "Microsoft-specific"? They can't have their own OpenGL specification; any implementation must follow the spec. It's probably just a mistake.

Kind_9
02-26-2013, 05:51 AM
Then feel free to explain why those constants are allowed. This is the beginner forum, after all. I have seen plenty of examples using these constants, so it must either be part of the specification or specific to the Microsoft implementation.

Nowhere-01
02-26-2013, 06:09 AM
OK, my post had a mistake; I'd replaced the wrong variables in my class. It works: you can set 1/2/3/4 as internalFormat, but I wouldn't rely on that, because I've never seen it used like that and the main specification doesn't mention it. Use GL_RGB8 and GL_RGBA8.


Then feel free to explain why those constants are allowed
The internalFormat argument is of GLint type, which means you can pass any integer value to it, but you should only use the enumerators listed in the OpenGL spec.

P.S. The only implementation of OpenGL Microsoft has is the GDI one: basically a software mode, used when you have no graphics card driver installed. And their implementation still must follow the official specification.

Alfonse Reinheart
02-26-2013, 02:47 PM
I have seen plenty of examples using these constants

If you see any code that uses a number instead of an enum for the internal format, immediately leave that website. Any such code is:

1: Poorly written.

2: Not teaching you good OpenGL practice.

The ability to use numbers as the internal format was removed from core OpenGL in 3.1 (http://www.opengl.org/wiki/Core_And_Compatibility_in_Contexts), but I'm guessing you're using the compatibility profile. Generally speaking, you should avoid any online materials that teach non-core OpenGL functionality.

Kind_9
02-27-2013, 05:01 AM
I will keep that in mind, and I'll probably use the opengl.org documentation from now on. For the record, though, I don't normally use magic numbers like that; it was just the last thing I tried before coming here to post.

Anyway, the real issue was simpler than I thought. I was supposed to create the WIC bitmap from the format converter instead of the frame object.

tranders
03-02-2013, 11:00 AM
Be careful using opengl.org for documentation, because it is not without its flaws. For example, the 3.3 pages refer to a set of tables of valid internal formats but fail to include the tables. For the record, the 1.1 specification for glTexImage2D did not use "magic" numbers, since the literal values 1, 2, 3, and 4 actually referred to the number of components, as indicated in the argument's description. For historical reference, these values mapped to the following internal base formats:

1 = GL_LUMINANCE
2 = GL_LUMINANCE_ALPHA
3 = GL_RGB
4 = GL_RGBA

I really can't understand why the GL_LUMINANCE and GL_ALPHA formats were dropped from the specification, considering they have been retained in the OpenGL ES 1/2/3 specifications. There was absolutely no technical reason to do this, considering the internal conversion is trivial.

Alfonse Reinheart
03-02-2013, 02:26 PM
There was absolutely no technical reason to do this considering the internal conversion is trivial.

Probably because you can get the exact same effect with texture swizzling. And according to the ES 3.0 spec, the only reason luminance et al. are still around is for backwards compatibility with ES 2.0.

tranders
03-02-2013, 05:10 PM
Probably because you can get the exact same effect with texture swizzling. And according to the ES 3.0 spec, the only reason Luminance et. al are still around is for backwards compatibility with ES 2.0.

There's a lot to be said for backwards compatibility -- especially when maintaining that support is, for all practical purposes, free -- something that cannot be said for the additional formats that have been added to the specification. On that, I'm sure we will have to agree to disagree.

Alfonse Reinheart
03-02-2013, 08:50 PM
There's a lot to be said for backwards compatibility

Yes. But once you've made the decision to break backwards compatibility, it ceases to be a concern. Because you're breaking it. Deliberately. It's one thing to make a general statement that they shouldn't break backwards compatibility, period. But once that decision has been made, LUMINANCE is pretty indefensible. It's an obvious piece of redundancy next to generic swizzling, and therefore should be cut. Although it would have been nice if they'd introduced texture swizzling into core GL before they cut LUMINANCE, instead of leaving us with a good year or so where you just couldn't get the behavior without hardcoding it into your shader.

In any case, if you're making people give up fixed-function and all of the other crap they removed for 3.1, they're not exactly going to be screaming and crying about losing LUMINANCE. Which is why they should have cut all of the unsized internal formats when they had the chance. But at least the glTexStorage functions don't take them, so there's a win.

Of course, this means you can't use LUMINANCE et al. with glTexStorage. Even in OpenGL ES. So "backwards compatibility" only applies to old APIs ;)


something that cannot be said for the additional formats that have been added to the specification

... what are you referring to here? Swizzling is free too. So are the Red and RG textures.

mbentrup
03-04-2013, 10:53 PM
The render-to-texture semantics of the R and RG formats are simple, but those of the luminance and intensity formats are not. And in a mostly programmable pipeline you can easily replace them with RG textures. Well, except that L/A and RG should use different sRGB conversions...