Trouble with 8-bit single channel 3D texture



Ronald
07-01-2008, 06:20 AM
Afternoon people,

I am having trouble with a 3D texture.
I would like to create a 512 x 512 x 512 3D texture.
All works well when I use RGBA as the internal format.

For memory reasons I would like to put single-channel 8-bit (GL_UNSIGNED_BYTE) data into the texture and let my shader do some work. But every time I do that, my app crashes.

In short, I would like to change:
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, width, height, depth, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

into:
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, width, height, depth, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0);

I'm using an NVIDIA 7600.

Has anyone had this problem before?
Any help would be welcome.

Regards,

Ronald

dletozeun
07-01-2008, 06:48 AM
Is any GL error thrown? Check with glGetError().

Do you then specify the data for your texture?

Note that you won't save memory space with luminance, since internally the luminance value is duplicated three times into the RGB channels, with 1 attached for alpha.

Ronald
07-01-2008, 06:55 AM
Thanks for your fast reply, dletozeun.

No GL error is thrown; I checked before calling glTexImage3D(). It crashes in the driver, with no readable stack trace, when I call:
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, width, height, depth, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0);

After creating the texture I update the data.

On your third point I can only ask: how else can I save the space?
512 x 512 x 512 x 32 bits = 512 MB,
versus 512 x 512 x 512 x 8 bits = 128 MB.
8-bit precision is enough for my application.

dletozeun
07-01-2008, 07:22 AM
I don't know if it is possible to create a texture with only one 8-bit channel in video memory.

But how do you update the texture data? Do you draw into it with an FBO?
glTexImage3D allocates memory for the specified texture; you should not call glTexImage3D again to update the texture data unless you use a PBO.

Ronald
07-01-2008, 07:26 AM
Thanks again.

I call:
glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, 0, width, height, depth, GL_LUMINANCE, GL_UNSIGNED_BYTE, data);

where data is a pointer to the array, in the correct format (GL_UNSIGNED_BYTE) of course.

dletozeun
07-01-2008, 07:55 AM
So you call glTexSubImage3D twice? Once with a null data pointer, and then with the "data" pointer?
As I said, glTexSubImage3D, even with a null pointer, allocates data in video memory.
I don't know exactly how video memory is managed, but if you allocate 512 MB in video memory twice, it could cause some problems. The driver should then fall back to allocating in system memory rather than crash your application (if enough space remains there; otherwise it is swap memory).

If you use small 3D textures, does your application still crash?

V-man
07-01-2008, 08:09 AM
He calls glTexImage3D with NULL, which allocates space. Then he calls glTexSubImage3D, which is used to update. There is nothing wrong with that.
There is an NVIDIA document called nv_ogl_texture_formats.pdf, and it states that from the TNT to the GeForce 6800, LUMINANCE8 is directly supported. The 7600 is based on the 6800 design. I'm sure even the 8800 and 9800 generations support it.
It is possible that "data" is not large enough, or there is an alignment problem. If each data row is not a multiple of 4 bytes, call glPixelStorei(GL_UNPACK_ALIGNMENT, 1).

Ronald
07-01-2008, 08:12 AM
No, no.

I first call glTexImage3D() to allocate the memory, then I update it with glTexSubImage3D(). This works perfectly for everything else: 2D textures, 1D textures.

The problem really is how to create an 8-bit single channel texture.

As I said from the start, all works well when using RGBA for the internal format.

The app also crashes when I try to create small 8-bit single channel 3D textures.

dletozeun
07-01-2008, 08:21 AM
He calls glTexImage3D with NULL, which allocates space. Then he calls glTexSubImage3D, which is used to update. There is nothing wrong with that.


Yes!! My big mistake! I read too fast and believed that it was glTexImage3D instead of glTexSubImage3D... :( Sorry.

dletozeun
07-01-2008, 08:26 AM
Ronald, you said that it works perfectly well with the RGBA internal format, and you specified an RGBA pixel format in the code at the beginning; so it means that your data is composed of four 8-bit channels per pixel.

Then, with the LUMINANCE pixel format, the data should have only one 8-bit channel per pixel.

Do you take care of the data format conversion?

Zengar
07-01-2008, 09:01 AM
Ronald, you said that it works perfectly well with the RGBA internal format, and you specified an RGBA pixel format in the code at the beginning; so it means that your data is composed of four 8-bit channels per pixel.

Then, with the LUMINANCE pixel format, the data should have only one 8-bit channel per pixel.

Do you take care of the data format conversion?

What do you mean by that?

My guess: the 8-bit format is just not supported with 3D textures! Or it is that unpack alignment problem.

babis
07-01-2008, 09:39 AM
On your third point I can only ask: how else can I save the space?
512 x 512 x 512 x 32 bits = 512 MB,
versus 512 x 512 x 512 x 8 bits = 128 MB.
8-bit precision is enough for my application.

Just a wild idea: what if you store four 8-bit slices in one RGBA slice?
Just make your depth 128; then you can fetch your sample with the 3D coord:

[ coord_new = vec3(coord.xy, coord.z * 0.25) ]

and pick the [ ch = mod(coord.z, 4) ] channel from the sample (taking coord.z as the unnormalized slice index).

You could also write to a specific channel using glColorMask.
This, I *wildly guess*, could save you space, since 384 MB is a big deal.

babis
07-01-2008, 09:41 AM
My guess: the 8-bit format is just not supported with 3D textures! Or it is that unpack alignment problem.

I've been using 8-bit 3D textures successfully, so I would guess it's an alignment thing.

dletozeun
07-02-2008, 12:24 AM
What do you mean by that?
My guess: the 8-bit format is just not supported with 3D textures! Or it is that unpack alignment problem.


I was thinking of an alignment problem, due to a wrong pixel format in the specified data array.
In Ronald's first post, glTexImage3D is called with the GL_RGBA pixel format, so the pixels in data have four channels: RGBA, with the same values in R, G and B, and 1 for A.
Now, in the second call, glTexImage3D is called with the GL_LUMINANCE pixel format, so the given data array's pixel format must be different: each pixel has one 8-bit luminance channel instead of four.

So I was wondering whether Ronald takes care of this.

Zengar
07-02-2008, 12:41 AM
I am sure he did ;) He doesn't seem like somebody who would make a mistake like that...

dletozeun
07-02-2008, 12:45 AM
Yes I agree, but I have to be sure! :)

Ronald
07-04-2008, 02:41 AM
Morning people,

Many thanks for all the replies.
Here is an update from my side.

I did get it to work with:
internalFormat = GL_LUMINANCE16_ALPHA16; // 16 bit precision
format = GL_LUMINANCE_ALPHA;
type = GL_UNSIGNED_SHORT;

For luminance I use the value from the data array mapped to the unsigned-short range; for alpha I use 65535.
This will still use 512 MB for a 512x512x512 3D texture, of course.
So the original problem is still not solved, although this does exactly what I would like to do.

@babis: Good thinking. Too bad this is not an option for me, since I want standard OpenGL linear filtering to be available.

I will get back to you guys when I have looked into this unpack alignment thing.

Regards,

Ronald

dletozeun
07-04-2008, 08:26 AM
I have never used the GL_LUMINANCE16_ALPHA16 internal format; each channel has 16-bit precision, doesn't it?

I don't really see why it would work for short-typed values and not byte-typed ones (even more so with luminance/alpha pairs rather than luminance only)... Finally, what is stored in your data array: shorts or bytes?

zed
07-04-2008, 02:53 PM
I have no problems with 3D LUMINANCE8 textures on my GF7600; I've been using a couple for years and the drivers have never balked. Even mipmaps work OK.