3D texture max dimension on NV cards

I’m trying to do terrain texturing by using a 3D texture, actually a stack of 2D textures that are 1024x1024 each.

It works fine on a GeForce 8, but on my GeForce 7800 with the latest driver (163.75, if I remember correctly), it fails, and for a simple reason: when you query the max 3D texture dimension, it returns 512x512x512.
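
For reference, this is the query I mean (a minimal sketch, assuming a current context and headers that expose GL 1.2):

```c
#include <GL/gl.h>
#include <stdio.h>

/* Ask the driver for its 3D texture size limit; on the GeForce 7800 described
   above this comes back as 512. */
void print_max_3d_texture_size(void)
{
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &maxSize);
    printf("GL_MAX_3D_TEXTURE_SIZE = %d\n", (int)maxSize);
}
```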

If I try to submit a 1024x1024x4 3D texture, I get no GL error, but the triangles textured with this 3D texture don’t appear at all (as if they were invisible), and my application crashes after a couple of seconds. I’m of course not exceeding the available video memory.
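
For concreteness, the upload looks roughly like this (a sketch with placeholder data and an illustrative RGBA8 format; it assumes the glTexImage3D entry point is available, which on Windows means going through an extension loader):

```c
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

void upload_terrain_stack(void)
{
    const int w = 1024, h = 1024, d = 4;
    GLubyte *texels = calloc((size_t)w * h * d * 4, 1); /* placeholder data */
    GLuint tex = 0;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, w, h, d, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, texels);

    /* A driver enforcing its own 512 limit should report GL_INVALID_VALUE here;
       on the GeForce 7 driver in question this returns GL_NO_ERROR. */
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        fprintf(stderr, "glTexImage3D failed: 0x%x\n", err);

    free(texels);
}
```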

ATI cards report a max of 4096^3.

I’m wondering why there is such a restriction of 512^3 on GeForce 7 or below. It almost sounds like NVIDIA’s (flawed) reasoning was that 512^3 already exceeds the available video memory on GeForce 7 cards, so there’s no point in raising the limit, when in reality it should be possible to use 3D textures with a very small third dimension that easily fit in video memory…

Even worse, I’m pretty sure it’s a driver and not a hardware problem, as I believe Direct3D doesn’t suffer from the same limitation.

Is there any logic here?

Y.

NVIDIA’s GeForce 7 series GPUs do not support 3D textures larger than 512 texels in any dimension, regardless of the amount of available video memory. That should be true under both OpenGL and Direct3D.

GeForce 8 series GPUs do not support 3D textures larger than 2048 texels in any dimension. In theory, it should be possible to define a 2048x2048x2048 texture on a GeForce 8 series GPU, but even at one byte per texel, the 8GB of memory required to store it is a bit much.

Crud, I forgot to comment on the crash issue you described…

From a quick scan of the relevant driver code, it appears that we are enforcing the wrong size limit for TEXTURE_3D targets. So we accept the too-large texture, and it might even work if you forced software rasterization. But the hardware won’t like it, and the crash you’re seeing is likely a result of that. TexImage3D should be generating an INVALID_VALUE error in the case you are describing on the GeForce 7 GPUs.
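
For completeness, the application-side way to ask whether a given 3D texture size is supported is a GL_PROXY_TEXTURE_3D dry run (a sketch below), though with the bug described above I’d expect the proxy path on the current driver to be just as permissive:

```c
#include <GL/gl.h>

/* Dry-run allocation against the proxy target; a conforming driver zeroes the
   proxy's dimensions when it cannot support the requested size. */
int is_3d_texture_size_supported(GLint internalFormat, GLsizei w, GLsizei h, GLsizei d)
{
    GLint probedWidth = 0;
    glTexImage3D(GL_PROXY_TEXTURE_3D, 0, internalFormat, w, h, d, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &probedWidth);
    return probedWidth != 0;
}
```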

Hi Paul, and thanks for answering.

There are no GL errors generated; I do a glGetError between each GL call to ensure this.
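
Concretely, something along these lines around every call (illustrative macro only):

```c
#include <GL/gl.h>
#include <stdio.h>

/* Wrap a GL call and report any error it raised, with the call text and location. */
#define GL_CHECK(call)                                               \
    do {                                                             \
        call;                                                        \
        GLenum err_ = glGetError();                                  \
        if (err_ != GL_NO_ERROR)                                     \
            fprintf(stderr, "%s raised GL error 0x%x (%s:%d)\n",     \
                    #call, err_, __FILE__, __LINE__);                \
    } while (0)

/* Usage:
   GL_CHECK(glBindTexture(GL_TEXTURE_3D, tex));
   GL_CHECK(glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 1024, 1024, 4, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, texels));
*/
```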

I verified in Direct3D, and indeed it’s also limited to 512^3, so it’s not a driver limitation.

Is there a reason why the hardware is limited to a maximum of 512 on GF7 and below (even when you’re not exceeding video memory)?

The crash happens after a few seconds (3 or 4) of delay.

I verified that it doesn’t come from my code by downloading a sample from the web. You can check it out yourself:

http://gpwiki.org/index.php/OpenGL_3D_Textures

In that code, change the 3D texture dimension from 4 to 1024, and you’ll see how the program dies.

Y.

This dimension limitation on the GeForce 7 GPUs is baked into the hardware. Texture address calculations are fairly complicated and the more address lines you have, the more chip area and design work you need. 512 was simply the number we came up with for those GPUs.

As far as the bug is concerned, I expect there to be no errors with our current driver, even though there really should be. The crash is very likely to be triggered by the driver mis-programming the hardware to use an unsupported texture size. The hardware blows up – figuratively, not literally – and the driver will handle this by crashing the application.

I don’t need a repro case for this – the fix in the driver is trivial.

Thanks,
Pat

You got any idea how GL3’s going, Pat?

Nice try :)