Hi,
I’m working on a volume raycast application and I’m having a problem:
I’m creating a 3D texture of size 512x512x96x1 and it works fine, but if I increase it I see only a black box instead of the volume model… I’ve been looking for a solution, but all I’ve found was to check the size with a proxy 3D texture - and that looks OK. (http://www.opengl.org/resources/faq/technical/texture.htm#text0120)
I know that my video card (NVIDIA Quadro FX 1600M) should support much more than the 24 MB mentioned above - I’m able to create a procedural texture in RenderMonkey, and 512x512x256x4 (256 MB) works just fine. What’s more, I’ve already worked with other applications (written in C, C# and Java) that use a 512x512x125x2 (~62 MB) 3D texture.
Is there a limit on 3D texture size? If so, why am I seeing two different limits?
I’m on Windows 7, programming in C# with OpenTK in VS2010.
Please, if you have any suggestions let me know.
Thanks in advance!
Hi,
I think the problem is in your pixel-store settings: you should use
PixelPackAlignment = 1 and PixelUnpackAlignment = 1. The black box
is like a “sign” - it means there is no content or some settings
are wrong. This matters especially when working with 8-bit raw files.
Furthermore, check for OpenGL errors with glGetError().
Note that float datasets in most cases store 12-bit values, not
16-bit values.
By the way, uchar files aren’t restricted to 24 MB - I can load much larger (uchar) files into my raycaster.
And another hint for your transfer functions: better use
CLAMP_TO_EDGE instead of CLAMP_TO_BORDER. For me, CLAMP_TO_BORDER causes artifacts with raycasting while CLAMP_TO_EDGE doesn’t.
I don’t know whether it makes sense or not, but try to upload your
texture slice by slice with glTexSubImage3D(…).
Did you create your own (synthetic) 3D volumes or did you use
a raw file downloaded from the internet? There is an 8-bit raw file
in the example mentioned above (SimpleSlicer) - it’s the well-known engine block, 256x256x256, 8-bit. Maybe check that out
first.
Thank you very much. I’ll definitely use your advice!
To be clear: I want to use the alpha channel in my raycasting algorithms as another parameter, just for tests. Of course I’ll replace it with a 1D transfer function later.
My 3D textures were at first just a synthetically generated sphere, but I want to apply some medical data I have and work on that.
I’ll check out the glTexSubImage3D function - it may be useful.
Hmm,
if you have an additional alpha channel then you probably can’t
use the luminance or intensity formats. You might use GL_RGBA
instead. Don’t hesitate to ask if you have any further questions.
Btw., you can also check out www.voreen.org. It’s an open-source
volume rendering framework; there you might have a look into the
shaders, just for a start. When using raycasting for volume rendering, you need to reconstruct your entry points when the near plane intersects your bounding box (the colored cube used for entry/exit point generation). Otherwise you can’t move
your camera inside the volume.
With the 512x512x125x2 dataset I used LuminanceAlpha as both the internal and external voxel interpretation. RGBA would be more expensive… but anyway, I’ll probably end up with RGBA.
Thanks for the interesting approach to rendering from inside the volume. I’ll read up on it and then try to implement it myself.
Thanks again - this was very helpful information!
Hi, good to know it’s working now.
Of course you can use the default pixel alignment (4),
but then each row of your volume texture has to be
a multiple of 4 bytes - which power-of-two dimensions
always satisfy. But luckily, modern hardware supports NPOT textures.