Texture Blending

How can I blend two texture maps, one on top of the other, so that both are visible as a blend, in either 2D or 3D texture mapping?

Thanks!

Simple multitexture, or shaders. (Not advanced at all.)

Can you give me a sample of such multitexture blending? I’d like to stay away from shaders.

Thanks!

I’m moving this thread to the beginners forum; you’ll get a more enthusiastic response there.

Something like:


  glActiveTextureARB( GL_TEXTURE0_ARB );
  glEnable          ( GL_TEXTURE_2D   );
  glTexEnvi         ( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
  glBindTexture     ( GL_TEXTURE_2D, texture0 );

  glActiveTextureARB( GL_TEXTURE1_ARB );
  glEnable          ( GL_TEXTURE_2D   );
  glTexEnvi         ( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
  glBindTexture     ( GL_TEXTURE_2D, texture1 );

This sets up the texturing math:

color = litcolor * texture0 * texture1
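If multiplying the two maps darkens things too much and you want an even mix instead, the combine texture environment (ARB_texture_env_combine, core since OpenGL 1.3) can interpolate between them, still without shaders. A hedged sketch, replacing the MODULATE setting on unit 1; the 50/50 weight is my own choice for illustration:

```c
/* Sketch: even mix of texture0 and texture1 using GL_COMBINE /
 * GL_INTERPOLATE on unit 1 (unit 0 stays as set up above).
 * result = texture1 * w + previous * (1 - w), w = constant alpha. */
glActiveTextureARB( GL_TEXTURE1_ARB );
glTexEnvi ( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE     );
glTexEnvi ( GL_TEXTURE_ENV, GL_COMBINE_RGB,      GL_INTERPOLATE );
glTexEnvi ( GL_TEXTURE_ENV, GL_SOURCE0_RGB,  GL_TEXTURE  );  /* this unit's texture */
glTexEnvi ( GL_TEXTURE_ENV, GL_SOURCE1_RGB,  GL_PREVIOUS );  /* result of unit 0    */
glTexEnvi ( GL_TEXTURE_ENV, GL_SOURCE2_RGB,  GL_CONSTANT );  /* weight source       */
glTexEnvi ( GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR );
glTexEnvi ( GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR );
glTexEnvi ( GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA );
GLfloat weight[4] = { 0.5f, 0.5f, 0.5f, 0.5f };   /* w = 0.5 */
glTexEnvfv( GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, weight );
```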

Great thanks!

So far so good. Now I would like to place the two textures parallel in 3D space, so they each go into their own z plane. When I look at this in 3D space, viewing parallel to the texture planes, I get a gap between them; that is, I only see two lines. How can I interpolate the space between those two textures using the lowest possible version of OpenGL and extensions?

Sounds like you need to describe in more detail what you’re trying to accomplish.

Multitexturing will solve your problem if your goal is to render both textures blended onto a single 2D surface, but sounds like you don’t want to do this, but want to render a volume where these textures are the “caps” for the volume.

Question is, what is “inside” this volume? Is it a hollow box (rectangular parallelepiped) with just these two textures on the top and bottom, and sides which interpolate between the edge texels of the top and bottom textures?

If so, render 6 polygons at the appropriate positions in space. The top and bottom have your usual textures. And for the sides, you can cook some 2xN textures to do the linear interpolation between edge texels.

Thanks Photon!

What I would like to do is create a radiographic projection of the cube, so essentially that would be blending as you described it, and it works great when viewed perpendicular to the texture planes. But as mentioned above, the problem arises when looking at the cube sideways, because there is nothing in the gap between the two textures; blending or not, this intergap volume is black.

Yes, I could do what you suggested, creating bunches of strips which connect the two textures, but that’s an awful overload, because I’d have to connect every texel of tex1 to its neighboring texel (in the z direction) before I can render the radiograph. With such overhead I might as well do this on the CPU.

Or I could create 3 texture stacks, one in X, Y and Z each. But this will use gigantic amounts of memory, so that’s out.

I figure there’s got to be a way to do this in OpenGL, but I have not received a decisive hint as to how. That is, filling the intergap voxels with data interpolated from their neighboring texels.

Thanks!

Well, again, you need to decide what you want to see on the sides. You’ve just said what you “don’t” want to see. :wink:

Do you just want to see the edge texels on the side (opaquely)? Do you want to see through to the rest of the volume on the side? Are you just doing a blend of the slices, or something more radiologic like MIP (maximum intensity projection), marching cubes/tetrahedra, etc. (I used to implement vis software for CT scanners in a previous life).

Also are you really only doing 3D vis with just 2 slices, or a full stack of images?

Depending on your needs you may want to use a 3D texture, and use multiple slices parallel to the display blended together in multipass to get the effect you want.

Google for GPU volume rendering to get lots of tips on this kind of thing.

Yes, I could do what you suggested, creating bunches of strips which connect the two textures, but that’s an awful overload, because I’d have to connect every texel of tex1 to its neighboring texel (in the z direction) before I can render the radiograph. With such overhead I might as well do this on the CPU.

No, you misunderstand. I mean draw a single quad for the side that connects the two slices, textured with a 2XN texture. Simple and cheap. If you’re rendering M equally-spaced slices instead of just 2, it’s still just a single quad for each side (4 sides total), but you’re texturing each with an MxN texture instead. Of course it depends on what you want to see for side views. If opaque sides, then this is what you want (or use a 3D texture). If you want to “see through” the volume on the sides, you probably want the 3D texture approach.
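To make the “cook some 2xN textures” step concrete, here is one hypothetical way (the helper name and data layout are my own, not from the thread) to assemble the MxN side texture on the CPU. Each row of the side texture is simply the edge row of texels from one slice; GL_LINEAR filtering then interpolates between the slice rows on the quad, and with M = 2 this is exactly the 2xN case:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch (hypothetical helper): build the M x N RGB texture for one
 * side of the volume.  Row i of the result is the chosen edge row of
 * slice i; GL_LINEAR filtering interpolates between the slice rows.
 *
 * slices[i] points at slice i's RGB data of size w x h.
 * edge_row selects which row forms this side (0 or h-1). */
unsigned char *build_side_texture( unsigned char **slices, int m,
                                   int w, int h, int edge_row )
{
    (void)h;  /* h only bounds edge_row; kept for clarity */
    unsigned char *side = malloc( (size_t)m * w * 3 );
    if ( !side ) return NULL;
    for ( int i = 0; i < m; ++i )
        memcpy( side + (size_t)i * w * 3,              /* row i of side tex   */
                slices[i] + (size_t)edge_row * w * 3,  /* edge row of slice i */
                (size_t)w * 3 );
    return side;   /* upload with glTexImage2D(..., width = w, height = m, ...) */
}
```

The result is w texels wide and m texels tall, so the quad for that side gets texture coordinates running 0..1 along both the slice edge and the stacking direction.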

I figure there’s got to be a way to do this in OpenGL, but I have not received a decisive hint as to how. That is, filling the intergap voxels with data interpolated from their neighboring texels.

It all depends on what you want to see. OpenGL renders 2D surfaces. If you want to do volume vis, as it sounds like you do, you have to simulate that with 2D surfaces, or reprogram your GPU using shaders to implement some sort of ray-marching algorithm.

Best I can infer, you probably want a 3D texture rendered using multiple 2D slices parallel to the view plane, textured using projective texturing.

Thanks Dark Photon,

and yes, I’d like to see radiographic projections of the volume from any angle, so a surface texture of a side view of the original stack won’t be enough. That would only be ok if using cut planes. As mentioned I need to stay away from shaders, which excludes all the material I chewed through on conventional GPU-based volume rendering.

So how would I use a single z-directional stack of 2D textures (or one 3D texture, though 2D seems to have much better performance) to create a radiographic view from any view angle without having to fall back on three 2D texture stacks (one each for X, Y, and Z)? That approach would create far too much redundant memory overhead.

And yes, it would be a lot more than just two texture planes, and it would be radiographic and MIP projections. I only tried to keep it simple. Essentially I am playing with a volume renderer that does not use shaders. The next step would be a 3D surface tessellator, but that is a separate topic for later.

Thanks!

No, just the recent stuff that uses shaders.

So how would I use a single z-directional stack of 2D textures (or one 3D texture, though 2D seems to have much better performance) to create a radiographic view from any view angle without having to fall back on three 2D texture stacks (one each for X, Y, and Z)?

This is what I suggested in my last line: “3D texture rendered using multiple 2D slices parallel to the view plane, textured using projective texturing.” None of that requires shaders.

3D texture. This solves your holes on the sides problem. No shaders here.
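Uploading the slice stack as one 3D texture is straightforward; glTexImage3D has been core since OpenGL 1.2 (before that, glTexImage3DEXT from EXT_texture3D). A sketch, assuming `voxels` is a w x h x d stack of 8-bit luminance data (names are placeholders):

```c
/* Sketch: upload a w x h x d voxel stack as a single 3D texture. */
GLuint volumeTex;
glGenTextures  ( 1, &volumeTex );
glBindTexture  ( GL_TEXTURE_3D, volumeTex );
glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE );
glTexImage3D   ( GL_TEXTURE_3D, 0, GL_LUMINANCE,
                 w, h, d, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, voxels );
```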

Rendered using multiple 2D slices parallel to the view plane. This allows you to sample the volume at various depths. No shaders here.
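A minimal sketch of that slicing pass, assuming the modelview matrix has been pushed to identity so the quads are specified directly in eye space. M (>= 2), zNear, zFar, halfW, and halfH are placeholders for the slice count and the eye-space extent of the volume:

```c
/* Sketch: draw M view-aligned quads back to front through the volume.
 * TEXGEN (set up separately) turns quad positions into 3D texcoords. */
glEnable   ( GL_TEXTURE_3D );
glEnable   ( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
for ( int i = 0; i < M; ++i )
{
    /* i = 0 is the farthest slice (remember eye-space z is negative) */
    float z = zFar + (zNear - zFar) * (float)i / (float)(M - 1);
    glBegin( GL_QUADS );
      glVertex3f( -halfW, -halfH, z );
      glVertex3f(  halfW, -halfH, z );
      glVertex3f(  halfW,  halfH, z );
      glVertex3f( -halfW,  halfH, z );
    glEnd();
}
```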

Textured using projective texturing. You want to use the world-space location of the planes to compute the texture coordinates for the 3D texture. Available via TEXGEN. No shaders here.
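For instance, with GL_EYE_LINEAR texgen each coordinate is a dot product of the eye-space vertex position with a plane you supply. A sketch, assuming the volume has been arranged to fill an eye-space box remapped to texture coordinates [0,1]^3; the plane coefficients are illustrative only and must be adapted to your volume's actual placement (also note that eye planes are transformed by the inverse of the modelview matrix current when they are specified):

```c
/* Sketch: eye-linear TEXGEN mapping eye-space position to 3D
 * texture coordinates, e.g. s = 0.5 * x_eye + 0.5, etc. */
GLfloat sPlane[4] = { 0.5f, 0.0f, 0.0f, 0.5f };
GLfloat tPlane[4] = { 0.0f, 0.5f, 0.0f, 0.5f };
GLfloat rPlane[4] = { 0.0f, 0.0f, 0.5f, 0.5f };  /* scale to your depth range */
glTexGeni ( GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR );
glTexGeni ( GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR );
glTexGeni ( GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR );
glTexGenfv( GL_S, GL_EYE_PLANE, sPlane );
glTexGenfv( GL_T, GL_EYE_PLANE, tPlane );
glTexGenfv( GL_R, GL_EYE_PLANE, rPlane );
glEnable  ( GL_TEXTURE_GEN_S );
glEnable  ( GL_TEXTURE_GEN_T );
glEnable  ( GL_TEXTURE_GEN_R );
```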

For MIP you can use a blend mode of MAX. No shaders here.
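Sketch of that blend setup (glBlendEquation with GL_MIN/GL_MAX comes from EXT_blend_minmax / the ARB imaging subset on older GL versions, so check for it before relying on it):

```c
/* Sketch: MIP compositing via the MAX blend equation. */
glEnable       ( GL_BLEND );
glBlendEquation( GL_MAX );           /* dst = max(src, dst) */
glBlendFunc    ( GL_ONE, GL_ONE );   /* factors are ignored for MIN/MAX */
/* ...draw the slices; each pixel keeps the maximum intensity seen... */
```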

So the basic concept doesn’t require shaders. However, some specific bit of processing you might want to do per ray might. Read up on projective texturing, 3D textures, blend modes, etc. and see.

There’s probably lots of old refs on this approach out there. I’m no guru on it. As an example, Mark Harris (www.markmark.net) used this approach for rendering clouds from 3D density textures (skip over all the earlier work on rendering clouds from particles). For instance, search for Flat 3D Textures and Sliced Volume Rendering in his dissertation. For the general idea of Sliced Volume Rendering, see Algorithm 4 on pg. 125. See also the refs he cites for Sliced Volume Rendering.