I have a VBO/FBO-based heightfield plugin for Quartz Composer, and for various reasons I’m attempting to create an image that bends this heightfield into a perfect spherical shape. As far as I know, the heightfield works by mapping the RGB channels of the input image directly to the XYZ coordinates of the mesh vertices. I assume this means I can effectively create any shape I want within a unit cube, provided I know what gradients to put in the three channels of the input heightmap image. Now, I’m sure this is very easy, but I just can’t seem to get it to work as it should.
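For reference, here’s a minimal NumPy sketch of that idea: the RGB channels of the map encode the XYZ of a unit sphere, remapped into [0, 1]. The function name, grid size, and channel/axis assignment are my assumptions, since I don’t know the plugin’s exact conventions:

```python
import numpy as np

def sphere_displacement_map(size=256):
    """Build an RGB displacement map whose channels encode the XYZ
    coordinates of a unit sphere, remapped from [-1, 1] into [0, 1]."""
    # u runs 0..pi (pole to pole), v runs 0..2*pi (a full turn of azimuth)
    u = np.linspace(0.0, np.pi, size).reshape(-1, 1)
    v = np.linspace(0.0, 2.0 * np.pi, size).reshape(1, -1)

    x = np.sin(u) * np.cos(v)
    y = np.sin(u) * np.sin(v)
    z = np.cos(u) * np.ones_like(v)

    # sin/cos outputs are in [-1, 1]; shift/scale into [0, 1] so the
    # image channels can hold them without clipping
    rgb = np.stack([x, y, z], axis=-1) * 0.5 + 0.5
    return rgb

m = sphere_displacement_map(64)
```

Decoding each pixel back with `2 * rgb - 1` should give a point on the unit sphere.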
Unfortunately, what I seem to get when I feed the output of this into my heightfield is exactly one quarter of a sphere, and no amount of fiddling seems to fix it. The screenshot below shows the heightfield mesh on top of the image that created it.
Incidentally, I’m aware I can use the same sine function with a phase offset to create the cosine wave, but I thought I’d try to keep things as simple as possible initially, then add more features once I get it working.
I’m probably missing something obvious here, but this one is really annoying me…!
Thanks for getting back to me.
You’re right: it’s exactly 1/8 of a full sphere, not 1/4.
You’re definitely right about the clamping too. You can actually see it in the displacement map image: there are black areas with clearly defined edges where the waveform has been cut off at 0.0. Of course, this is because the sin and cos functions go from -1 to 1. I should have remembered this before!!
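The fix for that clipping is just a scale and offset into [0, 1] before the values hit the unsigned buffer. A quick NumPy illustration with hypothetical waveform values (not the actual shader code):

```python
import numpy as np

# A hypothetical row of raw waveform values, as sin produces them
samples = np.sin(np.linspace(0.0, 2.0 * np.pi, 9))   # values in [-1, 1]

clipped  = np.clip(samples, 0.0, 1.0)   # what an unsigned 0..1 buffer keeps
remapped = 0.5 * samples + 0.5          # scale and offset into [0, 1] instead
```

The clipped version flattens the whole negative half of the wave to 0.0, which is exactly the hard-edged black region in the image; the remapped version keeps the full shape.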
The weird thing is, I’ve corrected for this in the functions (new code below), but oddly, it STILL doesn’t work!
Obviously the fault lies within the vertex shader.
If you are using negative mapping coordinates, make sure not to use GL_CLAMP with glTexParameter.
Actually… this won’t help. I guess texture coordinates will always be clamped before the vertex shader reads them.
Unfortunately, I don’t have direct access to these OpenGL parameters within the development application I’m using. However, I’m pretty sure the texture coordinates AREN’T being clamped in this case.
If I just map, say, the texture x-coordinate directly to channel level, I get a nice smooth linear gradient from 0.0 to 1.0 across the texture, as you’d expect, so I don’t think it’s a texture-coordinate issue.
I’ve tried altering the code so that I can visualize each of the RGB channels on its own, by applying one of the channels of the vec3 ‘spherical’ to the B channel of the output gl_FragColor vec4.
I’ve also tried to visualize the sineWave and cosineWave functions on their own, and I get pretty much what I’d expect. It’s just when I combine the functions that I get weird results…
Ehm, I believe u should range from 0 to PI and v should range from 0 to TWOPI instead of the other way around.
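To sanity-check those ranges, here’s a NumPy sketch of the standard parametrization (grid sizes arbitrary): with u in [0, PI] and v in [0, TWOPI], z runs pole to pole while v wraps all the way around, so the seam closes on itself:

```python
import numpy as np

PI, TWOPI = np.pi, 2.0 * np.pi

# u is the polar angle (0..PI, pole to pole); v is the azimuth (0..TWOPI)
u = np.linspace(0.0, PI, 33).reshape(-1, 1)
v = np.linspace(0.0, TWOPI, 65).reshape(1, -1)

x = np.sin(u) * np.cos(v)
y = np.sin(u) * np.sin(v)
z = np.cos(u) + 0.0 * v          # broadcast z across all azimuths

# Every vertex sits on the unit sphere...
r = np.sqrt(x * x + y * y + z * z)
# ...and the v = 0 and v = TWOPI columns coincide, closing the seam
seam_gap = np.max(np.abs(x[:, 0] - x[:, -1]) + np.abs(y[:, 0] - y[:, -1]))
```

Swap the ranges and the grid no longer lines up this way, which is where the fractional-sphere artifacts come from.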
PS. You may want to move the *TWOPI and *PI multiplications out of the fragment shader and into your glTexCoord calls, to avoid performing these two multiplications for each fragment.
Thanks for getting back to me!
Good call. I tweaked the code as you suggested, and it’s definitely improved things.
Now I get:
Still not quite what I’m after though.
Incidentally, I’m aware that I’ll only get a 256-step resolution for each channel this way, which will potentially equate to a less smooth shape. This shouldn’t cause this issue though, should it?
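For what it’s worth, the 8-bit quantization error is easy to bound with a quick NumPy check (assuming the usual round-to-nearest-of-255-levels storage), and it’s far too small to explain a missing chunk of sphere:

```python
import numpy as np

values = np.linspace(0.0, 1.0, 1000)          # ideal channel levels in [0, 1]
quantized = np.round(values * 255.0) / 255.0  # what an 8-bit channel stores

step = 1.0 / 255.0
max_err = np.max(np.abs(values - quantized))  # at most half a step
```

So the worst-case coordinate error per channel is about 0.002 of the unit cube: slight faceting at most, not truncation.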
I was writing my post based on the code in your original post. Now I see you’re putting the offset and scale (to get into the range [0,1]) inside your sine and cosine functions, which will mess up the spherical coordinates. I’m guessing you meant to compute the coordinates without the offset, and only offset and scale the results when writing the vertex data to an 8-bit-per-channel buffer, right?
It will? Hmmm… OK.
I was attempting to scale the levels of the output of the sineWave and cosineWave functions so they wouldn’t get clipped. Should I do the scaling somewhere else, perhaps?
You’re computing these values in a float register within the fragment shader, so you don’t need to worry about clipping there. It’s only an issue if you want to write signed data out to an unsigned framebuffer or texture. The offset will mess up the result because (0.5A + 0.5)(0.5B + 0.5) ≠ 0.5C + 0.5, where A*B = C.
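That inequality is easy to see numerically: offsetting each factor before multiplying leaves a cross-term behind, so the remap has to happen once, on the final product. A small NumPy demo with random signed values (hypothetical, just to show the algebra):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(-1.0, 1.0, 100)   # signed coordinate factors in [-1, 1]
B = rng.uniform(-1.0, 1.0, 100)

# Wrong: remap each factor into [0, 1] before multiplying
wrong = (0.5 * A + 0.5) * (0.5 * B + 0.5)
# Right: multiply the signed values, then remap the product once
right = 0.5 * (A * B) + 0.5
```

Expanding the wrong version shows the error is exactly -0.25(1 - A)(1 - B), which only vanishes when A or B equals 1, hence the distorted shape.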