Heightmap for Spherical Heightfield

This is an obscure one:

I have a VBO/FBO-based heightfield plugin for Quartz Composer, and for various reasons I’m attempting to create an image that bends this heightfield into a perfect sphere. As far as I know, the heightfield works by mapping the RGB channels of the input image directly to the XYZ coordinates of the mesh vertices. I assume this means I can effectively create any shape I want within a unit cube, provided I know what gradients to create in the three channels of the input heightmap image. Now, I’m sure this is very easy, but I just can’t seem to get it to work as it should.
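
If that assumption is right, the plugin’s behaviour can be sketched as follows (a Python illustration of the assumption only; `pixel_to_vertex` is a hypothetical name, not part of the plugin):

```python
# Assumed behaviour of the heightfield plugin: each pixel's RGB values
# (each in [0, 1]) become the XYZ position of one mesh vertex, so any
# shape that fits inside the unit cube should be reachable.
def pixel_to_vertex(rgb):
    r, g, b = rgb
    return (r, g, b)  # direct channel-to-axis mapping

# A 2x2 "image" becomes four vertices of the mesh:
image = [
    [(0.0, 0.0, 0.5), (1.0, 0.0, 0.5)],
    [(0.0, 1.0, 0.5), (1.0, 1.0, 0.5)],
]
mesh = [pixel_to_vertex(px) for row in image for px in row]
```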

This is my GLSL fragment shader code:

// Constants
#define PI 3.14159265
#define TWOPI 6.28318531

// Sine function
float sineWave(float phase) {
	// Create sine wave
	float wave = sin(phase);
	// Output
	return wave;
}

// Cosine function
float cosineWave(float phase) {
	// Create cosine wave
	float wave = cos(phase);
	// Output
	return wave;
}

void main()
{
	vec2 xy = gl_TexCoord[0].xy;
	
	// Parametric UV coordinates
	float u = xy.x * TWOPI;
	float v = xy.y * PI;
	
	// Spherical map values
	vec3 spherical;
	spherical.r = cosineWave(v) * sineWave(u);
	spherical.g = sineWave(v) * sineWave(u);
	spherical.b = cosineWave(u);
	
	gl_FragColor = vec4(spherical.r,spherical.g,spherical.b,1.0);
}

Unfortunately, what I get when I feed the output of this into my heightfield is exactly one quarter of a sphere, and no amount of fiddling seems to fix it. The screenshot below shows the heightfield mesh on top of the image that created it.

Incidentally, I’m aware I can use the same sine function with a phase offset to create the cosine wave, but I thought I’d try and keep things as simple as possible initially, then add more features after I got it working.

I’m probably missing something obvious here, but this one is really annoying me…!

Cheers guys!

alx
http://machinesdontcare.wordpress.com

I bet it’s the +++ octant (1/8th not 1/4 sphere)

I’d say gl_FragColor is getting clamped.

Hi dobie,

thanks for getting back to me.
You’re right: it is exactly 1/8 of a full sphere, not 1/4.
You’re definitely right about the clamping too. You can actually see it in the displacement-map image: there are black areas with clearly defined edges where the waveform has been cut off at 0.0. Of course, this is because the sin and cos functions go from -1 to 1. I should have remembered this before!!
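
That clamping explanation is easy to check numerically. A small Python sketch (illustrative only, not part of the plugin) shows that clamping each coordinate to [0, 1] leaves exactly one of the eight octants untouched:

```python
import math

def clamp01(x):
    """Framebuffer-style clamp of a channel value to [0, 1]."""
    return min(max(x, 0.0), 1.0)

# Sample points on the unit sphere and count how many survive clamping unchanged.
kept = total = 0
n = 64
for i in range(n):
    for j in range(n):
        u = (i + 0.5) / n * 2.0 * math.pi   # azimuth, 0..2*pi
        v = (j + 0.5) / n * math.pi         # polar angle, 0..pi
        p = (math.sin(v) * math.cos(u),
             math.sin(v) * math.sin(u),
             math.cos(v))
        q = tuple(clamp01(c) for c in p)
        total += 1
        if p == q:
            kept += 1

# Only points with x, y and z all non-negative pass through unchanged:
# exactly one octant in eight.
```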

The weird thing is, I’ve corrected for this in the functions (new code below), but oddly, it STILL doesn’t work!

This is what I now get:

Here is the new code:

// Constants
#define PI 3.14159265
#define TWOPI 6.28318531

// Sine function
float sineWave(float phase) {
	// Create sine wave
	float wave = sin(phase);
	// Scale and offset to 0.0 > 1.0
	wave = wave * 0.5 + 0.5;
	// Output
	return wave;
}

// Cosine function
float cosineWave(float phase) {
	// Create cosine wave
	float wave = cos(phase);
	// Scale and offset to 0.0 > 1.0
	wave = wave * 0.5 + 0.5;
	// Output
	return wave;
}

void main()
{
	vec2 xy = gl_TexCoord[0].xy;
	
	// Parametric UV coordinates
	float u = xy.x * TWOPI;
	float v = xy.y * PI;
	
	// Spherical map values
	vec3 spherical;
	spherical.r = cosineWave(v) * sineWave(u);
	spherical.g = sineWave(v) * sineWave(u);
	spherical.b = cosineWave(u);
	
	gl_FragColor = vec4(spherical.r,spherical.g,spherical.b,1.0);
}

Any clues?

alx
http://machinesdontcare.wordpress.com

Obviously the fault lies within the vertex shader.
If you are using negative mapping coordinates, make sure not to use GL_CLAMP with glTexParameter.
Actually… this won’t help. I guess texture coordinates will always be clamped before the vertex shader reads them.

Hi def,

unfortunately, I don’t have direct access to these OpenGL parameters within the development application I’m using. However, I’m pretty sure the texture coordinates AREN’T being clamped in this case.

If I just map, say, the texture x-coordinate directly to the channel level, I get a nice smooth linear gradient from 0.0 to 1.0 across the texture, as you’d expect, so I don’t think it’s a texture-coordinate issue.

I’ve tried altering the code so that I can visualize each of the RGB channels on its own, by applying one of the channels of the vec3 ‘spherical’ to the B channel of the output gl_FragColor vec4.

	// Parametric UV coordinates
	float u = xy.x * TWOPI;
	float v = xy.y * PI;
	// Spherical map values
	vec3 spherical;
	spherical.r = cosineWave(v) * sineWave(u);
	spherical.g = sineWave(v) * sineWave(u);
	spherical.b = cosineWave(u);	
	
	// Output displacement map
	gl_FragColor = vec4(xy.x, xy.y, spherical.b, 1.0);

This gives me a displaced plane.

This is the Red channel:

The Green channel:

And finally the Blue channel:

I’ve also tried to visualize the sineWave and cosineWave functions on their own, and I get pretty-much what I’d expect. It’s just when I combine the functions together, I get weird results…

Any clues?

alx

Ehm, I believe u should range from 0 to PI and v should range from 0 to TWOPI instead of the other way around.

PS. You may want to move the *TWOPI and *PI multiplications out of the fragment shader and into your glTexCoord calls, to avoid performing these two multiplications for each fragment.

Hi -NiCo-

thanks for getting back to me!
Good call. I tweaked the code as you suggested, and it’s definitely improved things.

Now I get:

Still not quite what I’m after though.

Incidentally, I’m aware that I’ll only get 256 steps of resolution per channel this way, which may make the shape less smooth. That shouldn’t be causing this issue, though, should it?
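
As a rough check (Python, with a hypothetical `quantise8` helper), the worst-case error from 8-bit quantisation is only half a step per channel, far too small to account for distortion on this scale:

```python
# 8-bit quantisation: each channel value snaps to the nearest of 256 levels.
def quantise8(x):
    return round(x * 255) / 255

# Worst-case rounding error over a dense sample of [0, 1]:
err = max(abs(quantise8(i / 1000.0) - i / 1000.0) for i in range(1001))
# err is at most half a step, 0.5 / 255, i.e. about 0.002 of the unit cube
```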

Cheers,

alx

I was writing my post based on the code in your original post. Now I see you’re putting the offsets and scale to get in the range [0,1] into your sine and cosine functions which will mess up the spherical coordinates. I’m guessing you meant to compute the coordinates without the offset and then only offset and scale the results when writing the vertex data to an 8 bit per channel buffer, right?

At least now I know how to render Pringles chips :wink:

Now I see you’re putting the offsets and scale to get in the range [0,1] into your sine and cosine functions which will mess up the spherical coordinates. I’m guessing you meant to compute the coordinates without the offset and then only offset and scale the results when writing the vertex data to an 8 bit per channel buffer, right?

It will? Hmmm… OK.
I was attempting to scale the levels of the output of the sinWave and cosineWave functions so they wouldn’t get clipped. Should I do the scaling somewhere else, perhaps?

alx

You’re computing these values in a float register within the fragment shader, so you don’t need to worry about clipping there. It’s only an issue if you want to write signed data out to an unsigned framebuffer or texture. The offset messes up the result because (0.5A + 0.5)(0.5B + 0.5) ≠ 0.5C + 0.5, where A·B = C.
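
The mismatch is easy to verify numerically (a quick Python check; `remap` is just an illustrative name for the 0.5x + 0.5 scaling):

```python
# Offsetting before multiplying is not the same as multiplying then offsetting.
def remap(x):
    """The [-1, 1] -> [0, 1] remap used in the shader."""
    return 0.5 * x + 0.5

a, b = 0.6, -0.8
wrong = remap(a) * remap(b)   # remap applied inside each wave function
right = remap(a * b)          # remap applied once, after the product
# wrong = 0.8 * 0.1 = 0.08, while right = 0.5 * (-0.48) + 0.5 = 0.26
```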

I believe this is what you’re looking for:


// Constants
#define PI 3.14159265
#define TWOPI 6.28318531

void main()
{
	// Parametric UV coordinates
	float u = gl_TexCoord[0].x * TWOPI;
	float v = gl_TexCoord[0].y * PI;
	
	// Spherical map values
	vec3 spherical;
	spherical.r = cos(u) * sin(v);
	spherical.g = sin(u) * sin(v);
	spherical.b = cos(v);
	
	gl_FragColor = 0.5 * vec4(spherical, 1.0) + 0.5;
}
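
A quick sanity check outside the shader (a Python sketch mimicking the corrected code above) confirms that a pixel decoded back from [0, 1] to [-1, 1] lands exactly on the unit sphere:

```python
import math

def sphere_pixel(s, t):
    """Mimic the fragment shader: s, t in [0, 1] are the texture coords."""
    u = s * 2.0 * math.pi
    v = t * math.pi
    x = math.cos(u) * math.sin(v)
    y = math.sin(u) * math.sin(v)
    z = math.cos(v)
    # remap [-1, 1] -> [0, 1] once, at output time
    return tuple(0.5 * c + 0.5 for c in (x, y, z))

# Decode a pixel back to [-1, 1]; it should lie on the unit sphere.
r, g, b = sphere_pixel(0.3, 0.7)
p = tuple(2.0 * c - 1.0 for c in (r, g, b))
radius = math.sqrt(sum(c * c for c in p))
```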

Aha!!!
Got it. I just removed the scaling from the 2 wave functions and added it after the parametric sphere formula. Now it works like a charm!!

spherical = spherical * 0.5 + 0.5;

You’re a star, as ever, -NiCo-!!!

Thanks once again,

alx

Glad to see you got it working now :slight_smile:

Thanks for the very clear explanation there -NiCo-.
I missed your last post earlier. It makes perfect sense now…

Thanks again,

alx
