I need some suggestions about mapping a polygon to a unit sphere. Kindly help me.
I would look at spherical texture mapping, assuming the polygon is a 2D object.
You've asked several questions about "mapping" this to that, but you don't really explain what you mean by that. Are you talking about producing a set of texture coordinates? Are you talking about somehow turning a polygon into a sphere? Are you talking about generating a polygonal representation of a sphere?

Quote:
mapping a polygon to a unit sphere.
Describe the result you're trying to achieve.
I think I need to clarify my problem. I have one cross-section, and I would like to give it the shape of another cross-section that is totally different in shape. I thought about unitizing both cross-sections by enclosing them within a unit bounding box or sphere, then mapping the vertices of one cross-section to the other. I am not sure whether my approach is OK; I need some suggestions.
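The unitizing step you describe could be sketched like this in Python with NumPy (the helper name `normalize_to_unit_box` is my own, not from any library; this assumes 2D point sets):

```python
import numpy as np

def normalize_to_unit_box(points):
    """Scale and translate a 2D point set so its bounding box fits a
    unit box centered at the origin (hypothetical helper)."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    center = (lo + hi) / 2.0
    scale = (hi - lo).max()   # uniform scale preserves aspect ratio
    return (pts - center) / scale
```

Using a single uniform scale (the larger bounding-box extent) keeps the shape's aspect ratio; scaling each axis independently would distort it.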
Are you talking about morphing one cross-section into another? If so, you will need the same number of vertices on each cross-section.
One way I can think of is as follows:
Let's call the cross-section with the most vertices XA and the other XB.
Express the vertices of XA as a percentage along the path of the cross-section;
find the corresponding percentage along XB and insert a vertex there.
Now you can morph XA into XB or vice versa using a linear function applied to each vertex.
There is a caveat - this works well for 2D but you can get some strange in-betweens in 3D depending on the relative start and end positions of the cross-sections. You can notice this when you do skinning/lofting in a 3D modelling package.
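The steps above could be sketched like this for closed 2D polygons (all function names here are my own, not from any library; `resample_like` inserts vertices into XB at XA's arc-length fractions, after which a per-vertex linear blend does the morph):

```python
import numpy as np

def arclength_params(poly):
    """Normalized arc-length fraction in [0, 1) of each vertex of a closed polygon."""
    poly = np.asarray(poly, dtype=float)
    seg = np.linalg.norm(np.roll(poly, -1, axis=0) - poly, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)[:-1]])
    return cum / seg.sum()

def point_at(poly, t):
    """Point at normalized arc-length fraction t along a closed polygon."""
    poly = np.asarray(poly, dtype=float)
    nxt = np.roll(poly, -1, axis=0)
    seg = np.linalg.norm(nxt - poly, axis=1)
    d = (t % 1.0) * seg.sum()             # target distance along the path
    cum = np.cumsum(seg)
    i = np.searchsorted(cum, d, side='right')
    prev = cum[i - 1] if i > 0 else 0.0
    u = (d - prev) / seg[i]               # fraction along segment i
    return poly[i] + u * (nxt[i] - poly[i])

def resample_like(XA, XB):
    """Insert vertices into XB at the same arc-length fractions as XA's vertices."""
    return np.array([point_at(XB, t) for t in arclength_params(XA)])

def morph(A, B, alpha):
    """Linear per-vertex blend; A and B must have matching vertex counts."""
    return (1.0 - alpha) * np.asarray(A, float) + alpha * np.asarray(B, float)
```

This assumes both cross-sections are closed and share a starting vertex and winding direction; choosing a bad start correspondence is one source of the strange in-betweens mentioned above.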
Actually, it is something like morphing, but I have one cross-section with a known set of vertices, and the target is a boundary extracted from an image. I would like to morph my cross-section into the shape of the image boundary. I am not sure whether I am on the right track. As the two are of different sizes, I wanted to unitize them to a unit bounding box, both centered at the center of the box. Then I would pass a line from the center through each vertex of the cross-section out to the bounding box. This line intersects the extracted image boundary somewhere, and the corresponding cross-section vertex is pushed or pulled (as necessary) to that intersection point. Does it sound OK? Please give me some more suggestions.
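The radial push/pull idea described above could be sketched as follows (helper names are my own; this assumes both shapes are already normalized, and that the target boundary is star-shaped about the shared center, otherwise a ray can hit it more than once and the nearest hit is taken):

```python
import numpy as np

def ray_segment_hit(origin, direction, p, q):
    """Parameter t >= 0 where the ray origin + t*direction crosses
    segment pq, or None if they do not intersect."""
    d, e, diff = direction, q - p, p - origin
    cross = lambda a, b: a[0] * b[1] - a[1] * b[0]
    denom = cross(d, e)
    if abs(denom) < 1e-12:            # ray parallel to segment
        return None
    t = cross(diff, e) / denom        # distance along the ray
    s = cross(diff, d) / denom        # position along the segment
    return t if t >= 0.0 and 0.0 <= s <= 1.0 else None

def map_to_boundary(vertices, boundary):
    """Push/pull each vertex along the ray from the centroid to where
    that ray first hits the closed target boundary polyline."""
    V = np.asarray(vertices, dtype=float)
    B = np.asarray(boundary, dtype=float)
    c = V.mean(axis=0)                # shared center (centroid here)
    out = []
    for v in V:
        d = v - c
        hits = [ray_segment_hit(c, d, B[i], B[(i + 1) % len(B)])
                for i in range(len(B))]
        hits = [t for t in hits if t is not None]
        out.append(c + min(hits) * d if hits else v)
    return np.array(out)
```

One caveat with this scheme: if the image boundary is not star-shaped with respect to the center (e.g. it has deep concavities), some rays cross it several times and the radial correspondence becomes ambiguous, so the arc-length matching suggested earlier may behave better in those cases.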