Thread: Mapping a polygon to a unit sphere

  1. #1 Junior Member Regular Contributor (Join Date: Jun 2010, Posts: 162)

    Mapping a polygon to a unit sphere

    I need some suggestions about mapping a polygon to a unit sphere. Kindly help me.

  2. #2 Senior Member OpenGL Pro (Join Date: Jan 2012, Location: Australia, Posts: 1,104)
    I would look at spherical texture mapping, assuming the polygon is a 2D object.
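
    If the goal is just texture coordinates, the usual spherical mapping takes each vertex direction from the sphere's centre to a (u, v) pair. A minimal sketch of that idea (my own illustration, not from the post above; it assumes the sphere is centred at the origin):

    Code:
    // Map a position on (or around) an origin-centred sphere to (u, v) in [0, 1].
    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };

    Vec2 sphericalUV(const Vec3& p)
    {
        const float PI = 3.14159265358979f;
        float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        float nx = p.x / len, ny = p.y / len, nz = p.z / len;
        Vec2 uv;
        uv.u = 0.5f + std::atan2(nz, nx) / (2.0f * PI); // longitude
        uv.v = 0.5f - std::asin(ny) / PI;               // latitude
        return uv;
    }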

  3. #3 Senior Member OpenGL Guru (Join Date: May 2009, Posts: 4,948)
    "mapping a polygon to a unit sphere"
    You've asked several questions about "mapping" this to that, but you don't really explain what you mean by that. Are you talking about producing a set of texture coordinates? Are you talking about somehow turning a polygon into a sphere? Are you talking about generating a polygonal representation of a sphere?

    Describe the result you're trying to achieve.

  4. #4 Junior Member Regular Contributor (Join Date: Apr 2012, Posts: 164)
    Quote Originally Posted by jenny_wui
    I need some suggestions about mapping a polygon to a unit sphere. Kindly help me.
    Actually, a very simple solution would be to normalize the vertices of the polygon. This would force them to lie on the surface of a unit sphere. Have you tried that?
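
    For example, a minimal sketch of that normalization (my own illustration; it assumes the unit sphere is centred at the origin and the vertices are held in a plain array):

    Code:
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Push each vertex onto the unit sphere by normalizing its position
    // relative to the sphere's centre (here the origin).
    void projectToUnitSphere(std::vector<Vec3>& verts)
    {
        for (Vec3& v : verts)
        {
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            if (len > 0.0f)   // a vertex exactly at the centre has no direction
            {
                v.x /= len;
                v.y /= len;
                v.z /= len;
            }
        }
    }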

  5. #5 Junior Member Regular Contributor (Join Date: Jun 2010, Posts: 162)
    I think I need to clarify my problem. I have one cross-section, and I would like to give it the shape of another cross-section that is totally different in shape. I thought about unitizing both cross-sections by enclosing them within a unit bounding box or sphere, and then mapping the vertices of one cross-section onto the other. I am not sure whether my approach is OK or not. I need some suggestions.

  6. #6 Senior Member OpenGL Pro (Join Date: Jan 2012, Location: Australia, Posts: 1,104)
    Are you talking about morphing one cross-section into another? If so, you will need the same number of vertices on each cross-section.

    One way I can think of is as follows. Let's call the cross-section with the most vertices XA and the other XB:

    express the vertices of XA as a percentage along the path of the cross-section;

    find the corresponding percentage along XB and insert a vertex there.

    Now you can morph XA into XB, or vice versa, using a linear function applied to each vertex (a rough code sketch follows below).

    There is a caveat - this works well for 2D but you can get some strange in-betweens in 3D depending on the relative start and end positions of the cross-sections. You can notice this when you do skinning/lofting in a 3D modelling package.
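
    A rough sketch of that percentage-matching idea (my own illustration; it assumes 2D cross-sections stored as ordered vertex lists with at least two vertices, and the names resampleAt/morph are just placeholders). You would compute each XA vertex's arc-length fraction, call resampleAt on XB at those fractions to build a matched list, and then blend:

    Code:
    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    // Point at fraction t in [0, 1] of the total arc length along 'path'.
    Vec2 resampleAt(const std::vector<Vec2>& path, float t)
    {
        std::vector<float> cum(path.size(), 0.0f);     // cumulative segment lengths
        for (size_t i = 1; i < path.size(); ++i)
        {
            float dx = path[i].x - path[i - 1].x;
            float dy = path[i].y - path[i - 1].y;
            cum[i] = cum[i - 1] + std::sqrt(dx * dx + dy * dy);
        }
        float target = t * cum.back();
        size_t i = 1;
        while (i < path.size() - 1 && cum[i] < target) ++i;
        float seg = cum[i] - cum[i - 1];
        float f = (seg > 0.0f) ? (target - cum[i - 1]) / seg : 0.0f;
        return { path[i - 1].x + f * (path[i].x - path[i - 1].x),
                 path[i - 1].y + f * (path[i].y - path[i - 1].y) };
    }

    // Linear blend between matched vertex lists: t = 0 gives XA, t = 1 gives XB.
    std::vector<Vec2> morph(const std::vector<Vec2>& xa,
                            const std::vector<Vec2>& xb, float t)
    {
        std::vector<Vec2> out(xa.size());
        for (size_t i = 0; i < xa.size(); ++i)
        {
            out[i].x = (1.0f - t) * xa[i].x + t * xb[i].x;
            out[i].y = (1.0f - t) * xa[i].y + t * xb[i].y;
        }
        return out;
    }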

  7. #7 Junior Member Regular Contributor (Join Date: Jun 2010, Posts: 162)
    Actually it is something like morphing, but I have one cross-section with a known set of vertices, and the shape to which I would like to map it is a boundary extracted from an image. I would like to morph my cross-section to take on the shape of the image boundary. I am not sure whether I am on the right track. As the two are of different sizes, I wanted to unitize them to a unit bounding box, centred at the centre of the bounding box. Then I pass a line from the centre through each vertex of the cross-section up to the bounding box. This line intersects the extracted image boundary somewhere, and the corresponding cross-section vertex is pushed or pulled (as necessary) to that intersection point. Does this sound OK? Please give me some more suggestions.
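
    If it helps, here is a rough 2D sketch of that centre-ray idea (my own illustration; it assumes both shapes have already been scaled into the unit box and centred at the origin, and that the extracted image boundary is a closed polygon):

    Code:
    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    // Distance t >= 0 along the ray (origin, dir) to segment a-b, or -1 if no hit.
    float raySegment(Vec2 dir, Vec2 a, Vec2 b)
    {
        Vec2 d = { b.x - a.x, b.y - a.y };
        float denom = dir.x * d.y - dir.y * d.x;
        if (std::fabs(denom) < 1e-8f) return -1.0f;      // ray parallel to segment
        float t = (a.x * d.y - a.y * d.x) / denom;       // parameter along the ray
        float s = (a.x * dir.y - a.y * dir.x) / denom;   // parameter along the segment
        return (t >= 0.0f && s >= 0.0f && s <= 1.0f) ? t : -1.0f;
    }

    // Push/pull each cross-section vertex along the ray from the origin through it
    // onto the nearest intersection with the image boundary.
    void remapToBoundary(std::vector<Vec2>& section, const std::vector<Vec2>& boundary)
    {
        for (Vec2& v : section)
        {
            float best = -1.0f;
            for (size_t i = 0; i < boundary.size(); ++i)
            {
                const Vec2& a = boundary[i];
                const Vec2& b = boundary[(i + 1) % boundary.size()];
                float t = raySegment(v, a, b);
                if (t >= 0.0f && (best < 0.0f || t < best)) best = t;
            }
            if (best >= 0.0f) { v.x *= best; v.y *= best; }   // intersection = best * v
        }
    }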
