Cylindrical Projection

Hi all,

I have created a virtual environment with nice vertices and surfaces. Orthogonal projections onto the screen work fine.

But now I want to create a panoramic view of the whole scene! I think this is actually the projection of all points in the world onto a cylinder, which is then unrolled.

Thus, I want to have the horizontal angle (from -180 to 180 deg) on my screen's x-axis, and for each angle the y-coordinates should be the projection of the world in that direction…

Unfortunately I have no idea how to get there, and I would be very happy about any helpful ideas.
Thanks!!

Rotate your camera around a single axis in fixed angle increments and keep taking pictures. Later, ‘stitch’ the pictures together in either Photoshop or dedicated panorama-making software, and that should be it.

A nicer way is to render the scene 6 (or 4 if you don’t need top and bottom views) times, to build a cubemap.
In the center of the virtual cube you set up the camera and render 4 times (north, east, south, west) to get a somewhat faceted “cylinder”.
Then, to unwrap the cuboid into a cylinder, render each face with a special fragment shader :

sampledtexcoord.y = texcoord.y
sampledtexcoord.x = tan( texcoord.x-0.5)

then store the result for each face side by side, and voilà: a beautiful panorama.

Feel free to ask for details, that was just a high level view.

Hi,

this is how I think it should work. It is important that I can do this projection live and at a high framerate, so I would very much appreciate more details about how I could take the 4 pictures, stitch them together and render them… and do everything on the graphics card…

Do I also need to correct the picture for some distortion?

Thank you so much for your help!!!

I will try to describe this solution in smaller steps :

  1. the camera projection to use :
    1.1) find the correct projection : it must have 90° of horizontal FOV; the vertical FOV is up to you
    1.2) find the appropriate rendering resolution : horizontally it should be around 1.27 / 4 = 0.32 times the expected panorama horizontal resolution (1.27 ≈ 4/π : the centre of each face gets magnified the most when unwrapped, so it needs the extra source resolution)
    1.3) according to the above steps, find the vertical resolution
  2. render the 4 views, for each view :
    2.1) render scene
    2.2) copy it to a texture (each render to a different texture) using glCopyTexImage2D
    2.3) rotate the camera 90° to the right around the vertical axis before the next render
  3. now the unwrap+stitch :
    3.1) render a textured quad (a quarter of the total panorama width) on the leftmost side of the screen with the first render texture, using a fragment shader to apply the ‘flat to quarter cylinder’ correction by tweaking the texture coordinate with sampledtexcoord.x = tan( texcoord.x - 0.5 )
    3.2) in the same way, but translated ‘width’ pixels to the right, render the second render texture
    3.3) proceed the same way for the third and fourth render textures
    3.4) swap buffers, and now you have your panorama

This will be fast: it is just 4 renders of the scene plus some fast copies to texture and a bit of quad rendering.
If parts of the GL window are covered by other windows, you will have to do all the 2.1) steps rendering to an FBO (see the sketch below).
The GLSL shader in 3.1) is only a basic texturing shader, with the added trick on the texture coordinates.
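
In case you do need the FBO path, here is a minimal PyOpenGL sketch of a render target for one view. This is just a sketch with my own placeholder names and sizes, not code from this thread; the depth renderbuffer is an assumption because the scene is rendered with depth testing. Note that rendering into the FBO also makes the glCopyTexImage2D copy of step 2.2) unnecessary, since the colour attachment already is the texture.


from OpenGL.GL import *
from OpenGL.GL.EXT.framebuffer_object import *

def create_view_target(width, height):
    # colour texture that will receive one of the 4 views
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, None)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)

    # depth renderbuffer, needed because the scene uses the depth test
    depth = glGenRenderbuffersEXT(1)
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth)
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, width, height)

    # framebuffer object tying both together
    fbo = glGenFramebuffersEXT(1)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo)
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0)
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depth)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0)
    return fbo, tex

# per frame, for each of the 4 views:
#   glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo)   # step 2.1 renders off-screen
#   ... render the scene ...
#   glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0)     # back to the window for step 3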

Hi, thanks for your help. After some playing around I seem to have managed the steps up to the stitching… But I am not able to program the shader that does the correction. Do you have any suggestions for how this could be done easily? Below you find my current source code (Python)…

I am drawing a large vertex array where half of the quads are filled with one texture while the others are filled with another…

This is done 4 times, once for each camera orientation.


def DrawGLScene():
    global texture, go, position

    # Render the scene 4 times (one 90 deg camera rotation per pass) and copy
    # each result into texture objects 3..6.
    i = 3
    for ang in [0, 90, 180, 270]:
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)    # Clear The Screen And The Depth Buffer
        glLoadIdentity()

        glRotatef(ang, 0, 1, 0)
        glTranslate(0, 0, -position)

        if texture:
            glEnable(GL_TEXTURE_2D)
            glBindTexture(GL_TEXTURE_2D, 1)

        # first part of the vertex array uses texture 1 ...
        glDrawArrays(GL_QUADS, 0, 800)

        if texture:
            glBindTexture(GL_TEXTURE_2D, 2)

        # ... the rest uses texture 2
        glDrawArrays(GL_QUADS, 800, 400)

        # copy the framebuffer into texture object i (3, 4, 5, 6)
        glBindTexture(GL_TEXTURE_2D, i)
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, 1024, 768, 0)

        i += 1

    # Now draw the 4 captured views side by side as textured quads.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)    # Clear The Screen And The Depth Buffer
    glLoadIdentity()

    i = 3
    for x in [-0.5, -0.25, 0, 0.25]:
        glBindTexture(GL_TEXTURE_2D, i)
        glEnable(GL_TEXTURE_2D)

        glBegin(GL_QUADS)
        glTexCoord2f(0, 0); glVertex3f(x, -0.25 / 2, -0.5)
        glTexCoord2f(1, 0); glVertex3f(x + 0.25, -0.25 / 2, -0.5)
        glTexCoord2f(1, 1); glVertex3f(x + 0.25, 0.25 / 2, -0.5)
        glTexCoord2f(0, 1); glVertex3f(x, 0.25 / 2, -0.5)
        glEnd()

        glDisable(GL_TEXTURE_2D)
        i += 1

    glutSwapBuffers()

    glutPostRedisplay()

    # simple forward motion through the tunnel
    if go:
        position += 0.1
        if position > 54:
            position = 50
Thanks,
Armin

Tutorials about GLSL texturing :
http://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/texturing.php
http://www.opengl.org/wiki/GLSL_:_common_mistakes#Binding_A_Texture
http://www.ozone3d.net/tutorials/glsl_texturing_p02.php#part_2
http://www.lighthouse3d.com/opengl/glsl/index.php?texture

You can also do it without GLSL, but it will require drawing a highly tessellated rectangle with custom texture coordinates following the famous tan(), i.e. doing it with vertices instead of doing it properly with fragments thanks to GLSL. It will be uglier and probably slower.
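
Just to illustrate that fallback, here is a rough immediate-mode sketch in the same Python style as the code above. The function name, the y_half half-height and the segment count are made up for this example; each call draws one face of the panorama with that face's render texture bound.


import math
from OpenGL.GL import *

def draw_unwrapped_face(x0, width, y_half, segments=64):
    # Draw one quarter of the panorama as a strip of thin quads. Each column is
    # placed linearly across the panorama, but its texture coordinate follows
    # tan() so the flat 90 deg render gets the cylindrical correction.
    glBegin(GL_QUADS)
    for i in range(segments):
        p0 = i / float(segments)          # panorama position across this face, 0..1
        p1 = (i + 1) / float(segments)
        s0 = 0.5 + 0.5 * math.tan((p0 - 0.5) * math.pi / 2.0)   # -45..45 deg -> 0..1
        s1 = 0.5 + 0.5 * math.tan((p1 - 0.5) * math.pi / 2.0)
        xa = x0 + width * p0
        xb = x0 + width * p1
        glTexCoord2f(s0, 0.0); glVertex3f(xa, -y_half, -0.5)
        glTexCoord2f(s1, 0.0); glVertex3f(xb, -y_half, -0.5)
        glTexCoord2f(s1, 1.0); glVertex3f(xb,  y_half, -0.5)
        glTexCoord2f(s0, 1.0); glVertex3f(xa,  y_half, -0.5)
    glEnd()


The fragment-shader version applies the same mapping per pixel instead of per vertex, which is why it stays exact without the heavy tessellation.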

A small addition: you can use glCopyTexSubImage2D to copy data over an already existing texture; it is faster than glCopyTexImage2D.
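
In practice that means allocating the texture storage once at startup with glTexImage2D and then only copying into it each frame; a minimal sketch (tex_id and the 1024×768 size are placeholders taken from the code above):


# init time: allocate storage for each of the 4 view textures once
glBindTexture(GL_TEXTURE_2D, tex_id)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 768, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, None)

# every frame, after rendering a view: overwrite the existing storage in place
glBindTexture(GL_TEXTURE_2D, tex_id)
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 1024, 768)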

OK, thanks for the links and for glCopyTexSubImage2D… I now create a single large texture, which I then draw onto a rectangle… It works very nicely and I can walk through my virtual environment with a 360 deg view around the vertical axis… However, I am still puzzling over the shader stuff to introduce the distortion…

I did succeed in writing and compiling a shader and including it in my source code such that my program behaves as before, but how can I change the coordinates of pixels within a texture?? So far I can only change their color!


    VERTEX_SHADER = compileShader("""
        void main() {
            gl_TexCoord[0] = gl_MultiTexCoord0;
            gl_Position = ftransform();
        }
        """, GL_VERTEX_SHADER)

    FRAGMENT_SHADER = compileShader("""
        uniform sampler2D tex;

        void main()
        {
            vec4 color = texture2D(tex, gl_TexCoord[0].st);
            gl_FragColor = color;
        }
        """, GL_FRAGMENT_SHADER)

    testshader = compileProgram(VERTEX_SHADER, FRAGMENT_SHADER)
    glUseProgram(testshader)
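
    One detail worth noting (my addition, not from the original post): the "tex" sampler uniform is never set here, and this only works because samplers default to texture unit 0. Making it explicit would look like this:

    loc = glGetUniformLocation(testshader, "tex")
    glUniform1i(loc, 0)   # sample from texture unit 0 (GL_TEXTURE0)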

By the way, shouldn’t it be possible to perform the entire cylindrical projection with shaders?

Thank you so much for your help!!! This is really great!

I thought about something like this:


uniform sampler2D tex;

void main() {
  vec2 customTexCoord;
  customTexCoord.s = gl_TexCoord[0].s;
  // in radians : 0.5/tan(0.5) = 0.9152..., recentered around 0.5
  customTexCoord.t = 0.5 + tan(gl_TexCoord[0].t - 0.5) * 0.915243860856226;
  vec4 color = texture2D(tex, customTexCoord.st);
  gl_FragColor = color;
}

And I would really like to see a screenshot of your results :slight_smile:

Doing the cylinder rendering directly is unfortunately not possible, mainly because it is a non-linear projection; the hardware is optimized for linear interpolation.

Hi,

Phew… This was tricky, but after some days of work (and basic school geometry) I got it… The biggest problem was that you also need to stretch the other direction (t in my case), not only s.

So I have a camera rotating in place in 90 deg steps, taking 4 pictures, one to each side. The horizontal opening angle is 90 deg, but the vertical opening is slightly larger in order to get some ground, which is needed by the cylindrical projection later.

These 4 pictures are stitched together into one large texture, so one camera picture goes from s = 0 … 0.25, the next from 0.25 … 0.5, and so on. This texture is then transformed with the following fragment shader:


            uniform sampler2D tex;
            void main() {

              float PI = 3.14159265;

              float angle;   // panorama azimuth, -180 .. 180 deg
              float azim;    // azimuth relative to the centre of the current face, -45 .. 45 deg
              float dist;    // distance from the camera to the hit point on the flat render

              float get_s, get_t, pos_s;

              angle = gl_TexCoord[0].s*360.0 - 180.0;

              // select which of the 4 stitched views this column falls into
              if (angle >= -180.0 && angle < -90.0) {
                  azim = angle + 180.0 - 45.0;
                  pos_s = 0.0;
              }

              if (angle >= -90.0 && angle < 0.0) {
                  azim = angle + 45.0;
                  pos_s = 0.25;
              }

              if (angle >= 0.0 && angle < 90.0) {
                  azim = angle - 45.0;
                  pos_s = 0.5;
              }

              if (angle >= 90.0 && angle < 180.0) {
                  azim = angle - 180.0 + 45.0;
                  pos_s = 0.75;
              }

              // where the ray at this azimuth hits the flat 90 deg render,
              // and how far away that point is (1/cos(azim))
              dist = sqrt(pow(tan(PI*azim/180.0), 2.0) + 1.0);
              get_s = (tan(PI*azim/180.0) + 1.0)/8.0 + pos_s;
              get_t = (gl_TexCoord[0].t - 0.5)*dist + 0.5;

              // outside the vertical range covered by the renders: paint black
              if (gl_TexCoord[0].t < 0.5 - 0.5/sqrt(2.0) || gl_TexCoord[0].t > 0.5 + 0.5/sqrt(2.0)) {
                  gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
              } else {
                  gl_FragColor = texture2D(tex, vec2(get_s, get_t));
              }
            }

The “world” I have tested the program with is a long tunnel with colored checkerboard walls.

Here you see the tunnel in three orientations (90 deg, 135 deg, 250 deg)


and the corresponding panorama pictures (as a test: they have to be the same, of course, just shifted horizontally)


I am really grateful for your help!

Looks very good :slight_smile:
There seems to be room for improvement in the texture sampling: use trilinear filtering and regenerate the mipmaps, once the 4 view textures are updated, with glGenerateMipmap(GL_TEXTURE_2D);

Ideally add anisotropic filtering too.

Very nice!

Thank you ZbuffeR, the trilinear filtering makes the textures at the end of the tunnel much less pixelated!!
I simply did the following:


glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR)

glGenerateMipmapEXT(GL_TEXTURE_2D)

That's all…

Good !
Adding a bit of anisotropic filtering can limit the blurriness that comes with mipmaps :
http://developer.nvidia.com/object/Anisotropic_Filtering_OpenGL.html
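
For example, something along these lines on each view texture (just a sketch; it assumes the EXT_texture_filter_anisotropic extension is present, and 4.0 is an arbitrary amount):


from OpenGL.GL.EXT.texture_filter_anisotropic import *

# clamp the value to glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT) if needed
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 4.0)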

It may be even nicer.

Yeah!!! Now it's perfect!

This is all very beautiful, but how was the cylindrical projection fix derived? Can I get some links? Where is the radius of the cylinder taken into account?

The radius does not need to be taken into account. It is the same for classic planar projections: the distance to the plane is not taken into account either.
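
For reference, here is one way to reconstruct the geometry behind the shader above (my own sketch, not from the original posts): take a unit-radius cylinder with the camera on its axis and the image plane of one 90° view tangent to the cylinder at the face centre; let a be the azimuth measured from that face centre and e the elevation of a ray.

\begin{align*}
  x &= \tan a \;\Rightarrow\; \mathrm{get\_s} = \tfrac{1}{8}(\tan a + 1) + \mathrm{pos\_s}
      && \text{(hit point on the flat render; each face is 1/4 of the texture)}\\
  d &= \sqrt{\tan^2 a + 1} = \frac{1}{\cos a}
      && \text{(distance from the camera to that hit point)}\\
  y_{\mathrm{plane}} &= d\,\tan e, \quad y_{\mathrm{cyl}} = \tan e
      \;\Rightarrow\; \mathrm{get\_t} = (t - 0.5)\,d + 0.5
      && \text{(the flat render is stretched vertically by $d$)}
\end{align*}

Every length above scales linearly with the cylinder radius (and with the distance to the image plane), and texture coordinates are normalised to [0, 1], so the scale cancels; that is exactly why neither the radius nor the plane distance ever shows up.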