Lat-long/spherical projection using vertex shader

Hello,

I am trying to emulate a spherical camera projection from inside a vertex shader. What I am really after is something like this: in a path tracer like Arnold/RIS, you would render a lat-long image by using a spherical camera/projection. The idea is that for every pixel in the rendered image, you compute a corresponding 3D coordinate based on spherical/polar math driven by the NDC coordinate. This way I get a lat-long-style render of the entire scene with offline rendering.

I was wondering if this is possible in a GLSL/vertex shader context? I have been able to project the incoming vertices in a spherical manner, but it truly is not a lat-long image. I understand that the vertex shader works only on the incoming vertices and that post-MVP I can get screen-space coords. But I will need access to the entire screen-space coordinate range and also the render resolution to be able to do what I am after, I suppose?

I know there are techniques like two-pass, cube-mapped rendering, but I am trying to avoid those and get the lat-long projection via a vertex shader.

Can the GLSL gurus please help?

Thanks
-raw

You can’t get around the fact that the result of applying a spherical projection to a triangle isn’t a triangle.

If your triangles are really small and not close to the poles, the difference may not be significant. Otherwise, the sane solution is to render the six faces of a cube map then project that using a fragment shader.
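
For reference, here is a minimal sketch of that second pass, assuming the six faces have already been rendered into a cube map: a full-screen fragment shader that turns each pixel's lat-long coordinate into a direction and samples the cube map. The names (uEnvCube, vUV) are illustrative, not from this thread.

#version 330 core
uniform samplerCube uEnvCube;   // the six faces rendered in the first pass
in vec2 vUV;                    // [0,1]^2 across a full-screen quad
out vec4 fragColor;

const float PI = 3.14159265358979;

void main()
{
    float phi   = (vUV.x - 0.5) * 2.0 * PI;   // longitude, [-pi, +pi]
    float theta = (vUV.y - 0.5) * PI;         // latitude,  [-pi/2, +pi/2]
    vec3 dir = vec3(cos(theta) * sin(phi),
                    sin(theta),
                    cos(theta) * cos(phi));
    fragColor = texture(uEnvCube, dir);
}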

[QUOTE=GClements;1285931]You can’t get around the fact that the result of applying a spherical projection to a triangle isn’t a triangle.

If your triangles are really small and not close to the poles, the difference may not be significant. Otherwise, the sane solution is to render the six faces of a cube map then project that using a fragment shader.[/QUOTE]

Regarding the triangle retaining its shape, I can deal with that by using a tessellation shader to get smoother geometry. The geometry will indeed be deformed, but that's OK.

What I want to know is whether it's possible at all to achieve a true lat-long-style render (like I first mentioned) within a vertex shader.

Thanks again.

Short answer: no. In particular, consider how you would need to deal with triangles which intersect the axis or the anti-meridian (i.e. those which would intersect the edges of the viewport once projected). Such triangles would need to be split so that the projection is continuous for each portion, and that can’t be done with a vertex shader.

You might be able to get something which is “close enough” by simply projecting the vertices then discarding back-facing triangles (those which span the anti-meridian) as well as any fragments for which an attribute’s dFdx() is extreme (which will occur in triangles which either enclose the poles or are very close to them).

That will result in some kind of border at the edges of the viewport. Whether it matters depends upon your application. If you want a seamless image which can be used as a spherical environment map, then it will matter.
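
A hedged sketch of that "close enough" discard, assuming the vertex shader also passes through an attribute that is not derived from the projection (here an eye-space position, vEyePos): pole-wrapped triangles get hugely stretched, so the per-pixel change of such an attribute becomes abnormally small, while anti-meridian triangles are left to ordinary back-face culling since the projection flips their winding. The threshold is scene-dependent and purely illustrative.

#version 330 core
in vec3 vEyePos;               // eye-space position from the vertex shader
out vec4 fragColor;

void main()
{
    // Stretched (pole-wrapping) triangles show an anomalously small
    // screen-space derivative of the non-projected attribute.
    float rate = length(dFdx(vEyePos)) + length(dFdy(vEyePos));
    if (rate < 1e-4)           // threshold is an assumption; tune per scene
        discard;

    fragColor = vec4(1.0);     // placeholder; real shading would go here
}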


[QUOTE=GClements;1286040]

You might be able to get something which is “close enough” by simply projecting the vertices then discarding back-facing triangles (those which span the anti-meridian) as well as any fragments for which an attribute’s dFdx() is extreme (which will occur in triangles which either enclose the poles or are very close to them).

That will result in some kind of border at the edges of the viewport. Whether it matters depends upon your application. If you want a seamless image which can be used as a spherical environment map, then it will matter.[/QUOTE]

Gotcha! I understand what you are saying about the edge cases. Can you please elaborate on the workaround you mentioned? Out of intellectual curiosity, I would still like to go ahead and write a GLSL shader that, in practice, does what I intend, barring the edge cases. I was planning to use this GLSL shader inside Maya’s Viewport 2.0 and later use a Maya Playblast (baked hardware render) for VR purposes.

Let me start with some sample code I have in mind.


const float PI = 3.14159265358979; // GLSL has no built-in PI; it must be defined

vec4 toLatLong(vec4 v) // incoming vertex after MVP mult.
{
    vec4 o = vec4(1.0);

    // radius, theta, phi
    float r = sqrt(pow(v.x, 2.0) + pow(v.y, 2.0) + pow(v.z, 2.0));
    float theta = acos(v.z/r);
    float phi   = atan(v.y/v.x);

    o.x = clamp(0.5 + phi/(2.0*PI), 0.0, 1.0);
    o.y = clamp(theta/PI, 0.0, 1.0);
    //o.z = (r * cos(theta))/v.z;
    return o; // final vertex pos, written to gl_Position
}

I would like the code to be driven by the render resolution. That is, once I have the screen-space vertices, I would like them to be scaled by the render resolution, like in this example ( https://www.shadertoy.com/view/MlfSz7 ). I was wondering if we can get similar behaviour in a vertex shader.

If I can get something working close to the above example I think I can make it fit my purpose.

Being relatively new to OpenGL, I really appreciate your help.

Thanks

The workarounds are to identify triangles which will be incorrect and discard or fix them.

For a triangle which spans the anti-meridian, you might have e.g. one vertex at -179.9° and two at +179.9°. When projected, these will have NDC X coordinates of -0.999 and +0.999, i.e. its width will be the entire viewport minus a fraction of a pixel, rather than just a fraction of a pixel. If the fragment shader computes dFdx() for any interpolated attribute, its value will be a few orders of magnitude smaller than for other triangles.

These triangles can be discarded, or fixed by duplicating them on both sides. So the hypothetical -179.9/+179.9 triangle becomes two identical triangles offset by 360°, one from -179.9 to -180.1, the other from +180.1 to +179.9. Each will span one edge (left or right) of the viewport.

Triangles which intersect the axis can be detected by a point-in-triangle test (i.e. whether the origin is inside the triangle’s projection onto the equatorial plane). They can be fixed by reflecting a vertex about the corresponding pole. E.g. for a triangle which encompasses the north pole, adding 180° to the longitude while changing the latitude from λ to 90°+(90-λ) = 180°-λ. Such triangles will span the top or bottom edge of the viewport. Note that the “fixed” triangle may span the anti-meridian, in which case it will need to be duplicated as above.
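
A minimal GLSL-style sketch of those two fixes as helper functions on longitude/latitude values in radians; the names and conventions are illustrative, and the actual splitting/duplication has to happen per primitive (e.g. in a geometry shader, as discussed further down).

const float PI = 3.14159265358979;

// Duplicate across the anti-meridian: shift the longitude by a full turn so
// the copy spans the opposite (left/right) edge of the viewport.
float shiftAcrossAntiMeridian(float lon)
{
    return lon + ((lon < 0.0) ? 2.0 * PI : -2.0 * PI);
}

// Reflect about the north pole: the longitude gains half a turn and the
// latitude lambda becomes pi - lambda (i.e. 180 deg - lambda), which places
// the vertex past the top edge of the viewport.
vec2 reflectAboutNorthPole(vec2 lonLat)
{
    return vec2(lonLat.x + PI, PI - lonLat.y);
}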

Use the length() function, i.e. r = length(v.xyz), rather than summing the squares by hand.

Use theta = atan(length(v.xy), v.z); acos() is numerically unstable when its argument is close to ±1 (i.e. near the poles). Actually, swap the arguments and use theta = atan(v.z, length(v.xy)), so that theta is a signed latitude in [-pi/2, +pi/2] and covers both hemispheres in the signed range.

Use atan(v.y, v.x). The two-argument form gets the quadrant correct: atan(1,1) is π/4 (45°), atan(-1,-1) is -3π/4 (-135°).

I suggest using the signed [-1,+1] range rather than the unsigned [0,1] range.

IOW:


theta = atan(v.z, length(v.xy));
phi   = atan(v.y, v.x);
o = vec4(phi/PI, 2.0*theta/PI, length(v.xyz), 1.0);
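
Putting those suggestions together, a sketch of the corrected function might look like this (note that the z component, the radial distance, would still need to be remapped into the [-1,+1] depth range before it is usable as a depth value):

const float PI = 3.14159265358979;

// Spherical (lat-long) projection of a post-MV (eye-space) position:
// x = longitude mapped to [-1,+1], y = latitude mapped to [-1,+1].
vec4 toLatLong(vec4 v)
{
    float theta = atan(v.z, length(v.xy));   // latitude,  [-pi/2, +pi/2]
    float phi   = atan(v.y, v.x);            // longitude, [-pi,   +pi]
    return vec4(phi/PI, 2.0*theta/PI, length(v.xyz), 1.0);
}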

Thank you for the detailed answer and also the GLSL tips. Really helpful!

[QUOTE=GClements;1286093]The workarounds are to identify triangles which will be incorrect and discard or fix them.

Triangles which intersect the axis can be detected by a point-in-triangle test (i.e. whether the origin is inside the triangle’s projection onto the equatorial plane). They can be fixed by reflecting a vertex about the corresponding pole. E.g. for a triangle which encompasses the north pole, adding 180° to the longitude while changing the latitude from λ to 90°+(90-λ) = 180°-λ. Such triangles will span the top or bottom edge of the viewport. Note that the “fixed” triangle may span the anti-meridian, in which case it will need to be duplicated as above.

[/QUOTE]

Is this something that can be done inside GLSL vertex code? My understanding is that for things like point-in-triangle tests, you need control over the VBOs etc., i.e. over the main OpenGL client app. Is that right?

My requirement here is very specific: I can only use GLSL (VS/FS) code via Maya's viewport support (the OGS framework).

While the above math didn't get me far, I tried the technique described [here](https://emmanueldurand.net/spherical_projection/). It gave me better results.

But I doubt that even with the above approaches I will get where I want to be in terms of image correctness for VR purposes, given the vertex stretching, singularities and other issues.

For example, in a path tracer like PBRT, the following code would work as a spherical camera:

Float theta = Pi * sample.y / fullResolution.y;
Float phi = 2 * Pi * sample.x / fullResolution.x;
Vector3f dir(std::sin(theta) * std::cos(phi), std::cos(theta),
             std::sin(theta) * std::sin(phi));

Here we are traversing *every* pixel in the image and finding a corresponding spherical direction into the scene. So we are going from the pixel domain to 3D and get complete mapping coverage. But in a vertex shader we work only on the incoming vertices, with no idea of the overall image size for correct mapping. So I can't tell whether the above math is accurate enough. Please correct me if I am not thinking about this the right way.

I tried attaching screenshots but the forum says it's an invalid format (jpeg). :tired:

Many kind thanks!


No, because it’s a property of a triangle rather than of a single vertex. Triangles which span the anti-meridian can be detected in a GLSL fragment shader by the magnitude of dFdx(). But that would only allow you to discard them, not fix them.

You could perform the fixes in a geometry shader.

Suppose that you have a triangle which is in the middle of the viewport. Now suppose that you pan the view so that the triangle moves horizontally. When part of the triangle disappears off one edge of the viewport, the portion which disappeared must re-appear at the opposite edge. The only way to get the rasterisation process to do this is to duplicate the triangle, so three vertices become six vertices.
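
Here is a minimal geometry-shader sketch of that duplication, assuming the vertex shader has already written lat-long positions with x = longitude/pi in [-1,+1]. The wrap-detection threshold is a heuristic, and the other varyings (normals, etc.) are omitted for brevity; a real shader would forward them as well.

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

void emitCopy(float side)   // side = +1.0: right-edge copy, -1.0: left-edge copy
{
    for (int i = 0; i < 3; ++i) {
        vec4 p = gl_in[i].gl_Position;
        // Pull the far-side vertices across by a full turn (2 NDC units).
        if (side > 0.0 && p.x < 0.0) p.x += 2.0;
        if (side < 0.0 && p.x > 0.0) p.x -= 2.0;
        gl_Position = p;
        EmitVertex();
    }
    EndPrimitive();
}

void main()
{
    float xmin = min(gl_in[0].gl_Position.x,
                     min(gl_in[1].gl_Position.x, gl_in[2].gl_Position.x));
    float xmax = max(gl_in[0].gl_Position.x,
                     max(gl_in[1].gl_Position.x, gl_in[2].gl_Position.x));

    if (xmax - xmin > 1.0) {   // heuristic: assumes no legitimate triangle is this wide
        emitCopy(+1.0);        // copy spanning the right edge of the viewport
        emitCopy(-1.0);        // copy spanning the left edge of the viewport
    } else {
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}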

The rabbit hole goes deeper and deeper. Alright, I will read up on geometry shaders and give it a shot.

What do you think about my concerns regarding the “correctness” of the lat-long image achievable with GLSL compared to a path-tracer render, as I explained in my earlier post? Do you think we can match it exactly?

No, because the fragments within each triangle will always have their positions determined by linear interpolation between vertex positions.

As the triangles get smaller, so will the deviation from the correct result, but it will never reach zero. Just getting to the point where the distortion isn’t obvious will probably require the use of tessellation shaders to get triangles down to a few pixels in size.
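
For illustration, a sketch of such a tessellation stage, assuming the vertex shader only applies the model-view transform and passes the eye-space position through. The fixed tessellation level of 16, the variable names, and the placeholder depth are all arbitrary choices.

// --- tessellation control shader (fixed subdivision level) ---
#version 400 core
layout(vertices = 3) out;

in vec3 vEyePos[];              // eye-space position from the vertex shader
out vec3 tcEyePos[];

void main()
{
    tcEyePos[gl_InvocationID] = vEyePos[gl_InvocationID];
    if (gl_InvocationID == 0) {
        gl_TessLevelOuter[0] = 16.0;
        gl_TessLevelOuter[1] = 16.0;
        gl_TessLevelOuter[2] = 16.0;
        gl_TessLevelInner[0] = 16.0;
    }
}

// --- tessellation evaluation shader (spherical mapping per generated vertex) ---
#version 400 core
layout(triangles, equal_spacing, ccw) in;

in vec3 tcEyePos[];

const float PI = 3.14159265358979;

void main()
{
    // Interpolate the eye-space position with the barycentric tess coords,
    // then apply the spherical projection per generated sub-vertex.
    vec3 p = gl_TessCoord.x * tcEyePos[0]
           + gl_TessCoord.y * tcEyePos[1]
           + gl_TessCoord.z * tcEyePos[2];

    float theta = atan(p.z, length(p.xy));   // latitude
    float phi   = atan(p.y, p.x);            // longitude
    gl_Position = vec4(phi / PI, 2.0 * theta / PI, 0.0, 1.0);  // depth is a placeholder
}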

I have got it working to a decent stage, but I think I need something basic clarified.

So imagine a simple setup with a camera and two spheres: one sphere z units in front of the camera and the other z units behind it. After the MVP multiplication with a perspective projection, will the sphere right behind the camera get discarded? That is, will the vertex shader not receive vertices from this object? My experiments tell me that I can't operate on these vertices; for example, atan(v.z, v.x) seems to work only for positive v.z.

Please let me know if I am missing something.

Assuming z is the distance to the closest point on the sphere, and MVP is applied in the vertex shader, then if you output that MVP-transformed position from your vertex shader, you won’t get fragment shader executions for the sphere behind the eyepoint. Those primitives/fragments will be clipped away by the near clip plane (and potentially by the bottom, top, left, and right clip planes as well).

However, in the vertex shader, after you’ve transformed the positions by MVP (i.e. taken them to clip space), you can modify them any way you want to. In so doing, you could cause these primitives that would have been clipped away not to be clipped. (I’m not saying you should; just trying to give you a complete answer to your question.)

The vertex shader is executed for every vertex. Frustum culling and clipping are performed on primitives, based upon the vertex coordinates written to gl_Position by the vertex shader.

If the vertex shader simply applies a perspective projection, then vertices behind the viewer will have negative W, so they’ll fail all of the clip tests.

If you’re implementing a panoramic projection, you’d never give vertices a negative W coordinate.

The two-argument form of atan() works with any combination of signs. The result will have the same sign as the first argument; its magnitude will be between 0 and pi/2 if the second argument is positive, between pi/2 and pi if it’s negative.

[QUOTE=GClements;1286216]

The two-argument form of atan() works with any combination of signs. The result will have the same sign as the first argument; its magnitude will be between 0 and pi/2 if the second argument is positive, between pi/2 and pi if it’s negative.[/QUOTE]

Thanks GClements and Dark Photon, that clarified it for me. Doing just MV instead of MVP was what was really needed. I remapped atan accordingly to get the correct phi value after normalizing. The panoramic projection now closely matches that of a path tracer. I am testing on a simple scene and have yet to test on a complex one.

I am reading up on geometry shaders to get something working for my case. I will update on that later.

I realised that I will also need correct normals for lighting. After the vertices have been dragged across, what techniques are available to get simple Lambertian shading with normals respecting the new positions of the vertices? Is it something I can do in the vertex shader, or does it belong in the geometry or fragment shader?

Thanks

Lighting can be done either in the vertex shader or fragment shader. Nowadays, it’s normally done in the fragment shader unless you’re specifically trying to emulate the fixed-function lighting. Either way, you will need affine vertex coordinates (prior to the spherical mapping) for lighting. That’s why legacy OpenGL has separate model-view and projection matrices: the lighting calculations don’t work in a projective space (i.e. after you’ve applied a perspective projection).

So the vertex shader needs to emit both affine vertex coordinates (after the model-view transformation) and spherical (lat-lon) vertex coordinates. The latter will be used for gl_Position, the former for lighting calculations.
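
A minimal sketch of such a vertex shader, reusing the corrected lat-long mapping from earlier in the thread; the uniform/attribute names and the placeholder depth are illustrative.

#version 330 core
uniform mat4 uModelView;               // model-view only, no projection
layout(location = 0) in vec4 aPosition;
layout(location = 1) in vec3 aNormal;

out vec3 vEyePos;                      // affine (eye-space) position, for lighting
out vec3 vEyeNormal;                   // eye-space normal, for lighting

const float PI = 3.14159265358979;

void main()
{
    vec4 eyePos = uModelView * aPosition;
    vEyePos     = eyePos.xyz;
    vEyeNormal  = mat3(uModelView) * aNormal;   // see the note on normal matrices below

    // Spherical (lat-long) projection is used only for gl_Position.
    float theta = atan(eyePos.z, length(eyePos.xy));  // latitude
    float phi   = atan(eyePos.y, eyePos.x);           // longitude
    float depth = 0.0;   // placeholder; a real shader would remap distance to [-1,+1]
    gl_Position = vec4(phi / PI, 2.0 * theta / PI, depth, 1.0);
}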

[QUOTE=GClements;1286229]Lighting can be done either in the vertex shader or fragment shader. Nowadays, it’s normally done in the fragment shader unless you’re specifically trying to emulate the fixed-function lighting. Either way, you will need affine vertex coordinates (prior to the spherical mapping) for lighting. That’s why legacy OpenGL has separate model-view and projection matrices: the lighting calculations don’t work in a projective space (i.e. after you’ve applied a perspective projection).

So the vertex shader needs to emit both affine vertex coordinates (after the model-view transformation) and spherical (lat-lon) vertex coordinates. The latter will be used for gl_Position, the former for lighting calculations.[/QUOTE]

Ah, I see. That makes sense.

So that means I pass a vec4 to the fragment shader in addition to gl_Position and use the former for lighting in the fragment shader. Since I don’t apply the perspective matrix to gl_Position, none of the vertices get culled (depending on my spherical projection math). Is there a particular matrix you would suggest for the normal transformation (the inverse-transpose of MV)?

Are you referring to the lighting technique described here?
https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/lighting.php

Is this the best practice in GLSL shading? What would others suggest as references for best practices?

Thanks

You can just use the model-view matrix for transforming normals, provided that it’s composed only of translation, rotation and uniform scale (no shear or non-uniform scale). You only need to use the inverse-transpose if the model-view matrix distorts angles (so that vectors which are perpendicular in object coordinates are no longer perpendicular after transformation).
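
For reference, both options as GLSL sketches, assuming the uniform/attribute names from the vertex-shader sketch above (computing the inverse-transpose in the shader as shown here is wasteful; normally it would be computed once on the CPU and passed in as a uniform):

// If the model-view matrix contains only rotation, translation and uniform
// scale, its upper-left 3x3 is enough for normals:
vec3 n = normalize(mat3(uModelView) * aNormal);

// Otherwise use the inverse-transpose of that 3x3 (the "normal matrix"):
mat3 normalMatrix = transpose(inverse(mat3(uModelView)));
vec3 n2 = normalize(normalMatrix * aNormal);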

For lighting, the only requirement is that all of the vectors involved (vertex position, normal direction, light position/direction, eye position/direction) are in the same coordinate system, and that the coordinate system is affine to “world” space (i.e. no projection).

That’s basically taking the legacy OpenGL lighting model and performing the calculations per-fragment rather than per-vertex (i.e. the lighting calculation is performed in the fragment shader). Note that the gl_* variables holding lighting and material parameters only exist in the compatibility profile (they hold the values set by glLight() and glMaterial(), which also only exist in the compatibility profile). In the core profile, you need to define your own uniform variables for lighting/material state.

But it provides a reasonable basis for understanding how the lighting process is handled by the vertex and fragment shaders. Modern lighting models take advantage of the power of modern GPUs to do far more (e.g. environment maps, normal maps, shadows, etc), but you’ll still find the same concepts being used.
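
As an illustration of that last point, here is a core-profile sketch of per-fragment Lambertian lighting using user-defined uniforms in place of the gl_* lighting state; the input names match the vertex-shader sketch above and are otherwise illustrative.

#version 330 core
in vec3 vEyePos;                    // eye-space position from the vertex shader
in vec3 vEyeNormal;                 // eye-space normal from the vertex shader

uniform vec3 uLightPosEye;          // light position, already in eye space
uniform vec3 uLightColor;
uniform vec3 uDiffuseColor;

out vec4 fragColor;

void main()
{
    vec3 N = normalize(vEyeNormal);
    vec3 L = normalize(uLightPosEye - vEyePos);
    float NdotL = max(dot(N, L), 0.0);          // Lambertian term
    fragColor = vec4(uDiffuseColor * uLightColor * NdotL, 1.0);
}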

Can you point me to references/sites/books/code for this please?