Vertex and Geometry shader coordinate scaling problem

Hello everybody,

I’m coding a tiled forward rendering system without compute shaders, because I use OpenGL 3.3. That’s why I need to calculate the frustum grid on the CPU side, and I load it into GPU memory as a uniform buffer object. That’s not a speed problem, because it is done only at program start: it includes only the X and Y coordinates, i.e. the frustums’ sides. The Z coordinates, i.e. the frustums’ depths, may become a bottleneck, however, because the depths are continuously read back into CPU memory, where the search for the deepest values is done, and then loaded back into GPU memory. But I hope that glReadPixels with streaming double pixel buffer objects is fast enough. Actually those frustums need two depths for lighting, back and front, but I’m only working on the back side for now, because that’s enough for occlusion testing with bounding boxes.
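To make the readback idea concrete, here is a minimal sketch of the double-buffered PBO scheme I have in mind (the function names and layout are illustrative, not from my real code; it assumes an active GL 3.3 context):

GLuint pbo[2];

void initDepthReadback(int width, int height) {
    glGenBuffers(2, pbo);
    for (int i=0; i<2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, width*height*sizeof(GLfloat),
                     NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

//called once per frame after the depth pass
void readbackDepth(int width, int height, int frame) {
    int current=frame%2;      //PBO that receives this frame's depth
    int previous=(frame+1)%2; //PBO that holds the previous frame's depth

    //start an asynchronous transfer into the current PBO
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[current]);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

    //map the previous PBO; its transfer had a whole frame to finish,
    //so this map should not stall the pipeline
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[previous]);
    GLfloat* depths=(GLfloat*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (depths) {
        //seek the per-tile deepest values here and upload them to the UBO
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}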

Okay, that was the introduction… The problem is that I cannot get the OpenGL view frustum and the outer bounds of the tiled rendering frustum grid to match. I’ll show the relevant samples of my code, then describe what’s happening. My frustum projection:

//define local variables
float windowRatio=(float)widthI/(float)heightI;
float angle=60;
float near=0.5;
float far=20000;
float top=near*tan(PiiI/180*angle/2);
float bottom=-top;
float right=top*windowRatio;
float left=-right;

//this is OpenGL PROJECTION matrix
projektioI[0]=near/right;
projektioI[1]=0;
projektioI[2]=0;
projektioI[3]=0;
projektioI[4]=0;
projektioI[5]=near/top;
projektioI[6]=0;
projektioI[7]=0;
projektioI[8]=0;
projektioI[9]=0;
projektioI[10]=-1*(far+near)/(far-near);
projektioI[11]=-1;
projektioI[12]=0;
projektioI[13]=0;
projektioI[14]=-2*(far*near)/(far-near);
projektioI[15]=0;
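
As a sanity check (this check is my own addition, reusing the local variables above; it needs <cmath> and <cassert>): with a symmetric frustum this should be exactly the matrix that the classic gluPerspective(angle, windowRatio, near, far) would produce:

//sanity check: near/right == cot(fovy/2)/aspect and near/top == cot(fovy/2)
float f=1.0f/tan(PiiI/180*angle/2); //cot(fovy/2)
assert(fabs(projektioI[0]-f/windowRatio)<1e-5f); //== near/right
assert(fabs(projektioI[5]-f)<1e-5f);             //== near/top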

The tiled rendering frustum grid is a collection of Y- and X-aligned plane equations of the form a*x+b*y+c*z+d=0. I put here only the outermost ones. Every plane is defined by three points, one of which is always the origin (origo). The other two are corner points of the near plane of the OpenGL view frustum.

    float origo[3]={0, 0, 0};

//Right plane, two points:
float rightTop[3]={0, 0, -near};
rightTop[1]=near*tan(PiiI/180*angle/2);
rightTop[0]=rightTop[1]*windowRatio;
float rightBottom[3]={rightTop[0], -rightTop[1], -near};

//left plane, two points:
float leftTop[3]={-rightTop[0], rightTop[1], -near};
float leftBottom[3]={-rightTop[0], -rightTop[1], -near};

//top plane, one point (another one already/still defined):
leftTop[0]=-rightTop[0];
leftTop[1]=rightTop[1];
leftTop[2]=-near;

//bottom plane, one point (another one already/still defined):
leftBottom[0]=-rightTop[0];
leftBottom[1]=-rightTop[1];
leftBottom[2]=-near;

Below is a function I use to calculate plane equations:

void Render::calculatePlaneEquations(float* tasoYhtalo, const int& index, const float* origo, const float* vektori1, const float* vektori2) {

    //let's calculate the cross product V1xV2
    float v1[3]={origo[0]-vektori1[0], origo[1]-vektori1[1], origo[2]-vektori1[2]};
    float v2[3]={vektori2[0]-origo[0], vektori2[1]-origo[1], vektori2[2]-origo[2]};
    float N[3]={v1[1]*v2[2]-v1[2]*v2[1], -v1[0]*v2[2]+v1[2]*v2[0], v1[0]*v2[1]-v1[1]*v2[0]};

    //normalize the normal vector
    float siirto=sqrt(N[0]*N[0]+N[1]*N[1]+N[2]*N[2]);
    N[0]=N[0]/siirto;
    N[1]=N[1]/siirto;
    N[2]=N[2]/siirto;

    //the coefficient d of a*x+b*y+c*z+d=0 is -N.P for a point P on the plane
    //(the sign makes no difference here because origo is (0, 0, 0), but it
    //would matter for any other point)
    siirto=-(N[0]*origo[0]+N[1]*origo[1]+N[2]*origo[2]);

    //return the coefficients a, b, c, and d of a*x+b*y+c*z+d=0
    tasoYhtalo[4*index]=N[0];
    tasoYhtalo[4*index+1]=N[1];
    tasoYhtalo[4*index+2]=N[2];
    tasoYhtalo[4*index+3]=siirto;
}
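
For completeness, a sketch of how the function can be called for the outermost planes (the array size and index numbering are illustrative, not my actual layout). Note that the argument order decides which way each normal points, so it has to be consistent with the sign tests in the shader:

//example usage for the four outermost planes
float tasoYhtalo[4*4];
calculatePlaneEquations(tasoYhtalo, 0, origo, rightTop, rightBottom); //right
calculatePlaneEquations(tasoYhtalo, 1, origo, leftBottom, leftTop);   //left
calculatePlaneEquations(tasoYhtalo, 2, origo, leftTop, rightTop);     //top
calculatePlaneEquations(tasoYhtalo, 3, origo, rightBottom, leftBottom); //bottom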

The planes we get from calculatePlaneEquations are used in the geometry shader by a function like the one below (there are two similar functions, one for the X coordinate and another for the Y coordinate; this is the one for X):

//these lines are from the main function. I've tried them both in the vertex
//shader and in the geometry shader; this time it's the geometry shader
vec4 verteksi=gl_in[i].gl_Position;
verteksi=modelViewProjection*verteksi;
verteksi.y=-1.0*verteksi.y; //this negation is needed, or the depth picture is upside down
verteksi.x=-1.0*verteksi.x; //I'm not sure whether this negation is correct…

//this is the geometry shader function
void seekHorizontalTile(in float pystyTaso[4*(MAKSIMIRIVI/BLOCK_SIZE+2)], in vec4 verteksi, inout int x) {
    int tila=0;
    do {
        //this is the place where we use the plane equations
        float suunta=pystyTaso[4*x]*verteksi.x+pystyTaso[4*x+1]*verteksi.y+pystyTaso[4*x+2]*verteksi.z+pystyTaso[4*x+3];
        if (tila==0 && x<=tiiliaX) {
            if (suunta>0) {
                x=x+(tiiliaX-x+2)/2;
            } else {
                tila=1;
            }
        } else {
            tila=1;
        }
        if (tila==1 && x>=0) {
            if (suunta<0) {
                x=x-1;
            } else {
                tila=2;
            }
        } else {
            if (tila!=0) {
                tila=2;
            }
        }
    } while (tila<2);
}

So, everything seems to be fine, except that objects are rendered only in the center of the window, not in the upper, lower, left or right areas. The culling boundary is very sharp and always symmetric with the window boundaries. I first figured that there was a bug somewhere in the depth code… But that’s not the case: the problem persists if I remove the occlusion testing. It turns out that there is a scaling problem.

I got the idea to use different view angles for the OpenGL view frustum and for the tiled rendering frustum grid. I found that if I use a 60 degree view angle for the OpenGL view frustum (half-angle atan(1/√3) = 30 degrees, i.e. a Z value of √3 for a unit half-height) and a 98.213210 degree view angle for the tiled rendering frustum grid (half-angle atan(1/(√3/2)) = 49.106605 degrees, i.e. a Z value of √3/2), I get an exact match between the culling boundary and the window boundaries. However, now the occlusion test fails: this is a natural consequence, because the tiles are now stretched outside the area their depth data was measured for. This is a confusing result and I’m hoping I have made some stupid mistake, or that there is some way to fix this problem.
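
One numeric relation here may be a clue (just an observation about the values above, nothing I have verified in the code):

tan(30°) = 1/√3 ≈ 0.57735
tan(49.106605°) = 2/√3 ≈ 1.15470 = 2 · tan(30°)

so the grid angle that makes the boundaries match is exactly the one whose tangent is twice the tangent of the view half-angle. Such a clean factor of two often points to a scale mixup, for example treating the two-unit -1 to 1 NDC range as if it were one unit wide.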

I hope somebody can give me advice or at least give me answers to the following questions:

  1. As far as I know, the vertex shader built-in output gl_Position and the geometry shader built-in input gl_in[index].gl_Position have exactly the same values in all four components x, y, z and w; OpenGL won’t make any changes between the shaders. Is this true?

  2. I suppose that somewhere between the geometry shader and the fragment shader OpenGL does all the conversions:
    - x, y and z are divided by w
    - the z coordinate gets squeezed from the -1 to 1 range into 0 to 1; this also delinearizes the z coordinate (though I have a feeling this is done before the geometry shader…)
    - the z coordinate is negated (the change from a right-handed coordinate system into a left-handed one)
    - the x and y coordinates are changed from the -1 to 1 range into the framebuffer pixel range: from 0 to the number of pixels on the axis

  3. Between a fragment shader and the final framebuffer there are no coordinate conversions. Is this true?

Please help me :dejection:

gl_Position is in clip coordinates. gl_FragCoord is in window coordinates.

Clip coordinates are converted to NDC by dividing X, Y and Z by W.

NDC Z is converted to depth by an affine transformation which maps -1 to the near depth and +1 to the far depth, as specified by glDepthRange() (these default to 0 and 1 respectively). NDC X and Y are converted to window space via the viewport transformation.

If there’s a conversion from right-handed to left-handed coordinates, it’s performed by the projection matrix. All of the conventional projection matrices (those generated by glFrustum, glOrtho, gluPerspective and gluOrtho2D, and the equivalents in GLM) result in eye-space being a right-handed coordinate system.
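
In code form, these conversions amount to roughly the following (a sketch of the fixed-function steps that ignores clipping itself; the parameter names are mine):

struct Vec3 { float x, y, z; };

//vx, vy, vw, vh are the glViewport parameters; n, f the glDepthRange ones
Vec3 clipToWindow(float cx, float cy, float cz, float cw,
                  float vx, float vy, float vw, float vh,
                  float n=0.0f, float f=1.0f) {
    //perspective divide: clip -> NDC, all components end up in [-1, 1]
    float nx=cx/cw, ny=cy/cw, nz=cz/cw;
    Vec3 w;
    //viewport transform: NDC x,y -> window pixels
    w.x=vx+(nx+1.0f)*0.5f*vw;
    w.y=vy+(ny+1.0f)*0.5f*vh;
    //depth range transform: NDC z -> [n, f] depth
    w.z=n+(nz+1.0f)*0.5f*(f-n);
    return w;
}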

Hi GClements,

I think we need to connect these coordinate systems to the reality.

As far as I know, clip coordinates means the same as a vertex coordinate (x, y, z, w) multiplied by a model-view-projection matrix, or simply by a projection matrix. Then it depends on where you use the projection matrix, inside the vertex shader or inside the geometry shader, when gl_Position is in clip coordinates. If you don’t apply a projection matrix at all, then a vertex is considered to be in clip coordinates all the time… I may be completely wrong; that’s only my idea.

gl_FragCoord is in window coordinates.

I agree. So the fragment shader always uses window coordinates. This means that my suggestion

Between a fragment shader and the final framebuffer there are no coordinate conversions.

is true. This also means that the coordinate transformations made by the fixed OpenGL pipeline must happen before the fragment shader, but not necessarily after the geometry shader: there is a possibility that OpenGL expects the projection matrix, for example, to be used inside the vertex shader. Furthermore, since we can do matrix transformations inside a geometry shader, this all makes sense only if we can rotate and move vertices after we have multiplied by the projection matrix, i.e. when we are already in clip coordinates. But can we? Generally matrix multiplication is not commutative, i.e.

A . B != B . A

but it is associative

A . B . C = (A . B) . C = A . (B . C)

so I think we can. I think we can split the model-view-projection matrix so that the vertex shader applies the model-view part and the geometry shader applies the projection part. But does OpenGL require us to do so, or something similar? That’s vital information for me to get my tiled forward rendering engine working.
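
A minimal sketch of that split (the uniform and variable names are illustrative; by associativity the result is the same as multiplying by the combined model-view-projection matrix in one place):

//vertex shader: apply only the model-view matrix, so gl_Position is in eye space
#version 330 core
layout(location=0) in vec4 position;
uniform mat4 modelView;
void main() { gl_Position=modelView*position; }

//fragment of the geometry shader: test the tiles in eye space, project afterwards
uniform mat4 projection;
//...
vec4 verteksi=gl_in[i].gl_Position; //still eye space, the same space as the plane grid
//run seekHorizontalTile and the other tile tests here
gl_Position=projection*verteksi;    //only now move to clip space
EmitVertex();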

It seems the key to success is knowing whether I should use clip space coordinates inside the geometry shader or not…

You can do whatever you like. The final vertex coordinates written to gl_Position by the last shader before the fragment shader are in clip coordinates. How you generate them is entirely up to you. The use of view, model and/or projection matrices is common but not strictly required. Any coordinate systems prior to clip coordinates (e.g. object coordinates or eye coordinates) are entirely up to the programmer.

There was a stupid mistake: when you’re doing the occlusion test, don’t use the model-view-projection matrix, use the model-view matrix. I don’t understand what made me use the model-view-projection matrix…

Now the occlusion test is mainly working, but only mainly :frowning: See the picture below:

[screenshot: UnbrokenHouse]

This is a perfect view with 394 objects. And when we go underground

[screenshot: UnderSide]

only four objects are drawn, as intended. But when I slightly move the viewpoint, it may not be perfect anymore…

[screenshot: BrokenHouse]

Now there are only 390 objects drawn, and as you can see, the missing objects are in front of other objects, so there shouldn’t be any possibility of culling them out. This gets even crazier when you look at the video:

https://vid.me/TqRO

The objects seem to be pondering whether to be or not to be… Although the disappearing is quite rare, it is still very annoying. Does anybody have an idea what could be wrong?

Yep, I got it fixed… Almost…

This time it seems it was the x coordinate that needed negation. But now there are new problems… This is really hard, because you cannot debug shaders (or at least I don’t know of such software; someone might mention the gDEbugger program, but it got stuck when working with my DLL file). Without a debugger the programmer needs to keep everything in his/her head; there’s no room for errors or for guessing about how OpenGL or the mathematics works, and that’s what I’m doing now…

glIntercept with shader debugging activated prints a log file of the uniform values for each draw call. Check your input; beyond that, the most you can do is clean up your code and concentrate very hard on what each line does. Good luck :slight_smile: