I am working on Multiple Render Targets (MRT). I have two RGBA8 color textures attached to my FBO. Everything is fine without MSAA.

But when I activate MSAA for the FBO I get this strange effect (see image):

While the content of the 2nd color attachment (the G-buffer, not shown here) looks as expected, the content of the 1st color attachment (RGBA) looks wrong: as if the triangles were blended more and more with the black background as the MSAA sample count increases (they get darker).

The strange thing is that this happens although blending is disabled:

Code :

glDrawBuffers(2, &Buffers[0]); // Buffers contains GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1
glDisable(GL_BLEND);
glDrawElements(GL_TRIANGLES, ... );

Code :

glColorMaski( 0, true, true, true, false); // alpha writes disabled for the 1st color attachment
glDrawElements(GL_TRIANGLES, ... );

This is the relevant shader code:

Code :

out vec4 Color;  // defines color in 1st color attachment
out vec4 Color1; // defines color in 2nd color attachment

void main(void)
{
    Color = CalculateColor(...does not matter...);
    if (Color.r <= 1.0) Color = vec4(1.0); // For debugging purposes just set the color to white for the 1st color attachment
    Color1 = SetGBuffer(...does not matter...);
    Color1.a = 1.0; // For debugging purposes just set the alpha to one for the 2nd color attachment
}

The names of the shader outs (Color, Color1) are set before linking the program via glBindFragDataLocation().

The shader code is validated and works fine. The FBO is also validated and complete. There are no OpenGL errors reported.

I really need help with this; I have been debugging this issue for so long and I am out of ideas about what the reason could be. =/

Help is really appreciated!
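A toy model of the multisample resolve helps frame the symptom: the resolve averages all samples in a pixel, so if anything (for example GL_SAMPLE_ALPHA_TO_COVERAGE interacting with the masked/zero alpha, or a partial coverage mask) leaves some samples holding the black clear color, the average darkens as the sample count grows. This is only an illustrative sketch of that arithmetic, not a diagnosis; `resolve()` is a hypothetical helper:

```cpp
#include <cassert>
#include <cmath>

// Toy model of a multisample resolve: the resolved color is the average of
// all samples in the pixel. Samples not covered by the fragment keep the
// clear color (black here), so coverage below 100% darkens the result.
float resolve(float fragColor, int coveredSamples, int totalSamples) {
    // covered samples carry fragColor, the rest carry the clear color 0.0
    return (fragColor * coveredSamples) / totalSamples;
}
```

With full coverage the white debug color survives; with half the samples covered it resolves to 0.5, which matches "darker as the sample count increases" if coverage shrinks with more samples.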

Code :

#include "stdafx.h"
#include <stdlib.h>
#include <gl/glut.h>

// 4 control points for our cubic bezier curve
float Points[4][3] = {
    { 10, 10,  0 },
    {  5, 10,  2 },
    { -5,  0,  0 },
    {-10,  5, -2 }
};

// the level of detail of the curve
unsigned int LOD = 20;

void OnKeyPress(unsigned char key, int, int) {
    switch(key) {
    // increase the LOD
    case '+':
        ++LOD;
        break;
    // decrease the LOD
    case '-':
        --LOD;
        // have a minimum LOD value
        if (LOD < 3)
            LOD = 3;
        break;
    default:
        break;
    }
    // ask glut to redraw the screen for us...
    glutPostRedisplay();
}

void OnDraw() {
    // clear the screen & depth buffer
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

    // clear the previous transform
    glLoadIdentity();

    // set the camera position
    gluLookAt(1, 10, 30,  // eye pos
              0,  0,  0,  // aim point
              0,  1,  0); // up direction

    glColor3f(1, 0, 1);

    // we will draw lots of little lines to make our curve
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i != LOD; ++i) {
        // use the parametric time value 0 to 1
        float t = (float)i / (LOD - 1);

        // nice to pre-calculate 1.0f-t because we will need it frequently
        float it = 1.0f - t;

        // calculate blending functions
        float b0 = t*t*t;
        float b1 = 3*t*t*it;
        float b2 = 3*t*it*it;
        float b3 = it*it*it;

        // calculate the x,y and z of the curve point by summing
        // the Control vertices weighted by their respective blending
        // functions
        float x = b0*Points[0][0] + b1*Points[1][0] + b2*Points[2][0] + b3*Points[3][0];
        float y = b0*Points[0][1] + b1*Points[1][1] + b2*Points[2][1] + b3*Points[3][1];
        float z = b0*Points[0][2] + b1*Points[1][2] + b2*Points[2][2] + b3*Points[3][2];

        // specify the point
        glVertex3f(x, y, z);
    }
    glEnd();

    // draw the Control Vertices
    glColor3f(0, 1, 0);
    glPointSize(3);
    glBegin(GL_POINTS);
    for (int i = 0; i != 4; ++i) {
        glVertex3fv(Points[i]);
    }
    glEnd();

    // draw the hull of the curve
    glColor3f(0, 1, 1);
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i != 4; ++i) {
        glVertex3fv(Points[i]);
    }
    glEnd();

    // currently we've been drawing to the back buffer, we need
    // to swap the back buffer with the front one to make the image visible
    glutSwapBuffers();
}

void OnInit() {
    // enable depth testing
    glEnable(GL_DEPTH_TEST);
}

void OnExit() {
}

void OnReshape(int w, int h)
{
    if (h == 0)
        h = 1;

    // set the drawable region of the window
    glViewport(0, 0, w, h);

    // set up the projection matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // just use a perspective projection
    gluPerspective(45, (float)w/h, 0.1, 100);

    // go back to modelview matrix so we can move the objects about
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

int main(int argc, char** argv) {
    // initialise glut
    glutInit(&argc, argv);

    // request a depth buffer, RGBA display mode, and we want double buffering
    glutInitDisplayMode(GLUT_DEPTH | GLUT_RGBA | GLUT_DOUBLE);

    // set the initial window size
    glutInitWindowSize(640, 480);

    // create the window
    glutCreateWindow("Bezier Curve: +/- to Change Level of Detail");

    // set the function to use to draw our scene
    glutDisplayFunc(OnDraw);

    // set the function to handle changes in screen size
    glutReshapeFunc(OnReshape);

    // set the function for the key presses
    glutKeyboardFunc(OnKeyPress);

    // run our custom initialisation
    OnInit();

    // set the function to be called when we exit
    atexit(OnExit);

    // this function runs a while loop to keep the program running
    glutMainLoop();
    return 0;
}
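The blending functions in OnDraw() are the cubic Bernstein basis, with t and (1-t) swapped relative to the usual convention, which simply reverses the control-point order: b0 weights Points[0] at t=1 and b3 weights Points[3] at t=0. They always sum to 1, so the curve interpolates the end control points. A standalone check of those properties:

```cpp
#include <cassert>
#include <cmath>

// Cubic Bernstein weights as used in OnDraw(): b0 multiplies Points[0] at
// t = 1 and b3 multiplies Points[3] at t = 0 (the listing's reversed order).
void bernstein(float t, float b[4]) {
    float it = 1.0f - t;
    b[0] = t * t * t;
    b[1] = 3 * t * t * it;
    b[2] = 3 * t * it * it;
    b[3] = it * it * it;
}

// Evaluate one coordinate of the curve from four control values.
float bezier1D(float t, const float p[4]) {
    float b[4];
    bernstein(t, b);
    return b[0]*p[0] + b[1]*p[1] + b[2]*p[2] + b[3]*p[3];
}
```

The partition-of-unity property (the weights summing to 1) is what makes the curve independent of the coordinate origin.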

So, how can I control the curve with the mouse (left click) instead of pressing + and -? Also, how can I place the points myself after the program is running?
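For left-click editing, the usual GLUT pattern is a glutMouseFunc that picks the control point nearest the click and a glutMotionFunc that drags it. The picking half is plain math; here is a sketch, where screenX/screenY are assumed to hold the already-projected window positions of the four control points (for example obtained with gluProject each frame):

```cpp
#include <cassert>
#include <cmath>

// Return the index of the control point whose projected window position is
// nearest to the mouse click, or -1 if none is within `radius` pixels.
// screenX/screenY are hypothetical arrays holding the window-space positions
// of the 4 control points.
int pickControlPoint(float mx, float my,
                     const float screenX[4], const float screenY[4],
                     float radius) {
    int best = -1;
    float bestD2 = radius * radius; // compare squared distances, no sqrt needed
    for (int i = 0; i < 4; ++i) {
        float dx = screenX[i] - mx;
        float dy = screenY[i] - my;
        float d2 = dx * dx + dy * dy;
        if (d2 <= bestD2) { bestD2 = d2; best = i; }
    }
    return best;
}
```

In the callback you would then update Points[best] and call glutPostRedisplay(). Remember GLUT reports y from the top of the window while gluProject's window coordinates start at the bottom, so flip y before comparing.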

Thanks

I've been scavenging the internet for over a week now for resources on cascaded shadow maps, so my current implementation is kind of jury-rigged together, but it at least produces shadows, so I'm partway there.

My issue, I think, is a math issue: I can't quite get the projection right for rendering into a given shadow split / cascade level.

My camera's rotation doesn't properly shift where the light's CSMs should be focused, and furthermore, when the camera rotates, the entire shadow map tends to squish towards the center.

See i.imgur.com/JRajdIb.gif

I'm fairly certain that my calculation of the split distances is correct. I've debugged it more than several times and triple-checked the math by doing it by hand, getting the same expected results.

Also, if I move an object between the split regions, the shadow does decay in quality, as expected. I can see this firsthand by colorizing the regions that lay within any given cascade of the shadow map.

My scene is simple. It contains only a few basic objects and a single directional light to cast shadows, so there is nothing else to really interfere with my results.

Lastly, I do have my shadow textures set up in a texture array, and sampling from any given one works correctly.

When I tell my directional light to create a shadow, I perform the following (worry not about inefficiencies at this point):

Code :

float lambda = 0.5;    // Lambda value for the split distance calculation
float n = 1.0f;        // Near plane
float f = 100000.0f;   // Far plane
float m = 6;           // 6 split intervals
float Ci[7];           // Split distances stored here
Ci[0] = n;             // Base split = near plane

// 6 levels of shadows
for (int x = 0; x < 6; x++)
{
    // Calculate the split distance
    float cuni = n + (f - n) * ((x + 1) / m);
    float clog = n * powf(f / n, (x + 1) / m);
    float c = lambda * cuni + (1 - lambda) * clog;
    Ci[x + 1] = c;

    QMatrix4x4 cameraModelMatrix = camera->getModelMatrix();
    float frustumHeight = 2.0 * Ci[x + 1] * tanf((90.0f * 0.5 * M_PI) / 180);
    float frustumWidth  = frustumHeight * (camerasize.width() / camerasize.height());

    // Corners of the frustum slice (near face at Ci[0], far face at Ci[x+1])
    QVector3D corners[8];
    corners[0] = QVector3D(-frustumWidth / 2, -frustumHeight / 2, Ci[0]);
    corners[1] = QVector3D( frustumWidth / 2, -frustumHeight / 2, Ci[0]);
    corners[2] = QVector3D( frustumWidth / 2,  frustumHeight / 2, Ci[0]);
    corners[3] = QVector3D(-frustumWidth / 2,  frustumHeight / 2, Ci[0]);
    corners[4] = QVector3D(-frustumWidth / 2, -frustumHeight / 2, Ci[x + 1]);
    corners[5] = QVector3D( frustumWidth / 2, -frustumHeight / 2, Ci[x + 1]);
    corners[6] = QVector3D( frustumWidth / 2,  frustumHeight / 2, Ci[x + 1]);
    corners[7] = QVector3D(-frustumWidth / 2,  frustumHeight / 2, Ci[x + 1]);

    // Transform corner vectors by the camera's view/model matrix
    for (int z = 0; z < 8; z++)
        corners[z] = cameraModelMatrix * corners[z];

    // Calculate bounding box
    QVector3D min( INFINITY,  INFINITY,  INFINITY);
    QVector3D max(-INFINITY, -INFINITY, -INFINITY);
    for (int z = 0; z < 8; z++)
    {
        if (min.x() > corners[z].x()) min.setX(corners[z].x());
        if (min.y() > corners[z].y()) min.setY(corners[z].y());
        if (min.z() > corners[z].z()) min.setZ(corners[z].z());
        if (max.x() < corners[z].x()) max.setX(corners[z].x());
        if (max.y() < corners[z].y()) max.setY(corners[z].y());
        if (max.z() < corners[z].z()) max.setZ(corners[z].z());
    }

    // Create Crop Matrix
    float scaleX  = 2.0f / (max.x() - min.x());
    float scaleY  = 2.0f / (max.y() - min.y());
    float scaleZ  = 1.0f / (max.z() - min.z());
    float offsetX = -0.5f * (max.x() + min.x()) * scaleX;
    float offsetY = -0.5f * (max.y() + min.y()) * scaleY;
    float offsetZ = -min.z() * scaleZ;
    QMatrix4x4 crop(scaleX, 0.0f,   0.0f,   offsetX,
                    0.0f,   scaleY, 0.0f,   offsetY,
                    0.0f,   0.0f,   scaleZ, offsetZ,
                    0.0f,   0.0f,   0.0f,   1.0f);

    // QMatrix4x4::ortho() is a member function that multiplies in place,
    // not a static factory, so build the projection in two steps
    QMatrix4x4 projection;
    projection.ortho(-1, 1, -1, 1, -1, 1);
    crop = projection * crop;

    /*...SEND VALUES TO SHADER...*/
    /*..RENDER SCENE INTO THIS SHADOW TEXTURE...*/
}
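For reference, the split-distance half of this is self-contained and easy to sanity-check in isolation: the "practical split scheme" blends the uniform and logarithmic split distances exactly as the loop above does (with lambda = 0.5, the order of the two weights doesn't matter). Extracted as a standalone function:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Practical split scheme for CSM: blend between uniform and logarithmic
// split distances with a lambda in [0,1], matching the loop's convention
// of weighting the uniform term by lambda.
std::vector<float> splitDistances(float n, float f, int m, float lambda) {
    std::vector<float> c(m + 1);
    c[0] = n; // base split = near plane
    for (int i = 1; i <= m; ++i) {
        float s = (float)i / m;
        float cuni = n + (f - n) * s;
        float clog = n * std::pow(f / n, s);
        c[i] = lambda * cuni + (1.0f - lambda) * clog;
    }
    return c;
}
```

One thing the split math cannot fix: the crop matrix bounding box is usually taken in light space (after multiplying the slice corners by the light's view matrix), not in world or camera space, so that min/max are axis-aligned from the light's point of view.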

In my fragment shader, to determine my light space position of a fragment, I do:

Code :

vec4 LightSpacePos = LightCrop[i] * LightViewMatrix * WorldPosition;

And the rest really isn't needed, as the shadows work, and so does displaying the regions which fall under each texture.

Although I understand the underlying basics of how this should work, the specific implementation of it is what I'm a little bit stumped on. I've seen the NVidia slides on it, the GPU Gems article, as well as over a dozen other websites.

Can anyone explain to me how I should be correctly determining the minimum and maximum values required to properly create the crop matrix?

I will gladly provide any further information if anyone wants to help me.

I managed to create some nice "reflection and refraction" textures of my scene in order to achieve a "realistic" water effect.

I applied the following steps:

1) creating FBOs for both reflection and refraction

2) clip planes (for the reflection pass, keeping the part of the scene above the water; for the refraction pass, keeping the part under it). I basically verified that the textures are fine (indeed, they are rendered fine into my FBOs)

3) using projective texture mapping, I tried to apply my textures to my "plane", which is the "water".

However, my textures are not aligned properly (they are not in sync with the actual scene). If you can help me get this right I would be really thankful.

This is what I get: https://www.youtube.com/watch?v=dZqh7CBfE-c (the top-left square is the reflection and the top-right square is the refraction); I'm using only the reflection for now.

As you can see, I used only the reflection image, but the refraction image behaves the same way (it is not inverted, since it is not a reflection, but it doesn't fit my ground either).

Here is my shader:

Code :

#version 330 core

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoord;
layout (location = 2) in vec3 normal;

out vec2 texCoord0;
out vec4 clipSpace;
out vec3 normal0;
out vec3 worldPos0;

uniform mat4 view;
uniform mat4 proj;
uniform mat4 trans;

void main()
{
    gl_Position = proj * view * trans * vec4(position, 1.0);
    clipSpace   = gl_Position;
    worldPos0   = (trans * vec4(position, 1.0)).xyz;
    texCoord0   = texCoord;
    normal0     = normal;
}

Code :

#version 330 core

in vec3 normal0;
in vec2 texCoord0;
in vec4 clipSpace;
in vec3 worldPos0;

out vec4 outColor;

uniform sampler2D reflectionTexture;
uniform sampler2D refractionTexture;

void main()
{
    vec3 ndc = (clipSpace.xyz / clipSpace.w) / 2.0 + 0.5;
    vec2 reflectTexCoords = vec2(ndc.x, -ndc.y);
    vec2 refractTexCoords = vec2(ndc.x,  ndc.y);
    // texture2D() is removed in the core profile; use texture()
    vec4 reflectColor    = texture(reflectionTexture, reflectTexCoords);
    vec4 refractionColor = texture(refractionTexture, refractTexCoords);
    outColor = reflectColor;
}
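The clip-space-to-texture mapping in the fragment shader is worth checking on paper: after the perspective divide, NDC is in [-1,1], and ndc * 0.5 + 0.5 maps it to [0,1], so a point at the center of the screen must land at (0.5, 0.5). The same arithmetic in C++, as a hypothetical helper mirroring the shader:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Mirror of the shader's projective texture mapping: clip space ->
// perspective divide -> [0,1] texture coordinates.
Vec2 projectiveTexCoord(float cx, float cy, float cw) {
    Vec2 uv;
    uv.x = (cx / cw) * 0.5f + 0.5f;
    uv.y = (cy / cw) * 0.5f + 0.5f;
    return uv;
}
```

Note that the shader's reflectTexCoords uses -ndc.y, which only lands back inside [0,1] if the texture's wrap mode repeats; 1.0 - ndc.y is the wrap-independent way to flip.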

Code :

glEnable(GL_CLIP_DISTANCE0);
// Render Scene
/// REFLECTION rendering
waterFBO->bindReflectionFrameBuffer();
glUniform4f(planeCoordinates, 0.0f, 1.0f, 0.0f, -150.0f);
float distance = 2 * 150.0f;
view->setPosition(glm::vec3(view->getPosition().x, view->getPosition().y - distance, view->getPosition().z));
view->invertPitch();
RenderPass(shaderProgram, shadowShaderProgram, mesh1, mesh2, mesh3, uniView, uniProj, uniTrans, view->getProjMatrix(), view->getViewMatrix(), lightSpaceMatrix, sl, skybox, cubeMapSampler, cameraPos, uniTime, time, uniWaterEffect);
waterFBO->unbindCurrentFrameBuffer();
view->setPosition(glm::vec3(view->getPosition().x, view->getPosition().y + distance, view->getPosition().z));
view->invertPitch();
/// REFRACTION rendering
waterFBO->bindRefractionFrameBuffer();
glUniform4f(planeCoordinates, 0.0f, -1.0f, 0.0f, 150.0f);
RenderPass(shaderProgram, shadowShaderProgram, mesh1, mesh2, mesh3, uniView, uniProj, uniTrans, ProjectionMatrix, ViewMatrix, lightSpaceMatrix, sl, skybox, cubeMapSampler, cameraPos, uniTime, time, uniWaterEffect);
waterFBO->unbindCurrentFrameBuffer();
glDisable(GL_CLIP_DISTANCE0);
/// Normal Rendering
glUniform4f(planeCoordinates, 0.0f, -1.0f, 0.0f, 100000.0f);
/*ShadowMapPass(shaderProgram, shadowShaderProgram, mesh1, mesh2, mesh3, uniShadowView, uniShadowProj, uniShadowTrans, sl, uniTime, time, uniWaterEffect);*/
RenderPass(shaderProgram, shadowShaderProgram, mesh1, mesh2, mesh3, uniView, uniProj, uniTrans, ProjectionMatrix, ViewMatrix, lightSpaceMatrix, sl, skybox, cubeMapSampler, cameraPos, uniTime, time, uniWaterEffect);
glUseProgram(waterProgram);
glUniform1i(waterReflSampler, 2);
glUniform1i(waterRefrSampler, 3);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, waterFBO->getReflectionTexture());
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, waterFBO->getRefractionTexture());
RenderWater(waterTrans, waterProj, waterView, ProjectionMatrix, ViewMatrix, uniTime, uniWaterEffect, time);
glUseProgram(shaderProgram);

Thanks for your time!

I generated a terrain and I want to texture it.

My vertex shader looks like this:

Code :

#version 330 core

layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
layout (location = 2) in vec2 texCoord;

out vec2 TexCoord;
out vec3 ourColor;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(position, 1.0f);
    ourColor = color;
    TexCoord = vec2(texCoord.x, 1.0 - texCoord.y);
}

My fragment shader looks like this:

Code :

#version 330 core

out vec4 color;

uniform sampler2D ourTexture1;
uniform sampler2D ourTexture2;

in vec2 TexCoord;

void main()
{
    color = texture(ourTexture1, TexCoord);
}

I have all the terrain's vertices in a vector.

I don't know how to store the texture coordinates. I guess in another vector, but then I think there will be an issue with glVertexAttribPointer.

I mean, the vertices vector contains no texture coordinates, so how do I call glVertexAttribPointer when I just want to render the terrain? And when I just want to render the texture?

My code looks like this in the case where vertices, colors and texture coordinates are in the same vector and VBO.

Code :

glVertexAttribPointer(0, 3, GL_FLOAT, GL_TRUE, 8 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
// Color attribute
glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE, 8 * sizeof(GLfloat), (GLvoid*)(6 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);
// TexCoord attribute
glVertexAttribPointer(2, 2, GL_FLOAT, GL_TRUE, vertices_terrain.size() * sizeof(GLfloat), (GLvoid*)(6 * sizeof(GLfloat)));
glEnableVertexAttribArray(2);
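With one interleaved position(3) + color(3) + texcoord(2) layout, the stride is the size of one whole vertex and each attribute's offset is where it starts inside that vertex; note that the third call above passes vertices_terrain.size() * sizeof(GLfloat) as the stride, which is the size of the whole buffer rather than of one vertex. The offset arithmetic can be pinned down with a struct (a sketch of one conventional layout; the attribute order is an assumption and must match the shader's layout qualifiers):

```cpp
#include <cassert>
#include <cstddef>

// One interleaved vertex: position, color, texcoord, 8 floats total,
// matching the 8 * sizeof(GLfloat) stride used for attributes 0 and 1.
struct TerrainVertex {
    float position[3]; // offset 0
    float color[3];    // offset 3 * sizeof(float)
    float texcoord[2]; // offset 6 * sizeof(float)
};
```

So the texcoord attribute would be set up as glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(TerrainVertex), (GLvoid*)offsetof(TerrainVertex, texcoord)). Separate (non-interleaved) VBOs also work: bind each VBO in turn and call glVertexAttribPointer with stride 0, which is the tightly-packed default.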

In summary: I want to separate the vertices from the colors and texture coordinates.

Because of my shaders and my architecture, position, color and texture coordinates will not all be present at every draw call. Am I right?

If something is unclear, tell me!

Thanks a lot

I get a strange problem when committing a sparse texture's mipmap tail. All is fine while level < NUM_SPARSE_LEVELS_ARB; however, once I try to commit the tail, everything crashes and burns and I get an access violation (nvoglv32).

According to the spec, glTexturePageCommitmentEXT with level == NUM_SPARSE_LEVELS_ARB is legal. The offset is 0, and the size is the actual level size (size >> level).

nVidia GTX 980 with latest drivers, 4.5 profile.

Assistance welcome!

Thank you

The following program creates a subwindow in the top-left corner of the main window. If you left-click on that area, it is supposed to toggle the size of the subwindow.

Why doesn't line 46,

glutReshapeWindow(500, swindowheight);

reshape my subwindow?

Code :

// compile as gcc subwindow.c -lm -lglut -lGL
#include <stdio.h>
#include <GL/glut.h>

int WindowWidth = 750, WindowHeight = 500, iwindow[2];

void init(){
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glClearColor(0.75, 0.75, 0.75, 1.0);
    glOrtho(0, WindowWidth, 0, WindowHeight, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glBegin(GL_QUADS);
        glColor3f(.75, .75, .75);
        glVertex2i(0, 0);
        glVertex2i(WindowWidth, 0);
        glVertex2i(WindowWidth, WindowHeight);
        glVertex2i(0, WindowHeight);
    glEnd();
    glutSwapBuffers();
}

void display(){ // display main window
    // not interested in main window
}

void subwindow(int x, int y){ // display subwindow on command; x, y not used
    static int condition = 0, swindowheight;
    if(condition == 1){ // toggle between condition 1 & condition 0
        condition = 0;
        swindowheight = 100;
    }
    else{
        condition = 1;
        swindowheight = 150;
    }
    printf("condition = %d; swindowheight = %d\n", condition, swindowheight);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glutReshapeWindow(500, swindowheight);
    glViewport(0, 0, (GLsizei) 500, (GLsizei) swindowheight);
    glOrtho(0, 500, 0, swindowheight, -1.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glBegin(GL_QUADS);
        glColor3f(1.0, 0.0, 0.0);
        glVertex2i(10, swindowheight - 10);
        glVertex2i(490, swindowheight - 10);
        glVertex2i(490, swindowheight - 40);
        glVertex2i(10, swindowheight - 40);
        glColor3f(0.0, 1.0, 0.0);
        glVertex2i(10, swindowheight - 50);
        glVertex2i(490, swindowheight - 50);
        glVertex2i(490, swindowheight - 90);
        glVertex2i(10, swindowheight - 90);
    glEnd();
    glutSwapBuffers();
}

void mouse(int btn, int state, int x, int y){
    if(btn == GLUT_LEFT_BUTTON && state == GLUT_DOWN){ } // not used
    else if(btn == GLUT_LEFT_BUTTON && state == GLUT_UP){
        subwindow(x, y);
    }
}

int main(int argc, char* argv[]){
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(WindowWidth, WindowHeight);
    glutInitWindowPosition(100, 100);
    iwindow[0] = glutCreateWindow("Resizing Subwindow");
    init();
    iwindow[1] = glutCreateSubWindow(iwindow[0], 10, 10, 500, 100);
    glutMouseFunc(mouse);
    // glutReshapeFunc(reshape);
    glutDisplayFunc(display);
    glutMainLoop();
}

Incidentally, line 13 should give me a gray screen without recourse to lines 17-23. However, without the GL_QUADS, I get a black screen. Why?

I'm clearing depth and stencil using glClearNamedFramebufferfi:

Code :

// f is the depth clear value (1.0), s is the stencil clear value (0x00);
// fbo is a valid, completeness-checked framebuffer
if( depthBufferFormat == GpuApi::ETF_Depth24Stencil8 )
{
    glClearNamedFramebufferfi( fbo, GL_DEPTH_STENCIL, f, s );
}

What I'm getting is: GL_INVALID_VALUE error generated. Invalid draw buffer.

The documentation (https://www.opengl.org/sdk/docs/man/...arBuffer.xhtml) doesn't list any drawbuffer argument for this function. ;) What makes things more interesting is the signature of the function:

Code :

typedef void (GLAPIENTRY * PFNGLCLEARNAMEDFRAMEBUFFERFIPROC) (GLuint framebuffer, GLenum buffer, GLfloat depth, GLint stencil);

If I change the signature of the function to pass drawbuffer between buffer and depth:

Code :

typedef void (GLAPIENTRY * PFNGLCLEARNAMEDFRAMEBUFFERFIPROC) (GLuint framebuffer, GLenum buffer,GLint drawbuffer, GLfloat depth, GLint stencil);

suddenly clearing starts working perfectly fine, without any error or warning messages. It seems that either the documentation is wrong (and all the WGL stuff follows that wrong doc), or I'm missing something.

(Clearing depth and stencil using separate functions works fine.)

Besides the clipping issue, the shadow is correct. The clipping artifact makes the shadow disappear with a hard cut when I start pointing the viewport down; I added some NDC math that removes the clipping when looking upwards, which may give some hints.

If I change the shaders to work in world space, the point-light shadow works fine. I compute the ShadowView with the camera inverse (just like for spot lights, etc.).

Any input is greatly appreciated!

The depth pass is unchanged between world and camera space, and is not included in the samples below.

From C++ (the last multiplication is active only when I test in camera space; world space works fine):

Code :

// This code fills the glsl uniform ShadowPointView[index]
for (int sv = 0; sv < 6; ++sv) {
shadowView[sv] = (ShadowBiasMatrix * pointProjection * view[sv]) * glm::inverse(m_camera->view());
}

"fs_in.vs_coords" are in camera space, pointViewPosition is multiplication by camera view (just like spot lights)

Code :

vec3 lightDirection = Light[0].pointViewPosition - fs_in.vs_coords;
float shadow = CalcShadowFactor(lightDirection);

Just to demonstrate my comment above: when I test in world space, the multiplication by camera->view is removed.

Code :

for (auto & i : m_pointLights) {
    m_shaderModel->setUniform(i->pointViewPositionUniform,
                              glm::vec3(m_camera->view() * glm::vec4(i->position, 1.0f)));
}

OK, so I don't think the code posted above has any errors; in CalcShadowFactor() I think I have missed some NDC conversion or similar...

Code :

float CalcShadowFactor(vec3 LightDirection)
{
    vec3 lightDirectionShadow = LightDirection;
    float axis[6];
    // Adding this line solved the shadow clipping when looking upwards in the scene
    lightDirectionShadow = vec3(0.5) * lightDirectionShadow.xyz + vec3(0.5);
    axis[0] = -lightDirectionShadow.x;
    axis[1] =  lightDirectionShadow.x;
    axis[2] = -lightDirectionShadow.y;
    axis[3] =  lightDirectionShadow.y;
    axis[4] = -lightDirectionShadow.z;
    axis[5] =  lightDirectionShadow.z;
    int maxAxisID = 0;
    for(int i = 1; i < 6; i++) {
        if(axis[i] > axis[maxAxisID]) {
            maxAxisID = i;
        }
    }
    vec4 shadowCoord = ShadowPointView[maxAxisID] * vec4(fs_in.ws_coords, 1);
    shadowCoord.xyz /= shadowCoord.w;
    shadowCoord.w = shadowCoord.z; // - shadowPointBias;
    shadowCoord.z = float(maxAxisID);
    float shadow = shadow2DArray(ShadowMap, shadowCoord).x;
    return shadow;
}
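For comparison, the standard major-axis test for picking a cube-map face works on the raw (unbiased) direction; the * 0.5 + 0.5 bias applied above changes the magnitudes and signs being compared, so it can change which face wins. Here is the unbiased selection as a standalone function, using GL's conventional face order (+X, -X, +Y, -Y, +Z, -Z, i.e. GL_TEXTURE_CUBE_MAP_POSITIVE_X + i); note the axis[] ordering in the shader above (-X, +X, -Y, +Y, -Z, +Z) differs, so the array-layer indexing must match whichever order the shadow array was filled in:

```cpp
#include <cassert>
#include <cmath>

// Pick the cube-map face a direction vector points through, using the
// standard largest-magnitude-component test. Face order follows
// GL_TEXTURE_CUBE_MAP_POSITIVE_X + i: +X, -X, +Y, -Y, +Z, -Z.
int cubeFace(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x >= 0.0f ? 0 : 1;
    if (ay >= az)             return y >= 0.0f ? 2 : 3;
    return                           z >= 0.0f ? 4 : 5;
}
```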

I attached a file showing a teapot shadow starting to clip, for illustration.

Fredrick

I'm in the last year of my Master's degree and I'm working on a study project. I have to build a forest simulation, with weather, forest fires, species cohabitation, reproduction, etc.

I'm loading trees into vectors from .obj files (xFrog trees, if you want to know).

These files are quite heavy, and I don't copy the vector each time I want a new tree; I thought about translation instead.

Do you have any ideas how I can translate an entire VBO?
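Normally you don't translate the VBO at all: the vertex data stays put, and a per-tree model (world) matrix moves it in the vertex shader, so one tree mesh can be drawn at thousands of positions (with instancing, the translation becomes a per-instance attribute). The matrix arithmetic is just adding the offset; a minimal sketch with hand-rolled column-major matrices, which is the layout glUniformMatrix4fv expects:

```cpp
#include <cassert>
#include <cmath>

// Column-major 4x4 translation matrix, laid out the way glUniformMatrix4fv
// expects (translation in elements 12..14).
void makeTranslation(float m[16], float tx, float ty, float tz) {
    for (int i = 0; i < 16; ++i) m[i] = (i % 5 == 0) ? 1.0f : 0.0f; // identity
    m[12] = tx; m[13] = ty; m[14] = tz;
}

// m * (x, y, z, 1): what the vertex shader's `model * vec4(pos, 1)` does.
void transformPoint(const float m[16], const float p[3], float out[3]) {
    for (int r = 0; r < 3; ++r)
        out[r] = m[r] * p[0] + m[4 + r] * p[1] + m[8 + r] * p[2] + m[12 + r];
}
```

As for not re-rendering trees: everything visible has to be drawn every frame (the depth buffer handles overlap); what you can skip is trees outside the view frustum, which is what frustum culling is for.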

I have one VAO containing a couple of VBOs representing trees.

One more thing: how can I avoid re-rendering trees that were already rendered in previous frames?

Thank you

Let's take two animations, walk and attack. It's worth mentioning that my animations' matrices are pre-cached and pre-calculated, and just loaded at runtime based on something like this:

Code :

RunningTime += elapsedTime;
double TimeInTicks = RunningTime * TICKS_PER_SECOND;
double AnimationTime = fmod(TimeInTicks, (double)(size - 1));
index = (int)AnimationTime;

Now if I decide to load a new animation into the shader, new matrices are loaded in, and the transition (expectedly) is not that smooth.

Here is what I was thinking:

- Use Animation 1

- Before swapping to Animation 2, lerp(Anim1, Anim2)

Where Lerp linearly interpolates between the last N matrices of Anim1 and the first N animation matrices of Anim2. GLM has a nice function for that; it's something like:

Code :

T = mix(T1,T2,mix_factor).

- Upload those N lerped matrices to the shader for vertex skinning

- Continue with Anim2 (from the start, frame 0? or offset Anim2's matrices by N before uploading?)

Simple process schema:

Code :

Play Anim1 -> Interpolate last N matrices from A1 with first N matrices from A2 -> Play Anim2
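The cross-fade itself is straightforward: a componentwise mix of the two palettes, which is what glm::mix does on matrices, is a common approximation, though interpolating decomposed translation / rotation (quaternion slerp) / scale per joint is more robust when the two poses differ a lot. A minimal sketch of the componentwise blend with a hand-rolled matrix:

```cpp
#include <cassert>
#include <cmath>

// Componentwise blend of two 4x4 bone matrices, the same operation as
// glm::mix(T1, T2, factor) on matrices. factor goes 0 -> 1 over the
// transition window of N frames.
void mixMatrices(const float a[16], const float b[16], float factor,
                 float out[16]) {
    for (int i = 0; i < 16; ++i)
        out[i] = a[i] * (1.0f - factor) + b[i] * factor;
}
```

On the "from start or offset" question: since the blend window already consumes the first N frames of Anim2, continuing Anim2 from frame N rather than frame 0 avoids playing those frames twice.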

Today I was just wondering: how accurate is OpenGL rendering?

Does anyone know of a project where people have compared a rendered 3D scene to a photo taken of the real thing, to measure whether the positioning is correct?

Thanks,

uwi2k2

I'm pretty new to this forum, so please don't judge me for asking (probably) a similar question; even so, my searches turned up nothing relevant.

I have a major issue with shadow mapping for a directional light (spotlight). It actually renders, but I followed a tutorial, the code seems pretty similar, and I couldn't find any explicit difference or understand what exactly is wrong.

Picture with the actual behavior:

I followed a tutorial which says:

1) Render from the point of view of the light (into a depth texture)

2) Render normally (using that depth texture)

Code:

Code :

void ShadowMapPass(GLuint shaderProgram, GLuint shadowShaderProgram,
                   Mesh& mesh1, Mesh& mesh2, Mesh& mesh3,
                   GLint uniShadowView, GLint uniShadowProj, GLint uniShadowTrans,
                   SpotLight sl[4])
{
    glUseProgram(shadowShaderProgram);
    shadowMapFBO.BindForWriting();
    glClear(GL_DEPTH_BUFFER_BIT);

    glm::mat4 lightViewMatrix = glm::lookAt(
        sl[0].Position,
        sl[0].Direction,
        glm::vec3(0.0f, 1.0f, 0.0f)
    );
    glm::mat4 lightProjMatrix = glm::perspective(
        45.0f,
        static_cast<float>(WINDOW_WIDTH) / static_cast<float>(WINDOW_HEIGHT),
        0.1f,
        1000.0f
    );
    glUniformMatrix4fv(uniShadowProj, 1, GL_FALSE, value_ptr(lightProjMatrix));
    glUniformMatrix4fv(uniShadowView, 1, GL_FALSE, value_ptr(lightViewMatrix));

    glUniformMatrix4fv(uniShadowTrans, 1, GL_FALSE, value_ptr(
        glm::translate(glm::vec3(0.0f, 10.0f, 0.0f)) *
        glm::rotate(glm::radians(0.0f), glm::vec3(0.0f, 1.0f, 0.0f)) *
        glm::scale(glm::vec3(10.0f, 10.0f, 10.0f))
    ));
    mesh1.Render(shaderProgram, GL_TEXTURE0,
                 glm::rotate(glm::radians(0.0f), glm::vec3(0.0f, 1.0f, 0.0f)));

    glUniformMatrix4fv(uniShadowTrans, 1, GL_FALSE, value_ptr(
        glm::rotate(glm::radians(m_scale), glm::vec3(0.0f, 1.0f, 0.0f)) *
        glm::translate(glm::vec3(100.0f, 20.0f, 0.0f)) *
        glm::scale(glm::vec3(10.0f, 10.0f, 10.0f))
    ));
    mesh2.Render(shaderProgram, GL_TEXTURE0,
                 glm::rotate(glm::radians(0.0f), glm::vec3(0.0f, 1.0f, 0.0f)));

    glUniformMatrix4fv(uniShadowTrans, 1, GL_FALSE, value_ptr(
        glm::rotate(glm::radians(m_scale), glm::vec3(0.0f, 1.0f, 0.0f)) *
        glm::translate(glm::vec3(0.0f, 15.0f, 100.0f)) *
        glm::scale(glm::vec3(15.0f, 15.0f, 15.0f))
    ));
    mesh3.Render(shaderProgram, GL_TEXTURE0,
                 glm::rotate(glm::radians(0.0f), glm::vec3(0.0f, 1.0f, 0.0f)));

    glm::mat4 trans;
    glUniformMatrix4fv(uniShadowTrans, 1, GL_FALSE, value_ptr(trans));
    DrawPlane();

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void RenderPass(GLuint shaderProgram, GLuint shadowShaderProgram,
                Mesh& mesh1, Mesh& mesh2, Mesh& mesh3,
                GLint uniView, GLint uniProj, GLint uniTrans,
                glm::mat4& ProjectionMatrix, glm::mat4& ViewMatrix,
                GLint lightSpaceMatrix, SpotLight sl[4], Skybox *skybox,
                GLint cubeSampler, GLint cameraPos)
{
    glUseProgram(shaderProgram);
    shadowMapFBO.BindForReading(GL_TEXTURE1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_CUBE_MAP, skybox->m_pCubemapTex->m_textureObj);
    glUniformMatrix4fv(uniProj, 1, GL_FALSE, value_ptr(ProjectionMatrix));
    glUniformMatrix4fv(uniView, 1, GL_FALSE, value_ptr(ViewMatrix));
    /*glUniform3f(cameraPos, view->getPosition().x, view->getPosition().y, view->getPosition().z);*/
    glUniform1i(cubeSampler, 0);

    glm::mat4 lightViewMatrix = glm::lookAt(
        sl[0].Position,
        sl[0].Direction,
        glm::vec3(0.0f, 1.0f, 0.0f)
    );
    glm::mat4 lightProjMatrix = glm::perspective(
        45.0f,
        static_cast<float>(WINDOW_WIDTH) / static_cast<float>(WINDOW_HEIGHT),
        0.1f,
        1000.0f
    );
    glm::mat4 lightTranslateMatrix = glm::translate(glm::vec3(1.0f, 1.0f, 1.0f));
    glUniformMatrix4fv(lightSpaceMatrix, 1, GL_FALSE,
                       value_ptr(lightProjMatrix * lightViewMatrix * lightTranslateMatrix));

    glUniformMatrix4fv(uniTrans, 1, GL_FALSE, value_ptr(
        glm::rotate(glm::radians(0.0f), glm::vec3(0.0f, 1.0f, 0.0f))
    ));
    planeTexture->Bind(GL_TEXTURE0);
    DrawPlane();

    glUniformMatrix4fv(uniTrans, 1, GL_FALSE, value_ptr(
        glm::translate(glm::vec3(0.0f, 10.0f, 0.0f)) *
        glm::rotate(glm::radians(0.0f), glm::vec3(0.0f, 1.0f, 0.0f)) *
        glm::scale(glm::vec3(10.0f, 10.0f, 10.0f))
    ));
    mesh1.Render(shaderProgram, GL_TEXTURE0,
                 glm::rotate(glm::radians(0.0f), glm::vec3(0.0f, 1.0f, 0.0f)));

    glUniformMatrix4fv(uniTrans, 1, GL_FALSE, value_ptr(
        glm::rotate(glm::radians(m_scale), glm::vec3(0.0f, 1.0f, 0.0f)) *
        glm::translate(glm::vec3(100.0f, 20.0f, 0.0f)) *
        glm::scale(glm::vec3(10.0f, 10.0f, 10.0f))
    ));
    mesh2.Render(shaderProgram, GL_TEXTURE0,
                 glm::rotate(glm::radians(0.0f), glm::vec3(0.0f, 1.0f, 0.0f)));

    glUniformMatrix4fv(uniTrans, 1, GL_FALSE, value_ptr(
        glm::rotate(glm::radians(m_scale), glm::vec3(0.0f, 1.0f, 0.0f)) *
        glm::translate(glm::vec3(0.0f, 15.0f, 100.0f)) *
        glm::scale(glm::vec3(15.0f, 15.0f, 15.0f))
    ));
    mesh3.Render(shaderProgram, GL_TEXTURE0,
                 glm::rotate(glm::radians(0.0f), glm::vec3(0.0f, 1.0f, 0.0f)));
}

If you see some obvious mistakes or issues, I would be very grateful to know. Thanks.
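One thing worth double-checking in a setup like this: the lightSpaceMatrix built in RenderPass produces clip-space coordinates, and after the perspective divide they lie in [-1,1], while the depth texture is sampled in [0,1]. The conventional fix is a bias matrix (or a * 0.5 + 0.5 in the shader). A standalone check of that mapping, with a hypothetical helper name:

```cpp
#include <cassert>
#include <cmath>

// NDC [-1,1] -> texture space [0,1] for shadow-map lookups; equivalent to
// multiplying by the usual 0.5-scale / 0.5-offset bias matrix after the
// perspective divide.
void ndcToShadowUV(const float ndc[3], float uv[3]) {
    for (int i = 0; i < 3; ++i)
        uv[i] = ndc[i] * 0.5f + 0.5f;
}
```

If the shadow shader already does this, the next usual suspects are the lookAt center (glm::lookAt expects a target point, Position + Direction, not the direction vector itself) and the bias against shadow acne.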

Code :

M_Translation.Translate(light_world_space.x, light_world_space.y, light_world_space.z);
M_ModelMatrix = M_Translation * M_Rotation;
M_LightMatrix = M_Viewing * M_ModelMatrix;
light_eye_space = M_LightMatrix * light_world_space;

Does this seem correct? I do this once per rendering frame. After this I draw a box object in the scene, and from the camera's initial point of view I can see that only one side of the box (the correct side) is lit, and as the box rotates, so does the lit side. In other words, the light is rotating with the object.

Inside the object's transformation function is this code to position and rotate the object. Into it I pass the current light position in eye space, as well as the current view matrix.

Code :

void Billboard::TransformTest(mat4 viewMatrix, vec3 light_position)
{
    mModelMatrix = mTranslationMatrix * mRotationMatrix;
    mModelViewMatrix = viewMatrix * mModelMatrix;
    mModelViewMatrix.GetMatrix(mv_Matrix);

    // Now we calculate the modelview-projection matrix
    mMVPMatrix = mProjectionMatrix * mModelViewMatrix;
    mMVPMatrix.GetMatrix(mvp_Matrix);

    // Finally we calculate the normal matrix
    mNormalMatrix = mModelViewMatrix;
    mNormalMatrix.InvertM();
    mNormalMatrix.TransposeM();
    mNormalMatrix.GetNormalMatrix(normal_Matrix);

    glUniform3f(0, 1.0f, 1.0f, 1.0f); // Diffuse reflectivity
    glUniform3f(1, 1.0f, 1.0f, 1.0f); // Light intensity
    glUniform4f(2, light_position.x, light_position.y, light_position.z, 1.0f);
    glUniformMatrix4fv(3, 1, GL_FALSE, mv_Matrix);
    glUniformMatrix4fv(4, 1, GL_FALSE, mvp_Matrix);
    glUniformMatrix3fv(5, 1, GL_FALSE, normal_Matrix);
    glUniform1i(6, 0);
}

Any idea what I'm doing wrong? By the way, as this is currently experimental code, all the uniform locations have been numbered manually; they've all been checked and are correct.

EDIT: OK, changing 'mNormalMatrix = viewMatrix;' to 'mNormalMatrix = mModelViewMatrix;' fixed the light rotating with the box. Now, when the camera is static at the initial viewing location, it actually looks correct. Moving and rotating the camera still causes the light to move, however. I wonder what I'm doing wrong. :(
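For the remaining camera-dependent drift, the invariant worth checking is that light_eye = V * light_world is rebuilt with the current frame's view matrix (and that light_world itself is never overwritten with an eye-space value). For a translation-only view matrix the arithmetic reduces to a subtraction, which makes the dependency on the camera position explicit; a sketch, with worldToEye as a hypothetical helper:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// A world-fixed light's eye-space position depends on the current view
// matrix and must be recomputed whenever the camera moves. For a camera at
// cameraPos looking down -Z with no rotation, V * p is just p - cameraPos.
Vec3 worldToEye(Vec3 lightWorld, Vec3 cameraPos) {
    return Vec3{lightWorld.x - cameraPos.x,
                lightWorld.y - cameraPos.y,
                lightWorld.z - cameraPos.z};
}
```

If the camera moves but the uniform still holds the value computed from last frame's view matrix (or the world position is transformed twice), the light appears to follow the camera.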