I have a 3D world surrounded by a skybox. Due to the nature of the geometry, many floating-point coordinates were rounded off during creation. This leads to undesired small "gaps" appearing in my 3D world as I move around, letting some pixels from the skybox show through. The geometry issue is something I cannot fix right now, so I started to look for other solutions. Without the skybox I don't even notice these gaps, and that is exactly what I want! The stencil buffer seems to be out of the question, since it operates at the pixel level, so it probably won't help me prevent the skybox shining through the pixels that make up the gaps, right?

Does anyone have an idea how to approach this issue without incurring too many drawbacks, e.g. in performance?

Kind regards

Saski

I have a procedurally generated height cubemap, which I colourize to create a diffuse map.

Now I'm creating a normal map based on the code from "Mathematics for 3D Game Programming and Computer Graphics, Third Edition", which appears to work really well.

The problem I've had to date is with tangent-space normal mapping: I just cannot find a way to generate the normal map and to generate the tangent and binormal on my vertices without graphical artifacts.

http://gamedev.stackexchange.com/que...ping-a-cubemap

http://stackoverflow.com/questions/3...ts-my-tangents

If anyone has a solution for that I'd love to know it.

I've basically given up on this for now, though. What I'm trying to do instead is generate an object-space normal map for my cubemapped sphere. This seems like it would be easier to generate and fool-proof to use, but so far I'm having trouble.

My normal map looks like this:

So I think there's clearly something wrong in its generation. Here's the code that generates it.

Code :

float scale = 15.0f;
std::deque<glm::vec4> normalMap(textureSize*textureSize);
// 'i' is the index of the current cube face, set by an outer loop
for(int x = 0; x < textureSize; ++x)
{
    for(int y = 0; y < textureSize; ++y)
    {
        // center point
        int i11 = utils::math::get_1d_array_index_from_2d(x, y, textureSize);
        float v11 = cubeFacesHeight[i][i11].r;
        // to the left
        int i01 = utils::math::get_1d_array_index_from_2d(std::max(x-1, 0), y, textureSize);
        float v01 = cubeFacesHeight[i][i01].r;
        // to the right
        int i21 = utils::math::get_1d_array_index_from_2d(std::min(x+1, textureSize-1), y, textureSize);
        float v21 = cubeFacesHeight[i][i21].r;
        // above
        int i10 = utils::math::get_1d_array_index_from_2d(x, std::max(y-1, 0), textureSize);
        float v10 = cubeFacesHeight[i][i10].r;
        // below
        int i12 = utils::math::get_1d_array_index_from_2d(x, std::min(y+1, textureSize-1), textureSize);
        float v12 = cubeFacesHeight[i][i12].r;
        // central differences give the tangent-plane slopes
        glm::vec3 S = glm::vec3(1, 0, scale * v21 - scale * v01);
        glm::vec3 T = glm::vec3(0, 1, scale * v12 - scale * v10);
        glm::vec3 N = glm::vec3(-S.z, -T.z, 1) / std::sqrt(S.z*S.z + T.z*T.z + 1);
        // direction through this texel for each face of the cubemap
        glm::vec3 originalDirection;
        if(i == POSITIVE_X)
            originalDirection = glm::vec3(textureSize, -y, -x);
        else if(i == NEGATIVE_X)
            originalDirection = glm::vec3(-textureSize, -x, -y);
        else if(i == POSITIVE_Y)
            originalDirection = glm::vec3(-x, -textureSize, -y);
        else if(i == NEGATIVE_Y)
            originalDirection = glm::vec3(-y, textureSize, -x);
        else if(i == POSITIVE_Z)
            originalDirection = glm::vec3(-y, -x, textureSize);
        else if(i == NEGATIVE_Z)
            originalDirection = glm::vec3(-y, -x, -textureSize);
        // project the plane-space normal onto the direction vector
        glm::vec3 o = originalDirection;
        glm::vec3 a = N;
        glm::vec3 ax = glm::normalize(o) * glm::dot(a, glm::normalize(o));
        N = ax;
        // pack [-1,1] into [0,1] for storage, keeping the height in .a
        N.x = (N.x + 1.0) / 2.0;
        N.y = (N.y + 1.0) / 2.0;
        N.z = (N.z + 1.0) / 2.0;
        normalMap[i11] = glm::vec4(N.x, N.y, N.z, v11);
    }
}
for(int x = 0; x < textureSize; ++x)
{
    for(int y = 0; y < textureSize; ++y)
    {
        int idx = utils::math::get_1d_array_index_from_2d(x, y, textureSize);
        cubeFacesHeight[i][idx] = normalMap[idx];
    }
}

cubeFacesHeight holds the height values for the six cube faces.

What I'm attempting to do is use the value originally given to N, as this is the normal map as though it were the surface of a plane. Then I'm attempting to apply this to the original direction vector of each point (which is also the normal vector). I think it's that application, where ax is set, that is the problem.

I then implement it in my Fragment shader like so:

Code :

#version 400

layout (location = 0) out vec4 color;

struct Material
{
    bool useMaps;
    samplerCube diffuse;
    samplerCube specular;
    samplerCube normal;
    float shininess;
    vec4 color1;
    vec4 color2;
};

struct PointLight
{
    bool active;
    vec3 position;
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
    float constant;
    float linear;
    float quadratic;
};

uniform Material uMaterial;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

in vec3 ex_normal;
in vec3 ex_positionCameraSpace;
in vec3 ex_originalPosition;
in vec3 ex_positionWorldSpace;
in vec4 ex_positionLightSpace;
in PointLight ex_light;

/* *********************
   Calculates the color when using a point light. Uses shadow map.
   ********************* */
vec3 CalcPointLight(PointLight light, Material mat, vec3 n, vec3 fragPos, vec3 originalPos, vec3 viewDir)
{
    /* just lighting stuff that doesn't matter */
    vec3 lightDir = normalize(fragPos - light.position);
    vec3 reflectDir = normalize(reflect(lightDir, n));
    float specularFactor = pow(dot(viewDir, reflectDir), mat.shininess);
    if(specularFactor > 0 && diffuseFactor > 0)
        specularColor = light.specular * specularFactor * specularMat;
    /* more lighting stuff */
}

vec3 get_normal(vec3 SRT)
{
    vec3 map = texture(uMaterial.normal, SRT).rgb * 2.0 - 1.0;
    return mat3(transpose(inverse(view * model))) * map;
}

void main(void)
{
    vec3 viewDir = normalize(-ex_positionCameraSpace);
    vec3 n = get_normal(normalize(ex_originalPosition)); // GLSL's built-in normalize, not glm::normalize
    vec3 result = CalcPointLight(ex_light, uMaterial, n, ex_positionCameraSpace, ex_positionWorldSpace, viewDir);
    color = vec4(result, 1.0);
}

Considering that my fragment shader works fine when, instead of sampling the normal map, I just use "ex_originalPosition", I don't think it's the problem. I could just use some help generating the object-space normal map.

My lighting strategy is to have all the lights set to various intensities of white, and to control the color tinting of the material texture maps (generally light-gray metallic textures) using glMaterialfv. The issue remains even when setting both the lighting and material parameters to large values.

Is this a limitation of OpenGL, or is there a proper way to use an arbitrary texture map, set its color tint with glMaterialfv, and have the light sources make it brighter than the texture map's source image? Here's the (hopefully) relevant source code:

http://eightvirtues.com/games/sylph/...aterial%20Code

And yes, I'm Jon Snow and I know nothing. :)

When I try to compile the project, the following errors appear:

undefined reference to `glfwInit'

undefined reference to `glfwCreateWindow'

…and similar errors for other functions.

How can I fix this? I guess the real question is whether the contents of the GLFW library are faulty; that's the only other idea I can think of, and I have tried different compilers with no change.

(My IDE is Code::Blocks.)

Thank you, bye.

EDIT: (I'm compiling with MinGW.)
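For what it's worth: undefined references to glfwInit and glfwCreateWindow at link time usually mean the GLFW library isn't being passed to the linker (or is in the wrong order), not that the library itself is faulty. A hypothetical MinGW command line, assuming GLFW 3 and a local install path (adjust the paths and names to your setup):

```
g++ main.cpp -o app -I"C:/glfw/include" -L"C:/glfw/lib-mingw" -lglfw3 -lgdi32 -lopengl32
```

With MinGW, a library's own dependencies must come after it on the command line, so glfw3 goes before gdi32 and opengl32. In Code::Blocks the equivalent is Project → Build options → Linker settings, adding the same three entries in the same order.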

On another note, the glTranslatef function won't accept any value greater than 1; if a value exceeds 1, I can only see the obstacles, and the background stays at its default (white).

I just recently got into OpenGL programming.

Basically what I am trying to do is to create a scene where everything is dark except the flashlight on my camera. So I made a textured skybox to move around in.

I got a form of lighting working, but it seems to be stationary. I messed around with quite a few settings like the spot exponent, cutoff, and attenuation, but I never seem to be able to move my light in the direction my camera is going; it always stays in the same spot.

This is what I have written thus far:

InitGL

Code :

void Scene::initializeGL()
{
    camera = new Camera();
    world = new World();
    object = new Object();
    world->initSkybox();
    QGLWidget::initializeGL();
    glShadeModel(GL_SMOOTH);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0f);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    // Light
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    camera->Position_Camera(10, 3, -20, 0, 2.5f, 0, 0, 1, 0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    timer->start(10);
}

My paintGL:

Code :

void Scene::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    GLfloat ambientColor[] = {0.0f, 0.0f, 0.0f, 1.0f};
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor);
    GLfloat lightPos[] = {camera->mPos.x, camera->mPos.y, camera->mPos.z, 1.0};
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
    GLfloat spotDir[] = {camera->mView.x - camera->mPos.x,
                         camera->mView.y - camera->mPos.y,
                         camera->mView.z - camera->mPos.z};
    glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spotDir);
    gluLookAt(camera->mPos.x, camera->mPos.y, camera->mPos.z,
              camera->mView.x, camera->mView.y, camera->mView.z,
              camera->mUp.x, camera->mUp.y, camera->mUp.z);
    world->drawSkyBox(0, 0, 0, 1000, 1000, 1000);
}

I'm attempting to fill a buffer of fixed size with a structure:

Quote:

struct Vector2f
{
    float x;
    float y;
};

struct Node
{
    Vector2f id;
    float f;
    float h;
    float g;
};

I'm using Java, where a conventional C structure isn't available, so I have created a class with a similar structure:

Quote:

public class Node
{
    public float[] pos;
    public float h;
    public float g;
    public float f;
}

Quote:

map = new Node[8][8];

I've looked into glBufferData(), which takes these parameters:

glBufferData(GL_SHADER_STORAGE_BUFFER, size, ptr, GL_STATIC_DRAW);

However, Java does not support pointers; a Buffer must be passed instead.

Would this be a Java-side issue of converting a class to a buffer? In that case, I guess I may have to ask this question somewhere else?

I'm quite lost here, so any suggestions would be greatly appreciated.

Thanks in advance

Then how can I display the texture coordinates of these n vertices, or print out their values as a list?

One problem I've noticed is that, depending on what I set my FoV to, pressing A may strafe left or strafe right (it should strafe left; D should strafe right).

For example, with some FoV values pressing A strafes left, while with other values the same key strafes right.

(Forward/back movement is correct no matter what the FoV is).

Anyone have a clue as to what's going on?

Also, a less-major concern: people suggest a FoV of 60 degrees, but this seems to make my 1x1x1 cube look way too stretched:

http://i.cubeupload.com/vclEli.png

Relevant code:

Code :

int main()
{
    const int Width = 1600/2;
    const int Height = 900/2;
    Shader shader("mvp");
    GraphicArray cube = make_rectangle(1, 1, 1, Vertex(-1, -1, -1));
    cube.send_to_GPU();
    glm::vec3 sight_direction(0, 0, 1);
    auto winsize = window.getSize();
    float FoV = 49.0f;
    glm::mat4 Projection = glm::perspective(
        FoV, // The vertical field of view (glm::perspective takes fovy, not a horizontal FoV)
        static_cast<float>(winsize.x) / winsize.y, // Aspect ratio.
        0.1f,   // Near clipping plane.
        100.0f  // Far clipping plane.
    );
    glm::vec3 cameraPosition(0, 0, 0);
    glm::vec3 upVector(0, 1, 0);
    glm::mat4 View = glm::lookAt(
        cameraPosition,                   // the position of your camera, in world space
        cameraPosition + sight_direction, // where you want to look at, in world space
        upVector                          // +y = up
    );
    glm::mat4 Model = glm::mat4(1.0f);
    glm::mat4 MVP = Projection * View * Model;
    GLuint MatrixID = glGetUniformLocation(shader.getProgram(), "MVP");
    bool movement_forward = false;
    bool movement_backward = false;
    bool movement_left = false;
    bool movement_right = false;
    glEnable(GL_DEPTH_TEST);
    glClearColor(0.0, 0.0, 0.2, 1.0);
    while(window.isOpen())
    {
        // Handle events for player movement (WASD keys)
        // ...
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glViewport(0, 0, window.getSize().x, window.getSize().y);
        shader.bind();
        // W/S movement:
        if (movement_forward)
            cameraPosition += sight_direction / 100.f;
        if (movement_backward)
            cameraPosition -= sight_direction / 100.f;
        // A/D strafing:
        if (movement_left)
            cameraPosition += glm::normalize(glm::cross(sight_direction, upVector)) / 100.f;
        if (movement_right)
            cameraPosition -= glm::normalize(glm::cross(sight_direction, upVector)) / 100.f;
        sight_direction = glm::vec3(0, 0, 1);
        View = glm::lookAt(
            cameraPosition,
            cameraPosition + sight_direction,
            upVector);
        MVP = Projection * View * Model;
        glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
        // Render:
        cube.draw();
        // Display:
        glFlush();
        window.display();
    }
}

Shader:

Code :

#version 430 core

in vec3 in_Position;
in vec3 in_Color;
uniform mat4 MVP;
out vec3 pass_Color;

void main(void)
{
    vec4 v = vec4(in_Position, 1);
    gl_Position = MVP * v;
    pass_Color = in_Color;
}

I'm fairly new to OpenGL ES 3.1, and I'm attempting to use compute shaders to parallelise a conventionally serial algorithm.

I was wondering if anyone could offer any advice on the subject of SSBOs, and potentially two-dimensional SSBOs.

I would like my shader to take two input buffers of structs:

Quote:

struct Vector2f
{
    float x;
    float y;
};

struct Node
{
    Vector2f id;
    float f;
    float h;
    float g;
};

layout(std140, binding = 0) buffer destBuffer
{
    Node nodes[]; // type first, then the member name
} outBuffer;

I would like to take this initial node (first buffer), find its corresponding node (second buffer) and then find its adjacent nodes in the second buffer.

Once the adjacent nodes have been found I would then like to fill up a 3rd SSBO with these adjacent nodes.

That's about it for now.

My first question would be is this at all possible?

If yes would it be possible with 2D SSBOs?

If yes how do I go about creating two dimensional SSBOs?

If not possible with 2D SSBOs what are my other options? (A 1D SSBO with an Array of Arrays perhaps?)

I hope I have explained this in enough detail, any comments would be greatly appreciated :)

Thanks in advance.

An ubershader seems quite difficult to maintain and is not efficient. But the many-small-shaders approach seems impractical if I have many features: with n features that can be turned on or off, there are 2^n possible combinations, and I would need to create 2^n shaders for them.

Is there a better way to do that? Is it possible to automatically combine several shaders together? For example, I may write a simple shader to have texture only, and another simple shader to have lighting effect only. If I want to have both texture and lighting effect, I can combine them together and do not need to write a new shader for that. Is it possible?

If yes, how do I do it? If no, what is the better way?

The problem looks like z-fighting. Does anybody know how I can solve it?

Here my lookup function:

Code :

float lookup()
{
    float shadow;
    float depth = texture( shadowMap, ShadowCoord.xy ).x;
    shadow = ShadowCoord.z > depth ? 0.25f : 1.0f;
    return shadow;
}

Thanks in advance

It works nicely, unless I rotate my object, in which case the effect breaks. How can I allow for object rotation in my code?

Here is my vertex shader:

Code :

uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;

void main()
{
    vec3 vNormal = normalize( normalMatrix * normal );
    vec3 vNormel = normalize( normalMatrix * viewVector );
    intensity = pow( c - dot(vNormal, vNormel), p );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}