I am having a problem with my shaders. I was writing a point-light shadow mapping shader in GLSL 1.20 (#version 120) and ran into a snag. My uniforms are defined as follows:

Code :

const int MAX_LIGHTS = 4;
uniform sampler2D shadowTextSamp0;
uniform sampler2D shadowTextSamp1;
uniform sampler2D shadowTextSamp2;
uniform sampler2D shadowTextSamp3;
uniform samplerCube shadowCube0;
uniform samplerCube shadowCube1;
uniform samplerCube shadowCube2;
uniform samplerCube shadowCube3;

Which is pretty straightforward. But I need to bind a cube map to the samplerCubes, so I use glGetUniformLocation to grab the uniform location from my program. However, when I call it with the name of shadowCube# (whichever it may be), it returns -1. My code for grabbing the handles is as follows:

Code :

std::string parseCube = "shadowCube" + str;
std::cout << glGetUniformLocation(inShade->getID(), parseCube.c_str());

where str is a std::string holding an integer, so that the result is shadowCube#.

I checked how the string was made to avoid space errors, and I've made sure the variables are the EXACT same name. I'm not sure where I went wrong...

Could somebody please give me a reason why this may have happened?

Thanks,

Devmane144

I was wondering how to implement a screen tearing effect, such as this (Photoshop demo):

I imagine the work should be done in the fragment shader; if I did it in the vertex shader, everything would just be deformed rather than cut. I can send the offset in via an input variable or calculate it in the shader, but how do I move the fragments? Also, would I have to do it in every fragment shader I'm using (say, three for different kinds of objects), or is there a way to access all the fragments below the current one?

Thank you

[short version]

What is the best way to pass 224 mat4 matrices to the fragment shader and store them there so that I can do calculations with them?

[long version]

In my code I'm doing matrix multiplication to project 3D points, so I need the camera transformation and the camera parameters, each stored in a mat4 (I compute camera params * camera transform * 3D point). Each camera has 2 mat4s and there are 112 cameras in total, so there are exactly 224 mat4s. Right now I'm declaring uniform arrays of mat4 in the fragment shader like so,

Code :

uniform mat4 cam_views[128];
uniform mat4 cam_params[128];

and I pass the value to the shader like so,

Code :

//cam_trans and cam_params are arrays of GLfloat of size [16 * 128]
//transpose = GL_TRUE: the source data is laid out row-major
glUniformMatrix4fv(glGetUniformLocation(shaderProgramID, "cam_views"), 128, GL_TRUE, cam_trans);
glUniformMatrix4fv(glGetUniformLocation(shaderProgramID, "cam_params"), 128, GL_TRUE, cam_params);

The problem is that when I try to access the uniform in my shader, the value of the uniform seems to be null. My program instantly crashes with a read-access-violation error. Here is an example of fragment-shader code that causes the crash. Note that if I change cam_views[current_cam] to cam_views[any constant less than 128], it works fine.

Code :

for (int current_cam = 0; current_cam < 128; current_cam++) {
    vec4 pixel3D_pos_acam = cam_views[current_cam] * pixel3D_coord;
    vec4 img_2d_pos = cam_params[current_cam] * pixel3D_pos_acam;
    img_2d_pos.x /= img_2d_pos.z;
    img_2d_pos.y /= img_2d_pos.z;
    fColor = vec4(img_2d_pos.x, 0, 0, 1); // it runs fine without this line
}

So my question is: what is the best (and correct) way to pass such a huge array of mat4s to the shader? I have heard of using a float texture, but if I use one, how can I read the values back and reconstruct a mat4, since I need to do matrix multiplication? Or maybe a uniform buffer? I don't quite get how to use either of these.

Extra question: after this I also need to access many huge arrays of floats (112 arrays of 360,000 elements each) within the shader, and image data too (112 images of 600*600 pixels). How do I pass those two huge data sets to the shader? Everything combined needs around 800 MB of memory.

Thank you

I recently updated my NVIDIA GeForce driver to version 364.72, and a shader program that worked perfectly fine before no longer compiles. I haven't changed a thing in the shaders themselves since the update, yet now I get the following error:

Code :

error: Type mismatch between variable "i_Color" matched by location "1"

The vertex shader has a variable 'out vec4 o_Data1' bound to location 1, while the fragment shader has 'in vec4 i_Color' bound there as well. I'll post my shaders below, but as you'll see, they're quite straightforward. I'm using the program for font rendering from a single texture, allowing the font color to be changed in the fragment shader. I'm not sure what could possibly be going wrong; I've got other programs with a similar structure that still work after the update. Note that these shaders were code-generated, which is why I use layout locations on all of my data: it made the generated code much easier to mix and match.

My vertex shader:

Code :

#version 450 core
layout(location = 0) uniform mat4 u_Matrix;
layout(location = 0) in vec4 i_Vertex;
layout(location = 3) in mat4 i_Matrix;
layout(location = 2) in vec4 i_Data1;
layout(location = 1) out vec4 o_Data1;
layout(location = 1) in vec2 i_Data0;
layout(location = 0) out vec2 o_Data0;
void main()
{
    o_Data1 = i_Data1;
    o_Data0 = i_Data0;
    gl_Position = u_Matrix * i_Matrix * vec4(i_Vertex.x + 0.0, i_Vertex.y - 0.0, i_Vertex.z, 1.0);
}

My fragment shader:

Code :

#version 450 core
layout(location = 1) uniform sampler2D u_Texture;
layout(location = 1) in vec4 i_Color;
layout(location = 0) in vec2 i_Coord;
out vec4 o_Color;
void main()
{
    o_Color = texelFetch(u_Texture, ivec2(i_Coord), 0);
    if (o_Color.a > 0.0)
    {
        o_Color.rgb = i_Color.rgb * (1.0 - o_Color.rgb);
    }
    if (o_Color.a == 0.0)
    {
        discard;
    }
}

Does anyone have an idea what's going wrong here? Thanks in advance!

[I'd put an image here but this website says that .jpg isn't a valid image!]

Here is the setup code for the matrices and the vertex structures:

Code :

static float vertex_coordinates[] =
{
    -0.5f, +0.5f, +0.5f, +0.5f,
    -0.5f, -0.5f, +0.5f, -0.5f
};
static float texture_coordinates[] =
{
    0.0f, 1.0f, 1.0f, 1.0f,
    0.0f, 0.0f, 1.0f, 0.0f
};

void create_vertex_attribute(const char *attribute, float *attribute_data, size_t data_size)
{
    GLuint attribute_buffer_id;
    GLint location;

    location = get_vertex_attribute_location(attribute);
    glEnableVertexAttribArray(location);
    generate_and_bind_opengl_object(GL_ARRAY_BUFFER, &attribute_buffer_id);
    glBufferData(GL_ARRAY_BUFFER, data_size, attribute_data, GL_STATIC_DRAW);
    glVertexAttribPointer
    (
        location,      // vertex attribute location
        2, GL_FLOAT,   // two floats per vertex
        GL_FALSE,      // don't normalize
        0,             // stride (packed)
        (GLubyte*)NULL // no offset
    );
    OpenGL_error_check(__FILE__, __LINE__, __FUNC__);
}

void setup_vertex_shader_pipeline(void)
{
    mat4 ortho_matrix;
    GLint viewport[4];

    glGetIntegerv(GL_VIEWPORT, viewport);
    set_uniform_variable(GL_BOOL, "use_vertex_shader_pipeline", GL_TRUE);
    // creates a matrix for projecting two-dimensional coordinates onto the screen
    ortho_matrix = glm::ortho(0.0f, (float)viewport[2], 0.0f, (float)viewport[3], -1.0f, +1.0f);
    set_uniform_matrix4_variable("ortho_matrix", ortho_matrix);
    generate_and_bind_opengl_object(GL_VERTEX_ARRAY, &vertex_array_object_handle);
    create_vertex_attribute("vertex_coordinates", vertex_coordinates, sizeof(vertex_coordinates));
    create_vertex_attribute("texture_coordinates", texture_coordinates, sizeof(texture_coordinates));
    OpenGL_error_check(__FILE__, __LINE__, __FUNC__);
    printf("\tGLSL vertex shader pipeline setup\n");
}

void OpenGL_GLM_render_tile(TILE *tile)
{
    mat4 translation_matrix, rotation_matrix, scale_matrix;

    translation_matrix = glm::translate(glm::mat4(1.0f), glm::vec3((float)tile->x, (float)tile->y, 0.0f));
    // the tile angle is in radians for GLM and rotates about the z-axis
    rotation_matrix = glm::rotate(glm::mat4(1.0f), (float)tile->angle, glm::vec3(0.0f, 0.0f, 1.0f));
    scale_matrix = glm::scale(glm::mat4(1.0f), glm::vec3(tile->width * (float)tile->scale, tile->length * (float)tile->scale, 1.0f));
    // have the GPU multiply these matrices to compute the transform matrix
    set_uniform_matrix4_variable("translation_matrix", translation_matrix);
    set_uniform_matrix4_variable("rotation_matrix", rotation_matrix);
    set_uniform_matrix4_variable("scale_matrix", scale_matrix);
    glDrawArrays(GL_QUADS, 0, 4);
    glutPostRedisplay();
}

Here is the vertex shader. As you can see I can switch it from fixed pipeline to programmable very easily.

Code :

in vec2 vertex_coordinates;  // from vertex buffer object
in vec2 texture_coordinates; // from vertex buffer object
uniform bool use_vertex_shader_pipeline;
uniform mat4 ortho_matrix;   // these are from the OpenGL GLM matrix code
uniform mat4 translation_matrix;
uniform mat4 rotation_matrix;
uniform mat4 scale_matrix;
uniform mat4 transform_matrix;
out vec2 fragment_texture_coordinates;

void main(void)
{
    if (use_vertex_shader_pipeline)
    {
        fragment_texture_coordinates = texture_coordinates;
        // make the GPU do the matrix multiplications:
        // a cascade of transformations in the appropriate order
        gl_Position = ortho_matrix
                    * translation_matrix
                    * rotation_matrix
                    * scale_matrix
                    * vec4(vertex_coordinates, 0.0, 1.0);
    }
    else
    {
        // This relies on the old fixed-pipeline facilities; there is no
        // need to set up the matrix operations in the OpenGL code.
        fragment_texture_coordinates = gl_MultiTexCoord0.st;
        gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    }
}

I've been looking at this for a couple of days and can't seem to find the problem. :doh:

Please help!

i) I have scanned through some example ray tracers, and I find that most of them tend to use fragCoord to cast the ray. I have been wondering why they don't use the eye and vertex coordinates to cast the ray instead. Please correct me if I'm wrong here: the camera/eye is at (0,0,0) in eye coordinates, and if we multiply a vertex by the modelview matrix, myvertex = (ModelviewMatrix * vertex), we get that vertex in eye coordinates stored in myvertex. Is it appropriate to cast a ray with its origin at (0,0,0) and its direction set to the position of myvertex in normalized form?

I am only trying to do something very basic here: drawing two spheres with different ambient materials, with no lighting calculation involved yet. I just want my shader to correctly map the right ambient color onto each sphere. The scene description:

Code :

size 640 480                  // window size
camera 0 -4 4 0 0 0 0 1 1 45  // eye, center, up, fovy
pushTransform                 // first sphere should look grey
ambient .7 .7 .7
sphere 0 0 0 1                // xyz radius
popTransform;
pushTransform                 // 2nd sphere, purple
translate 2 0 0
ambient .1 .7 .7
sphere 0 0 0 1
popTransform;

Code :

//vertex shader
void main() {
    gl_Position = pm * mv * vec4(vertices, 1.0); // pm and mv are uniforms for the projection and modelview matrices
    myvertex = mv * vec4(vertices, 1.0);         // vertices is the per-vertex position input
}

Code :

//Fragment shader
//data for the ray tracer
const int numObj = 2;
uniform vec4 ambData[numObj];
uniform vec4 diffData[numObj];
uniform vec4 specData[numObj];
uniform vec4 emiData[numObj];
uniform mat4 transfData[numObj];
uniform float shnData[numObj];
uniform int typeData[numObj];
uniform float sizeData[numObj];
uniform mat4 lookAt;
uniform int maxDepth;

bool circleIntersect(in vec3 cen, in float r, in vec3 ori, in vec3 dir, inout float t)
{
    vec3 RC = ori - cen;
    float DD = dot(dir, dir);
    float DdRC = dot(dir, RC);
    float sqtN, sqtP;
    t = r*r - dot(RC, RC) + DdRC*DdRC;
    if (t > 0.0) // 2 roots
    {
        sqtP = sqrt(t) - DdRC;
        sqtN = -sqtP - DdRC;
        if (sqtP <= 0.0 && sqtN <= 0.0) {
            return false;
        }
        if (sqtN < sqtP) {
            t = sqtN;
        } else {
            t = sqtP;
        }
        if (t <= 0.0)
            return false;
        return true;
    }
    return false;
}

vec4 intersection(in vec3 rayO, in vec3 rayD)
{
    vec4 retClr = vec4(0.0);
    float tMin = t_inf; // t_inf is a constant = 100000.0
    float t = tMin;
    vec3 norm;
    int closestIdx;
    for (int i = 0; i < numObj; i++) {
        if (typeData[i] == 2) {
            vec4 c = lookAt * transfData[i] * vec4(0.0, 0.0, 0.0, 1.0);
            if (circleIntersect(c.xyz, 6.0, rayO, rayD, t)) {
                if (t < tMin) {
                    closestIdx = i;
                    tMin = t;
                }
            } else {
                continue;
            }
        } else if (typeData[i] == 4) {
            if (cubeIntersect(transfData[i], rayO, rayD, t, norm)) {
                if (t < tMin) {
                    closestIdx = i;
                    tMin = t;
                }
            } else {
                continue;
            }
        }
    }
    retClr = vec4(0, 0, 0, 1) + ambData[closestIdx];
    return retClr;
}

void main(void)
{
    vec4 eye = vec4(0, 0, 0, 1); // lookAt * vec4(0.0, -4.0, 4.0, 1.0);
    vec3 rayOri = eye.xyz / eye.w;
    vec3 rayDir = normalize(myvertex.xyz - eye.xyz);
    gl_FragColor += intersection(rayOri, rayDir);
}

The result I get is that two circles are drawn on the screen, but sadly both of them are purple. I rotated the camera to look at the two spheres from different directions, yet both of them look completely purple, with no other color. I suspect my ray setup may be totally wrong.