Here is my fragment shader.

Code :

#version 430
#define PI 3.1415926535897932384626433832795

uniform vec3 lightColor;
uniform sampler2D tex;
uniform vec3 ambientLight;
uniform vec3 lightPosition;
uniform vec3 cameraPosition;

in Vertex {
    vec4 rawPosition; // vertex position multiplied only by the transformation matrix (gl_Position is also multiplied by the view and projection matrices)
    vec2 uv;
    vec4 normal;      // the normals have also been multiplied by the transformation matrix (is this correct? my normals have w = 0.0, but the transformation matrix still makes a difference; without it the reflections don't seem to be in the right place)
} vertexIn;

out vec4 color;

void main() {
    vec3 normal = normalize(vertexIn.normal.xyz);
    vec3 worldPosition = vertexIn.rawPosition.xyz;
    vec3 lightVector = normalize(lightPosition - worldPosition);
    vec3 reflection = normalize(reflect(-lightVector, normal));
    vec3 cameraVector = normalize(cameraPosition - worldPosition);
    float lightAngle = clamp(dot(lightVector, normal), 0.0, 1.0);
    float cameraAngle = clamp(dot(cameraVector, reflection), 0.0, 1.0);
    float specular = pow(cameraAngle, 100.0);
    vec4 actColor = texture(tex, vertexIn.uv);
    vec4 diffuseLight = vec4(lightColor, 1.0) * lightAngle;
    vec4 specularLight = vec4(lightColor, 1.0) * specular;
    color = actColor * clamp(diffuseLight, vec4(ambientLight, 1.0), vec4(1.0)) + specularLight;
}
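
Regarding the question in the comment on `normal`: the usual convention is to transform normals with the inverse-transpose of the model matrix and w = 0.0. A minimal vertex-shader sketch, where `modelMatrix` and the attribute/output names are assumptions, not the actual code:

```glsl
uniform mat4 modelMatrix;   // assumed name for the transformation matrix
in vec3 inNormal;           // assumed attribute name
out vec4 normalOut;

void main() {
    // w = 0.0 marks a direction: the translation column is ignored.
    // transpose(inverse(...)) keeps normals correct even under
    // non-uniform scaling; for pure rotation + translation it is
    // equivalent to using modelMatrix itself.
    normalOut = transpose(inverse(modelMatrix)) * vec4(inNormal, 0.0);
}
```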

Here is an imgur link to visualize what I mean. The light source is the small cube above the middle of the grid.

Thanks for the help!

I've been searching a lot on this topic, but I can't seem to find anyone with the same problem.

Consider a shader SSBO like:

Code :

#version 430

layout (std430) buffer myBlock
{
    vec4 color;
    vec3 direction;
    ...
};

void main()
{
    ...
}
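
As a side note, a sketch of a variant that sidesteps the index lookup entirely by giving the block an explicit binding point in the shader (the binding value 0 here is an arbitrary choice):

```glsl
#version 430

// With an explicit binding, the host code can call
// glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buffer) directly,
// without querying the block index first.
layout (std430, binding = 0) buffer myBlock
{
    vec4 color;
    vec3 direction;
};

void main()
{
    // Caveat: if the block members are never statically used anywhere in
    // the program, the compiler may optimize the block away, and resource
    // queries on it can then fail.
    color.x += direction.x;
}
```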

Then you should be able to query the index of that buffer block with:

Code :

int blockIndex = GL43.glGetProgramResourceIndex(programID, GL43.GL_SHADER_STORAGE_BLOCK, "myBlock");

I tried replacing the SSBO with a UBO (using GL43.GL_UNIFORM_BLOCK as the GLenum in the glGetProgramResourceIndex call) and it works as expected.

Further testing was to query the name with:

Code :

String blockName = GL43.glGetProgramResourceName(program, GL43.GL_SHADER_STORAGE_BLOCK, 0, 20);

I used index '0' because it's the only SSBO declared in the whole shader program.

Even if I do:

Code :

int blockIndex = GL43.glGetProgramResourceIndex(programID, GL43.GL_SHADER_STORAGE_BLOCK,
        GL43.glGetProgramResourceName(program, GL43.GL_SHADER_STORAGE_BLOCK, 0, 20));

So my question is: am I doing anything wrong, or is this possibly a bug in my driver?

I wouldn't be surprised if it is, to be honest, since I have problems with various render engines as well.

Any help/comment is appreciated!

I made it in Blender with the following steps:

- Frame Size: 128x128px

- Used Blender's toon shader (Freestyle)

- Then scaled the image back up.

Is there any way I can accomplish the same effect with GLSL, where I draw the image into a small frame without anti-aliasing and then scale it back up, to get that "pixelated" effect?

I have a book that shows me how to write a toon shader, so that's not a problem. I'm just not sure whether drawing into a small frame and then scaling up, like I did in Blender, is the solution, and if it is, I have no idea how to do that.
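
For what it's worth, the effect can also be faked entirely in a post-process fragment shader by snapping the texture coordinates to a coarse grid before sampling. A sketch, where the uniform names and the 128x128 target resolution are assumptions:

```glsl
#version 330 core

uniform sampler2D sceneTex;  // assumed: full-resolution scene texture
uniform vec2 lowRes;         // assumed: e.g. vec2(128.0, 128.0)

in vec2 uv;
out vec4 fragColor;

void main() {
    // snap each fragment's uv to the center of a virtual low-res texel
    vec2 snapped = (floor(uv * lowRes) + 0.5) / lowRes;
    fragColor = texture(sceneTex, snapped);
}
```

The more faithful equivalent of the Blender workflow is to render the scene into a small (e.g. 128x128) FBO texture with GL_NEAREST filtering and no multisampling, then draw that texture at full window size.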

I have run into a problem with accuracy of sin/cos on an Intel HD4600.

Because of this I have had to implement a minimax sin/cos in GLSL.

What I really need to know is how costly my implementations are, so that I can trade accuracy off against performance.

Can anyone provide me with the cycle costs of the built-in math functions on HD4600 GPUs?
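
For context, a cheap polynomial sine of the kind being weighed here might look like the classic two-step parabola fit below. This is only a sketch to make the accuracy/cost trade-off concrete, and it assumes the argument has already been range-reduced to [-PI, PI]:

```glsl
#define PI 3.14159265358979

float fastSin(float x) {
    const float B = 4.0 / PI;
    const float C = -4.0 / (PI * PI);
    float y = B * x + C * x * abs(x);  // first parabola: roughly 0.06 max error
    const float P = 0.225;
    y = P * (y * abs(y) - y) + y;      // refinement: roughly 0.001 max error
    return y;
}
```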

Thanks

Damian

I have read that the compute shader can render, but I'm wondering how that works.

Do I have to create the image data somewhere else, e.g. in a shader storage buffer, and later copy the data into a framebuffer texture, or can I somehow access a (framebuffer) texture directly in the compute shader?
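
A sketch of the direct route, writing into a texture via image load/store (the texture is bound on the host side with glBindImageTexture; binding 0, the rgba8 format, and the 512.0 divisor are assumptions):

```glsl
#version 430

layout (local_size_x = 16, local_size_y = 16) in;
layout (rgba8, binding = 0) uniform writeonly image2D destTex;

void main() {
    // one invocation per texel; write a simple gradient as a placeholder
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    imageStore(destTex, p, vec4(vec2(p) / 512.0, 0.0, 1.0));
}
```

Note that a compute shader cannot write to the default framebuffer directly; the usual pattern is to write into a texture like this and then draw or blit that texture to the screen.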

Thanks in advance!

I have just started to use GLSL and currently have general problems understanding matrix operations. It's been years since I last worked with matrices, so here is a newbie question concerning gl_ModelViewProjectionMatrix.

Background:

My idea (or some kind of experiment) is to use the vertex shader as an "object generator", which is more flexible than static C code. A large part of it already works fine.

In the C part of my program I just generate a flat rectangle with x and y between 0 and 1 (with variable resolution) and z = 0. In the vertex shader I use x and y as sources to generate objects, e.g. simply a torus.

Normal vectors are also already calculated correctly by the shader.

My problem is the object positioning. For C-generated objects I used "gl_Position = ftransform()", which I can't use in this case.

Using gl_ModelViewProjectionMatrix I get correct rotations, but somehow translations are lost. So my torus can be rotated by calling glRotatef() in the C part of the program, but glTranslatef() doesn't have any effect.

Here is the relevant vertex shader code (some functions are shortened):

Code :

varying vec4 Vertex;
varying vec3 Normal;

vec4 rotateX (vec4 v, float a) {...}
vec4 rotateY (vec4 v, float a) {...}
vec4 rotateZ (vec4 v, float a) {...}
vec3 NormalVec (vec4 p, vec4 vX, vec4 vY) {...}

vec4 Torus (vec2 pos) {
    vec4 v = vec4(0.0, 0.0, 0.1, 0.0);
    v = rotateY(v, pos.y * 2.0) + vec4(0.0, 0.0, 0.25, 0.0);
    v = rotateX(v, pos.x * 2.0);
    return v;
}

void main () {
    vec4 t1 = Torus(vec2(gl_Vertex));
    vec4 t2 = Torus(vec2(gl_Vertex) + vec2(0.0, 0.01));
    vec4 t3 = Torus(vec2(gl_Vertex) + vec2(0.01, 0.0));
    Normal = NormalVec(t1, t2, t3);
    t1 = t1 * gl_ModelViewProjectionMatrix;
    Vertex = vec4(t1.x, t1.y, t1.z, 1.0);
    gl_Position = Vertex;
}
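
For reference, here is what the usual column-vector convention looks like: the matrix multiplies from the left, and the position has w = 1.0 so that the translation column of the matrix participates (with vector * matrix and w = 0, the translation never reaches x/y/z). This is only a sketch of that convention, not a verified fix:

```glsl
// hypothetical main() in column-vector convention
void main () {
    vec4 t1 = Torus(vec2(gl_Vertex));
    vec4 t2 = Torus(vec2(gl_Vertex) + vec2(0.0, 0.01));
    vec4 t3 = Torus(vec2(gl_Vertex) + vec2(0.01, 0.0));
    Normal = NormalVec(t1, t2, t3);
    // matrix on the left; force w = 1.0 so glTranslatef() takes effect
    Vertex = gl_ModelViewProjectionMatrix * vec4(t1.xyz, 1.0);
    gl_Position = Vertex;
}
```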

Does anyone know why translations are lost?

Regards,

Frank

Is there a way I can render primitives with the geometry shader without the texture and lighting disappearing? My guess is to create another VBO just for geometry-shader work, but I'm sure that's wrong.

I'll share what I've got for the vertex and geometry shaders so far. Thanks in advance!

Code :

// Vertex
#version 330 core

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform mat4 mvp;
uniform mat3 nrmMatrix;

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoord;
layout (location = 2) in vec3 normals;

out vec3 position0;
out vec2 texCoord0;
out vec3 normals0;
out mat3 nrmMatrix0;
out mat4 model0;
out vec3 dirLightPos0;

out VS_OUT {
    vec3 normal;
} vs_out;

void main() {
    position0 = position;
    texCoord0 = texCoord;
    normals0 = normals;
    nrmMatrix0 = nrmMatrix;
    model0 = model;
    // w = 0.0: a normal is a direction, so it must not pick up the
    // matrix's translation column
    vs_out.normal = vec3(projection * vec4(nrmMatrix * normals, 0.0));
    gl_Position = mvp * vec4(position, 1.0);
}

// Geometry
#version 330 core

layout (triangles) in;
layout (line_strip, max_vertices = 6) out;

in VS_OUT {
    vec3 normal;
} gs_in[];

const float MAG = 0.2;

out vec3 color0;

void GenerateLine(int index) {
    // outputs become undefined after EmitVertex(), so write color0
    // before each emit
    color0 = vec3(1.0, 0.0, 0.0);
    gl_Position = gl_in[index].gl_Position;
    EmitVertex();
    color0 = vec3(1.0, 0.0, 0.0);
    gl_Position = gl_in[index].gl_Position + vec4(gs_in[index].normal, 0.0) * MAG;
    EmitVertex();
    EndPrimitive();
}

void main() {
    GenerateLine(0);
    GenerateLine(1);
    GenerateLine(2);
}
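
One way to keep texturing and lighting working while a geometry shader is active is to have the geometry shader forward everything the fragment shader needs, instead of replacing the triangles with lines. A pass-through sketch reusing the vertex-shader names above (the common alternative is simply to draw the mesh twice: once with the normal program, and once with the line-emitting program for the normals):

```glsl
#version 330 core

layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

// per-vertex inputs from the vertex shader (arrays in a geometry shader)
in vec2 texCoord0[];
in vec3 normals0[];

// forwarded outputs for the fragment shader (names are assumptions)
out vec2 texCoordG;
out vec3 normalsG;

void main() {
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        texCoordG = texCoord0[i];
        normalsG = normals0[i];
        EmitVertex();
    }
    EndPrimitive();
}
```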

Assuming a triangle:

A flat shader says that the resulting color for all pixels that cover the face is the (arithmetic) average color:

color(any pixel) = (color(vertex1) + color(vertex2) + color(vertex3)) / 3

But without a geometry shader I don't have access to all three vertices (or to the results of all three vertex-shader invocations).

When passing the resulting color of one vertex-shader invocation as "flat out vec3 color", the results of the two other invocations are just discarded (only the provoking vertex's value is used).

Passing "(smooth) out vec3 color" will interpolate the three colors within the triangle, which is NOT flat shading.

Am I missing something?

Do I need a geometry shader in my program to get correct flat shading?

And what about "Gouraud shading"?
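
For what it's worth, one common flat-shading route that needs neither a geometry shader nor duplicated vertices is to rebuild the face normal in the fragment shader from screen-space derivatives. This gives one normal (and hence one lighting result) per triangle, though it lights with the face normal rather than averaging the three vertex colors. A sketch, where the variable names are assumptions:

```glsl
#version 330 core

uniform vec3 lightDir;   // assumed: normalized, pointing toward the light

in vec3 worldPos;        // assumed vertex-shader output (world-space position)
out vec4 fragColor;

void main() {
    // derivatives of the interpolated position are constant across a
    // triangle, so their cross product is the face normal
    vec3 faceNormal = normalize(cross(dFdx(worldPos), dFdy(worldPos)));
    float diffuse = max(dot(faceNormal, lightDir), 0.0);
    fragColor = vec4(vec3(diffuse), 1.0);
}
```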