I'm currently working on a compute shader to render a large number of point lights. Additionally, I want as many point lights as possible to cast shadows. For this purpose, I decided to use an array of bindless images (more specifically, an array of imageCubes, each containing the six-face depth map of a point light). To read depth values from those images, I need to use the imageLoad(gimageCube, ivec3) function. However, this function requires an ivec3 consisting of two texel coordinates and an index determining the face of the cube. Previously, when I was using samplerCube, I could simply use the direction vector from the point light to the fragment to look up the corresponding depth value in the depth cube map. Now I need a way to convert this direction vec3 into the ivec3 described above. Can you give me some hints on how to approach this? I'm looking for an efficient way, as full plane-line intersection tests are probably unnecessary.

Thanks for your help!
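For reference, the conventional way to do this without any intersection tests is the cube map's own addressing rule: the face is chosen by the direction component with the largest magnitude, and the remaining two components (divided by that magnitude) give the in-face coordinates. A minimal GLSL sketch, assuming faceSize is the face resolution and the standard OpenGL face order; note that unlike samplerCube, imageLoad does no filtering or depth comparison, so this reproduces a nearest-texel lookup only:

```glsl
// Sketch: convert a direction vector into (texel.x, texel.y, face) for imageLoad.
// Face order follows the OpenGL cube map convention:
// 0:+X  1:-X  2:+Y  3:-Y  4:+Z  5:-Z
ivec3 dirToCubeTexel(vec3 dir, int faceSize)
{
    vec3 a = abs(dir);
    float ma;   // magnitude of the major axis
    vec2 sc;    // (s, t) before normalization
    int face;

    if (a.x >= a.y && a.x >= a.z) {
        face = dir.x > 0.0 ? 0 : 1;
        ma = a.x;
        sc = vec2(dir.x > 0.0 ? -dir.z : dir.z, -dir.y);
    } else if (a.y >= a.z) {
        face = dir.y > 0.0 ? 2 : 3;
        ma = a.y;
        sc = vec2(dir.x, dir.y > 0.0 ? dir.z : -dir.z);
    } else {
        face = dir.z > 0.0 ? 4 : 5;
        ma = a.z;
        sc = vec2(dir.z > 0.0 ? dir.x : -dir.x, -dir.y);
    }

    // Map from [-1, 1] to [0, 1], then to integer texel coordinates.
    vec2 uv = 0.5 * (sc / ma + 1.0);
    ivec2 texel = clamp(ivec2(uv * float(faceSize)), ivec2(0), ivec2(faceSize - 1));
    return ivec3(texel, face);
}
```

The face selection and (s, t) derivation above follow the cube map texel selection rules in the OpenGL specification, which is what samplerCube applies internally, so the result should agree with your previous sampler-based lookups up to filtering.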

Code :

#version 330 core

layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aColor;
layout(location = 2) in vec2 aTex;

out vec4 ourColor;
out vec2 Texture;

uniform mat4 transform;

void main() {
    gl_Position = transform * vec4(aPos, 1.0);
    ourColor = vec4(aColor, 1.0);
    Texture = vec2(aTex.x, 1.0 - aTex.y);
}

My code runs fine even without the "transform *" multiplication in the gl_Position line.

I would like to write the fragment shader's output/buffer however I wish (e.g. all white); currently, however, the result depends on the shapes of the models present in the scene. How can I do this correctly?

My minimal source code and the results are below.

Thanks in advance!

- Vertex:

Code :

#version 130

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

- Fragment:

Code :

#version 130

void main() {
    gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0);
}

- Expected result:

- Current result:
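One thing worth noting: a fragment shader only runs for pixels actually covered by rasterized geometry, which is why the output depends on the models in the scene. To write every pixel of the buffer unconditionally, the usual trick is to draw a single screen-covering triangle with a trivial vertex shader. A sketch, assuming you issue glDrawArrays(GL_TRIANGLES, 0, 3) with no vertex attributes bound:

```glsl
#version 130

// Covers the whole screen with one big triangle, so the fragment
// shader runs for every pixel regardless of the scene's models.
void main() {
    // gl_VertexID = 0, 1, 2  ->  (-1,-1), (3,-1), (-1,3) in clip space
    vec2 p = vec2(float((gl_VertexID & 1) << 2) - 1.0,
                  float((gl_VertexID & 2) << 1) - 1.0);
    gl_Position = vec4(p, 0.0, 1.0);
}
```

The oversized triangle is clipped to the viewport, so the whole framebuffer is covered without any edge running through the screen.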

We are trying to create a terrain made of tiles using parallax mapping based on a heightmap.

Is it possible to implement the terrain using only the fragment shader (using some variant of parallax mapping)?

We also have a reference implementation for the terrain (attached here). It would be great if anybody could infer what the shader logic behind it might be (is it only parallax mapping, or is a vertex shader also used?).

Thanks

-Shakthi
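For what it's worth, heightmap terrain purely in the fragment shader is possible in principle: render flat geometry and ray-march the heightmap per pixel (steep parallax mapping, parallax occlusion mapping, or relief mapping). A hedged sketch of the marching core; heightMap, viewDirTs (tangent-space view direction), and heightScale are illustrative names, not taken from the reference implementation:

```glsl
// Steep-parallax core: step along the view ray in tangent space and stop
// where the ray sinks below the surface. The red channel is treated as a
// depth map (0 = surface, 1 = deepest), as in the common formulation.
vec2 parallaxUV(sampler2D heightMap, vec2 uv, vec3 viewDirTs, float heightScale)
{
    const int steps = 32;
    vec2 delta = viewDirTs.xy / viewDirTs.z * heightScale / float(steps);
    float layerDepth = 1.0 / float(steps);
    float currentDepth = 0.0;
    float sampled = texture(heightMap, uv).r;

    // march while the ray is still above the heightmap surface
    for (int i = 0; i < steps && currentDepth < sampled; ++i) {
        uv -= delta;
        sampled = texture(heightMap, uv).r;
        currentDepth += layerDepth;
    }
    return uv;
}
```

In practice many terrain renderers still displace a vertex grid and use parallax only for close-up detail, since a pure fragment-shader march is expensive and silhouettes are hard to get right; without seeing the reference's shaders one can only guess which combination it uses.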

Firstly, I checked that all of my tiles were touching when uploading my data -> all okay; definitely the correct UV coordinates and data sent, with no obvious rounding errors.

Secondly, I disabled MSAA and checked whether it was the UV coordinates bleeding or the floating-point positions being off -> the UV coordinates seemed to be bleeding in the texture atlas.

I was surprised by what happened when I wrote the following simple fragment shader (to test whether v_uv stays within the UV range I expect):

Code :

#version 330

uniform sampler2D tex;

in vec2 v_uv;
out vec4 f_color;

void main() {
    vec4 color = texture(tex, v_uv);
    float tile_size = 128.0 / 2048.0;
    float top_left_x = 256.0 / 2048.0;
    float top_left_y = 256.0 / 2048.0;

    // paint black wherever v_uv falls outside the expected tile rectangle
    if (v_uv.x < top_left_x) {
        color = vec4(0.0, 0.0, 0.0, 1.0);
    }
    if (v_uv.y < top_left_y) {
        color = vec4(0.0, 0.0, 0.0, 1.0);
    }
    if (v_uv.x > top_left_x + tile_size) {
        color = vec4(0.0, 0.0, 0.0, 1.0);
    }
    if (v_uv.y > top_left_y + tile_size) {
        color = vec4(0.0, 0.0, 0.0, 1.0);
    }

    f_color = color;
}

The texture atlas now showed black lines where the bleeding had been. How can that be, and why is the UV being extrapolated slightly outside my primitive's edges, even though I uploaded UV coordinates of (0.125, 0.125) to (0.1875, 0.1875) for every tile (the texture sits in the 2nd column and 2nd row, 128 px of a 2048 px atlas)?
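A common explanation and workaround: even with MSAA off, linear filtering samples a 2x2 texel footprint, so a UV exactly on a tile edge already blends in the neighbouring tile, and mip selection widens that footprint further; with MSAA on, attributes can additionally be evaluated at pixel centers slightly outside the primitive. A frequently used fix is to clamp the UV to a half-texel inset inside the tile before sampling; a sketch, where tileOrigin, tileSize, and atlasSize are illustrative parameters:

```glsl
// Clamp the sampled UV to a half-texel inset inside the tile, so the
// bilinear footprint can never reach into neighbouring atlas tiles.
vec2 clampToTile(vec2 uv, vec2 tileOrigin, vec2 tileSize, float atlasSize)
{
    vec2 halfTexel = vec2(0.5 / atlasSize);
    return clamp(uv, tileOrigin + halfTexel, tileOrigin + tileSize - halfTexel);
}

// Usage for the tile described above (origin 0.125, size 128/2048 = 0.0625):
//   vec4 color = texture(tex, clampToTile(v_uv, vec2(0.125), vec2(0.0625), 2048.0));
```

The alternative is to pad each tile with a border of duplicated edge texels in the atlas itself, which also keeps mipmapping usable.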

Here is the piece of code that generates the shape:

Code cpp:

int rr = 50;
int h = 1000;
int vertices = 25;

beginShape(TRIANGLE_STRIP);
for (int i = 0; i <= vertices; i++) {
    float angle = TWO_PI / vertices;
    float x = sin(i * angle);
    float z = cos(i * angle);
    float u = float(i) / vertices;  // currently unused; presumably meant for texture coordinates
    vertex(x * rr, -h / 2, z * rr);
    vertex(x * rr, +h / 2, z * rr);
}
endShape();

Here is my default vertex shader:

Code glsl:

#version 150
#define PROCESSING_COLOR_SHADER

#ifdef GL_ES
precision mediump float;
#endif

uniform mat4 transform;
uniform mat3 normalMatrix;
uniform float u_time;

in vec4 position;
in vec3 color;
in vec2 texCoord;
in vec4 normal;

out vec3 Color;
out vec2 TexCoord;

void main() {
    vec4 pos = position;
    TexCoord = texCoord;
    Color = color;
    gl_Position = transform * pos;
}

And here is the result:

What I want is to bend the shape along some kind of "spline", or something like a ray, like this:

How can I "bend" the geometry to generate those points of intersection? I assume this is something I can do in the vertex shader, but maybe I'm wrong. Thanks a lot.
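Yes, the vertex shader is a reasonable place for this. One hedged sketch: treat the cylinder's original y coordinate as a parameter t along a curve, evaluate the curve, and move each cross-section's center onto it. The curvePoint() function below is a hypothetical example curve, not your spline; the mapping of y assumes h = 1000 as in the generation code above:

```glsl
#version 150

uniform mat4 transform;
in vec4 position;

// Hypothetical example curve: a sideways sine wave along y.
// Replace this with an evaluation of your actual spline.
vec3 curvePoint(float t)
{
    return vec3(100.0 * sin(t * 3.14159), t * 1000.0 - 500.0, 0.0);
}

void main()
{
    // map y in [-500, 500] (h = 1000) to the curve parameter t in [0, 1]
    float t = (position.y + 500.0) / 1000.0;
    vec3 center = curvePoint(t);

    // keep the ring's cross-section shape, but move its center onto the curve
    vec3 bent = vec3(position.x + center.x, center.y, position.z + center.z);
    gl_Position = transform * vec4(bent, 1.0);
}
```

This version only translates each cross-section, so rings stay horizontal; for strong bends you would also rotate each ring to face along the curve's tangent (a Frenet or parallel-transport frame), which is the same idea with a per-t rotation added.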