Textures in the geometry shader

I have a quad the size of my viewport window, with an equivalently sized and proportioned texture map applied to it. Both the vertex shader and the fragment shader work, and the expected result is output to the screen.

I am trying to write a geometry shader in OpenGL that loads a texture (it’s already been passed through an edge detection algorithm, so I’m left with a black background and white edges) and then queries each pixel of the loaded texture to determine whether or not it is an edge (a white pixel). If it does find an edge, I want the geometry shader to create a point at the corresponding location of the edge pixel, continue along the selected axis until it finds another edge, create a point there, and then emit a line between the two, stored in a buffer.

Is it possible to sample a texture within a geometry shader? If so, how would I go about uploading the image to a uniform and then extracting the information of individual pixels?

Thanks guys, much appreciated!

Alex

The purpose of geometry shaders is to generate primitives (i.e., triangles). Textures aren’t really relevant there.

I understand the usual usage of the geometry shader; I’m just trying to use it marginally differently. What I’m trying to do is something similar to the Hough Transform section at this link:

https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch41.html

I’m just not entirely sure where to start, and the code in the link isn’t particularly well explained…

Here’s what I have so far:

#version 330 
//define the input and output primitives 
layout(triangles) in;
layout (line_strip, max_vertices = 256) out;

//input varyings from vertex shader
in vec4 vertPosition[3];
in vec2 vertTexCoord[3];
uniform sampler2D texImage;

//output points?

//variables needed for main
uint xAxis, yAxis, count, pixelColor;

void main() {
	for (xAxis = 0u; xAxis < 512u; xAxis++){
		for (yAxis = 0u; yAxis < 512u; yAxis++){
			//TODO: query texImage at (xAxis, yAxis) here -- not sure how
		}
	}
}

If anybody can give any pointers, that would be greatly appreciated :slight_smile:

Alex

You’re right.

From the ARB_geometry_shader4 specs:

Geometry shaders have the ability to do a lookup into a texture map, if supported by the GL implementation. The maximum number of texture image units available to a geometry shader is MAX_GEOMETRY_TEXTURE_IMAGE_UNITS_ARB; a maximum number of zero indicates that the GL implementation does not support texture accesses in geometry shaders.

So ensure that MAX_GEOMETRY_TEXTURE_IMAGE_UNITS_ARB is not 0 on your implementation.
I skimmed the specs, and at first glance it looks like there are a few tricks involved.

Yes. However, implicit derivatives aren’t available outside fragment shaders, so functions which depend upon them will always sample from the base mipmap level.

The same as for any other shader.

Create a texture, bind it to a specific texture image unit, store the index of that unit (e.g. 0 for GL_TEXTURE0) in a uniform variable with glUniform1i(). The GLSL variable must be of a sampler type which matches the texture’s target (e.g. sampler2D for GL_TEXTURE_2D).

Within the shader you can use any of the texture lookup functions listed in the GLSL specification. If you want to retrieve texels using integer array indices, use texelFetch(). If you want to use a normalised coordinate vector, use texture(), textureLod(), textureGrad() etc.
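For example, a minimal (and untested) sketch of a geometry shader doing both kinds of lookup might look like this; edgeTex is just a placeholder name for the sampler uniform, and vertTexCoord is the per-vertex texture coordinate coming from the vertex shader:

#version 330

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec2 vertTexCoord[3];        // texture coordinates from the vertex shader

uniform sampler2D edgeTex;      // set to the texture unit's index with glUniform1i()

void main()
{
	for (int i = 0; i < 3; ++i) {
		// integer texel indices with an explicit mip level
		ivec2 size   = textureSize(edgeTex, 0);
		vec4  texelA = texelFetch(edgeTex, ivec2(vertTexCoord[i] * vec2(size)), 0);

		// normalised coordinates with an explicit LOD (no implicit derivatives in a GS)
		vec4  texelB = textureLod(edgeTex, vertTexCoord[i], 0.0);

		// simple pass-through of the input triangle
		gl_Position = gl_in[i].gl_Position;
		EmitVertex();
	}
	EndPrimitive();
}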

Thank you very much guys! I really appreciate it :slight_smile:

So far, I’ve come up with this:


#version 330 
precision mediump float;
//define the input and output primitives 
layout(triangles) in;
layout (line_strip, max_vertices = 256) out;

//input varyings from vertex shader
in vec4 vertPosition[3];
in vec2 vertTexCoord[3];
uniform sampler2D texImage;

//output points
out vec4 outVertPos;
out vec2 outVert_texCoord;
//variables needed for main
int xAxis, yAxis, count;
ivec2 pixelCoord;
vec4 pixelColor;
float xVertCoord, yVertCoord;

void main() {
	count = 0;
	for (xAxis = 0; xAxis < 512; xAxis++){
		for (yAxis = 0; yAxis < 512; yAxis++){
			//set integer values for the texel fetch function
			pixelCoord.x = xAxis;
			pixelCoord.y = yAxis;
			//retrieve colour of particular texel
			pixelColor = texelFetch(texImage, pixelCoord, 0);

			//check red channel for a white edge pixel
			if (pixelColor.r == 1.0) {
				xVertCoord = float(xAxis) / 511.0; //normalise coordinates of output points (not sure if this is correct)
				yVertCoord = float(yAxis) / 511.0;

				outVertPos = vec4(xVertCoord, yVertCoord, 0.0, 1.0);

				gl_Position = outVertPos;
				EmitVertex();
				count = count + 1; 
				//end the line strip after every two vertices so a segment is emitted
				if (count == 2){
					EndPrimitive();
					count = 0;

				}

			}

		}
	}
}

What do you guys think of this? I’m fairly new to OpenGL, so whilst it looks right to me, I don’t really know what does or doesn’t work…
Again, thanks very much for your help guys!

Alex

I would say that this kind of looping over all the texel values will certainly lead to very poor performance. The double loop makes things even worse.

An edge links two vertices, so the texture at a vertex’s position should also be white if that vertex belongs to an edge. Querying the texture at the texture coordinate of each vertex of each edge of the input triangle should therefore be more than enough.
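Something along these lines, as a rough untested sketch (reusing the vertTexCoord input and texImage sampler from your shader):

#version 330

layout(triangles) in;
layout(line_strip, max_vertices = 6) out;

in vec2 vertTexCoord[3];

uniform sampler2D texImage;

//returns true when the edge texture is white at this texture coordinate
bool isEdge(vec2 uv)
{
	return textureLod(texImage, uv, 0.0).r > 0.5;
}

void main()
{
	//walk the three edges of the input triangle
	for (int i = 0; i < 3; ++i) {
		int j = (i + 1) % 3;

		//only emit the edge if both of its endpoints lie on a detected edge
		if (isEdge(vertTexCoord[i]) && isEdge(vertTexCoord[j])) {
			gl_Position = gl_in[i].gl_Position;
			EmitVertex();
			gl_Position = gl_in[j].gl_Position;
			EmitVertex();
			EndPrimitive();
		}
	}
}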

Finally, you can have a look at this paper, which seems to do what you want to do.

Thank you Silence!

The paper itself seems to do the opposite of what I want to do, starting from geometry and then rendering an image, whereas I want to try and get geometry from an image. I’ll see if it can kickstart the flow of some creative juices though. :slight_smile:

I thought maybe I could use some kind of edge detection convolution kernel in the fragment shader to output gl_FragCoord to a VBO, and then use the geometry shader to link those vertices into lines. I can then find the intersection point of the longest lines in both the X and Y axes and output that as the centre of my shape.

Guys, I’ve been trying to post this same post for a while now, but it just won’t seem to work; it says something about URLs?

Any thoughts on what I’m doing wrong?

Thanks

Unfortunately I didn’t find anything more relevant…

As for the URLs, I think you need 5 posts here before you can include links.

Edge detection in the fragment shader (or a compute shader) is a viable option. But that will give you a set of points in unspecified order; you’ll need to organise the points into line strips, and a geometry shader isn’t much use for that.
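For reference, a rough (untested) sketch of what the edge-detection pass could look like in the fragment shader, using a Sobel kernel; srcImage and fragTexCoord are placeholder names for the source texture and the interpolated texture coordinate:

#version 330

in vec2 fragTexCoord;
out vec4 fragColor;

uniform sampler2D srcImage;

void main()
{
	//sample luminance in the 3x3 neighbourhood around the current fragment
	vec2 texel = 1.0 / vec2(textureSize(srcImage, 0));
	float s[9];
	for (int y = -1; y <= 1; ++y) {
		for (int x = -1; x <= 1; ++x) {
			s[(y + 1) * 3 + (x + 1)] =
				dot(texture(srcImage, fragTexCoord + vec2(x, y) * texel).rgb,
				    vec3(0.299, 0.587, 0.114));
		}
	}

	//Sobel gradients
	float gx = (s[2] + 2.0 * s[5] + s[8]) - (s[0] + 2.0 * s[3] + s[6]);
	float gy = (s[6] + 2.0 * s[7] + s[8]) - (s[0] + 2.0 * s[1] + s[2]);

	//white where the gradient magnitude exceeds a threshold, black elsewhere
	float edge = step(0.5, length(vec2(gx, gy)));
	fragColor = vec4(vec3(edge), 1.0);
}

Each white fragment then corresponds to an edge pixel whose position still has to be collected and organised afterwards.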

GClements, I tried to do it and then ran into that problem… It’s getting too late for me to keep trying to implement this for my dissertation, so I will have to write the report with what I’ve got so far (which unfortunately isn’t much). My knowledge of sorting algorithms is limited; is there any particular one you guys would suggest?

Additionally, there would be an output of a vec4(x, y, z, 1/w) for each fragment. Could they not be sorted so that the vec4s with the same value of x are stored next to each other, and then those could be passed through the geometry shader as lines? I’m just throwing stuff at the wall and seeing what’ll stick at this point…

Thanks

Alex

For sorting on the GPU, either even-odd mergesort or bitonic sort (both are O(n·log²(n)), but bitonic sort has better locality). But that wouldn’t necessarily help much; the hard part is collating the points into line segments.

They could, but that isn’t going to give you anything resembling a correct result.

Also, you would probably want to avoid having to order the points as they’re generated, as that is going to harm parallelism and thus performance.

If the edge-detection algorithm involves determining the line equation for the edge, then I’d suggest clustering (e.g. k-means) based upon that. Points can then be sorted based upon distance along the line (i.e. the dot product of the point’s position and the edge’s tangent vector).
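As a rough illustration of that sort key (the names are just placeholders):

//signed distance of 'point' along the fitted edge line
//'origin' is any point on the line, 'tangent' its normalised direction
float edgeSortKey(vec2 point, vec2 origin, vec2 tangent)
{
	return dot(point - origin, tangent);
}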