Part of the Khronos Group
OpenGL.org


Thread: Textures in the geometry shader

  1. #1
    Newbie Newbie
    Join Date
    May 2017
    Posts
    5

    Textures in the geometry shader

    I have a quad, the size of my viewport window that has an equivalently sized and proportioned texture map applied to it. Both the vertex shader and the fragment shader work, and the expected result is output to the screen.

    I am trying to write a geometry shader in OpenGL that loads a texture (it's already been passed through an edge detection algorithm, so I'm left with a black background and white edges), and then queries each pixel of the loaded texture to determine whether or not there is an edge (a white pixel). If it does find an edge, I want the geometry shader to create a point at the corresponding location of the edge pixel, continue along the selected axis until it finds another edge, create a point there, and then emit a line between the two, stored in a buffer.

    Is it possible to sample a texture within a geometry shader? If so, how would I go about uploading the image to a uniform and then extracting the information for individual pixels?

    Thanks guys, much appreciated!

    Alex

  2. #2
    Member Regular Contributor
    Join Date
    Jul 2012
    Posts
    420
    The purpose of geometry shaders is to generate primitives (i.e., triangles). Textures aren't really their concern.

  3. #3
    Newbie Newbie
    Join Date
    May 2017
    Posts
    5
    I understand the usage of the geometry shader, I'm just trying to use it marginally differently. What I'm trying to do is something similar to the Hough Transform section at this link:

    https://developer.nvidia.com/gpugems...ems3_ch41.html

    I'm just not entirely sure where to start, and the code in the link isn't particularly well explained...

    Here's what I have so far:
    Code :
    #version 330
    //define the input and output primitives
    layout(triangles) in;
    layout(line_strip, max_vertices = 256) out;
     
    //input varyings from vertex shader
    in vec4 vertPosition[3];
    in vec2 vertTexCoord[3];
    uniform sampler2D texImage;
     
    //output points?
     
    //variables needed for main
    uint xAxis, yAxis, count;
    vec4 pixelColor;
     
    void main() {
    	for (xAxis = 0u; xAxis < 512u; xAxis++){
    		for (yAxis = 0u; yAxis < 512u; yAxis++){
    			//a sampler can't be called like a function; fetch the texel instead
    			pixelColor = texelFetch(texImage, ivec2(xAxis, yAxis), 0);
    			if (pixelColor.r == 1.0){
    				//TODO: emit a vertex here
    			}
    		}
    	}
    }

    If anybody can give me any pointers, that would be greatly appreciated.

    Alex

  4. #4
    Member Regular Contributor
    Join Date
    Jul 2012
    Posts
    420
    You're right.

    From the GS4 specs:

    Geometry shaders have the ability to do a lookup into a texture map, if
    supported by the GL implementation. The maximum number of texture image
    units available to a geometry shader is
    MAX_GEOMETRY_TEXTURE_IMAGE_UNITS_ARB; a maximum number of zero indicates
    that the GL implementation does not support texture accesses in geometry
    shaders.
    So check that MAX_GEOMETRY_TEXTURE_IMAGE_UNITS_ARB is not 0 on your implementation.
    I skimmed through the specs, and at first glance it looks like there can be some tricks involved.

  5. #5
    Senior Member OpenGL Guru
    Join Date
    Jun 2013
    Posts
    2,407
    Quote Originally Posted by Un_dead
    Is it possible to sample a texture within a geometry shader?
    Yes. However, implicit derivatives aren't available outside fragment shaders, so functions which depend upon them will always sample from the base mipmap level.

    Quote Originally Posted by Un_dead
    If so, how would I go about uploading the image to a uniform and hen extracting the information of individual pixels?
    The same as for any other shader.

    Create a texture, bind it to a specific texture image unit, store the index of that unit (e.g. 0 for GL_TEXTURE0) in a uniform variable with glUniform1i(). The GLSL variable must be of a sampler type which matches the texture's target (e.g. sampler2D for GL_TEXTURE_2D).

    Within the shader you can use any of the texture query functions listed in the GLSL specification. If you want to retrieve texels using integer array indices, use texelFetch(). If you want to use a normalised coordinate vector, use texture(), textureLod(), textureGrad() etc.
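    If it helps to see the relationship between the two addressing styles concretely: the sketch below is plain C, nothing OpenGL-specific. `texel_index` is a made-up helper, and it assumes nearest filtering with clamp-to-edge; it mimics how a normalised `texture()` coordinate maps to the integer texel index that `texelFetch()` takes directly.

    ```c
    #include <assert.h>
    #include <math.h>

    /* Hypothetical helper: map a normalised coordinate u in [0,1] to the
     * integer texel index texelFetch() would use, for a texture of the
     * given size (index = floor(u * size), clamped to the valid range). */
    static int texel_index(float u, int size)
    {
        int i = (int)floorf(u * (float)size);
        if (i < 0)        i = 0;
        if (i > size - 1) i = size - 1;
        return i;
    }

    int main(void)
    {
        /* texture() with u = 0.5 on a 512-wide texture lands on texel 256 */
        assert(texel_index(0.5f, 512) == 256);
        /* u = 1.0 clamps to the last texel, 511 */
        assert(texel_index(1.0f, 512) == 511);
        /* conversely, texel i is centred at (i + 0.5) / size */
        assert(texel_index((256 + 0.5f) / 512.0f, 512) == 256);
        return 0;
    }
    ```

    The practical upshot: if you already know which texel you want (as in a full-image scan), `texelFetch()` with integer indices avoids the conversion entirely.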

  6. #6
    Newbie Newbie
    Join Date
    May 2017
    Posts
    5
    Thank you very much guys! I really appreciate it

    So far, I've come up with this:

    Code :
    #version 330
    precision mediump float;
    //define the input and output primitives
    layout(triangles) in;
    layout(line_strip, max_vertices = 256) out;
     
    //input varyings from vertex shader
    in vec4 vertPosition[3];
    in vec2 vertTexCoord[3];
    //the edge image is an ordinary colour texture, so sampler2D (not isampler2D)
    uniform sampler2D texImage;
     
    //output points
    out vec4 outVertPos;
    out vec2 outVert_texCoord;
    //variables needed for main
    uint xAxis, yAxis, count;
    ivec2 pixelCoord;
    vec4 pixelColor;
    float xVertCoord, yVertCoord;
     
    void main() {
    	count = 0u;
    	for (xAxis = 0u; xAxis < 512u; xAxis++){
    		for (yAxis = 0u; yAxis < 512u; yAxis++){
    			//set integer values for the texel fetch function
    			pixelCoord = ivec2(xAxis, yAxis);
    			//retrieve colour of particular texel
    			pixelColor = texelFetch(texImage, pixelCoord, 0);
     
    			if (pixelColor.r == 1.0) {
    				//check red channel
    				//normalise to [0,1], then map to [-1,1] clip space
    				xVertCoord = float(xAxis) / 511.0 * 2.0 - 1.0;
    				yVertCoord = float(yAxis) / 511.0 * 2.0 - 1.0;
     
    				outVertPos = vec4(xVertCoord, yVertCoord, 0.0, 1.0);
     
    				gl_Position = outVertPos;
    				EmitVertex();
    				count = count + 1u;
    				//every second vertex, end the strip so pairs form segments
    				if (count == 2u){
    					EndPrimitive();
    					count = 0u;
    				}
    			}
    		}
    	}
    }

    What do you guys think of this? I'm fairly new to OpenGL, so whilst it looks right to me, I don't really know what does or doesn't work...
    Again, thanks very much for your help guys!

    Alex
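    As a sanity check on the pairing logic before debugging it on the GPU, here is a rough CPU translation in plain C (`pair_edges` and the 8-pixel row are made up for illustration): it scans one row, treats 1.0 as a white edge texel, and closes a segment on every second hit, the way the `count`/`EndPrimitive()` branch intends. Note the loop condition uses `<` rather than `=`, and the comparison uses `==`.

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* One row of a hypothetical edge texture: 1.0 = white edge pixel,
     * 0.0 = black background (what pixelColor.r would hold). */

    /* Scan a row and pair successive edge pixels into line segments,
     * mirroring the geometry shader's count/EndPrimitive() logic.
     * Returns the number of complete segments; writes endpoint indices
     * into segs[] as (start, end) pairs. */
    static size_t pair_edges(const float *row, size_t len, size_t *segs)
    {
        size_t nsegs = 0, count = 0, start = 0;
        for (size_t x = 0; x < len; x++) {      /* note: <, not = */
            if (row[x] == 1.0f) {
                count++;
                if (count == 1) {
                    start = x;                  /* first endpoint of a segment */
                } else {                        /* count == 2: close the segment */
                    segs[nsegs * 2]     = start;
                    segs[nsegs * 2 + 1] = x;
                    nsegs++;
                    count = 0;
                }
            }
        }
        return nsegs;
    }

    int main(void)
    {
        /* edges at indices 1, 4, 5, 7 -> segments (1,4) and (5,7) */
        const float row[8] = { 0, 1, 0, 0, 1, 1, 0, 1 };
        size_t segs[8] = { 0 };
        size_t n = pair_edges(row, 8, segs);
        assert(n == 2);
        assert(segs[0] == 1 && segs[1] == 4);
        assert(segs[2] == 5 && segs[3] == 7);
        return 0;
    }
    ```

    One thing this makes visible: an odd number of edge pixels in a row leaves a dangling vertex, which the shader version would also need to handle.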

  7. #7
    Member Regular Contributor
    Join Date
    Jul 2012
    Posts
    420
    I would say that this kind of looping over all the values will certainly lead to very poor performance. The nested loop makes things even worse.

    An edge links two vertices, so if a vertex belongs to an edge, the texture at that vertex's position should be white too. Querying the texture at the texture coordinates of each vertex of each edge of the input triangle should therefore be enough.

    Finally, you can have a look at this paper, which seems to do what you want to do.

  8. #8
    Newbie Newbie
    Join Date
    May 2017
    Posts
    5
    Thank you Silence!

    The paper itself seems to do the opposite of what I want to do, starting from geometry and then rendering an image, whereas I want to try and get geometry from an image. I'll see if it can kickstart the flow of some creative juices though.

    I thought maybe I could use some kind of edge-detection convolution kernel in the fragment shader to output gl_FragCoord to a VBO, and then use the geometry shader to link those vertices into lines. I can then find the intersection point of the longest lines in both the X and Y axes and output that as the centre of my shape.

    Guys, I've been trying to post this same post for a while now, it just won't seem to work; it says something about URLs?

    Any thoughts on what I'm doing wrong?

    Thanks
    Last edited by Un_dead; 05-12-2017 at 07:27 AM.

  9. #9
    Member Regular Contributor
    Join Date
    Jul 2012
    Posts
    420
    Unfortunately I didn't find anything more relevant...

    As for the URL, I think you need 5 posts here before you can post links.

  10. #10
    Senior Member OpenGL Guru
    Join Date
    Jun 2013
    Posts
    2,407
    Quote Originally Posted by Un_dead
    I thought maybe I could use some kind of edge-detection convolution kernel in the fragment shader to output gl_FragCoord to a VBO, and then use the geometry shader to link those vertices into lines. I can then find the intersection point of the longest lines in both the X and Y axes and output that as the centre of my shape.
    Edge detection in the fragment shader (or a compute shader) is a viable option. But that will give you a set of points in unspecified order; you'll need to organise the points into line strips, and a geometry shader isn't much use for that.
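    A rough sketch of that organising step, in plain C (the `Point` type and the sample coordinates are made up): sort the points the fragment pass produced by row and then by column, after which neighbouring entries on the same scanline can be paired into segments on the CPU, with no geometry shader involved.

    ```c
    #include <assert.h>
    #include <stdlib.h>

    /* Hypothetical point type: the (x, y) window coordinates an edge-detecting
     * fragment pass might have written out, in arbitrary order. */
    typedef struct { int x, y; } Point;

    /* Order by row first, then by column, so points on the same scanline
     * become adjacent in the array. */
    static int cmp_point(const void *a, const void *b)
    {
        const Point *p = a, *q = b;
        if (p->y != q->y) return p->y - q->y;
        return p->x - q->x;
    }

    int main(void)
    {
        /* unordered edge points, as they might come back from the GPU */
        Point pts[] = { {7, 1}, {2, 0}, {5, 0}, {1, 1} };
        size_t n = sizeof pts / sizeof pts[0];

        qsort(pts, n, sizeof pts[0], cmp_point);

        /* after sorting, pairing neighbours on the same row yields the
         * segments (2,0)-(5,0) and (1,1)-(7,1) */
        assert(pts[0].x == 2 && pts[0].y == 0);
        assert(pts[1].x == 5 && pts[1].y == 0);
        assert(pts[2].x == 1 && pts[2].y == 1);
        assert(pts[3].x == 7 && pts[3].y == 1);
        return 0;
    }
    ```

    Sorting makes the "unspecified order" problem tractable, but it is inherently a serial/CPU-friendly step, which is part of why a geometry shader is a poor fit for it.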
