Problems with simple Ortho Projection / Shader test

Hope everyone’s doing well!

Don’t know if anyone can point me in the right direction on this. I’m trying to figure out why my geometry shader work isn’t going quite the way I want, so I’ve created a really dumb test shader, and hopefully someone can point out where the math is going wrong. This project uses OpenSceneGraph for the high-level code, so I’m mostly omitting that, except for a few small parts that should be clear since they’re pretty similar to the equivalent OpenGL calls.

What I’m trying to do is:
- Generate a set of 20x20 GL_POINTS: (0,0), (0,1) … (0,19), (1,0), (1,1) … (1,19), … (19,0), (19,1) … (19,19) (see the sketch after this list)
- Render them orthographically on a 100x100 viewport
- Use a vertex/geometry shader to shift each POINT over by one pixel (keep in mind this is just to see if I can wrestle the shader into doing what I want…)
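
For reference, the grid itself is built roughly like this (a minimal sketch of the OSG setup I'm omitting; makePointGrid and GRID_SIZE are just illustrative names, not the actual code):

#include <osg/Geode>
#include <osg/Geometry>

// Illustrative helper: builds the 20x20 grid of GL_POINTS described above.
osg::ref_ptr<osg::Geode> makePointGrid() {
    const int GRID_SIZE = 20;

    // one vertex per grid cell: (0,0), (0,1) ... (19,19)
    osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array;
    for (int x = 0; x < GRID_SIZE; ++x)
        for (int y = 0; y < GRID_SIZE; ++y)
            vertices->push_back(osg::Vec3(x, y, 0.0f));

    osg::ref_ptr<osg::Geometry> geometry = new osg::Geometry;
    geometry->setVertexArray(vertices);
    geometry->addPrimitiveSet(
        new osg::DrawArrays(GL_POINTS, 0, vertices->size()));

    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    geode->addDrawable(geometry);
    return geode;
}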

Problems I’ve had:
- Using "gl_Position = gl_Vertex" in the vertex shader and "gl_Position = gl_PositionIn[0]" in the geometry shader gives me a single pixel in the center of the 100x100 image
- Using "gl_Position = ftransform()" in the vertex shader and "gl_Position = gl_PositionIn[0]" in the geometry shader gives me all the points in the correct positions (a square in the bottom left corner)

I don't understand why gl_Position = gl_Vertex doesn't work when using an ortho projection (I know I've seen this done before...). Why does it suddenly transform everything to the center? ftransform() working makes sense, but I don't get why gl_Vertex behaves this way.

 - Using "gl_Position = gl_Vertex" and "gl_Position = gl_PositionIn[0] + vec4(1., 0., 0., 0.)" gives me a blank image.
 - Using "gl_Position = ftransform()" and gl_Position = gl_PositionIn[0] + vec4(1., 0., 0., 0.)" gives me the points translated, but not by one pixel; they're translated by what looks like about 40 pixels.  Rendering a few extra points using "gl_Position = gl_PositionIn[0] + vec4(.41, .25, 0., 0.);" and "gl_Position = gl_PositionIn[0] + vec4(.81, .5, 0., 0.);" renders two more squares with their horizontal edges just touching.  I'm not sure what to make of this?

I know this should in theory be possible because I’ve seen it done in other code, but maybe I missed something in how a matrix was getting set up? (I know this is OSG code, not OpenGL, but I think the principles of what it’s doing are similar enough to be okay as a reference, since the focus of what I’m doing is shader work):



//!
//!  initView
//!
//!  - Initializes the graphics context/traits.
//!  - Sets the camera's projection and view parameters.
//!  - Adds the topNode as the sceneData
//!
#include <osg/GraphicsContext>
#include <osgViewer/Viewer>
#include <sstream>
#include <string>

using namespace osg;
using namespace std;

void initView( ref_ptr<osgViewer::Viewer> view, ref_ptr<Group> topNode ) {

    ref_ptr<GraphicsContext::Traits> traits = new GraphicsContext::Traits;

    // essentially like GLUT setting the window size/position
    // and buffering options
    traits->width = 100;
    traits->height = 100;
    traits->x = 0;
    traits->y = 0;
    traits->doubleBuffer = true;

    // the view is the view into the scene, setting the
    // graphics context defines the window into which it
    // renders...
    ref_ptr<GraphicsContext> graphicsContext
            = GraphicsContext::createGraphicsContext(traits);

    Camera *camera = view->getCamera();
    camera->setGraphicsContext(graphicsContext);
    camera->setClearColor( Vec4(0., 0., 0., 1.) );
    camera->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    camera->setProjectionMatrix( Matrix::ortho(0, 100, 0, 100, -1, 1));
    camera->setViewport(0, 0, 100, 100);

    //camera->setReferenceFrame( Transform::ABSOLUTE_RF );
    camera->setViewMatrix( Matrix::identity() );
    
    view->setSceneData(topNode);

    return;
}
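
(Not shown in the post: attaching the shaders to the scene. A rough sketch of that part, assuming the usual osg::Program setup; this is illustrative, not the project's actual code:)

ref_ptr<Program> program = new Program;
program->addShader( new Shader(Shader::VERTEX,   getVertexShaderSource()) );
program->addShader( new Shader(Shader::GEOMETRY, getGeometryShaderSource()) );
program->addShader( new Shader(Shader::FRAGMENT, getFragmentShaderSource()) );

// a geometry shader's input/output primitive types and maximum
// output vertex count are declared on the program object
program->setParameter( GL_GEOMETRY_VERTICES_OUT_EXT, 1 );
program->setParameter( GL_GEOMETRY_INPUT_TYPE_EXT,  GL_POINTS );
program->setParameter( GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_POINTS );

topNode->getOrCreateStateSet()->setAttributeAndModes( program );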


My shaders are:


//!
//!  getFragmentShaderSource
//!
//!  - Generate a very simple fragment shader.
//!  - Color each fragment red.  Different from the original yellow,
//!    so I can tell this shader is actually the one running.
//!
const string getFragmentShaderSource() {

    stringstream fragmentShaderSource;

    fragmentShaderSource
    << "void main () {                             \n"
    << "    gl_FragData[0] = vec4(1., 0., 0., 1.); \n"
    << "}                                          \n";

    return fragmentShaderSource.str();

}



//!
//!  getGeometryShaderSource
//!
//!  - Generate a very simple geometry shader.
//!  - Write each vertex exactly one pixel to the right of where
//!    it came in.
//!  - PROBLEM:
//!      - This moves the square significantly to the right instead of one pixel.
//!
const string getGeometryShaderSource() {

    stringstream geometryShaderSource;

    geometryShaderSource
    << "#version 150                                               \n"
    << "#extension GL_EXT_geometry_shader4 : enable                \n"
    << "out vec4 outputPoints;                                     \n"
    << "                                                           \n"
    << "void main () {                                             \n"
    << "    gl_Position = gl_PositionIn[0] + vec4(1., 0., 0., 0.); \n"
    << "    outputPoints = vec4(1., 1., 1., 1.);                   \n"
    << "    EmitVertex();                                          \n"
    << "    EndPrimitive();                                        \n"
    << "}                                                          \n";

    return geometryShaderSource.str();

}



//!
//!  getVertexShaderSource
//!
//!  - PROBLEM HERE?
//!  - Generate a very simple vertex shader.
//!  - Not sure why this isn't working:
//!       - gl_Vertex + geometry transform leaves me with a blank image (tried adjusting the z-value)
//!       - gl_Vertex + no geometry transform gives me a single point in the center
//!       - ftransform produces perfectly placed vertices
//!  - Seems to make sense that ftransform would, but why wouldn't
//!    gl_Vertex?  I've seen it used in other shaders before doing
//!    similar things.
//!
const string getVertexShaderSource() {

    stringstream vertexShaderSource;

    vertexShaderSource
    << "void main () {                  \n"
    //<< "    gl_Position = gl_Vertex;  \n"
    << "    gl_Position = ftransform(); \n"
    << "}                               \n";

    return vertexShaderSource.str();

}

AHA… after playing around with it for about 24 hours, I think I finally get it.

The example I saw with someone using gl_Vertex was rendering to a TextureRectangle, which doesn’t use normalized coordinates, whereas I’m trying to render to the screen, which does. So all of my points were ending up in places I wasn’t expecting when I went that route.

I’ve figured out that I can use gl_Vertex and then normalize the points in the geometry shader based on the screen size, e.g. something like:


    // map pixel coordinates [0, SCREENW] x [0, SCREENH] to NDC [-1, 1]
    float x = ((gl_PositionIn[0].x * 2.) - SCREENW)/SCREENW;
    float y = ((gl_PositionIn[0].y * 2.) - SCREENH)/SCREENH;
    gl_Position = vec4(x, y, 0., 1.);

for a literal point-for-point placement, or with an offset added if I want it translated.
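
This also explains the numbers from earlier: NDC spans 2.0 units across the 100-pixel viewport, so an offset of 1.0 is half the viewport (50 pixels, close to the ~40 I eyeballed), 0.41 is about 20.5 pixels (right at the 20-pixel width of the square, hence the edges just touching), and one pixel is 2.0/100 = 0.02. Here's a sketch of the full geometry shader with the one-pixel shift I was originally after, assuming SCREENW and SCREENH get supplied by the application (e.g. as uniforms):

#version 150
#extension GL_EXT_geometry_shader4 : enable

// assumed to be fed in by the application, e.g. via osg::Uniform
uniform float SCREENW;
uniform float SCREENH;

void main () {
    // map pixel coordinates [0, SCREENW] x [0, SCREENH] to NDC [-1, 1]
    float x = ((gl_PositionIn[0].x * 2.) - SCREENW)/SCREENW;
    float y = ((gl_PositionIn[0].y * 2.) - SCREENH)/SCREENH;

    // one pixel is 2.0/SCREENW in NDC, so this shifts right by one pixel
    gl_Position = vec4(x + 2./SCREENW, y, 0., 1.);
    EmitVertex();
    EndPrimitive();
}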
