Blit vs Render from texture

I would like an explanation for a discrepancy in the output I get when blitting versus sampling the same texture in a shader for simple rendering.

My blit operation simply copies the FBO to the default framebuffer, which has the same dimensions:

      glBlitFramebuffer(0, 0, width_, height_, 0, 0, width_, height_,
                        GL_COLOR_BUFFER_BIT, GL_NEAREST);

Everything renders fine.
The other operation also renders, but very differently, as if the camera were looking from a different perspective. It is done with the following shaders.

vert:

#version 450 core
layout (location = 0) in vec3 vert;
layout (location = 1) in vec2 texCoord;
out vec2 fTexCoord;
void main()
{
  fTexCoord = texCoord;
  gl_Position = vec4(vert, 1);
}

frag:


#version 450 core
uniform sampler2D tex;
in vec2 fTexCoord;
out vec4 fColor;
void main()
{
  fColor = texture(tex, fTexCoord);
}

Inputs to the shader are:

std::vector<GLfloat> verts = {0,0,0, 0,1,0, 1,1,0, 1,0,0};
std::vector<GLfloat> tex_coords = {0,0, 0,1, 1,1, 1,0};
std::vector<GLuint> indices = {0, 1, 2, 2, 3, 0};

Finally, from the VAO setup:


      vbos_.resize(3);
      glGenBuffers(3, vbos_.data());
      glBindBuffer(GL_ARRAY_BUFFER, vbos_[0]);
      glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLfloat),
                   verts.data(), GL_DYNAMIC_DRAW);
      glEnableVertexAttribArray(0);
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);

      glBindBuffer(GL_ARRAY_BUFFER, vbos_[1]);
      glBufferData(GL_ARRAY_BUFFER, tex_coords.size() * sizeof(GLfloat),
                   tex_coords.data(), GL_DYNAMIC_DRAW);
      glEnableVertexAttribArray(1);
      glVertexAttribPointer(1, 2, GL_FLOAT, GL_TRUE, 0, NULL); // note: the normalized flag has no effect for GL_FLOAT data

      glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbos_[2]);
      glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint),
                   indices.data(), GL_STATIC_DRAW);
      glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

I render:


        GLuint rgb_handle = get_tex_handle(); // returns the fbo texture handle
        program_->SaveBind(); // bind the program
        // glEnable(GL_TEXTURE_2D) removed: fixed-function state, invalid in a core profile
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, rgb_handle);
        program_->SetUniform("tex", 0);
        glBindVertexArray(vao_);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbos_.back());
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        glBindVertexArray(0);
        glBindTexture(GL_TEXTURE_2D, 0);

And it produces the same data, but in a different location on my main screen. Is there anything obviously wrong here?

Blitting just copies what is stored in one framebuffer into another. What can vary are the location and area of the pixels, plus how the filtering is done when the source and destination sizes differ.

When you render to a texture, a lot more is involved. You need to take into account the input and output viewports, plus the input and output projection/modelview matrices. So if you want the same result as with blitting, the viewports must match, and so must the projection matrix (which should generally be an orthographic one).
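For instance, a minimal sketch of such an orthographic matrix for a quad whose vertices are given in pixel coordinates over a w x h viewport (illustrative only, using Eigen; not taken from the code above):


  // Sketch: orthographic projection mapping pixel coordinates
  // (0..w, 0..h) into clip space, so a quad specified in pixels
  // fills the viewport exactly.
  Eigen::Matrix4f ortho = Eigen::Matrix4f::Identity();
  ortho(0,0) = 2.0f / w;  ortho(0,3) = -1.0f;  // x: [0,w] -> [-1,1]
  ortho(1,1) = 2.0f / h;  ortho(1,3) = -1.0f;  // y: [0,h] -> [-1,1]
  ortho(2,2) = -1.0f;                          // z sign flip; irrelevant for a flat quad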

This will only render to the upper-right quadrant of the viewport. To fill the viewport, use:


std::vector<GLfloat> verts = {-1,-1,0, -1,1,0, 1,1,0, 1,-1,0};

Alternatively, transform the vertices in the vertex shader:


  gl_Position = vec4(vert.xy*2-1, 0, 1);

Thanks to both. GClements, you are absolutely right; that was an obvious problem. I still see an offset, and I am convinced it has to do with my viewports, as Silence mentions.

You need to take into account the input and output viewports, plus the input and output projection/modelview matrices.

Since I am rendering only to the FBO, then rendering the texture directly in the default framebuffer, I should not have a problem with the projection and modelview matrices, correct? The viewports, however, are a different story.

I was worried about viewports from the beginning, as my goal is to take a 3D cloud captured by a camera and project it back into the camera frame exactly as it would appear in the original image. I have my ProjectionMatrix set up correctly as far as I can tell, but I am very concerned about the effects of the viewport and do not fully know how to fix or debug viewport (warping?) side effects.

If the FBO and the default framebuffer aren’t the same size, you need to explicitly set the viewport when switching between them.

The viewport is a property of the current context, not a framebuffer. The first time a context is bound to a window, the viewport will automatically be set to the extent of the window, but it won’t subsequently be changed unless you explicitly change it with e.g. glViewport().
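For example, a minimal sketch of that switching pattern (fbo_w, fbo_h, win_w and win_h are placeholder sizes, not from your code):


  glBindFramebuffer(GL_FRAMEBUFFER, fbo);
  glViewport(0, 0, fbo_w, fbo_h);   // match the FBO attachment's size
  // ... render the scene into the FBO ...

  glBindFramebuffer(GL_FRAMEBUFFER, 0);
  glViewport(0, 0, win_w, win_h);   // restore the window's viewport
  // ... draw the textured quad into the window ...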

The vertex shader you posted doesn’t use any matrices.

[QUOTE=GClements;1290176]If the FBO and the default framebuffer aren’t the same size, you need to explicitly set the viewport when switching between them. The viewport is a property of the current context, not a framebuffer
[/QUOTE]

Ok, I thought a viewport applied to each FBO. I hesitate to go too deep into the weeds of my design (although if anybody is willing to help me with that, I would also appreciate it). I actually have two framebuffers: one renders to a texture and is the same size as the default framebuffer; the other renders the same scene to a texture but from a different viewpoint. That second framebuffer is the size of the original camera, which is smaller than the default framebuffer, and I plan to inlay that smaller texture within the larger one. As I said, the problem is making sure the smaller texture is faithful to what the camera would actually see given its intrinsics and the viewpoint.

It sounds like I will end up needing to set and reset the viewport here; of course I am open to better suggestions. Thanks again.
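For the inlay itself, something like this sketch is what I have in mind (cam_w/cam_h and win_w/win_h are placeholders):


  // Sketch: inlay the camera-sized texture into a corner of the window
  // by restricting the viewport to a sub-rectangle, then restoring it.
  glBindFramebuffer(GL_FRAMEBUFFER, 0);
  glViewport(0, 0, cam_w, cam_h);   // lower-left inlay region
  // ... draw the textured quad; it fills the current viewport ...
  glViewport(0, 0, win_w, win_h);   // back to the whole window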

[QUOTE=GClements;1290176]
The vertex shader you posted doesn’t use any matrices.[/QUOTE]

True enough, I am using Pangolin to set the Projection/ModelView matrix, then passing it to the vertex shader shown below, followed by a trivial fragment shader.


#version 450 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 rgb;
uniform mat4 mvp;
uniform float rgb_alpha;
out VS_FS_INTERFACE {
    flat vec4 vRgb;
} vs_out;
void main()
{
  gl_Position = mvp * vec4(position, 1.0);
  vs_out.vRgb = vec4(rgb, rgb_alpha);
}

I have made a minimum working example to demonstrate the root of my problem, which is the proper projection of points from a cloud to an image, such that the projection matches the image that generated the cloud.

The code depends only lightly on Pangolin and loads files using PCL, but otherwise it doesn't use viewports, and the FBO is the same size as the default framebuffer. Trying to keep it minimal! I am not sure of the best way to attach an example pcd, but I can do so upon request.

#include <pangolin/pangolin.h>
#include <pangolin/gl/gldraw.h>
#include <chrono>
#include <pcl/io/png_io.h>
#include <pcl/io/pcd_io.h>
#include <Eigen/Core>


struct FBOTest
{
  FBOTest(int w, int h)
  {
    w_ = w;
    h_ = h;
    vao_ = 0;
    setup();
  }

  ~FBOTest()
  {
    glDeleteFramebuffers(1, &fbo_);
    glDeleteRenderbuffers(1, &depth_);
    glDeleteTextures(1, &rgb_);
    glDeleteVertexArrays(1, &vao_);
    glDeleteBuffers(vbos_.size(), vbos_.data());
  }
  void setup()
  {
      glGenFramebuffers(1, &fbo_);
      glBindFramebuffer(GL_FRAMEBUFFER, fbo_);

      // Color
      glGenTextures(1, &rgb_);
      glBindTexture(GL_TEXTURE_2D, rgb_);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, w_, h_, 0, GL_RGB, GL_FLOAT, NULL);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, rgb_, 0);
      glBindTexture(GL_TEXTURE_2D, 0);

      // Depth
      glGenRenderbuffers(1, &depth_);
      glBindRenderbuffer(GL_RENDERBUFFER, depth_);
      glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, w_, h_);
      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_);
      glBindRenderbuffer(GL_RENDERBUFFER, 0);

      std::vector<GLenum> drawbuffers = {GL_COLOR_ATTACHMENT0};
      glDrawBuffers(drawbuffers.size(), drawbuffers.data());

      GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
      if (status != GL_FRAMEBUFFER_COMPLETE){
        if (status == GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT)
          std::cerr<<"ERROR: GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT"<<std::endl;
        else if (status == GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT)
          std::cerr<<"ERROR: GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT"<<std::endl;
        else if (status == GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER)
          std::cerr<<"ERROR: GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER"<<std::endl;
        else if (status == GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER)
          std::cerr<<"ERROR: GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER"<<std::endl;
        else if (status == GL_FRAMEBUFFER_UNSUPPORTED)
          std::cerr<<"ERROR: GL_FRAMEBUFFER_UNSUPPORTED"<<std::endl;
        else if (status == GL_FRAMEBUFFER_UNDEFINED)
          std::cerr<<"ERROR: GL_FRAMEBUFFER_UNDEFINED"<<std::endl;
        else // else-if chain so only one message prints
          std::cerr<<"Framebuffer fail: "<<status<<std::endl;
      }

      std::string fbo_vert =
        "#version 450 core
"
        "layout (location = 0) in vec3 position;"
        "layout (location = 1) in vec3 color;"
        "uniform mat4 mvp;"
        "out vec4 vColor;"
        "void main() {"
        "gl_Position = mvp * vec4(position, 1.0);"
        "vColor = vec4(color, 1.0);"
        "}";

      std::string fbo_frag =
        "#version 450 core
"
        "in vec4 vColor;"
        "layout (location = 0) out vec4 fColor;"
        "void main() {"
        "fColor = vColor;"
        "}";

      prog_.AddShader(pangolin::GlSlVertexShader, fbo_vert, {}, {});
      prog_.AddShader(pangolin::GlSlFragmentShader, fbo_frag, {}, {});
      prog_.Link();

      glBindFramebuffer(GL_FRAMEBUFFER, 0);
  }

  void setup_vao(const std::vector<float>& xyz, const std::vector<uint8_t>& rgb)
  {
    count_ = xyz.size() / 3; // number of points; xyz holds 3 floats per point
    glBindFramebuffer(GL_FRAMEBUFFER, fbo_);
    glGenVertexArrays(1, &vao_);
    glBindVertexArray(vao_);
    vbos_.clear();
    vbos_.resize(2);
    glGenBuffers(2, vbos_.data());

    glBindBuffer(GL_ARRAY_BUFFER, vbos_[0]);
    glBufferData(GL_ARRAY_BUFFER, xyz.size() * sizeof(GLfloat), xyz.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindBuffer(GL_ARRAY_BUFFER, vbos_[1]);
    glBufferData(GL_ARRAY_BUFFER, rgb.size() * sizeof(uint8_t), rgb.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(1, 3, GL_UNSIGNED_BYTE, GL_TRUE, 0, NULL);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glBindVertexArray(0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
  }

  void draw()
  {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo_);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the FBO's own color and depth each frame
    prog_.Bind();
    glBindVertexArray(vao_);
    glDrawArrays(GL_POINTS, 0, count_);
    glBindVertexArray(0);
    glFinish();
    prog_.Unbind();
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
  }

  void blit()
  {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo_);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glBlitFramebuffer(0, 0, w_, h_, 0, 0, w_, h_, GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glReadBuffer(GL_NONE);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
  }

  pangolin::GlSlProgram& get_prog()
  {
    return prog_;
  }

  int w_, h_;
  GLuint vao_, fbo_, rgb_, depth_;
  std::vector<GLuint> vbos_;
  pangolin::GlSlProgram prog_;
  int count_;
};


int main(/*int argc, char* argv[]*/)
{
  int w=640;
  int h=480;
  pangolin::CreateWindowAndBind("Test", w, h);
  FBOTest fbtst(w, h);

  glEnable(GL_DEPTH_TEST);
  glEnable(GL_BLEND);
  glDepthMask(GL_TRUE); // enable depth writes
  glDepthFunc(GL_LESS); // standard z-buffer comparison
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

  float fx=525.0;
  float fy=525.0;
  float cx=319.5;
  float cy=239.5;
  auto proj = pangolin::ProjectionMatrix(w, h, fx, fy, cx, cy, 0.1, 1000);
  auto mv = pangolin::ModelViewLookAt(0, 0, -1, 0, 0, 1, pangolin::AxisNegY);
  auto cam = pangolin::OpenGlRenderState(proj, mv);
  fbtst.get_prog().Bind();
  fbtst.get_prog().SetUniform("mvp", cam.GetProjectionModelViewMatrix());
  fbtst.get_prog().Unbind();

  std::cout<<"w "<<w<<" h "<<h<<" fx "<<fx<<" fy "<<fy<<" cx "<<cx<<" cy "<<cy<<std::endl;
  std::cout<<"Projection mat 
"<<cam.GetProjectionMatrix()<<std::endl;
  std::cout<<"ModelView mat 
"<<cam.GetModelViewMatrix()<<std::endl;
  std::cout<<"MVP mat 
"<<cam.GetProjectionModelViewMatrix()<<std::endl;

  std::string cloud_fname = "test.pcd";
  pcl::PointCloud<pcl::PointXYZRGB> pcd;
  pcl::io::loadPCDFile (cloud_fname, pcd);
  std::vector<float> xyz;
  std::vector<uint8_t> rgb;
  for (const auto& p : pcd.points) {
    xyz.push_back(p.x);
    xyz.push_back(p.y);
    xyz.push_back(p.z);
    rgb.push_back(p.r);
    rgb.push_back(p.g);
    rgb.push_back(p.b);
  }
  fbtst.setup_vao(xyz, rgb);

  while (!pangolin::ShouldQuit()) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    fbtst.draw();
    fbtst.blit();
    pangolin::FinishFrame();
  }

}

Moving the last post to a separate thread, as it is a conceptually different question.

What is this function supposed to do?


auto proj = pangolin::ProjectionMatrix(w, h, fx, fy, cx, cy, 0.1, 1000);

This does not look to be an orthographic projection.
Plus, you don’t really need a modelview matrix, since it should be the identity.

That would be the creation of the projection matrix, which I think is the root of my problem. I have tried alternative formulations by hand, based on the following blogs, without success.
https://strawlab.org/2011/11/05/augmented-reality-with-OpenGL
http://ksimek.github.io/2013/06/03/calibrated_cameras_in_opengl/

They do not seem to be using modern OpenGL, but they claim that the projection matrix they derive is the one I should use in my shader.

It is indeed not an orthographic projection, but rather a perspective projection, intended to project the 3D points into the window just as a real camera would. Should I apply an orthographic projection after the perspective projection? If so, how exactly would I do that?

I completely agree here, although I have tried variations accounting for the fact that e.g. OpenNI uses a z-forward, x-right, y-down convention while OpenGL is z-back, x-right, y-up.

Updated code for creating the projection matrix, which I have tried to get to work, again without success…

  int w=640;
  int h=480;
  float x0=0.0;
  float y0=0.0;
  float skew=0.0;
  float fx=525.0;
  float fy=525.0;
  float cx=319.5;
  float cy=239.5;
  float right = w;
  float left = 0.0;
  float top = h;
  float bottom = 0.0;
  float near = 0.1;
  float far = 20.0;

  // https://strawlab.org/2011/11/05/augmented-reality-with-OpenGL
  // Y-down
// [2*K00/width, -2*K01/width,    (width - 2*K02 + 2*x0)/width,                            0]
// [          0, 2*K11/height, (-height + 2*K12 + 2*y0)/height,                            0]
// [          0,            0,  (-zfar - znear)/(zfar - znear), -2*zfar*znear/(zfar - znear)]
// [          0,            0,                              -1,                            0]

  Eigen::Matrix4f frustum2 = Eigen::Matrix4f::Zero();
  frustum2(0,0) = 2.0*fx/w;
  frustum2(0,1) = -2.0*skew/w;
  frustum2(0,2) = (w - 2.0*cx + 2.0*x0)/w;
  frustum2(1,1) = 2.0*fy/h;
  frustum2(1,2) = (-h + 2.0*cy + 2.0*y0)/h;
  frustum2(2,2) = -(far + near)/(far - near);
  frustum2(2,3) = -2.0*far*near/(far - near);
  frustum2(3,2) = -1.f;

  fbtst.get_prog().Bind();
  //fbtst.get_prog().SetUniform("mvp", cam.GetProjectionModelViewMatrix());
  fbtst.get_prog().SetUniform("mvp", frustum2.data());
  fbtst.get_prog().Unbind();


Finally got it.

As mentioned before, and shown at https://gist.github.com/astraw/1341472#file_projection_math.py, the ModelView matrix is not the identity, but rather the identity with flipped y and z axes. The new mvp is then proj * modelview, as expected. The updated, working fix to the previous post is below.


  Eigen::Matrix4f eye = Eigen::Matrix4f::Identity();
  eye(1,1) = -1; // opengl is y-up, opencv y-down
  eye(2,2) = -1; // opengl is z-back, opencv z-forward

  Eigen::Matrix4f frustum2 = Eigen::Matrix4f::Zero();
  frustum2(0,0) = 2.0*fx/w;
  frustum2(0,1) = -2.0*skew/w;
  frustum2(0,2) = (w - 2.0*cx + 2.0*x0)/w;
  frustum2(1,1) = 2.0*fy/h;
  frustum2(1,2) = (-h + 2.0*cy + 2.0*y0)/h;
  frustum2(2,2) = -(far + near)/(far - near);
  frustum2(2,3) = -2.0*far*near/(far - near);
  frustum2(3,2) = -1.f;

  Eigen::Matrix4f mvp2 = frustum2 * eye;

  fbtst.get_prog().Bind();
  //fbtst.get_prog().SetUniform("mvp", cam.GetProjectionModelViewMatrix());
  fbtst.get_prog().SetUniform("mvp", mvp2);
  fbtst.get_prog().Unbind();

For the original Pangolin attempt, the only problem was my creation of the modelview matrix. Instead of:

  auto mv = pangolin::ModelViewLookAt(0, 0, -1, 0, 0, 1, pangolin::AxisNegY);

it should have been:

  auto mv = pangolin::ModelViewLookAt(0, 0, 0, 0, 0, 1, pangolin::AxisNegY);

It seems I had misinterpreted the first vector as a direction rather than as the camera position (the xyz offset of the modelview matrix).

Thanks all for the help.

I still have a doubt. Unless I misunderstood your original issue, the use of a non-orthographic projection matrix should not have solved your problem.

You are correct that my problem is not entirely solved. When the window has the same dimensions as the original camera, the 3D data projects correctly to the window now that I have corrected my ModelView parameters. When the window dimensions differ from the camera's, I do not get a proper projection. I had expected that, by setting one of my viewports to the original camera dimensions and rendering to a texture with the same dimensions, I would get the correct image in my texture, which could then be a) saved to file, and b) rendered in a viewport of a different size. That assumption was not correct. I know there are similar questions out there, but I have not yet found the kernel of knowledge I need.

My problem was really twofold. One part was properly projecting a 3D scene to an exact recreation of what a camera would see from a given viewpoint (which my last post resolved, and which requires a perspective projection); the other was not quite understanding how to scale properly using viewports (to see the same image in a smaller viewport). I am using an FBO of the original camera dimensions, then taking the resulting texture and rendering it to various differently sized viewports, which seems to be working as expected.
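For goal (a), saving the camera-resolution texture to a file, reading the pixels back could look like the sketch below (cam_w and cam_h are placeholders, and the actual file writing, e.g. via PCL's PNG I/O, is left out):


  // Sketch: read back the FBO's color texture, created as GL_RGB32F,
  // so we fetch floats here.
  std::vector<float> pixels(3 * cam_w * cam_h);
  glBindTexture(GL_TEXTURE_2D, rgb_handle);
  glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_FLOAT, pixels.data());
  glBindTexture(GL_TEXTURE_2D, 0);
  // OpenGL's origin is the lower-left corner, so rows come back
  // bottom-up; flip vertically before writing a conventional image.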