Part of the Khronos Group
OpenGL.org


Thread: Loading a bitmap image to use it as a texture / background on canvas for drawing

  1. #11
    Intern Newbie
    Join Date
    Apr 2014
    Posts
    47
    Quote Originally Posted by thor36
    MtRoad - thank you for your explanations and code. I have tried your code and I still get the "ribbon".

    My bad. Use the first code snippet to fix the ribbon. I must have forgotten to change it in the second.

  2. #12
    Intern Newbie
    Join Date
    Apr 2014
    Posts
    47
    Quote Originally Posted by thor36
    Also now it's back to the problem I had in the beginning, where the whole image consisted only of black and blue hues. The flashing is gone, however.
    Sorry I missed it earlier. With the default GL_MODULATE texture environment, texels from the bound texture are multiplied by the current color during texturing. Add glColor3f(1,1,1) before you draw the textured quad.

    Code :
        glEnable( GL_TEXTURE_2D ); 
        glBindTexture( GL_TEXTURE_2D, texture );
     
        glColor3f(1.0f,1.0f,1.0f); // HERE!
        glBegin (GL_QUADS);
        glTexCoord2d(0.0,0.0); glVertex2d(0.0,0.0);
        glTexCoord2d(1.0,0.0); glVertex2d(1024.0,0.0);
        glTexCoord2d(1.0,1.0); glVertex2d(1024.0,512.0);
        glTexCoord2d(0.0,1.0); glVertex2d(0.0,512.0);
        glEnd();
     
        glDisable(GL_TEXTURE_2D);

  3. #13
    Junior Member Newbie
    Join Date
    Mar 2014
    Posts
    26
    MtRoad, now it works properly, thank you very much for your effort. And I should have paid closer attention to the "ribbon" vertices before just copying your code. It is maybe a little off-topic now, but while we are at it, I may as well get another thing cleared up. The OpenGL (x, y) coordinates for drawing are not like the ones we are used to from mathematics lessons. Is there a built-in function, or maybe a simple method that can be written, that transforms the input x-y coordinates so that they can be passed the way we are used to from maths lessons and still be drawn properly on the screen?

  4. #14
    Intern Newbie
    Join Date
    Apr 2014
    Posts
    47
    Quote Originally Posted by thor36
    Is there a built-in function, or maybe a simple method that can be written, that transforms the input x-y coordinates so that they can be passed the way we are used to from maths lessons and still be drawn properly on the screen?
    I'm not sure I understand your question, so I'll just explain what your code does.

    Vertices go through a series of transforms before they are drawn on screen. You are using the older "fixed-function" pipeline of OpenGL here, which performs several of these steps for you. Even with vertex/geometry/fragment shaders, several of the steps happen regardless.

    Fixed-function pipeline: vertices get "transformed" (multiplied) by matrices with 4 columns and 4 rows. Vertices are first multiplied by the ModelView matrix, which combines the model matrix and a viewing (camera) matrix into one. A model matrix transforms from model space to world space; the view matrix transforms from world space to eye (camera) space. The projection matrix then projects from eye space into "normalized device coordinates", which in OpenGL is a cube from (-1,-1,-1) to (1,1,1), also called the "canonical view volume". Everything outside the canonical view volume gets "clipped"; what remains has its z-coordinate kept only for depth, while x and y get mapped to the screen region you provide. So there isn't a single function to use: it's the combination of all these transforms that produces the output.

    Here are the functions and which parts of the pipeline they affect.

    Code :
    glMatrixMode( GL_PROJECTION )
      - all further matrix operations affect the projection matrix
     
    glMatrixMode( GL_MODELVIEW )
      - all further matrix operations affect the modelview matrix
     
    glViewport
      - changes screen coordinate mapping, most code for
        full screen windows or mobile apps handles this for you
     
    glOrtho
      - Creates a projection matrix that maps a box with the given dimensions
        to the canonical view volume. Note this is a "BOX!", so there is
        no perspective generated: we are just scaling one box onto the cube
        and then dropping the z-coordinate during screen mapping.
     
    gluPerspective
      - Creates a projection matrix that maps a frustum (a pyramid with the
        tip chopped off) to the canonical view volume. Since the smaller face
        of the frustum is closer to the viewer, squeezing the frustum into a
        cube stretches objects near the camera, which we perceive as
        "perspective foreshortening" (close objects get bigger!).
    This is why I changed your glMatrixMode parameter for glOrtho to GL_PROJECTION.

    Now for the answer to the question I think you want. To make vertex (x, y) land on pixel (x, y), just call glOrtho with the bounds of your screen. This makes vertices get sampled at locations that should map closely to physical pixel locations. I'm doing this from memory, so it might not work exactly as written.
    Code :
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    glOrtho(0, screenWidth, 0, screenHeight, 0, 1);
    glMatrixMode( GL_MODELVIEW );  // switch back so your drawing code edits the modelview matrix
    glLoadIdentity();

    For further reading try:
    http://www.realtimerendering.com/ or the OpenGL wiki. The iPhone book is decent (not great), but it has a lot of useful math in it.
