
Thread: drawing individual pixels with opengl

  1. #1
    Junior Member Newbie
    Join Date
    Dec 2012
    Posts
    17

    drawing individual pixels with opengl

    Hi,
    As the title suggests, I want to draw individual pixels to the screen. I've come a long way, but this is the essential code I have now:
    Code :
    int main() {
    	glewExperimental=GL_TRUE; 
    	glewInit();
    	GLuint BUFFER;
    	glGenBuffers(1,&BUFFER);
    	glBindBuffer(GL_PIXEL_UNPACK_BUFFER,BUFFER);
    	glBufferData(GL_PIXEL_UNPACK_BUFFER,WIDTH*HEIGHT,NULL,GL_DYNAMIC_DRAW);
    	unsigned char* buffer_map = (unsigned char*) glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    	fill_buffer(buffer_map, WIDTH, HEIGHT);
    }
     
    void fill_buffer(unsigned char* buffer, unsigned int width, unsigned int height) {
    	for (unsigned int pixelX = 0; pixelX < width; ++pixelX) {
    		for (unsigned int pixelY = 0; pixelY < height; ++pixelY) {
    			buffer[pixelX*height+pixelY] = 255;
    		}
    	}
     
    	glDrawPixels(width,height,GL_RED,GL_UNSIGNED_BYTE,buffer);
    }

    Everything seems to be set up right: the buffer is initialised correctly and it is loaded with the right values (as a test I just wanted to fill my screen with red). However, the screen stays black instead of turning red.
    Does anyone have a suggestion as to why that would be?

  2. #2
    Senior Member OpenGL Pro
    Join Date
    Jan 2012
    Location
    Australia
    Posts
    1,117
    Your code is trying to copy the buffer to itself: you have the buffer bound to receive the data, and in glDrawPixels() you have said it is also the buffer to send.

    Try just allocating your buffer in memory and then calling your fill_buffer routine.
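    For illustration, here is a rough sketch of that approach, assuming a window and GL context already exist and that WIDTH and HEIGHT are defined as in your code. Note that nothing is bound to GL_PIXEL_UNPACK_BUFFER, so glDrawPixels() reads straight from the pointer you pass it:
    Code :
    // plain client-memory buffer; no pixel-unpack buffer is bound,
    // so glDrawPixels() reads directly from this pointer
    unsigned char* pixels = new unsigned char[WIDTH * HEIGHT];
     
    // one GL_RED byte per pixel, all set to full intensity
    for (unsigned int y = 0; y < HEIGHT; ++y)
    	for (unsigned int x = 0; x < WIDTH; ++x)
    		pixels[y * WIDTH + x] = 255;
     
    glDrawPixels(WIDTH, HEIGHT, GL_RED, GL_UNSIGNED_BYTE, pixels);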

  3. #3
    Member Regular Contributor
    Join Date
    Aug 2008
    Posts
    454
    Have you remembered to unmap the buffer before using it?
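    That is, something along these lines (just the pattern, not complete code); the mapping has to be released before the buffer can be used as the source of a pixel transfer:
    Code :
    unsigned char* p = (unsigned char*) glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    // ... write the pixel data through p ...
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);  // release the mapping first
    // only now may the buffer be used as the source of a pixel transfer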

  4. #4
    Junior Member Newbie
    Join Date
    Dec 2012
    Posts
    17
    I did not know you had to unmap the buffer first...
    So here is an update of the code:
    Code :
    int main() {
    	glewExperimental=GL_TRUE; 
    	glewInit();
    	GLuint BUFFER;
    	glGenBuffers(1,&BUFFER);
    	glBindBuffer(GL_PIXEL_UNPACK_BUFFER,BUFFER);
    	glBufferData(GL_PIXEL_UNPACK_BUFFER,WIDTH*HEIGHT,NULL,GL_DYNAMIC_DRAW);
     
    	while (running) {
    		unsigned char* buffer_map = (unsigned char*) glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    		fill_buffer(buffer_map, WIDTH, HEIGHT);
    		glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    		window.display();
    	}
    }
     
    void fill_buffer(unsigned char* buffer, unsigned int width, unsigned int height) {
    	for (unsigned int pixelX = 0; pixelX < width; ++pixelX) {
    		for (unsigned int pixelY = 0; pixelY < height; ++pixelY) {
    			buffer[pixelX*height+pixelY] = 255;
    		}
    	}
     
    	glDrawPixels(width,height,GL_RED,GL_UNSIGNED_BYTE,buffer);
    }

    I wanted to allocate the memory on the graphics card so I would get better performance. I just don't really understand how to get the pixel values from the buffer onto the screen.

  5. #5
    Senior Member OpenGL Pro
    Join Date
    Jan 2012
    Location
    Australia
    Posts
    1,117
    If you are setting individual pixels you are not going to get much performance anyway, but what I suggested is still the quickest. Your code is still copying the buffer to itself; please read how glDrawPixels() works. If you try to update a buffer on the card via mapping, you cannot guarantee that the driver will not need to copy the buffer into system memory for you to update and then copy it back; after that you still copy it to the display buffer. That is going to be slower than updating a buffer in memory and copying it to the display buffer via glDrawPixels().
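    For what it's worth, if you do keep the pixel-unpack buffer: while a PBO is bound, the last argument of glDrawPixels() is interpreted as a byte offset into the bound buffer rather than a client pointer. A sketch of what your loop body would then look like (glDrawPixels() moved out of fill_buffer so it runs after the unmap):
    Code :
    unsigned char* buffer_map = (unsigned char*) glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    fill_buffer(buffer_map, WIDTH, HEIGHT);   // writes pixels only, no GL calls
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
     
    // with the PBO still bound, the pointer argument is a byte offset
    // into the buffer object, so pass 0 to start at its beginning
    glDrawPixels(WIDTH, HEIGHT, GL_RED, GL_UNSIGNED_BYTE, (const GLvoid*) 0);
    window.display();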

  6. #6
    Member Regular Contributor
    Join Date
    Jun 2013
    Posts
    490
    Quote Originally Posted by genzm19
    So here is an update of the code
    How are you creating your drawing surface? Before you can draw anything, you either need to use the platform's windowing API to create a window and service events, or use a cross-platform library: either a lightweight graphics-oriented one such as GLUT, GLFW, SDL or SFML, or a full GUI toolkit such as Qt, GTK or wxWidgets.
    There's no point in getting into details such as buffer objects until you can actually get something on screen.

    Once you have your overall program framework working, you would create and fill the buffer in the initialisation function (after you have a window and GL context), then call glBindBuffer() and glDrawPixels() in the display (aka redraw, expose) function.
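    For example, with GLFW the skeleton looks roughly like this (a sketch with error checking omitted; WIDTH and HEIGHT are assumed to be defined as in your code, and note that glewInit() must come after the context is current, not before as in your snippets):
    Code :
    #include <GL/glew.h>
    #include <GLFW/glfw3.h>
     
    int main() {
    	glfwInit();
    	GLFWwindow* window = glfwCreateWindow(WIDTH, HEIGHT, "pixels", NULL, NULL);
    	glfwMakeContextCurrent(window);   // a GL context exists from here on
    	glewExperimental = GL_TRUE;
    	glewInit();                       // now it is safe to initialise GLEW
    	// ... create and fill the buffer here (initialisation) ...
     
    	while (!glfwWindowShouldClose(window)) {
    		// ... glBindBuffer() / glDrawPixels() here (display) ...
    		glfwSwapBuffers(window);
    		glfwPollEvents();
    	}
    	glfwTerminate();
    }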

    But note that glDrawPixels() is deprecated (i.e. it's not available in an OpenGL 3.x "core profile" context, or in OpenGL ES). The modern way to do it is to store the data in a texture and then draw a pair of textured triangles (quads are also deprecated).
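    To give an idea, here is the per-frame part of that approach. It assumes that a texture (tex), a shader program that samples it (program), a VAO holding two triangles covering the window (quad_vao), and a client-side pixel array (pixels) have already been created during initialisation; those names are placeholders:
    Code :
    // upload this frame's pixel data into the existing texture
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, WIDTH, HEIGHT,
                    GL_RED, GL_UNSIGNED_BYTE, pixels);
     
    // draw two textured triangles covering the window;
    // the fragment shader samples the texture
    glUseProgram(program);
    glBindVertexArray(quad_vao);
    glDrawArrays(GL_TRIANGLES, 0, 6);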
