Software renderer

I set out to write a software renderer (the final goal is to write a voxel renderer). But how do I interact with the frame buffer of my video adapter? If you know of a simple demo with source code for a software renderer (or a voxel renderer), please share it with me.

Use SDL, or simply look at old tutorials from the 1990-2002 era.
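For reference, here is a minimal sketch of the SDL route, assuming SDL2 (the window title, buffer size, and fill pattern are placeholders, not taken from any real demo): the software renderer writes into a plain pixel array, and a streaming texture uploads that array to the window every frame.

#include <SDL.h>
#include <vector>
#include <cstdint>

int main(int, char**) {
	const int W = 640, H = 480;
	SDL_Init(SDL_INIT_VIDEO);
	SDL_Window* win = SDL_CreateWindow("Software renderer", SDL_WINDOWPOS_CENTERED,
	                                   SDL_WINDOWPOS_CENTERED, W, H, 0);
	SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);
	// Streaming texture: the CPU-side framebuffer is uploaded into it each frame.
	SDL_Texture* tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
	                                     SDL_TEXTUREACCESS_STREAMING, W, H);
	std::vector<uint32_t> pixels(W * H);

	bool running = true;
	while (running) {
		SDL_Event e;
		while (SDL_PollEvent(&e))
			if (e.type == SDL_QUIT) running = false;

		// Software rendering happens here: write 0xAARRGGBB values into `pixels`.
		for (int i = 0; i < W * H; ++i) pixels[i] = 0xFF000000 | (i & 0xFF);

		SDL_UpdateTexture(tex, nullptr, pixels.data(), W * sizeof(uint32_t));
		SDL_RenderCopy(ren, tex, nullptr, nullptr);
		SDL_RenderPresent(ren);
	}
	SDL_Quit();
	return 0;
}

With this setup the whole "frame buffer" is just `pixels`; a voxel renderer would write its output there and never touch the GPU directly.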

This does not really have anything to do with OpenGL.

The easiest way is to use glDrawPixels to draw your framebuffer, then swap the buffers.

Correct.
This is the OpenGL component of the answer. However, getting an OpenGL context takes some platform-specific glue. The simplest way to get this cross-platform is to use a portable windowing layer like GLUT.
Alternatively, just use platform-specific windowing and blitting calls, e.g. DirectDraw on Windows.

I have tried glDrawPixels, and here is the result:


#include <GL/glut.h>
#include <iostream>

// `app`, `window_width`, and `window_height` are defined elsewhere in my project.
void display();
void keyboard(unsigned char key, int x, int y);
void keyboardUp(unsigned char key, int x, int y);

int main(int argc, char** argv) {
	glutInit(&argc, argv);
	glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
	glutInitWindowSize(window_width, window_height);
	glutCreateWindow("OpenGL glDrawPixels demo");
	app.BuildScene();
	glutDisplayFunc(display);
	glutIdleFunc(display);
	glutKeyboardFunc(keyboard);
	glutKeyboardUpFunc(keyboardUp);

	glEnable(GL_DEPTH_TEST);
	glClearColor(0.0, 0.0, 0.0, 1.0);

	glutMainLoop();
	return 0;
}

void display() {
	app.timer.Tick();

	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	// Send the CPU-side framebuffer (3-channel float) straight to the back buffer.
	glDrawPixels(window_width, window_height, GL_RGB, GL_FLOAT, app.pixels);

	if (app.timer.GetFPSTimeElapsed() == 0) {
		std::cout << app.timer.GetFrameRate() << std::endl;
	}
	glutSwapBuffers();
}

With this display function I get ~110 FPS.


void display() {
	app.timer.Tick();
	// Same loop as above, but with the clear and glDrawPixels calls removed.
	if (app.timer.GetFPSTimeElapsed() == 0) {
		std::cout << app.timer.GetFrameRate() << std::endl;
	}
	glutSwapBuffers();
}

With this display function I get ~3500 FPS.

That is a huge difference, considering that no calculations are being done.
How can I increase performance?

turn off vsync?

Yes, it is turned off.
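In case others want to verify the same thing: vsync is controlled through a platform extension rather than core GL, so the following is only a sketch for Windows with WGL_EXT_swap_control available (on X11 the analogous extension is GLX_EXT_swap_control).

#include <windows.h>
#include <GL/gl.h>

// wglSwapIntervalEXT comes from the WGL_EXT_swap_control extension.
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void DisableVsync() {
	PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
		(PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
	if (wglSwapIntervalEXT)
		wglSwapIntervalEXT(0);	// interval 0 = present immediately, no vsync
}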

I noticed you are using the GL_RGB format. Did you read the Wiki?
http://www.opengl.org/wiki/Common_Mistakes#Unsupported_formats_.233

I have read it, but I still don't understand. What is the problem?

The hint is that GL_RGBA8 or GL_BGRA8 may be better candidates for raw speed.
Incidentally, measure elapsed time per frame rather than frames per second, because FPS is not linear: ~3500 FPS is ~0.29 ms per frame and ~110 FPS is ~9.1 ms, so the draw path costs about 8.8 ms. Here you are comparing “doing nothing” against “clear + send data + draw”. You can also do CPU calculations in parallel with the GPU transfer; search for PBO tutorials.
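A minimal sketch of the PBO idea, under some assumptions: GL 2.1+ with the pixel-buffer-object entry points (loaded via GLEW here), an 8-bit BGRA framebuffer instead of GL_FLOAT RGB, and two PBOs so the CPU fills one buffer while the driver transfers the other. The window size and the `pixels` array are placeholders.

#include <GL/glew.h>	// assumed: provides the buffer-object entry points
#include <cstring>

const int W = 800, H = 600;
GLuint pbo[2];
unsigned char pixels[W * H * 4];	// CPU-side BGRA framebuffer

void InitPBOs() {
	glGenBuffers(2, pbo);
	for (int i = 0; i < 2; ++i) {
		glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[i]);
		glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * 4, nullptr, GL_STREAM_DRAW);
	}
	glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}

void DrawFrame(int frame) {
	int cur = frame % 2, next = (frame + 1) % 2;

	// Draw from the PBO filled last frame: with a PBO bound, the data pointer
	// becomes a byte offset into the buffer, and the transfer runs GPU-side.
	glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[cur]);
	glDrawPixels(W, H, GL_BGRA, GL_UNSIGNED_BYTE, nullptr);

	// Meanwhile, fill the other PBO with the next frame's pixels on the CPU.
	glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[next]);
	glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * 4, nullptr, GL_STREAM_DRAW);	// orphan
	void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
	if (dst) {
		memcpy(dst, pixels, sizeof(pixels));	// software renderer output goes here
		glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
	}
	glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}

The point is that the memcpy into the mapped buffer overlaps with the previous frame's transfer, instead of glDrawPixels stalling while it reads client memory.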

The hint is that GL_RGBA8 or GL_BGRA8 may be better candidates for raw speed.

Small point: GL_BGRA8 is not a format. GL_BGRA is a pixel transfer format, but that’s not the same thing as a sized internal format.
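To make the distinction concrete, a small hedged example (the function name, `width`, `height`, and `data` are placeholders, and the GL_BGRA token assumes GL 1.2 / EXT_bgra): the sized internal format names GPU-side storage, while the pixel transfer format describes the layout of the client memory being handed over.

#include <GL/gl.h>

// Assumes a current GL context and 8-bit BGRA client data.
void UploadPixels(int width, int height, const unsigned char* data) {
	// GL_RGBA8 is a sized internal format: how the GPU stores the texture.
	// GL_BGRA is a pixel transfer format: how `data` is laid out in client memory.
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
	             GL_BGRA, GL_UNSIGNED_BYTE, data);

	// glDrawPixels takes no internal format at all, only transfer format + type,
	// which is why "GL_BGRA8" never appears anywhere.
	glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, data);
}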