glDrawArrays() Seg Fault

Right after I called glDrawArrays(), I got:

***** FATAL EXCEPTION RECEIVED *******
"
***** Vectored Exception Handler: Received fatal exception EXCEPTION_ACCESS_VIOLATION PID: 1048

******* STACKDUMP *******
stack dump [0]
stack dump [1]
stack dump [2]
stack dump [3]
stack dump [4]
stack dump [5]
stack dump [6]
stack dump [7]

Exiting after fatal event (FATAL_EXCEPTION). Fatal type: EXCEPTION_ACCESS_VIOLATION
Log content flushed flushed sucessfully to sink

I was trying to run a test which calls a bunch of GL functions. I wrote a WGL class which helps to create the window and context.

The test does not use a VBO; it uses glVertexAttribPointer directly.

I have read that a lot of people use VBOs with glDrawArrays, but I'm not using one, so how can I make glDrawArrays() work?

The most common cause of a segfault after a glDraw* call is having a vertex attribute array enabled without a vertex attribute pointer set for it.
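If it helps you track this down, here's a small sanity-check sketch (my own helper, not from your code; it assumes the GL 2.0 query entry points are already loaded, which on Windows means something beyond the GL 1.1 declarations in gl.h):

#include <stdio.h>

/* Dump the state of one generic vertex attribute. If it is enabled
   but has no buffer binding and a null/garbage pointer, a glDraw*
   call will read from invalid memory. */
void dump_attrib_state(GLuint index)
{
	GLint enabled = 0, buffer = 0;
	void *pointer = NULL;
	glGetVertexAttribiv(index, GL_VERTEX_ATTRIB_ARRAY_ENABLED, &enabled);
	glGetVertexAttribiv(index, GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, &buffer);
	glGetVertexAttribPointerv(index, GL_VERTEX_ATTRIB_ARRAY_POINTER, &pointer);
	printf("attrib %u: enabled=%d, buffer=%d, pointer=%p\n",
	       index, (int)enabled, (int)buffer, pointer);
}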

Help us to help you: you say “a bunch of GL functions” but you’re not showing any code. You’re making it very difficult for us to troubleshoot what might be happening, or provide any useful advice.

I have created a window, and glClearColor works fine.
Then I called:

wglMakeCurrent

glBindFramebuffer(GL_FRAMEBUFFER, 0)

glViewport(0, 0, window_width, window_height)

glUseProgram(17)

glVertexAttribPointer(0, 2, DATA_FORMAT_FLOAT32, false, 2 * sizeof(pos[0]), data) // data is a void *, but it was a float[8] in the function that calls glVertex*

//I have glEnableVertexAttribArray() in my createProgram.

glUniform4fv()

glDisable(GL_BLEND)

//until this point glGetError() returns 0

glDrawArrays(GL_TRIANGLE_STRIP, 0 , 4) // the error shows when executing this line

Why have you hardcoded 17? This should be the value returned by createProgram.

17 is the program ID returned by createProgram; I have stored it in a variable.

Another thing I need to mention is that the program works fine on Linux; I'm porting it to Windows, and that's where the segfault appears. I don't know if there is an extra step I should do on Windows. For example, several days ago I found out that I need glFlush() after glClearColor(), otherwise glClearColor() doesn't work.

I'm using MinGW, converting the GLX and EGL code in the project to WGL.

glFlush() is almost never needed. In situations where it makes a difference, you’re more likely to need glFinish(). glFlush() forces pending commands to be executed at some point in the future; glFinish() waits until they have finished executing.
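A quick illustrative sketch of the difference:

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glFlush();	// commands are handed off to the GPU, but may not have executed yet

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glFinish();	// does not return until every prior command has completed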

It’s fairly clear that there’s a problem with your code which is manifesting itself in non-obvious ways. Memory corruption often has such symptoms, but there are other possibilities.

I suggest looking at an existing cross-platform toolkit such as GLUT or GLFW for hints on using the Windows API.

I tried to load glFinish(), but I cannot load it with wglGetProcAddress or GetProcAddress(opengl32, procName). Where can I get glFinish()?

Maybe this line is important:

Program received signal SIGSEGV, Segmentation fault.
0x0000000069ebad39 in nvoglv64!DrvPresentBuffers () from /cygdrive/c/windows/system32/nvoglv64.DLL

The stack trace shows that the crash is in nvoglv64.DLL. Doesn't that mean the error is not in my program?

As has been said before, “access violation” errors often occur when your vertex array isn't set up correctly.
Show us the full code of how you've built your vertex array.

glVertexAttribPointer(0, 2, DATA_FORMAT_FLOAT32, false, 2 * sizeof(pos[0]), data);

Have you bound a GL_ARRAY_BUFFER right before that line?

What is “DATA_FORMAT_FLOAT32”? The type must be one of these:

GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_INT, GL_UNSIGNED_INT, GL_FIXED, GL_HALF_FLOAT, GL_FLOAT, or GL_DOUBLE

glFinish() is an OpenGL 1.1 entry point, which opengl32.dll exports directly; wglGetProcAddress only returns pointers for functions beyond 1.1. Link to opengl32.lib, #include <gl/gl.h>, and you do not need to use either wglGetProcAddress or GetProcAddress.
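For anything beyond GL 1.1 you do need wglGetProcAddress. The usual pattern looks something like the sketch below (get_gl_proc is an illustrative name, not an API; a context must be current when it runs):

#include <windows.h>
#include <GL/gl.h>

static void *get_gl_proc(const char *name)
{
	void *p = (void *)wglGetProcAddress(name);
	/* wglGetProcAddress returns NULL (or, on some drivers, 1, 2, 3
	   or -1) for GL 1.1 entry points such as glFinish, because
	   opengl32.dll exports those directly. Fall back to the DLL. */
	if (p == NULL || p == (void *)1 || p == (void *)2 ||
	    p == (void *)3 || p == (void *)-1)
	{
		HMODULE opengl32 = GetModuleHandleA("opengl32.dll");
		p = (void *)GetProcAddress(opengl32, name);
	}
	return p;
}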

From the way this thread is going, it really reads as though you’re just making things difficult for yourself. Before you consider porting an OpenGL program to Windows, you should learn how OpenGL actually works on Windows. A tip: if your code crashes and you blame the OS, it’s probably not the OS; it’s probably your code.

Some helpful links:
https://www.opengl.org/wiki/Getting_Started
https://www.opengl.org/wiki/Platform_specifics:_Windows

For what it’s worth, I suspect that you’re trying to load function pointers manually but have the wrong DLL calling conventions. But it’s virtually impossible to help you because you’re drip-feeding information, what you give us is incomplete, and you seem to have your own wrapper around everything.

No, I didn’t use GL_ARRAY_BUFFER.

I solved this problem by setting the NVIDIA thread optimization off.

Yes, the project has a lot of files and I’m responsible for the WGL part of it, so I need to use GetProcAddress, since that mirrors how the Linux side loads functions. But thanks a lot.

No, you didn’t solve the problem.

You found a workaround that works for you right now. You have no guarantee that it will work for anybody else and the underlying cause is still in your code.

Let me emphasise this. glDrawArrays, on NVIDIA, under Windows, and when set up and used correctly works just fine without any hacky workarounds.

You still have a problem in your code, and it’s going to blow up in your face at some point in the future.

But I’m still not sure where the problem is.

This is what I saw in other people’s answers:

VBO setup problem: but I’m not using a VBO.

Out-of-bounds access: this might be the case, but I’m on Windows, so I cannot use Valgrind. Which memory debugging tool is best on Windows?

glVertexAttribPointer(0, 2, DATA_FORMAT_FLOAT32, false, 2 * sizeof(pos[0]), data): data should be 0, but I have tried that; when data is 0, a GL_ARRAY_BUFFER has to be bound. So I’m going in circles: the program is not using VBOs, and there is no glBindBuffer().

If there is no glBindBuffer(…), how can glDrawArrays(…) know what array of data it should draw? That makes no sense! There must be a glBindBuffer() somewhere, at least when you set up your vertex array.

Example:


GLuint vao, vbo, shader;

void Init()
{
// create shader, store its id in "shader"

	float vertices[] = {
		// x    y     z             u     v             normal
		0.0f, 0.0f, 0.0f,		0.0f, 1.0f,		0.0f, 0.0f, 1.0f,
		1.0f, 0.0f, 0.0f,		1.0f, 1.0f,		0.0f, 0.0f, 1.0f,
		0.0f, 1.0f, 0.0f,		0.0f, 0.0f,		0.0f, 0.0f, 1.0f,
	};

	glGenVertexArrays(1, &vao);
	glBindVertexArray(vao);

	glGenBuffers(1, &vbo);
	glBindBuffer(GL_ARRAY_BUFFER, vbo);
	glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// BEFORE you specify any attribute pointers, there must be a buffer bound from which to read the data !!!
// no GL_ARRAY_BUFFER, no data source
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (void*)(sizeof(float) * 0));
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (void*)(sizeof(float) * 3));
	glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (void*)(sizeof(float) * 5));
// now all 3 attributes use the same buffer "vbo", because it was bound when the pointers were specified

	glBindBuffer(GL_ARRAY_BUFFER, 0);

	glBindVertexArray(0);
}


void render()
{
	glUseProgram(shader);
	
	glBindVertexArray(vao);

// when you enable an attribute, how does openGL "know" from where to get the data ???
// answer: see above
	glEnableVertexAttribArray(0);
	glEnableVertexAttribArray(1);
	glEnableVertexAttribArray(2);
	
	glDrawArrays(GL_TRIANGLES, 0, 3);

	glDisableVertexAttribArray(0);
	glDisableVertexAttribArray(1);
	glDisableVertexAttribArray(2);
	glBindVertexArray(0);
	
	glUseProgram(0);
}


[QUOTE=john_connor;1282767]If there is no glBindBuffer(…), how can glDrawArrays(…) know what array of data it should draw? That makes no sense! There must be a glBindBuffer() somewhere, at least when you set up your vertex array.[/QUOTE]
In the compatibility profile, attribute arrays can be stored either in buffer objects or in client memory. If no buffer is bound to GL_ARRAY_BUFFER, the data argument is treated as a pointer to client memory (that’s why it’s a void* rather than an integer).
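For example, a client-memory draw can look like this sketch (the attribute index and quad data are illustrative, and the GL 2.0 entry points are assumed to be loaded):

/* No VBO is bound, so the last argument of glVertexAttribPointer is
   a real pointer into the application's address space. The array
   must still be valid when glDrawArrays() reads it. */
static const float pos[8] = {
	-1.0f, -1.0f,
	 1.0f, -1.0f,
	-1.0f,  1.0f,
	 1.0f,  1.0f,
};

void draw_quad(GLuint program)
{
	glUseProgram(program);
	glBindBuffer(GL_ARRAY_BUFFER, 0);	/* make sure no buffer is bound */
	glEnableVertexAttribArray(0);
	glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(pos[0]), pos);
	glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
	glDisableVertexAttribArray(0);
}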

Sorry for necroing; I’ll just leave a comment on what helped me solve a similar (or the same?) issue:
john_connor’s piece of code points out that the buffer must be bound before specifying the attribute pointers. People usually only mention that the VAO binding must precede the buffer binding, but they neglect to mention where the attribute specification must take place. Thanks, John!