Works on nVidia, but not ATI

Hi! I am making an OpenGL game framework in C#, and I just switched to clean OpenGL 3.2 and removed all fixed functionality. This works just fine on my GeForce 9800, but not at all on an ATI Radeon 5770.

The issue seems to be shader related, but compilation and linking are successful, and glValidateProgram reports everything okay. The GetLocation functions return expected values, but nothing appears when rendering.
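The status checks look roughly like this (a sketch; the wrapper names are from my own bindings, and the uniform name is just an example):

int status;
OpenGL.glGetProgramiv(program, OpenGL.Const.GL_LINK_STATUS, out status);     // GL_TRUE
OpenGL.glValidateProgram(program);
OpenGL.glGetProgramiv(program, OpenGL.Const.GL_VALIDATE_STATUS, out status); // GL_TRUE
int location = OpenGL.glGetUniformLocation(program, "mvp"); // returns a valid location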

Has anyone encountered a similar issue? I think it must be some issue with glUniform or glVertexAttribPointer, but I have no idea what.

Familiar to anyone?

Nope, many of us have GL 3.2 working (almost) fine on ATI, including me.
You are probably doing something wrong.

If you’re using a Core/Forward-Compatible context, it’s likely you’re using something that was removed. Nvidia isn’t as strict in catching these as ATI is.

Try the AMD_debug_output extension; it should report anything it’s having a problem with.
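Setting it up is just a callback registration, something along these lines (a sketch; it assumes your bindings expose the AMD entry points and a matching delegate type):

// Enable all categories and severities (0 = don't filter), then install the callback.
OpenGL.glDebugMessageEnableAMD(0, 0, 0, null, true);
OpenGL.DebugProcAMD callback = (id, category, severity, length, message, userParam) =>
	Console.WriteLine("GL debug: " + message);
OpenGL.glDebugMessageCallbackAMD(callback, IntPtr.Zero);
// Keep 'callback' referenced somewhere so the GC doesn't collect the delegate.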

Yeah, I guess I am doing something wrong, but there is so little code that can actually go wrong… I could try to disable the core profile and see if that is the issue.

Also I’ll look into the AMD_debug_output extension, thanks :slight_smile:

Be wary: nVidia has an “it always works” policy in their drivers, which is fine while you work only on nVidia, but you may have written OpenGL code that is not conformant and won’t work on AMD. That doesn’t mean AMD drivers don’t have bugs, though! :wink:
Good luck!

GL_AMD_debug_output didn’t tell me anything, but glGetError() after glDrawElements returned GL_INVALID_OPERATION, which seems to indicate an issue with the buffers and vertex attribute binding, not with the shaders.
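For reference, I narrowed it down to glDrawElements by checking for errors after each call with roughly this helper (the name is my own):

static void CheckGL(string where)
{
	uint error = OpenGL.glGetError();
	if (error != OpenGL.Const.GL_NO_ERROR)
		Console.WriteLine("GL error 0x" + error.ToString("X") + " after " + where);
}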

I’m still stumped on this though.

What I basically do is create the buffers, fill them with data using BufferData, and then for every object this happens (i.e. SetVertexBuffer is called, and then SetIndexBuffer):

public void SetVertexBuffer(OpenGL.IVertexBuffer vert)
{
	if (vertex_buffer != null)
		vertex_buffer.MakeNonCurrent();
	if (vert != null)
	{
		// Apply shader attributes for this vertex buffer
		if(active_shader != null && active_shader is StdMaterial)
			vert.ApplyStdMaterial(active_shader as StdMaterial);
		// Bind vertex buffer
		vert.MakeCurrent();
	}
	vertex_buffer = vert;
}

public void SetIndexBuffer(OpenGL.IIndexBuffer indices)
{
	if (index_buffer != null)
		index_buffer.MakeNonCurrent();
	if (indices != null)
		indices.MakeCurrent();
	index_buffer = indices;
}

{VertexBuffer}.MakeCurrent() does this:

public override void MakeCurrent()
{
	base.MakeCurrent();

	for (int i = 0; i < elements.Count; i++)
	{
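		// 0xffffffff means the shader has no input for this element; skip it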
		if (elements[i].attribute != 0xffffffff)
		{
			OpenGL.glEnableVertexAttribArray(elements[i].attribute);
			if(elements[i].gl_type == OpenGL.Const.GL_INT)
				OpenGL.glVertexAttribIPointer(elements[i].attribute, elements[i].dimensions, elements[i].gl_type, size_of_t, elements[i].offset_value);
			else 
				OpenGL.glVertexAttribPointer(elements[i].attribute, elements[i].dimensions, elements[i].gl_type, OpenGL.boolean.FALSE, size_of_t, elements[i].offset_value);
		}
	}
}

where base.MakeCurrent() basically calls glBindBuffer(), with target GL_ARRAY_BUFFER for vertex buffers and GL_ELEMENT_ARRAY_BUFFER for index buffers.

elements[i].attribute is retrieved from the shader earlier in the Draw() function.

Can anyone point out something obvious here? I’m pulling my hair out over this.

You must use a Vertex Array Object.

The default VAO has been deprecated…

I was just about to write the same, assuming you are on a GL 3.2 core profile (forward-compatible flag?) context. I tried to quickly look up the deprecation of the default VAO in the spec but did not find it… Groovounet, do you have the reference in the spec?

Edit: found it:

OpenGL 3.2 spec, chapter E.2.2 Removed Features

Client vertex and index arrays - all vertex array attribute and element array index pointers must refer to buffer objects. The default vertex array object (the name zero) is also deprecated. Calling VertexAttribPointer when no buffer object or no vertex array object is bound will generate an INVALID_OPERATION error, as will calling any array drawing command when no vertex array object is bound.

I should probably have mentioned that base.MakeCurrent() calls glBindBuffer(target, handle); where target is GL_ARRAY_BUFFER for VB and GL_ELEMENT_ARRAY_BUFFER for IB.
VertexBuffer and IndexBuffer inherit from BufferObject<T>, which contains this behavior.
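Roughly, a simplified sketch of that base class (member names abbreviated):

public abstract class BufferObject<T> where T : struct
{
	protected uint handle; // from glGenBuffers
	protected uint target; // GL_ARRAY_BUFFER or GL_ELEMENT_ARRAY_BUFFER

	public virtual void MakeCurrent()    { OpenGL.glBindBuffer(target, handle); }
	public virtual void MakeNonCurrent() { OpenGL.glBindBuffer(target, 0); }
}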

ApplyStdMaterial assigns element[i].attribute.

The offset value is the byte offset of the element within the structure, passed in place of a real pointer. Maybe this is the wrong way to do it?

Here is an OpenGL-call-only list of what happens in a normal draw operation:

glUseProgram(4);
glBindFragDataLocation(4, 1, "out_frag");
glBindBuffer(GL_ARRAY_BUFFER, 2);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, OpenGL.boolean.FALSE, 32, 0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, OpenGL.boolean.FALSE, 32, 12);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 3);
glDrawElements(GL_TRIANGLES, 384, GL_UNSIGNED_INT, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glUseProgram(0);

Here is the structure in use:

public struct VertexPositionTexCoordNormal
{
	public Vector3 Position;
	public Vector3 Normal;
	public Vector2 TexCoord;

	public static readonly VertexBufferDescriptor Descriptor = new VertexBufferDescriptor(
		new ElementType[] 
		{
			ElementType.Position3Float,
			ElementType.Normal3Float,
			ElementType.TexCoord2Float
		}, typeof(VertexPositionTexCoordNormal));

}
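The offsets (0, 12) and the 32-byte stride in the call list above come straight from this struct; a sketch of how they can be computed (structs default to sequential layout in C#, and I’m assuming Vector3/Vector2 are plain float fields):

using System.Runtime.InteropServices;

// Byte offset of a field, passed in place of a pointer while a VBO is bound
IntPtr normalOffset = Marshal.OffsetOf(typeof(VertexPositionTexCoordNormal), "Normal"); // 12
int stride = Marshal.SizeOf(typeof(VertexPositionTexCoordNormal)); // 32 = (3 + 3 + 2) floats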

This would be so much easier if I had an ATI card.

Thanks for all suggestions so far :slight_smile:

Does putting


glGenVertexArrays(1, &vaoID);
glBindVertexArray(vaoID);

before your code solve the problem? If so, it’s because the default VAO (the VAO named 0) has been deprecated, as Groovounet mentioned. NVidia still allows it to be used without throwing the required errors, but ATI doesn’t allow it.

Let’s not forget that this VAO named 0 is something the ARB came up with when they released VAOs.

GeirGrusom, you show how even the name of VAO is misleading. I still hate it; it’s nothing like the vertex layout object we requested for years.

I am not familiar with glGenVertexArrays etc…
This is basically what I do in my code:

int buffer;
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, data_size, &data, GL_STATIC_DRAW);

I’m starting to think maybe I’m not all that up to date on OpenGL… I’ll try to implement the VAO functions and see if it works. Thanks :slight_smile:

Honestly, just about the best thing you can do with VAOs is to add these two lines just after context creation:

glGenVertexArrays(1, &vaoID);
glBindVertexArray(vaoID);

And then you do as you always did.
You can use VAOs properly if you want, but I still struggle to understand what “properly” means where VAOs are concerned, and the performance benefit is low and sometimes negative… so do you really want to bother?

There’s a bit in the wiki about VAOs: http://www.opengl.org/wiki/Vertex_Array_Objects.

A vertex array object (VAO) is basically a container for all the vertex-related state that you are setting. Switching to another VAO with glBindVertexArray will switch the entire state, similar to having made all the glVertexAttribPointer/glEnableVertexAttribArray calls again.

In your case, your code could look something like this:


// generate VAO + set state
glGenVertexArrays(1, &vaoID);
glBindVertexArray(vaoID);
glBindBuffer(GL_ARRAY_BUFFER, 2);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, OpenGL.boolean.FALSE, 32, 0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, OpenGL.boolean.FALSE, 32, 12);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 3);
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);

// rendering
glUseProgram(4);
glBindFragDataLocation(4, 1, "out_frag");
glBindVertexArray(vaoID);
glDrawElements(GL_TRIANGLES, 384, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
glUseProgram(0);

However, in practice, using VAOs in this way can be slower than just making all those glVertexAttribPointer/glEnableVertexAttribArray calls again (at least when measured on an NVidia card a few months ago), so just binding a VAO that you have generated at the start of your app gets around the deprecated default array (you’re then just modifying and using a named VAO rather than the default one).

Edit: bah, never mind, my tester hadn’t received the newest version from the repository :stuck_out_tongue:

Thanks a lot for all your help guys :smiley:

@Dan: glBindFragDataLocation(4, 1, “out_frag”); only takes effect when the program is linked, which makes it quite useless in your sample.
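It has to go before the link to have any effect, something like this (a sketch):

glBindFragDataLocation(program, 0, "out_frag"); // color number 0 = first draw buffer
glLinkProgram(program);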
