glVertexAttribPointer - invalid operation [nVidia]

nVidia, latest drivers (at the time of writing).
Forward-compatible 3.3 context.

glVertexAttribPointer throws GL_INVALID_OPERATION.
VAO is bound right after context creation.

All GL calls are wrapped in error-checking functions, the VAO calls included (I’ve checked them many times), and none of them fail.
That means the context is valid and the VAO should be bound.
The VAO isn’t unbound until the context is destroyed, and since every other GL call (apart from glDraw*** and glVertexAttribPointer) succeeds, the context clearly hasn’t been destroyed.
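
For reference, the error-check wrapper is just a glGetError() drain around each call; something along these lines (a simplified sketch, not the exact GL_CALL macro, assuming the GL headers and <cstdio> are available):

#define GL_CALL( x ) \
    do { \
        x; \
        for( GLenum err; ( err = glGetError() ) != GL_NO_ERROR; ) \
            fprintf( stderr, "%s failed: 0x%04X (%s:%d)\n", #x, err, __FILE__, __LINE__ ); \
    } while( 0 )

// usage: GL_CALL( glBindVertexArray( HackVAO ) );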

The demo (Win32, ~4MB): http://ge.tt/9xjcWaB

Since I don’t own an nVidia graphics card myself, I can only guess which OS it’s been tested on (I’ve had many helpful testers; it failed for all but 1 or 2 of the nVidia users). My guess is that Win7 was among them.

Exactly zero errors on ATI graphics cards (except for the GLEW ones, which I just ignore: with glewExperimental = true, the only errors come from glGetString complaining about the missing extensions string).
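
For what it’s worth, those GLEW errors come from initialization on a core / forward-compatible context; the usual pattern looks like this (a sketch, not my actual init code) - glewInit() still calls glGetString( GL_EXTENSIONS ), which is removed in such contexts, so it leaves a GL_INVALID_ENUM behind that has to be drained before it gets blamed on a later call:

glewExperimental = GL_TRUE;
GLenum glewErr = glewInit();
if( glewErr != GLEW_OK )
    fprintf( stderr, "glewInit failed: %s\n", glewGetErrorString( glewErr ) );
// clear the spurious GL_INVALID_ENUM left by the removed extensions-string query
while( glGetError() != GL_NO_ERROR ) {}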

I’m using multiple contexts, with function pointers set up separately for each context (there’s probably no need for that, since the contexts aren’t actually different, and I’ll remove it soon if that’s safe to do).

What I need to know is: where could the bug be?
After two days of testing without finding anything conclusive, I’m fairly sure it’s a driver bug, but I’d like to hear a second opinion.

Could you please quickly post the lines setting up the VAO, VBO, IBO (if any) and pointers?

(I’ve removed the error-check wrappers (“GL_CALL( glXXX() )”) from the GL calls below because they’re irrelevant here.)

VAO:
[GLuint HackVAO;]
glGenVertexArrays( 1, &HackVAO );
glBindVertexArray( HackVAO );

[V,I]BO-create:
[bool index;]
GLuint buf = 0;
glGenBuffers( 1, &buf );
// bind the new name once so the buffer object is actually created on this target
glBindBuffer( index ? GL_ELEMENT_ARRAY_BUFFER : GL_ARRAY_BUFFER, buf );
// then re-bind VertexBuffer on the same target, leaving it as the current binding
glBindBuffer( index ? GL_ELEMENT_ARRAY_BUFFER : GL_ARRAY_BUFFER, VertexBuffer );
return buf;

[V,I]BO-update:
glBufferData( BufferTarget( Type ), Data.Size(), &Data[0], Mode == VMBM_Static ? GL_STATIC_DRAW : GL_DYNAMIC_DRAW );

Pointers:
[SVertexElement& e = vd->Elements[ i ];]
[assume:
e.Attrib=0
e.DataCount=2
GLTypes[ e.DataType ]=GL_FLOAT
e.Stride=32
e.Off=0
arr=0]
bool nrmlz = e.Attrib == VE_ATTRIB_COLOR && e.DataType != VE_TYPE_F32;
glVertexAttribPointer( e.Attrib, e.DataCount, GLTypes[ e.DataType ], nrmlz, e.Stride, (void*)(((BYTE*)arr) + e.Off ) );
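
A quick sanity check that could go right before that glVertexAttribPointer call (not part of the engine code above, just a sketch) is to ask the driver what it thinks is bound in the current context:

GLint boundVAO = 0, boundVBO = 0;
glGetIntegerv( GL_VERTEX_ARRAY_BINDING, &boundVAO );
glGetIntegerv( GL_ARRAY_BUFFER_BINDING, &boundVBO );
printf( "VAO bound: %d, array buffer bound: %d\n", boundVAO, boundVBO );

If that prints 0 for the VAO on the failing machines, the VAO simply isn’t bound in the context issuing the call, which is exactly what makes glVertexAttribPointer raise GL_INVALID_OPERATION in a core / forward-compatible context.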

Anyway, it looks like I’ve found the problem myself:
normalize = true fails with floats, even though the spec doesn’t say it should.
EDIT: Scratch that, it still doesn’t work… here’s the demo: http://ge.tt/8oPaWbB

Alrighty then…

The answer was actually quite simple once I got to debug it on my own PC: the VAO was created on another context (by [flawed] design). It only appeared to work on graphics card / driver combos that didn’t strictly follow the specification and shared the VAO anyway; the spec says VAOs are not shareable between contexts.
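
In practice that means every context that issues draw calls has to create and bind its own VAO instead of reusing one made on a different context; roughly like this (a sketch with placeholder handle names, not the actual engine code):

wglMakeCurrent( hDC, hRC );           // the context that will actually draw
GLuint vao = 0;
glGenVertexArrays( 1, &vao );         // owned by this context only - VAOs are not shared
glBindVertexArray( vao );
// ...bind buffers, set attrib pointers and draw while this context is current...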

Case solved.
In case you want to test it: http://ge.tt/8vjRCeB