But when I compile and link, I get these errors:
<stdlib>(9311) : error C5102: semantics attribute “COLOR” has too big of a numeric index (1)
<stdlib>(9311) : error C5102: semantics attribute “COLOR” has too big of a numeric index (2)
<stdlib>(9311) : error C5102: semantics attribute “COLOR” has too big of a numeric index (3)
<stdlib>(9311) : error C5041: cannot locate suitable resource to bind parameter “<null atom>”
<stdlib>(9311) : error C5041: cannot locate suitable resource to bind parameter “<null atom>”
<stdlib>(9311) : error C5041: cannot locate suitable resource to bind parameter “<null atom>”
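For context: on NVIDIA's compiler, C5102 errors about a COLOR semantic with indices 1 to 3 typically correspond to a fragment shader writing to gl_FragData[1] through gl_FragData[3] on hardware that only exposes a single color buffer. A hypothetical fragment shader of the kind that would trigger them (reconstructed from the error indices, not taken from the original post) might look like:

```glsl
// Writes to four render targets; on hardware without MRT support,
// the writes to indices 1..3 have no COLOR1..COLOR3 outputs to bind to.
void main()
{
    gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0);
    gl_FragData[1] = vec4(0.0, 1.0, 0.0, 1.0); // index 1 -> C5102
    gl_FragData[2] = vec4(0.0, 0.0, 1.0, 1.0); // index 2 -> C5102
    gl_FragData[3] = vec4(0.0, 0.0, 0.0, 1.0); // index 3 -> C5102
}
```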
With the latest NVIDIA drivers glDrawBuffers is available, and when I check how many draw buffers my card supports with glGet, I get:
GL_MAX_DRAW_BUFFERS == 4
I don't know if it's a problem with the pixel format or with the NVIDIA GLSL compiler. Have I forgotten something?
Note: this message appears both in our program, which uses SDL to set up the window, and in ShaderDesigner.
When I run glewinfo.exe (an auxiliary application that ships with the GLEW library), it shows this:
…
GL_VERSION_2_0: OK
glAttachShader: OK
glBindAttribLocation: OK
glBlendEquationSeparate: OK
glCompileShader: OK
glCreateProgram: OK
glCreateShader: OK
glDeleteProgram: OK
glDeleteShader: OK
glDetachShader: OK
glDisableVertexAttribArray: OK
glDrawBuffers: OK
…
Later I read the file NVIDIA_OpenGL_2.0_Support, and it says the same as you: that NV3x doesn't support MRT. But then why does glDrawBuffers appear to be supported, and why does glGet return GL_MAX_DRAW_BUFFERS == 4? Is it an NVIDIA policy? (Something like: "hey, look at us, we're cool, we support full OpenGL 2.0".)
Originally posted by API: why does glDrawBuffers appear to be supported, and why does glGet return GL_MAX_DRAW_BUFFERS == 4? Is it an NVIDIA policy? (Something like: "hey, look at us, we're cool, we support full OpenGL 2.0".)
While it could indeed be seen as fraud to report 2.0 when it's not fully supported (and NVIDIA does have a history of being dishonest to the point where it could be considered fraud), I think the explanation here isn't malice but a desire to actually give users the capabilities their hardware can provide.
To draw a parallel (only as an example): the C++ standard was ratified in March 1998. Microsoft released updates to VC6 after that date, so technically they were wrong to call it a "C++ compiler", since the compiler didn't fully support everything in the C++ specification. As many know, even today (2005) there are things in that standard that not many compilers support.
I think it's an unfortunate combination of factors: a given OpenGL version requires full support for everything newly added, without at the same time providing extensions for the parts that older hardware can already support (this, by the way, has been an issue with every OpenGL version/revision); and hardware vendors (at least initially) only bother to implement what they have hardware for, deferring software implementations for older hardware to a rainy day.
That said, I do think vendors could do better at publishing, and keeping up to date, lists of which hardware is affected, and how, with each driver version/revision.
Haven’t had time to respond to the forums in quite a while, but this thread has been brought to my attention.
OpenGL 2.0’s MRT support is effectively optional – MAX_DRAW_BUFFERS is not required to be larger than 1. The GeForce FX series (including the 5900 in the original post) does not support MRT, and should be returning 1. The GeForce 6 and 7 series both should return 4.
We have a bug in our currently shipping driver where the query of MAX_DRAW_BUFFERS was always returning 4 on chips that support the MRT API. That bug was due to our OpenGL 2.0 support code, where we started accepting the query on GeForce FX parts but didn’t change the code to return an appropriate value.
That bug is already fixed in our next driver release. I apologize for any problems this caused. The suggestion of looking for GL_ARB_draw_buffers in the extension string would be a decent workaround.
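The workaround suggested above, checking the extension string instead of trusting MAX_DRAW_BUFFERS, can be sketched as follows. This is a minimal example, not code from the thread; note that a naive strstr() could also match a longer extension name that merely starts with the same characters (e.g. GL_ARB_draw_buffers_blend), so the helper matches whole space-separated tokens:

```c
#include <string.h>

/* Returns 1 if `name` appears as an exact space-separated token in
   `ext_list` (the format returned by glGetString(GL_EXTENSIONS)). */
int has_gl_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;

    while ((p = strstr(p, name)) != NULL) {
        /* The match must start at the beginning of a token... */
        int starts_ok = (p == ext_list) || (p[-1] == ' ');
        /* ...and end at a token boundary, not inside a longer name. */
        int ends_ok = (p[len] == ' ') || (p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;
        p += len;
    }
    return 0;
}

/* Usage with a live GL context (hypothetical sketch):
 *
 *     const char *exts = (const char *)glGetString(GL_EXTENSIONS);
 *     int mrt_ok = exts && has_gl_extension(exts, "GL_ARB_draw_buffers");
 *     // Only query GL_MAX_DRAW_BUFFERS (and use MRT) when mrt_ok is set.
 */
```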
Pat Brown, NVIDIA
p.s. Note that when Cass replied about testing with current drivers, he was referring to drivers built from our development branch, which haven't shipped to the general public yet.
Thanks for your attention. One of the reasons for my original post was to verify whether there is some "magic way" to use MRT, because it would be very useful in my current work on opacity maps for rendering a hair volume.
The other reason was frustration: at first I had written my code assuming I couldn't use MRT, but when I installed the latest drivers and glGet returned GL_MAX_DRAW_BUFFERS == 4, I thought this would be a great performance advance… so when I later saw that it couldn't be, it was a big disappointment.