Problems with drawBuffers

Hi, I have a GeForce 5900 and I cannot configure multiple draw buffers to implement MRT.
I have a GLSL fragment shader with this code:

void main(void)
{
    gl_FragData[0] = vec4(0.0, 0.0, 0.0, 0.2);
    gl_FragData[1] = vec4(0.0, 0.0, 0.0, 0.4);
    gl_FragData[2] = vec4(0.0, 0.0, 0.0, 0.6);
    gl_FragData[3] = vec4(0.0, 0.0, 0.0, 0.8);
}
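
On the application side I select the targets with glDrawBuffers, roughly like this (a sketch only; it assumes an EXT framebuffer object with four color attachments already set up):

/* Sketch: route the shader's four outputs to four color attachments.
   Assumes a GL_EXT_framebuffer_object FBO with these attachments bound. */
GLenum bufs[4] = {
    GL_COLOR_ATTACHMENT0_EXT,
    GL_COLOR_ATTACHMENT1_EXT,
    GL_COLOR_ATTACHMENT2_EXT,
    GL_COLOR_ATTACHMENT3_EXT
};
glDrawBuffers(4, bufs);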

But when I compile and link I get these errors:
<stdlib>(9311) : error C5102: semantics attribute "COLOR" has too big of a numeric index (1)
<stdlib>(9311) : error C5102: semantics attribute "COLOR" has too big of a numeric index (2)
<stdlib>(9311) : error C5102: semantics attribute "COLOR" has too big of a numeric index (3)
<stdlib>(9311) : error C5041: cannot locate suitable resource to bind parameter "<null atom>"
<stdlib>(9311) : error C5041: cannot locate suitable resource to bind parameter "<null atom>"
<stdlib>(9311) : error C5041: cannot locate suitable resource to bind parameter "<null atom>"

With the latest NVIDIA drivers glDrawBuffers is exposed, and when I check with glGet how many buffers my card supports, I get:

GL_MAX_DRAW_BUFFERS == 4

I don't know if it's a problem with the pixel format or with the NVIDIA GLSL compiler. Am I forgetting something?

Note: these messages appear both in my own program, which uses SDL to set up the window, and in ShaderDesigner.

Any idea?

API

AFAIK, the NV3x series doesn't support MRT. To confirm that, check for GL_ARB_draw_buffers in the extensions string.

NV4x and newer support MRT.
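
A minimal sketch of such a check against the classic GL_EXTENSIONS string (the helper name is mine):

#include <string.h>
#include <GL/gl.h>

/* Returns 1 if "name" appears as a complete token in the
   GL_EXTENSIONS string (the OpenGL 1.x/2.x style query). */
int hasExtension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    size_t len = strlen(name);
    while (ext) {
        const char *pos = strstr(ext, name);
        if (!pos)
            return 0;
        /* Only accept a match bounded by spaces or string ends,
           so one extension name can't match inside a longer one. */
        if ((pos == ext || pos[-1] == ' ') &&
            (pos[len] == ' ' || pos[len] == '\0'))
            return 1;
        ext = pos + len;
    }
    return 0;
}

Usage: if (hasExtension("GL_ARB_draw_buffers")) { /* MRT path */ }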

yooyo

When I execute glewinfo.exe (an auxiliary application that comes with the GLEW library) it shows this:


GL_VERSION_2_0: OK

glAttachShader: OK
glBindAttribLocation: OK
glBlendEquationSeparate: OK
glCompileShader: OK
glCreateProgram: OK
glCreateShader: OK
glDeleteProgram: OK
glDeleteShader: OK
glDetachShader: OK
glDisableVertexAttribArray: OK
glDrawBuffers: OK

Later I read the NVIDIA_OpenGL_2.0_Support document and it says the same as you: NV3x doesn't support MRT. But, well… why does glDrawBuffers appear to be supported, and why do I get GL_MAX_DRAW_BUFFERS == 4? Is it an NVIDIA policy? (Something like: "hey, look at us, we are cool, we support full OpenGL 2.0".)

API

Originally posted by API:
why does glDrawBuffers appear to be supported, and why do I get GL_MAX_DRAW_BUFFERS == 4? Is it an NVIDIA policy? (Something like: "hey, look at us, we are cool, we support full OpenGL 2.0".)
While it could indeed be seen as fraud to report 2.0 when it's not fully supported, and while nVidia also has a history of being dishonest to an extent that could be considered fraud, I think the explanation here isn't malice but a desire to give their users all the capabilities they are actually able to provide.

To draw a parallel (only as an example), the C++ standard was ratified in March 1998. Microsoft released updates to VC6 after that date, so technically they were wrong in calling it a "C++ compiler", as the compiler didn't fully support everything in the C++ specification. As many might know, even today (2005) there are things in that standard that few compilers support.

I think it's an unfortunate combination of two things: an OpenGL version requires full support for everything newly added, without extensions being added at the same time to expose what older hardware can already do (this, btw, has been an issue with every OpenGL version/revision); and hardware vendors (at least initially) only bother to implement what their hardware supports, deferring software implementations for older hardware to a rainy day.

That said, I do think vendors could do a better job of publishing, and keeping up to date, lists of what hardware is affected, and how, with each driver version/revision.

Well, about this… I think the problem is not saying that glDrawBuffers is supported; the problem is doing a glGet and getting GL_MAX_DRAW_BUFFERS == 4.

I think there wouldn't be any problem if NVIDIA said "we support glDrawBuffers, but GL_MAX_DRAW_BUFFERS is 1". Because imagine code like:

GLint params;
glGetIntegerv(GL_MAX_DRAW_BUFFERS, &params);
if (params > 1)
    useMRT();    /* take the multiple-render-target path */
else
    noUseMRT();  /* fall back to a single render target */

I can't use this code! It would crash. I can't write code that asks the board "hey, do you support MRT?", because the driver will lie.

API

Check for GL_ARB_draw_buffers in the extension string. If the driver reports it, the hardware has MRT support; otherwise MRT is not supported.

On my 5600Go (ForceWare 77.76), GL_ARB_draw_buffers is not in the extensions string, and my card is indeed not MRT capable.

yooyo

Hmm, this certainly could be a bug. Do you happen to have nvemulate enabled?

We’ve tested here with current drivers, and they properly report 1 for MAX_DRAW_BUFFERS.

Nevermind. We’ve verified that this is in fact a bug that’s already been fixed in internal drivers.

You should see the correct number reported in release 80 drivers and beyond.

Haven’t had time to respond to the forums in quite a while, but this thread has been brought to my attention.

OpenGL 2.0’s MRT support is effectively optional – MAX_DRAW_BUFFERS is not required to be larger than 1. The GeForce FX series (including the 5900 in the original post) does not support MRT, and should be returning 1. The GeForce 6 and 7 series both should return 4.

We have a bug in our currently shipping driver where the query of MAX_DRAW_BUFFERS was always returning 4 on chips that support the MRT API. That bug was due to our OpenGL 2.0 support code, where we started accepting the query on GeForce FX parts but didn’t change the code to return an appropriate value.

That bug is already fixed in our next driver release. I apologize for any problems this caused. The suggestion of looking for GL_ARB_draw_buffers in the extension string would be a decent workaround.
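
In code, that workaround looks roughly like this (hasExtension is the check sketched earlier in the thread, and useMRT/noUseMRT are the placeholder functions from API's post):

GLint maxDrawBuffers = 1;   /* safe default: assume no MRT */
if (hasExtension("GL_ARB_draw_buffers"))
    glGetIntegerv(GL_MAX_DRAW_BUFFERS, &maxDrawBuffers);

if (maxDrawBuffers > 1)
    useMRT();
else
    noUseMRT();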

Pat Brown, NVIDIA

P.S. Note that when Cass replied about testing with current drivers, he was referring to drivers built from our development branch, which haven't shipped to the general public yet.

Thanks for your attention. One of the reasons for my original post was to verify whether there is a "magic way" to use MRT, because it would be very useful in my current work on opacity maps for rendering a hair volume.

The other reason was frustration: at first I wrote my code assuming I couldn't use MRT, but when I installed the latest drivers and glGet returned GL_MAX_DRAW_BUFFERS == 4, I thought it would be a great performance advance… well, when I later saw that it couldn't be, it was a big disappointment.

API

Sorry about the confusion, API, and thanks for bringing it to our attention.

Cass