NVIDIA Forceware 75.90 => OpenGL 2.0.0 & fbo



Zak McKrakem
02-22-2005, 03:14 AM
This is good news for me and, I suppose, for some of you.
You can find them here: http://www.station-drivers.com/page/nvidia%20forceware.htm

They expose OpenGL 2.0.0 and have the frame buffer object extension.

I haven't tested them yet, so I don't know the quality of the implementation.

Zak McKrakem
02-22-2005, 03:20 AM
One question: they only expose GL_ARB_shading_language_100, but the version string says OpenGL 2.0.0. Can I presume that it is GLSL 1.10?

Thanks

Cab
02-22-2005, 04:46 AM
Nice.

In my GF6800GT it has some of the latest extensions: GL_ARB_color_buffer_float, GL_ARB_draw_buffers, GL_ARB_half_float_pixel, GL_ARB_texture_float, GL_EXT_framebuffer_object

sqrt[-1]
02-22-2005, 04:58 AM
Zak: Call glGetString with GL_SHADING_LANGUAGE_VERSION to get the version supported.
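For reference, a minimal sketch of that query (my own illustration, assuming a current context on these drivers; the enum is defined locally in case older headers lack it):

#include <GL/gl.h>
#include <stdio.h>

#ifndef GL_SHADING_LANGUAGE_VERSION
#define GL_SHADING_LANGUAGE_VERSION 0x8B8C
#endif

void print_glsl_version(void)
{
    /* on these drivers this reports something like "1.10 NVIDIA via Cg 1.3 compiler" */
    const GLubyte *glsl = glGetString(GL_SHADING_LANGUAGE_VERSION);
    printf("GLSL version: %s\n", glsl ? (const char *)glsl : "<not available>");
}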

Guardian
02-22-2005, 06:26 AM
What about ATI?

Anyway, my Radeon 9800 Pro can't even support the ARB_texture_non_power_of_two extension, so I doubt it will be OpenGL 2.0 compliant :(

bobvodka
02-22-2005, 07:21 AM
ATI won't have another driver release until about two weeks into March (as they are on a once-a-month cycle), and the current 5.2 drivers don't have anything new in them.

And the card supports the EXT version of the texture rectangle extension, which is part of what ARB_texture_non_power_of_two is based on, so the card will support it once the extension is written. In general it is quite capable of supporting OGL 2.0 (full support for GLSL in hardware notwithstanding, as currently neither ATI nor NV can claim that).

KRONOS
02-22-2005, 07:54 AM
So far NVIDIA's implementation has been working without bugs (regarding fbo).


Originally posted by bobvodka:
and the card supports the EXT version of the texture rectangle extension, which is part of what ARB_texture_non_power_of_two is based on, so the card will support it once the extension is written and is in general quite capable of supporting OGL2.0

Supporting texture rectangle doesn't mean supporting ARB_texture_non_power_of_two. That would mean that a GF3 would support ARB_texture_non_power_of_two too.

Korval
02-22-2005, 08:33 AM
and the card supports the EXT version of the texture rectangle extension, which is part of what ARB_texture_non_power_of_two is based on, so the card will support it once the extension is written and is in general quite capable of supporting OGL2.0

ARB_NPOT has nothing to do with GL 2.0 support. It's still an extension and not bound to the GL 2.0 core. And ATi cards can't handle it yet.


(full support for GLSL in hardware notwithstanding, as currently neither ATI nor NV can claim that)

Technically, an NV40-based card can. It has vertex program texturing (in a limited form, but it is there). Its ludicrously high fragment program instruction count lets it handle noise functions. What else is there that it cannot support in hardware?

KRONOS
02-22-2005, 08:40 AM
Originally posted by Korval:
ARB_NPOT has nothing to do with GL 2.0 support. It's still an extension and not bound to the GL 2.0 core.

NPOT is part of GL2.0 core.

plasmonster
02-22-2005, 11:09 AM
They expose OpenGL 2.0.0 and have the frame buffer object extension.

Weeeeeeeeeeeeeeeeeeeeeeeeee :)

...Sorry.

Korval
02-22-2005, 11:45 AM
NPOT is part of GL2.0 core.

Hmm... you're right. They did stick that into the core. In a 9:1:2 vote, with ATi being the only one to vote against it.

Maybe ATi should spend more time implementing features rather than holding up APIs ;)

zed
02-22-2005, 11:47 AM
A 15 MB file for those with dialup:
http://downloads.guru3d.com/download.php?det=999
Damn, I just downloaded new drivers a couple of days ago :(
Double damn, now I've got to add support for all this new stuff :(

SirKnight
02-22-2005, 01:35 PM
HOOAH!

:D

-SirKnight

KRONOS
02-22-2005, 02:02 PM
Is it impossible to create a framebuffer with no color attachment? For example, when rendering to a depth texture (e.g.: shadow maps), there is no need to have a color texture/renderbuffer. But the driver complains that the framebuffer is not "complete" because there is no color attachment. Is this a hardware limitation or what?

Korval
02-22-2005, 02:27 PM
Are you sure the driver is complaining about it being FB complete, instead of just "unsupported" (which is implementation dependent and there's nothing you can do about it)? The spec clearly states that you only need to have some attachment in place in the framebuffer, not a color attachment.
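For what it's worth, a minimal status check that separates the two cases (a sketch of my own, assuming the EXT_framebuffer_object entry points and enums are available, e.g. via GLEW):

#include <GL/glew.h>
#include <stdio.h>

/* returns 1 if the currently bound FBO is usable, 0 otherwise */
int check_fbo_status(void)
{
    GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    switch (status) {
    case GL_FRAMEBUFFER_COMPLETE_EXT:
        return 1;                          /* ready to render */
    case GL_FRAMEBUFFER_UNSUPPORTED_EXT:
        /* legal combination, but this implementation won't do it: try other formats */
        fprintf(stderr, "FBO unsupported by the implementation\n");
        return 0;
    default:
        /* one of the INCOMPLETE_* values: an application-side setup error */
        fprintf(stderr, "FBO incomplete, status = 0x%x\n", status);
        return 0;
    }
}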

zed
02-22-2005, 02:53 PM
Is it just me, or after installing them have I somehow lost the ability in NVemulate to enforce strict warnings and write program assembly? How do I re-enable these?

| opengl version = 2.0.0
| opengl vendor = NVIDIA Corporation
| opengl renderer = GeForce FX 5900XT/AGP/SSE2/3DNOW!
| opengl shading_language = 1.10 NVIDIA via Cg 1.3 compiler

GL_ARB_half_float_pixel
GL_EXT_framebuffer_object

yooyo
02-22-2005, 06:04 PM
@Korval

I was trying to test FBO, but for any texture format I choose the driver returns UNSUPPORTED... :(

Is there any list of suitable texture formats?

Looks like the driver just exposes the FBO entry points and nothing more.

I'm using a 6800U.

yooyo

sqrt[-1]
02-22-2005, 06:13 PM
It will be interesting to see what apps these new drivers break (apps that don't check the OpenGL version string properly)...

It has already been reported on other forums that Chronicles of Riddick is complaining that it needs OpenGL 1.3.
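For anyone parsing the string themselves, a defensive sketch (my own, not from any particular game): read major.minor numerically instead of comparing strings, so a "2.0.0" report doesn't fail a check for 1.3:

#include <GL/gl.h>
#include <stdio.h>

/* returns nonzero if the context reports at least major.minor */
int gl_version_at_least(int major, int minor)
{
    int got_major = 0, got_minor = 0;
    const char *ver = (const char *)glGetString(GL_VERSION);
    if (!ver || sscanf(ver, "%d.%d", &got_major, &got_minor) != 2)
        return 0;
    return (got_major > major) ||
           (got_major == major && got_minor >= minor);
}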

Korval
02-22-2005, 07:20 PM
I was trying to test FBO but for any texture format I choose the driver returns UNSUPPORTED...

Have you tried using renderbuffers instead? Have you tried a rectangular texture with the RGBA8 format?

ffish
02-23-2005, 12:08 AM
_Very_ limited test on my 6800GT doesn't give me UNSUPPORTED. Render target is RGBA32F_ARB.

KRONOS
02-23-2005, 02:46 AM
Originally posted by Korval:
Are you sure the driver is complaining about it being FB complete, instead of just "unsupported" (which is implementation dependent and there's nothing you can do about it)? The spec clearly states that you only need to have some attachment in place in the framebuffer, not a color attachment.

Yeap, it returns GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER_EXT.

-NiCo-
02-23-2005, 03:04 AM
Originally posted by KRONOS:
Yeap, it returns GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER_EXT.

According to issue 72 of the spec, it should indeed be possible to attach only a depth-renderable image.

The FB object is incomplete if the color attachment referred to by the read buffer is not color-renderable. If I'm not mistaken, the read buffer is initialized to color attachment 0. Maybe you can set the read buffer to GL_NONE. According to revision history #103, this is allowed.

Nico

KRONOS
02-23-2005, 03:34 AM
Originally posted by -NiCo-:

Originally posted by KRONOS:
Yeap, it returns GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER_EXT.

The FB object is incomplete if the color attachment referred to by the read buffer is not color-renderable. If I'm not mistaken, the read buffer is initialized to color attachment 0. Maybe you can set the read buffer to GL_NONE. According to revision history #103, this is allowed.

It didn't work either.
FRAMEBUFFER_INCOMPLETE_READ_BUFFER_EXT is generated when: "The value of FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT must not be NONE for the color attachment point named by READ_BUFFER."

Can someone please explain it in other words? :D

-NiCo-
02-23-2005, 03:37 AM
IMHO it means that there has to be a color-renderable image attached to the attachment point referred to by the read buffer.

Nico

yooyo
02-23-2005, 04:01 AM
Originally posted by Korval:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

Now it works...

yooyo

KRONOS
02-23-2005, 05:59 AM
Originally posted by -NiCo-:
IMHO it means that there has to be a color-renderable image attached to the attachment point, referred to by readbuffer.
But the spec says a color renderbuffer/texture is not needed. And I call glDrawBuffer(GL_NONE). I think this is probably a bug related to MTR (and I am working with an FX5700, so no MTR support).

spasi
02-23-2005, 06:04 AM
Hi everyone,

Regarding "render-to-depth-texture" with EXT_fbo, this is what I see here:

- To avoid FRAMEBUFFER_INCOMPLETE errors, I used glReadBuffer(GL_NONE) and glDrawBuffer(GL_NONE).

- Unfortunately, when there's only a DEPTH_ATTACHMENT (whether it's a renderbuffer or a depth texture), I always get a FRAMEBUFFER_UNSUPPORTED error. I'm certain this is a valid configuration; the spec states that clearly.

- When I added another texture (with the same size) to COLOR_ATTACHMENT0, it works without an UNSUPPORTED error (with either a renderbuffer or a depth texture to DEPTH_ATTACHMENT). Rendering does not work properly though, all I get is random artifacts. Also, the depth texture remains empty. By removing the DEPTH_ATTACHMENT, I get what I would expect for a shadow map pass with no depth testing.

-NiCo-
02-23-2005, 06:41 AM
Originally posted by KRONOS:
But the spec says a color renderbuffer/texture is not needed. And I call glDrawBuffer(GL_NONE). I think this is probably a bug related to MTR (and I am working with an FX5700, so no MTR support).

Looks like you misread my other post, where I suggested setting the read buffer to NONE, whereas you're setting the draw buffer to NONE. From the post above (spasi), this seems to resolve the FRAMEBUFFER_INCOMPLETE error.

Nico

KRONOS
02-23-2005, 07:31 AM
Originally posted by -NiCo-:
Looks like you misread my other post where I suggested to set readbuffer to NONE, whereas you're setting drawbuffer to NONE. From the post above (spasi) this seems to resolve the FRAMEBUFFER_INCOMPLETE error.

:D I forgot to say that I call glReadBuffer(GL_NONE) too, very much like spasi said. And like spasi, I still get the unsupported error, so I attached a texture to the framebuffer and it works (without artifacts, in both the depth and color textures). However, not being able to create a colorless framebuffer seems like a driver bug (they are beta drivers) rather than the hardware limitation I previously mentioned...

Anyway, it felt so good erasing the PBuffer class... :D

-NiCo-
02-23-2005, 07:43 AM
Originally posted by KRONOS:
However, not being able to create a colorless framebuffer seems like a driver bug (they are beta drivers) rather than a hardware limitation that I previously mentioned...

I did not test it myself yet, but it sure looks like a bug.


Originally posted by KRONOS:
Anyway, it felt so good erasing the PBuffer class... :D

Amen to that :)

Nico

Nico_dup1
02-23-2005, 10:34 AM
Has anyone used glew successfully to load fbo? GLEW_EXT_framebuffer_object is false although GL_EXT_framebuffer_object is included in the extension string...

(and glew is initialized correctly - at least other extensions like GLEW_ARB_multitexture are true).

Korval
02-23-2005, 11:04 AM
However, not being able to create a colorless framebuffer seems like a driver bug (they are beta drivers) rather than a hardware limitation that I previously mentioned...

After thinking about it a while, the driver is exhibiting legal behavior. The spec says that the framebuffer can be complete with nothing bound to the color buffer, as long as the read buffer is NONE. However, this says nothing about whether or not it is supported. Drivers can declare any combination of framebuffer bindings as unsupported, for any reason. Of course, there must be some binding that is supported, but it doesn't have to be depth-only.

Perhaps the hardware needs a color buffer bound, for whatever reason.

-NiCo-
02-24-2005, 01:23 AM
Has anyone been able to render to a slice of a 3D texture?

I get a GL_FRAMEBUFFER_UNSUPPORTED_EXT. According to the spec this happens if the combination of internal formats is not supported. But when rendering to a 2D texture with the same internal format, I do not get the error.
Am I doing something wrong or is the GL_FRAMEBUFFER_UNSUPPORTED_EXT error also dependent on the texture target (2D/3D)?

Nico

knackered
02-24-2005, 02:00 AM
The impatience of you guys. Tearing your hair out over half implementations in alpha drivers. Just wait till you get beta drivers through the nvidia developer program.

Toni
02-24-2005, 02:26 AM
Originally posted by knackered:
The impatience of you guys. Tearing your hair out over half implementations in alpha drivers.

:) Totally agree. I suppose NV will release a beta driver soon, so playing with these drivers is OK, but not for doing anything more serious than preparing interfaces for RTT.

After all, the bugs and issues you're detecting here are probably well known to the NV people, and I expect they will be fixed (or not :) )

execom_rt
02-24-2005, 06:25 AM
NVIDIA and ATI are not the first to have a renderer with a GL_VERSION string >= 2.0.

Look at this:

http://delphi3d.net/hardware/viewreport.php?report=317

This video card reported an OpenGL version string of 2.03, back in 2002.

:D

Martin.Blomberg
02-24-2005, 07:54 AM
Running on a GeForce 6800GT, the framebuffer seems to work. When reading back information with glReadPixels the image shows up all right, but when trying to bind the color buffer as a texture for rendering it won't show up.

I'm using the code from the examples in the extension specification. Has anyone got it working, and can anyone spot an error in my code?

I also found out that with LINEAR_MIPMAP_LINEAR set, the framebuffer object won't become complete until all mipmap levels have been defined.
/Martin


RenderTarget::RenderTarget(void)
{
glGenFramebuffersEXT(1, &fb);
glGenTextures(1, &color_tex);
glGenRenderbuffersEXT(1, &depth_rb);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

// initialize color texture
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
GL_RGBA, GL_INT, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, color_tex, 0);

// initialize depth renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
GL_DEPTH_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, depth_rb);

CHECK_FRAMEBUFFER_STATUS();

glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}

RenderTarget::~RenderTarget(void)
{

}
void RenderTarget::BindAsRenderTarget()
{
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
}
void RenderTarget::UnBindAsRenderTarget()
{
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
void RenderTarget::BindAsTexture()
{
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, color_tex);
}
void RenderTarget::UnBindAsTexture()
{
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
}

yooyo
02-24-2005, 10:45 AM
I wrote this and it works...



glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri( GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXX, TEXY, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture_data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);

yooyo

Matt Zamborsky
02-24-2005, 03:46 PM
Hi all, I tried this extension today but got strange results. When I create a framebuffer without a Z-buffer all is OK, but when I add a Z-buffer I get an UNSUPPORTED error.

I have GF FX 5900.

Could anyone help?

Korval
02-24-2005, 03:54 PM
Most likely, the bit depths of your color buffer and depth buffer are not allowed. Typically, mixing 16-bit buffers with 32-bit ones is not allowed by the hardware.

Either that, or as people have pointed out, it is due to incomplete/buggy drivers.

Martin.Blomberg
02-24-2005, 11:16 PM
So you are saying this code gives you a working texture when binding it as an active texture and using it for rendering objects? It will simply not show up for me... ahhhh, I must be doing something wrong here.


Originally posted by yooyo:
I write this and it works...



glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri( GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXX, TEXY, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture_data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);

yooyo

yooyo
02-25-2005, 12:33 AM
Someone said that without all mipmaps in the texture and a LINEAR_MIPMAP_LINEAR filter, the FBO returns unsupported. So I tried generating all mipmaps before the glFramebufferTexture2DEXT call, and the FBO accepts it :)

After rendering into the FBO, bind the texture as usual and call glGenerateMipmapEXT(GL_TEXTURE_2D). This call should build all mipmaps.

Trick is to use:


glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri( GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE ); // force driver to build mipmaps
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXX, TEXY, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture_data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);

or



glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXX, TEXY, 0, GL_RGBA, GL_UNSIGNED_BYTE, texture_data);
glGenerateMipmapEXT(GL_TEXTURE_2D); // build mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);

yooyo

Overmind
02-25-2005, 12:55 AM
About the depth buffer, I had the UNSUPPORTED problem, too, until I changed GL_DEPTH_COMPONENT to GL_DEPTH_COMPONENT24... It seems the general format specifier doesn't work...

Another question: I initialize my textures with glTexImage2D(..., NULL), similar to allocating a buffer object, and it works correctly. Is this supposed to work? That is, can I rely upon it working in other drivers (ATI, ...)? Or do I have to allocate a dummy buffer?

I didn't find anything in the spec, but perhaps I'm just missing something...

KRONOS
02-25-2005, 01:45 AM
Originally posted by Overmind:
Another question: I initialize my textures with glTexImage2D(..., NULL), similar to allocating a buffer object, and it works correctly. Is this supposed to work? That is, can I rely upon it working in other drivers (ATI, ...)? Or do I have to allocate a dummy buffer?

I didn't find anything in the spec, but perhaps I'm just missing something...

From the 1.5 spec:
If the data argument of TexImage1D, TexImage2D, or TexImage3D is a null pointer (a zero-valued pointer in the C implementation), a one-, two-, or three-dimensional texture array is created with the specified target, level, internalformat, width, height, and depth, but with unspecified image contents. In this case no pixel values are accessed in client memory, and no pixel processing is performed. Errors are generated, however, exactly as though the data pointer were valid.
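So allocating storage without client data is fine; for example (a sketch, with color_tex assumed to come from glGenTextures as in the snippets elsewhere in this thread):

glBindTexture(GL_TEXTURE_2D, color_tex);
/* NULL data: storage is allocated, image contents are left unspecified */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);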

Matt Zamborsky
02-25-2005, 06:33 AM
I use GLEW and GLUT for initializing OpenGL, and I still have the problem. I changed GL_DEPTH_COMPONENT to GL_DEPTH_COMPONENT24 but it still doesn't work. Could someone post an example of creating a framebuffer object for rendering to a texture with a depth buffer?

Trenki
02-26-2005, 05:29 AM
The following works on my GF4 Ti 4200. It does not report a FRAMEBUFFER_UNSUPPORTED error.

For the rendering I used a window with a resolution of 640x480. The framebuffer object is 512x512, so I had to adjust the viewport before rendering to the framebuffer object.
At the end I draw a screen-sized quad using the rendered texture. The result does not look as it should! It seems as if only the blue color components are actually rendered into the texture!

When executing this I had GL emulation enabled via the NVemulate tool. After installing the Forceware 75.90 drivers I cannot disable emulation anymore! Does anyone know how to disable emulation manually?


GLuint fb;
GLuint color_tex;
GLuint depth_rb;

glGenFramebuffersEXT(1, &fb);
glGenRenderbuffersEXT(1, &depth_rb);
glGenTextures(1, &color_tex);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 512, 512, 0, GL_RGB, GL_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);

glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);

CHECK_FRAMEBUFFER_STATUS();

glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0,0,512,512);

glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f, 0.5f);
glEnd();

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindTexture(GL_TEXTURE_2D, color_tex);
glEnable(GL_TEXTURE_2D);
glViewport(0,0,640,480);

//draw with texture

glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);

gluOrtho2D(0.0, 1.0, 0.0, 1.0);

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
glEnd();

3B
02-26-2005, 10:16 AM
Originally posted by Trenki:
It seems as if only the blue color components are actually rendered into the texture!
Try adding a glColor4f(1,1,1,1) before you draw the final quad maybe? :)

Though I did notice similar symptoms (red channel not always rendering) when I forgot to unbind the texture to which I was rendering...

Matt Zamborsky
02-26-2005, 12:38 PM
Thanks for the replies, but I still have the problem :) . I think I'm doing something wrong but I don't know what. So I'm pleading with everyone here: could someone post a whole application?

Trenki
02-26-2005, 01:01 PM
Originally posted by 3B:
Try adding a glColor4f(1,1,1,1) before you draw the final quad maybe?
This was it! How stupid of me :rolleyes:

As for the whole-application request: the code from my previous post is the whole OpenGL code of the application.
For initialisation I use SDL. The CHECK_FRAMEBUFFER_STATUS() macro is from the extension spec.

Overmind
02-27-2005, 02:05 AM
http://omega.fragless.org/demos/rtt-test.tar.gz

It's a little demo that renders a few reflecting spheres... Nothing special, and the code is horribly unstructured. It is really just a test for a cross-platform build, so don't expect too much.

I included Linux and Windows binaries. If you want to build it yourself, you need Jam (ftp://ftp.perforce.com/jam/).

You have to launch the application from the top-level source directory, not from the binary directory, otherwise the texture is not found (and it will probably crash, I was too lazy to write error detection code :rolleyes: ).

Matt Zamborsky
02-27-2005, 02:54 AM
Thank you very much for the demo. I tried it and was very surprised when I got the unsupported error again. So I think the error is in my ForceWare installation (I downloaded the ForceWare from Guru3D). As I said, I have a GeForce FX 5900. Could anyone explain this fu.. error?

Matt Zamborsky
02-27-2005, 02:59 AM
Wow, I finally found the error :) . I had the DOOM 3 profile set in profiles and that gives me this error, so be careful when setting a profile in the Settings view.

nystep
02-27-2005, 10:17 AM
I just installed these drivers and I don't know if I'm the only one with this problem, but it seems VESA is no longer supported... (or is it not supported by default on a GeForce 6800 GT?)

regards,

flo
02-27-2005, 12:12 PM
Has someone already succeeded in creating a stencil render buffer? For me, creating one as follows



glGenRenderbuffersEXT(1, &stencilID);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, stencilID);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_STENCIL_INDEX8_EXT, 512, 512);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, stencilID);

gives a framebuffer complete result, but the number of bitplanes returned by glGetIntegerv(GL_STENCIL_BITS, ...) is zero nonetheless.

Trenki
02-27-2005, 01:16 PM
For me the above code gives an "invalid enumerant" error for the lines with glRenderbufferStorageEXT and glFramebufferRenderbufferEXT.

Apparently the driver does not yet understand the GL_STENCIL_INDEX8_EXT and the GL_STENCIL_ATTACHMENT_EXT enumerants.

Does anyone know what effect the use of GL_STENCIL_INDEX_EXT (without bitcount indication) should have?
The spec says glRenderbufferStorageEXT can take it in the internalformat parameter, but it says nothing about what effect it should have. It isn't even listed in table 2.nnn!

I used a GF4 Ti 4200 (I had emulation enabled; does anyone know how to disable it manually? NVemulate will not do it any more with the Forceware 75.90 drivers on my system).

Nutty
02-28-2005, 06:25 AM
I can't seem to get FBO support either... Using 75.90 on Windows 2000, on a 6800GT.

EDIT
It's working now. I switched from dualview to single display. Still not available. Then just re-installed the drivers again, and the extension appeared. Working fine in dualview again.

Matt Zamborsky
02-28-2005, 06:25 AM
I tried to reproduce the error one more time; it isn't caused by the DOOM 3 profile but by some setting in the settings panel. I am trying to find the setting but haven't succeeded. It is certainly a driver bug.

Nutty
02-28-2005, 09:14 AM
Found the problem. FBO doesn't work in Multi-device compatibility mode or Multi-device performance mode. Only Single-display mode. :(

Matt Zamborsky
02-28-2005, 10:23 AM
I tried setting multi-display and it works :) . I think the implementation of this extension is full of bugs, but it's understandable, because it's the first implementation of this extension (or isn't it?).

idr
02-28-2005, 10:31 AM
I only skimmed the replies, so the question about a depth-only FBO may have been answered. In order to use an FBO without a color buffer bound, you must set the draw buffer (by calling glDrawBuffer or glDrawBuffers) to GL_NONE.

This is implied by the spec language. We have already recognized that we need to make it more explicit.
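Putting that together, a depth-only setup would look roughly like this (a sketch only, assuming the EXT entry points are loaded, a framebuffer object fb and a depth texture depth_tex already created, and the CHECK_FRAMEBUFFER_STATUS() macro from the spec):

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depth_tex, 0);
/* no color attachment: route color reads and writes to nowhere */
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
CHECK_FRAMEBUFFER_STATUS();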

Toni
03-04-2005, 03:03 AM
Originally posted by Nico:
Has anyone used glew successfully to load fbo? GLEW_EXT_framebuffer_object is false although GL_EXT_framebuffer_object is included in the extension string...

(and glew is initialized correctly - at least other extensions like GLEW_ARB_multitexture are true).

Yes, it seems the problem is that the extension implementation doesn't fully support all the entry points. In particular, it doesn't support glFrameBufferTexture1DEXT.
When GLEW detects the extension, since it sees glFrameBufferTexture1DEXT as missing, it decides that the extension is not supported.

You can see that in the glewinfo utility provided by GLEW.

3B
03-04-2005, 07:38 AM
Originally posted by toni:
Yes, it seems the problem is that the extension implementation doesn't fully support all the entry points. In particular, it doesn't support glFrameBufferTexture1DEXT.
When GLEW detects the extension, since it sees glFrameBufferTexture1DEXT as missing, it decides that the extension is not supported.

You can see that in the glewinfo utility provided by GLEW

Actually, that looks like a bug in GLEW: the function should be glFramebufferTexture1DEXT. I'd noticed that the function was misnamed, but not that it affected detecting the extension... A search-and-replace of glFrameBufferTexture1DEXT with glFramebufferTexture1DEXT in glew.c, glew.h and glewinfo.c, plus a recompile, fixes the problem...

LarsMiddendorf
03-04-2005, 10:21 AM
Has anyone compared the speed of FBO to the pbuffer, especially when rendering depth only?
How expensive is switching the framebuffer?

divide
03-04-2005, 12:48 PM
Same question: is framebuffer switching faster than pbuffer switching?

Overmind
03-04-2005, 01:47 PM
Another related question: Is it better to switch framebuffers, or to bind new textures to the same framebuffer? When should I make two framebuffers and when should I just rebind textures?

3B
03-04-2005, 08:40 PM
Originally posted by Overmind:
Another related question: Is it better to switch framebuffers, or to bind new textures to the same framebuffer? When should I make two framebuffers and when should I just rebind textures?

Issue #8 in the FBO spec says validating framebuffer state might be 'expensive' (one of the reasons for having an fb object to switch in the first place), so I assume switching framebuffers is intended to be the fast path. Example 4 also uses 1 fb per texture.

mrbill
03-04-2005, 10:04 PM
Originally posted by KRONOS:

From 1.5 spec:

If the data argument of TexImage1D, TexImage2D, or TexImage3D is a null pointer (a zero-valued pointer in the C implementation), a one-, two-, or three-dimensional texture array is created with the specified target, level, internalformat, width, height, and depth, but with unspecified image contents. In this case no pixel values are accessed in client memory, and no pixel processing is performed. Errors are generated, however, exactly as though the data pointer were valid.

A strong caution. If ARB_pixel_buffer_object is supported, and if you have a non-zero pixel unpack buffer object:

From GL_ARB_pixel_buffer_object:

If the data argument of TexImage1D, TexImage2D, or TexImage3D
is a null pointer (a zero-valued pointer in the C implementation)
and the pixel unpack buffer object is zero, a one-, two-, or three-
dimensional texture array is created with the specified target, level,
internalformat, width, height, and depth, but with unspecified
image contents. In this case no pixel values are accessed in client
memory, and no pixel processing is performed. Errors are generated,
however, exactly as though the data pointer were valid. Otherwise, if
the pixel unpack buffer object is non-zero, the data argument is
treated normally to refer to the beginning of the pixel unpack
buffer object's data.

Ambiguous matter. The manpage for glBitmap states:

From glBitmap man page:

NOTES
To set a valid raster position outside the viewport, first
set a valid raster position inside the viewport, then call
glBitmap with NULL as the bitmap parameter and with xmove
and ymove set to the offsets of the new raster position.
This technique is useful when panning an image around the
viewport.


The core spec itself is silent on this matter, and so is ARB_pixel_buffer_object. Is this a manpage bug? A spec bug? Your interpretation may vary.

-mr. bill

Overmind
03-05-2005, 12:50 AM
Originally posted by 3B:
Issue #8 in the FBO spec says validating framebuffer state might be 'expensive' (one of the reasons for having an fb object to switch in the first place), so I assume switching framebuffers is intended to be the fast path. Example 4 also uses 1 fb per texture.

So for example, if I want to render to a cubemap, I would create 6 framebuffers, and bind a cubemap face and a shared depth renderbuffer to each of them. And if I want to render to a 2D texture with the same dimensions, I make another new framebuffer object, again reusing the shared depth renderbuffer. Is this correct?

3B
03-06-2005, 01:36 PM
Originally posted by Overmind:
So for example if I want to render to a cubemap, I would create 6 framebuffers, and bind a cubemap face and a shared depth renderbuffer to each of them. And if I want to render to a 2D texture with the same dimensions, I make another new framebuffer object, again reusing the shared depth renderbuffer. Is this correct?

That's how I read the spec... Though I suppose if MAX_COLOR_ATTACHMENTS > 1, you could attach more than one face or texture to a framebuffer and switch with DrawBuffer; no idea how that would compare speed-wise to switching framebuffers.
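A sketch of that layout (my own illustration, assuming a cubemap texture cube_tex of matching size already exists):

GLuint fbos[6], depth_rb;
glGenFramebuffersEXT(6, fbos);
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);

for (int i = 0; i < 6; i++) {
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbos[i]);
    /* one cube face per framebuffer... */
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, cube_tex, 0);
    /* ...all sharing the same depth renderbuffer */
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depth_rb);
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);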

ffish
03-06-2005, 09:47 PM
Originally posted by Overmind:
Another related question: Is it better to switch framebuffers, or to bind new textures to the same framebuffer? When should I make two framebuffers and when should I just rebind textures?

I'd be super-interested in vendor comments on this one too. What's the fast path? I would've thought that using multiple fbos would be slower, since you have to switch fbos and also attach (and detach? or is that unnecessary for rendering from a previously attached texture of an unbound fbo? Maybe, according to example 4) and detach textures as I render into them and render from them. Using one fbo I just bind the fbo when I need it and attach and detach textures where I need to - I lose the overhead of switching fbos. A quick skim over the spec doesn't really say one way or another (excepting 3B's comments).

Toni
03-06-2005, 11:40 PM
Originally posted by 3B:
Actually, that looks like a bug in glew: the function should be glFramebufferTexture1DEXT. I'd noticed that function was misnamed, but not that it affected detecting the extension... search and replace glFrameBufferTexture1DEXT for glFramebufferTexture1DEXT in glew.c, glew.h and glewinfo.c and recompile fixes the problem...

Indeed :)

Thanx

Zak McKrakem
03-10-2005, 07:25 AM
76.10 out
Time to try again :p

http://www.station-drivers.com/forum/viewtopic.php?t=1174

Matt Zamborsky
03-10-2005, 07:41 AM
Oh, new drivers :) Very fast. Maybe the stencil buffer in EXT_framebuffer_object can be used now. :)

3B
03-10-2005, 07:06 PM
Originally posted by Matt Zamborsky:
Oh, new drivers :) Very fast. Maybe the stencil buffer in EXT_framebuffer_object can be used now. :)

Doesn't look like it :(
same list of supported renderbuffer formats as far as I can see (and same odd chunk of GL_INVALID_OPERATION instead of GL_INVALID_ENUM for the NV_float_buffer formats 0x8880-0x888b)

Matt Zamborsky
03-11-2005, 06:23 AM
Oh, I see :( . But they're still beta drivers, so it's good to have at least something.

V-man
03-11-2005, 06:51 PM
Does anyone see a problem with this code?
I think it should work, but I'm getting some weird trouble. Could it be because NV drivers are beta?



uint fb;
uint color_tex;

glGenFramebuffersEXT(1, &fb);
glGenTextures(1, &color_tex);

glGenRenderbuffersEXT(1, &depth_rb);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

//Initialize color texture
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);

CheckFramebufferStatus();

glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);

glDeleteTextures(1, &color_tex);
glDeleteRenderbuffersEXT(1, &fb);

CGDeveloper
03-11-2005, 10:01 PM
The same happened to me; when I updated GLEW to version 1.3.1, it worked.


Originally posted by Nico:
Has anyone used glew successfully to load fbo? GLEW_EXT_framebuffer_object is false although GL_EXT_framebuffer_object is included in the extension string...

(and glew is initialized correctly - at least other extensions like GLEW_ARB_multitexture are true).

yooyo
03-12-2005, 02:21 AM
Here is a working FBO example (http://rttv.users.sbb.co.yu/GLFramework01.zip). Take a look at the FBOTest class.

yooyo

divide
03-12-2005, 04:52 AM
Has anyone tried switching FBOs a hundred times per frame? That was the main thing preventing me from using RTT: I need to compute a hundred small independent renders per frame, and context switching was way too slow with the pbuffer RTT method.

3B
03-13-2005, 05:37 AM
Got bored and decided to do some FBO speed testing, thought I would post some numbers before going to sleep since everyone seems curious :)

Numbers so far (with lots of stuff being done inefficiently) on 2.6GHz p4, 6800 GT, 76.10 betas :

10 sorta shiny spheres, reflecting 3 levels deep = 30-35 FPS. (180 fb switches/frame, 320 tris/sphere, ~140k tris for entire scene)
with just the glClear in the reflection passes = ~60-70 FPS
with just the BindFramebufferEXT = ~85-90 FPS

so looks like pretty easily 5-10k fb switches/sec depending on how much you do per buffer...

misc implementation details :
cube maps are 256x256xRGBA16F (doesn't look fill limited, about the same speed at 64x64)
No mipmaps on the cubemaps, since I couldn't get that working with these drivers...

1 Framebuffer per cubemap face, 2 cubemaps per object for a total of 120 framebuffers

24bit depth renderbuffer shared between all framebuffers

Overmind
03-13-2005, 06:37 AM
Just out of curiosity, could you try the speed of the same scene without framebuffer switches? That is, rendering the reflection passes on screen instead of to the texture...

3B
03-13-2005, 07:05 AM
Originally posted by Overmind:
Just out of curiosity, could you try the speed of the same scene without framebuffer switches? That is, rendering the reflection passes on screen instead of to the texture...

A quick hack of just skipping the BindFramebufferEXT, with no other code changes, looks to be about ~7-10% higher frame rate for the normal or clear-only cases...
planning on doing a real pbuffer and copy from backbuffer version for more valid comparison at some point once I'm more awake, will depend on how much time I spend on optimizations or making it look nicer instead :)

edit: more numbers to confuse the issue :)
remembered that I was rendering to float textures, so tried GL_RGBA instead...
GL_RGBA16F: with BindFramebuffer = 35fps, without = 39
GL_RGBA: with bind = 41fps, without = 60fps...
--
actually, looks like without the BindFramebuffers, its wasting a lot of time doing glClears on the entire window instead of just the 256x256 area, so the above numbers are quite a bit lower than they should be, rough guess looks like about a factor of 2x...

divide
03-13-2005, 08:54 AM
Thanks for the test stuff 3B ! Keep on testing, I'd like to hear the difference with pbuffer in the same conditions :)

potmat
03-15-2005, 12:38 PM
Anybody have any idea how to use fbo with MRT (multiple render targets)? That is, if my fragment shader has multiple color outputs, how do I bind to them and display them as a texture? Previously this was done with pbuffers, but there must now be a better way through fbo correct?

Valkyr
03-15-2005, 03:50 PM
Originally posted by potmat:
Anybody have any idea how to use fbo with MRT (multiple render targets)? That is, if my fragment shader has multiple color outputs, how do I bind to them and display them as a texture? Previously this was done with pbuffers, but there must now be a better way through fbo, correct?

There is some pseudo-code in the specification file on how to do this:
http://www.opengl.org/documentation/extensions/EXT_framebuffer_object.txt
Just search for this line: Here is an pseudo-code example using option (E):

Other question: does the extension support the use of GL_TEXTURE_RECTANGLE_NV instead of GL_TEXTURE_2D? I can't get my test working...

Korval
03-16-2005, 12:14 AM
Other question: does the extension support the use of GL_TEXTURE_RECTANGLE_NV instead of GL_TEXTURE_2D? I can't get my test working...

I don't recall seeing anything in the spec to disallow rectangle textures. And they could be faster than regular textures, as they may not be stored in a swizzled format. But, once again, the implementation is still not finished.

KRONOS
03-16-2005, 02:28 AM
NVIDIA just posted a presentation regarding FBO that was part of their GDC 2005 GL talks.



In order of increasing performance:
– Multiple FBOs
• create a separate FBO for each texture you want to render to
• switch using BindFramebuffer() – can be 2x faster than wglMakeCurrent() in beta NVIDIA drivers

– Single FBO, multiple texture attachments
• textures should have same format and dimensions
• use FramebufferTexture() to switch between textures

– Single FBO, multiple texture attachments
• attach textures to different color attachments
• use glDrawBuffer() to switch rendering to different color attachments
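As an illustration of the second option, re-pointing a single FBO's color attachment at different textures (a sketch of my own; fb, texA and texB are assumed to exist with the same format and size, and draw_scene_* are placeholders for your own rendering):

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, texA, 0);
draw_scene_a();      /* render to texA */

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, texB, 0);
draw_scene_b();      /* render to texB without switching FBOs */

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);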

Visualc++
03-16-2005, 11:27 AM
On a GeForce FX 5200 it is not supported. Can anybody help me?

potmat
03-17-2005, 09:15 AM
Regarding the Nvidia presentation on FBO, has anybody actually got anything working with COLOR_ATTACHMENTn_EXT? Is it even supported or recognized by any drivers (even beta) yet? It would have been nice for Nvidia to give a complete working piece of code with this (if such a thing is even possible yet).

EDIT

I can get GL_COLOR_ATTACHMENT0_EXT to display as a texture using the sample code from the FBO spec. But getting GL_COLOR_ATTACHMENTn_EXT to display seems to turn up nothing.

3B
03-18-2005, 12:48 AM
Originally posted by potmat:
Regarding the Nvidia presentation on FBO, has anybody actually got anything working with COLOR_ATTACHMENTn_EXT? Is it even supported or recognized by any drivers (even beta) yet?

COLOR_ATTACHMENTn_EXT works for me (6800 GT, 75.90 and 76.10 drivers), at least 0-2; I don't remember if I've tried 3 or not, which is the highest supported on my card. Both one buffer at a time and multiples with ATI_DrawBuffers()...

Don't forget to actually draw to more than ATTACHMENT0, either call DrawBuffer() and render a pass for each attachment, or DrawBuffers() and output to more than 1 at a time.

3B
03-18-2005, 09:23 AM
more FBO benchmarking results...

Simpler test, trying to isolate FBO overhead more..
1. render quad to texture A
2. render quad textured with A to texture B
3. render quad textured with B to texture A
repeat 2 and 3 for 2 seconds, optionally doing a glClear for each render, and/or generating mipmaps.
(or the same thing, but doing 64 textures in parallel)

with texture size = 32x32 GL_RGBA
CopyTexSubImage from front buffer = ~50k/sec without mipmaps, ~10k with mipmaps
FBO per texture = ~15k/sec without mipmaps, ~5k/sec with both
single FBO switching with FramebufferTexture = 80k/sec with no mipmaps, no clear. ~35k/sec with clear, ~30k/sec with mipmaps, ~20k with mipmaps and clear
Pbuffer = ~3-6k/sec
GenerateMipmaps called repeatedly by itself = ~35k /sec

above 128x128, the single FBO/no clear/no mip case starts to slow down : 79k/sec at 128x128, ~40k at 256x256

at 512x512:
CopyTexSubImage from front buffer = ~6k/sec without mipmaps or clear, ~2.7k with both
FBO per texture = ~10k/sec without mipmaps or clear, ~3.2k/sec with both
single FBO switching with FramebufferTexture = 12k/sec with no mipmaps, no clear. ~5.5k/sec with clear, ~5k/sec with mipmaps, ~3.3k with mipmaps and clear
Pbuffer = ~4-5k/sec with 1 texture, ~2-3k with 64
GenerateMipmaps called repeatedly by itself = ~8k /sec

--
didn't test using FBO and DrawBuffers to switch textures yet, seems like that would probably end up being the same as the single FBO case once you render to much more than 4 textures, but will probably try it soon anyway...

divide
03-18-2005, 12:40 PM
great tests, thanks ! :)

Overmind
03-19-2005, 01:11 AM
Wow, that's a huge difference between FBO switching and binding another texture...

Regarding the case with DrawBuffers: What happens when I bind more than 4 color attachments? Will I get an unsupported error, or am I just unable to render to more than 4 in one pass?

3B
03-19-2005, 01:25 AM
Originally posted by Overmind:
Wow, thats a huge difference between FBO switching and binding another texture...

Regarding the case with DrawBuffers: What happens when I bind more than 4 color attachments? Will I get an unsupported error, or am I just unable to render to more than 4 in one pass?

It should fail to bind if you try a higher-numbered attachment point. Also, don't rely on the number 4; check the value of MAX_COLOR_ATTACHMENTS_EXT, whose minimum according to the spec is 1 (and it looks like it could change with the format of the texture, so to be completely correct, I think you would need to do the check after binding the first texture to the fb).
The number you can render to at once is a different value, from the Draw_Buffers spec(s), which could theoretically be less than the number that can be bound at once, but we'll have to wait and see what the driver devs decide on that :)
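The two queries look like this (a sketch; GL_MAX_DRAW_BUFFERS_ARB is the ARB_draw_buffers limit, promoted to GL_MAX_DRAW_BUFFERS in GL 2.0):

GLint max_attach = 0, max_draw = 0;
glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS_EXT, &max_attach); /* how many can be attached */
glGetIntegerv(GL_MAX_DRAW_BUFFERS_ARB, &max_draw);        /* how many can be rendered to in one pass */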

potmat
03-21-2005, 08:24 AM
Can anyone post some code for using multiple color attachments? Both the code to set up the attachments and then how to display them. It would be greatly appreciated.

3B
03-21-2005, 03:09 PM
Originally posted by potmat:
Can anyone post some code for using multiple color attachments? Both the code to set up the attachments and then how to display them. It would be greatly appreciated.

Simple example (with no error checking, etc.; in particular, you should check the number of valid attachments and draw buffers, currently 4 on my 6800GT, dunno about older cards. Also, verify the framebuffer is valid before trying to render to it; see the example macro in the spec.)

////////////////////////////////////////////////////////////////////////
GLuint fb,rb,tex[4];
//misc setup
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,1,0,1,0,16);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0,0,-1);
glDisable(GL_CULL_FACE);
glDisable(GL_LIGHTING);
glDisable(GL_BLEND);
glDisable(GL_DEPTH_TEST);

//allocate space for the textures
glGenTextures(4,tex);
for (int i=0;i<4;i++) {
glBindTexture(GL_TEXTURE_2D,tex[i]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,256,256,0,GL_RGBA,GL_INT,0);
//glGenerateMipmapEXT(GL_TEXTURE_2D); //add if using mipmaps
}
//create and bind the framebuffer, and add the textures
glGenFramebuffersEXT(1,&fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,fb);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,GL_COLOR_ATTACHMENT0_EXT,GL_TEXTURE_2D,tex[0],0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,GL_COLOR_ATTACHMENT1_EXT,GL_TEXTURE_2D,tex[1],0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,GL_COLOR_ATTACHMENT2_EXT,GL_TEXTURE_2D,tex[2],0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,GL_COLOR_ATTACHMENT3_EXT,GL_TEXTURE_2D,tex[3],0);
//optional, add a depth buffer (stencil is similar, but doesn't work on nv 75.90 or 76.10 drivers)
//glGenRenderbuffersEXT(1,&rb);
//glBindRenderbufferEXT(GL_RENDERBUFFER_EXT,rb);
//glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,GL_DEPTH_COMPONENT24,256,256);
//glBindRenderbufferEXT(GL_RENDERBUFFER_EXT,0);

//set up for rendering
glViewport(0,0,256,256); //set viewport to size of texture
GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_COLOR_ATTACHMENT2_EXT, GL_COLOR_ATTACHMENT3_EXT };
glDrawBuffersARB(4,buffers); //enable all 4 attachments for drawing
glsl.activate(); //enable the shader
glBegin(GL_QUADS); //draw something
glVertex3f(0,0,0);
glVertex3f(1,0,0);
glVertex3f(1,1,0);
glVertex3f(0,1,0);
glEnd();
glsl.off();
//switch to OS provided framebuffer (aka the window)
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,0);
glViewport(0,0,512,512); //and reset the viewport
//set up for drawing
glEnable(GL_TEXTURE_2D);
glClearColor(1,0,1,1);
glClear(GL_COLOR_BUFFER_BIT);
glColor4f(1,1,1,1);
//bind each texture and draw it as a quad
for (int i=0;i<4;i++) {
float x1 = (i&1)*0.5;
float y1 = (i&2)*0.25;
float x2 = x1+0.5,y2=y1+0.5;
glBindTexture(GL_TEXTURE_2D,tex[i]);
glBegin(GL_QUADS);
glTexCoord2f(0,0);
glVertex3f(x1,y1,0);
glTexCoord2f(1,0);
glVertex3f(x2,y1,0);
glTexCoord2f(1,1);
glVertex3f(x2,y2,0);
glTexCoord2f(0,1);
glVertex3f(x1,y2,0);
glEnd();
}
//swap buffers as needed for display...

shaders :

//vertex prog
void main() {
gl_Position = ftransform();
}
//fragment prog
void main() {
//draw something to each of the output targets
gl_FragData[0] = vec4(1.0,0.0,0.0,1.0);
gl_FragData[1] = vec4(0.0,1.0,0.0,1.0);
gl_FragData[2] = vec4(0.0,0.0,1.0,1.0);
gl_FragData[3] = vec4(1.0,1.0,1.0,1.0);
}

should draw 4 quads to the screen: red, green, blue and white...

divide
03-25-2005, 02:50 AM
I tried to assign my FBO to a float texture but I obviously forgot something:


glGenTextures(1, &FBOTexture);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, FBOTexture);

glTexImage2D (
GL_TEXTURE_RECTANGLE_NV,
0,
GL_FLOAT_RGBA16_NV,
256,
256,
0,
GL_RGBA,
GL_FLOAT,
0
);

glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffersEXT(1, &FBOObject);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, FBOObject);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, FBOTexture, 0);

It only produces NULL values to the target float texture (though I end my fragment program with gl_FragColor = vec4(1.,1.,1.,1.); )

Should I initialize something specific ?
3B, can you paste the way you worked with FBO and float textures ?

By the way, is the GL_TEXTURE_RECTANGLE_NV format obligatory when working with float textures? Couldn't it be a classic power-of-two texture?

-NiCo-
03-25-2005, 03:54 AM
divide,

You're assigning the texture rectangle as a texture2D color attachment; that won't work. You have to attach it as GL_TEXTURE_RECTANGLE_NV.

It's also possible to use the TEXTURE_2D target, but then you have to initialize a floating point texture according to the ATI_texture_float spec.
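In other words, the attach call should name the rectangle target too (a sketch, reusing the names from the code above):

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_RECTANGLE_NV, FBOTexture, 0);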

Nico

divide
03-25-2005, 04:27 AM
Right. I corrected this but it still doesn't work :-/
Tried the ATI method, doesn't work either.


glGenTextures(1, &FBOTexture);
glBindTexture(GL_TEXTURE_2D, FBOTexture);

glTexImage2D (
GL_TEXTURE_2D,
0,
GL_RGBA_FLOAT32_ATI,
256,
256,
0,
GL_RGBA,
GL_FLOAT,
0
);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffersEXT(1, &FBOObject);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, FBOObject);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, FBOTexture, 0);

3B
03-25-2005, 03:29 PM
Try the ARB FP extension: GL_RGBA32F_ARB, GL_RGBA16F_ARB, etc. Those are what I've been using, and they work for me... Hmm, looks like those are actually the same values as the ATI extension (and Apple as well).

changing my sample above to
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA32F_ARB,256,256,0,GL_RGBA,GL_INT,0);
or
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA16F_ARB,256,256,0,GL_RGBA,GL_INT,0);
works here...

Are you checking for errors?

Random things to try:
if using 32bit floats, disable blending and whatever else NV doesn't support for those...

make sure the texture isn't bound when you render

did you call DrawBuffer(GL_COLOR_ATTACHMENT0_EXT) before you draw anything to the FBO? (don't remember if it defaults to that automatically or not)

yooyo
03-25-2005, 04:24 PM
Finally I got working depth buffering using FBO. This is with new FW 76.43. I didn't try with older 75 drivers. Here is a complete setup code:


Init code:

// create FBO
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

// create color texture & attach to FBO
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXX, TEXY, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenerateMipmapEXT(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, color_tex, 0);

// initialize depth renderbuffer
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, TEXX, TEXY);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);

// create depth texture & attach to FBO
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, TEXX, TEXY, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, depth_tex, 0);


This is GLSL shader code:

uniform sampler2DShadow depthTex;

void main(void)
{
gl_FragColor = texture2D(depthTex, gl_TexCoord[0]);
}

And this is render code:

glBindTexture(GL_TEXTURE_2D, depth_tex);
glEnable(GL_TEXTURE_2D);

sh->Program.Bind(); // use GLSL shader
sh->Program.sendUniform("depthTex", 0);
glColor3f(1,1,1);
glBegin(GL_QUADS); // draw quad
{
glTexCoord2f(0,0); glVertex3f(-5, -5, 0);
glTexCoord2f(1,0); glVertex3f( 5, -5, 0);
glTexCoord2f(1,1); glVertex3f( 5, 5, 0);
glTexCoord2f(0,1); glVertex3f(-5, 5, 0);
}
glEnd();
glDisable(GL_TEXTURE_2D);
sh->Program.Unbind(); // "unuse" shader

yooyo

funkeejeffou
03-26-2005, 08:33 AM
Hello,

I've just got the FrameBuffer_Object extension working on my GF6600 (drivers 76.40), but I still have some remaining questions.

- Is using 32-bit floats really slower than 16-bit ones for color attachments (they will be bound later as a texture2D)?
- Can we have more than one depth buffer for a framebuffer? (I know the question might seem weird...)
- Can we render to more than one framebuffer in the same pass?
- When rendering to a framebuffer with multiple color attachments, is using gl_FragData[n] in my fragment shaders the right way to write to the specific color attachment buffer n?

When you initialize a float texture, you should put GL_FLOAT for the type in glTexImage2D, right?
Why is 3B using GL_INT???

Cheers, Jeff.

divide
03-26-2005, 11:57 PM
3B: No way. I tried with GL_RGBA16F_ARB and GL_RGBA32F_ARB, and it still produces NULL texels.

Switching back to GL_RGBA8 makes it work again, so I assume I activated the right things.
I'm not using blending.

I assume my GeForce FX 5200 is able to support those float texture extensions, right?

The tricky thing is maybe that I'm using 3 different textures in my second pass:
Whereas the first pass just writes values to the FBO without the help of any other texture data, the second uses 3 different textures, and one of these is the result of the FBO. It works when I bind an RGBA8 FBO, but it doesn't when binding RGBA16F or RGBA32F.

To set 3 different textures for the second pass, I proceed as follows:


glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,Material->GLDiff);
glActiveTextureARB(GL_TEXTURE2_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,Material->GLDisp);
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,FBOTexture);


glUniform1iARB(FBOTexLoc, 0);
glUniform1iARB(DiffuseTexLoc, 1);
glUniform1iARB(DisplaceTexLoc, 2);

Then when I need to use the FBO I just switch off texture0 like this:


glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, FBOObject);

And reactivate it for the second pass like this:


glBindTexture(GL_TEXTURE_2D,FBOTexture);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

Is something wrong?

Matt Zamborsky
03-27-2005, 02:07 AM
GL_RGBA16F_ARB is part of ARB_texture_float, which is similar to the ATI version, but the GF FX doesn't support either of these extensions. You must use the NV_float_buffer extension, but beware of its restrictions (e.g. rectangular textures).

ScottManDeath
03-27-2005, 11:51 AM
Originally posted by Matt Zamborsky:
GL_RGBA16F_ARB is part of ARB_texture_float, which is similar to the ATI version, but the GF FX doesn't support either of these extensions. You must use the NV_float_buffer extension, but beware of its restrictions (e.g. rectangular textures).

So, on the GF FX I could use texture rectangle and the internal formats from NV_float_buffer (e.g. GL_FLOAT_RGBA32_NV) together with EXT_framebuffer_object? So I wouldn't need a pbuffer in conjunction with NV_float_buffer?

This would be fine because my dev machine has just a GF FX and I don't want to install VS on my test machine with the GF 6800.

Matt Zamborsky
03-27-2005, 11:52 PM
Yes, you can use framebuffer objects in conjunction with NV_float_buffer; I got it working just now and it works perfectly. I think the NV_float_buffer spec will be updated once the framebuffer object extension is completely implemented.

If someone wants the demo of EXT_framebuffer_object with NV_float_buffer and ARB_texture_rectangle + GLSL, just say so and I'll post it. ;)

ScottManDeath
03-28-2005, 02:02 AM
Originally posted by Matt Zamborsky:
If someone wants the demo of EXT_framebuffer_object with NV_float_buffer and ARB_texture_rectangle + GLSL, just say so and I'll post it. ;)

I see this as a clear invitation :)

Thanks in advance!

divide
03-28-2005, 02:24 AM
Matt, I'd like to have a look at it too ! :)

Matt Zamborsky
03-28-2005, 04:03 AM
Here it is. Have fun.

framebuffer demo (http://storage.darxidegames.com/framebuffer_object.rar)

Revision: if you already downloaded it, please download it again; I made some changes to the code.

divide
03-28-2005, 09:28 AM
Matt, you wrote a note in your readme about a possible fallback to fixed-point values? How is that possible? If I want to write float values between 0.0 and 1.0, am I unable to, because the driver will make them fall back to fixed-point values??

Matt Zamborsky
03-28-2005, 10:20 AM
Fallback to fixed point isn't allowed, but it's good to test whether the implementation does anything it shouldn't :) . The driver doesn't implement this extension completely yet, so how can I be sure that I really have a floating-point format? :)
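
One rough way to check (just a sketch, not something from the demo; clamping behaviour varies by driver, so treat it only as a sanity test): have the fragment shader write a value above 1.0 and read it back.

// the test shader writes: gl_FragColor = vec4(2.0, 0.0, 0.0, 1.0);
// after drawing a quad into the bound float FBO (needs <stdio.h> for printf):
GLfloat pixel[4];
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, pixel);
if (pixel[0] > 1.5f)
printf("unclamped value survived - looks like a real float buffer\n");
else
printf("value got clamped to 1.0 - probably a fixed point fallback\n");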

ScottManDeath
03-28-2005, 02:25 PM
Thanks for sharing the code. I played a little bit with it, also storing/setting the viewport like your second revision. I added some animation, so now I can see that something is happening :cool: . Anyone interested?

Which hw/driver do you use? I use a GF FX 5600 Go and the 75.90 drivers. I had to change textureRect into texture2DRect, because otherwise I got a compilation error.

Checking the spec, your code is the correct one; maybe you have a newer driver than me?

Edit: I upgraded to FW 76.41 and now GLSL behaves as it should :)

How do I render to a depth texture while also rendering to a color texture? I tried both of the following, but I only get a white depth texture. My intention is to render the depth into a texture and use that texture as a luminance value in a subsequent render pass, so no shadow mapping.


// generate framebuffer
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

// generate 2D texture(color buffer)
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, color_tex);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_FLOAT_RGBA32_NV, rt_w, rt_h, 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_RECTANGLE_ARB, color_tex, 0);

CHECK_FRAMEBUFFER_STATUS();

// generate depth renderbuffer
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, rt_w, rt_h);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);

CHECK_FRAMEBUFFER_STATUS();

// Depth texture
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, depth_tex);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_DEPTH_COMPONENT24, rt_w, rt_h, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_RECTANGLE_ARB, depth_tex, 0);

CHECK_FRAMEBUFFER_STATUS();

glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

My fragment shader looks like this:

varying vec2 T;

uniform samplerRect texture;

void main (void)
{
gl_FragColor = textureRect(texture,T);
}

divide
03-28-2005, 10:24 PM
Where did you get those 76.41 drivers? :confused:

M/\dm/\n
03-28-2005, 11:17 PM
76.41: http://downloads.guru3d.com/download.php?det=1022

In some reviews I read that FSAA in DOOM3 wasn't working.

ScottManDeath
03-29-2005, 04:16 AM
Ok, it works now. I was just too stupid to see it, even with glasses ;) . The depth values in my z texture were nearly one, so of course it looked white. Now I've adjusted the position of my object and it does what it is supposed to do.

That's the final setup code, with NV_float_buffer and ARB_texture_rectangle. There is no need for a depth renderbuffer when using a texture as the depth attachment.

// generate framebuffer
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

// generate 2D texture(color buffer)
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, color_tex);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_FLOAT_RGBA32_NV, rt_w, rt_h, 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_RECTANGLE_ARB, color_tex, 0);
CHECK_FRAMEBUFFER_STATUS();

// Depth texture
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, depth_tex);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_DEPTH_COMPONENT24, rt_w, rt_h, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_RECTANGLE_ARB, depth_tex, 0);
CHECK_FRAMEBUFFER_STATUS();

Matt Zamborsky
03-29-2005, 05:31 AM
OK, thanks for downloading the demo :) .
I have a GF FX5900 with the 76.41 driver.
I played a little with the depth buffer of a framebuffer object and found out that I can't use an NPOT depth buffer like 1024x768, but I found nothing about this restriction in the spec. So the solution to this problem is a depth rectangle texture, I think, but I couldn't set it up correctly. Now that I see your code for this, I am going to try it.

Has anyone tried doing HDR with this extension? I am doing some tests, and as far as I can see the speed doesn't drop with 16 bits per channel. Maybe that's because I have 256-bit bandwidth and my bottleneck is pixel shader performance. So HDR won't hurt rendering so much with this extension, am I right?
Of course, that's not counting the time for tonemapping.

3B
03-29-2005, 03:25 PM
Do the 76.41 drivers have working stencil renderbuffers yet? Or mipmap generation on cubemaps?

Matt Zamborsky
03-30-2005, 08:50 AM
Originally posted by 3B:

Originally posted by Overmind:
Just out of curiosity, could you try the speed of the same scene without framebuffer switches? That is, rendering the reflection passes on screen instead of to the texture...

A quick hack of just skipping the BindFramebufferEXT, with no other code changes, looks to be about a ~7-10% higher frame rate for the normal or clear-only cases...
Planning on doing a real pbuffer and copy-from-backbuffer version for a more valid comparison at some point once I'm more awake; it will depend on how much time I spend on optimizations or making it look nicer instead :)

edit: more numbers to confuse the issue :)
Remembered that I was rendering to float textures, so tried GL_RGBA instead...
GL_RGBA16F: with BindFramebuffer = 35fps, without = 39
GL_RGBA: with bind = 41fps, without = 60fps...
--
Actually, it looks like without the BindFramebuffers it's wasting a lot of time doing glClears on the entire window instead of just the 256x256 area, so the above numbers are quite a bit lower than they should be; a rough guess looks like about a factor of 2x...

It's quite an old post, but you can use the scissor test to restrict the clear to the 256x256 area and then call glClear.

Cab
03-30-2005, 10:05 AM
Originally posted by Matt Zamborsky:
It's quite an old post, but you can use the scissor test to restrict the clear to the 256x256 area and then call glClear.

Are you sure? I think that glClear clears everything even if glScissor is enabled. I don't have the documentation here, but this is what I recall.

Hope this helps.

[EDIT] It seems you are right:
http://www.mevis.de/~uwe/opengl/glClear.html
"The pixel ownership test, the scissor test, dithering and the buffer writemasks affect the operation of glClear. The scissor box bounds the cleared region. Alpha function, blend function, logical operation, stenciling, texture mapping, and z-buffering are ignored by glClear."

funkeejeffou
04-02-2005, 09:28 PM
Hi,

Has anyone succeeded in using float buffers with 32-bit precision? I have a GF6600 with the 76.41 drivers, and when I specify GL_RGB32F_ARB textures as color attachments, my framebuffer fails to create.

I really need 32-bit floats because I am getting serious visual artefacts when rendering.

Thanks in advance,
Jeff.

LarsMiddendorf
04-05-2005, 08:40 AM
I've installed the beta driver version 76.41 and framebuffer objects don't work anymore. Even the following example from the extension spec returns GL_FRAMEBUFFER_UNSUPPORTED_EXT. How can I solve this problem? Thanks.



glGenFramebuffersEXT(1, &fb);
glGenTextures(1, &color_tex);
glGenRenderbuffersEXT(1, &depth_rb);
glGenRenderbuffersEXT(1, &stencil_rb);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

// initialize color texture
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 512, 512, 0,
GL_RGB, GL_INT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, color_tex, 0);

// initialize depth renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
GL_DEPTH_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, depth_rb);

// initialize stencil renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, stencil_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
GL_STENCIL_INDEX, 512, 512);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
GL_STENCIL_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, stencil_rb);

Matt Zamborsky
04-05-2005, 08:45 AM
Forget about stencil at this time; it isn't functional, and it seems to be the hidden bug I had too. Try reinstalling the driver, and try switching to another profile in the settings as well.

LarsMiddendorf
04-05-2005, 09:07 AM
Thanks. It works now. I removed the stencil renderbuffer, and changed some random settings in the control panel.

potmat
04-07-2005, 09:59 AM
I've implemented multiple color buffers for multiple outputs from fragment shaders. It works; however, strange things are happening: the images appear to blend together. That is, faces that are occluded are still visible, and objects appear to be "see through". I've tried playing around with glCullFace and normal directions, but to no avail. This only seems to happen when using multiple color buffers. Anyone have any ideas? Here's an image to demonstrate: Sample Image (http://www.tedp.net/media/teapot.JPG)

potmat
04-07-2005, 10:02 AM
I should have noted that in the sample image, the problem is that you can see the handle of the teapot "through" the teapot.

Jens Scheddin
04-07-2005, 12:34 PM
Originally posted by potmat:
I've implemented multiple color buffers for multiple outputs from fragment shaders. It works; however, strange things are happening: the images appear to blend together. That is, faces that are occluded are still visible, and objects appear to be "see through". I've tried playing around with glCullFace and normal directions, but to no avail. This only seems to happen when using multiple color buffers. Anyone have any ideas? Here's an image to demonstrate: Sample Image (http://www.tedp.net/media/teapot.JPG)

Probably no depth test enabled?

potmat
04-07-2005, 12:41 PM
Problem solved. The depth test WAS enabled, but the FBO setup lacked this line:


glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);
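
For anyone hitting the same thing, the depth attachment setup around that line looks roughly like this (just a sketch; fb, depth_rb, rt_w and rt_h are placeholder names, not my actual code):

// create a depth renderbuffer and attach it, so depth testing works inside the FBO
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);

glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, rt_w, rt_h);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);

glEnable(GL_DEPTH_TEST); // the test itself still has to be enabled while rendering to the FBO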

Java Cool Dude
04-13-2005, 10:24 PM
I put together a demo that uses FBOs; however, I didn't really notice either a gain or a loss in performance after switching from pixel buffers...
hmmm
Screenshot (http://www.realityflux.com/abba/C++/GLSLReflRefrChrm/Refl%20Refr%20Chrm%20Aberr.jpg)
Binaries (http://www.realityflux.com/abba/C++/GLSLReflRefrChrm/ReflRefrChrmAberr.zip)
Source (http://www.realityflux.com/abba/C++/GLSLReflRefrChrm/ReflRefrChrmAberrSrc.zip)
Engine source (http://www.realityflux.com/abba/C++/SXMLEngine/SXMLEngine.zip)

Tojiro
04-14-2005, 08:47 AM
Nice demo, but I can see why it wouldn't show much of a difference in performance. Correct me if I'm wrong, but I'd imagine that the effect you're doing only requires 2 or 3 RTT operations per frame, right? That would mean 3 or 4 context switches with the pbuffer system. While I'm sure there is SOME slowdown there, on that small a scale it probably won't even make a blip on modern hardware, so the switch to FBO would seem negligible.

The real improvements are going to show up when we start using RTT as an integral part of our scenes. For example, in his latest QuakeCon keynote, Carmack mentioned that he was using shadow buffers in his research engine. Since FBO wasn't available at that point, he was using pbuffers for all render-to-texture operations, which required "hundreds of context switches per frame", something he remarked was slowing down rendering significantly. I would imagine that in a situation like that, the switch to FBOs (if well implemented) would give a significant increase in speed.

Just my $0.02

Java Cool Dude
04-14-2005, 09:50 AM
I'm updating the dynamic cubemap about 30 times a second, which corresponds to half the number of context switches compared to using pbuffers.
With FBOs I was expecting slightly better frame rates, but it was about the same.
I guess I'm shader limited in this case.
PS: notice that when the bump scale is pushed all the way up, the frame rate falls dramatically. I'm led to believe this is due to the way texture caches work: when normals change significantly from one fragment to the next, cache data locality gets disturbed, which results in intense flushing and buffering... slightly off topic, I know :p

weesel
04-16-2005, 09:53 PM
OK, I've been fighting these render buffers for a long time. Here's the problem: the FBO status check returns complete, but when I try to render I get nothing. However, if I do a glClear with whatever color, it does show up on the texture; no matter what I do, though, no actual drawing shows up. BTW, if I draw to the screen what I would be drawing to the FBO, it works fine. Any ideas?

tamlin
04-16-2005, 10:18 PM
Originally posted by weesel:
...
if I draw to the screen what I would be drawing to the FBO, it works fine. Any ideas?

Note: I haven't read up on FBO, so I don't know if any of the following makes sense.

About the only thing I can think of would be buffers such as depth or alpha being set up differently for the FBO vs. the window framebuffer. It should be easy enough to verify by turning off depth, alpha and stencil testing.

weesel
04-16-2005, 11:50 PM
Not sure what the problem was before, but I dropped the code I was using into what I was writing it for and it works, so all's well that ends well... almost. I am getting a different problem now: when I render to the FBO, it seems that part of it is getting cut off. The top and right sides of the texture are not getting drawn to. I am using the same size viewports. ???

Zeross
04-25-2005, 04:17 AM
At the moment I'm experimenting with shadow maps and FBO. It works OK, but I have to attach a dummy color buffer to my framebuffer object, otherwise its status is incomplete.

From the spec it seems unnecessary to do that (issue 45), so is this a limitation of the current drivers (76.43) or is it my fault?

skynet
04-25-2005, 04:42 AM
Maybe you forgot to set the read and draw buffers to GL_NONE?
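
Something like this, i.e. a depth-only FBO (just a sketch; fb and depth_tex are placeholder names):

// depth texture attached, no color attachment at all
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, depth_tex, 0);

// tell GL that there is nothing to draw to or read from in the color buffers
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

CHECK_FRAMEBUFFER_STATUS(); // completeness check; current drivers may still report UNSUPPORTED, as discussed below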

Zeross
04-25-2005, 05:03 AM
Originally posted by skynet:
Maybe you forgot to set the read and draw buffers to GL_NONE?

In fact, if I don't set the draw buffer to GL_NONE the status is GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER_EXT, and the same with the read buffer (GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER_EXT).

If I set them both to GL_NONE the status becomes GL_FRAMEBUFFER_UNSUPPORTED_EXT.

Bumper
05-04-2005, 01:35 PM
I tried the example below and I get the status UNSUPPORTED.

It works without the depth buffer.

What is wrong with the depth buffer?


// create FBO
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
// create color texture & attach to FBO
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXX, TEXY, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenerateMipmapEXT(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, tex, 0);
// initialize depth renderbuffer
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, TEXX, TEXY);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);
// create depth texture & attach to FBO

glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, TEXX, TEXY, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, depth_tex, 0);

My drivers are 76.45 and my card is a GF6800GT.