float RGB unsupported as FBO attachment?

I’m just trying to attach a GL_RGB32F_ARB texture to the FBO, and I get a GL_FRAMEBUFFER_UNSUPPORTED_EXT error. I’ve just looked at an nVidia slide stating that acceleration for ARB float RGB formats is working. Is it? I can’t use the NV formats, because they have to be TEXTURE_RECT, which is seriously limited, and is also not standard…

EDIT: same thing happens using GL_RGB_FLOAT32_ATI, but it works with GL_FLOAT_RGB32_NV (which I can’t use)…

This is on an nVidia GF6600 with 81.95 drivers.

Thanks,

Andras

Try with RGBA. 2- and 3-channel textures/framebuffers/etc. always seem to be the last ones to get features like floating point filtering, blending, pbuffer formats, etc., because they’re comparatively less used.

No, it does not work with RGBA either. I’m still hoping that I’m doing something wrong, but it’d help if someone could confirm or deny support for attaching ARB NPOT float RGB(A) textures to FBOs…

Well, actually, there’s confirmation here: http://download.nvidia.com/developer/presentations/2005/SIGGRAPH/fbo-status-at-siggraph-2005.pdf
But then why do I get a framebuffer unsupported error??

You should not be getting GL_FRAMEBUFFER_UNSUPPORTED_EXT for GL_RGB32F_ARB unless you have linear filtering enabled for that texture.

I’ll try that here in a sec, but could you explain why filtering matters when a texture is bound to an FBO? I always thought filtering is only used when sampling the source texture…

You need to do the following steps:

  • glGenFramebuffersEXT

  • glBindTexture

  • Set texture parameters: you must set GL_TEXTURE_WRAP_S and T, and the GL_TEXTURE_MIN and MAG filters. If you don’t, the FBO will fail.

  • glTexImage2D

  • if you are creating a ‘depth’ texture, you must write glDrawBuffer(GL_NONE); glReadBuffer(GL_NONE); (again, not explicit in the spec, else it will fail).

  • glFramebufferTexture2DEXT.

With float textures and NPOT textures, WRAP must be GL_CLAMP_TO_EDGE.
With float textures, texture filtering must be disabled (GL_NEAREST).

Hope that helps
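To make the sequence concrete, here is a minimal C sketch of those steps for a float color texture. This assumes a current GL context with EXT_framebuffer_object and ARB_texture_float available; the 512×512 size is an arbitrary example.

```c
/* Minimal FBO setup sketch for a GL_RGB32F_ARB color texture.
   Assumes a current GL context and loaded EXT_framebuffer_object
   entry points. */
GLuint fbo, tex;
const int width = 512, height = 512; /* arbitrary example size */

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* Parameters must be set explicitly; NPOT/float textures need
   CLAMP_TO_EDGE wrapping, and float textures need GL_NEAREST
   filtering on this hardware. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* Allocate storage only; no initial data needed for a render target. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F_ARB, width, height, 0,
             GL_RGB, GL_FLOAT, NULL);

glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

/* Always verify completeness before rendering into the FBO. */
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
    GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* handle GL_FRAMEBUFFER_UNSUPPORTED_EXT or other errors here */
}
```

For a depth-only texture, you would additionally call glDrawBuffer(GL_NONE) and glReadBuffer(GL_NONE) before the completeness check, as the list above notes.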

Thanks, it works now. Do you know why we need to set these parameters? I’m just curious.

I guess that if you don’t specify the read and draw buffers, GL will try to read from and draw to those buffers, which aren’t set for the FBO, as you just (I guess) supplied a texture attachment and no RBO attachment, hence leading to an improperly initialized FBO.
I hope someday the driver will be able to detect such a situation (only RTT enabled), so that it can automagically disable reading and drawing to/from the logical buffers.
Anyway, automagic may look nice, but it’s not a good thing.

Also note that sometimes, even if an FBO hasn’t been completed, it could still work. But this is not a good way to do things, as it turns out.

Hmm, it seems to work even if I don’t set the read and draw buffers.
What I must set, though, is the filtering. What does it mean when I set the FBO-attached texture’s mag and min filters to linear? It seems to work with non-FP textures, but what does it do? I understand that you might read a value from the FBO, e.g. while blending, but you never sample from between two texels, so I don’t see why filtering is relevant…

I’m not sure I understand correctly what you asked. Anyway, let’s give it a try. Setting texture parameters is important because it tells GL how to draw the texture and how to fetch its values, depending on the use. A texture usually won’t be drawn at its exact size, perfectly facing the camera (as in an ortho projection), so GL has to know how to compute each pixel from the actual values stored in the texture. That’s what magnification and minification do. Setting them to linear means linear interpolation is used when the texture is drawn at different sizes, depending on the distance to the camera.
Texture parameters must be set on the texture creation.

I’m not quite sure about what you’re saying either, so I guess we are even :wink:
I do understand what min/mag filtering does, when I use the texture to render from (so I’m reading from it eg. in a shader). What I do not understand is when I attach a texture to an FBO’s color attachment, then I am only writing into that texture, and do not read it. It’s almost like uploading an image with glTexSubImage, and that one obviously does not care about filtering, right?

One doesn’t attach a texture to an fbo color attachment, one attaches a texture to an fbo directly.

Also, having texture parameters defined is an important thing, and GL seems not to define default parameters for them. Just because of the things I told you in my last reply: textures, 99.99999% of the time, won’t be used like that. So there must be a predefined way to interpolate the texels so the fragments are usable. And this is true not only for reading back from the texture, but also for writing to it, even if the viewport has the same size (because a pixel/texel is also an interpolation). The maths do that work, and there must be a way to turn the maths results into texels. Just think of a line that’s not parallel to the viewport axes.

Originally posted by jide:
One doesn’t attach a texture to an fbo color attachment, one attaches a texture to an fbo directly.
This is simply not true. Forgive me, I don’t want to sound rude, but have you ever used FBOs? You can’t just attach a texture to an FBO “directly”; you have to specify the attachment point, which can be color, depth or stencil. There are even multiple color attachment points, to enable MRTs (multiple render targets).
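For instance, the attachment point is an explicit argument to glFramebufferTexture2DEXT. A sketch, assuming an FBO is bound and that the texture names colorTex0, colorTex1 and depthTex are hypothetical textures created earlier:

```c
/* Attach a texture to the first color attachment point */
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorTex0, 0);

/* A second color attachment, for multiple render targets */
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                          GL_TEXTURE_2D, colorTex1, 0);

/* The depth attachment point is separate from the color ones */
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);

/* With ARB_draw_buffers, select which color attachments a shader
   writes to (this is what enables MRT rendering) */
const GLenum bufs[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffersARB(2, bufs);
```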

Also, having texture parameters defined is an important thing, and GL seems not to define default parameters for them.
This is also incorrect. OpenGL is a state machine. It’s always in one state or another, never stateless… The default setting for the mag filter is GL_LINEAR, and for the min filter it is GL_NEAREST_MIPMAP_LINEAR, as every OpenGL programmer learns when first trying to render a texture that has no mipmap levels.
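You can query these defaults directly on a freshly generated texture. A minimal sketch, assuming a current GL context:

```c
/* Generate a new texture object and read back its filter state
   without setting any parameters first. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

GLint minf, magf;
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, &minf);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, &magf);
/* Per the GL spec, for a new texture object:
   minf == GL_NEAREST_MIPMAP_LINEAR, magf == GL_LINEAR */
```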

Just because of the things I told you in my last reply: textures, 99.99999% of the time, won’t be used like that. So there must be a predefined way to interpolate the texels so the fragments are usable. And this is true not only for reading back from the texture, but also for writing to it, even if the viewport has the same size (because a pixel/texel is also an interpolation). The maths do that work, and there must be a way to turn the maths results into texels. Just think of a line that’s not parallel to the viewport axes.
Your explanation still doesn’t make much sense to me. While it is possible to sample a point that covers multiple texels (minification), or one that falls between texels (magnification), it is not possible to write between pixels, so filtering itself does not make sense for writing.
That said, there must be a reason, and I’m guessing that at some point, the attached texture is actually sampled, I just don’t know where and why would that happen.

Originally posted by execom_rt:
- if you are creating a ‘depth’ texture, you must write glDrawBuffer(GL_NONE); glReadBuffer(GL_NONE); (again, not explicit in the spec, else it will fail).
This is mentioned in example 7 in the spec.

No, I never read the FBO specs :slight_smile:

I thought you meant a texture attached to a renderbuffer object.

For the second point, I don’t know what to say. So why does it simply not work if you don’t specify the texture parameters?

And for the last point, I still don’t know what to say.

Originally posted by jide:
For the second point, I don’t know what to say. So why does it simply not work if you don’t specify the texture parameters?
I did specify them, but they were set to linear filtering. And jra101 said that floating point textures have to use nearest filtering when attached to an FBO. Now I just want to know why! :slight_smile:

Okay I think I got your point now.

It looks too confusing from my point of view… I think I did it a much easier way; I don’t have the source at hand right now.

Question - if you use linear filtering with texture rectangles, then why don’t you switch to standard textures with ARB_texture_non_power_of_two or something… Only GF6+ supports linear FP texture filtering, and those GPUs also support ARB NPOT textures.

The fallback path would be texture rectangle + FP16 with nearest filtering…

I am using standard ARB NPOT float RGBA textures (see first post in thread). And linear filtering works on them, when they are used as a source, but I have to set the filter to nearest, when attaching to the FBO…