Problem creating a single-component float texture

This one works:
::glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R32_NV, width, height, border, GL_RGBA, GL_UNSIGNED_BYTE, 0);

But this one does not (INVALID_ENUM):
::glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R32_NV, width, height, border, GL_R, GL_FLOAT, 0);

How come?

I’m using an NVIDIA GeForce 6600, driver 81.95.

Already tried GL_RED instead of GL_R.

Originally posted by andras:
This one works:
::glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R32_NV, width, height, border, GL_RGBA, GL_UNSIGNED_BYTE, 0);

But this one does not (INVALID_ENUM):
::glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R32_NV, width, height, border, GL_R, GL_FLOAT, 0);

I’m using an NVIDIA GeForce 6600, driver 81.95.

How come you need to use GL_TEXTURE_RECTANGLE_NV? Why not use GL_TEXTURE_2D? NPOT is automatic on 6-series cards…

Yup, that’s what I’ve thought too, but sadly, no:

Textures with a base internal format of FLOAT_R_NV, FLOAT_RG_NV, FLOAT_RGB_NV, and FLOAT_RGBA_NV are known as floating-point textures. Floating-point textures are only supported for the TEXTURE_RECTANGLE_NV target. Specifying a floating-point texture with any other target will produce an INVALID_OPERATION error.
EDIT: Of course, I could use the ARB float formats, but those cannot be bound to an FBO… at least not the one- or two-component ones.

Two things come to mind quickly…

You’re specifying GL_UNSIGNED_BYTE in the first call, but GL_FLOAT in the second call for the data type. Might that be the cause?

Secondly, I’ve only used 16-bit float textures before; have you tried 16 bits for that single channel? It might work!
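For reference, a minimal sketch of what the 16-bit variant might look like, assuming the GL_FLOAT_R16_NV token from NV_float_buffer (GL_RED is used as the client pixel format; see the resolution further down):

// Hedged sketch: half-float, single-channel variant of the same call.
// GL_FLOAT_R16_NV comes from NV_float_buffer; border is 0 because
// rectangle textures do not support borders.
::glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R16_NV,
               width, height, 0, GL_RED, GL_FLOAT, 0);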

And also, have a look around the NVIDIA developer area. I found an article a while back saying which float formats are accelerated in hardware… it’s not all of them. In fact, I thought 32-bit RGBA floats weren’t supported in hardware at all, but I must be wrong, since your first call works :)

Edit: And I think TEXTURE_RECTANGLE_NV (or _ARB) doesn’t support borders at all…

I just had to use GL_RED instead of GL_R; Chuck0 was right. I have to say that this part of the spec is a little confusing.
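For completeness, a minimal sketch of the call that works (border is 0, since rectangle textures don’t support borders, as noted above):

// GL_RED is the correct single-channel client format; GL_R is not a
// valid pixel-format enum, which is what triggered the INVALID_ENUM.
::glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R32_NV,
               width, height, 0, GL_RED, GL_FLOAT, 0);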

Not as confusing as floating-point texture support, though. I just wanted a simple one-channel float texture. I can’t use the ARB formats, because they won’t work with an FBO. If I use the NV format, I have to use TEXTURE_RECT. But if I use TEXTURE_RECT, I don’t get GL_REPEAT! Oh boy… And to figure all this out, I had to dig through all these extensions, because these limitations are always hidden somewhere under the issues paragraph…

Will one-component floats be fully supported any time soon, or should I just forget about them?

Single-channel (float) textures as FBO attachments are not part of the spec; this will hopefully be defined in a follow-up extension. NVIDIA supports them nonetheless, which comes in very handy in my applications. After some lengthy discussions with the NV driver guys, I came to agree that supporting luminance FBO textures only through their vendor-specific extension does indeed make sense from the point of view of the GL extension mechanism. The drawback is, of course, that the NV_float_buffer extension is texrect(arb|nv|ext) only, with all the known drawbacks.
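For illustration, a minimal sketch of attaching such a texture to an FBO via EXT_framebuffer_object (variable names are hypothetical; this assumes the NV driver accepts the single-channel float rectangle texture as a color attachment, as described above):

GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_R32_NV,
             width, height, 0, GL_RED, GL_FLOAT, 0);
// Rectangle textures have no mipmaps, so use non-mipmapped filtering.
glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_RECTANGLE_NV, tex, 0);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // handle the incomplete/unsupported framebuffer case
}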

Hmm, can I read texture rectangles in GLSL at all? I mean, we only have sampler2D and texture2D()! Does this work with texture rect??

sampler2D and texture2D() will work with NPOT textures. For rectangle textures (whether NV or ARB) use samplerRECT and textureRECT(…), and similarly for the other variants.

Thanks, this works! They’re called sampler2DRect and texture2DRect, though.
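A minimal GLSL sketch of the kind of fragment shader this enables (note that rectangle samplers take unnormalized texel coordinates, 0..width and 0..height, rather than 0..1):

#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect tex;
void main()
{
    // Fetch with unnormalized coordinates; .r holds the single float channel.
    float value = texture2DRect(tex, gl_TexCoord[0].st).r;
    gl_FragColor = vec4(value);
}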

One more question: is vertex texture fetch accelerated from texture rectangles? I get horrible performance (< 1 fps) trying to read one with texture2DRect in the vertex shader! :(

<WARNING, INCOMING RANT> ;)
Hah, now I can’t even attach a LUMINANCE8 or LUMINANCE8_ALPHA8 texture to an FBO? I mean, come on! We really need the ARB to wrap up the FBO spec, because in its current form it’s just way too restricted! Yeah, I know it’s not obvious how to render into luminance, but in this shader era it’s not really “luminance”, it’s just a component like anything else; it’s just a name… If they simply called it “red”, would it work?
Seriously, when is SuperBuffers going to arrive?
And we also have to get rid of all the legacy stuff that’s holding us back, like the notion of luminance, which only makes sense in the fixed-function pipeline… Just look at DirectX 10: they have ripped out the fixed pipeline altogether!
</RANT OVER>

Originally posted by andras:
One more question: is vertex texture fetch accelerated from texture rectangles? I get horrible performance (< 1 fps) trying to read one with texture2DRect in the vertex shader! :(
Rectangle vertex textures are not hardware accelerated on GeForce 6/7 GPUs; you need to use 1D or 2D textures.
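In case it helps, a minimal sketch of a vertex-texture-friendly setup under those constraints: a GL_TEXTURE_2D float texture using GL_LUMINANCE_FLOAT32_ATI from ATI_texture_float with nearest filtering, which NVIDIA’s vertex-texture material describes as the hardware path on these GPUs (treat the exact format choice as an assumption to verify; tex, width, height, and data are hypothetical):

// Sketch: 2D single-channel 32-bit float texture for vertex texture fetch.
// GeForce 6/7 vertex texturing also requires non-mipmapped GL_NEAREST filtering.
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_FLOAT32_ATI,
             width, height, 0, GL_LUMINANCE, GL_FLOAT, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);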