render single channel floating point image to quad



asael
12-02-2011, 04:14 AM
Please forgive me if this has been asked before. I've spent quite a bit of time reading the documentation and searching around today about this issue.

I want to take an array of floats that I've created CPU-side and render them to a quad on screen. I'm using SDL on Windows, with GLEW handling my extensions, and am targeting OpenGL 2.0. Before I dive into what I've tried, I want to make it clear that I'm just looking for a way to render a single-channel float array. Using RGBA formats would not be desirable, as it would mean writing each value repeatedly, once per channel.

I've tried binding my texture with different floating-point internal formats like GL_LUMINANCE32F_ARB and GL_FLOAT_R_NV, but uploading with glTexSubImage2D doesn't work properly: it seems to only pick up one out of every three values. However, it works fine with glDrawPixels.

I create my texture like this:


glGenTextures(1, &tex_id);
glBindTexture(GL_TEXTURE_RECTANGLE, tex_id);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_LUMINANCE32F_ARB, TILE_SIZE, TILE_SIZE, 0, GL_LUMINANCE, GL_FLOAT, render_target);

I've created a couple of PBOs for buffering


glGenBuffers(2, pbo_ids);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo_ids[0]);
glBufferData(GL_PIXEL_UNPACK_BUFFER, TEXTURE_SIZE, 0, GL_STREAM_DRAW);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo_ids[1]);
glBufferData(GL_PIXEL_UNPACK_BUFFER, TEXTURE_SIZE, 0, GL_STREAM_DRAW);
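For reference, the PBO sizing works out to one float per pixel for a single-channel upload. A sketch of the math (plain C, no GL needed; the TILE_SIZE value here is just illustrative, since the actual constants aren't shown):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative tile dimension -- not the real constant from the code. */
enum { TILE_SIZE = 256 };

/* One float per pixel for a GL_LUMINANCE / GL_RED upload; sizing the
   buffer for four channels (RGBA) would be 4x this. */
static const size_t TEXTURE_SIZE = TILE_SIZE * TILE_SIZE * sizeof(float);
```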


I copy from a PBO using:


glBindTexture(GL_TEXTURE_RECTANGLE, tex_id);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo_ids[pbo_index]);
glTexSubImage2D(GL_TEXTURE_RECTANGLE, 0, 0, 0, TILE_SIZE, TILE_SIZE, GL_LUMINANCE, GL_FLOAT, 0);


The working glDrawPixels call is simple:


glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo_ids[pbo_index]);
glDrawPixels(TILE_SIZE, TILE_SIZE, GL_LUMINANCE, GL_FLOAT, 0);

There are a couple of other constraints. I'd prefer not to normalize my values on the CPU. I thought I could use one of the floating-point internal formats supported by my GeForce 8000GT, since the OpenGL SuperBible says they won't clamp values.
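To be concrete, the per-value CPU pass I'm trying to avoid would look something like this sketch (illustrative names, not my actual code):

```c
#include <assert.h>
#include <stddef.h>

/* Rescale values into [0,1] on the CPU -- the extra per-value pass
   that an unclamped float internal format should make unnecessary. */
static void normalize_cpu(float *data, size_t n, float lo, float hi)
{
    const float range = hi - lo;
    for (size_t i = 0; i < n; ++i)
        data[i] = (data[i] - lo) / range;
}
```

With a float internal format that doesn't clamp, that loop should be unnecessary.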

Anyway, I would really appreciate any help. I've seen a couple of similar posts to this but they've all dealt with RGBA formats. Any ideas?

ZbuffeR
12-02-2011, 09:52 AM
Sounds like a driver bug, especially if glDrawPixels works as expected.
1) Do you really need GL_TEXTURE_RECTANGLE?
AFAIK your card should handle GL_TEXTURE_2D directly, even with non-power-of-two sizes. Worth a try.
2) Does replacing GL_LUMINANCE with GL_RED in the glTexSubImage2D call make a difference?

asael
12-02-2011, 01:19 PM
I gave both of your suggestions a try, but no dice. I used GL_TEXTURE_RECTANGLE over GL_TEXTURE_2D because I read it would be faster for unfiltered (GL_NEAREST) textures. Unfortunately, switching to GL_TEXTURE_2D doesn't change anything.

I've attached a zip with some source that reproduces the flaw.

BionicBytes
12-02-2011, 04:18 PM
You could try GL_R32F as the internal format and GL_RED as the source format. OpenGL 2.1 always had trouble providing single-channel floating-point formats that are natively supported and accelerated; that's why the later ARB_texture_rg extension and OpenGL 3 were so welcome.

asael
12-02-2011, 06:02 PM
No luck with GL_R32F and GL_RED either. If OpenGL 3 properly supports single-channel float textures, then maybe I should consider using it. I had assumed OpenGL 2 would be better supported and therefore worth figuring out, but it looks like I'll just have more headaches trying to work around platform-specific bugs.

While I'm at it, is there any reason to use 3 over 4?

Alfonse Reinheart
12-02-2011, 06:23 PM
While I'm at it, is there any reason to use 3 over 4?

It's all about how wide a net you want to cast. There is a lot of 3.x hardware out there, and there's a lot less 4.x hardware. Also, if you're not using the hardware features of 4.x, then why restrict your application to 4.x only?