Grayscale rendering

Hi,

I am looking for the fastest way to switch from color rendering to grayscale (and back). Is there a way to do it on the fly, without reloading converted textures? I am not looking for perfect grayscale, only to simulate the effect of losing colors.
Thanks for the help,

JIPO

There has been a similar discussion in the forum about a week ago.

Yes, but that discussion was mainly about rebuilding textures… I am looking for a solution with additional blending pass(es) to simulate a noir atmosphere, not real grayscale.
Or is there a way to change display settings, like color saturation and others?

I started investigating post-processing methods with blends etc., but I haven't gotten anywhere, unfortunately. There could be a way, maybe (I haven't really thought it through yet), if we could map components differently (like with the glPixelMap function) when writing into the frame buffer: the R component of the source into the G component of the destination, for example.

The ARB_imaging stuff could perhaps be of some use… I noticed that NVIDIA's latest drivers (7.xx) support it (on GeForce2 at least, I don't know about the others). However, I don't know whether it's hardware accelerated. I plan to find out soon, though.

Yes, in theory, you could use ARB_imaging to transform the framebuffer into grayscale. You’d use CopyPixels of the entire screen (or at least the region in question) with a color matrix, and with ReadBuffer and DrawBuffer both GL_BACK, before SwapBuffers. You could put the NTSC R, G, B scaling values in the appropriate matrix elements. You could also select the “degree” of grayscale that you wanted, if you want only a partial grayscale effect.
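In case it helps to see the arithmetic: the full-grayscale color matrix has the NTSC weights in every row, and a "degree" of grayscale is just a blend between that matrix and the identity. Here is a CPU-side sketch of that math (plain C++ for illustration; `grayscaleMatrix` is a made-up helper, not an ARB_imaging call):

```cpp
#include <cassert>
#include <cmath>

// NTSC luminance weights, as mentioned above.
static const float WR = 0.299f, WG = 0.587f, WB = 0.114f;

// Apply a grayscale color matrix to one RGB pixel.
// t = 0 leaves the color untouched, t = 1 is full grayscale;
// intermediate values give the partial-grayscale effect.
// This is equivalent to the matrix (1-t)*I + t*M, where every
// row of M is (WR, WG, WB).
void grayscaleMatrix(float t, const float in[3], float out[3])
{
    float lum = WR * in[0] + WG * in[1] + WB * in[2];
    for (int i = 0; i < 3; ++i)
        out[i] = (1.0f - t) * in[i] + t * lum;
}
```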

Don’t blame me if it’s really slow. (Well, actually, if there was anyone to blame, it’d be me, seeing as I implemented it, so you can blame me, and you are fully within your rights to blame me, but I refuse to accept your blame.)

A faster way to implement it might be to use CopyTexSubImage to copy that portion of the buffer into a texture, then to render using that texture, using NV_register_combiners to implement a 3x3 color matrix. It’s actually a pretty efficient way to use the combiners, since it takes advantage of a lot of per-pixel math!

General combiner 0:
A dot B computes texture dot (Crr,Cgr,Cbr) --> Spare0
C dot D computes texture dot (Crg,Cgg,Cbg) --> Spare1

General combiner 1:
A dot B computes texture dot (Crb,Cgb,Cbb) --> Spare0
C*D computes Spare0 * (1,0,0) --> Texture0 (this reads the Spare0 written by combiner 0)

Final combiner:
E*F computes (0,1,0) * Spare1
A*B + (1-A)*C + D computes (0,0,1)*Spare0 + (1,1,0)*E*F + Texture0 = (0,0,1)*Spare0 + (0,1,0)*Spare1 + Texture0

You need 6 constant colors – three sets of matrix coefficients and three primary colors. Two can go in the constant0 and constant1 registers, two can go in the primary and secondary interpolated colors, one can go in the fog color, and the last can be put in a 1x1 texture and bound to the second texture unit. So it just BARELY fits.
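To check that the setup above really amounts to a 3x3 color matrix, here is a CPU emulation of the same dataflow (plain C++; `combinerColorMatrix` and `dot3` are illustrative names, not NV_register_combiners API — the register names in the comments follow Matt's description):

```cpp
#include <cassert>

// Dot product of two RGB vectors; in the combiners the result is
// replicated across RGB, so a single float stands in for it here.
static float dot3(const float a[3], const float b[3])
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Emulate the two general combiner stages plus the final combiner.
// M is a 3x3 color matrix, row-major: row i produces output channel i.
void combinerColorMatrix(const float M[3][3], const float tex[3], float out[3])
{
    // General combiner 0: two dot products into Spare0 and Spare1.
    float spare0 = dot3(tex, M[0]);             // row for output R
    float spare1 = dot3(tex, M[1]);             // row for output G
    // General combiner 1: CD reads the old Spare0 and masks it into
    // the red channel of Texture0; AB overwrites Spare0 with row 3.
    float texture0[3] = { spare0, 0.0f, 0.0f };
    spare0 = dot3(tex, M[2]);                   // row for output B
    // Final combiner: EF = (0,1,0)*Spare1, then
    // (0,0,1)*Spare0 + (1,1,0)*EF + Texture0.
    out[0] = texture0[0];
    out[1] = spare1;
    out[2] = spare0;
}
```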

Note that we can’t use register combiners to implement the color matrix internally because the combiners have a [-1,1] range limitation and insufficient precision (pixel path operations are generally assumed to be performed with floating point precision).

  • Matt

Oops, typo, it should be 0,0,1*Spare0 on the last line of the combiners description.

I can’t edit posts still, argh.

  • Matt

Hmm… I think I was a bit misguided about ARB_imaging. I got pretty excited about it at first because I was under the impression that it enabled me to perform glPixelTransfer-type stuff on fragments as they are drawn, instead of having to use glCopyPixels() on the entire framebuffer.

I understand perfectly well that this isn’t possible for things like the convolution filters, but I think it’d be great if we could do simple per-fragment color transformations without having to resort to some kind of postprocess read/modify/write scheme. (Hint, hint)

EDIT: Matt, what’s this about not being able to edit posts?

[This message has been edited by Tom Nuydens (edited 11-23-2000).]

Since we are talking "slow", why not use my "do it real slow" technique? Expect to be impressed by the speed… or rather the lack of it; those who are afraid of 1/x kinds of FPS rates should skip this post.

Here we go:

// Your func that does all the drawing.
displayObjects();

GLint depth_enabled;
GLint fog_enabled;
GLint texture2d_enabled;
GLint texture1d_enabled;
// This is to speed things up a bit; you can remove it if you like it sloooow.
if ((depth_enabled = glIsEnabled(GL_DEPTH_TEST)))
    glDisable(GL_DEPTH_TEST);
if ((fog_enabled = glIsEnabled(GL_FOG)))
    glDisable(GL_FOG);
if ((texture2d_enabled = glIsEnabled(GL_TEXTURE_2D)))
    glDisable(GL_TEXTURE_2D);
if ((texture1d_enabled = glIsEnabled(GL_TEXTURE_1D)))
    glDisable(GL_TEXTURE_1D);

GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
GLubyte *pixels = new GLubyte[viewport[2] * viewport[3]];
GLint saved_matrix_mode;
glReadBuffer(GL_BACK);
glDrawBuffer(GL_BACK);

// Make sure rows are tightly packed, in case the width is not a multiple of 4.
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

// Why read the green component … but why ???
glReadPixels(0, 0, viewport[2], viewport[3], GL_GREEN, GL_UNSIGNED_BYTE, pixels);
// The usual blabla
glGetIntegerv(GL_MATRIX_MODE, &saved_matrix_mode);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0.0, (GLdouble) viewport[2], 0.0, (GLdouble) viewport[3]);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity(); // the raster position goes through the modelview matrix too
glRasterPos2i(0, 0);

// Here is the tricky bit.
glDrawPixels(viewport[2], viewport[3], GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);

glPopMatrix();            // restore the modelview matrix
glMatrixMode(GL_PROJECTION);
glPopMatrix();            // restore the projection matrix
glMatrixMode(saved_matrix_mode);

if (depth_enabled)
    glEnable(GL_DEPTH_TEST);
if (fog_enabled)
    glEnable(GL_FOG);
if (texture2d_enabled)
    glEnable(GL_TEXTURE_2D);
if (texture1d_enabled)
    glEnable(GL_TEXTURE_1D);

delete [] pixels; // don't leak the readback buffer every frame

glutSwapBuffers(); // Or whatever API you like.

Et voilà, with that you can at least create still images, and you don't even need to use extensions!
Yes, it works … I tested it …
I use the green component as the source, but blue or red may be more suitable depending on the image. I tried luminance as the source, but the resulting image was much too bright (I don't know why, but most of the image was white; maybe you can help?).
If you think it sucks … beh, you’re right !
You can send some insults here : moz@ifrance.com

I don’t think that it is mathematically/physiologically correct to use any single color component as a luminance component. But it’s easier to understand than what matt proposed…

Uhmmm, what do I have to read to get brilliant ideas like yours, Matt? That is, I use matrices, calculate with them, etc., but I don't know the background. Is there anything (human-)readable on the web?

You’re right Michael, it’s not correct to use a single color component as a luminance component (though it can give quite good results, especially with green which is the most weighted component when you compute the luminance).
As I said :

Originally posted by Moz:
I tried with luminance as source but the resulting image was much too bright (I don’t know why but most of the image was white, maybe you can help ?).

So if any of you guys know why, please tell me !
Actually, the framerate with my "technique" was not so bad (on a PIII 500 with software rendering under WinNT it runs at over 1 FPS at 1024x768 fullscreen, and almost smoothly in a small window, a very small one though). Maybe with a nice GeForce2 Ultra… there will still be the cost of moving the "luminance" buffer through system RAM every frame, but it might be quite fast.

Moz

I get 3.33 FPS “w/ colors” and 1.12 FPS “grayscale” in a maximized window w/ screen resolution of 1024x768.

It simply won’t let me edit posts – it claims the password is wrong every time.

ARB_imaging is strictly for the pixel path. If you want to perform math on individual pixels, well, NV_register_combiners is all that we can offer today. In the future, we will offer more. I doubt we will offer a per-pixel 4x4 floating-point color matrix explicitly, because no one would use it and it would be a big waste of gates, but as you can see, you can already implement a 3x3 fixed-point color matrix, and future extensions will allow yet more complicated math.

If you aren’t doing any complicated shading operations, just either a texture or a single interpolated color, you can do this without any CopyTexImage or ReadPixels-type stuff – just use that register combiners setup while doing your regular rendering.

ReadPixels of GL_LUMINANCE will actually read back R+G+B as your luminance value, causing saturation. You can use PixelTransfer({RED,GREEN,BLUE}_SCALE, value) to set up weights for R/G/B. If you set them right, you could read back “correct” luminance.
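In concrete numbers (a CPU illustration of Matt's point, not GL code; both function names are made up): summing unweighted channels clamps bright pixels to white, which matches the "much too bright" result above, while NTSC weights sum to 1 and stay in range:

```cpp
#include <algorithm>
#include <cassert>

// What ReadPixels of GL_LUMINANCE effectively does: R+G+B, clamped to [0,255].
int luminanceUnweighted(int r, int g, int b)
{
    return std::min(r + g + b, 255);
}

// With glPixelTransferf(GL_RED_SCALE, 0.299f) and friends applied first,
// the sum becomes the NTSC luminance and no longer saturates.
int luminanceWeighted(int r, int g, int b)
{
    return std::min(static_cast<int>(0.299f * r + 0.587f * g + 0.114f * b), 255);
}
```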

I recommend the register combiners technique as the only real way to do this without a massive performance drop. Ranking the options proposed so far from slowest to fastest:

1. CopyPixels and ARB_imaging color matrix
2. ReadPixels of luminance with channel weightings, DrawPixels of luminance
3. ReadPixels of one channel, DrawPixels of luminance
4. CopyTexSubImage, render w/ register combiners 3x3 color matrix
5. Render directly w/ register combiners 3x3 color matrix in one pass

  • Matt

I assume one needs a GeForce card to have register combiners. Am I right? I’m a bit out-of-date with my hardware here, as you remember…

Moz: I had the same idea, but I don't want SUCH A SMALL frame rate… the game must run fast :-))) So the question still stands: how to do it without buffer reads and without vendor-specific extensions? No more 3D-card-specific code :-)
I got some interesting results, like psychedelic rendering modes, but not grayscale. Thanks a lot anyway,

JIPO

I get small framerates only because I don't have a hardware accelerator (and actually the FPS ratio between color rendering and grayscale may be even bigger with an accelerator).
By the way, thanks Matt. I was thinking of using glPixelTransfer but didn't know how to do it.

Moz

TNT 2 has register combiners.

No, they don't.

They have the NV_texture_env_combine4 extension, which isn’t nearly as versatile.

I know it has that, but I'd swear I remember seeing a check mark next to the NV_register_combiners extension in some PDF file I downloaded a while back.

Nope, only GeForce and up have register combiners. I checked the NVIDIA extension document just to make sure.