PDA

View Full Version : Downscaled Buffers



noncopyable
12-06-2007, 05:46 AM
Hello.

I have trouble understanding one of the statements in Masaki Kawase's "Real-Time High Dynamic Range Image-Based Lighting" presentation (pages 50/51), which shows five downscaled buffers and their composition.

Either I don't understand the meaning of "downscale"
(my English is not that good, and I am not familiar with some technical terms), or those images are not downscaled versions of the original image.

What I am saying is, basically, if I take the original image and downscale it using an image editor,
then magnify it back to the original size, I don't get those results. Could you help me understand what is going on? :)
presentation (http://www.daionet.gr.jp/~masa/column/2004-04-04.html)

Thank you.

Ysaneya
12-06-2007, 05:57 AM
The obvious thing is, it's done in HDR space. So of course using an image editor like Photoshop (which doesn't work in HDR space) will lead to different results.

ZbuffeR
12-06-2007, 06:45 AM
Maybe you missed the blur which is done at each downscale separately, as explained on page 49?

On page 50 the downscaled textures are magnified with nearest filtering up to the original size. Same on page 51, but magnified with a bilinear filter.

He applies a small-width blur to the original texture, then downscales it, applies the same blur again, downscales, blurs, and so on.

Then he adds all these textures together, and the result is the image on page 52.

I am not sure if he downsamples the original image or the one already blurred. Anyway, the technique seems to work well; I tested it quickly with Gimp, with the low-range limitation of course. You should use CinePaint to prototype it correctly.
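The blur/downscale chain described above can be sketched in plain Python. This is a hypothetical sketch only: a single-channel image as nested float lists, and a simple 3x3 box filter standing in for the small-width Gaussian; the presentation itself does not give this code.

```python
def box_blur(img):
    """Small-width blur; a 3x3 box filter stands in for Kawase's small Gaussian."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def downscale_half(img):
    """Halve each dimension by averaging 2x2 blocks."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

def build_blur_chain(img, levels):
    """Blur, downscale, blur again, ... as described on pages 49-50.
    Returns the list of progressively smaller, blurred buffers."""
    chain, cur = [], img
    for _ in range(levels):
        cur = box_blur(cur)
        chain.append(cur)
        cur = downscale_half(cur)
    return chain
```

Each entry of the chain is one of the downscaled buffers; magnifying them all back to the original size and adding them up would give the kind of composite shown on page 52.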

noncopyable
12-06-2007, 06:53 AM
The obvious thing is, it's done in HDR space. So of course using an image editor like Photoshop (which doesn't work in HDR space) will lead to different results.

Those buffers are the step after extracting the high-luminance regions (bright pass), normalized to 0-1; the black areas are the regions below the high-luminance threshold. At this point it has nothing to do with HDR space, does it? If so, I hope you have an example downscale shader that gives the results in the presentation.
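For reference, a common bright-pass formulation looks like the sketch below. Whether it matches the one used in the presentation is an assumption on my part, and the threshold value is purely illustrative.

```python
def bright_pass(luminance, threshold=0.8):
    """Keep only the part of the signal above the threshold and
    renormalize it to 0-1. This is a common bright-pass formulation,
    not necessarily the exact one from the presentation, and
    threshold=0.8 is an arbitrary illustrative value."""
    return max(luminance - threshold, 0.0) / (1.0 - threshold)
```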

Thank you.

noncopyable
12-06-2007, 07:06 AM
Maybe you missed the blur which is done at each downscale separately, as explained on page 49?

On page 50 the downscaled textures are magnified with nearest filtering up to the original size. Same on page 51, but magnified with a bilinear filter.


That blur he mentioned on page 49 is the difference between pages 50 and 51.
On both pages 50 and 51 he magnified the results with a bilinear filter; the only difference is that he applied the blur before magnifying the images.

Could you tell me about the steps you did in Gimp, if you got the same results?

Thank you.

zeoverlord
12-06-2007, 07:47 AM
The obvious thing is, it's done in HDR space. So of course using an image editor like Photoshop (which doesn't work in HDR space) will lead to different results.

Those buffers are the step after extracting the high-luminance regions (bright pass), normalized to 0-1; the black areas are the regions below the high-luminance threshold. At this point it has nothing to do with HDR space, does it? If so, I hope you have an example downscale shader that gives the results in the presentation.

Thank you.

You're right, it has little to do with HDR.
The technique described on pages 49-52 is really a slightly more advanced variant of regular mipmap bloom, and it is usually used as fake HDR.
Naturally it's a great complement to HDR, but it's not required.

Ysaneya
12-06-2007, 10:05 AM
Those buffers are the step after extracting the high-luminance regions (bright pass), normalized to 0-1; the black areas are the regions below the high-luminance threshold. At this point it has nothing to do with HDR space, does it?

That was my first idea when I saw the pictures.

The reason is: notice the white spot in the middle of the circle. As the buffer gets downsampled and blurred, the white area becomes bigger and bigger.

When blurring in low range, averaging the pixels will tend to make the neighbouring pixels darker.
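That effect can be seen with a three-pixel average (the pixel values here are hypothetical, just to illustrate the point):

```python
# Averaging a 1x3 neighbourhood around a bright pixel.
hdr = [0.0, 8.0, 0.0]              # HDR: the spot is 8x brighter than white
ldr = [min(v, 1.0) for v in hdr]   # same scene clamped to 0-1 first

hdr_avg = sum(hdr) / 3.0           # ~2.67: still clips to full white on display
ldr_avg = sum(ldr) / 3.0           # ~0.33: the blurred spot turns dark grey
```

So in HDR space the blurred spot stays saturated white and appears to grow, while in low range the same blur dims it.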

noncopyable
12-06-2007, 10:17 AM
I know what you mean. Still, if you look at the pictures, the patterns do not look the same; the pattern between images 1-2 and images 2-3 looks quite different. I am starting to think these are just images independent from the idea/algorithm...

ZbuffeR
12-06-2007, 11:52 AM
That blur he mentioned at page 49, is the difference between page 50 and 51.
Both page 50 and 51 he magnified results with bilinear filter, the only difference is he applied blur before magnifying images.

No. You are wrong. Re-read the presentation.
- page 50 : "Applying Gaussian Filters to Downscaled Buffers"
- page 51 : "Magnify them using bilinear filtering and composite the results. The error is almost unrecognizable"



Could you tell me about the steps you did in Gimp, if you got same results?
new 512*512 image, black background.
use airbrush with light green, in add mode, to make an image like page 48.
copy the original image to a new image 0, to ease comparison.
then copy image 0 and downscale it bilinearly, halving the x and y sizes; apply a Gaussian blur to image 1, a width around 4 or 6 seems to work well.
repeat until you have 5 or 6 of these mipmaps.
then upscale (bilinear filter) and copy all mipmaps to new layers on image 0, using "additive" mode.
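The last step (magnify every mipmap back to full size, then add the layers) maps to code roughly like the sketch below. It is an assumption-laden sketch: images are nested float lists, and nearest-neighbour magnification stands in for the bilinear filter to keep it short.

```python
def upscale_nearest(img, h, w):
    """Magnify img to (h, w). Nearest filtering stands in here for the
    bilinear filter used in the Gimp steps, to keep the sketch short."""
    sh, sw = len(img), len(img[0])
    return [[img[y * sh // h][x * sw // w] for x in range(w)]
            for y in range(h)]

def composite_additive(layers, h, w):
    """Upscale every downscaled buffer to full size and add them together,
    like stacking the Gimp layers in 'additive' mode."""
    out = [[0.0] * w for _ in range(h)]
    for layer in layers:
        up = upscale_nearest(layer, h, w)
        for y in range(h):
            for x in range(w):
                out[y][x] += up[y][x]
    return out
```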

noncopyable
12-06-2007, 08:33 PM
No. You are wrong. Re-read the presentation.
- page 50 : "Applying Gaussian Filters to Downscaled Buffers"
- page 51 : "Magnify them using bilinear filtering and composite the results. The error is almost unrecognizable"


Then the only difference between the two pages is the magnification filter, and I can get the results on page 50 by just downscaling the original image and then applying a Gaussian blur with a bigger width? That is what I am doing now; the only difference is that I use a normal Gaussian blur.

Thanks for replies.