Simple IBR demo

Just wanted to share a simple demo with you. I have not gotten the VBO to work yet, and the demo doesn’t use any lighting, but you can see the principle at least.

I have been working with these volumes for a while; I originally used the depth info to generate shadow volumes. This is a side effect of that technique, where you render a model that consists only of images + depth info in the alpha channel.
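In case the packing part is unclear, here is a minimal sketch (not the demo’s actual code; the function and parameter names are just illustrative) of storing the depth map in the alpha channel of the color texture:

    /* Illustrative sketch only: pack an 8-bit depth map into the alpha
       channel of an RGB image and upload the result as one RGBA texture. */
    #include <stdlib.h>
    #include <GL/gl.h>

    GLuint makeColorDepthTexture(const unsigned char *color, /* RGB, w*h*3 */
                                 const unsigned char *depth, /* w*h bytes  */
                                 int w, int h)
    {
        unsigned char *rgba = malloc(w * h * 4);
        GLuint tex;
        int i;

        for (i = 0; i < w * h; ++i) {
            rgba[i * 4 + 0] = color[i * 3 + 0];
            rgba[i * 4 + 1] = color[i * 3 + 1];
            rgba[i * 4 + 2] = color[i * 3 + 2];
            rgba[i * 4 + 3] = depth[i];        /* depth rides in alpha */
        }

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        free(rgba);
        return tex;
    }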

The technique can handle multiple views and non-convex objects. The demo images are, however, only taken from a convex cube view.

No LOD levels are used yet…
http://www.tooltech-software.com/downloads/gizmo3d/binaries/win32/IBR_demo.zip

Tell me what you think???

I don’t know what’s going on there. The lighting is completely static. I don’t think that was intended. Also, there are some artifacts outside of the model (thin vertical lines).

Can the model be rotated?

Here’s a screenshot.

(Radeon 9500Pro, Cat 3.6)

The light is not yet implemented. The thin lines are, I guess, a driver bug; there are no such lines on NVidia or on the latest Radeon beta drivers.

Use
‘q’ and ‘w’ to move sideways
‘a’ and ‘z’ forward/backwards
‘s’ and ‘x’ up down

Use the left mouse button pressed for free movement, or the arrow keys for discrete movement.

The light will be implemented soon… Just working with the geometry right now…

Not much response for this demo?

In my eyes this is the future of rendering 3D graphics. If we had a bit better support for this in the hardware, we could easily be rendering realistic ray-traced objects in real time…

Was the demo too abstract, or is nobody interested in IBR?

My colleague was considering doing a project on image-based rendering. I would check it out, but I run Linux, not Windows.

Damn, your demos never work on my computer. It says that the file GDBASE.DLL is linked to the missing export function KERNEL32.DLL:TryEnterCriticalSection. Well, maybe it’s just my Win95…

As for the IBR stuff, I’m interested, but quite skeptical too. The idea of content-independent rendering speed is tempting, and some of the images look impressive. But then you find out that they’re just small areas rendered with supercomputers, the models take tons of memory, and when you look at the pictures again, you could almost always make a corresponding polygon model that would run in real time on three-year-old hardware.

Rendering models as depth maps could be effective, but the models would be hard to author, let alone animate. Besides, with displacement mapping of free-form polygon models becoming supported in hardware, isn’t this kind of a step back…?

-Ilkka

Originally posted by ToolTech:
Not much responce for this demo ?

I just tried it. All I could see was a flickering white shape on a blue background? The shape looked like two offset semicircles but it flickers quite rapidly. What was I supposed to see?

I’m using P4 2.8GHz, GF4 Ti4400.

“I just tried it. All I could see was a flickering white shape on a blue background? The shape looked like two offset semicircles but it flickers quite rapidly.”

That is all that I see as well!

I can see it, but… it looks like a quad mapped with an image with normals in it, nothing more… Should it be possible to rotate it or anything like that?

Seemed to run OK on my PC (GeForce 3 - AMD 2400+ XP - Detonator 40.72 - 512 MB RAM). Not sure if the output was correct though (prolly the old drivers - must update one day…)

The statue looked like it was textured with a normal map (i.e. it was blue and pink, etc.) and didn’t seem to have any depth to it.

Oh…

I know the demo is not very good. You are supposed to see a 3D model, and you can move around with the keys I listed above. The trick is that I generate a volume out of depth maps, e.g. shadow volumes out of shadow depth maps, or a volume rendering out of depth maps.

The demo requires ARB vertex program. The demo doesn’t fall back or tell you if this is not present; I know this is bad.
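For anyone else trying it, the check would be simple, roughly like this (a sketch of the missing fallback, not the demo’s code):

    #include <string.h>
    #include <GL/gl.h>

    /* Refuse to run (or fall back) when GL_ARB_vertex_program
       is not in the driver's extension string. */
    int hasVertexProgram(void)
    {
        const char *ext = (const char *) glGetString(GL_EXTENSIONS);
        return ext != NULL && strstr(ext, "GL_ARB_vertex_program") != NULL;
    }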

The demo shows a normal map. I have not yet fixed the software to do Phong lighting, so I just textured the model with the normal map.

The thing is really that there are no polys, just depth map images, and that the method can handle concave objects with continuous layers, not discrete layered images.

E.g. take a camera + a laser range finder, feed in real-time images + the depth map from the LRF, and you will be able to see a complete 3D world in real time.
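The reconstruction itself is just a pinhole unprojection per depth sample, something like this (a rough sketch; the intrinsics fx, fy, cx, cy are made up for illustration, this is not code from the demo):

    /* Turn one depth-map pixel (u, v, depth) back into a 3D point,
       assuming a simple pinhole camera looking down the +z axis. */
    typedef struct { float x, y, z; } Vec3;

    Vec3 unprojectPixel(float u, float v, float depth,
                        float fx, float fy, float cx, float cy)
    {
        Vec3 p;
        p.z = depth;                  /* range along the view axis   */
        p.x = (u - cx) * depth / fx;  /* recover the lateral offsets */
        p.y = (v - cy) * depth / fy;
        return p;
    }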

Out of curiosity, can you share how you render them? Relief textures? Tessellation? Slices? Something new maybe?!

-Ilkka

Yes.

It is not based on any of the methods you named.

Relief rendering with prewarped images is still too slow, and concave objects are tricky.

Layered images suffer from sampling artifacts unless you want to warp all the images to screen-orthogonal slices.

Traditional tessellation requires depth-image analysis, and perhaps edge detection, filtering, etc., to create good closed volumes.

However, a combination of all three methods above makes a good solution possible… :wink:

E.g. use a warping pixel shader (or the CPU, etc.) to create a seamless integration of the “image layers”, use a vertex program to create “depth” geometry that the image layers can be “glued on” to, and use the layer depths to sort the layer geometry. This is the trick I am doing.
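On the CPU the middle step would look roughly like this (a sketch of the idea only; in the real thing it runs as a vertex program, and all the names here are mine):

    /* Take a regular grid over the depth image and push each grid vertex
       out along the view direction by the sampled depth, so the image
       layers have geometry to be "glued on". */
    typedef struct { float x, y, z; } Vertex;

    void displaceGrid(Vertex *grid, int gw, int gh,               /* grid size */
                      const unsigned char *depth, int dw, int dh, /* depth map */
                      float depthScale)
    {
        int i, j;
        for (j = 0; j < gh; ++j) {
            for (i = 0; i < gw; ++i) {
                /* nearest-neighbour sample of the depth map */
                int u = i * (dw - 1) / (gw - 1);
                int v = j * (dh - 1) / (gh - 1);
                float d = depth[v * dw + u] / 255.0f;

                Vertex *p = &grid[j * gw + i];
                p->x = (float) i / (gw - 1);   /* position in the grid plane */
                p->y = (float) j / (gh - 1);
                p->z = d * depthScale;         /* displaced by the depth     */
            }
        }
    }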

I think that when uberbuffers, or whatever they will call them, become available, this trick can be done at very high frame rates. I have been asking NVidia for this for a couple of years now, ever since I created an algorithm to do stencil volume shadows from shadow maps. My guess is that this way a lot of “complex” scenes will be rendered much faster than traditionally polygon-modelled scenes…

Are you aware of this paper? http://www.mpi-sb.mpg.de/~jnkautz/publications/index.html

J. Kautz, H.-P. Seidel, Hardware Accelerated Displacement Mapping for Image Based Rendering, Graphics Interface 2001, pages 61-70, June 2001.