Inferred rendering

Hello everyone.

I need to render translucent materials using OpenGL 3.3 and a deferred shading model. After a lot of research I found a technique called inferred rendering that combines deferred shading with translucent materials. My question is: with inferred rendering, do I also need a material shader that handles subsurface scattering, or can I implement translucent materials with inferred lighting/rendering without subsurface scattering?

Thanks in advance.

Inferred rendering is simply writing translucent pixels in a dither pattern so that some of the pixels keep the background colour. It has nothing directly to do with subsurface scattering. From the examples I have seen it can generate very bad moiré.
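To give a rough idea, here is a minimal GLSL 3.30 sketch of that stipple write, assuming a 2x2 pattern and a per-object offset (the names and G-buffer layout are illustrative, not taken from the paper):

```glsl
#version 330 core

// Translucent object's G-buffer pass: only a subset of pixels is written,
// so the rest keep what the opaque pass (or another translucent layer) wrote.
layout(location = 0) out vec4 outAlbedo;
layout(location = 1) out vec4 outNormal;

in vec3 vNormal;
in vec2 vTexCoord;

uniform sampler2D uAlbedoTex;
uniform int       uStippleOffset;   // different offset per translucent layer

void main()
{
    // 2x2 screen-space stipple: this layer keeps one pixel out of every four
    ivec2 p    = ivec2(gl_FragCoord.xy) % 2;
    int   cell = p.x + p.y * 2;
    if (cell != (uStippleOffset % 4))
        discard;

    outAlbedo = texture(uAlbedoTex, vTexCoord);
    outNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}
```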

It's not related to subsurface scattering, but it uses DSF (discontinuity sensitive filtering) to eliminate the interlacing artifacts, and that isn't cheap. Inferred rendering has a lot of failure cases.

Some presentation
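To give a rough idea of what that DSF step does: when the full-resolution material pass upsamples the lower-resolution, stippled light buffer, it only keeps the samples whose stored ID matches the surface being shaded. A GLSL sketch of that idea (the buffer layout, ID packing and weighting here are my own assumptions, not the paper's exact scheme):

```glsl
#version 330 core

// Material pass: upsample the stippled light buffer with a discontinuity
// sensitive filter before multiplying in the surface albedo.
uniform sampler2D uLightBuffer;   // RGB = light, A = surface ID from the geometry pass
uniform vec2      uLightBufSize;  // light buffer resolution in texels
uniform sampler2D uAlbedoTex;

in  vec2  vTexCoord;
flat in float vSurfaceId;         // same ID this surface wrote into the light buffer
out vec4  fragColor;

vec3 dsfSampleLight(vec2 uv)
{
    vec2  pos  = uv * uLightBufSize - 0.5;
    ivec2 base = ivec2(floor(pos));
    vec2  f    = fract(pos);

    vec3  light = vec3(0.0);
    float wSum  = 0.0;
    for (int y = 0; y < 2; ++y)
    for (int x = 0; x < 2; ++x)
    {
        ivec2 tc = clamp(base + ivec2(x, y), ivec2(0), ivec2(uLightBufSize) - 1);
        vec4  s  = texelFetch(uLightBuffer, tc, 0);
        float w  = (x == 1 ? f.x : 1.0 - f.x) * (y == 1 ? f.y : 1.0 - f.y);
        if (abs(s.a - vSurfaceId) > 0.01)   // reject samples from other surfaces
            w = 0.0;
        light += s.rgb * w;
        wSum  += w;
    }
    // fall back to the nearest sample if every neighbour was rejected
    return wSum > 0.0 ? light / wSum
                      : texelFetch(uLightBuffer, clamp(base, ivec2(0), ivec2(uLightBufSize) - 1), 0).rgb;
}

void main()
{
    // light buffer assumed to be at half the screen resolution here
    vec2 lightUV = gl_FragCoord.xy / (uLightBufSize * 2.0);
    vec3 albedo  = texture(uAlbedoTex, vTexCoord).rgb;
    fragColor    = vec4(albedo * dsfSampleLight(lightUV), 1.0);
}
```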

I personally use a separate pass for semi-transparent objects, because I can't think of many cases where you actually need fully correct lighting for them. And you can still use simple forward lighting for the few objects that really need it.
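In practice that pass is just ordinary forward shading drawn after the deferred resolve, with blending enabled. A minimal GLSL 3.30 sketch with a single directional light (all names are assumed):

```glsl
#version 330 core

// Drawn after the deferred resolve, e.g. with:
//   glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
//   glDepthMask(GL_FALSE);   // test against opaque depth, but don't write it
in vec3 vNormal;
in vec2 vTexCoord;

uniform sampler2D uAlbedoTex;
uniform vec3      uLightDir;     // normalized, pointing toward the light
uniform vec3      uLightColor;
uniform float     uOpacity;

out vec4 fragColor;

void main()
{
    vec4  albedo = texture(uAlbedoTex, vTexCoord);
    float ndotl  = max(dot(normalize(vNormal), uLightDir), 0.0);
    vec3  lit    = albedo.rgb * uLightColor * ndotl;

    // alpha-blended over the already lit deferred result
    fragColor = vec4(lit, albedo.a * uOpacity);
}
```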

Another way of rendering translucent objects is to use a deep framebuffer. Search for humus and deferred shading. He (humus aka Emil Persson) has a lot of interesting material. However, you still have tradeoffs like even higher memory usage and more bandwidth.
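I don't remember his exact buffer layout, but the general idea of a deep G-buffer is to store more than one surface per pixel, light each layer separately, and composite them at the end. A simplified sketch of the final composite for two layers (entirely my own assumption, not his code):

```glsl
#version 330 core

// Full-screen composite after each stored G-buffer layer has been lit separately.
uniform sampler2D uLitLayer0;   // lit back layer (opaque surface)
uniform sampler2D uLitLayer1;   // lit front layer, alpha = coverage of that surface

in  vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    vec4 back  = texture(uLitLayer0, vTexCoord);
    vec4 front = texture(uLitLayer1, vTexCoord);

    // back-to-front "over" compositing of the two layers
    fragColor = vec4(mix(back.rgb, front.rgb, front.a), 1.0);
}
```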

EDIT: I just remembered yet another approach by John Chapman.

Anyway, deferred shading and translucency (transparency is actually the wrong term, because fully transparent objects aren't visible :wink: ) is, pardon my French, a bitch. Either accept the trade-offs or implement a forward rendering path - which I actually consider a trade-off as well. Then again, most trade-offs involving memory consumption and bandwidth will be mitigated by newer hardware with more resources and computing power anyway - not to mention the increased implementation effort needed for a second render path.

Still, think about the systems you're targeting and then make an educated decision rather than blindly following the first suggestion that comes along.

I have also seen this article. To be truthful, I didn't read it very carefully yet. I took a glimpse at it, and it seems to suggest a forward rendering pass for translucent objects in which you achieve the translucency effect by sampling a texture containing the already-lit opaque objects (but before post-processing) and mixing it with the current fragment based on its opacity; the opaque and translucent results are then combined. I'm not sure how it handles multiple translucent objects in front of each other, but I think it would be easy to make an acceptable approximation for a game engine. It should be a bit faster and more robust than the method I use, because I have to store an accumulated alpha value for the translucent pass separately, and that bugs me a lot.
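If I understood it right, the translucent pass shader boils down to something like this rough GLSL sketch (my own reading of the article, all names assumed): sample the already-lit opaque buffer at the fragment's screen position, light the translucent fragment with whatever forward model you like, and mix by its opacity.

```glsl
#version 330 core

uniform sampler2D uLitOpaque;    // opaque scene, already lit, before post-processing
uniform sampler2D uAlbedoTex;
uniform vec3      uLightDir;     // normalized, pointing toward the light
uniform vec3      uLightColor;
uniform float     uOpacity;
uniform vec2      uScreenSize;

in vec3 vNormal;
in vec2 vTexCoord;

out vec4 fragColor;

void main()
{
    // what's behind this fragment in the already lit opaque image
    vec3 behind = texture(uLitOpaque, gl_FragCoord.xy / uScreenSize).rgb;

    // simple forward lighting for the translucent surface itself
    vec4  albedo = texture(uAlbedoTex, vTexCoord);
    float ndotl  = max(dot(normalize(vNormal), uLightDir), 0.0);
    vec3  lit    = albedo.rgb * uLightColor * ndotl;

    // translucency: mix with the background based on opacity; overlapping
    // translucent objects still need sorting or some approximation, since
    // uLitOpaque only contains the opaque scene
    fragColor = vec4(mix(behind, lit, albedo.a * uOpacity), 1.0);
}
```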