Real-time vs offline rendering

This might seem like a naive question, but it has really made me think. I’m interested in isolating the key differences in features and quality between real-time rendering, e.g. with OpenGL, and offline rendering, e.g. with the many open source renderers out there for use in programs like Blender, Maya, etc.

What follows is conjecture, since I have no experience with offline renderers and am just a beginner at real-time rendering, but offline rendering obviously provides a quality that is not yet achievable in real time, and I’m curious exactly what these renderers are doing differently when processing time is not such a priority.

It would seem to me that the most notable differences are:

[ul]
[li]Lighting - Offline renderers have better ways of calculating lighting, perhaps by ray tracing it?
[/li][li]Anti-aliasing - Offline renderers have more advanced built-in anti-aliasing algorithms to really smooth out lines and edges?
[/li][li]Vertex resolution - Perhaps offline renderers have built-in subdivision methods for creating high-resolution geometry?
[/li][/ul]

It also occurs to me that perhaps you can technically achieve the exact same quality with OpenGL using real-time methods, but that you significantly sacrifice frame processing time, in which case OpenGL more or less resembles offline rendering. Is this a plausible assumption?

More experienced people will disagree with a few of the subtleties in these observations of yours, but it seems that you are pretty much dead-on, especially for a beginner. Your thoughts on the matter are clear and well thought out, and I suspect that you will do well.

I would say that OpenGL is more of a generic GPU driver interface than a renderer.

A renderer is a large collection of procedures and formulas that can calculate lighting, whereas a video card driver API such as OpenGL is how you instruct a video card to perform those operations in a way that is optimized for real-time rendering. That said, GPUs are becoming increasingly flexible these days, which is very helpful for incredibly detailed lighting routines.

Could you make an offline renderer using OpenGL? I don’t see why not. You could add as many passes as you like every frame; you could add dozens of lights with shadows for them all. You could write shader programs that are several pages long, and you could even have a different shader for every model, with many branches in that shader to light different parts of that model separately. I suppose the more you add, the more impressive the results will be, with the corresponding performance trade-off.
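To make the “as many passes as you like” point concrete, here is a rough sketch of brute-force multi-pass lighting: one additive pass per light, repeated until you run out of patience rather than out of milliseconds. It assumes a valid GL context, a linked shader program with the uniform names shown, and a drawScene() helper that issues the draw calls; those names are mine, not from any particular engine, and a real implementation would also want a depth pre-pass.

#include <GL/glew.h>
#include <vector>

struct Light { float position[3]; float color[3]; };

void drawScene(); // assumed to exist elsewhere: binds VAOs and issues draw calls

void renderAllLights(GLuint program, const std::vector<Light>& lights)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(program);

    // Accumulate each light's contribution into the framebuffer.
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);

    for (const Light& light : lights)
    {
        // Upload this light's parameters (uniform names are illustrative).
        glUniform3fv(glGetUniformLocation(program, "uLightPos"), 1, light.position);
        glUniform3fv(glGetUniformLocation(program, "uLightColor"), 1, light.color);

        drawScene(); // one full geometry pass per light
    }

    glDisable(GL_BLEND);
}

Whether that counts as “offline” is really just a question of how many of those passes you are willing to stack up per frame.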

There is not much stopping you from porting RenderMan code into your OpenGL application using GLSL, Cg (‘C for graphics’), or even plain CPU code.

Now that many GPU manufacturers have built transform feedback into their circuit designs and APIs, there is likely going to be a dramatic increase in this idea of OpenGL-based offline renderers. Historically, GPUs did not have the flexibility that CPUs have had for decades; they tended to be more ‘one-way.’ Once you sent the GPU your data, that was that. It was like setting it in stone. They were highly parallel and excellent at handling vector math, but not very flexible. Now that this has changed, I suspect that the vision many of us share is a matter of when, not if.
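As a rough sketch of what that flexibility looks like in practice, the core transform feedback calls are shown below. It assumes the program was linked after registering its output varyings with glTransformFeedbackVaryings(), and that the VAO, capture buffer, and vertex count already exist; the names are illustrative, not from any specific codebase.

#include <GL/glew.h>

void captureProcessedVertices(GLuint program, GLuint inputVAO,
                              GLuint captureBuffer, GLsizei vertexCount)
{
    glUseProgram(program);
    glBindVertexArray(inputVAO);

    // Redirect the vertex shader's outputs into a buffer we can read back.
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, captureBuffer);

    glEnable(GL_RASTERIZER_DISCARD);      // skip rasterization; we only want the data
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, vertexCount);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);

    // The processed vertices now live in captureBuffer and can be mapped,
    // read back, or fed into another pass.
}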

As a very simple example, one of the things that is a major challenge in real-time rendering is anti-aliasing. There are several techniques, each with trade-offs, and still the results are not necessarily perfect. By contrast, in a program like Blender, there is just a simple checkbox in the rendering settings to turn on anti-aliasing. That’s it, and you get very smooth renders. What exactly would such a checkbox be doing behind the scenes? Are these offline renderers simply executing a big chunk of code for each pixel to make sure it blends perfectly with those around it? Does anyone know what this code might look like, and can such code technically be implemented in an OpenGL shader? I’m sure that offline anti-aliasing is doing a lot more than the comparatively simple techniques of supersampling, multisampling, etc., in OpenGL. (Or, perhaps a more focused version of this question would be: if you call a “smooth” drawing function in OpenGL, what is the GPU doing compared to what an offline renderer is doing?)

This is really a big question, but basically if your top requirement is “real-time” (60fps or better), you have 16.666ms to do everything: culling, occlusion, lighting, shadowing, shading, anti-aliasing, …the works. That’s obviously going to preclude using some higher-quality rendering methods that don’t fit the time budget on today’s hardware, except in limited cases. OTOH, if your top requirement is quality rather than time, lots of these excluded options become practical.

Briefly, most conventional real-time methods (termed “standard rasterization”) can be oversimplified as:


for each object in the world: 
  smash it onto the screen pixels

what you’re left with is your image.

Thanks to GPUs, with a small amount of application assistance (such as culling), this can be made very, very fast. Part of the reason for this speed is that each point on each object is rendered largely independently of all the other points on that object and of the points on other objects in the scene.
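Fleshing that out just a little (a heavily simplified C++ sketch of my own, splatting points instead of filling triangles, with no real shading model), the structure looks roughly like this:

#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };
struct Object { std::vector<Vec3> points; Vec3 color; };

// "Smash it onto the screen pixels": project every point of every object and
// keep the nearest one per pixel. A real rasterizer fills triangles and runs
// a shader per fragment, but the overall structure is the same: iterate over
// objects, and shade each covered pixel using only local information.
void rasterize(const std::vector<Object>& world, int width, int height,
               std::vector<Vec3>& color, std::vector<float>& depth)
{
    color.assign(width * height, Vec3{0.0f, 0.0f, 0.0f});
    depth.assign(width * height, std::numeric_limits<float>::infinity());

    for (const Object& obj : world)
        for (const Vec3& p : obj.points)
        {
            if (p.z <= 0.0f) continue;                          // behind the camera
            int px = int((p.x / p.z + 1.0f) * 0.5f * width);    // perspective divide,
            int py = int((p.y / p.z + 1.0f) * 0.5f * height);   // then map to pixels
            if (px < 0 || px >= width || py < 0 || py >= height) continue;

            int i = py * width + px;
            if (p.z < depth[i])                                 // nearest surface wins
            {
                depth[i] = p.z;
                color[i] = obj.color;  // purely local "shading": nothing else
            }                          // in the scene is consulted here
        }
}

Note that the per-pixel write never looks at any other object; that locality is exactly what makes it fast, and it is also the limitation described next.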

…But you can just hear the quality limitations of this in the technique description. Each point rendered on each object doesn’t “know” about the other points on that object (much less the points on other objects in the scene) when it’s rendered. If you “need” it to (such as for shadow casting, refractions, reflections, volumetric absorption/scattering, etc.), you have to arrange for this to be precomputed and passed in some fashion so it can be applied. This is one area where the complexity mounts up in a realtime renderer.

However, take an offline “high quality” technique such as ray tracing, which is typically not regarded as realtime (without significant limitations). It can be roughly oversimplified as:


for each pixel on the screen:
  bounce rays from it off the scene’s objects toward the light sources, and integrate their contributions

Here you can see that the approach has been flipped around. Instead of iterating over world objects, we’re iterating over screen pixels. Also, instead of each shading computation being “local” to that point on that surface, it’s “global”. That is, for each pixel we can spawn multiple rays and “hunt out” all the potential lighting contributions to this pixel “globally” in the scene of objects and light sources; that is why this and similar techniques are termed “global illumination” techniques. And because of this, whereas in the former technique each shading computation didn’t need direct access to a database of scene objects, in the latter we do need this. So as you might imagine, whereas in the first technique we have a lot of coherent (e.g. sequential) access, which helps improve performance, with the latter we have a lot of random (incoherent) access, looking up into this database of scene objects and light sources to trace pseudo-randomly bouncing rays, which is inherently more expensive and harder to “make fast”.
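For the flavor of it, here is a minimal direct-lighting ray tracer over spheres (my own heavily simplified sketch: no bounces, no Monte Carlo integration, no acceleration structure, and the shadow test ignores the distance to the light). The thing to notice is how often it walks the whole scene list, both to find the nearest hit and again for every shadow ray:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(const Vec3& v) { return v * (1.0f / std::sqrt(dot(v, v))); }

struct Sphere { Vec3 center; float radius; Vec3 albedo; };
struct Light  { Vec3 position; Vec3 intensity; };

// Ray/sphere intersection: nearest hit distance along the ray, or negative on a miss.
float intersect(const Vec3& origin, const Vec3& dir, const Sphere& s)
{
    Vec3 oc = origin - s.center;
    float b = dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    return (disc < 0.0f) ? -1.0f : -b - std::sqrt(disc);
}

// The "flipped around" loop body: given one pixel's primary ray, find the
// nearest surface, then query the whole scene again for each light's
// visibility. Every step is a lookup into the scene "database".
Vec3 tracePixel(const Vec3& eye, const Vec3& rayDir,
                const std::vector<Sphere>& scene, const std::vector<Light>& lights)
{
    float nearest = 1e30f;
    const Sphere* hit = nullptr;
    for (const Sphere& s : scene) {                        // nearest-hit search
        float t = intersect(eye, rayDir, s);
        if (t > 0.0f && t < nearest) { nearest = t; hit = &s; }
    }
    if (!hit) return {0.0f, 0.0f, 0.0f};                   // ray escaped: background

    Vec3 point  = eye + rayDir * nearest;
    Vec3 normal = normalize(point - hit->center);

    Vec3 color{0.0f, 0.0f, 0.0f};
    for (const Light& light : lights) {                    // integrate light contributions
        Vec3 toLight = normalize(light.position - point);
        bool shadowed = false;
        for (const Sphere& s : scene)                      // shadow ray: a global visibility query
            if (&s != hit && intersect(point, toLight, s) > 0.0f) { shadowed = true; break; }
        if (shadowed) continue;

        float diffuse = std::max(0.0f, dot(normal, toLight));
        color = color + Vec3{hit->albedo.x * light.intensity.x,
                             hit->albedo.y * light.intensity.y,
                             hit->albedo.z * light.intensity.z} * diffuse;
    }
    return color;
}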

So briefly, the realtime/offline rendering technique difference is largely the “local” vs. “global” illumination thing, with better antialiasing (sampling/filtering) to some extent as well.
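And to tie that back to the earlier Blender question: conceptually, much of what an offline renderer’s anti-aliasing setting buys you is just “take many shading samples per pixel and filter them.” A minimal sketch of that idea follows; the shade callback stands in for whatever evaluates the scene color at a point on the image plane, and a real renderer would add smarter sample placement and a proper reconstruction filter on top of it.

#include <cstdlib>
#include <functional>

struct Color { float r, g, b; };

// Brute-force supersampling: shade several jittered sub-pixel positions and
// average them. This is the kind of thing real-time MSAA approximates
// cheaply; offline, you can simply afford dozens of full shading samples.
Color renderPixel(int px, int py, int samplesPerPixel,
                  const std::function<Color(float, float)>& shade)
{
    Color sum{0.0f, 0.0f, 0.0f};
    for (int s = 0; s < samplesPerPixel; ++s)
    {
        // Pick a random position inside this pixel's footprint.
        float jx = px + float(std::rand()) / float(RAND_MAX);
        float jy = py + float(std::rand()) / float(RAND_MAX);

        Color c = shade(jx, jy);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return {sum.r / samplesPerPixel, sum.g / samplesPerPixel,
            sum.b / samplesPerPixel};
}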