Real-time vs offline rendering
This may be a naive question, but it has really made me think. I'm interested in isolating the key differences in features and quality between real-time rendering, e.g. with OpenGL, and offline rendering, e.g. with the many open-source renderers available for programs like Blender, Maya, etc.
What follows is conjecture: I have no experience with offline renderers and am only a beginner at real-time rendering. But offline rendering obviously achieves a quality not yet attainable in real time, and I'm curious exactly what it does differently when processing time is not much of a priority.
It would seem to me that the most notable differences are:
- Lighting: do offline renderers have better ways of calculating lighting, for example by ray tracing it?
- Anti-aliasing: do offline renderers have more advanced built-in anti-aliasing algorithms to really smooth out lines and edges?
- Vertex resolution: perhaps offline renderers have built-in subdivision methods for creating high-resolution geometry?
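To make the anti-aliasing point concrete, here is a toy Python sketch of what I imagine supersampling to be (my own naive guess, not any actual renderer's code): render at several samples per pixel, then box-filter the samples down to one pixel each.

```python
def downsample(hi_res, factor):
    """Average factor x factor blocks of a 2D grid of grayscale samples
    into one pixel each (box-filter supersampling)."""
    h = len(hi_res) // factor
    w = len(hi_res[0]) // factor
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for sy in range(factor):
                for sx in range(factor):
                    total += hi_res[y * factor + sy][x * factor + sx]
            out[y][x] = total / (factor * factor)
    return out

# A hard black/white edge sampled at 2x2 per pixel averages to gray,
# which is exactly the smoothed edge effect I mean:
hi = [[1.0, 0.0],
      [1.0, 0.0]]
print(downsample(hi, 2))  # [[0.5]]
```

Is this, roughly, what offline renderers spend their extra time on per pixel?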
It also occurs to me that perhaps you could technically achieve the exact same quality with OpenGL using real-time methods, at the cost of a significant increase in frame time, in which case OpenGL would more or less resemble an offline renderer. Is this a plausible assumption?
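For instance, I imagine one could jitter each frame by a sub-pixel amount and average many frames together (the way an accumulation buffer works). A toy Python sketch of that idea, where the one-edge "scene" and all function names are made up purely for illustration:

```python
import random

def edge_coverage(x):
    """Toy 'scene': a vertical edge at x = 0.5; left is white, right is black."""
    return 1.0 if x < 0.5 else 0.0

def accumulate(pixel_x, frames, rng):
    """Average `frames` jittered samples taken inside the pixel
    spanning [pixel_x, pixel_x + 1)."""
    total = 0.0
    for _ in range(frames):
        total += edge_coverage(pixel_x + rng.random())
    return total / frames

rng = random.Random(0)
print(accumulate(0.0, 1024, rng))  # approaches 0.5 as frames grow
```

Each extra frame refines the same image instead of showing a new one, so frame rate collapses while quality climbs, which is the trade-off I'm asking about.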