Multiview framebuffer targets



rupert
12-15-2017, 08:08 AM
I'm experimenting with OVR_Multiview on Android OpenGL ES and I'm struggling to understand some of the concepts.

The plan is to use multiview to make stereo rendering more efficient, so the left and right renderings would be side-by-side.

1. Can you render directly to the output framebuffer with multiview or does it have to be a bound texture?
2. If you have to set the viewport to the whole output buffer, how do you offset the two views? i.e. what stops it rendering twice across the whole viewport?

For background, I would rather render to the screen directly than incur the cost of an intermediate texture buffer and I'm writing at the Android SDK / Java level with direct calls to OpenGL and raw shader code (no libraries or fancy abstractions).

Apologies if I'm missing something obvious, but it's not exactly well documented and the "spec" reads like robot barf.

GClements
12-15-2017, 10:49 AM
I'm experimenting with OVR_Multiview on Android OpenGL ES and I'm struggling to understand some of the concepts.

The plan is to use multiview to make stereo rendering more efficient, so the left and right renderings would be side-by-side.

1. Can you render directly to the output framebuffer with multiview or does it have to be a bound texture?

It has to be a bound texture. Specifically, a 2D array texture.
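As a concrete illustration, here is a minimal C sketch of that setup (the function and variable names are my own, and error checking is omitted). It assumes a current GLES 3.0+ context on a device exposing OVR_multiview; the extension entry point isn't in the core headers, so it's loaded through eglGetProcAddress:

```c
#include <GLES3/gl3.h>
#include <EGL/egl.h>

/* The extension entry point is not declared in core headers, so load it. */
typedef void (*PFNGLFRAMEBUFFERTEXTUREMULTIVIEWOVR)(
    GLenum target, GLenum attachment, GLuint texture,
    GLint level, GLint baseViewIndex, GLsizei numViews);

GLuint createMultiviewFbo(GLsizei width, GLsizei height)
{
    PFNGLFRAMEBUFFERTEXTUREMULTIVIEWOVR glFramebufferTextureMultiviewOVR =
        (PFNGLFRAMEBUFFERTEXTUREMULTIVIEWOVR)
            eglGetProcAddress("glFramebufferTextureMultiviewOVR");

    /* Colour target: a 2D array texture with one layer per eye. */
    GLuint colorTex;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, colorTex);
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, 2);

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

    /* Attach layers 0 and 1; each draw call now renders to both views. */
    glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     colorTex, 0, /*baseViewIndex=*/0,
                                     /*numViews=*/2);
    return fbo;
}
```

Note the viewport is set to the size of a single layer, not twice the width; there is no side-by-side layout at this stage.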


2. If you have to set the viewport to the whole output buffer, how do you offset the two views? i.e. what stops it rendering twice across the whole viewport?

Each view is rendered to one layer of a 2D array texture. The effect is as if you'd performed exactly the same sequence of rendering operations multiple times, each time to a different texture layer. The only vertex shader output which is allowed to change between views is gl_Position, so you're effectively limited to drawing the same scene with different transformations.
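On the shader side, the view index is exposed as gl_ViewID_OVR. A minimal GLSL ES 3.00 vertex shader might look like the following (shown as a C string, as it would appear in raw GL code; the uniform and attribute names are just examples):

```c
/* Sketch: multiview vertex shader. gl_ViewID_OVR selects the per-eye
 * transform; with the base extension, only gl_Position may depend on it. */
static const char *kMultiviewVertexShader =
    "#version 300 es\n"
    "#extension GL_OVR_multiview : require\n"
    "layout(num_views = 2) in;\n"
    "uniform mat4 u_mvp[2];        /* one model-view-projection per eye */\n"
    "in vec4 a_position;\n"
    "void main() {\n"
    "    gl_Position = u_mvp[gl_ViewID_OVR] * a_position;\n"
    "}\n";
```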

Combining the views for output isn't addressed by the extension.
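One common way to combine them yourself is to copy each layer to one half of the default framebuffer. A rough sketch (again with hypothetical names, no error checking): glBlitFramebuffer can't address an array layer directly, so each layer is bound to a read framebuffer first.

```c
#include <GLES3/gl3.h>

/* Copy layer 0 to the left half and layer 1 to the right half of the
 * default framebuffer. w and h are the dimensions of one layer. */
void presentSideBySide(GLuint colorTex, GLsizei w, GLsizei h)
{
    GLuint readFbo;
    glGenFramebuffers(1, &readFbo);

    for (int eye = 0; eye < 2; ++eye) {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
        glFramebufferTextureLayer(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  colorTex, 0, /*layer=*/eye);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); /* default framebuffer */
        glBlitFramebuffer(0, 0, w, h,                  /* source: whole layer */
                          eye * w, 0, (eye + 1) * w, h, /* dest: one half */
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }
    glDeleteFramebuffers(1, &readFbo);
}
```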

rupert
12-16-2017, 02:08 AM
It has to be a bound texture. Specifically, a 2D array texture.

Each view is rendered to one layer of a 2D array texture. The effect is as if you'd performed exactly the same sequence of rendering operations multiple times, each time to a different texture layer. The only vertex shader output which is allowed to change between views is gl_Position, so you're effectively limited to drawing the same scene with different transformations.

Combining the views for output isn't addressed by the extension.

Thanks for clarifying and for the quick response.

This makes multiview very useful for the two-step rendering process normally employed for lens correction, which is arguably the core use case. It's less useful in my specific situation, where I would prefer to write straight to the screen, since I'd just be trading one two-step process for another. I'll run some benchmarks at least.