rivten

02-14-2017, 11:32 AM

Hello everyone!

So my question is a little bit tricky. I am working on a global illumination algorithm. For it to work, I want to compute a large texture containing several copies of the initial scene, each viewed from a different position in space, all packed together. Here is a link to an example of what I want to achieve: http://imgur.com/a/XYdqB

The texture is split into a 32 by 32 grid, and each group is the scene (here, the Cornell box) viewed from one point.

Anyway, to speed up the process, I want to use instancing. I thought about doing the following (all of this happens in the vertex shader):

1. The instance count is 32*32: we draw the scene as many times as there are groups in the megatexture.

2. Given gl_InstanceID, I know which group I am writing into, and I fetch the camera data for the point from which to render the scene.

3. I place all my vertices in camera space.

4. Then I need to place all my vertices in NDC space (the vertex shader outputs clip-space positions, but I make sure every w = 1.0f so that clip space and NDC space coincide).
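For step 2, the mapping from gl_InstanceID to a patch coordinate is just a modulo/division pair. A quick sketch in Python (the row-major layout convention here is only my choice for illustration):

```python
def patch_from_instance(instance_id, patches_per_side=32):
    # Row-major layout: instance 0 is the first patch, instance 31
    # ends the first row, instance 32 starts the second row, and so on.
    column = instance_id % patches_per_side
    row = instance_id // patches_per_side
    return (column, row)
```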

The tricky part is step 4. I need to know the NDC-space size of one group (which is easy), then take my camera-space vertices, shrink them, and translate them to the right position.
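Written out explicitly, the placement maps a post-divide NDC position in [-1, 1] into the sub-rectangle of one patch. Here is a small Python sketch of the math I am trying to implement (function and parameter names are mine, just for illustration):

```python
def place_in_patch(ndc_x, ndc_y, patch_x, patch_y, patches_per_side):
    # Each patch covers 2/n of the [-1, 1] NDC range, so shrinking the
    # full range into one patch is a scale by 1/n ...
    n = patches_per_side
    scale = 1.0 / n
    # ... followed by a translation to the centre of patch (patch_x, patch_y).
    center_x = -1.0 + (2.0 * patch_x + 1.0) / n
    center_y = -1.0 + (2.0 * patch_y + 1.0) / n
    return (ndc_x * scale + center_x, ndc_y * scale + center_y)
```

With this, the corners of the (0, 0) patch and the (n-1, n-1) patch land exactly on the corners of the full NDC square.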

Doing this is already giving me strange behavior, and I wanted to know if there is anything important to take into consideration here.

Moreover, I expect trouble with clipping. Every vertex that _should_ be clipped in clip space will no longer be once I shrink it, so I need to mimic the clipping behavior myself. First, I cannot think of an obvious way to do that, and second, since I know it is important to avoid branching, my solution should avoid that too.
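To be concrete about what I would have to emulate: with OpenGL conventions, the fixed-function clipper keeps a vertex when -w <= x, y, z <= w all hold before the perspective divide. A sketch of that test in Python:

```python
def outside_clip_volume(x, y, z, w):
    # OpenGL clip test: a vertex is inside the view volume when
    # -w <= x, y, z <= w, checked before the perspective divide.
    return not (-w <= x <= w and -w <= y <= w and -w <= z <= w)
```

(Strictly speaking the hardware clips primitives against these planes rather than rejecting whole vertices, so a per-vertex emulation can only approximate the real behavior.)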

I have included sample code of my vertex shader below; hopefully it will make everything clearer:

void main()
{
    // NOTE : Computing micro view matrix
    // [...]
    mat4 MicroMVP = MicroProjection * MicroView * ObjectMatrix;

    // Manual perspective divide so that clip space and NDC space coincide
    // (w becomes 1.0f).
    vec4 NDCPosition = MicroMVP * vec4(Position, 1.0f);
    NDCPosition /= NDCPosition.w;

    // Shrink the [-1, 1] range down to one patch and translate it into place.
    float Scaling = 2.0f / float(PatchSizeInPixels); // PatchSizeInPixels is 32 in my case
    NDCPosition.xy *= Scaling;
    NDCPosition.xy -= (1.0f - Scaling) * vec2(1.0f, 1.0f);
    NDCPosition.xy += PixelCoordInPatch * Scaling;

    gl_Position = NDCPosition;
}

Is anyone familiar with this, and does anyone have tips or advice on how to tackle this problem? I hope this is clear, and I am happy to answer any questions to make it even clearer.

Thanks a lot!
