View Full Version : How many matrices should be sent to vertex shader?

redphi

05-06-2012, 12:32 AM

If you have models in a world, you usually transform them from model space to world space to camera space to clip space. That's three transformations. Now I read here:

http://www.arcsynthesis.org/gltut/Positioning/Tut07%20The%20Perils%20of%20World%20Space.html

that sending a model-to-world transformation matrix and a world-to-camera transformation matrix to the shader separately is usually a bad idea, because you get precision issues (explained in that link). The better way is to calculate the model-to-world and world-to-camera matrices using doubles, multiply them into a model-to-camera matrix, then convert that double-precision matrix to single-precision floats and upload it to the shader (all explained in that link). My question is, why not multiply the camera-to-clip (perspective projection) matrix in there as well? Sure, the CPU now has more work to do per model in your world, but your vertex shader is doing less work per vertex, which seems like a good trade.
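To make the precision argument concrete, here is a minimal sketch in Python (hypothetical translation-only matrices and hand-rolled helpers standing in for a real math library). Combining the matrices in double precision first preserves a small relative offset that rounding each matrix to single precision beforehand throws away:

```python
import struct

def mat_mul(a, b):
    """Multiply two 4x4 row-major matrices in double precision."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def to_float32(m):
    """Round every entry to single precision, as an upload to the GPU would."""
    f32 = lambda x: struct.unpack('f', struct.pack('f', x))[0]
    return [[f32(x) for x in row] for row in m]

# A model 10 million units from the origin, with the camera 0.5 units short of it.
model_to_world  = [[1, 0, 0, 1e7],
                   [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
world_to_camera = [[1, 0, 0, -1e7 + 0.5],
                   [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Combine in double precision first, then downcast once:
good = to_float32(mat_mul(world_to_camera, model_to_world))
print(good[0][3])   # 0.5 -- the huge offsets cancel before rounding

# Downcasting each matrix separately loses the half-unit offset entirely,
# because -9999999.5 is not representable in float32:
bad = mat_mul(to_float32(world_to_camera), to_float32(model_to_world))
print(bad[0][3])    # 0.0
```

The same reasoning extends to folding the projection matrix in as well: one more double-precision multiply per object on the CPU, one fewer matrix multiply per vertex on the GPU.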

Aleksandar

05-06-2012, 01:46 AM

If you just need to calculate clip coordinates from model coordinates, you can (and should) send only a single model-view-projection (MVP) matrix to the vertex shader. Otherwise, you could send other matrices as well, but in any case the best approach is to calculate the full MVP matrix on the CPU using double precision.

agrum_

05-06-2012, 04:38 AM

If you're worried about precision, you can render your world with different projection matrices: first render the far items with near and far planes at 1 km and 50 km, then render from 1 m to 1 km, and you can still split the work further. However, it could make rendering slower if you don't cull your vertex array list before sending it to the shaders (that is, avoid sending the objects that fall outside the view frustum).

As for the number of matrices, it depends on your needs. For example, I send my shaders the classic MVP, but also the MV and the MVI (normal matrix) for lighting, and the texture matrices for shadowing. But having read the link, I really don't think the model matrix on its own is useful in any case.
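A sketch of what computing the normal matrix (MVI, the inverse-transpose of the model-view's upper-left 3x3) on the CPU might look like. The helper name and the cofactor approach are illustrative, not from any particular library:

```python
def normal_matrix(mv):
    """Inverse-transpose of the upper-left 3x3 of a 4x4 model-view matrix.
    This transforms normals correctly even under non-uniform scaling,
    where the plain MV rotation/scale part would skew them."""
    m = [row[:3] for row in mv[:3]]
    # Cofactor matrix: the cyclic-index trick yields each signed cofactor.
    cof = [[m[(i + 1) % 3][(j + 1) % 3] * m[(i + 2) % 3][(j + 2) % 3] -
            m[(i + 1) % 3][(j + 2) % 3] * m[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    # inverse = transpose(cof) / det, so cof / det is the inverse-transpose.
    det = sum(m[0][j] * cof[0][j] for j in range(3))
    return [[cof[i][j] / det for j in range(3)] for i in range(3)]

# Non-uniform scale: x is stretched by 2, so the x components of normals
# must shrink by half to stay perpendicular to the surface.
mv = [[2, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(normal_matrix(mv)[0])   # [0.5, 0.0, 0.0]
```

Doing this once per object on the CPU is far cheaper than inverting a matrix per vertex in the shader.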

redphi

05-06-2012, 03:03 PM

Is there any reason to send separate matrices instead of the MVP matrix? I can't think of any reason why the vertex shader would need access to camera-space coordinates or the like.

agrum_

05-06-2012, 10:49 PM

As I said, you might need the MV and the MVI in the vertex shader to get the position and the normal in view space, which is really useful for lighting computation in the fragment shader. I'd guess there are other matrices for other purposes, but you asked whether there is a reason to send separate matrices instead of just the MVP, so yes, there is. Still, try to send complete matrices: for example, compute the MVI on the CPU instead of sending the MV to the shader and inverting it there; it will be far faster.

Alfonse Reinheart

05-07-2012, 01:10 AM

Is there any reason to send separate matrices, instead of the MVP matrix?

Yes. In fact, if you read ahead exactly two tutorials in that series (http://www.arcsynthesis.org/gltut/Illumination/Illumination.html), you will see exactly why you need that stop-off point at camera space.

If there is one thing I want people to learn from those tutorials, it's to think for themselves. That's why the "In Review" portion (http://www.arcsynthesis.org/gltut/Positioning/Tut07%20In%20Review.html) gives you the assignment to combine the various matrices together. Look at what you have to do to the code to combine the matrices. See which feels more natural structurally, then decide for yourself whether it is useful.

redphi

05-07-2012, 02:14 AM

Sorry agrum_, I didn't mean to ignore you. I just didn't fully understand your post because I hadn't learned any lighting yet at that time.

Alfonse, excellent tutorials! They are quite challenging but extremely well-written. I tend not to do the "In Review" exercises because I am lazy, but I at least think about them.

In the Basic Lighting tutorial, you transform both your directional light vector and the normals on the model to camera space. But it seems like you never use the camera-space coordinates of the vertices themselves. So in this case, it would be more efficient to send the full model-to-clip matrix. Is this correct, or am I missing something?

What if you transformed your directional light vector to model space? Now the vertex shader needs to do even less work, right?

agrum_

05-07-2012, 02:35 AM

True for both ideas. You clearly get my point about avoiding redundant computation in the shaders.

Alfonse Reinheart

05-07-2012, 12:36 PM

So in this case, it would be more efficient to send the full model-to-clip matrix. Is this correct or am I missing something?

You could. Whether you would ever notice such efficiency is an entirely different matter.

What if you transformed your directional light vector to model space? Now the vertex shader needs to do even less work, right?

Perhaps, but your CPU needs to do more. For each object, you must invert your model-to-world matrix (or, more likely, build the inverse transform directly). Also, the light vector is no longer the same for every object; it must be recomputed per object. So there's overhead there.
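To make that trade-off concrete, here is a minimal sketch (hypothetical helper, Python standing in for application code) of moving a directional light into model space. For a rigid model matrix the "inverse" is cheap, since inverting a pure rotation is just transposing it and translation never affects directions, but it still has to run once per object on the CPU:

```python
def world_dir_to_model(world_dir, model_rotation):
    """Move a world-space direction into model space.
    model_rotation is the 3x3 rotation part of the model-to-world matrix.
    For a rigid transform its inverse is simply its transpose, so no
    general 4x4 matrix inversion is needed."""
    r = model_rotation
    # Multiply by the transpose: out[j] = sum over i of r[i][j] * dir[i]
    return [sum(r[i][j] * world_dir[i] for i in range(3)) for j in range(3)]

# A model rotated 90 degrees about +z (its model +x axis points along world +y).
rot_z_90 = [[0, -1, 0],
            [1,  0, 0],
            [0,  0, 1]]
# A light shining along world +x points along this model's -y axis:
print(world_dir_to_model([1, 0, 0], rot_z_90))   # [0, -1, 0]
```

If the model matrix also carries non-uniform scale, the transpose shortcut no longer applies and a true inverse is required, which is part of the per-object overhead being weighed here.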

It might be faster. But it might not. And you might never be able to tell the difference outside of a benchmarking tool.

There's a reason the tutorials don't get into the nuts-and-bolts of performance and optimization. It's not a simple thing.
