First off, I need to mention I haven’t started writing this shader yet; I’m brainstorming. I have written normalmap per-pixel lighting in GLSL, and I have just finished a working non-bumpmapped environment map shader for diffuse lighting and pseudo-reflections.
So, basically, I want to be able to perturb cubemap lookup by the normal map texture lookup, and I’m hitting a wall mentally.
In my normal map lighting shaders I – and as far as I can tell this is the de facto approach – convert the light vector and eye vector to tangent space in the vertex shader, and simply perturb those by the normal map in the fragment shader. Easy-peasy.
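For reference, that tangent-space setup looks roughly like this in an old-style GLSL vertex shader (a sketch only; the attribute and uniform names are mine, not from any actual shader in this thread):

```glsl
// Sketch of the tangent-space lighting setup described above.
// aTangent and uLightPosEye are illustrative names.
attribute vec3 aTangent;       // per-vertex tangent, computed on the CPU

uniform vec3 uLightPosEye;     // light position in eye space

varying vec3 vLightTS;         // light vector in tangent space
varying vec3 vEyeTS;           // eye vector in tangent space

void main()
{
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 t = normalize(gl_NormalMatrix * aTangent);
    vec3 b = cross(n, t);      // bitangent

    vec3 posEye   = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 lightDir = uLightPosEye - posEye;
    vec3 eyeDir   = -posEye;

    // Project the eye-space vectors onto the TBN basis => tangent space.
    vLightTS = vec3(dot(lightDir, t), dot(lightDir, b), dot(lightDir, n));
    vEyeTS   = vec3(dot(eyeDir,   t), dot(eyeDir,   b), dot(eyeDir,   n));

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position    = ftransform();
}
```

The fragment shader then just dots these interpolated vectors against the normal fetched from the normal map.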
But my environment mapping shader works in “world” space, computing the fragment position and eye vector in world space (via a passed-in model matrix uniform). I obviously can’t just perturb that by the normal map.
I’ve googled (a lot!) and haven’t had any luck. 99% of what I find is tex-combining dot3 stuff from the fixed-function days, and the rest is asm for DX, which I don’t grok. And I’ve found no “high level” algorithm descriptions to help me out. If anybody can give me an idea of how to approach this, or even some sample code, I’d be greatly in your debt.
The only way to do cubemap lookups is via a world space vector (cubemaps are essentially world space). For per-pixel shaders this means you need to rotate the normal vector to world space for each pixel, preferably with a mat3 multiply. AFAIK, there’s no alternative way to do this.
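A minimal fragment-shader sketch of that per-pixel rotation, assuming the vertex shader has supplied a tangent-to-world mat3 and a world-space eye vector (all names here are illustrative):

```glsl
// Sketch: fetch the tangent-space normal, rotate it to world space with a
// mat3, then do the cubemap lookup. vTBNtoWorld's columns are the
// world-space tangent, bitangent, and normal built in the vertex shader.
uniform sampler2D uNormalMap;
uniform samplerCube uEnvMap;

varying mat3 vTBNtoWorld;   // tangent space -> world space
varying vec3 vEyeWorld;     // fragment-to-eye vector in world space

void main()
{
    // Unpack [0,1] texel values to a [-1,1] normal.
    vec3 nTS = texture2D(uNormalMap, gl_TexCoord[0].st).rgb * 2.0 - 1.0;

    // Rotate the perturbed normal into world space.
    vec3 nWorld = normalize(vTBNtoWorld * nTS);

    // Reflect the world-space view vector and look up the cubemap.
    vec3 r = reflect(-normalize(vEyeWorld), nWorld);
    gl_FragColor = textureCube(uEnvMap, r);
}
```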
Anyone ever tried quaternion rotation for better performance?
But I’m no authority on this, I’d be interested to hear if there are better ways.
That’s kind of what I was expecting. I was trying to figure out if I could multiply the per-pixel normal by the inverse of the tangent space matrix and then by the model matrix to bring it into world space, but I don’t think my GPU has the horsepower for that.
At the very least, I don’t know if I can invert a matrix in GLSL…
It’s not that hard. Rotation matrices are orthogonal, so the transpose of the matrix is its inverse! If your normal-tangent-bitangent basis is orthogonal, then inverse matrix multiplication is easily accomplished in GLSL by writing vec3 * mat3 instead of mat3 * vec3. Shade on!
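In other words (a sketch; the variable names are made up for illustration):

```glsl
// In GLSL, v * M is equivalent to transpose(M) * v. Since the inverse of
// an orthogonal (pure rotation) matrix is its transpose, swapping the
// multiplication order applies the inverse rotation with no inverse() call.
mat3 tbn = mat3(tangent, bitangent, normal); // columns are the basis vectors

vec3 vTangentSpace = vEyeSpace * tbn;  // inverse rotation: eye -> tangent space
vec3 vBackAgain    = tbn * vTangentSpace;  // forward rotation: tangent -> eye space
```

This only works if the basis really is orthonormal; a skewed or unnormalized TBN needs a true inverse.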
Y-Tension: Thanks, that’s something I didn’t know! You learn something every day…
So I took a stab at the algorithm (using a flat normalmap (128, 128, 255) for debugging), and the results are plausible but subtly incorrect. In general, moving the camera around, the reflections seem about right, but there are some discrepancies between the normalmap-perturbed reflection and the flat, non-perturbed one.
I strongly suspect that the issue is my tangent vector generation.
That being said, I think the algorithm is correct now! So thank you!
I chose the “lazy” path.
I took the normalmap shader (which works in tangent space) and just added code that transforms reflected vector back into world space using 3x3 matrix made of normal, binormal and tangent vectors, which are passed to vertex shader anyway (I just needed to pass these to fragment shader).
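If I reconstruct that “lazy” path as a sketch, it comes out something like the following (attribute/uniform names are mine; the only new work versus the tangent-space lighting shader is the final mat3 rotation):

```glsl
// --- Vertex shader (sketch) ---
// Build a tangent-to-world mat3 from the normal, binormal, and tangent,
// and also project the eye vector into tangent space as usual.
attribute vec3 aTangent;
attribute vec3 aBinormal;

uniform mat4 uModelMatrix;   // object space -> world space
uniform vec3 uEyePosWorld;   // camera position in world space

varying mat3 vTBN;           // tangent space -> world space
varying vec3 vEyeTS;         // eye vector in tangent space

void main()
{
    mat3 model3 = mat3(uModelMatrix);
    vec3 t = normalize(model3 * aTangent);
    vec3 b = normalize(model3 * aBinormal);
    vec3 n = normalize(model3 * gl_Normal);
    vTBN = mat3(t, b, n);    // columns are the world-space basis vectors

    vec3 eyeWorld = uEyePosWorld - vec3(uModelMatrix * gl_Vertex);
    vEyeTS = vec3(dot(eyeWorld, t), dot(eyeWorld, b), dot(eyeWorld, n));

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position    = ftransform();
}
```

```glsl
// --- Fragment shader (sketch) ---
// Reflect in tangent space exactly as the normal-map lighting shader does,
// then rotate the reflected vector to world space for the cubemap lookup.
uniform sampler2D uNormalMap;
uniform samplerCube uEnvMap;

varying mat3 vTBN;
varying vec3 vEyeTS;

void main()
{
    vec3 nTS = texture2D(uNormalMap, gl_TexCoord[0].st).rgb * 2.0 - 1.0;
    vec3 rTS = reflect(-normalize(vEyeTS), normalize(nTS));
    vec3 rWorld = vTBN * rTS;           // tangent space -> world space
    gl_FragColor = textureCube(uEnvMap, rWorld);
}
```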
Here are some screenshots that will clarify the issue. First, two screenshots of non-cubemapped rendering of a torus. The first has vanilla Lambertian illumination, the second uses a flat normalmap. As expected, the two render identically.
Lambertian:
Flat normalmap:
Now, here’s a non-normalmap-perturbed cubemap rendering of the torus. It looks and behaves correctly.
And, with the flat normalmap, I’d expect the same output, but nope!
So, I know I’m doing something very wrong. My normalmap-perturbed lighting is correct, so I’m confident that my tangents (computed on the CPU and passed to the shader) are correct. Ergo: my normalmap-perturbed cubemap lookup is malarkey.
Sorry for the inline pics, but they’re needed to clarify the situation
It’s a personally written C++ gui rendered in GL. In principle, it’s pretty OK ( some good features ), but it’s not something I’d consider resume material!
True, but Lambertian illumination uses only the normal… Are you sure the generated tangent orientations are consistent across the torus? If not, texture coordinates will vary greatly across a polygon, causing discontinuities like the ones in your screenshots.
At the very least, only pass two of T, B, and N. Calculate the third from the cross product of the other two. This should be faster on most hardware (saves a normalize), and will produce cleaner results.
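A vertex-shader sketch of that suggestion (names are illustrative, and it assumes right-handed UVs; if your UVs can be mirrored, you’d multiply the cross product by a per-vertex handedness sign instead of assuming +1):

```glsl
// Sketch: pass only the normal and tangent; derive the bitangent with a
// cross product in the vertex shader instead of passing a third attribute.
attribute vec3 aTangent;

varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;

void main()
{
    vNormal    = normalize(gl_NormalMatrix * gl_Normal);
    vTangent   = normalize(gl_NormalMatrix * aTangent);
    // Already unit length when vNormal and vTangent are orthonormal,
    // so no extra normalize is needed here.
    vBitangent = cross(vNormal, vTangent);

    gl_Position = ftransform();
}
```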