If you’re playing around with deferred rendering or post-process techniques, you’ve probably come across the idea that you can recover camera-space surface normals from the camera-space position like so:

    vec3 reconstructCameraSpaceFaceNormal(vec3 CameraSpacePosition) {
        return normalize(cross(dFdy(CameraSpacePosition), dFdx(CameraSpacePosition)));
    }

where CameraSpacePosition is the camera-space position of the fragment.

What you might not realise is that you’re accidentally setting yourself up for confusion depending on your graphics driver. For the longest time, I was using this technique to implement SSAO without having to bother with storing screen-space normals. After fiddling about a bit I noticed that on my desktop with an NVIDIA GTX 680 everything looked OK, while on my laptop with Intel HD integrated graphics everything looked inverted. I then tried reversing the normal I was getting out of this function. Success! The laptop is now displaying correctly. Failure! The desktop is now screwed up.

The above was all under OS X 10.8; oddly enough, at some point using the NVIDIA web drivers for OS X fixed the issue so that both computers agreed, which points to the graphics driver being responsible for the difference. Then I upgraded to Mavericks and the problem was back.

I’d guess that anyone who actually understands the maths would have worked it out a lot faster than I did. The issue is with the cross product: for any two non-parallel vectors there are two directions perpendicular to both, one the exact opposite of the other, and which one you get depends on the order and orientation of the inputs.

Basically your driver will pick whichever one it feels like. If you feed the wrong normal to an ambient occlusion shader you will immediately get inappropriately occluded pixels, because SSAO works by offsetting sample points along the normal and checking whether they end up behind the depth buffer value (i.e. inside your scenery), and that will almost always be true of samples offset along a normal facing away from the camera on a face which is pointing towards it.

What you need, therefore, is a method for determining which of the two normals you actually wanted in the first place. If you’re trying to get a normal appropriate for SSAO this is surprisingly easy. You’ve already got a view ray from the camera to the fragment you’re inspecting, because that’s how you reconstructed the normal in the first place. If you take the dot product of the reconstructed normal and that view vector, you’d expect the result to be positive if the normal is pointed in generally the same direction as the view ray and negative if it is pointed in generally the opposite direction. If the normal is pointed in generally the same direction then you want the exact opposite of that normal.

You can therefore cheat by getting the opposite of the sign of the dot product:

    // both normal and CameraSpacePosition are vec3
    normal *= -sign(dot(normal, CameraSpacePosition));

If the dot product is positive then the result of the “sign” function is positive and the normal becomes -normal; if the dot product is negative then the result of the “sign” function will be negative and therefore the normal will be unchanged.

Note that I didn’t write this as:

    if (dot(normal, CameraSpacePosition) > 0.0) {
        normal = -normal;
    }

because it’s still supposed to be good practice to avoid branching in your shaders if at all possible.