Sunlight volumes and scattering

(NB. as with much of my stuff, every part of this that I wrote myself is a dirty hack.)

Light scattering (“god rays”) is a beautiful effect; in fact it’s so beautiful, it’s one of the rare bits of eye-candy that everybody bitches about (OMG so much GRAPHICS) but everybody also secretly loves. Good examples of the technique performed as a post-process are easy to find around the web.

Here’s what it looks like in nature:

Actual real sunset. Note the “rays” visible below the sun.

And here’s Crytek’s approach:

MAXIMUM SHINY

The implementation above is performed as a sort of radial blur outward from the screen-space position of the light source, masked by a representation of any objects occluding the light – trees, landscape, buildings, character models, etc. Apart from the awkward fact that this process is all backwards in shader language (because you can only influence the fragment you’re currently drawing, you’re marching from the point on screen towards the light source, not the other way round), this is pretty easy to implement (there’s a sketch of the core loop after the two downsides below). There are a couple of downsides, one of which is very minor and the other of which starts to get on the nerves:

1) This isn’t even slightly true-to-life in terms of physical parameters – it’s an “art” effect, and you tweak it until it looks good. The results won’t be affected by atmospheric conditions, such as fog. This is the very minor downside.

2) This effect only works when the light source is on the screen. You have to fade it out whenever the light source isn’t within the camera’s field of view, or you get “opposite” light shafts such that, for example, the sun is suddenly now setting in the east. In addition, you can’t have any light shafts entering the frame from the side – so, if you look down at your feet, the shafts are suddenly gone. This is the major bugbear.
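
For reference, the core of that screen-space approach is a fragment shader along these lines (a rough sketch, loosely after the well-known GPU Gems 3 chapter; the uniform names and decay constants here are made up and would be tuned by eye):

#version 330

uniform sampler2D occlusionBuffer;  // light source drawn bright, occluders drawn black
uniform vec2 lightScreenPos;        // light position in [0,1] texture space

in vec2 texCoord;
out vec4 fragColour;

const int NUM_SAMPLES = 64;

void main() {
    vec2 delta = (texCoord - lightScreenPos) / float(NUM_SAMPLES);
    vec2 coord = texCoord;
    float decay = 1.0;
    vec3 light = vec3(0.0);
    for (int i = 0; i < NUM_SAMPLES; i++) {
        coord -= delta;  // march from this fragment towards the light source
        light += texture(occlusionBuffer, coord).rgb * decay;
        decay *= 0.95;   // each successive sample contributes a little less
    }
    fragColour = vec4(light / float(NUM_SAMPLES), 1.0);
}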

In order to do a proper “light shafts” effect, then, we need to know where in our scene light can get to, and how much of that light can make it to the camera. Fortunately, the first question can be answered easily if we’re set up to cast shadows from the main light – the shadowmap contains the information needed. Unfortunately, the answer to the second question is much more complicated than it sounds. To get round this problem, we’re going to need to find a way to integrate all of the light being scattered in along a ray from the camera to each visible point.

Yes, folks, we’re going to need to write a ray tracer. It’s OK though, we don’t actually need to write a good one.

Here’s how it works: the whole thing is a post-process and requires the depth buffer from the main render of the scene (i.e. what the camera sees) and a shadowmap projected from the main light. Because in this situation the main light is sunlight (and hence the light rays are parallel), and because it makes life easier, the shadowmap is an orthographic projection. We need something to draw to, which will be a framebuffer with a texture bound as a render target. This target needs the same proportions as the main render, but because we’re going to trace rays, and tracing rays is really expensive per pixel, it’s a good idea to make this buffer smaller; one quarter of the size of the main framebuffer works pretty well.

We will also need a method for reconstructing position from depth; I use the inverse of the matrix used to get from world space to screen space, e.g. gluInvertMatrix(projectionMatrix * cameraRotationMatrix * worldToCameraMatrix). This method also requires a uniform which contains the dimensions of the viewport. You’ll be pleased to know that you get all of the other transformations for free as a result of making the shadowmap in the first place.
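
As a sketch, the reconstruction looks like this in GLSL (the uniform names here are placeholders, not the ones in my full shader below):

// Sketch: reconstruct world-space position from a (non-linearised) depth sample.
// invWorldToScreen = gluInvertMatrix(projectionMatrix * cameraRotationMatrix * worldToCameraMatrix)
uniform sampler2D depthBuffer;
uniform mat4 invWorldToScreen;
uniform vec2 viewportDimensions;

vec3 worldPositionFromDepth() {
    vec2 uv = gl_FragCoord.xy / viewportDimensions;
    vec3 ndc = vec3(uv, texture(depthBuffer, uv).r) * 2.0 - 1.0; // [0,1] -> [-1,1]
    vec4 world = invWorldToScreen * vec4(ndc, 1.0);
    return world.xyz / world.w; // undo the perspective divide
}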

This technique is heavily dependent on the quality of the shadow map available. Using a shadow map cascade to increase the resolution of shadow information near the camera makes things more complicated, but also makes the effect work much better.

Finally, we’re going to be rendering a single fullscreen quad. This is pretty easy – all you need is a vertex buffer object containing two triangles, which make up a quad with points (-1.0, 1.0, 0.0), (1.0, 1.0, 0.0), (1.0, -1.0, 0.0), (-1.0, -1.0, 0.0). If you draw this quad without applying any transformations, then it will take up your entire viewport (i.e. in the vertex shader gl_Position = vec4(position.xyz, 1.0);), so the vertex shader is basically a passthrough.
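
In GLSL, the whole vertex shader is something like this (the texCoord output is a convenience for sampling the depth buffer later):

#version 330

in vec3 position;   // the quad corners listed above
out vec2 texCoord;  // used later to sample the depth buffer

void main() {
    texCoord = position.xy * 0.5 + 0.5; // map [-1,1] to [0,1]
    gl_Position = vec4(position, 1.0);  // no transformation: the quad fills the viewport
}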

The fragment shader is where we actually do the raytracing. Here’s the process for each fragment:

1) Reconstruct the world space position from the depth buffer. This will allow us to tell when the ray we’re going to shoot from our camera to the fragment hits world geometry (my skybox is at infinity).

It’s worth noting at this point that depth buffers can sometimes be a pain in the arse. In this case we want to measure light scattered in from a particular distance, but just stepping through the depth buffer (from 0 to 1) isn’t going to work: for starters, most of the depth precision is concentrated next to the camera (i.e. most things in the scene will be at > 0.9, which is why if you try to visualise a depth buffer you get a white-out). On the other hand, we want to know when our ray hits geometry, so that we can stop evaluating it. To make life bearable, I transform everything into world space.
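
For illustration, getting a linear view-space distance back out of a standard depth-buffer sample looks something like this (near and far are your camera’s clip planes; this is a sketch, not part of my shader):

//sketch: turn a raw depth-buffer sample into a linear view-space distance
float linearEyeDepth(float depthSample, float near, float far) {
    float z_ndc = depthSample * 2.0 - 1.0; // [0,1] -> [-1,1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near));
}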

2) Get the direction in world space from the camera to the fragment; either subtract your camera’s world-space position from the fragment’s position and normalise, or use inverse(projectionMatrix * cameraRotationMatrix) * vec4(gl_FragCoord.xy / screenDimensions.xy * 2.0 - 1.0, 1.0, 1.0) and normalise that. I stole the latter trick from Florian Bösch, who is much cleverer than me.

3) Sample along your ray in a loop: 2000 iterations if you’re fancy, 500 if you’re not. You don’t need to make your steps particularly tiny, because everything ends up blended and glowy when finally composited and errors won’t really be visible, but you might lose out on fine detail (the more common screen-space approach is generally cheaper at capturing very fine details). Compromise on a step size that covers enough of your view distance while still catching fine detail; my world units are about a metre, and a step of around half a unit generally works OK.

4) Use an inscattering equation to determine the maximum amount of light that could be scattered towards the camera from that point in world space. I stole mine from Miles Macklin, who is also much cleverer than me. This requires the direction in which the light rays are travelling (NOT the direction to the light, as in normal lighting models) in world space, the eye direction (see above), and the distance along the ray. You can multiply the result by a function which represents your atmosphere, e.g. the absorption spectrum of nitrogen, if you like.

5) Determine whether the sampled point is actually contributing light. This is just a case of shadow mapping the worldspace coordinates of the point – i.e. we are shadow mapping the air. You will find that snapping the lookup to the exact middle of the nearest shadow map texel, i.e. floor(shadow_coord.xy * shadow_map_dimensions) / shadow_map_dimensions + 0.5 / shadow_map_dimensions, will reduce the amount of flickering you get.

6) If the point is not in shadow, increment the accumulated light by the amount of in-scattered light divided by the length of the ray sampled (number of steps * size of step).

7) Exit the loop when the ray hits the geometry – otherwise, the rays hitting the ground keep going and keep sampling shadow, so the ground gets very dark.

8) Optionally, add in a shadow contribution from the geometry once you hit it. This will “cap” your light rays visually so that you get bright spots where the rays hit, say, the ground.
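
A sketch of that, reusing the shadowCoordinateToUse helper from my full shader below (geometryWeight is a made-up tuning constant):

//after the marching loop: shadow-test the surface the ray actually hit
vec4 hitCoord = shadowCoordinateToUse(fragmentWorldPosition);
hitCoord /= hitCoord.w;
float hitShadowDepth = texture(shadowMap, hitCoord.xy).r;
//step() gives 1.0 when the hit point is lit, 0.0 when it is in shadow
float geometryWeight = 0.05; // made-up tuning constant
outputColour += vec3(step(hitCoord.z, hitShadowDepth + 0.0005)) * geometryWeight;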

9) Composite your lighting buffer in with your main render. I have found that (sunlightValues * 2.0 - 1.0) * 0.1, added to your main render, often looks rather nice.
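
In GLSL, that composite is a one-liner; sceneTexture and sunlightBuffer are illustrative names for the main render and the quarter-size lighting buffer (hardware bilinear filtering upsamples the latter for free):

#version 330
uniform sampler2D sceneTexture;   // the main render
uniform sampler2D sunlightBuffer; // the quarter-size lighting buffer
in vec2 texCoord;
out vec4 fragColour;

void main() {
    vec3 scene = texture(sceneTexture, texCoord).rgb;
    vec3 rays  = texture(sunlightBuffer, texCoord).rgb;
    fragColour = vec4(scene + (rays * 2.0 - 1.0) * 0.1, 1.0);
}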

Here’s my complete fragment shader. It’s very rough, but hopefully makes sense.

 
#version 330
//precision highp float;
 
uniform sampler2D shadowMap;
uniform sampler2D depthBuffer;
uniform sampler2D worldSpaceNormalsTexture;
uniform sampler2D waterHeightsTexture;
 
uniform vec2 viewport_dimensions;
uniform mat4 inv_proj;
uniform mat4 inv_proj_cam_rot;
uniform mat4 inv_camera;
uniform vec4 atmosphericAbsorptionProfile;
uniform mat4 lightToClipOne;
uniform mat4 lightToClipTwo;
uniform mat4 lightToClipThree;
uniform vec3 world_space_camera;
uniform vec3 world_space_sunlight;
 
uniform mat4 projectionmatrix;
uniform mat4 modeltoworld;
uniform mat4 worldToCamera;
 
in vec2 texCoord;
in vec2 random_noise_TexCoords;
 
out vec4 fragColour;
 
const int numberOfSteps = 600;
const float world_space_ray_unit = 0.8;
float numberOfStepsDividedByRayUnit = float(numberOfSteps) / world_space_ray_unit;
 
//get ray from camera to fragment in world space
vec3 get_world_normal(){
    vec2 frag_coord = texCoord;//gl_FragCoord.xy/viewport_dimensions;
    frag_coord = (frag_coord-0.5)*2.0;
    vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
    vec3 eye_normal = normalize((inv_camera * device_normal).xyz);
    return eye_normal;
}
 
//get ray from camera to fragment in camera space (i.e. camera facing=(0,0,0), pre-projection)
vec3 get_camera_space_position(){
    vec2 frag_coord = texCoord;
    frag_coord = (frag_coord-0.5)*2.0;
    float depthValue = texture(depthBuffer, texCoord).g;
    float z_n = (2.0 * depthValue - 1.0);
    vec4 device_normal = vec4(frag_coord, z_n, 1.0);
    vec4 eye_normal = ((inv_proj * device_normal));
    return eye_normal.xyz/eye_normal.w;
}
 
//choose shadow map from cascade of 4 maps tiled vertically in one texture
vec4 shadowCoordinateToUse(in vec3 worldSpaceCoords) {
    vec4 ShadowCoordPostW = lightToClipOne * vec4(worldSpaceCoords.xyz, 1.0);
    float shadow_map_selector = 0.0f;
 
    vec2 biasedShadowDimensions = ShadowCoordPostW.xy * 2.0 - 1.0;
    float maxShadowCoordDimension = max(abs(biasedShadowDimensions.x), abs(biasedShadowDimensions.y));
 
    if (maxShadowCoordDimension >= 0.98f) {
        ShadowCoordPostW = lightToClipTwo * vec4(worldSpaceCoords.xyz, 1.0);
        shadow_map_selector = 0.25;
 
        biasedShadowDimensions = ShadowCoordPostW.xy * 2.0 - 1.0;
        maxShadowCoordDimension = max(abs(biasedShadowDimensions.x), abs(biasedShadowDimensions.y));
 
        if (maxShadowCoordDimension >= 0.98f) {
            ShadowCoordPostW = lightToClipThree * vec4(worldSpaceCoords.xyz, 1.0);
            shadow_map_selector = 0.5;
        }
    }
 
//bias shadowmap coordinate to centre of nearest texel - reduces flicker
//1024.0 = dimensions of shadowmap texture -> can make adjustable with uniform
    ShadowCoordPostW.xy  = floor(ShadowCoordPostW.xy * 1024.0) / 1024.0 + (0.5/1024.0);
    ShadowCoordPostW.y = (ShadowCoordPostW.y/4.0f)+shadow_map_selector;
 
    return ShadowCoordPostW;
}
 
//inscattered light; not adjusted for nature of media
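//this is the closed form of the inverse-square inscattering integral:
//  integral from 0 to d of dt / (t*t + 2.0*b*t + c)
//    = s * (atan((d + b) * s) - atan(b * s)),  with s = 1.0/sqrt(c - b*b)
//where b = dot(cameraVector, q) and c = dot(q, q) as below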
float inScatter (float d, vec3 lightVector, vec3 cameraVector) {
    vec3 cameraFacing = cameraVector;
    vec3 q = -lightVector;
 
    float b = dot(cameraFacing, q);
    float c = dot(q, q);
 
    float s = 1.0 / sqrt(c - b*b);
    float l = s * (atan((d + b) * s) - atan(b*s));
    return l;
}
 
void main()
{
    //vector in which to accumulate light
    vec3 outputColour = vec3(1.0);
 
    float depthValue = texture(depthBuffer, texCoord).g;
    float z_n = (2.0 * depthValue - 1.0);
    vec4 worldPositionFromDepthBuffer = inv_proj_cam_rot * vec4(texCoord * 2.0 - 1.0, z_n, 1.0);
    vec3 fragmentWorldPosition = worldPositionFromDepthBuffer.xyz/worldPositionFromDepthBuffer.w;
 
    vec3 eyeDirection = get_world_normal(); //eye direction in world space
    vec3 sampleWorldSpaceCoords = world_space_camera; //start the march at the camera
 
    float previousResult = 0.0;
 
    int i = 0;
 
//loop until reached full distance or until hit geometry
    while (i < numberOfSteps && length(fragmentWorldPosition - sampleWorldSpaceCoords) > 2.0) {
 
        sampleWorldSpaceCoords = (world_space_camera + eyeDirection * i * world_space_ray_unit);
 
        vec4 shadowCoord = shadowCoordinateToUse(sampleWorldSpaceCoords.xyz);
        shadowCoord/=shadowCoord.w;
 
        float lengthInscattered = (i * world_space_ray_unit);
        float inscatteredLight = inScatter(lengthInscattered, normalize(world_space_sunlight), eyeDirection);
 
//determine if sampled point is in shadow; clamp to max possible inscattered light.
        float shadowMapLength = texture(shadowMap, shadowCoord.xy).r;
        float additive = clamp(sign(shadowMapLength + 0.0005 - shadowCoord.z),  0.0, inscatteredLight); 
 
//subtracting previous result gives only light scattered by the current sample   
//instead of light scattered in by sample + distance to sample     
        outputColour += vec3(additive - clamp(previousResult, 0.0, 1.0))/numberOfStepsDividedByRayUnit;
        previousResult = inscatteredLight;
        i++;
    }
    fragColour = vec4(outputColour * outputColour, 1.0);
}

(NB: updated 04/07/2014 – a better shader which removes the need to adjust for rays that travel through the ground, by sampling a given length in the depth buffer rather than the entire distance to each sampled point.)

Things to note: I use four shadow maps in a cascade, rendered one above the other in a 1 x 4 texture. In this case, world depth is in the green channel of the depth texture because reasons, but obviously this can be changed to point at the correct representation. This is a regular depth buffer, NOT linearised depth. inv_proj_cam_rot = gluInvertMatrix(projectionMatrix * cameraRotation * cameraPosition); inv_camera = gluInvertMatrix(projectionMatrix * cameraRotation). The code for Variance Shadow Mapping was adapted from Fabien Sanglard’s tutorial.

Results:

Here’s a view from the lighting buffer looking through some leaves:
[Image: Light_scattered_between_leaves]
And here’s the same shot with the lighting composited into the rest of the scene:
[Image: Light_scattered_between_leaves_composited]
You can still see rays when you look away from the light:
[Image: Looking_away_from_light]
View perpendicular to light ray direction – still looks reasonable:
[Image: Looking_through_rays]
And here are two shots of “sunset”. Note that if you look carefully, you can see a light glow in the valley to the left – that’s inscattered light from behind the hill to the right; this effect works fine even without a visible light source, and can accumulate light from “empty air”.
[Image: Light_scattered_at_sunset]
[Image: Light_scattered_at_sunset_composited]
