Generalized Trapezoidal Shadow Mapping for Infinite Directional Lighting

Graham Aldridge, Department of Computer Science, University of Canterbury, New Zealand









This document describes a method for robust, high-detail shadow mapping of directional lights in large environments. Building on existing trapezoidal shadow mapping, it overcomes the problems that arise when the view direction is close to parallel to the light direction, while still using a single shadow depth texture.


This technique requires hardware support for pixel shader 2.0 or better (it may also be possible with pixel shader 1.4).


Trapezoidal Shadow Mapping


Reference: Anti-aliasing and Continuity with Trapezoidal Shadow Maps [Tobias Martin, Tiow-Seng Tan]

Reference: Light Space Optimized Shadow Maps [Daniel Scherzer]


The first paper above describes a technique to compute a trapezoidal projection matrix: a set of four vertices forming a trapezoid is mapped to the unit square. The following image from the paper demonstrates trapezoidal projection:

Trapezoidal projection



When applied to the view matrix of a shadow map, detail may be biased closer to the view point. The shadow map will more tightly enclose the silhouette of the camera view frustum, allowing more efficient use of the shadow map.
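As a sketch of the underlying mapping (not the paper's derivation, which builds the matrix from a sequence of simpler transforms), any convex quadrilateral such as a bounding trapezoid can be mapped onto the unit square with a 2D projective transform. The corner ordering and helper names below are illustrative:

```python
# Sketch: map a convex quadrilateral (e.g. a bounding trapezoid in light
# space) onto the unit square with a 3x3 projective transform. Corners are
# listed in the order that maps to (0,0), (1,0), (1,1), (0,1).

def square_to_quad(quad):
    # Heckbert-style unit-square-to-quad projective mapping.
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    dx1, dy1 = x1 - x2, y1 - y2
    dx2, dy2 = x3 - x2, y3 - y2
    det = dx1 * dy2 - dy1 * dx2
    g = (sx * dy2 - sy * dx2) / det
    h = (dx1 * sy - dy1 * sx) / det
    return [[x1 - x0 + g * x1, x3 - x0 + h * x3, x0],
            [y1 - y0 + g * y1, y3 - y0 + h * y3, y0],
            [g, h, 1.0]]

def adjugate(m):
    # The adjugate inverts a homogeneous transform up to scale, which is
    # all a projective mapping needs.
    (a, b, c), (d, e, f), (g, h, i) = m
    return [[e * i - f * h, c * h - b * i, b * f - c * e],
            [f * g - d * i, a * i - c * g, c * d - a * f],
            [d * h - e * g, b * g - a * h, a * e - b * d]]

def apply(m, x, y):
    u, v, w = (row[0] * x + row[1] * y + row[2] for row in m)
    return u / w, v / w

# A trapezoid that is wide at one end and narrow at the other:
trap = [(-2.0, 0.0), (2.0, 0.0), (1.0, 1.0), (-1.0, 1.0)]
quad_to_square = adjugate(square_to_quad(trap))
```

Points near the narrow end of the trapezoid occupy a larger share of the unit square after the mapping, which is exactly the detail biasing described above.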


Example camera view frustum in light space:

Example view frustum


Bounding Trapezoid (purple):

Example bounding trapezoid



Parallel Light and View Direction Problem


The advantages of trapezoidal shadow mapping disappear as the angle between the light direction and the camera view direction becomes close to, or less than, the camera's field-of-view angle. At this point the bounding trapezoid degenerates into a square, and near and far detail is no longer biased. Furthermore, it is difficult to achieve a smooth transition from high to low foreground detail as the light and view directions become more parallel. Once the trapezoidal projection no longer takes effect, if the far view distance is large, it is common for each pixel of the shadow map to cover a large proportion of the foreground. The end result is a transition from crisp, well balanced shadow map detail to extremely unbalanced detail within a few degrees of camera rotation.


Example camera view frustums in light space (top row: camera frustums in light space; middle row: bounding trapezoids in light space; bottom row: rendered images):

Example view frustum Example view frustum Example view frustum
Example bounding trapezoid Example bounding trapezoid Example bounding trapezoid
Example render Example render Example render

Solving for Parallel Light and View Directions


Existing Algorithms:


The simplest solution to this problem is to use multiple shadow maps. With five shadow maps (one trapezoidal shadow map for each side of the camera view frustum, plus one for the near plane, projected in light space) the problem can be overcome:


Example image: Five shadow maps

Five shadow maps


This algorithm can be optimized slightly: the centre shadow map covering the near clip plane can be removed, and each of the four remaining shadow maps extended to cover half the remaining gap:


Example image: Four shadow maps

Four shadow maps


However, both algorithms share the same disadvantages. More textures are used, and geometry must be rendered in multiple passes, once for each shadow map. Furthermore, shadow map boundaries may produce artefacts. Geometry rendering must also be limited to the current shadow map, using alpha testing, shader texkill or hardware clipping, each of which may produce artefacts along shadow map boundaries. This approach also may not work for alpha-transparent geometry, as such geometry can no longer be correctly z-sorted.


Proposed Solution and Implementation:


The proposed solution is a modification of the four shadow map algorithm above. Instead of using four separate shadow maps, the single shadow map used will have four separate viewports, with each of the four shadow maps taking up one quarter of the new shadow map.


Example Image: Four shadow maps encoded into a single shadow map using viewports.

Four shadow maps in one tex
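To pack the four trapezoidal projections into one texture, each per-viewport texture matrix can fold in a scale and offset that places the unit square into the corresponding quarter of the map. A minimal sketch of that scale/bias step (which quarter holds which viewport is an assumption here):

```python
# Sketch: a scale/bias that remaps unit-square texture coordinates into one
# quarter of the packed shadow map. Composing this with a viewport's
# trapezoidal projection would give that viewport's full texture matrix.

def quarter_bias(ox, oy):
    # ox, oy in {0.0, 0.5} choose which quarter of the texture to target.
    return [[0.5, 0.0, ox],
            [0.0, 0.5, oy],
            [0.0, 0.0, 1.0]]

def transform(m, u, v):
    return (m[0][0] * u + m[0][1] * v + m[0][2],
            m[1][0] * u + m[1][1] * v + m[1][2])

# One possible assignment of viewports to quarters:
quarters = {"A": quarter_bias(0.0, 0.0), "B": quarter_bias(0.5, 0.0),
            "C": quarter_bias(0.0, 0.5), "D": quarter_bias(0.5, 0.5)}
```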


Selection of the appropriate corner of the shadow map must be done per pixel. Each corner requires its own texture projection matrix. Therefore the vertex shader must output four sets of 4-component texture coordinates to the pixel shader.

Further output from the vertex shader is required for the pixel shader to choose the appropriate texture matrix, and therefore sample the appropriate viewport of the shadow map. In light space, the areas where each texture matrix is used correspond to the trapezoids of each viewport of the shadow map; the approximate boundaries between these areas are therefore the edges of the sides of the camera view frustum:


Example Image: View frustum in light space, with boundary highlighting

Four shadow maps in one tex boundary highlight


These boundaries are projected into world space as planes; as there are four boundaries, there are four planes. Each plane is a four-component vector, so the four planes can be stored in a 4x4 matrix. Multiplying the world space vertex coordinate (x,y,z,w) by this matrix results in four values representing the signed distance to each of the four planes. This is the fifth and final output required from the vertex shader to the pixel shader.
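A sketch of this packing, with hypothetical plane values; each row of the selection matrix holds one plane's (a,b,c,d) coefficients, so a single matrix multiply yields all four signed distances:

```python
# Sketch: each row of the selection matrix is one boundary plane (a, b, c, d)
# with plane equation a*x + b*y + c*z + d = 0. For a unit-length normal,
# dot(plane, (x, y, z, 1)) is the signed distance to the plane, so one
# matrix multiply produces all four distances.

def plane_from_point_normal(px, py, pz, nx, ny, nz):
    # d = -dot(n, p) places the plane through point p with normal n.
    return (nx, ny, nz, -(nx * px + ny * py + nz * pz))

def signed_distances(select_matrix, x, y, z):
    # Equivalent to mul(ShadowMapMatrixSelect, position) in the vertex shader.
    return [a * x + b * y + c * z + d for (a, b, c, d) in select_matrix]

# Hypothetical boundary planes, here all passing through the origin:
ShadowMapMatrixSelect = [
    plane_from_point_normal(0.0, 0.0, 0.0,  1.0, 0.0, 0.0),   # plane 0
    plane_from_point_normal(0.0, 0.0, 0.0,  0.0, 0.0, 1.0),   # plane 1
    plane_from_point_normal(0.0, 0.0, 0.0, -1.0, 0.0, 0.0),   # plane 2
    plane_from_point_normal(0.0, 0.0, 0.0,  0.0, 0.0, -1.0),  # plane 3
]
```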


Example Image: Boundaries projected into world space

Planes in world space


Vertex Shader:

OUT.shadowTex1      = mul(ShadowMapMatrix1, IN.position);
OUT.shadowTex2      = mul(ShadowMapMatrix2, IN.position);
OUT.shadowTex3      = mul(ShadowMapMatrix3, IN.position);
OUT.shadowTex4      = mul(ShadowMapMatrix4, IN.position);
OUT.shadowTexSelect = mul(ShadowMapMatrixSelect, IN.position);


Pixel Shader:


return shadowTexSelect;


The vertex shader is complete, but the pixel shader still requires work.


As can be seen in the above picture, the majority of the geometry being rendered is yellow (1,1,0,0). This indicates that these pixels are above planes zero and one, and below planes two and three (negative values are clamped to zero in the output colour). It can also be seen that there are more than five areas of colour in the image. The following modification to the pixel shader helps illustrate this point:


Pixel Shader:


return sign(shadowTexSelect);


Planes in world space



In this image, the three planes corresponding to the red, green and blue channels can clearly be seen (the fourth, alpha, plane cannot).

As each colour channel (R,G,B,A) will represent one shadow map viewport, each pixel in the above image must be either red, green, blue or alpha, not a combination such as yellow. Therefore, consider the light space view of the bounding planes:


Four shadow maps in one tex boundary highlight


For this example, in world space, a red pixel will correspond to shadow map viewport A and be greater than boundary plane 0 and less than boundary plane 1:



Greater than plane

Less than plane
Therefore, for area A, the desired texture selection value is (1,0,0,0), red. Here plane 0 (shadowTexSelect.x) must be greater than 0, and plane 1 (shadowTexSelect.y) must be less than 0; no other pixels will satisfy this requirement. The same holds for the three other areas, so the requirement that pixels be either red, green, blue or alpha is satisfied by this part of the algorithm. Implementing this in the pixel shader can be done with a sign, a swizzle, a multiply and two saturates:


Pixel Shader:


shadowTexSelect = sign(shadowTexSelect);

return saturate(shadowTexSelect * saturate(-shadowTexSelect.yzwx));


This shader produces the following result:


Planes in world space


It can be seen that red, green, blue and alpha each correctly represent the area covered by the corresponding shadow map viewport. For area A, sign(shadowTexSelect) is (1,-1,z,w), so for shadowTexSelect.x the shader evaluates saturate( 1 * saturate( - -1) ), giving 1. This is the only combination of x and y that can produce an output of (1,y,z,w), and it occurs only in area A. The same applies to the three other areas.
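The selection arithmetic can also be checked outside the shader. The following sketch mirrors the sign/swizzle/saturate sequence per channel, and shows that it produces a single active channel for a given set of plane distances:

```python
# Sketch: scalar emulation of the per-pixel viewport selection. Channel i
# ends up 1 only when the pixel is above plane i and below plane i+1
# (wrapping around), i.e. only in area i.

def saturate(x):
    return min(max(x, 0.0), 1.0)

def sign(x):
    return (x > 0) - (x < 0)

def select(distances):
    sel = [sign(d) for d in distances]     # sign(shadowTexSelect)
    yzwx = sel[1:] + sel[:1]               # the .yzwx swizzle
    return [saturate(a * saturate(-b)) for a, b in zip(sel, yzwx)]
```

For example, distances (0.3, 0.8, -0.2, -0.9), above planes 0 and 1 and below planes 2 and 3, select only the second viewport.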


Modifying the pixel shader to display the shadow map instead of the colour output is simple: as red represents area A, and therefore the shadowTex1 texture coordinate, shadowTex1 is multiplied by shadowTexSelect.rrrr (and similarly for the other three coordinates).


Pixel Shader:


shadowTexSelect = sign(shadowTexSelect);
shadowTexSelect = saturate(shadowTexSelect * saturate(-shadowTexSelect.yzwx));

return tex2Dproj(shadowMap, shadowTex1 * shadowTexSelect.rrrr +
                            shadowTex2 * shadowTexSelect.gggg +
                            shadowTex3 * shadowTexSelect.bbbb +
                            shadowTex4 * shadowTexSelect.aaaa);


This shader produces the following result:

Shadows in world space


And things look even better when combined with normal rendering:

Shadows in world space





There are two simple optimizations for this algorithm.


The first is straightforward: only use the algorithm where a normal single-map trapezoidal shadow map no longer works. This can be accomplished with a different shader, or by setting the boundary planes to values that will always pass or fail (for example, the near clip plane), so that every pixel selects the same viewport.
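One concrete way to express "always pass or fail" planes (the values here are illustrative, assuming the sign/saturate selection described earlier): give plane 0 a constant positive distance and the remaining planes a constant negative distance, so every pixel selects the first viewport.

```python
# Sketch: degenerate boundary planes that make every pixel select the first
# viewport, reducing the shader to a standard single-map trapezoidal lookup.

def saturate(x):
    return min(max(x, 0.0), 1.0)

def sign(x):
    return (x > 0) - (x < 0)

def select(distances):
    sel = [sign(d) for d in distances]
    return [saturate(a * saturate(-b)) for a, b in zip(sel, sel[1:] + sel[:1])]

# Plane 0 reports +1 everywhere (always passes); the rest report -1
# everywhere (always fail), so area A covers all of space.
planes = [(0.0, 0.0, 0.0, 1.0),
          (0.0, 0.0, 0.0, -1.0),
          (0.0, 0.0, 0.0, -1.0),
          (0.0, 0.0, 0.0, -1.0)]

def distances(x, y, z):
    return [a * x + b * y + c * z + d for (a, b, c, d) in planes]
```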


The second optimization is more subtle. As the view direction diverges from the light direction, the areas of the shadow map viewports become unbalanced. In the following example, area A is significantly smaller than C, while areas B and D are horizontally or vertically stretched:




The optimization for this case is to resize the viewports within the shadow map. If the viewport for A is made smaller, the viewport for C becomes larger, while B and D become either wider or taller, balancing detail across the entire map.
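One way to express such resizing is to derive all four viewport rectangles from a single split point inside the texture; this layout is an assumption on my part, but it captures the smaller-A/larger-C behaviour described above:

```python
# Sketch: derive the four viewport rectangles (left, top, width, height)
# from one split point (sx, sy) in the shadow map. (0.5, 0.5) gives four
# equal quarters; moving the split shrinks one viewport, grows the opposite
# one, and stretches the other two.

def viewports(sx, sy, size):
    x, y = int(sx * size), int(sy * size)
    return {
        "A": (0, 0, x, y),
        "B": (x, 0, size - x, y),
        "C": (x, y, size - x, size - y),
        "D": (0, y, x, size - y),
    }
```

With a 1024x1024 map and a split at (0.25, 0.25), viewport A is 256x256 while C is 768x768, and the four rectangles still tile the whole texture.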


This algorithm still works correctly at any view angle; these optimizations simply make it considerably more efficient at certain angles.



Problems and Further Work:


One problem that occurs with this algorithm is incompatibility with anisotropic filtering. If anisotropic filtering is enabled, single-pixel-wide lines appear in the shadow along shadow map borders. As yet, no solution to this problem has been found other than disabling anisotropic filtering on the shadow map texture unit.**


Currently, work is underway to perfect the trapezoidal side of the algorithm, to produce the best detail biasing and z-scale balancing. I’m looking at putting together a small library to generate the various matrices easily; however, the trapezoidal algorithm is patent pending, so this may not happen.

** This limitation should be solvable by rotating the individual viewports by multiples of 90 degrees so the geometry at the boundaries matches.
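The footnote's suggested fix could be sketched as rotating a viewport's texture-space coordinates by multiples of 90 degrees about the square's centre, so that texel rows run in the same direction on both sides of a shared border (a sketch only, untested against the actual artefact):

```python
# Sketch: rotate unit-square texture coordinates by k * 90 degrees about the
# square's centre. Each viewport's texture matrix could fold in such a
# rotation so that adjacent viewports line up along their shared border.

def rotate90(u, v, k):
    for _ in range(k % 4):
        u, v = v, 1.0 - u
    return u, v
```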