Subsurface Scattering Implementation

May/01/16

Jump to
Process | Video clip | Links


Process

Overview

This page aims to provide a brief look at the steps necessary to implement any rendering technique you might need to achieve a desired look. It is specific to a project integrating a game engine into Blender Cycles. We will implement Separable SSS by iryoku, but the steps generally apply to any technique. Note that this goes into low-level internals and is not needed for general usage of the engine.

"Separable Subsurface Scattering is a technique that allows to efficiently perform subsurface scattering calculations in screen space in just two passes."
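The "two passes" in that quote are what make the technique cheap. A minimal sketch of the idea (illustrative Python, not the engine's code): a separable 2D blur factors into a horizontal pass followed by a vertical pass, so an NxN kernel costs 2N taps per pixel instead of N*N. SSSS exploits the same factorization for its diffusion profile.

```python
# Illustrative sketch: a separable blur as two 1D passes.

def blur_1d(img, kernel, direction):
    """Apply a 1D kernel along direction (1,0)=horizontal or (0,1)=vertical."""
    h, w = len(img), len(img[0])
    r = len(kernel) // 2
    dx, dy = direction
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i, k in enumerate(kernel):
                sx = min(max(x + (i - r) * dx, 0), w - 1)  # clamp at edges
                sy = min(max(y + (i - r) * dy, 0), h - 1)
                acc += k * img[sy][sx]
            out[y][x] = acc
    return out

kernel = [0.25, 0.5, 0.25]           # tiny normalized Gaussian
image = [[0.0] * 5 for _ in range(5)]
image[2][2] = 1.0                     # a single lit pixel

# Horizontal pass, then vertical pass over its result:
separable = blur_1d(blur_1d(image, kernel, (1, 0)), kernel, (0, 1))

# The two 1D passes reproduce the full 2D outer-product kernel:
assert abs(separable[2][2] - 0.25) < 1e-9   # 0.5 * 0.5
assert abs(separable[2][1] - 0.125) < 1e-9  # 0.5 * 0.25
```

The real SSSS kernel is a sum of Gaussians weighted to approximate a skin diffusion profile, but the two-pass structure is exactly this.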

Obviously, the first thing to do is study the paper, or lay out the technique you want to implement yourself. Once we are done with the theory, we jump into the implementation.

Importing geometry data

We begin by importing a head scan, which will come in handy for testing the effect. This is a regular import procedure in Blender.

Setting up material

A regular PBR material is set up using the base color, roughness and normal maps provided with the model. This time it is also mixed with a Subsurface Scattering node. Right now the setup is very, very simplistic, as you can see. What happens behind the scenes is that the node is approximated using the SSSS technique, so that it runs in real time.

Since we want to process subsurface scattering only for materials that actually contain it, instead of applying it to the whole screen, a stencil mask is set automatically for those materials. The shaders then do the work only for the marked pixels in screen space, which keeps the effect fast. For your own shaders, you can also set this mask manually; it is exposed in the material properties.
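The masking idea can be sketched as follows (hypothetical names, not the engine's API): the SSS materials write a marker bit into a stencil buffer, and the screen-space pass then touches only the marked pixels, leaving everything else untouched.

```python
# Sketch: run an effect only on stencil-marked pixels.

SSS_BIT = 1  # illustrative marker value

def process_masked(color, stencil, effect):
    """Apply `effect` only to pixels whose stencil bit is set."""
    return [
        [effect(c) if s & SSS_BIT else c for c, s in zip(crow, srow)]
        for crow, srow in zip(color, stencil)
    ]

color   = [[0.2, 0.4, 0.6]]
stencil = [[0,   1,   0  ]]  # only the middle pixel is marked as skin

result = process_masked(color, stencil, lambda c: c * 2.0)
assert result == [[0.2, 0.8, 0.6]]  # untouched, processed, untouched
```

On the GPU the same thing is done with the hardware stencil test, so the unmarked fragments are rejected before the expensive blur shader even runs.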

Render path nodes

The most interesting part is setting up the shaders. We write them in GLSL, along with a small descriptor file that is picked up by the shader-processing script. This file just links the needed uniforms and sets some of the render-pipeline properties. Since the SSSS shader works in two passes, we create two shader contexts - one horizontal and one vertical. The first links the direction uniform to vec2(1.0, 0.0), while the other uses vec2(0.0, 1.0).
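The two contexts can be sketched like this (the field names below are illustrative, not the actual descriptor format): both contexts reference the same blur shader and differ only in the constant bound to the direction uniform.

```python
# Sketch: two shader contexts sharing one shader, differing
# only in the value linked to the direction uniform.

def make_sss_context(name, direction):
    # "sss_pass.frag.glsl" and the field names are assumptions
    # for illustration, not the engine's real descriptor schema.
    return {
        "name": name,
        "fragment_shader": "sss_pass.frag.glsl",
        "links": [{"name": "dir", "value": direction}],
    }

contexts = [
    make_sss_context("sss_pass_x", (1.0, 0.0)),  # horizontal pass
    make_sss_context("sss_pass_y", (0.0, 1.0)),  # vertical pass
]

assert contexts[0]["links"][0]["value"] == (1.0, 0.0)
assert contexts[1]["links"][0]["value"] == (0.0, 1.0)
```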

Now we need to hook the shaders up to the render path, as described in the SSSS paper. Since SSSS is a post-processing technique, we implement it in the deferred render path. As advised, we set it up before tonemapping, which happens in the compositor pass.

The first pass takes the final color framebuffer as input and stores the result in a temporary framebuffer. We go a little further here: to avoid creating an additional buffer and save memory, a framebuffer from the gbuffer that has already been processed is reused. This lets us run the technique with minimal setup, using a gbuffer composed of two four-channel float textures, which at half precision fit into 128 bits - exactly the budget needed to also run reasonably on (newer) mobile GPUs, which have 128 bits of per-pixel on-chip memory.
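The ping-pong between the color target and the reused gbuffer target can be sketched as follows (names are illustrative only): once earlier passes have consumed a gbuffer texture, it is safe to overwrite it as scratch space for the horizontal pass, and the vertical pass writes back into the color target.

```python
# Sketch: reusing an already-consumed gbuffer target as the
# intermediate buffer for a two-pass effect, instead of
# allocating a dedicated temporary.

class Framebuffer:
    def __init__(self, name):
        self.name = name
        self.pixels = None

def run_pass(src, dst, op):
    """Read from src, write the per-pixel result into dst."""
    dst.pixels = [op(p) for p in src.pixels]

color   = Framebuffer("final_color")
scratch = Framebuffer("gbuffer0")  # already read by earlier passes
color.pixels = [1.0, 2.0, 3.0]

run_pass(color, scratch, lambda p: p + 0.5)   # horizontal SSS pass
run_pass(scratch, color, lambda p: p * 2.0)   # vertical SSS pass

assert color.pixels == [3.0, 5.0, 7.0]  # no extra buffer allocated
```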

We then just add two passes referencing the horizontal and vertical SSS shader contexts.

Head rotation

To showcase the effect better, we want to rotate the head with minimum effort. We throw a few logic nodes together and add them as a trait to the head object.
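The behavior those nodes express amounts to a tiny per-frame update, roughly like this (a sketch, not the engine's node or trait API): each frame, the object's yaw is advanced by a constant speed scaled by the frame delta time.

```python
# Sketch: constant-speed rotation around Z, frame-rate independent.
import math

class HeadObject:
    def __init__(self):
        self.rotation_z = 0.0  # radians

def on_update(obj, delta_time, speed=math.pi / 4):
    """Advance yaw by `speed` radians per second."""
    obj.rotation_z = (obj.rotation_z + speed * delta_time) % (2 * math.pi)

head = HeadObject()
for _ in range(60):              # simulate one second at 60 fps
    on_update(head, 1.0 / 60.0)

assert abs(head.rotation_z - math.pi / 4) < 1e-9  # pi/4 rad after 1 s
```

Scaling by delta time is what keeps the rotation speed identical whether the demo runs at 30 or 144 fps.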

Video clip

The result! Plenty of color/effect tweaking could still be done to improve the look further, which would be much better handled by an experienced artist.

In case of any feedback, get in touch!
@luboslenco,