Coloring Raymarched Volumetric Light as it passes through Stained Glass (Part 2)

Valerio Marty
Jul 6, 2021

In the previous tutorial, we learned how to render raymarched volumetric light. In this article, we will pick up where we left off and add support for stained glass.

Shortcomings of this implementation:

  • Previous shortcomings.
  • It is in no way physically accurate. The color-specific raymarching is hugely simplified and has edge cases.
  • Tinting is a screen-space effect: if the glass is not visible on screen, the light will not be colored.
  • If you are viewing stained glass through another panel of stained glass, only the closest one will actually tint the light. Pretty minor.
  • Colors at the furthest edge of the stained glass will be stronger than the rest.
  • It is a performance sink: for each step of the previous raymarch we are doing another raymarch. We can get away with 4–8 samples, but the performance hit is very noticeable.

This tutorial assumes that you are familiar with the previous tutorial; some variables, like _SunDirection, aren’t explained here. If you intend to use this code, or need an explanation of raymarching in general or of my specific implementation, I encourage you to read it first.

The Complete Algorithm

  • For every pixel on the screen, we cast a ray in the direction of the camera view, effectively towards higher depth (but still at the same screen-space coordinate).
  • This ray is divided into a limited number of steps, each one at a different distance from the camera.
  • We ask Unity if that point in space is in light or shadow of the main light, this is possible because we can sample the Shadow Map.
  • If a step is lit, we calculate light scattering.
  • Also, if the step is in light, we cast a second world-space ray from the step position towards the light.
  • This ray is also divided into a limited number of steps, each one not just at a different distance from the camera, but at a different screen-space coordinate from where it started.
  • If the second ray’s depth exceeds the scene depth at that screen position, we can say the ray is touching something; and because this only happens when the step is lit, it can only be a non-shadow-casting object.
  • We calculate the average light scattering in that pixel by dividing the sum of the steps by the number of steps.

Of course, just checking whether the distance from the camera to the ray is greater than the distance from the camera to that pixel is not an accurate intersection check; the ray could simply be behind something from the camera’s perspective without ever intersecting it. That is not a problem, however, because we are going to use custom depth.
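The nested raymarch described above can be sketched as follows. This is a simplified sketch, not the article’s exact code: names such as _ColorSteps, _ColorMaxDistance, and _GlassColorTexture are illustrative assumptions, and clip-space y-flipping is ignored for brevity.

```hlsl
// Sketch of the second, colored raymarch, run for a primary-raymarch
// step position (stepPos) that is known to be lit.
real3 tint = real3(1, 1, 1);
float3 rayStep = -_SunDirection * (_ColorMaxDistance / _ColorSteps);
float3 rayPos = stepPos;

[loop]
for (int j = 0; j < _ColorSteps; j++)
{
    rayPos += rayStep;

    // Project the world-space ray position back into screen space.
    float4 clipPos = TransformWorldToHClip(rayPos);
    float2 uv = clipPos.xy / clipPos.w * 0.5 + 0.5;
    if (any(uv < 0) || any(uv > 1))
        break; // the ray left the screen; tinting is a screen-space effect

    // Sample the custom linear depth and rescale it to world units.
    float customDepth = _CameraDepth2Texture.SampleLevel(
        sampler_CameraDepth2Texture, uv, 0).r * _ProjectionParams.z;

    float distanceToDepthRay = length(rayPos - _WorldSpaceCameraPos);
    if (distanceToDepthRay > customDepth)
    {
        // The ray is behind (or intersecting) stained glass: take its color
        // and stop marching this ray.
        tint = _GlassColorTexture.SampleLevel(
            sampler_GlassColorTexture, uv, 0).rgb;
        break;
    }
}
```

Because the custom depth buffer was cleared to white (the far plane), a ray that never passes behind any stained glass leaves the tint at white, i.e. uncolored.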

Custom Depth

Our algorithm uses depth to check whether we are intersecting non-shadow-casting objects, but transparent materials do not write depth by default. We still do not want to force them to write depth, because that would have unintended results. The solution is to have a custom depth pass where only the objects we want are rendered, which won’t affect regular transparent rendering.

This way we can also filter which objects will actually tint the rays, preventing regular objects from tinting our light through the simplified intersection method; otherwise, every depth discontinuity would tint nearby light.

This script, which is based on the render feature of this Alexander Ameye tutorial, will let us create a ScriptableRendererFeature that will filter the visible objects by LayerMask, override their material and render them into a custom buffer. By using a material that writes custom depth, we have a custom depth buffer.

Some notes:

  • In line 48, I am clearing to white, which is very important for our algorithm.
  • Using R8 encoding is completely fine for this purpose.
  • I am outputting to a global texture called “_CameraDepth2Texture”, which is not a great name.
Make sure that the custom depth feature is above the volumetric light; your settings will look different from mine.

We can’t actually use the same depth pass that regular materials use for our custom depth, or at least I don’t know how to. What I do know is that in the volumetric shader I will calculate distance from the camera, so we can write a depth shader that outputs that as well. Regular depth is encoded with non-linear scaling for better precision near the camera, but to save some calculations in the shader’s nested loop (which would otherwise convert to linear depth hundreds of times), I decided to write linear depth directly. For our purpose, the lower precision won’t have any impact.

The shader we are using is the following:

This shader outputs the distance from the camera divided by the far plane value (_ProjectionParams.z), so it is a value from 0 to 1. Keep in mind that this shader is not a screen-space effect but the actual shader that will render some of the objects, whose output we write to a render texture.
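The core of such a depth shader could look like this minimal sketch, assuming a URP unlit-style pass; the struct and function names here are illustrative, not the article’s exact listing:

```hlsl
struct Varyings
{
    float4 positionCS : SV_POSITION;
    float3 positionWS : TEXCOORD0;
};

Varyings vert(float4 positionOS : POSITION)
{
    Varyings o;
    o.positionWS = TransformObjectToWorld(positionOS.xyz);
    o.positionCS = TransformWorldToHClip(o.positionWS);
    return o;
}

// Write linear distance from the camera, normalized by the far plane,
// so the volumetric shader can compare against it without decoding.
float frag(Varyings i) : SV_Target
{
    return length(i.positionWS - _WorldSpaceCameraPos) / _ProjectionParams.z;
}
```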

The Shader

Let’s jump directly into the shader and compare it with the one we had before.

First, let’s talk about the least obvious difference. Before, we were working in monochrome, but we can no longer do that: now every pass outputs a real3, and every color is defined as a real3. Apart from that, the blur, compositing, and low-res passes are identical.

The raymarching differences start at line 192, where we define the constant variables for every color ray. This time the distance is constant, so we can calculate the step vector outside the loop.

The next stage starts at line 210, where we declare a nested loop when the ray is in light. We calculate the distance from the camera to the ray position and then that position in screen space coordinates, which we will use as UVs to sample our custom depth.

We don’t want to continue if the ray exited the screen view, so we break the loop in line 216 if that is the case.

In line 220 we sample our custom depth using DX10+ nomenclature with SampleLevel(), which samples the mipmap of our choice. The reason is that the regular Sample() uses derivatives to select the appropriate mipmap level, but to do that in a loop the compiler needs to unroll it, which isn’t possible in our nested loop. Since this is a screen-space effect, we don’t need mipmaps anyway, so we just sample mipmap 0. We multiply by the far plane value to bring it back to proper units.

If the distanceToDepthRay is greater than the depth at that specific point of the screen, the ray is either behind or intersecting that point, so we sample its color. The tex2Dlod nomenclature is equivalent to SampleLevel(); you can use either one. DX10+ nomenclature is more modern and separates the sampler from the texture, but for a screen-space effect we don’t care about that.
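For reference, here are the two nomenclatures side by side in a small sketch; the texture names are placeholders:

```hlsl
// DX10+ nomenclature: the texture and the sampler are separate objects,
// and the mip level is an explicit argument.
Texture2D _CameraDepth2Texture;
SamplerState sampler_CameraDepth2Texture;
float d = _CameraDepth2Texture.SampleLevel(sampler_CameraDepth2Texture, uv, 0).r;

// DX9 nomenclature: combined texture+sampler; the mip level goes in the
// w component of the coordinate.
sampler2D _LegacyDepthTex;
float dLegacy = tex2Dlod(_LegacyDepthTex, float4(uv, 0, 0)).r;
```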

The color value is multiplied by the scattering and then by 2 so as not to lose luminosity; I also added a _Boost parameter for stylistic effect.

Once we have sampled, we don’t need to continue the ray, so we break out of the inner loop. If we didn’t intersect, we continue the ray.
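Put together, the inner-loop exit might look like this sketch; glassColor, light, rayUV, and the surrounding loop are assumed names rather than the article’s exact code:

```hlsl
if (distanceToDepthRay > sampledCustomDepth)
{
    // Tint the scattered light with the glass color. The factor of 2
    // compensates for lost luminosity; _Boost is a stylistic multiplier.
    real3 glassColor = tex2Dlod(_GlassColorTexture, float4(rayUV, 0, 0)).rgb;
    light *= glassColor * 2.0 * _Boost;
    break; // once tinted, there is no need to march this ray further
}
// no intersection: continue marching towards the light
```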

The ScriptableRendererFeature

We need to update our feature with RGB encoding for our render textures and pass in the new parameters. Apart from that, it is identical to the previous one.

As a final note, my actual shader uses a _COLORED_ON keyword to switch between the previous one and this one, changing color variables and frag functions from real to real3 and enabling the second raymarching loop. In the scriptable renderer feature, I switch between the encodings too.

The Result

For the coloring to work, make sure to select a layer in your Custom Depth LayerMask and assign it to some objects. STAINED GLASS MUST HAVE SHADOWS DISABLED or it will not work properly.

For the best quality, I am using 40 normal raymarching steps at half resolution with 250 jitter and 8 colored samples with 2 jitter and 28 color max distance.

The same result can be obtained with 25 normal raymarching steps and just 4 colored samples at a minimal loss of quality; any fewer samples than that will become noticeably noisy.

Conclusion

If you want more shader and game dev content be sure to follow me on Twitter: @ValerioMarty

You can also check my itch.io page for finished projects.

https://valeriomarty.itch.io/

Additional resources used for this effect

