Pre-rendered backgrounds in Unity (2/2) – Rendering

In the previous part of this post, we looked at synchronizing the camera settings between Blender and Unity so that they render objects in the same perspective. In this post, we build upon this to render the Unity objects into the pre-rendered background.

Exporting depth values

Upper part: The rendered image, lower part: the depth image

As a first step, we have to export the depth of our image along with the rendered image. Blender makes the depth value of the image available to you when you use the node editor to set up the rendering.

In the image, you can see the node network that in the end turned out to be what I wanted.

The node settings to render the depth image. Don’t forget to set the render settings to RAW in the scene settings.

Here’s a short explanation: The camera is set to a far clip distance of 40 and a near clip distance of 1. What I want is a linear distribution of z-values in the range between the near and far clipping planes. The z-buffer in Blender is a floating point value that grows linearly with the distance from the camera position at z=0 into the screen. So, to map the incoming z-values to the interval [0,1], we first have to subtract 1 (the distance of the near clipping plane) and then divide by 39 (far minus near). This way, we get a correct grayscale image.
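
Written as a formula, the node setup computes normalized depth = (z - near) / (far - near) = (z - 1) / 39, so a point on the near plane ends up black and a point on the far plane ends up white.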

There are several sources on the net that propose to use either a Map Value or a Normalize node. Both will create an image that might seem correct and that might work for some applications, but both have their disadvantages. The Normalize node takes the lowest and highest value in the frame and maps them to the interval [0,1]. This is only correct if there happen to be objects right at both ends of the clipping region: you want an object at the far clipping plane to be almost white in the depth image, but if your objects only span a narrow range, the normalized values will not correspond to the actual clipping region. The Map Value node can perform the right operation, but the division in particular is hard to get exactly right, since you have to enter 1/39 as a decimal number.
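
In other words, for a frame whose nearest and farthest visible points lie at distances z_min and z_max, the Normalize node computes (z - z_min) / (z_max - z_min), which depends on the scene content, whereas we want the fixed mapping (z - near) / (far - near) from above.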

There have been some forum discussions that were invaluable for finding the relevant information. For example, it is imperative that the “RAW” render setting is selected in the scene settings. I found the most useful information in these two forum posts on Elysiun and Unity3D.com.

During debugging, I found out some other funny things about Blender. You can view the depth buffer in the image viewer, but saving that image will not actually gain you much: for whatever reason, it does the correct operation but then squares the depth value, making everything darker than it should be.

Rendering depth in Unity

Once you have exported the image, the next step is getting it into Unity. This is achieved by importing it as an Advanced texture, ideally with no compression. After importing, there are two major techniques that could be used to composite the image:

  1. We could adjust all shaders in our scene to sample the depth image in the fragment shader before writing a pixel. If we find that the pixel is hidden by the image, we can discard it.
  2. We could first fill the depth buffer from the image and then render our realtime geometry on top. This way, when we try to write to a pixel where the pre-rendered scene is in front of the realtime geometry, the fragment is discarded.

Number 2 is by far the better option. First, we don’t have to adapt any shaders to work with this system. Second, the graphics card will already discard a pixel if it fails the z-test, so our fragment shader will never run on occluded pixels. Third, we don’t have to send the depth image to the graphics card for each object that we want to render. Therefore, I implemented number 2. For the realization, we first render a screen-sized quad with the image and the depth image, filling the color and depth buffers. Then, a second camera renders the realtime geometry. This camera has its clear flags set to “None”, since we don’t want to reset either the color buffer or the depth buffer, which now hold the pre-rendered image and its depth.

To compute the depth, we have to put a little bit of work into figuring out where in the graphics pipeline we are. What we get from the image is a linearized z-value between the near and far clipping planes. Normally, z in screen space results from a perspective projection and division from eye space, and this perspective division makes the z-buffer values non-linearly distributed. It is this usual non-linear z-buffer value that our shader has to output in order to initialize the depth buffer correctly.
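
To see where the non-linearity comes from: for a standard OpenGL-style projection with near plane n and far plane f, a point at distance d in front of the camera ends up at z = (f + n)/(f - n) - 2fn/((f - n) * d) after the perspective division – a value in [-1, 1] that depends on 1/d rather than on d itself.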

The way to make this work is to project the depth back into eye space and then carry out the perspective transform and division in our fragment shader. This way, we get the same z-value that we would have gotten had the object been rendered in Unity itself.
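
To make this more concrete, here is a minimal sketch of how the shader on the screen-sized quad could look. It is not the exact shader from the repository: the shader and property names (_MainTex, _DepthTex, _Near, _Far) are placeholders, and the final remapping line follows the convention used when this post was written (see the last comment below for the change required after Unity 5.5).

    Shader "Custom/PrerenderedBackground" {
        Properties {
            _MainTex  ("Rendered Image", 2D) = "white" {}
            _DepthTex ("Depth Image", 2D) = "white" {}
            _Near     ("Blender Near Clip", Float) = 1
            _Far      ("Blender Far Clip", Float) = 40
        }
        SubShader {
            Pass {
                ZWrite On     // this pass fills the depth buffer
                ZTest Always  // the background is drawn before everything else

                CGPROGRAM
                #pragma vertex vert_img
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _MainTex;
                sampler2D _DepthTex;
                float _Near;
                float _Far;

                fixed4 frag (v2f_img i, out float depth : SV_Depth) : SV_Target
                {
                    // Linear depth exported from Blender: 0 at the near plane, 1 at the far plane.
                    float d01 = tex2D(_DepthTex, i.uv).r;

                    // Back into eye space: a point at that distance in front of the camera,
                    // looking down the negative z axis (x and y do not matter for the depth).
                    float zEye = lerp(_Near, _Far, d01);
                    float4 eyePos = float4(0.0, 0.0, -zEye, 1.0);

                    // Perspective projection and division give the usual non-linear depth.
                    float4 clipPos = mul(UNITY_MATRIX_P, eyePos);
                    float zNdc = clipPos.z / clipPos.w;

                    // Remap to the [0,1] range of the depth buffer. As the last comment
                    // below notes, Unity 5.5+ reverses the depth buffer, in which case
                    // 0.5 * (1.0 - zNdc) is needed instead.
                    depth = 0.5 * (zNdc + 1.0);

                    // The colour output is simply the pre-rendered image.
                    return tex2D(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }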

Collision

The collision geometry for the demo scene in Unity

Even though we do not render any geometry from Blender, it is useful to create some collision geometry to use in Unity. For example, I have re-created the outlines of the structures in the demo with boxes and exported these boxes. For easier navigation in the Blender scene, the collision geometry is on a separate layer from the rendered geometry.

In Unity, we can either disable the mesh renderers for the collision geometry or mask it out from both cameras by placing it in a “Colliders” layer and excluding this layer from the culling masks of the two cameras.

Result

As you can try out for yourself, the system works: the character will walk correctly behind the fence in the foreground. There are some potential improvements, though:

  • Depth values are very coarse. Since we are working with grayscale textures, we can only encode 256 levels of depth. This already helps us, since we make sure with colliders that characters will not pass through objects, but we always have to watch out for rounding errors. Note that we can choose different clip values in Unity than we did in Blender – we only have to supply the correct input values to our shader.
    This problem could be mitigated by using 32-bit textures, which encode a single 32-bit float per pixel. These could, for example, be exported using the OpenEXR format in Blender. In Unity, we could encode and decode the values in a similar way as positions were encoded in the Self-Building Structures Demo (see the small decoding sketch after this list).
  • Light and shadows. So far, I have not looked into synchronizing the light sources between Blender and Unity. Furthermore, I have not looked into shadows. We have two kinds of shadows to consider: first, the shadows that are in the pre-rendered image and that should also fall onto the character; second, the shadows the character casts onto the pre-rendered geometry. A simple way to handle the second kind would be to do it the way Broken Age does and draw a round shadow on the floor.
    A solution for both types of shadows might be shadow mapping. In shadow mapping, we use a depth buffer rendered from the light source to determine what is in shadow. We could use a depth buffer exported from Blender with the static geometry’s depth as seen from the light source to compute the shadows cast by the character and also to determine whether the character is in shadow.
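
One possible realization of the encode/decode idea mentioned above, as a rough sketch: if the depth were packed into the four 8-bit channels of a regular texture (one of several possible encodings, and not necessarily the one used in the Self-Building Structures Demo; _DepthTex and SampleEncodedDepth are placeholder names), the helpers that ship with UnityCG.cginc could reconstruct it in the fragment shader:

    // Decoding a depth value that was packed into the RGBA channels of an 8-bit texture.
    #include "UnityCG.cginc"

    sampler2D _DepthTex;

    float SampleEncodedDepth(float2 uv)
    {
        // DecodeFloatRGBA reverses the packing done by UnityCG's EncodeFloatRGBA.
        return DecodeFloatRGBA(tex2D(_DepthTex, uv));
    }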

 

The source code for this project is available on my github page.

Used assets and further links:

The following assets have been used for the demo scene:

Some more links if you want to go further into the topic:

7 thoughts on “Pre-rendered backgrounds in Unity (2/2) – Rendering”

  1. January 12, 2015 at 11:50 pm

    Hi there, TomPeters (vexe) here!

    First I’d like to thank you a lot for your tutorial on prerendered backgrounds, it really helped! – As our game is centered around prerendered backgrounds, static cameras and tank controls (classic RE type of game)

    I am having some trouble however and would like your help if you please. (Note that I’ve just started shaping up my graphics/shader knowledge base, so yeah…)

    First, I managed to take your scene a bit further and got shadows working. i.e. shadows cast from the player (3d objects) to the backgrounds, and vice versa, via invisible shadow receivers/casters (through matte and shadow-only shaders)

    I’m having problems building a successful prerendered scene like in the tutorial. I go to blender, render a few shapes, get a depth map (via divide and subtract nodes, like mentioned in the tutorial. divide by far-near, subtract near) and then export my scene to Unity.

    In Unity, I make my main camera a child to the imported camera position to get the right location/orientation, y rotation = -90. Create another camera and a plane/quad as a child to it. In the plane, I attach my bg material with the depth shader, with the proper textures assigned.

    The 3d geometry has a layer that’s ignored by the 3d cam. I run the scene, I either don’t see the player, or, depth composing doesn’t work….

    Problems:
    1- Unity/Blender camera FOV relation. I searched and searched, I can’t get it to work right. I can’t figure out the right FOV in Unity. I know that you mentioned an equation, but idk, plugging it into Unity’s cam didn’t give good results
    2- How to figure out the right scale for the background plane/quad?!…. I saw a link about pixel-perfect stuff, but couldn’t relate…
    3- The depth.. I don’t know what’s wrong, but a lot of the times the player will just go over an object he’s supposed to go behind. Either the shader is doing something wrong, or there’s something wrong with the way I did the depth image.

    I’d appreciate your help if you could. What am I doing wrong? how to address those issues? (could it be depth precision issues…?)

    Note that I’m using Unity pro, I read that I could read depth from RenderTexture, not sure yet how, but will this benefit me/ or is it any better/does it have any benefits over the way you did it?

    And a question about your shader, why did you take -x instead of x?

    Really appreciate your help! Thank you!

  2. Dan_Tsukasa
    January 18, 2015 at 11:12 pm

    Hi.

    I just wanted to ask if you’ve any plans to share the z-depth shader edits you’ve made.
    I’m attempting the same thing as you, except I’ll be handpainting over the renders afterwards to achieve something more akin to this http://cdn.halcyonrealms.com/wp-content/uploads/2012/03/kogaart01.jpg (not after making some sort of PSX Final Fantasy Clone). Creating the base geometry is the easy part, but I’m not a programmer and I’ve no idea how to write a shader that writes to the zbuffer like yours. It’d be great if you were able to share your code, or snippets of it, so I can figure out the best process to try and code my own.

  3. admin
    April 10, 2015 at 4:35 pm

    Sorry for the long time to answer, your comment was hidden in a mountain of spam :-( You can find the source code on github: https://github.com/SpookyFM/DepthCompositing.

  4. Leo
    September 25, 2015 at 7:16 am

    Hey Florian,

    I’ve got a question about creating depth from an Illustration. When I was working on Harry Potter 7, we created depth maps for stereoscopic viewing in After Effects. I figured this would be much the same.

    So I created some geo for a scene. Brought it into Unity, found the camera angle I wanted to use, grabbed a still, Illustrated on top of it. And then created a 0-255 depth map within Photoshop. I assumed there would be some sort of issue I couldn’t comprehend and I was right.

    We took your setup and everything, and tried to plug it in, but it doesn’t read the depth map correctly. I was wondering if you would be able to chat over skype or something to help figure out the proper way of doing this. My email is leoharelson@gmail.com if you wouldn’t mind. I’m not sure if there’s a setting within Unity that I can adjust, or if I will have to adjust the depth map. Or if I should have taken a depth map from Maya, tried to get a raw linear interpretation and then based the depth map off of that. I’m in a little over my head here and could really use some help if you’re willing. Appreciate it. Thank you!

  5. Neurological
    September 28, 2015 at 4:19 pm

    Hello, not sure if interested, but after toying a little with your shader I made a version compatible with sprite renderers, also made the clip plane camera code automated, so no need to input anything in any script, and reversed the colors for the depth mask (this is just by preference as I’m more used to black being the lowest and white the highest).

    If interested I can share the shader here. Didn’t use it anywhere outside of my own testing so I prefer to ask first.

  6. Jack
    January 2, 2016 at 6:00 am

    This is a terrible tutorial for beginners, too much theory and side info.

  7. ChristmasGT
    March 30, 2017 at 5:35 am

    Hey guys,

    Just a heads up, after unity 5.5 this shader no longer works. In order to get things working again you’ll need to change this line:

    clipSpace.z = 0.5*(clipSpace.z+1.0);

    to:

    clipSpace.z = 0.5*(1.0-clipSpace.z);

    due to the fact that Unity inverted the depth buffer after 5.5. Hopefully this helps a new user.
