Post Perspective - Color Pipeline: Virtual Roundtable

Here is a Q&A I was recently included in. Check out the full article here at postperspective.com

Warner Bros. Post Creative Services Colorist John Daro

My mission control station at HQ

Warner Bros. Post Production Creative Services is a post house on the Warner Bros. lot in Burbank. “We specialize in feature films and high-end episodic projects, with picture and sound finishing under one roof. We also have editorial space and visual effects offices just one building over, so we truly are a one-stop shop for post.” 

What does your setup look like tools-wise?


I have been a devotee of FilmLight’s Baselight for the past five years. It is the beating heart of my DI theater, where I project images with a 4K Christie projector and monitor them on two Sony X300s. For that “at-home” consumer experience, I also have a Sony A95K.

Although I spend 90% of my time on Baselight, there are a few other post-software necessities for my craft. I call my machine the “Swiss army box,” a Supermicro chassis with four Nvidia A6000s. I use this machine to run Resolve, Mistika, Photoshop, and Nuke. It also makes a fine dev box for my custom Python tools.

I always say, “It’s not the sword; it’s the samurai.” Use the right tool for the right job, but if you don’t have the right tool, then use what you’ve got.

Do you work in the cloud? If so, can you describe that workflow and the benefits?


Not really. For security reasons, our workstations are air-gapped and disconnected from the outside world. All media flows through our IO department. However, one cloud tool I do use is Frame.io, especially for the exchange of notes back and forth. I really like how everything is integrated into the timeline. It’s a super-efficient way to collaborate. In addition to those media uploads, the IO team also archives finished projects and raw scans to the cloud.

I do think cloud workflows are gaining steam, and I definitely have my eye on the space. I can envision a future where we send a calibrated Sony X3110 to a client and then use Baselight in the cloud to send JPEG XS straight to the display for remote approvals. It’s a pretty slick workflow, and it also gets us away from needing the big iron to live on-prem.

Working this way takes geography out of the equation too. I would love to work from anywhere on the planet. Bring on the Tiki drinks with the little umbrellas somewhere in the tropics with a laptop and a Mini Panel. All joking aside, it does open the talent pool to the entire world. You will be able to get the best artists regardless of their location. That’s an exciting prospect, and I can’t wait to see what the future holds for this new way of looking at post.

Do you often create LUTs for a project? How does that help?


I mostly work with curves and functions to do my transforms, but when on-set or editorial needs a preview of what the look will be in the room, I do bake LUTs out. They are especially critical for visual effects reviews and dailies creation.

There’s a film project that I’m working on right now. We’re doing a scan-once workflow on that show to avoid over-handling the negative. Once scanned, there is light CDL grading, and a show LUT is applied to the raw scans to make editorial media. The best looks are the ones that have been developed early and help to maintain consistency throughout the entire workflow. That way, you don’t get any surprises when you get into the final grade. Temp love is a thing… LUTs help you avoid loving the wrong thing.

Do you use AI as part of your daily job? In what way?

Superman II Restoration


I do use a bit of AI in my daily tasks, but it’s the AI that I’ve written myself. Originally, I started trying to make an automated dust-buster for film restoration. I failed miserably at that, but I did learn how to train a neural net, and that led to my first helpful tool.

I used an open-source image library to train an AI up-rezer. Although this is commonplace now, back then, it was scratching an itch that hadn’t been scratched yet. To this day, I do think my up-rezer is truer to the image and less “AI”-feeling than what’s available off the shelf.

After the up-rezer, I wrote Match Grader in 2020, which essentially takes the look and vibe from one shot and applies it to another. I don’t use it for final grading, but it can be very useful in the look-dev process.

Building on what I had learned coding Match Grader, I subsequently developed a process to use machine vision to create a depth channel. This turns your Power Windows from circles and squares into spheres and cubes. It is a very powerful tool for adding atmosphere to images. When these channels are available to me, one of my favorite moves is to desaturate the background while increasing the contrast in the foreground. This adds dimension to your image and helps to draw your eye to the characters where it was intended.
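
If you want to experiment with the idea, here is a minimal sketch that builds a depth matte with the open-source MiDaS model standing in for my in-house tool (the model choice, file names, and OpenCV I/O are my assumptions, not what I actually run):

```python
# Sketch: build a depth matte for a single frame with a monocular depth
# model. MiDaS stands in for the in-house tool here.
import cv2
import numpy as np
import torch

model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
model.eval()

frame = cv2.cvtColor(cv2.imread("shot_0010.png"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = model(transform(frame))
    # Scale the low-resolution prediction back up to the frame size.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=frame.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()

# Normalize to 0-1 so it can drive a window or key: 1 = near, 0 = far.
matte = (depth - depth.min()) / (depth.max() - depth.min())
cv2.imwrite("shot_0010_depth.tif", (matte * 65535).astype(np.uint16))
```

Invert that matte and you have a soft key on the background for the desaturation and contrast moves described above.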

These channels can also aid in stereo compositing, but it’s been a minute since I have had a 3D job cross my desk that wasn’t for VR.

Machine vision segmentation with YOLO, 16 fps at 4K

Lately, I have been tinkering with an open-source library called YOLO (You Only Look Once). This software was originally developed for autonomous driving, but I found it useful for what we do in color. Basically, it’s a very fast image segmenter. It returns a track and a matte for what it identifies in the frame. It doesn’t get everything right all the time, but it is very good with people, thankfully. You wouldn’t use these mattes for compositing, but they are great for color, especially when used as a garbage matte to key into.
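
To give a sense of how little code it takes, here is a rough sketch of pulling a person garbage matte with the open-source Ultralytics implementation (the pretrained weights and file paths are placeholders, and my own pipeline differs):

```python
# Sketch: quick person mattes from a frame using YOLO segmentation.
# Not composite-quality, but plenty for a color garbage matte.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")            # pretrained segmentation weights
frame = cv2.imread("shot_0010.png")       # hypothetical frame

results = model(frame)[0]                 # run inference on one frame
matte = np.zeros(frame.shape[:2], dtype=np.float32)

if results.masks is not None:
    for mask, cls in zip(results.masks.data, results.boxes.cls):
        if model.names[int(cls)] == "person":
            # Masks come back at the model's resolution; scale to the frame.
            m = cv2.resize(mask.cpu().numpy(), (frame.shape[1], frame.shape[0]))
            matte = np.maximum(matte, m)

cv2.imwrite("shot_0010_person_matte.png", (matte * 255).astype(np.uint8))
```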

I have also recently refreshed my AI up-rezer. I built in some logic that is somewhat “intelligent” about the source coming in, so the process is not a one-size-fits-all operation.

SamurAI Image Restoration

It can auto-detect interlacing and cadence now and can perform a general analysis of the quality of the picture. This allows me to throttle the strength and end up with the right amount of enhancement on a case-by-case basis. The new tool is named SamurAI.
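
SamurAI itself isn’t public, but the interlace check boils down to a comb-artifact measurement along these lines (a simplified sketch; the threshold is purely illustrative):

```python
# Sketch: crude interlace detection by measuring comb artifacts.
# Interlaced frames with motion show more energy between adjacent lines
# (opposite fields) than between lines two apart (same field).
import cv2
import numpy as np

def comb_score(frame_gray: np.ndarray) -> float:
    f = frame_gray.astype(np.float32)
    inter_field = np.abs(f[1:, :] - f[:-1, :]).mean()   # line N vs N+1
    intra_field = np.abs(f[2:, :] - f[:-2, :]).mean()   # line N vs N+2
    return inter_field / (intra_field + 1e-6)

frame = cv2.imread("frame_0086.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
print("likely interlaced" if comb_score(frame) > 1.2 else "likely progressive")
```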

If given an example from another show or work of art, what is the best way to emulate that?


It’s good to be inspired, but you never want to be derivative. Often, we take many examples that all have a common theme or feeling and amalgamate them into something new.

That said, sometimes there are projects that do need a literal match. Think film emulation for a period effect. There are two ways to approach it. The first, which is the easiest conceptually while also being the more involved, is to get hold of some of the stock you are emulating. Next, you expose it with color and density patches and then develop and measure the strip. If you read enough points, you can start to interpolate curves from the data.
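
Once you have the measurements, turning a handful of patches into a usable curve is only a few lines of SciPy. A minimal sketch, with made-up density values for one channel:

```python
# Sketch: interpolate measured exposure/density points into a smooth curve
# you can sample anywhere, e.g. to build a 1D LUT. Values are placeholders.
import numpy as np
from scipy.interpolate import PchipInterpolator

log_exposure = np.array([-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0])
density_red  = np.array([0.25, 0.30, 0.48, 0.80, 1.20, 1.60, 1.85])

# PCHIP stays monotonic between points, which suits a D-logE curve better
# than a plain cubic spline that can overshoot.
curve_r = PchipInterpolator(log_exposure, density_red)
samples = curve_r(np.linspace(-2.0, 1.0, 1024))
```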

FilmLight can help with this, and back in my lab days, that is exactly whose software we used. Truelight was essential back in the early days of DI, when the “I” was truly the intermediate digital step between two analog worlds.

The second way I approach this task would be to use my Match Grader software. I can push the look of our references to some of the production footage. Match Grader is a bit of a black box in that it returns a completed graded image but not the recipe for getting there. This means the next step would be to bring it into the color corrector and match it using curves, keys, and scopes. The advantage of doing it this way instead of just matching it to the references is that you are working with the same picture, which makes it easier to align all the values perfectly.

Oh, or you can just use your eyeballs. 😉

Do your workflows include remote monitoring?


Not only do they include it, but there was a time in the not-too-distant past when that was the only option. We use all the top solutions for remote sessions, including Streambox, Sohonet ClearView, Colorfront and T-VIPS. The choice really comes down to what the facility on the catching side has and the location of the client. At the moment, my preference is Streambox. It checks all the boxes, from 4K to HDR. For quick approvals, ClearView is great because all we need on the client side is a calibrated iPad Pro.

What film or show or spot resonates with you from a color perspective?


Going back to my formative years, I have always been drawn to the austere beauty of Gattaca. The film’s use of color is simply flawless. Cinematographer Sławomir Idziak is one of my favorites, and he has profoundly influenced my work. I love Gattaca’s early flashbacks, in particular. I have been gravitating in that direction ever since I saw the picture.

Gattaca

Magic Mike

The Sea Beast

You can see a bit of Gattaca‘s influence in my own work on Steven Soderbergh’s Magic Mike and even a little bit on the animated film The Sea Beast, directed by Chris Williams.

Gattaca

The Sea Beast

I am always looking for new ways to push the boundaries of visual storytelling, and there are a ton of other films that have inspired me, but perhaps that’s a conversation for another time. I am grateful for the opportunity to have worked on projects that I have, and I hope that my work will continue to evolve, inspire and be inspired in the years to come.

How To - Dolby Vision

Dolby Vision - How To and Best Practices



What is Dolby Vision?

Dolby Vision is a way to dynamically map HDR to different display targets. At its core, the system analyzes your media and transforms it for displays with less range than your mastering target, most commonly SDR at 100 nits.

Project Setup

The first step is to license your machine. Once that is in place, you need to set up your project. Go into settings and set your CMU (Content Mapping Unit) version; back in the day we used an external box, but nowadays the software does it internally. V4 is the current iteration, whereas v2.9 is a legacy version that some older TVs use. Finally, set your mastering display. In my case that is a Sony X300, set up for PQ P3 D65 at 1,000 nits.

Baselight Project Setup

Resolve Project Setup

It’s not what you said, it’s your tone

The goal is to make our HDR master look good on an SDR display. To do this, we need to tone map our HDR ranges to the corresponding SDR ranges. This is a nonlinear relationship, and our shadows, mid-tones, and highlights will land in the wrong areas if we don’t tone map them first. See below for an example of an SDR image that has not been tone mapped correctly; you can see the highlights are way too hot. Now, we could use a curve and shape our image into a discrete SDR master, but most studios and streamers request a Dolby delivery regardless of whether a separate SDR grade was made. Plus, Dolby does a pretty decent job of getting you there quickly since the v4 release.

The first step is to analyze your footage. This will result in three values that set the tone curve: min, max, and average. These values inform the system how to shape the curve to get a reasonable rendering of your HDR master in SDR.
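
Conceptually, the analysis is just statistics over the PQ-coded image. The toy sketch below shows the idea only; it is not Dolby’s actual algorithm:

```python
# Toy version of the shot analysis: min, max, and average of PQ code values.
# Dolby's real analysis is more sophisticated; this only shows the concept.
import numpy as np

def analyze_shot(frames_pq):
    """frames_pq: iterable of float arrays, PQ code values 0-1, shape (H, W, 3)."""
    mins, maxes, avgs = [], [], []
    for frame in frames_pq:
        luma = frame.max(axis=-1)       # simple max-RGB stand-in for luminance
        mins.append(luma.min())
        maxes.append(luma.max())
        avgs.append(luma.mean())
    return {"min": float(np.min(mins)),
            "max": float(np.max(maxes)),
            "avg": float(np.mean(avgs))}
```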

Image courtesy of Dolby

Tone mapping from HDR to SDR

What we are trying to do here is fit 10 pounds of chicken into an 8-pound bag. Something has to give, usually the bones, but the goal is to keep as much chicken as you can. Rather than toss data out, we instead compress it. The system calculates the min, max, and average light levels. The idea is to keep your average, the “meat and potatoes” of your shot, intact while compressing the top and bottom ranges. The end result is an SDR image that resembles your HDR, only flatter.

How a colorist goes about the analysis is just as important as the analysis itself. This gets into a religious debate more than a technical one, and everything from this point on is my opinion based on my experience with the tech, probably not what Dolby would say.

The original design of the system wanted you to analyze every shot independently. The problem with this approach is that it can take a consistent grade and make it inconsistent depending on the content. Say you have two shots from the same scene.

One side of the coverage shoots the character with a blown-out window behind them; the other side shoots into the darker part of the house. Even though you as the colorist have balanced them to taste, the Dolby analysis will produce two very different sets of values for these shots. To get around this, I find it is better to average the analysis for each scene rather than analyzing shots independently. The first colorist I saw work this way was my good friend and mentor Walter Volpatto. He went toe-to-toe with Dolby because his work was getting QC rejections based on his method. He would analyze only a grey ramp, with the d-min and d-max values representing his media, and apply that to his entire timeline. His thought process was that if it was one transform up to HDR, it should be one transform down.

Most studio QC operations now accept this approach as valid metadata (thank you, Wally!). While I agree with his thought process, I tend to work with one analysis per scene. Resolve has this functionality built in. When I’m working in Baselight, I set it up this way and copy the scene-averaged analysis to every shot in preparation for the trim.
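
The scene-averaging step itself is as simple as it sounds; here is a sketch of what gets rippled to every shot, continuing the toy analysis above:

```python
# Sketch: average per-shot analysis values across a scene, then hand the
# same metadata back to every shot so the whole scene maps identically.
def scene_average(shot_analyses):
    """shot_analyses: list of per-shot dicts with 'min', 'max', and 'avg'."""
    n = len(shot_analyses)
    avg = {k: sum(s[k] for s in shot_analyses) / n for k in ("min", "max", "avg")}
    return [dict(avg) for _ in shot_analyses]
```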

Scene average analysis in Baselight.

Setting the tone

Now that your analysis is complete, it’s time to trim. First, you need to set which display output your trim is targeting and the metadata flag for the intended distribution. You can also set any masking that was used so the analysis doesn’t include the black letterbox pixels. The most common targets are Rec 709 at 100 nits, P3 at 48 nits, and PQ at 108 nits. The 709 trim is for SDR home distribution, whereas the other two are for theatrical distribution. The reason we want to keep the home video and cinema trims separate is that displays falling between two trim targets are interpolated. The theatrical 108-nit trim is numerically very close to the home video 100-nit trim, yet the two are graded very differently, because the theatrical version is intended for a dark theater versus home viewing in dim surround lighting. Luckily, Dolby recognized this, and that is why we have separation of church and state now. The process for completing these trims is the same, though; only the target changes.

Trim the fat

Saturation plus lift, gamma, and gain is the name of the game. You also have advanced tools for highlight clipping and mid-tone contrast. Additionally, you have very basic secondary controls to manipulate the hue and saturation of the six vectors.

Baselight Dolby trim controls.

Resolve Dolby trim controls.

These secondary controls are very useful when you have extremely saturated colors that are on the boundaries of your gamut. I hope Dolby releases a way to only target the very saturated color values instead of the whole range of a particular vector, but for now, these controls are all we have.

Mid tone offset

Another tool that affects the analysis data but could be considered a trim is the mid-tone offset. A good way to think about this tool is as a manual shift of where your average sits; it slides the curve up or down from the midpoint.

I usually find the base analysis and subsequent standard conversion a little thick for my taste. I start by finding a pleasing trim value that works for the majority of shots. Then I ripple that as a starting place and trim from there until I’m happy with the system’s output. The before and after below shows the standard analysis output versus where I ended up with the trim values engaged.

Once you are happy with the trims for all of your needed outputs, it’s time to export. This is done by exporting the XML recipe that, when paired with your PQ master, will create all the derivative versions.

XML

Here are two screenshots of where to find the XML export options in Baselight and Resolve.

Right-click on your timeline -> Timelines -> Export -> Dolby XML

Shots View -> Gear Icon -> Export Dolby Vision Metadata… This opens a menu that lets you choose your location and set the primaries for the file.

The key here is to make sure that you are exporting an XML that reflects your deliverable, not your DSM. For example, I typically export PQ P3 D65 TIFFs as the graded master files. These are then taken into Transkoder, placed into a Rec2020 container, and married with the XML to create an IMF. It’s important to export a Rec2020 XML instead of a P3 one so that when it is applied to your deliverable, it yields the intended results. You can always open your XML in a text editor if you are unsure of your declared primaries. I have included a screen grab of what the XML should look like, with the Rec2020 primaries on the left and the P3 primaries on the right. Always go by the numbers, because filenames can lie.
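
For reference, these are the CIE xy chromaticities to check against when you crack the XML open:

```python
# Reference chromaticities (CIE xy) for the primaries declared in the XML.
# Go by the numbers; filenames can lie.
PRIMARIES = {
    "Rec2020": {"red": (0.708, 0.292), "green": (0.170, 0.797),
                "blue": (0.131, 0.046), "white": (0.3127, 0.3290)},
    "P3-D65":  {"red": (0.680, 0.320), "green": (0.265, 0.690),
                "blue": (0.150, 0.060), "white": (0.3127, 0.3290)},
}
```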

Rec2020 XML vs P3 D65

There is beauty in the simplicity of this system. Studios and streamers love the fact that there is only one serviceable master. As a colorist, I love that when there is a QC fix, you only need to update one set of files and sometimes the XML. That’s a whole lot better than at the height of the 3D craze, when you could have up to 12 different masters, and that’s not even counting the international versions. I remember finishing Katy Perry’s “Part of Me” in 36 different versions. So in retrospect, Dolby did us all a great service by transmuting all of those versions we used to painstakingly create into one manageable XML sidecar file.

Thanks for reading

I bet in the future these trim passes end up going the way of the 4x3 version, especially with the fantastic HDR displays available from Samsung, Sony, and LG at continually lower price points. Remember, the Dolby system only helps you at home if your display is something other than what the media was mastered for. Until then, I hope this helps.

Check out this Dolby PDF for more information and a deeper dive into the definitions of the various XML levels. As always, thanks for reading.

-JD

How to - VR 180 Video Files

Recently a few VR jobs came across my desk. I had done some equirectangular grading in the past, but it was always for VFX plates, Dome Theaters, or virtual production sets. These recent projects were different because they were purposely shot for 180 VR. Sorry, no looking back over your shoulder. The beauty of this format is that it brings back some of the narrative language that we have cultivated over 100+ years of cinema. We can direct your eye through shadow and light or pull your attention with a sound effect and sudden action. All while not having to worry if you are looking in the right direction.

I thought it would be a good idea to share what I have learned working with this type of immersive content. It’s all out there on the web, but hopefully this pulls it all together in one place and saves you a bunch of googling.

It all starts with a stitch

First, you will need to choose a rig. There are many off-the-shelf kits you can buy, or you can go the homebrew route and cobble together a few cameras. There are also some interesting standalone devices that save you from having to use and manage multiple cameras. In all cases, there will be some post-processing needed. You will need stitching software like Mistika VR or Cara VR for multiple-camera rigs.

Stitching is the process of combining multiple cameras, color balancing them, and then feathering the overlapping pixels to create one seamless equirectangular image. There are plenty of tutorials on stitching, and this post is not that.

6 cameras stitched

The red lines are the edges. The green lines are where the feather starts for the overlap.

Equidistant Fisheye

Extremely wide fisheye setups will need to be converted from equidistant fisheye to equirectangular

Want to avoid stitching altogether? Use a very wide-angle lens. There are extremely wide fisheye setups that can capture more than a 180-degree field of view. These will need to be converted from equidistant fisheye to equirectangular, but other than that, no stitching or post-processing is needed. Canon has recently released a fantastic dual-fisheye product that further simplifies capture. No matter the setup, the end result of the post process will be a 2:1 canvas with each eye being a 1:1 equirectangular image placed side by side. This is probably a good time to talk about what an equirectangular image is.

Equirectangular Projection

This type of spherical visualization is basically the map of the globe that you had in school. It’s what happens when you take a sphere, map that to a cylinder, and unroll the cylinder to a flat projection. That is a gross oversimplification, but a good way to visualize what is going on nonetheless. Please see the equations below if you are coding something or if you are just a maths fan.

Transform Definition

Spherical to Planar Transform

This is the concept of 360 video: we work with it as a flat plane during post. It’s the same idea for 180 VR video, just one hemisphere instead.
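
For anyone coding against these frames, here is a minimal sketch of the mapping between equirectangular pixels and spherical angles for a full 360 canvas; for VR180, each eye’s square covers only half the longitude range:

```python
# Sketch: map an equirectangular pixel (u, v) to spherical angles and back.
# Longitude spans -pi..pi across the width, latitude pi/2..-pi/2 down the
# height. For VR180, each eye's 1:1 image covers only -pi/2..pi/2 longitude.
import math

def pixel_to_angles(u, v, width, height):
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    return lon, lat

def angles_to_pixel(lon, lat, width, height):
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v
```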

Ok Cool, I have VR Videos… Now what?

At this point, your videos are ready for post. I would consider everything up to this point dailies. Now it’s time to edit. All the usual editors we use daily can cut these video files together, but some are better suited than others. Premiere would be my first choice, with Mistika Boutique a close second. In my workflow, I use both, since the two tools have different strengths and weaknesses. Premiere has a clever feature that uses Steam VR and feeds your timeline to a headset. This is indispensable, in my opinion, for the instant feedback one needs while cutting and grading.

VR is a different beast. Straight cuts, unless carefully planned out, can be very jarring if not nausea-inducing. Fades work well but are sort of the VR equivalent of “if you can’t solve it, dissolve it.” Having all of these transitions live for evaluation and audition in the headset is what separates Premiere from the rest of the pack. SGO has recently added the ability for HMD review similar to Premiere’s, but I have yet to use the new feature. I will update this post once I take it out for a spin.

9/7/2023 Mistika update

So, I finally took Mistika’s HMD monitoring for a spin. It was super easy to set up. First, you download the DEO VR player to your headset. Next, you click the HMD icon in Mistika. This gives you an HTTP address with the IP of your machine. Type that into the address bar in DEO VR and, ta-da, you end up with super-steppy streaming VR video of your current environment.

It was OK for checking geometry and color, but it would be hard to use for review. There are a couple of advantages to working this way, though. Multiple headsets can connect to the same stream, which is great when you have a room full of folks and everybody is in their own headset. With Premiere, we pass the HMD around while everyone else views on the projector or stares at whoever is in the headset, patiently waiting for their turn. Another benefit is remote monitoring. You can technically serve out the IP of your local machine to the world (this will probably need some port forwarding on your router and some VPN shenanigans), which means someone remote can connect, provided they can reach your network.

Pros

  • Easy setup

  • Multiple viewers at once

  • Remote viewing

  • Instant HMD feedback

Cons

  • Steppy playback

  • Needs a network-attached machine

  • Low resolution to maintain interactivity

Setting up your project

Premiere has a couple of dependencies to enable VR viewing. First, you need to install Steam VR. This is all you need if you are using a Windows Mixed Reality headset. You will need to install the Oculus software if you plan on using the Facebook offerings via Oculus Link.

Now that your HMD is set up, check out this blog post for step-by-step settings to get Premiere ready to edit VR. The settings are the same for 180 VR; just change the Horizontal Capture setting from 360 to 180.

Change “360” to 180 for VR180 editing.

Who’s Daniel and why do I care?

One downside of Premiere is the dreadfully slow rendering of HEVC files, not to mention the 60 Mbps limitation. The Adobe dev team knows my feelings on the matter, so hopefully this will be fixed in a future update, but until then, here is a crafty workaround. Cinegy is a company that makes a codec called Daniel2, and they have their own renderer. We don’t really care about their codec, but we do like that their HEVC render is way faster than Premiere’s native one. Here’s how to install it:

  • download and install

  • go to your email and copy the license (it’s free but still needs to be licensed)

  • open the Cinegy license manager and paste the number

  • open a Premiere timeline, press Ctrl+M for export, and check to see if Cinegy comes up as an export option.

  • set your bitrate and hit go. I would recommend a bitrate of around 130 Mbps. This allows enough headroom for audio and will not have any issue playing back on the Oculus Quest 2.

The compromise for all this speed is what’s missing from the header of the video file: the flag that lets players know it is a VR180 file. You can also use Resolve or Mistika for fast HEVC renders as an alternative to Daniel2. No matter how you get your HEVC file, you will need to ensure the header is correct. More on this after we sync the audio.

Audio is not my world

I’m a picture guy. Some would even say a big picture guy ;) The one thing I know for sure is that when it comes to audio, I know when it sounds good, but I haven’t a clue what it takes to get it there. But no more excuses! This is the year I want to dig deeper. Check back in a bit; I hope to update this section with the FB360 Pro Tools integration information. Until then, the audio is best left to the pros.

Spatial sound comes in different orders, with better immersion the higher you go. First-order ambisonics has four channels, second-order has nine, and third-order files contain 16 tracks. Now, it may seem that third order is the way to go, but in my experience, the difference between second and third order isn’t that noticeable on the built-in headset speakers. Then again, I’m a picture guy. Whatever sound you receive from your mix, you will need to sync it to your HEVC file.

We use the Facebook 360 app to marry the picture to the spatial sound. The app has some dependencies to install before you can use it.

  1. Python - if you are like me you may have already had this one!

  2. FFMPEG - this link has a tutorial for installing on a Windows machine. Click “Code,” then “Download ZIP.” Uncompress and copy it to the FB360 directory.

  3. GPAC - make sure you use the legacy 0.8.1 version. This stumped me for a bit the first time.

Now we can run FB360. The first step is to point it to your video file. Then choose the right order of ambisonic audio and point to the WAV file from the mix. There is also an option to load a standard “head-locked” stereo audio track. This can be good for narration, music, or other audio that does not need a spatial location.

Finally, we hit “Encode.”

It’s not a vaccine but it is an injection

Google VR 180 Creator can be downloaded here. You can’t even find this anymore, but it’s super important. There are other options, including the original source code for this app, but this little gizmo is by far the easiest way to inject the proper metadata into the header of your HEVC file. This lets players know it’s a side-by-side 180 VR file.

VR180 Creator

Click “Prepare for Publishing.” Drag your video in, set it to side-by-side, and hit export. You will have a new video that has been “injected” with the correct metadata.

How do I view the final product?

Plug your Oculus Quest into your computer and put it on. Click “Allow file transfer.” Now take off the headset and go to your computer, where the Quest will show up as a USB drive. Navigate to the Movies directory and simply drag your files across. Now you can unplug your Oculus. Go to Oculus TV > My Media and click your video. If everything was done correctly, you are now in a stereo 180 world!

You can also upload to Facebook or YouTube for streaming distribution. Here are two links that contain the specs for both. As with all tech, I’m sure these will change as better headsets are released.

Thank you to the experts who have helped me along the way.

Hopefully, this helps navigate the murky waters of VR just a bit. I’m excited to see what you all create. A big thanks to Hugh Hou for making a ton of really informative videos. A tip of the cap to Tom Peligrini for bringing us all together and leading the charge. I also owe a debt of gratitude to David Raines, for not only introducing Hugh to me but also making sure our VR pictures have all the emotion and immersive sound one could ask for. There’s a pretty great team here at Warner PPCS.

As always, thanks for reading.

JD

Best Practices: Restoring Classics

2020 - The year of Restorations

Now that we seem to be on the other end of the pandemic, I wanted to take a moment to look back on some of the projects that kept me busy. Restorations were the name of the game during COVID times. With productions shut down and uncertainty in the theatrical marketplace, I had time in my schedule to breathe new life into some of my favorite classics.

Over the last year, I have restored:

Let’s take a look at a couple of these titles and talk about what it means to remaster a film with our contemporary toolset.

The Process

The process for remastering classic titles is very similar to finishing new theatrical work, with a couple of additional steps. The first step is to identify and evaluate the best elements to use. That decision is easy for digitally acquired shows from the early 2000s; in those instances, the original camera files are all that exist and are obviously the best source. Film shows are where it gets particularly ambiguous. There is a debate over whether starting from the IP or the original negative yields better results. Do we use the original opticals or recreate them from the elements? Black-and-white seps or faded camera neg? These questions all need to be answered before you begin the work. Usually, I prefer to start with the OCN when available.

Director Scanner

Arri Scan

Scanning

Scanning is arguably the most critical part of the process. Quality and success will live or die by the execution of great scans. Image breathing, movement, and general sharpness are issues to look for when evaluating. Scans should not be pretty; rather, they should represent a faithful digital copy of the negative. In a perfect closed-loop system, a scanned piece of film, once shot back out on a calibrated recorder, needs to closely match the original negative.

Digital Restoration

The next step in making an old project shiny and new is to repair any damage to the film, whether from aging or inherent in the original production. This includes painting out splice lines, gate hairs, dirt, and scratches. Film processing issues like breathing or turbulence can also be taken care of in this step. I prefer to postpone flicker removal until the grading step, since the contrast will have an effect on the amount of flicker to remove. Some common tools used for restoration include MTI and PFClean. This work is often outsourced because of the high number of man-hours and the labor costs associated with cleaning every frame of film. Some companies that do exceptional restoration work are Prime Focus and Prasad, among others.

Grading

Grading restoration titles is a sub-discipline of grading as a whole. New theatrical grading starts with references and look development to achieve a certain tone for the film, and there is a ton of work that goes into that process. Restoration grading differs in that the goal is staying true to the original intent, not reimagining it. Much like new theatrical grading, a good reference will set you up for success. My preferred reference is a filmmaker-approved answer print. These were the master prints that best represented the filmmakers’ creative intent.

Kinoton FP30D

A good practice is to screen the print and immediately set looks for the scans, getting as close as possible to the print at 14 fL projected. An upgrade to this workflow is to use a projector in the grading suite, like a Kinoton. These projectors have remote control and cooling, which lets you rock and roll the film. You can even freeze-frame, and thanks to the built-in cooling, your film doesn’t burn. Setting up a side-by-side of film versus digital is the best way to ensure you have a match to the original intent. These corrections need to happen in a good color management system. ACES, for example, has ODTs for theatrical 48 nits, which is the equivalent of 14 fL. Once you have a match to the original, the enhancement can start.

There would be no point in remastering if it was going to look exactly like the existing master. One great reason to remaster is to take advantage of new advancements in HDR and wide color gamut formats. Film was the original HDR format, containing 12 stops of range; the print was the limiting factor, only able to display 8 of those stops. By switching the ODT to PQ P3D65, we can take advantage of the larger container and let the film display all that it has to offer.
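
To put numbers on that headroom, here is a quick sketch of the ST 2084 (PQ) encode; note how much of the code range sits above the SDR white point:

```python
# Sketch: the SMPTE ST 2084 (PQ) encode. 100 nits lands around code value
# 0.51 and 1,000 nits around 0.75, leaving the rest of the range for
# highlights all the way up to 10,000 nits.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    y = min(max(nits / 10000.0, 0.0), 1.0)
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

for level in (1, 48, 100, 108, 1000, 4000, 10000):
    print(f"{level:>6} nits -> PQ {pq_encode(level):.3f}")
```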

My approach is to let the film land where it was originally shot but tone-mapped for PQ display. This gives you a master that carries the original intent of the print, but in HDR. I often use an LMT that limits the gamut to that of the emulsion used for original photography. This also ensures that I’m staying true to the film's original palette. Typically, there is some highlight balancing to do, since what was white and “clipped” is now visible. Next is to identify and correct any areas where the contrast ratios have been disrupted by the increased dynamic range. For example, if there is a strongly silhouetted shot, the value of the HDR highlight can cause your eye to iris down, changing the perception of the deep shadows. In that case, I would roll off the highlights or lift the shadows so the ratio stays consistent with the original. The extra contrast HDR affords is often welcome, but it can cause some unwanted issues too. Grain appearance is one of those.



Grain Management

Film grain is one of those magic ingredients. Just like salt, you miss it when it is not there, and too much ruins the dish. Grain needs to be felt but never noticed. It is common for the noise floor to increase once you have stretched the film scan to HDR ranges. Also, grain in the highlights that was not previously visible starts to be seen. To mitigate this, a grain management pass needs to be implemented. This can come before the grade, but I like to do it after, since any contrast I add will have an effect on the perceived amount of noise. Grain can impart a color cast to your image, especially if there is a very noisy blue channel. Once removed, this needs to be compensated for, which is a downside of working post-grade. It is during this pass that I will also take care of flicker and breathing, which the grade also affects.

My go-to tool for this is Neat Video. You would think that after a decade of dominance some software company would have knocked Neat off its throne as king of the denoise, but it hasn’t happened yet. I prebake the scans with a Neat pass (since Baselight X doesn’t play nicely with Neat yet). Next, I stack the Neat’ed scan and the original as layers. This allows me to blend in the amount of grain to taste. The goal of this pass is to keep the grain consistent from shot to shot, regardless of the grade. The other, and most important, goal is to make the grain look as it did on the print.
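
The layer blend itself is conceptually trivial; a sketch of the mix, where the amount is whatever looks right on the shot:

```python
# Sketch: blend the degrained plate back with the original scan to dial
# grain to taste. mix = 0.0 keeps full grain, 1.0 is fully degrained.
import numpy as np

def blend_grain(original: np.ndarray, degrained: np.ndarray, mix: float) -> np.ndarray:
    return original * (1.0 - mix) + degrained * mix

# e.g. keep most of the grain and only a third of the cleanup:
# graded_plate = blend_grain(scan, scan_neat, mix=0.35)
```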

Dolby Trim

After the HDR10 grade is complete, it’s time for the Dolby trim. I use the original 14 fL print-match version as a reference for where I want the Dolby trim to clip and crush. Once all the trims have been set, I export a Dolby XML that expects Rec2020 primaries as input. Yes, we graded in P3, but that gamut will be placed into a Rec2020 container once we export.

Mastering

Once all the work has been completed, it’s time to master. Remasters receive the same treatment as new theatrical titles when it comes to deliverables. The common ones are as follows:

  • Graded PQ P3D65 1,000-nit 16-bit TIFF files or ACES AP0 EXRs

  • Ungraded PQ P3D65 16-bit TIFF files or ACES AP0 EXRs

  • Graded 2.6 gamma XYZ DCDM at 14 fL

  • Graded PQ XYZ 108-nit 16-bit TIFF files or ACES AP0 EXRs for Dolby Vision theatrical

  • BT1886 QT or DPX files created from the Dolby XML

  • IMF PQ Rec2020 limited to P3D65 1,000 nits

Case Studies

Perfect worlds do exist, but we don’t live in one. Every job is a snowflake with its own unique hurdles. Remastering tests a colorist’s abilities across many disciplines of the job. Strong skills in compositing, paint, film manipulation, and general grading are required to achieve and maintain the original artistic intent. Here are two films completed recently and a bit on the challenges faced in each.

Teenage Mutant Ninja Turtles

For those of you who don’t know, the Ninja Turtles are near and dear to me. Not only was I a child of the ’80s, but my father was in charge of post production on the original cartoons. He also wrote and directed many of them. When this came up for a remaster, I jumped at the chance to get back to my roots.

This film only required an SDR remaster. The output delivery was to be P3 D65 2.6 gamma. I set up the job using Baselight’s color management and worked in T-Log E-Gamut. The DRS was performed by Prasad, with additional work by yours truly, BECAUSE IT HAD TO BE PERFECT!

There were two main color hurdles to jump through. First, some scenes were very dark. I used Baselight’s boost shadow tool to “dig” out detail from the toe of the curve. This was very successful in the many night scenes the film takes place in.

Another trick I used was on the Turtles’ skin. You may or may not know, but all the Turtles have different skin colors. Also, most folks think they are green, when in fact there is very little green in their skin; they are more of an olive. To make sure the ratio of green to yellow was correct, I converted to LAB and graded their skin in that color space. Once happy, I converted back to T-Log E-Gamut. LAB is a very useful space for affecting yellow tones. In this space, I was able to tweak their skin and nothing else, sort of like a key and a hue shift all in one.
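
Outside of Baselight, the same idea looks roughly like this with scikit-image (the gain value is arbitrary; the real grade ran through Baselight’s own colour space conversions):

```python
# Sketch: convert to LAB, nudge the b* (blue-yellow) axis, convert back.
# In LAB, L is lightness, a* is green-magenta, and b* is blue-yellow, so
# scaling b* shifts the green/yellow ratio without touching luminance.
import numpy as np
from skimage import color, io

rgb = io.imread("turtle_plate.png") / 255.0      # hypothetical 8-bit frame
lab = color.rgb2lab(rgb)

lab[..., 2] *= 0.92                              # ease the yellow back a touch

out = np.clip(color.lab2rgb(lab), 0.0, 1.0)
io.imsave("turtle_plate_graded.png", (out * 255).astype(np.uint8))
```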

The SDR ended up looking so good that the HDR was finished too. The HDR was quick and painless because of Baselight’s built-in color management. Most of the heavy lifting was already done, and only a few tweaks were needed.




Space Jam

Space Jam was a formative film from my youth. Not only did I have Jordans at the time, but I was also becoming a fledgling animation nerd (thanks, Dad) when this film was released.

I set up the project for ACES color management with a Kodak LMT that I had used for other films previously. This reined in the extreme edge-of-gamut colors used in the animation.

The biggest challenge on this project was cleaning up some of the inherent artifacts from 1990s film recording technology. Cinesite performed all of the original composites, but at the time, they were limited to 1K film recording. To mitigate that in a 4K world, I used Baselight’s texture equalizer and convolutional sharpen to give a bit of snap back to the filmed-out sections.

Vishal Chathle supervised the restoration for the studio. Vishal and I boosted the Looney Tunes to have more color and take advantage of the wider gamut. The standard film shots, of which there were few, were pretty straightforward, corrected mostly with Baselight’s Basegrade. Basegrade is a fantastic tool where the corrections are performed in linear light, which yields a consistent result no matter what your working space is.

Joe Pytka came in to approve the grade. This was very cool for me, since not only did I grow up watching this film of his, but also all those iconic Super Bowl commercials he did in the ’90s. A true master of the camera. He approved the grade but wished there was something more we could do with the main title. The main title sequence was built using many video effects, and to recreate it would have cost a fortune. We had the original film-out of it, but it looked pretty low-res. To remedy this, I ran it through an AI up-rezer that I coded a while ago for large-format shows.

The results were astounding. The titles regained some of the crisp edges that I can only presume were lost through the multiple generations of opticals the sequence went through. The AI was also able to fix the aliasing inherent in the low-res original. In the end, I was very proud of the result.

The last step was grain management. This show needed special attention because the grain from the Jordan plate was often different from the grain embedded in the animation plate he was comped into. In order to make it consistent, I ran two de-grain passes on the scan. The first took care of the general grain from the original neg. The second was tuned to clean up Jordan’s grain, which had an extra layer of optical grain over the top; it was a complicated noise pattern to take care of. Next, I took the two de-grained plates, roto’d out Jordan, and re-comped him over the cleaned-up plate. This gave the comps a consistency that was not there in the original.

Another area where we helped the comps was in fixing animation errors. Some shots had layers that would disappear for a couple of frames or, because it was hand-drawn, a highlight that would vanish and then reappear. I used Baselight’s built-in paint tool to repair the original animation. One great feature of the paint tool is its ability to paint on twos. An old animation trick is to animate at only 12 fps when there isn’t a lot of motion and then shoot each frame twice; this halves the number of frames that need to be drawn. When I was fixing animation issues, I would make a paint stroke on a frame, and Baselight would automatically hold it for the next one. This cut my work in half, just like it did for the original animators!

I was honored to help restore this piece of animation history. A big thanks to Michael Borquez and Chris Gillaspie for the flawless scanning and deep investigation of the best elements to use. Also a tip of the cap to Vishal Chathle for all the hard work and lending me his eagle eye!

Final Thoughts

Restoration Colorist should be a credit on its own. It’s unfortunate that this work rarely gets recognized and even less frequently gets credited. It is hard enough to deliver a director’s artistic vision from scratch; it’s arguably even harder to stay true to it 30 years later. Thanks for reading, and check out these projects on HBO Max soon!