HDR - Flavors and Best Practices

Better Pixels.

Over the last decade we have had a bit of a renaissance in imaging display technology. The jump from SD to HD was a huge bump in image quality. HD to 4k was another noticeable step, though less of a leap than the previous SD-to-HD jump. Now we are starting to see 8k displays and workflows. Although this is great for very large screens, the jump has diminishing returns in smaller viewing environments. In my opinion, we are at the point where we do not need more pixels, but better ones. HDR, or High Dynamic Range, images along with wider color gamuts are allowing us to deliver that next major increase in image quality. HDR delivers better pixels!


Stop… What is dynamic range?

When we talk about the dynamic range of a particular capture system, what we are referring to is the delta between the blackest shadow and the brightest highlight captured. This is measured in stops, typically with a light meter. A stop is a doubling or a halving of light. This power-of-2 way of measuring light is perfect because of its correlation to our eyes’ logarithmic nature. Your eyeballs never “clip,” and a perfect HDR system shouldn’t either. The brighter we go, the harder it becomes to see differences, but we never hit a limit.

Unfortunately, digital camera sensors do not work the same way as our eyeballs. Digital sensors have a linear response, a gamma of 1.0, and they do clip. Most high-end cameras convert this linear signal to a logarithmic one for post manipulation.
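To see why the log conversion matters, here is a toy log encoding (illustrative only — it is not Arri LogC, Sony S-Log, or any real camera curve; the black offset value is an arbitrary assumption). It maps linear sensor values so that each stop, each doubling of light, occupies a roughly equal slice of the output range:

```python
import numpy as np

# Toy log encoding (NOT any real camera curve). A small black offset keeps
# log2 defined at zero; the result is normalized so 0 -> 0 and 1 -> 1.
def lin_to_log(linear, black_offset=2.0**-8):
    log2_val = np.log2(linear + black_offset)
    lo, hi = np.log2(black_offset), np.log2(1.0 + black_offset)
    return (log2_val - lo) / (hi - lo)

# Each doubling of linear light moves the encoded value by a roughly
# constant step, mirroring how stops (and our eyes) behave.
vals = lin_to_log(np.array([0.125, 0.25, 0.5, 1.0]))
steps = np.diff(vals)
```

With a linear encoding, those same doublings would each consume twice as many code values as the last; the log curve spends them evenly per stop instead.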


I was never a huge calculus buff but this one thought experiment has served me well over the years.

Say you are at one side of a room. How many steps will it take to get to the wall if each step you take is half the distance of your last? This is the idea behind logarithmic curves.


It will take an infinite number of steps to reach the wall, since we can always halve the half.



Someday we will be able to account for every photon in a scene, but until that sensor is made, we need to work within the confines of the range that can be captured.

For example, if the darkest part of a sampled image is the shadows and the brightest part is 8 stops brighter, that means we have a range of 8 stops for that image. The way we expose a sensor or a piece of celluloid changes based on a combination of factors, including aperture, exposure time, and the general sensitivity of the imaging system. Depending on how you set these variables, you can move the total range up or down in the scene.
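Since a stop is a doubling, the stop range between two luminance levels is just a base-2 logarithm. A minimal sketch (the luminance units are arbitrary; only the ratio matters):

```python
import math

# Dynamic range in stops is the log2 of the ratio between the brightest
# and darkest usable values, since each stop is a doubling of light.
def range_in_stops(brightest, darkest):
    return math.log2(brightest / darkest)

# The example from the text: highlights sitting 8 stops above the shadows.
shadow_level = 1.0                      # arbitrary luminance units
highlight_level = shadow_level * 2**8   # 8 doublings brighter
stops = range_in_stops(highlight_level, shadow_level)  # -> 8.0
```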

Let’s say you had a scene range of 16 stops, going from the darkest shadow to direct hot sun. Our imaging device in this example can only handle 8 of the 16 stops present. We can shift the exposure to be weighted towards the shadows, the highlights, or the Goldilocks sweet spot in the middle. There is no right or wrong way to set this range. It just needs to yield the picture that helps promote the story you are trying to tell in the shot. A 16-bit EXR file can handle 32 stops of range, much more than any capture system can currently deliver.

Latitude is how far you can recover a picture from over- or underexposure. Latitude is often conflated with dynamic range. In rare cases they are the same, but more often than not your latitude is less than the available dynamic range.

Film, the original HDR system.

Film, from its creation, has always captured more information than could be printed. Contemporary stocks have a dynamic range of 12 stops. When you print that film, you have to pick the best 8 stops to show via printing with more or less light. The extra dynamic range was there in the negative but was limited by the display technology.

Flash forward to our digital cameras today. Cameras from Arri, Red, Blackmagic, and Sony all boast dynamic ranges over 13 stops. The challenge has always been the display environment. This is why we need to start thinking of cameras not as the image creators but as the photon collectors for the scene at the time of capture. The image is then “mapped” to your display creatively.

Scene referred grading.

The problem has always been: how do we fit 10 pounds of chicken into an 8-pound bag? In the past, when working with these HDR camera negatives, we were limited to the range of the display technology being used. The monitors and projectors before their HDR counterparts couldn’t “display” everything that was captured on set, even though we had more information to show. We would color the image to look good on the device for which we were mastering. “Display-referred grading,” as this is called, limits your range and bakes in the gamma of the display you are coloring on. This was fine when the only two mediums were SDR TV and theatrical digital projection. The difference between 2.4 video gamma and 2.6 theatrical gamma was small enough that you could make a master meant for one look good on the other with some simple gamma math. Today the deliverables and masters are numerous, with many different display gammas required. So before we even start talking about HDR, our grading space needs to be “scene-referred.” What this means is that once we have captured the data on set, we pass it through the rest of the pipeline non-destructively, maintaining the relationship to the original scene lighting conditions. “No pixels were harmed in the making of this major motion picture” is a personal mantra of mine.
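That “simple gamma math” between display standards can be sketched in a couple of lines. This is a bare-bones illustration, not a full color-management pipeline (it ignores surround compensation and black-level differences):

```python
# Re-encode a display-referred code value mastered at one gamma for a
# display with another: decode to linear light, then re-encode.
def regamma(code_value, src_gamma=2.4, dst_gamma=2.6):
    linear_light = code_value ** src_gamma      # decode with source gamma
    return linear_light ** (1.0 / dst_gamma)    # re-encode for the target

# A mid-grey code value shifts only slightly between 2.4 and 2.6,
# which is why this cross-conversion was historically "good enough."
v = regamma(0.5)   # ~0.527
```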


There are many different ways of working scene-referred; the VFX industry has been working this way for decades. The key point is we need a processing space that is large enough to handle the camera data without hitting the boundaries, i.e., clipping or crushing in any of the channels. This “bucket” also has to have enough samples (bit depth) to withstand aggressive transforms. 10 bits are not enough for HDR grading. We need to be working in full 16-bit floating point.

This is a bit of an exaggeration, but it illustrates the point. Many believe that a 10-bit signal is sufficient for HDR. I think 16-bit is necessary for color work. This ensures we have enough steps to adequately describe the meat-and-potatoes part of the image in addition to the extra highlight data in the top half of the code values.

Bit-depth is like butter on bread. Not enough and you get gaps in your tonal gradients. We want a nice smooth spread on our waveforms.
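The gap between the two buckets is easy to count. The sketch below uses NumPy’s `float16` as a stand-in for the half floats used in EXR files, enumerating every positive bit pattern and counting how many distinct values land in the 0–1 signal range:

```python
import numpy as np

# 10-bit integer: exactly 2**10 evenly spaced codes across the range.
ten_bit_codes = 2**10

# 16-bit half float (np.float16 as a stand-in): walk every positive bit
# pattern and count the finite values that fall in [0, 1].
patterns = np.arange(0, 2**15, dtype=np.uint16)   # sign bit 0 only
values = patterns.view(np.float16)
half_float_codes = int(np.count_nonzero(np.isfinite(values) & (values <= 1.0)))

# Half float packs ~15x more steps into 0-1, and spaces them densest near
# black, which suits log-like HDR signals far better than even spacing.
```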

Now that we have our non-destructive working space, we use transforms or LUTs to map to our displays for mastering. ACES is a good starting point for a working space and a set of standardized transforms, since it works scene-referred and is always non-destructive if implemented properly. The gist of this workflow is that the sensor linearity of the original camera data has been retained. We are simply adding our display curve for our various masters.

Stops measure scenes, Nits measure displays.

For measuring light on set we use stops. For displays we use a measurement unit called the nit. Nits are a measure of peak brightness, not dynamic range. A nit is equal to 1 cd/m². I’m not sure why there are two units with different nomenclature for the same measurement, but for displays we use the nit. Perhaps “candelas per meter squared” was just too much of a mouthful. A typical SDR monitor has a brightness of 100 nits. A typical theatrical projector has a brightness of 48 nits. There is no set standard for what is considered HDR brightness; I consider anything over 600 nits HDR. 1000 nits, or 10 times brighter than legacy SDR displays, is what most HDR projects are mastered to. The Dolby Pulsar monitor is capable of displaying 4000 nits, which is the highest achievable today. The PQ signal accommodates values up to 10,000 nits.
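The mapping from absolute nits to a PQ code value is defined by the inverse EOTF in SMPTE ST 2084. A direct transcription of the published formula:

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute nits -> 0-1 code value.
# Constants are the rationals published in the standard.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def nits_to_pq(nits):
    y = min(max(nits / 10000.0, 0.0), 1.0)  # normalize to the 10,000-nit ceiling
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1 + C3 * y_m1)) ** M2

nits_to_pq(100)    # SDR peak white lands just over half the PQ scale (~0.508)
nits_to_pq(10000)  # the top of the signal range -> 1.0
```

Note how the curve spends roughly half its code values on the first 100 nits: like our eyes, PQ dedicates most of its precision to the darker end.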

The Sony x300 has a peak brightness of 1000 nits and is the current gold standard for reference monitors.

The Dolby Pulsar is capable of a 4000-nit peak white.

P-What?

Rec2020 color primaries with a D65 white point


The most common scale to store HDR data is the PQ electro-optical transfer function. PQ stands for “perceptual quantizer.” The PQ EOTF was standardized when SMPTE published the transfer function as SMPTE ST 2084. The color primaries most often associated with PQ are Rec2020. BT.2100 is what you get when you pair the PQ transfer function with Rec2020 primaries and a D65 white point. This is similar to how BT.1886 is defined as Rec709 primaries with an implicit 2.4 gamma and a D65 white point. It is possible to have a PQ file with different primaries than Rec2020; the most common variance would be P3 primaries with a D65 white point. OK, sorry for the nerdy jargon, but now we are all on the same page.




HDR Flavors

There are four main HDR flavors in use currently. All of them use a logarithmic approach to retain the maximum amount of information in the highlights.

Dolby Vision

Dolby Vision is the most common flavor of HDR out in the field today. The system works in three parts. First, you start with your master that has been graded using the PQ EOTF. Next, you “analyse” the shots in your project to attach metadata about where the shadows, highlights, and meat and potatoes of your image are sitting. This is considered Layer 1 metadata. This metadata is then used to inform the Content Mapping Unit, or CMU, how best to “convert” your picture to SDR and lower-nit formats. The colorist can “override” this auto conversion using a trim that is then stored in Layer 2 metadata, commonly referred to as L2. The trims you can make include lift, gamma, gain, and saturation. In version 4.0, out now, Dolby has also given us secondary controls for six-vector hue and saturation. Once all of these settings have been programmed, they are exported into an XML sidecar file that travels with the original master. Using this metadata, a Dolby Vision-equipped display can use the trim information to tailor the presentation to the max nits it is capable of displaying, on a frame-by-frame basis.

HDR 10

HDR 10 is the simplest of the PQ flavors. The grade is done using the PQ EOTF. Then the entire show is analysed, and the average brightness and peak brightness are calculated. These two metadata points are called MaxCLL (Maximum Content Light Level) and MaxFALL (Maximum Frame Average Light Level). Using these, a downstream display can adjust the overall brightness of the program to accommodate the display’s max brightness.
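Conceptually the two numbers are simple to compute. The sketch below is a simplification (the real CTA-861.3 computation works on the max of R, G, B per pixel; here each pixel is treated as a single light level in nits):

```python
import numpy as np

# Simplified HDR10 static metadata:
# MaxCLL  = the brightest single pixel anywhere in the program, in nits.
# MaxFALL = the highest per-frame average light level, in nits.
def hdr10_metadata(frames):
    """frames: iterable of 2D arrays of pixel light levels in nits."""
    max_cll = 0.0
    max_fall = 0.0
    for frame in frames:
        max_cll = max(max_cll, float(frame.max()))
        max_fall = max(max_fall, float(frame.mean()))
    return max_cll, max_fall

# Two synthetic 2x2 "frames" in nits:
frames = [np.array([[100.0, 200.0], [50.0, 50.0]]),
          np.array([[1000.0, 10.0], [10.0, 10.0]])]
max_cll, max_fall = hdr10_metadata(frames)  # -> (1000.0, 257.5)
```

Because these are single static values for the whole show, one specular highlight in one shot can dominate the metadata, which is exactly the limitation the dynamic-metadata systems above and below address.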

HDR 10+

HDR 10+ is similar to Dolby Vision in that you analyse your shots and can set a trim that travels in metadata per shot. The difference is you do not have any color controls; instead, you can adjust points on a curve for a better tone map. These trims are exported as an XML file from your color corrector.

HLG

Hybrid log gamma is a logarithmic extension of the standard 2.4 gamma curve of legacy displays. The lower half of the code values use 2.4 gamma and the top half use a log curve. Combining the legacy gamma with a log curve for the HDR highlights is what makes this a hybrid system. This version of HDR is backwards compatible with existing displays and terrestrial broadcast distribution. There is no dynamic quantification of the signal; the display just shows as much of the signal as it can.
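The hybrid is visible right in the BT.2100 HLG OETF: below the knee it is a simple square root (close to a legacy gamma encode), and above it the curve switches to a log segment for the HDR highlights. A direct transcription of the published formula:

```python
import math

# BT.2100 HLG OETF: scene-linear light (0-1) -> signal value (0-1).
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_oetf(e):
    if e <= 1 / 12:
        return math.sqrt(3 * e)        # gamma-like segment for the lower range
    return A * math.log(12 * e - B) + C  # log segment for the HDR highlights

hlg_oetf(1 / 12)  # the crossover lands at signal value 0.5
hlg_oetf(1.0)     # peak scene light maps to ~1.0
```

The constants are chosen so the two segments meet smoothly at the crossover, which is what lets an SDR display read the lower half as a near-normal gamma picture.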


Deliverables

Deliverables change from studio to studio. I will list the most common ones here that are on virtually every delivery instruction document. Depending on the studio, the names of these deliverables will change but the utility of them stays the same.

PQ 16-bit TIFFs

This is the primary HDR deliverable, and some of the other masters on the list derive from it. These files typically have a D65 white point and are either Rec2020 or P3 limited inside of a Rec2020 container.

GAM

The Graded Archival Master has all of the color work baked in but does not have any output transforms. This master can come in three flavors, all of which are scene-referred:

ACES AP0 - Linear gamma 1.0 with ACES primaries, sometimes called ACES prime.

Camera Log - The original camera log encoding with the camera’s native primaries. For example, for Alexa, this would be LogC Arri Wide Gamut.

Camera Linear - This flavor has the camera’s original primaries with a linear gamma of 1.0.

NAM

The Non-graded Assembly Master is the equivalent of the VAM back in the day. It is just the edit with no color correction. This master needs to be delivered in the same flavor as your GAM.

ProRes XQ

This is the highest quality ProRes. It can hold 12-bits per image channel and was built with HDR in mind.

Dolby XML

This XML file contains all of the analysis and trim decisions. For QC purposes, it needs to be able to pass a check from Dolby’s own QC tool, Metafier.

IMF

Interoperable Master Format files can do a lot. For the scope of this article, we are only going to touch on the HDR delivery side. The IMF is created from an MXF made from JPEG 2000s. The JP2K files typically come from the PQ TIFF master. It is at this point that the XML file is married with the picture to create one nice package for distribution.


Near Future

Currently, we master for theatrical first for features. In the near future I see the “flippening” occurring. I would much rather spend the bulk of the grading time on the highest-quality master rather than the 48-nit limited-range theatrical pass. I feel like you get a better SDR version by starting with the HDR, since you have already corrected any contamination that might have been in the extreme shadows or highlights. Then you spend a few days “trimming” the theatrical SDR for the theaters. The DCP standard is in desperate need of a refresh; 250 Mbps is not enough for HDR or high-resolution masters. For the first time in film history, you can get a better picture in your living room than in most cinemas. This, of course, is changing and changing fast.

Sony and Samsung both have HDR cinema solutions that are poised to revolutionize the way we watch movies. Samsung has its 34-foot Onyx system, which is capable of 400-nit theatrical exhibition. You can see a proof-of-concept model in action today if you live in the LA area. Check it out at the Pacific Theatres Winnetka in Chatsworth.

Sony has, in my opinion, the winning solution at the moment. Their CLED wall is capable of delivering 800 nits in a theatrical setting. These types of displays open up possibilities for filmmakers to use a whole new type of cinematic language without sacrificing any of the legacy storytelling devices we have used in the past.

For example, this would be the first time in the history of film where you could effect a physiological change in the viewer. I have often thought about a shot I graded for “The Boxtrolls” where the main character, Eggs, emerges from a whole life spent in the sewers. I cheated an effect where the viewer’s eyes were adjusting to an overly bright world. To achieve this, I cranked the brightness and blurred the image slightly. I faded this adjustment off over many shots until your eye “adjusted” back to normal. The theatrical grade was done at 48 nits. At that light level, even at its brightest, the human eye is not irised down at all. But what if I had more range at my disposal? Today I would crank that shot until it made the audience’s irises close down. Then, over the next few shots, the audience would adjust back to the new, brighter scene and it would appear normal. That initial shock would be similar to the real-world shock of coming from a dark environment into a bright one.

Another such example that I would like to revisit is the myth of “L’Arrivée d’un train en gare de La Ciotat.” In this early Lumière picture, a train pulls into a station. The urban legend is that this film had audiences jumping out of their seats and ducking for cover as the train comes hurtling towards them. Imagine if we set up the same shot today, but in a dark tunnel. We could make the headlight so bright in HDR that, coupled with the sound of a rushing train, it would cause most viewers, at the very least, to look away as it rushes past. A 1000-nit peak after your eyes have acclimated to the dark can appear shockingly bright.

I’m excited for these and other examples yet to be created by filmmakers exploring this new medium. Here’s to better pixels and constantly progressing the art and science of moving images!

Please leave a comment below if there are points you disagree with or have any differing views on the topics discussed here.

Thanks for reading,

John Daro

How To - Dolby Vision

Dolby Vision - How To and Best Practices



What is Dolby Vision

Dolby Vision is a way to dynamically map HDR to different display targets. At its core, the system analyzes your media and transforms it to ranges less than your mastering target, typically SDR 100 nits.

Project Setup

The first step is to license your machine. Once that is in place, you need to set up your project. Go into settings and set your CMU (Content Mapping Unit) version. Back in the day we used an external box, but nowadays the software does it internally. You will also need to set which version: v4 is the current iteration, whereas v2.9 is a legacy version that some older TVs use. Finally, set your mastering display. In my case that is a Sony x300, set up for PQ P3 D65 1000 nits.

Baselight Project Setup

Resolve Project Setup

It’s not what you said, it’s your tone

The goal is to make our HDR master look good on an SDR display. To do this, we need to tone map our HDR ranges to the corresponding SDR ranges. This is a nonlinear relationship, and our shadows, mid-tones, and highlights will land in the wrong areas if we don’t tone map them first. See below for an example of an SDR image that has not been tone mapped correctly; you can see the highlights are way too hot. Now, we could use a curve and shape our image into a discrete master for SDR, but most studios and streamers are requesting a Dolby delivery regardless of whether a separate SDR grade was made. Plus, Dolby does a pretty decent job of getting you there quickly since the v4 release.

The first step is to analyze your footage. This will result in three values that set the tone curve: min, max, and average. These values inform the system how to shape the curve to get a reasonable rendering of your HDR master in SDR.

Image courtesy of Dolby

Tone mapping from HDR to SDR

What we are trying to do here is fit 10 pounds of chicken into an 8-pound bag. Something has to give, usually the bones, but the goal is to keep as much chicken as you can. Rather than toss data out, we instead compress it. The system calculates the min, max, and average light levels. The idea is to keep your average, or the “meat and potatoes” of your shot, intact while compressing the top and bottom ranges. The end result is an SDR image that resembles your HDR, only flatter.
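A toy tone map in the same spirit can make the idea concrete. To be clear, this is NOT Dolby’s actual content-mapping math; the anchor point and log rolloff here are arbitrary illustrative choices:

```python
import numpy as np

# Toy HDR->SDR tone map: pass the "meat and potatoes" below an anchor
# level through untouched, then compress everything above it with a
# logarithmic rolloff so highlights squeeze into the smaller SDR range.
def tone_map(nits, src_peak=1000.0, dst_peak=100.0, anchor=15.0):
    nits = np.asarray(nits, dtype=float)
    out = np.where(
        nits <= anchor,
        nits,  # shadows and mid-tones pass through unchanged
        anchor + (dst_peak - anchor)
        * np.log1p((nits - anchor) / anchor)
        / np.log1p((src_peak - anchor) / anchor),
    )
    return np.clip(out, 0.0, dst_peak)
```

The curve is continuous at the anchor and monotonic, so the flatter SDR result keeps the tonal ordering of the HDR original, which is the whole point of compressing rather than clipping.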

How a colorist goes about the analysis is just as important as the analysis itself. This is going to get into a religious debate more than a technical one and everything from this point on is my opinion based on my experiences with the tech. Probably not what Dolby would say.

The original design of the system wanted you to analyze every shot independently. The problem with this approach is it can take a consistent grade and make it inconsistent depending on the content. Say you had two shots from the same scene.

One side of the coverage shoots the character with a blown-out window behind them. The other side shoots into the darker part of the house. Even though you as a colorist have balanced them to taste, the Dolby analysis will have two very different values for these shots. To get around this, I find it is better to average the analysis for each scene vs. doing independent shots. The first colorist I saw work this way was my good friend and mentor Walter Volpatto. He went toe to toe with Dolby because his work was getting QC rejections based on his method. He would analyze only a grey ramp, with the d-min and d-max values representing his media, and apply that to his entire timeline. His thought process was: if it was one transform up to HDR, it should be one transform down.

Most studio QC operations now accept this approach as valid metadata (thank you, Wally!). While I agree with his thought process, I tend to work with one analysis per scene. Resolve has this functionality built in. When I’m working in Baselight, I set it up this way and copy the scene-averaged analysis to every shot in preparation for the trim.

Scene average analysis in Baselight.

Setting the tone

Now that your analysis is complete, it’s time to trim. First, you need to set which display output your trim is targeting and the metadata flag for the intended distribution. You can also set any masking that was used so the analysis doesn’t calculate the black letterbox pixels. The most common targets are Rec709 100 nits, P3 48 nits, and PQ 108 nits. The 709 trim is for SDR home distribution, whereas the other two are for theatrical distribution. The reason we want to keep the home video and cinema trims separate is that displays that fall in between two trim targets will be interpolated. You can see that the theatrical 108-nit trim is very close to the home video 100-nit trim, yet these two trims will be represented very differently, because the theatrical grade is intended for a dark theater vs. home viewing with dim surround lighting conditions. Luckily, Dolby recognized this, and that is why we have separation of church and state now. The process for completing these trims is the same, though; only the target changes.

Trim the fat

Saturation plus lift, gamma, and gain is the name of the game. You also have advanced tools for highlight clipping and mid-tone contrast. Additionally, you have very basic secondary controls to manipulate the hue and saturation of the six vectors.

Baselight Dolby trim controls.

Resolve Dolby trim controls.

These secondary controls are very useful when you have extremely saturated colors that are on the boundaries of your gamut. I hope Dolby releases a way to only target the very saturated color values instead of the whole range of a particular vector, but for now, these controls are all we have.

Mid tone offset

Another tool that affects the analysis data, but could be considered a trim, is the mid-tone offset. A good way to think about this tool is as a manual shifting of what your average is. It slides the curve up or down from the midpoint.

I usually find the base analysis and subsequent standard conversion a little thick for my taste. I start by finding a pleasing trim value that works for the majority of shots. Then I ripple that as a starting place and trim from there until I’m happy with the system’s output. The before-and-after below shows the standard analysis output vs. where I ended up with the trim values engaged.

Once you are happy with the trims for all of your needed outputs, it’s time to export. This is done by exporting the XML recipes that, when paired with your PQ master, will create all the derivative versions.

XML

Here are two screenshots of where to find the XML export options in Baselight and Resolve.

Right-click on your timeline -> Timelines -> Export -> Dolby XML

Shots View -> Gear Icon -> Export Dolby Vision Metadata… This will open a menu that lets you choose your location and set the primaries for the file.

The key here is to make sure that you are exporting an XML that reflects your deliverable, not your DSM. For example, I typically export PQ P3 D65 TIFFs as the graded master files. These are then taken into Transkoder, placed into a Rec2020 container, and married with the XML to create an IMF. It’s important to export a Rec2020 XML instead of a P3 one so that when it is applied to your deliverable, it yields the intended results. You can always open your XML in a text editor if you are unsure of your declared primaries. I have included a screen grab of what the XML should look like for Rec2020 primaries on the left and P3 primaries on the right. Always go by the numbers, because filenames can lie.

Rec2020 XML vs P3 D65

There is beauty in the simplicity of this system. Studios and streamers love the fact that there is only one serviceable master. As a colorist, I love the fact that when there is a QC fix, you only need to update one set of files and sometimes the XML. That’s a whole lot better than the height of the 3D craze, where you could have up to 12 different masters, and that is not even counting the international versions. I remember finishing Katy Perry’s “Part of Me” in 36 different versions. So, in retrospect, Dolby did us all a great service by transmuting all of those versions we used to painstakingly create into one manageable XML sidecar file.

Thanks for reading

I bet that in the future these trim passes end up going the way of the 4x3 version, especially with the fantastic HDR displays available from Samsung, Sony, and LG at continually lower price points. Remember, the Dolby system only helps you at home if your display is something other than what the media was mastered on. Until then, I hope this helps.

Check out this Dolby PDF for more information and a deeper dive into the definitions of the various XML levels. As always, thanks for reading.

-JD

How to - Convert 29.97i SD to 23.98p in Resolve

29.97i to 23.98p

Have you ever needed to convert older standard-definition footage from 29.97i to 23.98p for use in a new project? Resolve is no Alchemist, but it can get the job done in a pinch. Read below for the best practices approach to converting your footage for use in new films or documentaries.

Setup Your Project

I would recommend creating a new project just for conversions. First up, we will set up the mastering settings. Set the timeline resolution to 720x486 NTSC. Are you working in PAL? If so, set this to 720x576. All the other steps will be the same, but I will assume you are working with NTSC (North America and Japan) files going forward.


Usually we work with square pixels, with a pixel aspect of 1:1, for HD video and higher. Here we need to change the pixel aspect to conform to the older standard of 0.9:1. Remember when you were young and pushed your face right up against that old Zenith TV set? Perhaps you noticed the RGB rectangles, not squares, that made up the image. The 4:3 standard-definition setting accounts for that.

Finally and most importantly, we set the frame rate to 23.976, which is what we want for our output.

At this point, simply dragging a clip onto a timeline will result in it being converted to 23.98, but why does it look so steppy and bad? We need to tell Resolve to use high-quality motion estimation. This optical flow setting is the same engine that makes your speed effects look smooth. By setting it on the project page, we declare the default method for all time re-mapping to use the highest-quality frame interpolation, including frame rate conversions.


Leveling the Playing Field: 29.97i to 29.97p

Technically, 29.97i has a temporal sample rate of 59.94 half-resolution (single-field) images per second. Before we take 29.97 to 23.98, we need to take the interlaced half-frames and create whole frames. The setting to engage is Neural de-interlace, which can be found on the Image Scaling page. This will help with aliasing in your final output.

Now that all the project settings have been set, we are ready to create a timeline. A good double-check to make sure everything is behaving as expected is to do a little math.

First, we take the frame rate of our source divided by our target frame rate.

29.97 ÷ 23.976 = 1.25

This result is our retime factor.

Next, we use that factor multiplied by the number of frames in our converted timeline.

1.25 * 1445 = 1806.25

That result will be the original number of frames from the 29.97 source. If everything checks out, then it’s time to render.
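The sanity-check math above can be done exactly with fractions, which avoids rounding surprises with the NTSC rates (the 1445-frame timeline length is just the example count from the text):

```python
from fractions import Fraction

# Exact NTSC rates: 29.97 and 23.976 are really 30000/1001 and 24000/1001.
source_rate = Fraction(30000, 1001)   # 29.97i, in frames per second
target_rate = Fraction(24000, 1001)   # 23.976p

retime_factor = source_rate / target_rate          # -> 5/4, i.e. exactly 1.25

# A converted 23.98 timeline of N frames should have consumed
# N * 1.25 frames of the original 29.97 source.
converted_frames = 1445
source_frames = retime_factor * converted_frames   # -> 7225/4 = 1806.25
```

The fractional result (1806.25) is expected: the conversion does not map source frames one-to-one, it interpolates new frames in between.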

Rendering

I prefer to render at source resolution and run any upscaling steps downstream. You can totally skip this step by rendering to HD, 4k, or whatever you need.


I would recommend using Davinci’s Super Scale option if you are uprezing at the same time. This option can be accessed via the Clip Attributes... settings in the contextual menu that pops up when you right-click a source clip.


I hope this helps take your dusty old SD video and prepare it for some new life. This is by no means the “best” conversion out there. Alchemist is still my preferred standards conversion platform. Nvidia has also cooked up some amazing open-source tools for manipulating time via its machine vision toolset. All that said, Resolve does an amazing job, besting the highest quality, very expensive hardware from years past. The best part is it’s free.

Happy Grading,

JD

Best Practices: Restoring Classics

2020 - The year of Restorations

Now that we seem to be on the other end of the pandemic, I wanted to take a moment to look back on some of the projects that kept me busy. Restorations were the name of the game during covid times. With productions shut down and uncertainty in the theatrical marketplace, I had time in my schedule to breathe new life into some of my favorite classics.

Over the last year, I have restored:

Let’s take a look at a couple of these titles and talk about what it means to remaster a film with our contemporary toolset.

The Process

The process for remastering classic titles is very similar to finishing new theatrical work, with a couple of additional steps. The first step is to identify and evaluate the best elements to use. That decision is easy for digitally acquired shows from the early 2000s; in those instances, the original camera files are all that exist and are obviously the best source. Film shows are where it gets particularly ambiguous. There is a debate over whether starting from the IP or the original negative yields better results. Do we use the original opticals or recreate them from the elements? Black-and-white seps vs. faded camera neg? These questions all need to be answered before you begin the work. Usually, I prefer to start with the OCN when available.

Director Scanner


Arri Scan


Scanning

Scanning is arguably the most critical part of the process. Quality and success will live or die by the execution of great scans. Image breathing, movement, and general sharpness are issues to look for when evaluating. Scans should not be pretty, but rather represent a digital copy of the negative. In a perfect closed-loop system, a scanned piece of film, once shot back out on a calibrated recorder, should closely match the original negative.

Digital Restoration

The next step in making an old project shiny and new is to repair any damage to the film from aging or that was inherent in production. This includes painting out splice lines, gate hairs, dirt, and scratches. Film-processing issues like breathing or turbulence can also be taken care of in this step. I prefer to postpone flicker removal until the grading step, since the contrast will have an effect on the amount of flicker to remove. Some common tools used for restoration include MTI and PFClean. This work is often outsourced because of the high number of man-hours and labor costs associated with cleaning every frame of film. Some companies that do exceptional restoration work are Prime Focus and Prasad, among others.

Grading

Grading restoration titles is a sub-discipline all its own. New theatrical grading starts with references and look development to achieve a certain tone for the film, and a ton of work goes into that process. Restoration grading differs in that the goal is staying true to that original intent, not reimagining it. Much like new theatrical grading, a good reference will set you up for success. My preferred reference is a filmmaker-approved answer print. These were the master prints that best represented the filmmakers’ creative intent.

kinoton-fp30d-696x1024.jpeg

A good practice is to screen the print and immediately set looks for the scans, getting as close as possible at 14 fL projected. An upgrade to this workflow is to use a projector in the grading suite, like a Kinoton. These projectors have remote control and cooling, which allows you to rock and roll the film. You can even freeze-frame, and thanks to the built-in cooling your film doesn’t burn. Setting up a side-by-side of the film vs. digital is the best way to ensure you have a match to the original intent. These corrections need to happen in a good color management system. ACES, for example, has ODTs for theatrical 48 nits, which is the equivalent of 14 fL. Once you have a match to the original, the enhancement can start.
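The 48 nit / 14 fL equivalence is just unit conversion: one foot-lambert is about 3.426 cd/m² (nits). A minimal sketch:

```python
# 1 foot-lambert (fL) is ~3.426 cd/m^2 (nits), so the 14 fL theatrical
# projection standard works out to roughly 48 nits, the level the
# ACES theatrical ODT targets.
NITS_PER_FL = 3.426

def fl_to_nits(fl):
    return fl * NITS_PER_FL

def nits_to_fl(nits):
    return nits / NITS_PER_FL

fl_to_nits(14)  # ~48 nits
```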

There would be no point in remastering if it was going to look exactly like the existing master. One great reason to remaster is to take advantage of new advancements in HDR and wide color gamut formats. Film was the original HDR format, containing 12 stops of range. The print was the limiting factor, only able to display 8 of those stops. By switching the ODT to PQ P3D65, we can take advantage of the larger container and let the film display all that it has to offer.

My approach is to let the film land where it was originally shot but tone-mapped for a PQ display. This will give you a master that has the original intent of the print but in HDR. I often use an LMT that limits the gamut to that of the emulsion used for original photography. This also ensures that I’m staying true to the film’s original palette. Typically there is some highlight balancing to do, since what was white and “clipped” is now visible. Next is to identify and correct any areas where the contrast ratios have been disrupted by the increased dynamic range. For example, in a strongly silhouetted shot, the value of the HDR highlight can cause your eye to iris down, changing the perception of the deep shadows. In this case, I would roll off the highlights or lift the shadows so the ratio stays consistent with the original. The extra contrast HDR affords is often welcome, but it can cause some unwanted issues too. Grain appearance is one of those examples.



Grain Management

Film grain is one of those magic ingredients. Just like salt, you miss it when it is not there and too much ruins the dish. Grain needs to be felt but never noticed. It is common for the noise floor to increase once you have stretched the film scan to HDR ranges. Also, grain in the highlights that was not previously visible starts to be seen. To mitigate this, a grain management pass needs to be implemented. This can come before the grade, but I like to do it after, since any contrast I add will have an effect on the perceived amount of noise. Grain can impart a color cast to your image, especially if there is a very noisy blue channel. Once the grain is removed, this cast needs to be compensated for, which is a downside of working post-grade. It is during this pass that I will also take care of flicker and breathing, which the grade also affects. My go-to tool for this is Neat Video. You would think that after a decade of dominance some software company would have knocked Neat off its throne as king of the denoise, but it hasn’t happened yet. I prebake the scans with a Neat pass (since Baselight X doesn’t play nicely with Neat yet). Next, I stack the Neat’ed scan and the original as layers. This allows me to blend in the amount of grain to taste. The goal of this pass is to keep the grain consistent from shot to shot, regardless of the grade. The other, and most important, goal is to make the grain look as it did on the print.
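The layer stack described above is effectively a simple per-pixel mix between the denoised plate and the original scan. As a sketch (the blend amount is hypothetical, dialed to taste per shot):

```python
def blend_grain(denoised, original, amount):
    """Mix the denoised plate with the original scan per pixel.

    amount = 0.0 keeps the fully denoised image; amount = 1.0 restores
    all of the original grain. Values in between blend grain to taste.
    """
    return denoised * (1.0 - amount) + original * amount

# e.g. bring back 25% of the original grain on one pixel value
blend_grain(0.20, 0.28, 0.25)  # ~0.22
```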

Dolby Trim

After the HDR10 grade is complete, it’s time for the Dolby trim. I use the original 14 fL print-match version as a reference for where I want the Dolby trim to clip and crush. Once all the trims have been set, I export a Dolby XML expecting rec2020 primaries as input. Yes, we graded in P3, but that gamut will be placed into a 2020 container on export.

Mastering

Once all the work has been completed it’s time to master. Remasters receive the same treatment as new theatrical titles when it comes to deliverables. The common ones are as follows:

  • Graded PQ P3D65 1000nit 16bit Tiff Files or ACES AP0 EXRs

  • Un-Graded PQ P3D65 16bit Tiff files or ACES AP0 EXRs

  • Graded 2.6 XYZ DCDM 14fl

  • Graded PQ XYZ 108nit 16bit Tiff Files or ACES AP0 EXRs for Dolby Vision Theatrical

  • Bt1886 QT or DPX files created from a Dolby XML

  • IMF PQ rec2020 limited to P3D65 1000nit

Case Studies

Perfect worlds do exist, but we don’t live in one. Every job is a snowflake with its own unique hurdles. Remastering tests a colorist’s abilities across many disciplines of the job. Strong skills in compositing, paint, film manipulation, and general grading are required to achieve and maintain the original artistic intent. Here are two films completed recently and a bit on the challenges faced in each.

Teenage Mutant Ninja Turtles

For those of you who don’t know, the Ninja Turtles are near and dear to me. Not only was I a child of the 80s, but my father was in charge of post-production on the original cartoons. He also wrote and directed many of them. When this came up for a remaster, I jumped at the chance to get back to my roots.

This film only required an SDR remaster. The output delivery was to be P3 D65 2.6 gamma. I set up the job using Baselight’s color management and worked in T-Log E-Gamut. The DRS was performed by Prasad with additional work by yours truly BECAUSE IT HAD TO BE PERFECT!

dark_1.13.1.jpg

There were two main color hurdles to jump. First, some scenes were very dark. I used Baselight’s boost shadow tool to “dig” out detail from the toe of the curve. This was very successful in the many night scenes that the film takes place in.

Another trick I used was on the Turtles’ skin. You may or may not know, but all the turtles have different skin colors. Also, most folks think they are green, when in fact there is very little green in their skin; they are more of an olive. To make sure the ratio of green to yellow was correct, I converted to LAB and graded their skin in that color space. Once happy, I converted back to T-Log E-Gamut. LAB is a very useful space for affecting yellow tones. In this space, I was able to tweak their skin and nothing else. Sort of like a key and a hue shift all in one.
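The appeal of LAB here is that lightness (L*) is separated from the two chroma axes: a* runs green-to-red and b* runs blue-to-yellow. A toy sketch of nudging that green/yellow balance (the sample values are made up for illustration, not from the actual grade):

```python
import math

def adjust_chroma(a, b, hue_shift_deg=0.0, chroma_scale=1.0):
    """Rotate/scale the a*/b* vector of a Lab color, leaving L* alone.

    Rotating the hue angle trades green (-a*) against yellow (+b*)
    without touching lightness: a key and a hue shift in one.
    """
    chroma = math.hypot(a, b) * chroma_scale
    hue = math.atan2(b, a) + math.radians(hue_shift_deg)
    return chroma * math.cos(hue), chroma * math.sin(hue)

# Hypothetical olive skin sample: slightly green (a* < 0), strongly yellow (b* > 0)
a_new, b_new = adjust_chroma(-8.0, 35.0, hue_shift_deg=4.0)
```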

tmnt_lab.gif

The SDR ended up looking so good that we finished the HDR too. The HDR was quick and painless because of Baselight’s built-in color management. Most of the heavy lifting was already done and only a few tweaks were needed.




Space Jam

Space Jam was a formative film from my youth. Not only did I have Jordan’s at the time, but I was also becoming a fledgling animation nerd (thanks Dad) when this film was released.

I set up the project for ACES color management with a Kodak LMT that I had used for other films previously. This reined in the extreme edge-of-gamut colors utilized in the animation.

The biggest challenge on this project was cleaning up some of the inherent artifacts of 1990s film recording technology. Cinesite performed all of the original composites, but at the time they were limited to 1k film recording. To mitigate that in a 4k world, I used Baselight’s texture equalizer and convolutional sharpen to give a bit of snap back to the filmed-out sections.

Vishal Chathle supervised the restoration for the studio. Vishal and I boosted the Looney Tunes to have more color and take advantage of the wider gamut. The standard film shots, of which there were few, were pretty straightforward, corrected mostly with Baselight’s Basegrade. Basegrade is a fantastic tool whose corrections are performed in linear gamma. This yields a consistent result no matter what your working space is.

Joe Pytka came in to approve the grade. This was very cool for me since I grew up watching not only this film of his, but also all those iconic Super Bowl commercials he did in the 90s. A true master of camera. He approved the grade but wished there was something more we could do with the main title. The main title sequence was built using many video effects, and recreating it would have cost a fortune. We had the original film-out of it, but it looked pretty low-res. To remedy this, I ran it through an AI up-rezer that I coded a while ago for large-format shows.

The results were astounding. The titles regained some of the crisp edges that I can only presume were lost through the multiple generations of opticals the sequence went through. The AI was also able to fix the aliasing inherent in the low-res original. In the end, I was very proud of the result.

The last step was grain management. This show needed special attention because the grain from the Jordan plate was often different from the grain embedded in the animation plate he was comped into. In order to make it consistent, I ran two de-grain passes on the scan. The first took care of the general grain from the original neg. The second pass was tuned to clean up Jordan’s grain, which had the extra layer of optical grain over the top. It was a complicated noise pattern to take care of. Next, I took the two de-grained plates, roto’ed out Jordan, and re-comped him over the cleaned-up plate. This gave the comps a consistency that was not there in the original.

Another area where we helped the comps was in fixing animation errors. Some shots had layers that would disappear for a couple of frames, or, because it was hand-drawn, a highlight that would disappear and then reappear. I used Baselight’s built-in paint tool to repair the original animation. One great feature of the paint tool is its ability to paint on twos. An old animation trick is to animate at only 12fps if there isn’t a lot of motion, then shoot each frame twice. This halves the number of frames that need to be drawn. When I was fixing animation issues I would make a paint stroke on a frame and Baselight would automatically hold it for the next one. This cut down my work by half, just like the original animators!
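Painting on twos mirrors the original exposure trick. A purely illustrative sketch of the idea:

```python
def expose_on_twos(drawings):
    """Hold each drawing for two frames: 12 fps of artwork -> 24 fps.

    The paint tool does the same with strokes: a fix made on one frame
    is automatically held for the next, halving the repair work.
    """
    frames = []
    for drawing in drawings:
        frames.extend([drawing, drawing])
    return frames

expose_on_twos(["A", "B", "C"])  # ['A', 'A', 'B', 'B', 'C', 'C']
```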

I was honored to help restore this piece of animation history. A big thanks to Michael Borquez and Chris Gillaspie for the flawless scanning and deep investigation of the best elements to use. Also a tip of the cap to Vishal Chathle for all the hard work and lending me his eagle eye!

Final Thoughts

Restoration Colorist should be a credit of its own. It’s unfortunate that this work rarely gets recognized and even less frequently gets credited. It is hard enough to deliver a director’s artistic vision from scratch; it’s arguably even harder to stay true to it 30 years later. Thanks for reading, and check out these projects on HBO Max soon!

Recommendations for Home Color System

I have recently had a lot of inquiries about setting up a home color system. Even with the vaccines rolling out, I think coloring at home will stick around. There is no substitute for a full DI bay with best-in-class equipment, but this is some of the gear that I use when at home.

Color Correction Panels

These will vary based on the software used, but here are the ones that I think give you the most bang for your buck when using Baselight, Resolve, and others.

Filmlight Slate Panel for Baselight

Filmlight Slate

Now I love my Blackboard 2, but if desk real estate is at a premium the Slate is the next best thing. The great thing about this panel is it can double as a remote option for Daylight when on set. The downside is that this panel only works with Filmlight software.

BMD Micro Panel for Resolve

Blackmagic Micro

I find that I use the mouse a lot more when working in Resolve. For that reason, I like the Micro panel more than the Mini. The Micro gives you just the basics, but the additional features of the Mini do not justify the premium paid. I think you are better off getting a Stream Deck and mapping the missing buttons to that. Controlling curves in Resolve is still best done with the mouse, in my opinion. Just like the Filmlight offering, these panels only work with Resolve.

Tangent Elements Panel. Works with Mistika, Resolve, Flame, Lustre, Premiere, Scratch, Red Cine, Nucoda, Quantel

Tangent Elements

If you are like me, then you probably use multiple software packages to complete your work. One great aspect of the Tangent panels is that they work with many different types of software. An honorable mention goes out to the Ripple, especially if you are looking for a small footprint at an entry-level price.

Tangent Ripple Panel. Works with Mistika, Resolve, Flame, Lustre, Premiere, Scratch, Red Cine, Nucoda, Quantel

Monitoring

Get a Sony X300 if you can find one and can afford it. That said, not everyone has 30k to invest in their home setup. The LG CX has one of the best price-to-performance ratios out there. You won’t be hitting 1000 nits, but the color can be calibrated to be extremely close to the X300. Thanks to Dado for making this video that has all the manufacturer codes to unlock this display’s true potential and a walkthrough on Calman calibration.

LG CX

Sony X300

I’m also excited to test the new panels from LG Display. The next-gen LG panels found in the Sony A90J and LG G1 can achieve over 1000 nits. I’m expecting the same colorimetry performance with increased brightness. I will let you know what my real-world measurements are once I get my hands on these.

I don’t do affiliate links and am not paid by any of the manufacturers listed here. I just wanted to let everyone know what I’m using these days in the home office. Let me know what you all are using and if you have any hacks that you find helpful. Thanks for reading!






How To - AAF and EDL Export

AAF and EDL Exporting for Colorists

Here is a quick how-to on exporting AAFs and EDLs from an Avid bin. Disclaimer: this is for colorists, not editors!

Exporting an AAF:

First, open your project. Be sure to set the frame rate correctly if you are starting a new project or importing a bin from another.

Next, open the bin that contains the sequence you want to export an AAF from.

Select the timeline and right-click it. This sequence should already be cleaned for color, meaning: camera source on v1, opticals on v2, speed fx on v3, vfx on v4, titles and graphics on v5.

After you right-click, go “Output” -> “Export to File”

Navigate to the path that you want to export to. Then, click “Options”

In the “Export As” pulldown, select “AAF.” Next, un-check “Include Audio Tracks in Sequence” and make sure “Export Method:” is set to “Link to (Don’t Export) Media.” Then click “Save” to save the settings and return to the file browser.

Give the AAF a name and hit “Save.” That’s it! Your AAF is sitting on the disk now.

Exporting an EDL:

We should all be using AAFs to make our conform lives easier, but if you need an EDL for a particular piece of software or just want something that is easily read by a human, here you go.

Set up the project and import your bin the same as for an AAF. Instead of right-clicking on the sequence, go “Tools” -> “List Tool” and a new window will open. I’m probably dating myself, but back in my day this was called “EDL Manager.” List Tool is a huge improvement since it lets you export multi-track EDLs quickly.

Select “File_129” from the “Output Format:” pull-down. This allows tape names up to 129 characters, enough to hold a filename in most operating systems. Next, click the tracks you want to export.

Double-click your sequence in the bin to load your timeline into the record monitor. Then click “Load” in the “List Tool” window. At this point, you can click “Preview” to see your EDL in the “Master EDL” tab. To save, click the “Save List” pull-down and choose “To several files.” This option will make one EDL per video track. Choose your file location in the browser and hit save. That’s it. Your EDLs are ready for conforming or notching.

Alternate Software EDL Export

That’s great John, but what if I’m using something else other than Avid?

Here are the methods for EDL exports in Filmlight’s Baselight, BMD DaVinci Resolve, Adobe Premiere Pro, and SGO Mistika in rapid-fire. If you are using anything else… please stop.

Baselight

Open the “Shots” view (Win + H) and click the gear pull-down. Next click “Export EDL.” The exported EDL will respect any filters you may have in the “Shots” view, which makes it a very powerful tool, but also something to keep an eye on.

Resolve

In the media manager, right-click your timeline and select “Timelines“ -> “Export“ -> “AAF/XML/EDL“

Premiere Pro

Make sure your “Tape Name” column is populated.

Make sure to have your Timeline selected. Then go, “File“ -> “Export“ -> “EDL“

The most important setting here is “32 character names.” Sometimes this is called “File32” in other software. Checking this ensures the file name in its entirety (as long as it’s not longer than 32 characters) will be placed into the tape id location of the EDL.

Mistika

Set your marks in the Timespace where you want the EDL to begin and end. Then select “Media“ -> “Output“ -> “Export EDL2” -> “Export EDL.“ Once pressed you will see a preview of the EDL on the right.

No matter what the software is, the same rules apply for exporting.

  • Clean your timeline of unused tracks and clips.

  • Ensure that your program has leader and starts at an hour or ##:59:52:00

  • Camera source on v1, Opticals on v2, Speed FX on v3, VFX on v4, Titles and Graphics on v5

Many of us are running lean right now. I hope this helps the folks out there who are working remotely without support and the colorists who don’t fancy editorial or perhaps haven’t touched those tools in a while.

Happy Grading!

JD

HDR - Flavors and Best Practices

Better Pixels.

Over the last decade we have had a bit of a renaissance in imaging display technology. The jump from SD to HD was a huge bump in image quality. HD to 4k was another noticeable step in making better pictures, but had less of an impact than the previous SD-to-HD jump. Now we are starting to see 8k displays and workflows. Although this is great for very large screens, this jump has diminishing returns for smaller viewing environments. In my opinion, we are at the point where we do not need more pixels, but better ones. HDR, or High Dynamic Range, images along with wider color gamuts are allowing us to deliver that next major increase in image quality. HDR delivers better pixels!

Last Summer_lake.jpg

Stop… What is dynamic range?

When we talk about the dynamic range of a particular capture system, what we are referring to is the delta between the blackest shadow and the brightest highlight captured. This is measured in stops, typically with a light meter. A stop is a doubling or a halving of light. This power-of-2 way of measuring light is perfect because of its correlation to our eyes’ logarithmic nature. Your eyeballs never “clip,” and a perfect HDR system shouldn’t either. The brighter we go, the harder it becomes to see differences, but we never hit a limit.
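Because each stop doubles the light, the number of stops between two luminance values is just a base-2 logarithm. A minimal sketch:

```python
import math

def stops_between(brighter, darker):
    """Stops separating two luminance values (each stop doubles light)."""
    return math.log2(brighter / darker)

stops_between(1600, 100)  # 4.0 -- 100 doubled four times is 1600
```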

Unfortunately, digital camera sensors do not work the same way as our eyeballs. Digital sensors have a linear response, a gamma of 1.0, and they do clip. Most high-end cameras convert this linear signal to a logarithmic one for post manipulation.

lin_curve_anim_wide.gif

I was never a huge calculus buff but this one thought experiment has served me well over the years.

Say you are at one side of the room. How many steps will it take to get to the wall if each time you take a step, the step is half the distance of your last? This is the idea behind logarithmic curves.

It will take an infinite number of steps to reach the wall, since we can always halve the half.

range_shift_ANIM.gif

Someday we will be able to account for every photon in a scene, but until that sensor is made we need to work within the confines of the range that can be captured.

For example, if the darkest part of a sampled image is the shadows and the brightest part is 8 stops brighter, that means we have a range of 8 stops for that image. The way we expose a sensor or a piece of celluloid changes based on a combination of factors, including aperture, exposure time, and the general sensitivity of the imaging system. Depending on how you set these variables, you can move the total range up or down in the scene.

Let’s say you had a scene range of 16 stops, going from the darkest shadow to direct hot sun. Our imaging device in this example can only handle 8 of the 16 stops present. We can shift the exposure to be weighted towards the shadows, the highlights, or the Goldilocks sweet spot in the middle. There is no right or wrong way to set this range; it just needs to yield the picture that helps promote the story you are trying to tell in the shot. A 16-bit EXR file can handle 32 stops of range, much more than any capture system can currently deliver.
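In linear light, shifting where that captured window sits is just a multiplication by a power of two. A sketch, using 0.18 as the conventional mid-grey:

```python
def shift_exposure(linear_value, stops):
    """Move a scene-linear value up or down by a number of stops."""
    return linear_value * (2.0 ** stops)

shift_exposure(0.18, 1.0)   # 0.36 -- one stop brighter
shift_exposure(0.18, -1.0)  # 0.09 -- one stop darker
```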

Latitude is how far you can recover a picture from over- or under-exposure. Often latitude is conflated with dynamic range. In rare cases they are the same, but more often than not your latitude is less than the available dynamic range.

Film, the original HDR system.

Film, from its creation, always captured more information than could be printed. Contemporary stocks have a dynamic range of 12 stops. When you print that film, you have to pick the best 8 stops to show, via printing with more or less light. The extra dynamic range was there in the negative but was limited by the display technology.

Flash forward to our digital cameras today. Cameras from Arri, Red, Blackmagic, and Sony all boast dynamic ranges over 13 stops. The challenge has always been the display environment. This is why we need to start thinking of cameras not as the image creators but as the photon collectors for the scene at the time of capture. The image is then “mapped” to your display creatively.

Scene referred grading.

The problem has always been: how do we fit 10 pounds of chicken into an 8-pound bag? In the past, when working with these HDR camera negatives, we were limited to the range of the display technology being used. The monitors and projectors before their HDR counterparts couldn’t “display” everything that was captured on set, even though we had more information to show. We would color the image to look good on the device for which we were mastering. “Display-referred grading,” as this is called, limits your range and bakes in the gamma of the display you are coloring on. This was fine when the only two mediums were SDR TV and theatrical digital projection. The difference between 2.4 video gamma and 2.6 theatrical gamma was small enough that you could make a master meant for one look good on the other with some simple gamma math. Today the deliverables and masters are numerous, with many different display gammas required. So before we even start talking about HDR, our grading space needs to be “scene-referred.” What this means is that once we have captured the data on set, we pass it through the rest of the pipeline non-destructively, maintaining the relationship to the original scene lighting conditions. “No pixels were harmed in the making of this major motion picture” is a personal mantra of mine.
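That “simple gamma math” can be sketched directly: decode the signal to display light with the source gamma, then re-encode with the destination gamma.

```python
def remap_gamma(signal, src_gamma=2.4, dst_gamma=2.6):
    """Re-encode a display signal for a display with a different gamma.

    signal ** src_gamma recovers the intended display light; raising it
    to 1/dst_gamma finds the code value that produces the same light on
    the destination display.
    """
    light = signal ** src_gamma
    return light ** (1.0 / dst_gamma)

remap_gamma(0.5)  # ~0.527: a slightly brighter code value for the 2.6 display
```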

I’ll add the tone curve later.

There are many different ways of working scene-referred; the VFX industry has been working this way for decades. The key point is that we need a processing space large enough to handle the camera data without hitting the boundaries, i.e. clipping or crushing in any of the channels. This “bucket” also has to have enough samples (bit-depth) to withstand aggressive transforms. 10 bits are not enough for HDR grading; we need to be working in full 16-bit floating point.

This is a bit of an exaggeration, but it illustrates the point. Many believe that a 10 bit signal is sufficient enough for HDR. I think for color work 16 bit is necessary. This ensures we have enough steps to adequately describe our meat and potatoes part of the image in addition to the extra highlight data at the top half of the code values.

Bit-depth is like butter on bread. Not enough and you get gaps in your tonal gradients. We want a nice smooth spread on our waveforms.

Now that we have our non-destructive working space, we use transforms or LUTs to map to our displays for mastering. ACES is a good starting point for a working space and a set of standardized transforms, since it works scene-referred and is always non-destructive if implemented properly. The gist of this workflow is that the sensor linearity of the original camera data has been retained. We are simply adding our display curve for our various different masters.

Stops measure scenes, Nits measure displays.

For measuring light on set we use stops. For displays we use a measurement unit called the nit. Nits are a measure of peak brightness, not dynamic range. A nit is equal to 1 cd/m2. I’m not sure why there are two units with different nomenclature for the same measurement, but for displays we use the nit. Perhaps “candelas per meter squared” was just too much of a mouthful. A typical SDR monitor has a brightness of 100 nits, and a typical theatrical projector has a brightness of 48 nits. There is no set standard for what is considered HDR brightness; I consider anything over 600 nits HDR. 1000 nits, or 10 times brighter than legacy SDR displays, is what most HDR projects are mastered to. The Dolby Pulsar monitor is capable of displaying 4000 nits, which is the highest achievable today. The PQ signal accommodates values up to 10,000 nits.

The Sony x300 has a peak brightness of 1000 nits and is current gold standard for reference monitors.

The Dolby Pulsar is capable of 4000 nit peak white

P-What?

Rec2020 color primaries with a D65 white point

The most common scale to store HDR data is the PQ Electro-Optical Transfer Function. PQ stands for Perceptual Quantizer. The PQ EOTF was standardized when SMPTE published the transfer function as SMPTE ST 2084. The color primaries most often associated with PQ are rec2020. BT.2100 is used when you pair the two: the PQ transfer function with rec2020 primaries and a D65 white point. This is similar to how BT.1886 is defined as rec709 primaries with an implicit 2.4 gamma and a D65 white point. It is possible to have a PQ file with different primaries than rec2020; the most common variant is P3 primaries with a D65 white point. Ok, sorry for the nerdy jargon, but now we are all on the same page.
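For reference, the ST 2084 inverse EOTF (absolute luminance in nits to PQ signal) is compact enough to sketch:

```python
# SMPTE ST 2084 (PQ) constants
m1 = 2610 / 16384        # ~0.1593
m2 = 2523 / 4096 * 128   # 78.84375
c1 = 3424 / 4096         # 0.8359375
c2 = 2413 / 4096 * 32    # 18.8515625
c3 = 2392 / 4096 * 32    # 18.6875

def pq_encode(nits):
    """Inverse EOTF: absolute luminance (0..10000 nits) -> PQ signal (0..1)."""
    y = nits / 10000.0
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

pq_encode(100)    # ~0.51 -- SDR white lands just above mid-signal
pq_encode(10000)  # 1.0  -- full-scale PQ
```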



2.4_vs_PQ.png

HDR Flavors

There are four main HDR flavors in use currently. All of them use a logarithmic approach to retain the maximum amount of information in the highlights.

Dolby Vision

Dolby Vision is the most common flavor of HDR out in the field today. The system works in three parts. First you start with your master, which has been graded using the PQ EOTF. Next you “analyse” the shots in your project to attach metadata about where the shadows, highlights, and meat-and-potatoes of your image are sitting. This is considered Layer 1 metadata. Next, this metadata is used to inform the Content Mapping Unit, or CMU, how best to “convert” your picture to SDR and lower-nit formats. The colorist can “override” this auto conversion using a trim that is then stored in Layer 2 metadata, commonly referred to as L2. The trims you can make include lift, gamma, gain, and sat. In version 4.0, out now, Dolby has also given us secondary controls for six-vector hue and sat. Once all of these settings have been programmed, they are exported into an XML sidecar file that travels with the original master. Using this metadata, a Dolby Vision-equipped display can use the trim information to tailor the presentation to the max nits it is capable of displaying, on a frame-by-frame basis.

HDR10

HDR10 is the simplest of the PQ flavors. The grade is done using the PQ EOTF, then the entire show is analysed. The peak brightness and the maximum frame-average brightness are calculated. These two metadata points are called MaxCLL (Maximum Content Light Level) and MaxFALL (Maximum Frame Average Light Level). Using these, a downstream display can adjust the overall brightness of the program to accommodate the display’s max brightness.
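The MaxCLL/MaxFALL computation itself is simple enough to sketch. This assumes per-pixel light levels already expressed in nits (per CTA-861.3 the per-pixel value is typically max(R, G, B) in linear light); the function name is illustrative:

```python
def hdr10_light_levels(frames):
    """frames: iterable of 2D lists of per-pixel light levels in nits.
    Returns (MaxCLL, MaxFALL) for the whole program."""
    max_cll = 0.0   # brightest single pixel anywhere in the program
    max_fall = 0.0  # brightest frame-average in the program
    for frame in frames:
        pixels = [p for row in frame for p in row]
        max_cll = max(max_cll, max(pixels))
        max_fall = max(max_fall, sum(pixels) / len(pixels))
    return max_cll, max_fall

# Two toy 2x2 frames: a dim frame, then one with a 1000-nit specular highlight
frames = [[[10, 10], [10, 10]], [[1000, 10], [10, 10]]]
print(hdr10_light_levels(frames))  # (1000, 257.5)
```

Note that these are static, program-wide values: one pair of numbers for the whole show, unlike the per-shot trims of Dolby Vision.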

HDR10+

HDR10+ is similar to Dolby Vision in that you analyse your shots and can set a trim that travels in metadata per shot. The difference is that you do not have any color controls; instead, you can adjust points on a curve for a better tone map. These trims are exported as an XML file from your color corrector.

HLG

Hybrid Log-Gamma is a logarithmic extension of the standard 2.4 gamma curve of legacy displays. The lower half of the code values use a conventional gamma curve and the top half use a log curve. Combining the legacy gamma with a log curve for the HDR highlights is what makes this a hybrid system. This version of HDR is backwards compatible with existing displays and terrestrial broadcast distribution. There is no dynamic metadata; the display just shows as much of the signal as it can.
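In the BT.2100 reference OETF, the lower segment is a square root (classic gamma territory) and the upper segment is logarithmic, with the constants chosen so the two halves meet seamlessly at a code value of 0.5. A minimal sketch:

```python
import math

# HLG constants from ITU-R BT.2100
A = 0.17883277
B = 1.0 - 4.0 * A                 # 0.28466892
C = 0.5 - A * math.log(4.0 * A)   # ~0.55991073

def hlg_oetf(e: float) -> float:
    """Scene-linear signal E in [0, 1] -> HLG code value [0, 1].
    Below the break the curve is a square root; above it, a log curve
    carries the HDR highlights -- hence "hybrid"."""
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)
    return A * math.log(12.0 * e - B) + C
```

Plugging in the break point (E = 1/12) gives exactly 0.5 from both segments, which is what lets a legacy display read the bottom half of the signal as ordinary gamma video.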

HDR_curve_8_19_anim.gif

Deliverables

Deliverables change from studio to studio. I will list the most common ones here, the ones that show up on virtually every delivery instruction document. Depending on the studio, the names of these deliverables will change, but their utility stays the same.

PQ 16-bit Tiffs

This is the primary HDR deliverable, and it is used to derive some of the other masters on the list. These files typically have a D65 white point and are either Rec2020 or P3-limited inside of a Rec2020 container.

GAM

The Graded Archival Master has all of the color work baked in but does not have any output transforms. This master can come in three flavors, all of which are scene-referred:

ACES AP0 - Linear gamma 1.0 with ACES primaries, sometimes called ACES prime.

Camera Log - The original camera log encoding with the camera’s native primaries. For example, for an Alexa, this would be LogC with Arri Wide Gamut primaries.

Camera Linear - This flavor has the camera’s original primaries with a linear gamma of 1.0.

NAM

The Non-graded Assembly Master is the equivalent of the VAM back in the day. It is just the edit with no color correction. This master needs to be delivered in the same flavor as your GAM.

ProRes XQ

This is the highest quality ProRes. It can hold 12-bits per image channel and was built with HDR in mind.

Dolby XML

This XML file contains all of the analysis and trim decisions. For QC purposes, it needs to pass a check from Dolby’s own QC tool, Metafier.

IMF

Interoperable Master Format files can do a lot. For the scope of this article, we are only going to touch on the HDR delivery side. The IMF is created from an MXF made from JPEG 2000s. The jp2k files typically come from the PQ tiff master. It is at this point that the XML file is married with the picture to create one nice package for distribution.


Near Future

Currently we master for theatrical first for features. In the near future I see the “flippening” occurring. I would much rather spend the bulk of the grading time on the highest-quality master than on the 48-nit, limited-range theatrical pass. I feel you get a better SDR version by starting with the HDR, since you have already corrected any contamination that might have been in the extreme shadows or highlights. Then you spend a few days “trimming” the theatrical SDR for the theaters. The DCP standard is in desperate need of a refresh; 250 Mbps is not enough for HDR or high-resolution masters. For the first time in film history, you can get a better picture in your living room than in most cinemas. This, of course, is changing, and changing fast.

Sony and Samsung both have HDR cinema solutions that are poised to revolutionize the way we watch movies. Samsung has their 34-foot Onyx system, which is capable of 400-nit theatrical exhibition. You can see a proof-of-concept model in action today if you live in the LA area. Check it out at the Pacific Theatres Winnetka in Chatsworth.

Sony has, in my opinion, the winning solution at the moment. Their CLED wall is capable of delivering 800 nits in a theatrical setting. These types of displays open up possibilities for filmmakers to use a whole new type of cinematic language without sacrificing any of the legacy storytelling devices we have used in the past.

For example, this would be the first time in the history of film where you could effect a physiological change in the viewer. I have often thought about a shot I graded for “The Boxtrolls” where the main character, Eggs, comes out from a whole life spent in the sewers. I cheated an effect where the viewer’s eyes were adjusting to an overly bright world. To achieve this, I cranked the brightness and blurred the image slightly, then faded this adjustment off over many shots until your eye “adjusted” back to normal. The theatrical grade was done at 48 nits. At that light level, even at its brightest, the human eye is not irised down at all. But what if I had more range at my disposal? Today I would crank that shot until it made the audience’s irises close down. Then, over the next few shots, the audience would adjust back to the new, brighter scene and it would appear normal. That initial shock would be similar to the real-world shock of coming from a dark environment into a bright one.

Another such example that I would like to revisit is the myth of “L’Arrivée d’un train en gare de La Ciotat.” In this early Lumière picture, a train pulls into a station. The urban legend is that the film had audiences jumping out of their seats and ducking for cover as the train came hurtling towards them. Imagine if we set up the same shot today, but in a dark tunnel. We could make the headlight so bright in HDR that, coupled with the sound of a rushing train, it would cause most viewers, at the very least, to look away as it rushes past. A 1,000-nit peak after your eyes have acclimated to the dark can appear shockingly bright.

I’m excited for these and other examples yet to be created by filmmakers exploring this new medium. Here’s to better pixels and constantly progressing the art and science of moving images!

Please leave a comment below if there are points you disagree with or have any differing views on the topics discussed here.

Thanks for reading,

John Daro