John Daro

A Brief History of HDR

When I started in this industry, video was still in its standard-definition days. Film was king, and it was the only HDR capture we had at the time. Film captures 12-13 stops of range on the negative, but once printed you are choosing the best 8 of those stops.

HD video cameras evolved from their standard-def predecessors. We got a wider color gamut with Rec. 709, but the range of the cameras fell flat compared to film. One was lucky to get 7 stops out of an old Sony F900. It was around this time that Steve Yedlin and I loaded logarithmic gamma profiles into Sony's just-released User Gamma feature on their F950 series cameras. Viewing through a print simulation LUT, we could see what the end result would look like on film with no additional manipulation. He was way ahead of his time even back then. This approach worked as one of the first digital log capture methods, but we were still trying to fit 10 pounds of chicken into an 8-pound bag.
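
If the log part sounds abstract, here's a rough sketch of what a generic log encode does. To be clear, the curve and the numbers below are made up for illustration; this is not Sony's actual User Gamma math.

```python
import numpy as np

# Illustrative only -- a generic log curve, not Sony's actual User Gamma math.
# The point of a log encode: give every stop of scene light roughly the same
# number of code values so more dynamic range survives a limited-bit recording.

def linear_to_log(linear, per_stop=0.1):
    """Map linear scene light (0.18 = mid grey) to a 0-1 log-encoded signal."""
    linear = np.maximum(linear, 1e-6)                # avoid log of zero
    return np.clip(per_stop * np.log2(linear / 0.18) + 0.5, 0.0, 1.0)

def log_to_linear(signal, per_stop=0.1):
    """Invert the curve to get back to linear light for grading or a print LUT."""
    return 0.18 * np.exp2((signal - 0.5) / per_stop)

# Mid grey lands at 0.5, and each stop up or down moves it by `per_stop`.
stops = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
print(linear_to_log(0.18 * np.exp2(stops)))          # [0.1, 0.3, 0.5, 0.7, 0.9]
```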

In 2007, a grade school friend of mine, Matt Holwick, and I created a system that allowed for the capture of high dynamic range images using the technology available to us at the time. Stereoscopic 3D was all the rage back then. It tends to come and go, and we were lucky it was making a resurgence, which gave us access to a beam splitter rig. I won't get into the geeky details, but essentially this rig uses a mirror so that two cameras can shoot the same scene close together. The usual spacing is 66mm, the average inter-ocular separation of an adult human. Oops, I said I wasn't going to get geeky. Anyway, what we did was remove that separation completely, so the setup was essentially acquiring two shots of the same picture. Next, we overexposed one camera by 2 stops and underexposed the other by 2. This bumped our Sony F35s from being a 10-stop camera to a 12-stop camera; the other 2 stops were used for blending. The system was briefly considered by some of the major studios but quickly got shot down for the added complexity it introduced. Nobody wants to be on set using a beam splitter with two cameras and not getting 3D, especially when shooting on film gives you more range with less complication and an established pipeline. Please see below for the last surviving remnant of that early HDR process, a highly compressed h.264. It's possible the original is still sitting on an old Isilon somewhere at the lab, or it's in an e-waste pile by now.

HDR capture system developed by Matt Holwick and John Daro in 2007
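
For the curious, the blend itself is easy to picture. Here's a bare-bones sketch of the idea in Python; it assumes the two frames are already aligned and converted to linear light, and the crossover point is an illustrative guess, not what we actually used.

```python
import numpy as np

# Sketch only: assumes both frames are aligned, in linear light, with 1.0 = sensor clip.
def merge_exposures(bright, dark, stops_apart=4.0):
    """Merge a +2 stop frame and a -2 stop frame (4 stops apart) into one HDR frame."""
    ratio = 2.0 ** stops_apart            # the +2 frame sees 16x more light than the -2
    dark_matched = dark * ratio           # put the dark frame on the bright frame's scale
    # Trust the bright camera in the shadows and mids; as it nears clip,
    # cross-fade to the dark camera, which still holds the highlight detail.
    w = np.clip((bright - 0.75) / 0.2, 0.0, 1.0)
    return (1.0 - w) * bright + w * dark_matched

bright = np.array([0.02, 0.40, 1.00])       # +2 stop camera: the last pixel has clipped
dark = np.array([0.00125, 0.025, 0.30])     # -2 stop camera: that highlight is still intact
print(merge_exposures(bright, dark))        # [0.02, 0.40, 4.8] -- the highlight comes back
```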


The next step in this evolution came when RED released their HDRx mode. It took the same idea Matt and I had but packaged it neatly inside one camera. That meant you only had to deal with one file, one lens, and, unfortunately, one sensor. The way RED made this work was to read the sensor values at different times within the exposure. They would first read the sensor quickly, creating a low-exposure pass for the highlights, and then read it again at the normal time for that exposure, creating the standard picture. You could then combine those two exposures in software to create an HDR frame. The challenge I found was that they were two different exposures from two different moments in time, albeit less than 1/24 of a second apart. Say you were shooting a fast-moving car: you would see a ghost of the car sitting slightly behind the highlights. That said, this was still great technology, especially on locked-off shots.
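
To put a rough number on that ghost (my math, not RED's specs), even a fraction of a frame is a long time for a fast-moving subject:

```python
# Back-of-the-envelope ghost math; the 1/48 s offset is a guess for illustration.
speed_mps = 100 / 3.6              # a car at 100 km/h covers about 27.8 m every second
read_offset_s = 1.0 / 48           # hypothetical gap between the short read and the normal read
print(speed_mps * read_offset_s)   # ~0.58 m of travel between reads -- a visible trailing edge
```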


Which brings us to the current generation of capture. Alexa, RED, and Sony cameras now achieve 14-15 stops with no tricks, just great sensors. BMD makes an HDR camera that fits in your pocket. Our color correction systems work in 32-bit floating point, so that range is never compromised. We have display technology that can achieve a quarter of the brightness of reflected sunlight. Metadata now maps creative intent, eliminating the need for alternate masters. I must say, it's a good time to be a colorist.