I recently received a great question, and it made me realize that something very fundamental in color is shrouded in mystery for many, so I figured I’d open up the conversation here. As we’ve transitioned into HDR-native pipelines, the requirements for bit depth and color precision have increased significantly. The "good enough" standards of the SDR era simply break down when you start pushing dynamic range.
For any professional HDR deliverable, 12-bit is the absolute minimum. In a wide-dynamic-range HDR container (like PQ/ST 2084), 10-bit often lacks the code-value density to prevent "banding" or contouring in fine gradients, particularly in smooth, slowly changing areas like skies and near-black shadows.
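To put a number on "code-value density," here is a minimal sketch in Python/NumPy, using the published ST 2084 EOTF constants, that compares the size of a single quantization step (in nits) around a 100-nit signal for 10-bit versus 12-bit PQ:

```python
import numpy as np

# SMPTE ST 2084 (PQ) EOTF constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_code_to_nits(code, bits):
    """Map integer PQ code values to absolute luminance (cd/m^2)."""
    v = code / (2 ** bits - 1)              # normalize to 0..1
    e = v ** (1 / M2)
    y = np.maximum(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1 / M1)

# Luminance jump between adjacent code values around 100 nits.
for bits in (10, 12):
    nits = pq_code_to_nits(np.arange(2 ** bits), bits)
    i = int(np.searchsorted(nits, 100.0))
    print(f"{bits}-bit PQ near 100 nits: ~{nits[i + 1] - nits[i]:.3f} nits per step")
```

The 12-bit step comes out roughly four times finer, which is exactly the 4,096 vs. 1,024 ratio in the table below.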
The Numbers Game
Think of it like digital audio. A 44.1 kHz or 48 kHz sample rate describes the number of time slices used to approximate a sound wave, while audio bit depth describes how finely each slice’s amplitude is measured. Bit depth plays that same amplitude role for pictures: it’s essentially how many discrete "steps" you have to define a gradient from absolute black to absolute white.
Here is the math on the available steps per channel (with a quick code sketch after the list):
8-bit: 2^8 = 256 steps (Standard Web/SDR)
10-bit: 2^10 = 1,024 steps (Broadcast HDR)
12-bit: 2^12 = 4,096 steps (Cinema/Dolby Vision)
16-bit: 2^16 = 65,536 steps (High-end Finishing/VFX)
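The same arithmetic as a quick sketch in code (nothing vendor-specific here):

```python
# Steps per channel, and how fine each step is relative to full range.
for bits in (8, 10, 12, 16):
    steps = 2 ** bits
    print(f"{bits:>2}-bit: {steps:>6,} steps (smallest step = 1/{steps - 1:,} of range)")
```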
Integer vs. Float: The Container vs. The Canvas
We currently use a 16-bit float ACES (Academy Color Encoding System) pipeline for modern finishing. The reason is twofold, and it comes down to the difference between Integer and Floating Point math.
1. Precision vs. Quantization
While 12-bit Integer is often the delivery spec, we work in 16-bit Float to maintain mathematical precision during the color grade. Every transform we apply (LMTs, look dev, secondary corrections) benefits from that extra headroom. If you work in 10-bit or 12-bit Integer, every operation snaps values to the nearest code value, and those rounding errors accumulate into visible artifacts.
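Here is a toy demonstration of that snapping, assuming a hypothetical grade that gains a smooth ramp down and then back up (Python/NumPy):

```python
import numpy as np

def quantize(x, bits=10):
    """Snap values to the nearest code value of an integer encoding."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# A smooth 2048-step ramp from black to white.
ramp = np.linspace(0.0, 1.0, 2048, dtype=np.float32)

# Float path: gain down, then back up -- fully reversible.
float_path = (ramp * 0.5) * 2.0

# 10-bit path: snap to code values after each operation, the way an
# integer pipeline would. The down-gain collapses several input steps
# onto each shared code value, and the up-gain cannot separate them again.
int_path = quantize(quantize(ramp * 0.5, bits=10) * 2.0, bits=10)

print("distinct levels, float path: ", np.unique(float_path).size)  # 2048
print("distinct levels, 10-bit path:", np.unique(int_path).size)    # ~512
```

Those lost levels are exactly the contouring you end up seeing as banding in a sky or a vignette.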
It’s also worth noting that GPUs favor standard data alignments (8-, 16-, or 32-bit). A 12-bit word usually gets padded out to a 16-bit word in VRAM anyway, so trying to "save space" with 12-bit processing is often a false economy: it is just as heavy on the machine, but with less precision.
2. Linear Light Workflow
High-end animation and VFX are rendered in "Linear Light." In a 16-bit Half-Float (OpenEXR) format, we can represent values far above "white" (1.0). This allows us to store specular highlights (the sun, reflections, explosions) that might be 5.0, 10.0, or even 100.0 times brighter than "white."
If we used an Integer format (like 10-bit or 12-bit), those values would be clipped at the ceiling (1.0). By using 16-bit Float, we have a future-proof master that can be re-mapped to any current or future display standard (SDR, HDR10, Dolby Vision, or even LED walls) without clipped highlights.
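A minimal illustration of the difference, with float16 standing in for the OpenEXR half type (Python/NumPy, hypothetical highlight values):

```python
import numpy as np

# Scene-linear highlight values well above diffuse white (1.0).
highlights = np.array([1.0, 5.0, 10.0, 100.0], dtype=np.float32)

# Half-float keeps the overbrights intact (it can represent up to ~65504).
print(highlights.astype(np.float16))   # [  1.   5.  10. 100.]

# A normalized 10-bit integer encoding has a hard ceiling at code 1023 (= 1.0).
levels = 2 ** 10 - 1
clipped = np.clip(np.round(highlights * levels), 0, levels) / levels
print(clipped)                         # [1. 1. 1. 1.] -- everything above white is gone
```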
The Bottom Line
Essentially, 10-bit and 12-bit Integer are "container" formats, usually reserved for the final deliverable (the file you watch). 16-bit Float is the professional standard for the "canvas" we work on, ensuring no data is lost during the finishing process.
I always say, "No pixels were harmed in the making of this major motion picture." ;)
Cheers and happy grading,
JD