
M&E Journal: Get the Most Out of Your Colour Grading Session, While Preserving Artistic Vision

Ray Dolby founded Dolby Laboratories in 1965. Since the earliest days of the company, Dolby’s mission has been to harness science and engineering to improve media fidelity, enabling consumers to experience audiovisual content the way its creators intended.

Ray Dolby’s first invention was the Dolby Noise Reduction System, which reduced distracting noise present in early audio tape recordings. Today, Dolby is known for more than just sound. We helped pioneer high dynamic range (HDR) imaging as it exists for consumers today. Dolby invented a method of representing HDR video using the perceptual quantiser (PQ) transfer function, standardised as SMPTE ST 2084, which underlies the open HDR10 standard as well as our proprietary Dolby Vision format.
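For readers who want to see the maths, here is a minimal Python sketch of the ST 2084 PQ transfer function, using the constants published in the standard. It converts between absolute luminance (in cd/m², or nits) and the nonlinear 0-to-1 signal that HDR10 and Dolby Vision streams carry.

```python
# Minimal sketch of the SMPTE ST 2084 perceptual quantiser (PQ) transfer
# function: absolute luminance (0 to 10,000 cd/m^2) <-> nonlinear 0-1 code value.
# The constants come directly from the standard.

M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875
PEAK = 10000.0           # PQ is defined against a 10,000-nit ceiling


def pq_encode(luminance_nits: float) -> float:
    """Absolute luminance in cd/m^2 -> nonlinear PQ code value in [0, 1]."""
    y = max(luminance_nits, 0.0) / PEAK
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2


def pq_decode(code_value: float) -> float:
    """Nonlinear PQ code value in [0, 1] -> absolute luminance in cd/m^2."""
    e = max(code_value, 0.0) ** (1 / M2)
    return PEAK * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)


# A 1,000-nit highlight sits at roughly 75 percent of the PQ signal range:
print(round(pq_encode(1000.0), 3))   # ~0.752
print(round(pq_decode(0.752), 0))    # ~1000 nits
```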

HDR video for consumers is typically colour graded and finished on high-end video mastering displays costing tens of thousands of dollars. Consumer HDR TVs, tablets and smartphones have come a long way over the last few years, but they still cannot match the peak brightness and colour gamut of professional reference displays. Nor should they: how many consumers would want to shell out $30,000 for their living-room TV?

For this reason, consumer devices must accommodate bright and vibrant HDR content by translating the incoming video signal into a smaller container at playback using a process called display mapping (DM).

The luminance and colour decisions the colourist made on the original mastering display are “mapped” into the smaller dynamic range and colour gamut capabilities of the consumer device.

The goal is always to preserve the creative intent of the filmmakers all the way to the consumer. What a consumer sees at home should closely match the look the colourist created and approved on the mastering display during the colour grading process.
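As a toy illustration of what display mapping does (this is not Dolby’s proprietary algorithm, and the parameter values are made up), the sketch below compresses the part of the luminance range that a target display cannot reproduce while passing the rest of the image through unchanged.

```python
# Illustrative only: a toy luminance mapping from a bright mastering display
# to a dimmer consumer display. The real Dolby Vision mapping is far more
# sophisticated and operates on colour as well as luminance; this sketch just
# shows the general idea of compressing the range the target cannot reproduce.

def map_luminance(nits: float, mastering_peak: float = 4000.0,
                  target_peak: float = 600.0, knee: float = 0.75) -> float:
    """Map a pixel luminance (cd/m^2) from the mastering range to the target
    range, linearly compressing highlights above a knee point."""
    knee_nits = knee * target_peak
    if nits <= knee_nits:
        return nits                      # below the knee, pass through as graded
    # Above the knee, squeeze the remaining mastering range into the
    # headroom the target display actually has.
    headroom = target_peak - knee_nits
    excess = (nits - knee_nits) / (mastering_peak - knee_nits)
    return knee_nits + headroom * min(excess, 1.0)


print(map_luminance(4000.0))   # 600.0 -> a 4,000-nit highlight lands at the target peak
print(map_luminance(300.0))    # 300.0 -> a mid-tone passes through unchanged
```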

Working for consumers

Consumer Dolby Vision video combines HDR images with metadata that changes from shot to shot. The dynamic metadata in a Dolby Vision video stream guides dynamic display mapping in a consumer device to optimise the amount of image detail preserved in each scene. Some scenes will be very bright, some will be very dark and others will fall somewhere in between.

A generic display mapping transform in a TV that works well for very bright images is unlikely to be as successful when handling dark images. Using a static approach in which every scene in a movie or TV episode is mapped the same way would inevitably cause some of the original image detail to be lost.
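To see why per-shot metadata matters, consider the hypothetical sketch below: a static approach commits to one mapping for the whole programme, while a dynamic approach can adapt shot by shot based on the analysed luminance. The parameters and thresholds here are purely illustrative, not Dolby’s algorithm.

```python
# Illustrative only: static vs. metadata-driven mapping parameters.

def static_curve(target_peak_nits: float) -> dict:
    """One fixed mapping for every shot in the programme: a compromise that
    over-compresses dark scenes and may still clip very bright ones."""
    return {"source_peak": 4000.0, "knee": 0.75 * target_peak_nits}


def dynamic_curve(shot_max_nits: float, target_peak_nits: float) -> dict:
    """Per-shot metadata lets the mapping adapt to each scene."""
    if shot_max_nits <= target_peak_nits:
        # A dark interior that already fits on the target display
        # needs no highlight compression at all.
        return {"source_peak": shot_max_nits, "knee": target_peak_nits}
    # A bright exterior only compresses the range it actually uses.
    return {"source_peak": shot_max_nits, "knee": 0.75 * target_peak_nits}


print(dynamic_curve(shot_max_nits=180.0, target_peak_nits=600.0))
print(dynamic_curve(shot_max_nits=3200.0, target_peak_nits=600.0))
```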

Dolby Vision dynamic metadata is generated in the colour grading system, adjusted by the colourist and ultimately transmitted to a TV or other playback device in an encoded video stream.

The metadata describes the original mastering display used by the colourist and the luminance characteristics calculated for each individual shot, and it can incorporate dynamic trims chosen by the colourist to ensure the most accurate mapping from mastering display to playback device. The inputs to that display mapping process are the HDR video signal, the Dolby Vision metadata and information about the playback device’s own display characteristics.
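A simplified data model of what that metadata carries, together with the three inputs to display mapping, might look like the following sketch. The class and field names are hypothetical stand-ins, not the actual Dolby Vision metadata format.

```python
# Illustrative only: a simplified model of the metadata the article describes
# and of the three inputs to display mapping. Field names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class MasteringDisplay:
    peak_nits: float      # e.g. a 4,000-nit mastering monitor
    min_nits: float
    gamut: str            # e.g. "P3-D65"


@dataclass
class ShotAnalysis:
    min_nits: float       # darkest pixel luminance in the shot
    avg_nits: float       # average ("mid") luminance of the shot
    max_nits: float       # brightest pixel luminance in the shot


@dataclass
class ShotMetadata:
    mastering_display: MasteringDisplay
    analysis: ShotAnalysis
    trims: list = field(default_factory=list)   # optional per-target colourist trims


def display_map(hdr_frame, metadata: ShotMetadata,
                target_peak_nits: float, target_gamut: str):
    """The three inputs named above: the HDR signal, the Dolby Vision
    metadata, and the playback device's own display characteristics."""
    ...  # the mapping itself is Dolby's proprietary algorithm
```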

Dolby Vision display mapping yields excellent image fidelity whether a display is capable of 300 nits of peak luminance or more than 1,000 nits. When Dolby developed the post-production content creation tools for our Dolby Vision consumer format, we weren’t just enabling the creation of compelling HDR images that maintain their integrity across a broad range of display types. We were also determined to develop an efficient workflow that would allow multiple “grades” to be derived from a single master.

The Dolby Vision metadata isn’t solely used by display mapping: it can also enable a post-production process called content mapping (CM), which generates new video files that have the Dolby Vision mapping process baked in.

Content mapping takes in the original HDR video and Dolby Vision metadata and outputs a new deliverable that is optimised for a specified luminance target, even down to 100 nits for SDR. We believe this streamlined colour grading workflow best carries the quality and detail of the HDR grade down into the less capable SDR deliverable. It is also extremely efficient in terms of time required in the grading suite: on a typical one-hour show, the Dolby Vision SDR trims can be accomplished in as little as an afternoon, as opposed to another full round of regrading for an SDR version.
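In workflow terms, content mapping amounts to running the mapping offline over every shot and writing out a new deliverable per target. The sketch below only shows the shape of that process; map_shot is a hypothetical stand-in for the actual Dolby Vision mapping.

```python
# Illustrative only: content mapping "bakes in" the mapping offline for a
# fixed target, instead of mapping live on the playback device.

def map_shot(shot_frames, shot_metadata, target_peak_nits: float):
    """Placeholder for the per-shot Dolby Vision mapping (proprietary)."""
    return shot_frames  # the real tools would tone- and gamut-map here


def content_map(shots, metadata, target_peak_nits: float):
    """Render one derived deliverable optimised for a fixed luminance target."""
    return [map_shot(frames, md, target_peak_nits)
            for frames, md in zip(shots, metadata)]


# A single HDR master plus its metadata can yield several derived versions:
# sdr_100  = content_map(shots, metadata, target_peak_nits=100.0)   # SDR deliverable
# hdr_1000 = content_map(shots, metadata, target_peak_nits=1000.0)  # mid-range HDR
```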

Netflix and other studios have adopted this as the primary workflow to deliver Dolby Vision, HDR10 and SDR versions of their original shows.

Feedback from content creators

When asked for his take on the Dolby Vision workflow, Dean Devlin of Electric Entertainment in Los Angeles told Dolby that “as spectacular as HDR is, we’ve basically ignored it because it was cost prohibitive for us to colour time it as well as doing a second version in SDR.

“With the Dolby Vision tools and workflow, however, we find that we can spend our energy and money on creating the best version possible, and only have to do the process once.” We have also considered feedback from content creators around the world as our product and engineering teams have refined Dolby Vision over time.

We launched a more flexible CM/DM algorithm, v4.0, at IBC 2018. In developing v4.0 of the Dolby Vision content creation tools, we heard that it was important to introduce more flexibility into the trim controls, which are the means by which the colourist exerts creative influence on what is otherwise an algorithmic mapping process.

In many cases, filmmakers and colourists want the look of their derived SDR grade to adhere very closely to the original HDR grade. Other times, they may actually prefer a slightly different look in the SDR version.

The original Dolby Vision colour grading tools had 6 trim controls to adjust the mapped output at each defined luminance target. CM v4.0 has 9 primary and 12 secondary trim controls, including 6-vector hue adjustments to more precisely map hues, which can be especially important when shifting from P3 or Rec. 2020 to the more limited Rec. 709 colour gamut.
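As a rough illustration of what a richer trim set gives the colourist, per-target trim metadata might be modelled as in the sketch below. The control names are hypothetical and do not correspond to the actual v4.0 parameter list.

```python
# Illustrative only: per-target trims with six-vector hue adjustments.
# One adjustment per colour vector (red, yellow, green, cyan, blue, magenta),
# so specific hues can be nudged individually when mapping from P3 or
# Rec. 2020 down to the smaller Rec. 709 gamut.

from dataclasses import dataclass, field

VECTORS = ("red", "yellow", "green", "cyan", "blue", "magenta")


@dataclass
class SixVectorTrim:
    hue_shift: dict = field(default_factory=lambda: {v: 0.0 for v in VECTORS})
    saturation: dict = field(default_factory=lambda: {v: 0.0 for v in VECTORS})


@dataclass
class TargetTrim:
    target_peak_nits: float              # e.g. 100.0 for the SDR trim pass
    lift: float = 0.0                    # primary-style controls (illustrative names)
    gamma: float = 0.0
    gain: float = 0.0
    saturation: float = 0.0
    six_vector: SixVectorTrim = field(default_factory=SixVectorTrim)


# Example: warm up reds slightly in the 100-nit SDR derivation only.
sdr_trim = TargetTrim(target_peak_nits=100.0)
sdr_trim.six_vector.hue_shift["red"] = 2.0   # arbitrary units, purely illustrative
```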

We’re encouraged by the feedback we’ve received from the post-production community so far.

Keith Shaw is a colourist at Keep Me Posted in Burbank, Calif. Shaw has been the final colourist on Showtime’s Ray Donovan for multiple seasons. He graded a demo clip from Ray Donovan in Dolby Vision, although the show itself was graded in SDR only.

About CM v4.0, Shaw says, “The improved and additional SDR trim tools available in 4.0 gave me the ability to produce an SDR image that more accurately represented the original HDR version.

“During the process I noticed that in certain scenes where I had struggled to maintain highlight detail during my original SDR colour session, the Dolby Vision SDR version now yielded a more detailed image on the top end, and I even decided to insert those scenes into my original SDR deliverable for Showtime.”

Like most MESA members, we have multiple constituencies to please, from the post-production vendors who rely on our technology to do their jobs to the clients and consumers who enjoy the fruits of that effort.

That has always been at the heart of Dolby’s mission, and continues to be true with Dolby Vision.

* By Thomas Graham, Head of Dolby Vision Content Enablement, and Graef Allen, Manager, Technical Operations, Dolby Laboratories
