DEBUNKING "HDR"

Steve, I personally don't know enough about HDR deliverables to engage in any kind of intellectual argument on the subject. Jon posted on his forum about what he thought of the presentation, at the link below.

 
Thank you Rand.
 
@Jon Pais Thank you for providing your counterpoints to Steve's presentation. I would say, perhaps consider removing the part in your blog post that describes him as a rambling cult leader, as that isn't necessary and might prevent a healthy discussion that Steve himself might even be willing to engage in. Opening your blog post that way takes away from some of its professionalism and taints the entire post rather than inviting a good debate. Just something to consider if you have more posts coming.

I know many people over at Fotokem, and they have an amazing team of colorists, film timers, color scientists, etc., all working on extremely high-end projects. Steve works with Fotokem a lot on his projects. I think his goal in the presentation is to present what he sees as some misconceptions about HDR, especially from the perspective of a cinematographer. He has a wealth of knowledge about color spaces, transforms, film chemistry, and obviously lighting and camera technology. A lot of what Steve is saying makes sense to me, but I also admit that my knowledge of HDR is still quite limited, so perhaps I don't know what I don't know.
Anyway, I think it's very fair to make counterpoints. I just suggest doing so respectfully so that it continues the discussion and doesn't get dismissed based on unprofessional tone.
 
With all due respect, Yedlin’s coining of offensive terminology like ‘punching through the ceiling’ to refer to HDR highlights & repeating it several dozen times throughout the presentation, ensuring that we’ll be hearing this imbecilic phrase ad nauseam from his fans for years to come; his condescending attitude and disagreeable snickering; his dishonesty in presenting BT.2100 as if that’s what colorists master in rather than the industry-standard P3-D65 (and then having the temerity to suggest that we need a new color space between BT.1886 & Rec.2020!); his failure to acknowledge that there already exists an HDR format better suited than PQ for viewing in different environments and on displays with less capability than the reference grading monitor - HLG; his unwillingness to concede that HDR highlights can and do serve an expressive purpose, and on and on… all prevent me from taking Yedlin seriously. Is spending two hours listening to Yedlin going to make you a better filmmaker?

People like Charles Poynton stand like towering giants next to Yedlin. Any of Cullen Kelly’s articles or videos (prior to coming into contact with the HDR-loathing Yedlin) where he talks about topics like HDR and film emulation are actually worthy of being called brilliant.

As for Yedlin’s photography, let’s take a random Netflix show, the Korean drama Weak Hero: Class 2 (2025), for example: the color, lighting, texture, photography, skin tones and above all, the highlights, are all arguably superior to those of Yedlin’s obnoxious Glass Onion, with its unnatural, dead-looking highlights.
 
I am, however, grateful to Yedlin for his demonstration proving once and for all that shadow detail is decidedly not HDR’s greatest advantage, as many filmmakers ignorantly repeat. Nor is it color. It’s the highlights.

 
Okay, again, no issues with counterpoints - mostly just tone. I think it would be interesting to have a follow-up presentation where these counterpoints could be presented and debated professionally. I might be interpreting Steve's 'punching through the ceiling' differently than you, so I would certainly be interested to see your presentation/demonstration to counter his thoughts on this. I'd also be curious to hear thoughts from those in attendance such as Roger Deakins, Cullen Kelly, and others. I always keep an open mind about this stuff. I just know that while working on Knives Out, I was very impressed with Steve's grain algorithm we were implementing into the dailies. And I got to see a very rare film vs digital comparison, as one scene was filmed with both.
 
Film grain looks awful in streamed content. Netflix uses AV1, which denoises the footage then adds synthesized grain back, but it’s got a lot of issues. In the only study of its kind, participants found AV1 film grain synthesis (FGS) to be suboptimal.

A Subjective Study of Film Grain Synthesis for the Preservation of Creative Intent, authored by Jatin Sapra, Kai Zeng and Hojat Yeganeh

At Demuxed 2024, Li-Heng Chen from Netflix talked about AV1 and Film Grain Synthesis in the session "A hitchhiker's guide to AV1 deployment." The key takeaways were that substantial bitrate savings (around 30%) were possible with AV1 even without FGS, and that video denoising often introduces artifacts or image softness, leading to degraded video and incorrect film grain table generation (repeating patterns, or inconsistent grain structure or intensity). Netflix's solution today is to drop FGS for scenes/shots where denoising or noise estimation fails.
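For anyone curious what FGS actually does, here's a toy sketch of the concept in numpy: denoise the source, summarize the removed grain with a handful of per-band parameters, and re-synthesize grain on the decoded picture. This is purely illustrative of the idea, not the AV1 grain model or Netflix's actual pipeline:

```python
# Toy sketch of the film grain synthesis (FGS) idea: denoise the source,
# describe the removed grain with a few per-band parameters, then
# re-synthesize grain on the decoded picture. Conceptual only -- this is
# not the AV1 grain model or Netflix's pipeline.
import numpy as np

def box_denoise(frame, k=3):
    """Very crude denoiser: k x k box filter."""
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros_like(frame)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / (k * k)

def estimate_grain_table(frame, denoised, bands=8):
    """Grain strength (std dev of the residual) per luma band --
    a stand-in for a real film grain parameter table."""
    residual = frame - denoised
    edges = np.linspace(0.0, 1.0, bands + 1)
    table = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (denoised >= lo) & (denoised < hi)
        table.append(residual[mask].std() if mask.any() else 0.0)
    return np.array(table)

def synthesize_grain(decoded, table, seed=0):
    """Add synthetic grain back, scaled by the per-band strengths."""
    rng = np.random.default_rng(seed)
    bands = len(table)
    band = np.clip((decoded * bands).astype(int), 0, bands - 1)
    return np.clip(decoded + rng.standard_normal(decoded.shape) * table[band], 0.0, 1.0)

# Fake "camera" frame: a smooth gradient plus grain that is stronger in shadows.
rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.05, 0.95, 256), (256, 1))
grainy = np.clip(clean + rng.standard_normal(clean.shape) * (0.06 - 0.04 * clean), 0, 1)

denoised = box_denoise(grainy)                 # what the encoder would actually compress
table = estimate_grain_table(grainy, denoised) # parameters carried alongside the bitstream
restored = synthesize_grain(denoised, table)   # what the decoder would display

print("estimated grain std per band:", np.round(table, 3))
```

The whole scheme stands or falls on the denoising and noise-estimation steps, which is exactly where the artifacts described above come from.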

 
That's distribution/encoding. I was simply referring to the film grain algorithm applied to the uncompressed footage I was viewing in optimal conditions. I thought Steve's grain looked very nice and seemed to respond appropriately in shadows, mids, and highs. What happens to the grain downstream, well that's a totally different matter.
 
“What happens to the grain downstream, well that's a totally different matter.”

@Steve Sherrick How the grain of a movie or TV series will appear once it’s distributed is not only of immense concern to streamers; it should also be taken into consideration by filmmakers. As a matter of fact, some colorists/DPs actually preview how their work will look prior to streaming publicly by watching it on Netflix’s private channel.

Cinematographer Armando Salas & Light Iron colorist Ian Vertovec did just that when they worked together on the series Griselda (2024), precisely to evaluate how the grain, which was applied pretty heavily on the show, as well as the halation, would appear when streamed to viewers at home.

Walter Volpatto, who graded Yedlin’s “Star Wars: The Last Jedi” while he was senior colorist at FotoKem, no longer advises adding film grain:

"Unfortunately, I'm changing my mind on grain and I'll explain why. First of all, I love grain. I love the idea of grain; and for me, it reminds me of film. I mean, there is really nothing more obvious than grain to tell you this has been shot on film, and Interstellar was shot on film, like Dunkirk, like others, so the grain was naturally what they feel the negative... actually in this case, [the] interpositive - was giving you. And I always like to add grain to my projects. There is a problem, and it's how we actually watch footage nowadays - Netflix or Amazon or TV, whatever - the compression system is killing the details. So there is a problem where the more grain you add, the more hard the compression algorithm has to work in order to keep those details and the more compression you get on the actual overall image. So nowadays, unless the image already has grain or unless the director of photography really wanted me to add a bunch of grain, I tend to not add it anymore because you'll never see what I'm seeing. The compression will kill it. It's just the way it works. [...] Usually, if somebody's shot on film, it's because they like the texture and I usually tend to just preserve it. Just don't do anything stupid to it; try to preserve it as much as possible, try to blend it and that's about it. But for the longest time I was the one asking, you know, add grain. Now I don't do it anymore.”

After all, no one is more fanatical than Yedlin when it comes to how streamed content looks, as you can hear in his presentation, and that includes grain.
 
If ‘scene white’ made no sense to you, you’re not alone. Not only is it not a thing, Yedlin was compelled to re-record the voiceover in order to clarify it - and, seven minutes in, you’ll be every bit as clueless as you were before.

Yedlin does not even appear to understand what diffuse (or reference) white is, that it even exists, or its relationship to highlights (in HDR, highlights refer to specular reflections and emissives); nor does he understand that photographers should no longer be exposing for 18% gray when shooting for HDR delivery.

The reason Yedlin refuses to talk about diffuse white? Perhaps because then he’d be forced to admit that 200 nits are not highlights at all.
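For reference, here's roughly where those levels sit on the PQ curve, using the SMPTE ST 2084 inverse EOTF, with 18% grey at about 26 cd/m² and diffuse/reference white at 203 cd/m² per ITU-R BT.2408 (the 1,000-nit entry is just an example highlight level):

```python
# Where a few luminance levels land on the PQ curve (SMPTE ST 2084 inverse
# EOTF). The nit values for 18% grey (~26 cd/m2) and HDR reference /
# diffuse white (203 cd/m2) follow ITU-R BT.2408; this is a sanity check,
# not a grading recommendation.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    """PQ signal level (0..1) for an absolute luminance in cd/m2."""
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

for label, nits in [("18% grey", 26), ("diffuse / reference white", 203),
                    ("specular highlight", 1000), ("PQ peak", 10000)]:
    print(f"{label:>26}: {nits:>5} nits -> {pq_encode(nits) * 100:5.1f}% PQ signal")
```

That puts 203 nits at roughly 58% of the PQ signal range - reference white territory, not a highlight.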

 
Yedlin empowers the cult by persuading them that their ignorance is as good as your knowledge.

The assertion that the relativity of human vision somehow undermines the superiority of HDR rests on a fundamental misunderstanding of how human vision works. HDR has an incomparably wider range of brightness, contrast and detail than low dynamic range (LDR) video, creating a more immersive experience. This expanded luminosity mimics the real-world dynamic range our eyes naturally handle, even as our perception adjusts. Our eyes adapt to ambient conditions, prioritizing contrast and detail over absolute luminance, but this adaptation in no way negates the value of a wider dynamic range. On the contrary, when watching HDR content, this adaptive response reinforces the perception of realism, heightening the sense of immersion and, in turn, increasing emotional impact. Claims of HDR’s superiority don’t fall apart but are instead reinforced by an understanding of human visual relativity.
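To put a rough number on "incomparably wider", here's a back-of-the-envelope comparison in photographic stops. The black and peak levels are illustrative assumptions (a 100-nit SDR reference grade versus a 1,000-nit HDR grading display), not standards:

```python
# Back-of-the-envelope dynamic range in photographic stops. The black and
# peak levels below are illustrative assumptions, not standards.
from math import log2

displays = {
    "SDR reference grade (0.05 - 100 nits)": (0.05, 100),
    "HDR grading display (0.005 - 1000 nits)": (0.005, 1000),
}

for name, (black, peak) in displays.items():
    print(f"{name}: {log2(peak / black):.1f} stops")
```

Even with conservative figures like these, the HDR display has several additional stops of usable range.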

 
As it turns out, Steve Yedlin told Vulture that there are actually two versions of Glass Onion: a grainier version for theaters and a more restrained version for Netflix. So kudos to him.

 
I had a dream that Netflix and other streaming platforms would stream Blu-ray quality files to subscribers for an added fee, sort of a high-bit-rate type of service. But it was only a dream.

When I watch VOD or whatever other type of stream is available to me via Comcast, etc., I can't believe, for the most part, how poor the streaming quality is. Is 6 bits of color even a thing? Is dynamic compression something we should be happy about?

I wish that the elite Hollywood filmmakers would create their own streaming channel that offers at least Blu-ray quality streaming.

But hey, don't listen to me as I think Yedlin is a genius! His HDR video is pure gold to me. And I found it to be 99.99% spot on.
 
That is the most ridiculously misinformed video I have ever seen in regards to HDR, but it makes sense with all the institutional inbreeding that goes on in this industry. Yedlin just seems to fundamentally misunderstand scene-referred and display-referred, and also seems to not understand that HLG already exists. A lot of people really struggle with the idea of no longer having the 0-1 range be your working limitation. Anyone familiar with VFX pipelines knows they have been working in scene-referred linear for the past 20 or so years, and it is fairly trivial to remap that working space to any display output space. It's good to see more and more of the savvy colorists out there moving to this workflow as well, so their content isn't limited to the display they are mastering on.
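To make the scene-referred point concrete, here's a minimal sketch of the shape of that workflow: keep the working image in unbounded scene-linear and only choose a display transform at output. The Reinhard-style tone map, the 2.2 gamma and the choice to pin scene value 1.0 at 203 nits are illustrative assumptions; real pipelines use OCIO/ACES output transforms rather than anything this crude:

```python
# Minimal sketch of a scene-referred working space feeding multiple display
# outputs. The working image stays in unbounded scene-linear; a display
# transform is only applied at the very end. The simple tone map, the 2.2
# gamma and the 203-nit anchoring of scene value 1.0 are illustrative
# assumptions, not a real output transform.
import numpy as np

# Scene-linear values: 0.18 is mid grey, and values well above 1.0 (a bright
# window, a specular highlight) are legitimate data, not something to clip.
scene_linear = np.array([0.02, 0.18, 1.0, 4.0, 16.0])

def to_sdr(x, exposure=1.0):
    """Scene-linear -> SDR-ish signal: Reinhard tone map + 2.2 gamma."""
    x = x * exposure
    return (x / (1.0 + x)) ** (1.0 / 2.2)

def to_pq(x, diffuse_white_nits=203.0, peak_nits=1000.0):
    """Scene-linear -> PQ signal, pinning scene value 1.0 at diffuse white."""
    nits = np.clip(x * diffuse_white_nits, 0.0, peak_nits)
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

print("scene-linear:", scene_linear)
print("SDR signal:  ", np.round(to_sdr(scene_linear), 3))
print("PQ signal:   ", np.round(to_pq(scene_linear), 3))
```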

Jon Pais already covered it all better than I can, but all I can say is don't waste your time watching this if you are interested in learning more about HDR. This idea that HDR is a detriment is just absolute nonsense. Yedlin and other DPs like him forcing SDR into HDR containers is why you get people upset about not being able to see dark content (think the Game of Thrones ending). They are actively choosing to significantly limit what their cameras are capable of, to match an aesthetic that was driven by the technical limitations of older displays. They decide to make their HDR grade look the same as their 100-nit SDR grade side by side, which is a horrible way to work because of perception alone. If you are doing that, you are effectively putting a volume limiter on how bright someone can view that content on a Dolby Vision-capable display - essentially the same as an audio engineer forcing a volume limit on a music master. Not everyone is going to watch your content in a controlled environment - almost no one, in fact. So actively choosing to ruin those people's experience, or trying to put training wheels on it, is very misguided. Not only that, almost all of these SDR-in-HDR-container masters would look significantly better if they pushed even just a little beyond the ~200-nit limitation they are applying to themselves. Dull, overly tone-mapped highlights aren't particularly compelling to look at, nor are overly lifted shadows.

Yes, being overly aggressive in an HDR grade and boosting highlights and contrast beyond what is natural also isn't a good approach. And choosing to limit and tone map and clip your highlights also isn't a good approach. Being able to choose how you use contrast (and having that additional range to do so) is hardly a limitation or detriment.

The only reason grading SDR at 100 nits "works" is because people luckily aren't limited to viewing on a 100-nit display. Almost all SDR-mastered content is viewed brighter than it was originally mastered because of this self-imposed 100-nit display-referred limitation. Even in a controlled grading suite, 100 nits looks very dull. It is very easy to make HDR grades that look very natural and beautiful without having to look "hyper real" or overly contrasty, and there are countless beautiful examples out there. Something a lot of people don't seem to understand is that anyone viewing SDR content on a modern TV or modern computer monitor is likely viewing it at an expanded range well beyond the 0-100 nit grading standard, even when viewing in a dim, controlled environment. And no, that doesn't perceptually look the same as what the filmmaker sees in the grading suite.
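That last point is easy to put numbers on. BT.1886 is a relative standard, so the same SDR code value lands at a very different absolute luminance depending on the display's peak. The sketch below assumes a simplified gamma-2.4 display with zero black level, and the 100/300/600-nit peaks are just illustrative consumer settings:

```python
# The same SDR code value shown on displays with different peak luminance.
# Simplified BT.1886-style EOTF with zero black level: L = peak * signal**2.4.
# The 100 / 300 / 600 nit peaks are illustrative consumer settings, not specs.
signals = {"18% grey (signal ~0.49)": 0.49, "diffuse white (signal 1.0)": 1.0}
peaks = [100, 300, 600]

for label, v in signals.items():
    nits = {peak: round(peak * v ** 2.4, 1) for peak in peaks}
    print(f"{label}: {nits} cd/m2")
```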

Yedlin doesn't make HDR content, so listening to him discuss it isn't really worthwhile. He treats his audience like they need training wheels, puts SDR in an HDR container, and doesn't seem to understand why that makes his work look worse to most viewers. Even worse, he is actively lying about the things he is informed about.
 
This whole discussion about film grain is also a bit ridiculous. It has already been solved: a little more bitrate; it's not complicated. Film grain survives beautifully in shows like The Studio, Severance, and many other shows and films on AppleTV+. Netflix and others just make an active decision to be cheap and limit bitrate so much that it destroys the content.

So I guess unless they decide to improve their shitty encoding, don't make content for services that destroy it.
 
Pretty sure the first time I saw the video, the overdubs were already in place as I could hear the change in the audio. So maybe I didn't see and hear the presentation as it was originally recorded.
I can assure you Steve does understand scene-referred and display-referred. If you disagree with his perspective on this, why not professionally challenge those ideas with a presentation of your own, or contact him and ask if he would be willing to debate these issues publicly? In fact, I'm sure Dolby would be happy to get involved and maybe even sponsor it. I think it could be a productive debate and help the industry as a whole better understand the technology. Maybe a couple of the top color scientists and HDR experts could join. To me, that's in the spirit of progress. That's productive. Calling someone a cult leader or saying his thoughts are a waste of time just seems a bit unprofessional when, in this day and age, people like Steve are approachable and probably open to debate if it's handled in a civil, professional manner.
I for one would grab a bag of popcorn and watch this, if not in person (which would be ideal), certainly a stream of it.
 
Yedlin claims that HLG’s only differentiating feature is a detriment and that utilizing it would be a logistical nightmare, even though (1) it is absurd to argue that HLG’s transfer function is inferior to Rec.709; (2) all devices - smartphones, laptops, tablets and televisions - support HLG; (3) rendering an HLG version from a finished Dolby Vision master is trivial; and (4) he never explains how the ability to maintain creative intent in different viewing conditions is somehow a disadvantage. Check out the 2-1/2 minute tutorial.
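For anyone who wants to compare the two transfer functions mentioned in point (1) directly, here are the HLG OETF (ITU-R BT.2100) and the BT.709 OETF side by side over normalized scene light, with constants taken from the respective Recommendations - just the two curves, so people can see what's actually being compared:

```python
# The HLG OETF (ITU-R BT.2100) and the BT.709 OETF over normalized scene
# light 0..1, printed side by side. Constants are from the respective
# Recommendations; this is just the two curves for comparison.
import numpy as np

A = 0.17883277
B = 1 - 4 * A                 # 0.28466892
C = 0.5 - A * np.log(4 * A)   # 0.55991073

def hlg_oetf(e):
    """HLG OETF: normalized scene-linear (0..1) -> HLG signal (0..1)."""
    e = np.asarray(e, dtype=float)
    log_part = A * np.log(np.maximum(12 * e - B, 1e-6)) + C  # guard the log near e = 0
    return np.where(e <= 1 / 12, np.sqrt(3 * e), log_part)

def bt709_oetf(l):
    """BT.709 OETF: normalized scene-linear (0..1) -> R'G'B' signal (0..1)."""
    l = np.asarray(l, dtype=float)
    return np.where(l < 0.018, 4.5 * l, 1.099 * l ** 0.45 - 0.099)

scene = np.array([0.0, 0.01, 0.05, 0.18, 0.5, 1.0])
print("scene light:", scene)
print("HLG signal: ", np.round(hlg_oetf(scene), 3))
print("709 signal: ", np.round(bt709_oetf(scene), 3))
```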


 