
Ask Mike Most Anything

Technicalities aside... how do you guys "philosophically" deal with seeing/knowing that your work will not be perceived exactly as you finished it (in some cases, ridiculously compressed/shifted/etc.)? Sometimes I spend a lot of time getting a single color just right, and I just really like that color. But as soon as it starts shifting around, it doesn't look as great, and it's not what I intended. So, practicalities aside, I need some wisdom to preserve my peacefulness in this area.

David's answer really hit the mark for me as well. You can only control what you can control, and you can't get emotionally involved with trying to solve the unsolvable. I also agree with his observation that reproduction of the intended image is far more controllable and consistent in digital cinema than it is in any other electronic distribution, in part because it's a controlled, limited environment, but also because digital cinema projectors are set up to standards when they are installed. The projectors themselves are so stable that even if they are not recalibrated for months, the images are still very, very close to what they're intended to be. So I guess the best way to deal with the problem is to only do projects that are exclusively shown in digital cinema theaters.

Good luck with that..... :rolleyes5:
 
I was wondering... what do you think is going to happen in terms of actual workflow?

1. ACES used behind the scenes - but everyone converts to P3 with log space when they actually want to grade, because ACES is a cumbersome color space to grade in and the grading software doesn't have its dials and knobs set up for linear light wide gamut work? (where we are now, right?)

2. ACES used while grading - but grading software hides most of this from us and presents us with a normal grading interface. Like Adobe Lightroom which uses ProPhoto behind the scenes - but you wouldn't know it from working with it.

I'll start out by saying that the forthcoming Sony camera (the F65, which is designed specifically to be ACES compliant) carries with it the potential of really advancing the ACES movement, or really hindering it, depending on market acceptance. If they do it right, the questions you're asking will have immediate and obvious answers, and if they don't, it might not find quite the level of acceptance that many want it to. That said.....

Grading from native formats is usually done in a pipeline, not necessarily through pre-grading transcodes or color space conversions. In an "ideal" R3D->ACES workflow, you would be interpreting the R3D file "live" from a linear light debayer, piped through an ACES transform, grading in ACES space, then through the RRT and an ODT (specific to the display you're using), and possibly through some final color trims (although you might change those if you have multiple deliverables).

In #1, you mention that it's a "cumbersome" color space to grade in, and that's somewhat true - if you're using, say, Resolve. But it's not quite as simple as that. Other systems, such as Baselight and Lustre, have grading tool sets that are properly scaled and properly designed for working in log space. Transforms have already been developed to create a "log ACES" space that maps to those controls quite well, so not only isn't it cumbersome, it's actually quite comfortable. You are correct that this isn't the case if you're trying to use controls that are designed for video, such as Resolve's lift, gamma, and gain controls, which is one reason why I and others are urging Blackmagic to add a proper log grading toolset to that program.

You might also consider that when you use a grading pipeline, you can do color manipulations at different points in that pipeline, which is something I do all the time when grading from R3D files using the RedLogFilm gamma curve. For television material, I'll often do a "film style" grade directly on the log data, prior to a log-to-video LUT, and then add a video style grade after the LUT, where the image is already in a form appropriate to that grading style. This lets you control highlights and overall balance and contrast with all of the original data in the image, and do specific corrections, including secondary trims, in the final color space, where you have a bit more specific control and things like keyers can work more effectively.
So a lot of what you're talking about is largely addressed - provided you have the right tools to deal with it.
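For readers who think better in code, here is a minimal sketch of that pipeline idea: each stage as a function, composed in order. Every transform below is a made-up stand-in (the real debayer, ACES transforms, RRT, and ODTs are far more involved); the point is only the plumbing.

```python
# Toy viewing pipeline: debayer -> input transform -> grade -> RRT -> ODT.
# All transforms are illustrative stand-ins, not real ACES math.

def debayer_linear(raw):            # stand-in: pretend raw is already linear RGB
    return raw

def idt(rgb):                       # stand-in input transform into working space
    return [c * 1.0 for c in rgb]

def grade(rgb, exposure=1.0):       # a grade applied in the working space
    return [c * exposure for c in rgb]

def rrt(rgb):                       # stand-in render: a simple soft shoulder
    return [c / (1.0 + c) for c in rgb]

def odt_rec709(rgb, gamma=2.4):     # stand-in display encode for one target
    return [max(0.0, c) ** (1.0 / gamma) for c in rgb]

def view(raw, exposure=1.0):
    """Compose the stages; grading happens mid-pipeline, before the render."""
    return odt_rec709(rrt(grade(idt(debayer_linear(raw)), exposure)))

pixel = [0.18, 0.18, 0.18]          # scene-linear mid grey
print(view(pixel))
```

Swapping `odt_rec709` for another encode is the whole idea of per-display ODTs: the grade and render stages stay untouched.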

3. ACES and the whole "scene referred" thing take off... people actually figure out the difference between an ODT and an RRT, folks make sure midpoint grey actually matches ACES standard values, etc. Meanwhile, we get laser projectors and color OLED tablets that vastly exceed the P3 color gamut - but that's okay because they have built-in ODTs running in the video playback software - so we just deliver a 709 master, a P3 master, and a wide-gamut master - and the wide-gamut playback systems actually do a decent job of realtime conversion.

I'd like it if the industry went to #3, because it has such enormous advantages for VFX. But ACES is being presented in a bit of a confusing way. I have no idea about white balance, for example... if you have a warm interior scene, are you supposed to calibrate it so that grey is always completely neutral, then use an RRT that brings the warmth back? Does the RRT = the bit where you do the creative color grade? They have a lot of acronyms. I'm worried that the complexity of ACES, plus all of the color transforms going on, leaves a lot of places for people to mess things up. Not sure why they needed such a complex standard. Why not just:

Camera -> 3D input LUT -> scene in ACES space -> grade -> cool looking scene in ACES space -> 3D display LUT to transform from ACES space to desired output (709, P3, etc)

Or is that what they are proposing, just in a more confusing way? Are they avoiding the word LUT because they want something that can encapsulate both LUTs and other color transforms?
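For what it's worth, the 3D LUT in that pipeline is mechanically simple: a lattice of sampled output colors, plus interpolation between lattice points. A toy sketch follows; the lattice size and the baked-in gamma transform are arbitrary choices for illustration.

```python
# A 3D LUT samples a color transform over the input cube; values in between
# lattice points are trilinearly interpolated. Grading systems commonly use
# sizes like 17^3 or 33^3.

def build_lut(transform, size=17):
    """Bake any RGB->RGB transform into a size^3 lattice."""
    n = size - 1
    return [[[transform((r / n, g / n, b / n))
              for b in range(size)]
             for g in range(size)]
            for r in range(size)]

def apply_lut(lut, rgb):
    """Look up rgb (0..1) with trilinear interpolation."""
    n = len(lut) - 1
    idx, frac = [], []
    for c in rgb:
        x = min(max(c, 0.0), 1.0) * n
        i = min(int(x), n - 1)
        idx.append(i)
        frac.append(x - i)
    ri, gi, bi = idx
    rf, gf, bf = frac
    out = []
    for ch in range(3):
        acc = 0.0
        for dr in (0, 1):            # blend the 8 surrounding lattice points
            for dg in (0, 1):
                for db in (0, 1):
                    w = ((rf if dr else 1 - rf) *
                         (gf if dg else 1 - gf) *
                         (bf if db else 1 - bf))
                    acc += w * lut[ri + dr][gi + dg][bi + db][ch]
        out.append(acc)
    return out

# Example: a LUT baked from a simple gamma transform
lut = build_lut(lambda rgb: tuple(c ** (1 / 2.2) for c in rgb))
print(apply_lut(lut, [0.5, 0.25, 0.75]))
```

A LUT can only approximate a transform at its lattice resolution, which is one practical reason a standard might prefer to specify transforms as math rather than as the word "LUT."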

I think if you look at the ACES workflow as simply identical to a typical DI workflow but with some terminology and actual math changed, it becomes a bit clearer. In a typical DI workflow, you start out with a film scan. For anything that isn't original film material, you have to convert it into something resembling Cineon log space, and for that, you use an input transform. You then do your basic grading with the log data, probably using controls designed for the purpose, which in many if not most cases would be controls based on exposure and contrast scaled for use in a log environment. You then view the result through a print preview LUT so that it resembles your intended deliverable, which up to now has been a film print.

Now let's compare that to ACES. In ACES, your source material goes through an Input Device Transform (IDT) to put it into ACES space - which is analogous to how non-Cineon sources are handled in a DI. You then do your basic grading using the ACES data, exactly the same as you do with log data in the DI. You observe the results through the RRT and ODT, which in effect accomplishes the same thing as the print preview LUT in the DI. So the steps are the same, and it's really just the names that are different.
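To make the "grading in a log-like space" point concrete, here is a sketch of a log2-based encoding. The constants happen to match the later published ACEScc working space and are used here purely as an illustration, not as anything from this thread; the useful property is that one stop of exposure becomes a constant offset in the encoded value.

```python
import math

# A log2-style encoding spaces exposure stops evenly, which is why
# exposure/contrast controls scaled for log feel natural in it.
# Constants match the ACEScc definition (illustrative only).

def lin_to_log(x, lo=2 ** -15):
    x = max(x, lo)                    # crude clamp; the real spec handles
    return (math.log2(x) + 9.72) / 17.52   # low values differently

def log_to_lin(y):
    return 2 ** (y * 17.52 - 9.72)

stop = 1 / 17.52                      # one stop = one constant offset
grey = lin_to_log(0.18)
print(round(grey, 4))                 # scene-linear mid grey lands near 0.41
print(log_to_lin(grey + stop))        # +1 stop in log: exactly double, 0.36
```

An exposure "offset" control operating on these values behaves like a printer-light adjustment, which is exactly the kind of log-native toolset Mike mentions in Baselight and Lustre.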

To elaborate on your question about the RRT and ODT: the RRT (Reference Rendering Transform) is a renderer that is designed to work in the theoretically unlimited gamut of ACES. It is designed to be a color neutral "ideal" render based on principles of color science that I won't get into, because I can't really explain it as well as people like Ray Feeney. The output of the RRT is an image that is too wide a gamut for any defined (and therefore restricted) color space, and it doesn't correspond directly to any specific display. That's where the ODT comes in.

The ODT, or Output Device Transform, maps the image produced by the RRT to whatever display device you're targeting. Think of the product of the RRT as you would the Digital Source Master in a DI. The ODT takes the product of the RRT and puts it into whatever color space and gamma encoding is appropriate - P3 and 2.6 gamma for digital cinema, Rec709 for HD video, and whatever else happens to come along in the future. All displays are presumed to have a restricted color space almost by definition, and the idea behind the RRT and ODT is that the render is the same regardless of the display device, with the specific requirements of each display handled via an ODT for that device. Up to this point, the RRT and ODT have in many cases been combined into one transform for testing purposes, but in a fully developed ACES workflow, I wouldn't expect this to be the case.
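That separation can be sketched in a few lines: render once, then encode that single rendered image per display. The "render" below is a toy shoulder curve standing in for the RRT; the Rec.709 transfer function is the real BT.709 formula, and the plain 2.6 gamma is a simplification of the cinema encode.

```python
# Render once (toy RRT), encode per display (two ODT-like functions).

def render(lin):                      # stand-in for the RRT
    return lin / (1.0 + lin)

def odt_rec709(l):                    # BT.709 transfer function
    return 4.5 * l if l < 0.018 else 1.099 * l ** 0.45 - 0.099

def odt_dcinema(l):                   # simplified digital cinema encode
    return l ** (1.0 / 2.6)

rendered = render(0.18)               # one render, shared by every target
print(round(odt_rec709(rendered), 3),
      round(odt_dcinema(rendered), 3))
```

The creative result is locked in before the encode, so adding a new display type later means writing one new encode function, not regrading.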

I hope that simplifies it a bit. Nothing is really new, and it's not as complex as a lot of people seem to think it is. For anyone who's dealt with film DI's, it's pretty standard stuff, only with new names, new values, and hopefully better results.

Also, do you have the time to consult on projects, and do you give a Reduser discount? (Maybe it'll be a Reduser penalty, though, because you gave us so much good advice for free and had to put up with so much yammering from us :))

Well, if you (or anyone else here) have something in mind, you should always feel free to contact me privately.
 
I am LOVING this thread.. I don't have a worthy question yet, but between David and Mike, this is becoming an AMAZING online school.. And I love learning. Mike, where is this blog my friend Bruce speaks of?

Jay

All questions are worthy. After all, the name is "ask Mike Most anything." :wink5:

The blog (it's called Postworld) is at mikemost.com.
 
I don't know how you've determined the degree of accuracy, but unless you have the capability of doing butterfly projection between the digital projection and the print, accuracy of the degree you're talking about cannot be guaranteed.

After 6 years and about 40 full features - and I don't know how many shorts - using Kodak Cinelabs Greece for chemical processes and projection, we have achieved a level of accuracy of +/- 1 printer light... and that's common ground for all my clients... If you translate that 2-printer-light range into its density equivalent, you can easily see where the 3% number comes from...

Indeed the butterfly projection is unquestionable, but I have to add the VERY simple point of having a Director and DoP, countless times, yelling in the projection room, "wow, that's exactly what we saw when we were grading...!!!!" That fact really doesn't create, for me and Kodak, the need for a butterfly projection...

Butterfly projection was a common practice in the past because color management techniques weren't as developed, and the photochemical color grader at the lab wanted to do a few acrobatics in the process - pH adjustments, replenisher alterations, and various chemical tricks - to match empirically what was impossible to match electronically. So when the client came to see the result, he got a super matching print... The only problem with this method was that the negative itself didn't have the needed information recorded on it, because the result was a combination of lab tricks and negative info...

So when the negative went to a different lab that didn't have the "tricks" of the first lab to do the mass production of prints, the result was very different... This phenomenon is well known in the industry, but the distributors, as we all know, simply don't care about these things... they know that the DoP and Director can't see all the prints delivered to theaters around the globe... the labs also know that... and the poor DoPs simply can't speak about these things...

What we do is the hard way... we show the answer print without any kind of tricks, simply by using ECN-2 development and the LAD aim positive print... we don't need a butterfly projection because our clients have never had a problem... simple as that...

We think it's more appropriate to have a chemical process and a film recorder that are solid as a rock, with no drift whatsoever, than to invest in butterflies... I know that getting these two things solid as a rock is very difficult... OK... we simply did it...

Besides, the previous poster is correct. If you're using LUTs to view P3 on a projector that isn't physically capable of displaying the entire P3 gamut, you're looking at a gamut-mapped simulation of certain colors, not an entirely accurate image. In many scenes you would never see the difference, because you don't always have colors in the scene that are out of gamut. But if you happen to have an out-of-gamut scene, you would undoubtedly become aware of the colors that your projector is not capable of displaying.

Part of my service when I take care of a lab is to identify the out-of-gamut areas and point them out to the owner and colorist. I also apply a certain set of techniques to fix these issues. I generally request that specific display devices be used, for which I have already developed the solutions. For example, good projectors are all the latest JVCs... and not even the top of the line is needed. These projectors, relative to a Christie, are very cheap, and they perform with very, very limited out-of-gamut errors when I also do my magic...

To understand it better: because I live in a small country, as Greece is, surrounded by countries that are also small, we don't have the quantity and the budget of the typical client in the US... we have had that budget limitation for years... so we have been forced to experiment, stay away from e.g. Christie projectors and Cinetals, and use all kinds of inventive solutions to get the best possible results at the minimum possible cost... And because we had that problem, we have developed solutions that today can let a small home-grown post house challenge the biggest labs out there...

If someone has a problem before someone else does, then the first creates solutions faster than the second...

Aristotle said once "if you show me the problem, I will show you the solution"...

That's democratization of the Color Grading process in practice...
 
Evangelos,

I think both you and Luis are interpreting my remarks as questioning your competence, and that was not and is not my intent. I have said numerous times that I'm sure he's getting good results. However......

Claims of "measured" accuracy cannot be based on remote readings, and they cannot be based on the assumption that everything is being done correctly, the room conditions are correct, the projector is set up correctly, and the print actually matches what is being seen in that room. Very, very few people have extremely accurate "color memory", so when clients - even very happy ones - claim "that's exactly what we saw in the DI," that doesn't necessarily make it so. That's the reason for butterfly projection, and it's the reason that's the only true way of confirming that kind of accuracy. I've been at this a pretty long time myself, and I can tell you that prints I've approved - and the client has been very happy with - were "a bit off" when they were butterfly'd against the electronic image. Some have been right on the money, some haven't. That's in part the nature of the beast.

"Not completely accurate" doesn't imply "wildly different." It simply means that if one is going to make certain claims, one needs to understand the limitations that are built in to the process, and understand that without actual confirmation, assumptions aren't necessarily accurate. I understand all about lab tolerances, and I understand color calibration, color science, and the creation and use of LUTs for matching disparate displays and media. And what you say is largely true. However, I don't believe that a digital element is perfect until it's QC'd, and I don't believe in "measured accuracy" without a way of checking that with actual results. So please, let's not continue this. I think you provide a valuable service to those who need to deal with film recording and want a film color pallette as a basis for the image. Let's leave it at that.
 
Mike, I don't want to elaborate more; I'm measuring all the way around... Moreover, as we all know, there is nothing absolute in life - everything is relative, and art is the most relative of all...

Art is also subjective... so trying to measure a relative and subjective value like art is just a waste of time... let's stop measuring tolerances and aim to have happy artists... and that's what I do.
 
I am a huge proponent of the IIF/ACES initiative. As Bruce pointed out it can seem daunting to wrap your head around. Props to Mike for distilling it in his Postworld blog entry.

The way I think about ACES is that it gives everyone in the image creation/manipulation chain a common benchmark. There are several huge benefits available once you have that known, and virtually unlimited, reference point.

For example:
As noted by others, the actual viewing of our work by the audience is wildly variable, from a tiny 6-bit laptop screen to calibrated 4K projection. It is also a moving target as display tech changes. By fixing the image metrics in an idealized master, one can null out the limitations of any characterized display, today and tomorrow (within the physical capacity of the device, of course). I joke about this as "one master to rule them all" ;-). It's efficient, adds asset shelf life, and, with the widespread use of ODTs, increases the relative fidelity of various viewers' presentations.

In terms of marrying elements from several sources - practical photography, VFX, etc. - it's a godsend. All too often, image quality is sacrificed (both knowingly and blindly) along the way via low-quality transcodes, concatenation, poor log/lin conversions, improperly remapped gammas, etc., just to squeeze material into an often antiquated pipeline.

Since this is Mike's thread I'll stop there, but I would welcome the chance to share knowledge about ACES with others in another thread or venue. RAW cameras like the Red/Epic offer a tremendous palette, and I am stoked to support an ACES imaging pipeline topology that can greatly enhance our capacity as image creators/manipulators to utilize it.

Thanx Mike for doing this thread. I've been picking his brain for years and while I don't always agree with him, I always learn something.

Cheers - #19
 
If I were to build something today that was intended for television work, I might consider having a Flanders or Dreamcolor LCD along with a plasma that would serve as both a secondary display for me, and a client monitor for, well, clients. OLED displays are coming and becoming more affordable and a bit larger, so in the future I would likely look at replacing the LCD with OLED as a primary display for television work. And if budget was really tight, the plasma alone could easily do the job in most cases. For a DI theater, it would be DLP Cinema, hands down. NEC, Christie, and Barco all make great projectors for that purpose.

Which Plasma models would you recommend today? Would you always use with Plasmas a 4:4:4 RGB signal? Any comments on signal converters?

Daniel Perez
Freelance Colorist
 
Which Plasma models would you recommend today? Would you always use with Plasmas a 4:4:4 RGB signal? Any comments on signal converters?

I believe the current Panasonic model is the 20 series, but honestly, anything from the 11 series and newer is pretty good, and those are used by most of the major facilities. 444 connections are not used very much, and for nearly all video work 422 is not only fine, it's very unlikely that you would be able to see any real difference. The differences between 422 and 444 are primarily manifested when you try to pull the image apart for things like matte extractions, where the lower sampling rate for the color information makes itself known. Visually, there is no significant difference even on the best of monitors.
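A toy illustration of why 4:2:2 is visually fine but worse for keying: luma keeps full resolution, while each horizontal pair of pixels ends up sharing chroma. (Real systems filter rather than naively average, but the effect is comparable.)

```python
# 4:2:2 keeps every luma (Y) sample but makes each horizontal pair of
# pixels share averaged chroma (Cb, Cr).

def subsample_422(line):
    """line: list of (Y, Cb, Cr) tuples; length assumed even."""
    out = []
    for i in range(0, len(line), 2):
        (y0, cb0, cr0), (y1, cb1, cr1) = line[i], line[i + 1]
        cb, cr = (cb0 + cb1) / 2, (cr0 + cr1) / 2
        out.append((y0, cb, cr))
        out.append((y1, cb, cr))
    return out

# A hard chroma edge that falls inside a sample pair:
line = [(0.5, 0.0, 0.5)] * 3 + [(0.5, 0.5, 0.0)]
for px in subsample_422(line):
    print(px)
# Luma is untouched, but the last pair now shares smeared chroma
# (0.25, 0.25) - invisible at viewing distance, ugly under a keyer.
```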

If by signal converters you mean things like the HDLink, they all seem to work pretty well. I've usually found Aja's products to be a bit more reliable and robust than Blackmagic's, but that is changing a bit and Blackmagic's redesigned products seem quite stable and reliable.
 
Monitoring the end of the line


Mike, I have been a shooter for twenty years.. I work in the Midwest, so there is more corporate work here than anything else. I come from the generation and culture of "learn it all".. And by this I mean, in my area: production, directing, camera, editing. Now I realize that when many say this, people tend to think "jack of all trades, master of none". To them I would not argue this point.. There are people better than I; however, I do these jobs more out of a love of the medium and a passion to learn than to be a control freak.. I also find that few people will put in the work and suffering I would on my own stuff unless I were to pay them correctly for the time, and sadly out here, budgets like this are few and far between. But I digress (I just started using that phrase... makes me feel smarter!)

I have begun to pay a lot more attention to color work, and have also taken the plunge into working in After Effects. For me, the moment I began to realize what I could do was when I discovered I could hit a button on a clip in Premiere and that would take the clip to AE where I could work with it, and then any changes would be reflected in Premiere. For long form projects this would be a nightmare, but for short form (Like 30 second spots) it allows me to really kick things up a notch (No.. I don't cook)..

I would kill to be able to work on DaVinci Resolve, and I am trying to get that setup, but it will be a while before that's a reality for me. I am not a Mac fan and have no intention of moving my editing systems to Mac, but I would be interested in editing on PC and then doing my color work on a Mac.. This will take time. I have enjoyed reading about the monitoring issues you've been speaking of, and this brings up a very interesting question: When you say accurate monitoring, do you mean accurate in the sense of the monitor in question reflecting your FINAL VENUE? In other words.. if TV is my final output, I would imagine a nice Panasonic plasma is my choice.. but if the internet is the goal, then I would guess my LCD computer monitor is the best choice. I am always amazed at how DVDs look so good, and so much like what I see on a movie screen, but this is not accidental, is it? Can you tell me how much work goes into converting a movie for DVD and/or Blu-ray, versus the original file or film print?

What a long way I drove to get to this question.. I suppose I felt the need to give you some background. :001_smile:

Jay
 
Mike,
In the early days of the Red One many DPs used blue filters (generally 80A, B or C) when shooting under tungsten light to balance the light to the camera (much as we use filters to correct filmstock / lighting combinations). This practice has largely disappeared in my experience because of the greatly improved noise floor of the MX sensor. This thread got me thinking about a wider question that I would be very interested in hearing your response to:

What must be done in camera to achieve a certain look, and what can (and perhaps should) be left to the DI environment?

I'm thinking to some extent about lighting, e.g. you can't change the direction of a light in the grade, but can you change its quality? However, I'm mainly thinking about lens choices and filtration - can you make an image captured with a Zeiss lens look like a Cooke, and vice versa? Can you create the same diffusion quality as a Classic Soft? Can a Dior on the back element be accurately reproduced? Can lens flares?

Regards
 
My two cents as a cinematographer, not a colorist:

Given unlimited time and money in post, there are probably all sorts of things you can do to an image and its lighting, but who has unlimited time and money in post? Not to mention, often the simpler method looks better, i.e. get it right to begin with rather than fix it in post. If you want something to look softly side-lit with no unusual artifacts, then just softly side-light it!

It's very hard to change the texture (softness and direction) of a light casting shadows on a subject, or create a textured lighting effect from whole cloth in post. Not impossible, but odds are high that even after much effort (again, time and money) the final results might just be barely acceptable at best. It's also hard to radically change the color of one source of light relative to another if both are hitting the subject, like a warm key and a blue fill -- it's pretty hard to take neutral footage and make it look like the key had orange gel on it and the fill had blue gel on it. At best, you can shift the shadow end of the image towards a different color tint, or the highlights the other way, but I'm talking about subtle tints, not full orange meets full blue sort of effects.

As a general rule, it's always easier to take away information in post than to add it. So softening an image is fairly easy in post, though the unique optical characteristics of net diffusion and flares take some more advanced software. The only problem I find with softening the occasional shot in post, when you've shot the movie on film stock, is that a lot of basic diffusion tricks tend to blur the grain, so the post-diffused shots may look too clean compared to the undiffused ones... but there are tricks in post to bring back some grain.

Washing out the blacks a little is very easy compared to recovering clipped highlight information (impossible) or adding more shadow detail without increasing noise, so to some extent, minor differences between a zoom and a prime in contrast and color tint can be corrected to match, same goes for a Zeiss versus a Cooke. But if one lens inherently has more sharpness than another lens, it's very hard to make the soft lens look like the sharp lens (I mean, there's edge sharpening of course, but again, it comes back to the basic problem of not being able to add information that was never recorded.)

Diffusion filters that create unique optical artifacts usually need some sort of software designed to mimic those effects. Artificial lens flares also require some sort of software effect other than a simple round ghost or something easy to fake with some windows.

Over the years, I've found that when shooting narrative, with its large amounts of footage generated over weeks and months, the volume of material makes it hard to add post effects like digital diffusion to the image except at the end when doing the final color-correction to the edited footage. So likely by that point, the director, producer, and editor have been staring at dailies where the effect was missing, making it harder to convince them to now add it at the very end. But even then, adding a lot of effects to the image during the D.I., like diffusion controlled on a shot-by-shot basis, can take time... and time is tightly budgeted for most people on a D.I., which again, makes it prudent to have gotten the live-action photography close to the final look in a practical and cost-effective manner. Which is a judgement call, sometimes on the set you make a decision that it will be better to save something for post (though I'd guess that 80% of the time when a producer says "don't worry, we'll fix it in post" he finds that he ends up spending more money than if he had fixed it on the set to begin with...)
 
When you say accurate monitoring, do you mean accurate in the sense of the monitor in question reflecting your FINAL VENUE? In other words.. if TV is my final output, I would imagine a nice Panasonic plasma is my choice.. but if the internet is the goal, then I would guess my LCD computer monitor is the best choice. I am always amazed at how DVDs look so good, and so much like what I see on a movie screen, but this is not accidental, is it? Can you tell me how much work goes into converting a movie for DVD and/or Blu-ray, versus the original file or film print?

In a DI process, there is usually one primary target and multiple deliverable targets. The grading is done to the primary target. If that target is film, that is the format that is emulated during the DI sessions. If it is digital cinema, that is the projected target during the DI. The idea is that the creative decisions are made in one environment that is generally accepted as the most capable, with the widest range, and probably the largest initial audience (note I said "initial..."). To this point, that has generally been the film print, and so that has been the primary delivery target and still is to a great extent - certainly when film is the origination medium, as it still is in most studio features that are not specifically 3D.

Lookup tables are used that emulate the film processes during the DI sessions. When the DI is complete, a finished Cineon log format image is produced that is used as the source for all deliverables. It is sent for film recording and used directly on the film recorder to produce a new negative for the film deliverables. It is transformed using various LUTs, and often color trim passes, to create the digital cinema master and an HD video master. All of these LUTs retain the same film target as the DI. The HD video master is the source for all video deliverables, regardless of format (SD, HD, PAL, whatever). That is why just about any version of a modern film that is the product of a DI process looks essentially the same today: they are all basically created from the same digital source master (the Cineon image from the DI).
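As a rough illustration of what that Cineon log master stores: 10-bit codes representing printing density, with 685 as reference white, 95 as reference black, and 0.002 density per code value. This simplified decode (real implementations add black-offset and soft-clip handling) shows the headroom above reference white that helps one master serve every deliverable.

```python
# Simplified Cineon 10-bit printing-density decode to linear light.
# Real decoders also normalize against reference black and soft-clip.

def cineon_to_linear(code, ref_white=685, density_per_code=0.002,
                     neg_gamma=0.6):
    density = (code - ref_white) * density_per_code
    return 10.0 ** (density / neg_gamma)

print(cineon_to_linear(685))             # reference white -> 1.0
print(round(cineon_to_linear(95), 4))    # reference black -> ~0.011
# Codes above 685 still decode to values above 1.0 - highlight headroom
# preserved in the master rather than clipped at white.
print(round(cineon_to_linear(1023), 2))
```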

As for hard goods like DVD and BluRay, they look as good as they do for a number of reasons, but one of the primary ones is that studio level discs go through a scene by scene compression process, with the compression optimized based on the material and the physical space allotted on the disc. Scenes with more detail are given lower levels of compression, and static scenes with less detail are assigned more compression. It's all a balancing act, but when it's done well - as it usually is these days - the result is significantly better than, say, over the air HD broadcast, which is essentially undergoing "live" compression that is not tuned to the material. That's why you often see artifacts, especially on things like fast pans, bright lights passing by, or fast moving material like sports events, that you likely never see on a BluRay disc. Real time needs demand real time solutions and compromises, while non-real time processes allow for deeper analysis and better solutions, even given similar technology.
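The scene-by-scene optimization described above boils down to bit allocation by measured complexity. A deliberately toy sketch (real encoders allocate over much finer units and under rate-control constraints):

```python
# Toy two-pass idea: measure each scene's complexity first, then divide
# the fixed disc budget in proportion - busy scenes get more bits,
# static scenes fewer. Numbers are illustrative only.

def allocate_bits(scene_complexity, total_bits):
    total = sum(scene_complexity)
    return [total_bits * c / total for c in scene_complexity]

# complexity scores from a hypothetical first pass:
# two busy scenes (fast pans), two static ones
scenes = [9.0, 1.0, 8.0, 2.0]
budget = allocate_bits(scenes, total_bits=40_000)
print(budget)
# A "live" broadcast encoder can't look ahead, so it would spend roughly
# the same amount on each scene - starving the busy ones, which is where
# the visible artifacts come from.
```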
 
What must be done in camera to achieve a certain look, and what can (and perhaps should) be left to the DI environment?

As is almost always the case - no surprise to me - David's answer and mine are virtually identical.

Anything you can get in camera you should get in camera. And by that I don't think either David or I are talking about tweaking the camera itself, other than by exposure choice, lens choice, and filtration. What I'm talking about is lighting and production design. This may sound a bit off the wall, but I never understood - and still don't - the need for "look files" in a digital cinema camera. If you're not shooting what you actually want to shoot, you're better off changing the shooting conditions than you are shooting "anything" and expecting to turn it into a masterpiece in post. And if you're using the set monitors as any kind of lighting guide, it seems to me that it's a bit confusing and potentially self defeating to be looking through a LUT while you do that. The best photography almost always yields the best results, and that doesn't change regardless of what camera you're shooting or who the colorist is. In fact, Stefan Sonnenfeld - who arguably "invented" the notion of modern "look creation" techniques - would be the first to tell you that the best way to achieve, for instance, the now familiar "teal and orange" color palette is to have locations that are either designed or lit to have those colors. And while that look is often done in post, it is far less controllable and ultimately more artificial. Not only that, it takes a lot of time to do well, as it usually requires separation of flesh tones and other items that require customizing the keyed areas on a shot by shot basis to avoid artifacts.

Now, there are always going to be scenes that call for color and other effects that cannot really be photographed. That's not what we're talking about here. We're talking more about things like contrast, brightness, darkness, color palette, color saturation, and framing that can and should all be achieved in camera as much as possible if you want the cleanest and best results.

That's not to say that nothing should be done in post; of course that's not the case. But for me, the shows I've worked on that represented the best results were the ones in which the original photography captured the essence of what was desired, and my job was to maintain and enhance that, not necessarily to make it something that it wasn't. And there's a lot a colorist can do: directing the viewer's attention through judicious use of windowing techniques, enhancing mood in scenes where the production conditions didn't permit it to the degree desired, adding "negative fill" where the cinematographer wants more dramatic contrast, beauty fixes, and the list goes on. But the "look" of a production, if the best results are to be obtained, is always better set in front of and in the camera. And that would include even some of the extreme color treatments that seem to be relegated to post on a regular basis. If you can shoot what you want, shoot it. That's the bottom line.
 
Mike, I once worked on a movie set as an extra. We were instructed to not wear anything white (which I understand for blown highlights worry) or red. 1. Can you think of a (post) reason for not wearing red, and 2. Is there a color or combination thereof that are particularly difficult to work with?
 

The primary reason I can think of for not wearing red would be because the production designer didn't want red in the scene other than where he designed it. A technical reason might be because they were shooting with a digital camera that is particularly red sensitive and tends to oversaturate reds more than other colors (Sony cameras have had a reputation for that). It is also true that there is more red than anything else in flesh tones, so they might have wanted the faces to stand out a bit more. And finally, one other possibility is that they were planning on a "teal/orange" kind of treatment in the final image, and red wardrobe would be counter to that.
 
Usually the most common reason is that red is too dominant in the frame and thus distracting -- that isn't a digital vs. film issue. In the old analog video days, there was also an issue with reds tearing and bleeding on some bad NTSC dubs, so red was generally avoided. Red is always a problematic color, both creatively and technically. For example, things shot under red lighting tend to look slightly out of focus, particularly on film. It also oversaturates quite easily, particularly in post when you are trying to color-correct fleshtones, which have a lot of red in them.
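David's fleshtone point can be shown with a toy number exercise: because skin is red-dominant, a global saturation boost pushes the red channel into clipping well before green or blue get anywhere near it. The RGB values, the boost amount, and the simple mean-based saturation model below are all invented for illustration.

```python
# Toy example: boosting saturation on a red-heavy "fleshtone" pixel.
# Values in [0, 1]; the pixel and the boost amount are invented.

def saturate(rgb, amount):
    """Scale each channel's distance from the channel mean,
    clamping the result to the legal [0, 1] range."""
    mean = sum(rgb) / 3.0
    return [min(1.0, max(0.0, mean + (c - mean) * amount)) for c in rgb]

flesh = [0.80, 0.55, 0.45]      # red-dominant, as fleshtones are
boosted = saturate(flesh, 2.5)  # red hits the 1.0 ceiling first
```

Only the red channel clips here; green and blue still have headroom. That asymmetry is why pushing fleshtones in the grade so easily tips reds over the edge.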
 
Over the years, I've found that when shooting narrative, with its large amounts of footage generated over weeks and months, the volume of material makes it hard to add post effects like digital diffusion to the image except at the end, during the final color-correction of the edited footage. By that point, the director, producer, and editor have likely been staring at dailies where the effect was missing, making it harder to convince them to add it at the very end. But even then, adding a lot of effects to the image during the D.I., like diffusion controlled on a shot-by-shot basis, can take time... and time is tightly budgeted for most people on a D.I., which again makes it prudent to have gotten the live-action photography close to the final look in a practical and cost-effective manner. It's a judgment call; sometimes on the set you decide it will be better to save something for post (though I'd guess that 80% of the time when a producer says "don't worry, we'll fix it in post," he ends up spending more money than if he had fixed it on the set to begin with...)

There's a very plausible reason, well articulated, that would be very easy to overlook. Thanks for the insight David.
 