

Cinema camera or Video camera?

I understand what Graeme is saying, and it really makes a lot of sense. I think you can output 16-bit TIFFs and get what you're looking for, with the 12-bit sensor data coded linearly, because I don't think there are any standard 12-bit formats.

But IF you have the ability to work with log files until your final output, even if you're not doing a film-out, that is the most efficient way to work. You save a lot of disk space, and consequently rendering time, when working with 10-bit files as opposed to 12-bit or especially 16-bit. You still have all of the dynamic range from the sensor; it's just coded differently. Basically, too much of the linear file's data is given to the highlights, which is wasted because you will NEVER see a change from code value 4024 to 4025, even while grading. Converting to a log file takes some of that info away from the highlights (where you will NEVER see it anyway) and gives more to the shadows, where it's needed.
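To make that concrete, here's a minimal sketch of the idea, using a generic log2 curve (NOT RED's actual transfer curve, which isn't public). Neighbouring 12-bit highlight values collapse to the same 10-bit log code, while shadow values stay distinguishable:

```python
import numpy as np

# Hypothetical encode: map 12-bit linear (0..4095) to 10-bit log (0..1023).
# The +1 offset avoids log(0); a real camera curve differs from this.
def lin12_to_log10(x):
    x = np.asarray(x, dtype=np.float64)
    return np.round(np.log2(x + 1.0) / np.log2(4096.0) * 1023.0).astype(np.int32)

def log10_to_lin12(c):
    c = np.asarray(c, dtype=np.float64)
    return np.round(2.0 ** (c / 1023.0 * np.log2(4096.0)) - 1.0).astype(np.int32)

# Adjacent highlight code values land on the same 10-bit log code...
print(lin12_to_log10([4024, 4025]))
# ...while adjacent shadow values each get their own code.
print(lin12_to_log10([8, 9, 10]))
```

So the "loss" is in exactly the place where the eye can't see a difference anyway.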

I recommend the book "Digital Compositing for Film and Video". Although a lot of you probably aren't compositors, the last few chapters describe this subject the best that I've seen so far.

Basically, you can work with 16bit tiffs if you want, but that's wasting a lot of disk space.

I do too. My own software will do compositing and many other things. 12-bit values save little space compared to 10-bit and are no quicker to process. I can load, filter, and save a DPX in about 1.7 secs on a single-drive laptop (over two years old). If you use Windows mostly and NTFS drives, drive compression is often your friend, and the compressing/decompressing speeds things up (since disk reads are many times slower than decompressing in memory, and this has been true of disk compression since Pentiums hit 120 MHz!). If you keep the original sensor data (whatever the container, 12-bit or 16-bit), they will compress roughly the same.
Converting 12-bit log -> 10-bit is slightly lossy, as Graeme admits, and is pointless if you don't need log. As you'll know, mixing log and linear isn't great fun when compositing.
 
Also, I don't intend to scan any film; that's why I'm buying a 12-bit digital cine camera, thanks to you Red guys and your hard work.

I'm not sure how much you know about the film scanning process, Ronx. This isn't an attack or anything; I'm learning about it now too. But a lot of these issues with clipping and converting between lin and log would be solved if people did a little research into how people have been working with scanned film for years now.

Although the RED camera is different in some ways from how you would expose for film, how you treat the R3D file is VERY similar to how you would treat a film negative. So even if you don't plan on scanning film, it would still benefit you to learn from film's DI workflow, which has been stable for some time.

Maybe you're right about 10bit files not saving much space over 12bit ones. (I think that's what you were saying.) You probably know more about it than me, considering you're developing your own freeware. I would just assume it would save a lot of space over thousands and thousands of frames.
 
And if there was a standard 12-bit file format, we might have used it... But the image processing we do in RedAlert, say, can look better if you keep it 16-bit than if you go back down to 12-bit, if you stay linear. It's very simple to apply a LUT to the 10-bit data to get back to whatever the native bit depth of your compositing is, though.

Graeme
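The LUT step Graeme mentions can be sketched like this; the curve here is a generic log2 stand-in (the actual RED curve is proprietary), and the point is that linearising is a single table lookup per pixel, not per-pixel math:

```python
import numpy as np

# Hypothetical 1024-entry LUT: 10-bit log code -> 16-bit linear value.
# A generic log2 curve stands in for the real (proprietary) camera curve.
lut = np.round(
    2.0 ** (np.arange(1024) / 1023.0 * 16.0) - 1.0
).astype(np.uint16)  # code 0 -> 0, code 1023 -> 65535

def apply_lut(log_frame):
    # numpy fancy indexing does the whole frame in one lookup pass.
    return lut[log_frame]

frame = np.array([[0, 512, 1023]], dtype=np.uint16)
print(apply_lut(frame))  # linearised 16-bit values
```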
 


Sorry, I didn't take it as an attack. I just have no use for established film processing techniques, as I'm not coming from film or going to film; I'm processing DATA (images), and this is how I look at it. If it's useful I use it; if not, I don't. If you preserve all the information in the data throughout your processing, you'll get good results. A final grade can be made later (in log if need be), but I'm truly digital in nature, so traditional film processing routes don't all apply here. Of course, Red have to supply these traditional log/film pathways for all the people who do need them - which may be most customers? - who are in the film business and buying R1s. CGI is never log, and I think your book will suggest converting log footage to lin before mixing them. Anyway, the important thing is the camera produces great images :)
 
Didn't think you took it that way, just being cautious.

I feel I might be repeating something here, but I think the way people think about processing the DATA from their R3D files is pretty important. Of course lots of things from the film workflow don't apply to the RED, and some are even backwards. But some things are exactly the same. One ongoing conversation is about whether light meters are still valid with the RED or with video, given zebras and the like. I would argue that for consistency and good results every time, a light meter is just as useful when working with the RED as when working with film.

Now back on topic. If people thought just a little harder about what they're doing in RC, these clipping problems that Jim is addressing would be gone. Do people scan film as 8-bit linear files and clip the superwhites? No, because they want to keep the dynamic range. So if you think of your job in RC as the same as the film scanner's or telecine's job, you will end up with better results.
 
Agreed. I use a light meter and will use the histograms in camera when it arrives, as well as histograms in post. I'll be investing in some books on colourimetry (!)... and a dictionary too! :)
 
"Highlight protector" built in, with three levels:

1. RedTold'ya mode - text and sound message "Burning!" in the camera and another message in RedCine: "Told 'ya!"
2. RedSlap mode - when highlights cross the limit, a hand comes out from inside the camera and gently warns the camera operator of reduced picture quality by leaving a subtle red mark and a pulsating feeling on the operator's face.
3. RedZapper mode - gently sends a higher amount of electricity into the hand that opened the iris too far


excerpt from a press release:

"Lake Forest, CA - We have tested our new "Highlight protector" feature with great results. Our three levels of protection are extremely versatile and adaptable to various types of negligence. The system automatically switches modes by measuring the number of mistakes made in a specified timeframe. That way each user gets the appropriate attention and guidance, and learns to avoid future mistakes much faster than with the usual solutions."

very funny, sign me up!!
 
Yeah, those of us coming up from video, where the only grading we did was a few tweaks to fix a colour balance problem now and then, could really use some kind of grading tutorial using RedAlert or RedCine - or better, both.
 
But here is a question: if noise can't be compressed, and any compression will necessarily change the character of the noise, then perhaps it makes sense to change the character of the noise ourselves first, prior to compression, so that it compresses in a more attractive way.

We all have our theories.

IBloom

Ian, there are two fundamental approaches in signal processing. Actually one; the other is borrowed from control systems. The original signal-processing approach just looks at the data and tries to do meaningful things with it, say take a Fourier transform or a wavelet transform. The other, control-systems approach tries to build a model of why that particular data was generated, using (but not limited to) state-space approaches.

Typical compression works on the former approach. However, noise modelling is extensively done using the latter approach.

Let me give you an example. MP3-type audio compression uses perceptual coding, which is the former approach: it looks at frequency data and tries to remove certain frequency bands that are not audible. It does not model how the sound was generated by its source, your vocal cords. On the other hand, there are approaches that do model the source (vocal cords), and those have certain advantages when information in the recorded data is missing.
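A toy illustration of the "former" (transform) approach: analyse a test signal in the frequency domain and throw away the weak coefficients, with no model of what produced the signal. numpy's FFT stands in for a real codec here:

```python
import numpy as np

# Two sine components at exact FFT bin frequencies (5 Hz and 40 Hz over 256
# samples), so the spectrum is concentrated in exactly two bins.
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

coeffs = np.fft.rfft(signal)
keep = np.abs(coeffs) >= 0.1 * np.abs(coeffs).max()  # keep only strong bands
reconstructed = np.fft.irfft(np.where(keep, coeffs, 0), n=len(signal))

print("coefficients kept:", int(keep.sum()), "of", len(coeffs))  # 2 of 129
print("max reconstruction error:", float(np.abs(signal - reconstructed).max()))
```

Two coefficients out of 129 reconstruct the signal almost perfectly - but only because the signal happened to live in two bands; noise, by contrast, is spread across all of them, which is why it compresses so badly.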
 
RED's raw workflow is a real paradigm shift - and let's make no mistake about it, it is here to stay.

It *can* be confusing in the beginning for people who have so far only used negative film, especially taking into account that one might rate the "sweet" ASA spot differently depending on light temperature.

The DNR of RED is good enough to get *outstanding* results - shown, proven and experienced by ourselves countless times. Whether I'm using HDCAM, film or RED, I always try to light for slightly less than the maximum of the camera. However, DNR is addictive. If Epic offered HDR DNR (by using two sensors, etc.) I would be much more tempted to upgrade our REDs to Epic - the resolution is already much better than 35mm can carry to audiences anyhow.
 
Just a quick question:
What is REDspace? Is it a new piece of software, like REDcine? If so, will we have to purchase it? Or, is it the name given to a new user interface in build 16 firmware?


Thanks.
 
JJ wrote: "With film, the tendency is to under expose a negative to keep the shadows from blocking".
Well, no. Not in any way, shape or form. Quite the opposite, in fact. In my 20 years as a DP I have always been taught to be aware of one thing: the too-thin negative. Especially for telecine, too much underexposure is deadly. Overexposure, on the other hand, is easy to deal with, and no one will notice - as happened to me when my AC forgot to tell me he had pulled the pola on a beach shoot.
The other way round would have been bad, using the 5245.
This is so much the case that through all the '90s I always overexposed 1/2 stop for the telecine transfer.
Everything is more refined now, but it's still the truth. I like underexposing as much as the next guy now, but if you want to play it a little safer, you overexpose your neg.
Shooting with RED, I found that the "old" JJ saying, "Protect your highlights" -
meaning expose as much as you can while maintaining the highlights of your shot - is still the mantra and the way to go with RED.
If you do it right in camera and grade in 2K DPX, you can get beautiful results.
I know I have.
As we all know (or should know), it's not 35mm film and we don't have as much to play with in the grading stage. So the new school of cinematography, as per César Charlone (City of God) among others, of just shooting and fixing it later, is NOT the way to shoot RED or any other digital medium.
Best, JJ
EDIT: And he did indeed correct this error of over/under. My mistake for having an equally short attention span and not reading the entire thread before posting. Never do that.
BTW I'm JJ; Jim is at most J.J. (my mistake again; not to be repeated by ANYONE)
Best
Jens Jakob Thorsen, DP
 
JJ wrote:"With film, the tendency is to under expose a negative to keep the shadows from blocking".
Well no. Not in any way shape or form.

I was trying to discuss this before, but the thread got hijacked into the over-discussed 12-bit vs 10-bit, log vs lin... etc.

Trying to interpret Jim's post...

When you shoot film, you usually rate the film stock at a lower ISO, so you slightly over-expose it, with the intent of keeping the shadows from blocking.

When you shoot digital RAW, you usually rate the chip at a higher ISO (RED is rated at 320, but you shoot it higher; some say 500), so you under-expose your footage, with the intent of keeping the highlights from clipping.
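Quick arithmetic on those numbers (the 500 figure is just the one quoted above, not an official rating):

```python
import math

# Rating a 320-native chip at ISO 500 under-exposes by log2(500/320) stops;
# rating 500 stock at 320 over-exposes by the same amount.
stops = math.log2(500 / 320)
print(round(stops, 2))  # 0.64 of a stop
```

So either way, the re-rating trick is worth roughly two-thirds of a stop.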

What do you think?
 
It was just a slip from Jim - he meant exactly the opposite of what he said - the late hour, etc. It happens, but we all got what he meant:

Treat RED exactly opposite from film.
 
It's never a good idea to intentionally underexpose with a digital camera, particularly one as susceptible to shadow noise as the Red One is (with build 15). In my opinion, the best way to expose the Red is to choose the ASA you need (the lower the better), then set your iris to a stop where the whites are just below the clipping point, using the zebra or histogram. Then add shadow fill with lighting if possible.
As Jim mentioned, one can then use curves or secondary correction to adjust midtone and shadow levels in post, without bringing the highlights to the clip point.
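As a rough illustration of the histogram check, here's a sketch of a zebra-style test - the frame values and 5% margin are made up for the example, and this isn't the camera's actual zebra logic:

```python
import numpy as np

# Report what fraction of a linear frame sits within ~5% of the clip point.
def clipped_fraction(frame, bit_depth=12, margin=0.05):
    clip = 2 ** bit_depth - 1                      # 4095 for 12-bit
    return float(np.mean(frame >= (1.0 - margin) * clip))

frame = np.array([[100, 2000, 4095],
                  [3900,  50, 4095]])
print(clipped_fraction(frame))  # 0.5 -> half the pixels are at or near clip
```

If that fraction is more than your intended specular highlights, you stop down.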

So what is Redspace?
 
Just saw Flight of the Red Balloon (dir: Hou Hsiao-hsien). Never seen so much clipped and over-exposed 35mm film in my life. I didn't think it was pretty, because now I'm attuned to such things. Nevertheless...

"A work of art on the order of a poem by Yeats or a painting by Rothko." John Anderson, Washington Post

"Hou's latest film feels to me like a masterpiece." Michael Phillips, Chicago Tribune

"Flight of the Red Balloon succeeds magnificently." Philip Marchand, Globe & Mail

Seriously, never seen so many blown out images ever. Hm...
 
"There's no accounting for taste"

Personally, I've encountered a bit of the opposite from DPs.

There's such an absolute fear of clipping that there's an inclination to shut down a bit too far.
 
Yes, well, that certainly makes everything clear... Thanks for the response. Your camera is still cool.
 