

EPIC-M Monochrome

It's the exact same reaction that the R1 drew on CML. But then, I doubt the poster has enough knowledge of RED to recognize who reveals new RED products and where they are revealed.
 
Imagine if "Dead Man" were shot in color and converted in post. No way!! The Epic-Monochrome just may start a new revolution towards more B&W features.
 
As SSD cards become bigger and faster, it’s conceivable that three HDRx images could fit into one R3D file. If they could work out a solution in the ’40s, think of what they could do NOW! What was hard to do was coming up with a monochromatic camera that has a good ASA/ISO, is very sharp, and can render a good image. Just saying!

Humberto Rivera

it's not about how many tracks you can fit in a simultaneous stream... it's about light filtering.
 
That is very, very cool. I just hope some rental houses pick up some of these. Definitely a specialty item, but way cool.
 
This camera may well be a very valid rear attachment for some of the older cinema lenses like Speed Panchros, maybe with even a little more red colour filtration to emulate the older B/W film stocks. Way beyond my budget but will be watching and reading what follows here.
 
I think the first monochrome Epic M should be named Ansel.

[Attached image: Ansel.jpg]


Ansel approved.
 
This camera is such a beautiful idea. I can't wait to see some test shots (and whatever David Fincher is working on). I hope there is also a Scarlet Monochrome planned somewhere down the road.

My wishlist welcomes the Epic-M Monochrome, even if CML thinks that Jarred is the prince of Nigeria asking for my support.
 
Hmmm. Film transport + RGB-IR LED backlight + EPIC M Monochrome = 5K 5 x realtime telecine.
There's still a lot of film waiting to be archived, folks....
 
After my last post last night, my mind kept going, and I thought: again, WHAT IF? You’re right, Paul Herrin: “It's not about how many tracks you can fit in a simultaneous stream... it's about light filtering.” HOWEVER, there’s also an important consideration: you need to record the IMAGE.

The Technicolor image was developed between 1916 (Process 1), 1922 (Process 2), 1928 (Process 3) and 1932 (Process 4). All the solutions were “MECHANICAL”! Now it’s time for a SOFTWARE approach to the challenge of the IMAGES!

Again, I say that a solution must be found for what goes between the Camera and the Lens, utilizing a Red Epic M or X Monochrome Camera. It could be a “Beam Splitter”, or it could be a series of filters for specific wavelengths, to record the HDRx files onto a single R3D file. Or some sort of SOFTWARE solution that allows the HDRx files.

Here is a short quotation from the Wikipedia Page:

“SHOOTING TECHNICOLOR FOOTAGE, 1932–1955” http://en.wikipedia.org/wiki/Technicolor

“Technicolor's advantage over most early natural-color processes was that it was a subtractive synthesis rather than an additive one. Technicolor prints could run on any projector; unlike other additive processes, it could represent colors clearly without any special projection equipment or techniques. More importantly, Technicolor held the best balance between a quality image and speed of printing, compared to other subtractive systems of the time.

The Technicolor Process 4 used colored filters, a beam splitter made from a thinly coated mirror inside a split-cube prism, and three strips of black-and-white film (hence the "three-strip" designation). The beam splitter allowed ⅓ of the light to shine through a green filter onto one strip of film, capturing the green part of the image. The other ⅔ was reflected sideways by the mirror and passed through a magenta filter, to remove any green (which would have been redundant). The diverted non-green light exposed a pair of film strips spooled together; one which captured only blues, itself a filter, and another which picked up whatever was left (the red part of the image). The "blue" strip could act as a filter because it was a special type of film known as orthochromatic, which is designed to absorb some light frequencies and not others. The "green" and "red" strips were both of the broad-spectrum, panchromatic type.

To print the film, each colored strip had a print struck from it onto a light sensitive piece of gelatin film. When processed, "dark" portions of the film hardened, and light areas were washed away. The gelatin film strip was then soaked with a dye complementary to the color recorded by the film: cyan for red, magenta for green, and yellow for blue (see also: CMYK color model for a technical discussion of color printing).

A single clear strip of black-and-white film with the soundtrack and frame lines printed in advance was first treated with a mordant solution and then brought into contact with each of the three dye-loaded printing strips in turn, building up the complete color image. Each dye was absorbed, or imbibed, by the gelatin coating on the receiving strip rather than simply deposited onto its surface, hence the term "dye imbibition". Strictly speaking, this is a mechanical printing process, very loosely comparable to offset printing or lithography, and not a photographic one, as the actual printing does not involve a chemical change caused by exposure to light.

In the early days of the process, the un-exposed blank receiver film would be pre-exposed with a 50% black-and-white image derived from the green strip, the so-called Key, or K, record. This process was used largely to cover up fine edges in the picture where colors would mix unrealistically (also known as fringing). This additional black increased the contrast of the final print and concealed any fringing. However, overall colorfulness was compromised as a result. In 1944, Technicolor had improved the process to make up for these shortcomings and the K record was, therefore, eliminated.”

As you can tell, most of the process was done in POST! Only the recording needed the THREE-STRIP PROCESS. So the only “CHALLENGE” is: what goes in the middle, between the Camera & Lens, in B&W; Software or Hardware? In 1932 they were dealing with an ASA of FIVE (5), yes Five; today we have 2000, quite a difference, thanks to Red. All in the Monochromatic space!

Humberto Rivera
 
I used the IR Red One to shoot extensive sequences for a movie coming out next year. It's an amazing type of cinematography. A lot of the rules we tend to live by go out the window when shooting IR.

Jarred, do you have any plans, as of right now, to convert one of these monochrome cameras to IR and rent it out?

Now here is a very interesting idea - old RED ONE conversions to IR would breathe new life into the old girl, especially on those trade-in cameras. You could get them back in the field. I'd buy one -
 
You would need 3 sensors, or 3 sensor areas, or 3 cameras. You can't just stick something between the lens and the sensor and get a Technicolor process... the way you get a color image from a single sensor is to use a pattern, like the Bayer pattern, to gather RGB values.
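The point about the Bayer pattern can be sketched in a few lines of Python. This is a toy illustration only, not RED's actual pipeline: the RGGB tile layout is the common textbook one, and `mosaic` / `demosaic_nearest` are made-up names using a crude per-tile reconstruction rather than a real demosaicing algorithm.

```python
import numpy as np

def mosaic(rgb):
    """Sample a full RGB image through an RGGB Bayer pattern -> one plane.

    Each photosite keeps only one of the three channels, which is how a
    single (filtered) sensor records 'color' as monochrome values.
    """
    h, w, _ = rgb.shape
    out = np.zeros((h, w))
    out[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites
    out[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites
    out[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites
    out[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites
    return out

def demosaic_nearest(plane):
    """Crude demosaic: rebuild RGB by spreading each 2x2 tile's samples."""
    h, w = plane.shape
    rgb = np.zeros((h, w, 3))
    r = plane[0::2, 0::2]
    g = (plane[0::2, 1::2] + plane[1::2, 0::2]) / 2  # average the two G sites
    b = plane[1::2, 1::2]
    rgb[..., 0] = np.repeat(np.repeat(r, 2, axis=0), 2, axis=1)
    rgb[..., 1] = np.repeat(np.repeat(g, 2, axis=0), 2, axis=1)
    rgb[..., 2] = np.repeat(np.repeat(b, 2, axis=0), 2, axis=1)
    return rgb

# A flat-color scene survives the round trip exactly; real scenes lose
# resolution and show fringing at edges, which is why demosaicing is hard.
scene = np.ones((4, 4, 3)) * np.array([0.8, 0.5, 0.2])
recovered = demosaic_nearest(mosaic(scene))
```

The key takeaway is that the "splitting" happens per photosite via the filter array, and everything after that is software interpolation.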
 
Paul, interesting point: “you would need 3 sensors, or 3 sensor areas, or 3 cameras. you can't just stick something between the lens and the sensor and get a technicolor process... the way you get a color image from a single sensor is to use a pattern, like the bayer pattern, to gather RGB values.” – Paul Herrin

So there is THE CHALLENGE: how do you get three different images onto one R3D file? “To print the film, each colored strip had a print struck from it onto a light sensitive piece of gelatin film. When processed, "dark" portions of the film hardened, and light areas were washed away. The gelatin film strip was then soaked with a dye complementary to the color recorded by the film: CYAN FOR RED, MAGENTA FOR GREEN, AND YELLOW FOR BLUE (see also: CMYK color model for a technical discussion of color printing)”. http://en.wikipedia.org/wiki/Technicolor

It’s a Monochromatic sensor; it’s got a high ASA/ISO (2000), it’s got good Dynamic Range, and it gives a clean image! Now THE CHALLENGE: how do you record different wavelengths onto one sensor? They already worked out the Monochromatic Sensor, how to record HDRx, and a high ASA/ISO; now they need to figure out how to split the light coming through the lens before it gets to the monochromatic sensor! It’s no small challenge, but nothing is impossible; it’s just that they haven’t found the solution YET! I’m just posting the question: WHAT IF? We would then have a Camera capable of recording “Three-Strip Color” on a single device, which would definitely be a GREAT THING!

Humberto Rivera
 
Er... The "solution" is to use a sensor capable of differentiating the wavelengths that charge each photosite on the sensor. Effectively doing away with color filter arrays and prism blocks and whatnot. Go ahead, invent one, that is the next big (and logical) evolutionary step for image sensors. We start getting into quantum charge directions and photon determination. Detecting the wavelength of light is not a big deal, but doing it independently, on millions of tiny little dots, and building a digital composite value of millions of photon strikes on those little dots, that becomes a whole different deal.

Splitting the light before it hits the sensor has been done, using color filters such as the Bayer CFA or the prism found in 3-chip cameras. If you split the light, so what? You then need a way to record the split light. So we're back to a filtered pattern (like Bayer, or the RGB stripe pattern used in the F35 / Genesis), or you need multiple sensors tuned to each split light component -- a 3-chip HD camera... One could always use a prism system that alternates which wavelength contacts the sensor in sequence and record R, then G, then B... But then we would have temporal distortion between the R, G and B components...
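The temporal-distortion caveat in the sequential idea above can be shown with a toy example. Everything here is made up for illustration (`capture_channel`, the one-dimensional "frames", the subject moving one pixel per exposure); it just demonstrates why stacking R, G and B exposures taken at different times misregisters on motion.

```python
import numpy as np

def capture_channel(t, width=12):
    """Illustrative monochrome exposure at time t: a bright dot that
    moves one pixel per frame interval."""
    frame = np.zeros(width)
    frame[t % width] = 1.0  # subject position at time t
    return frame

t0 = 3
r = capture_channel(t0)      # red-filtered exposure at time t0
g = capture_channel(t0 + 1)  # green exposure, one interval later
b = capture_channel(t0 + 2)  # blue exposure, two intervals later

# Stack the three monochrome exposures into one "color" frame.
color = np.stack([r, g, b], axis=-1)

# Each channel saw the subject at a different position -> color fringing.
print(np.argmax(r), np.argmax(g), np.argmax(b))  # 3 4 5
```

A static scene would line up perfectly; anything moving fast relative to the R/G/B cycle smears into separated color ghosts, which is exactly the temporal distortion described above.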

The ultimate goal would be to not have any color filters, prisms or any light modification from the time the light emits from the rear of the lens until it hits the sensor. To have a sensor that can "see color".

As for 3-strip color process in a digital system like this. You still need 3 cameras or 3 sensors, same as it's always been. 3-chip prism cameras, Bayer pattern or RGB stripe pattern sensors are pretty much the equivalent of 3-strip film process. When we get into a digital world, we're still locked into R, G and B as our primary color channels in a 3-component system. There is actually no reason why we have to restrict ourselves in that way. It's easier for the human mind to quantify color components in that manner and we can derive all the visible range that humans can see in a transmissive color system with these components. For passive or subtractive color systems, we use CMYK process to accomplish the same.

The only advantage to trying to push this newer tech through a 3-strip color process is to have the same level of color filtration control that is available shooting a 3-strip film process. And that can be done the same way it has always been done with film. Even better now, look how small the EPIC brain is.
 
I do not understand the myopic response from a number of people on this thread to this amazing addition to the RED family. I wish I could afford one now.

Myopic. He He He, love it.

Epic Mono has the potential to open up a whole new market for scientific applications. Hi res, hi speed broad spectrum. Scientific apps often use ranges of narrow bandwidth dedicated spectrum filters to distinguish and analyze relevant data. It's not just about movies or creative photography.
 
I remember that. I think Brook posted a picture of the see-through Oakleys.

Yup... I remember seeing it posted here a few years back.... went looking for it again a while ago to show a friend, but couldn't find it.

The "solid black" plastic glasses frames looked like clear lucite... you could see the screws threaded into the plastic from the hinges... very cool.

-sc
 