So how does RedRay work?

I've got no solid or official information to contribute; however, I was thinking about the leap that the RedRay codec makes from any previously available compression. Of course, as time goes on compression only gets better, but the massive difference between RedRay and older compression suggests there must be another clever idea being implemented by Graeme...

Could it be possible that RedRay works on the same principle as RedCode? Most codecs encode three colour channels (RGB or YUV), but RedCode only has to deal with a Bayer-pattern greyscale image because it keeps the information RAW. As we know, this image is then debayered, resulting in an image that resolves about 80% of the original RAW resolution.

Would it be possible to blow up an image (any 3-channel image, for instance) by 125% and apply a digital Bayer-patterned filter to create a greyscale image for encoding, which would then be decoded and debayered on playback to form the original image?

Would this process be beneficial in achieving extremely low data rates such as demonstrated with RedRay?
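
For what it's worth, here's a rough Python sketch of what I mean. Everything here is an illustrative assumption on my part (the nearest-neighbour blow-up, the 125% factor and the RGGB layout), not anything RED has described:

```python
import numpy as np

def rgb_to_synthetic_bayer(rgb):
    """Blow an RGB image up by ~125% (nearest-neighbour, for brevity)
    and sample it through an RGGB mosaic into one greyscale plane,
    which could then be fed to a Bayer-style encoder."""
    h, w, _ = rgb.shape
    H, W = int(h * 1.25) & ~1, int(w * 1.25) & ~1   # round to even sizes
    big = rgb[np.arange(H) * h // H][:, np.arange(W) * w // W]
    mosaic = np.empty((H, W), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = big[0::2, 0::2, 0]   # R sites
    mosaic[0::2, 1::2] = big[0::2, 1::2, 1]   # G sites
    mosaic[1::2, 0::2] = big[1::2, 0::2, 1]   # G sites
    mosaic[1::2, 1::2] = big[1::2, 1::2, 2]   # B sites
    return mosaic   # one channel instead of three; debayer on playback
```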
 
I can't say how the codec works, but I can tell you that getting good quality dilithium crystals in commercial quantities has turned out to be trickier than I thought.

Graeme
 
I can't say how the codec works, but I can tell you that getting good quality dilithium crystals in commercial quantities has turned out to be trickier than I thought.

Graeme

Why didn't I see the Scotty/Jim connection before? Brings new life to old lines like:

"Jim, I canna change the laws of physics. I've got to have thirty minutes."
 
Ha! Love the response Graeme :) I've got no idea what dilithium has to do with RedRay but I'm liking the extremely vague honesty.

Sorry if this thread came across as an attempt to get you to spill the beans - not the intention at all, just thought it was an interesting idea worth mentioning. Though I'm still not sure whether it would be feasible or worthwhile...

As always looking forward to hearing more, when the time is right.
 
Leaving aside arguments from ignorance (you know: invoking some fantastic new top-secret compression technology that I know nothing about, and neither does anybody else apart from RED), and only discussing what is actually possible, I think I can make a fair stab at guessing how Red Ray might actually work.

It seems to be fairly common knowledge by now that Redcode recording is based on JPEG2000 (or JPEG2000-like) compression of streams of four separate half-sensor-native-resolution images: one red, one blue and two pixel-offset green images. That is (for the RED One at least), there are basically one 2K red, one 2K blue and two 2K green images per “4K” frame. The green images are diagonally offset by a distance of about 1.4 pixels, which allows the de-compression/de-Bayer process to produce an output resolution approximating that of a hypothetical 3-chip camera that uses three “3K” sensors. Thus, although the fully decoded output from a RED One is routinely described as “4K”, it is more like 3K in a 4K container; that is, the 4-gallon 4K bucket never has more than 3 gallons in it. (The higher-resolution MX and Epic sensors apparently do produce the full four gallons.)
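
To make the four-plane idea concrete, here's a rough Python sketch of the splitting step. The RGGB arrangement is just an assumption on my part; RED has never published the details:

```python
def split_bayer_planes(sensor):
    """Split a Bayer mosaic (a 2-D numpy array, assumed RGGB here)
    into four half-resolution monochrome planes, each of which can
    then be wavelet-compressed like an ordinary greyscale image."""
    r  = sensor[0::2, 0::2]    # red photosites
    g1 = sensor[0::2, 1::2]    # first green plane
    g2 = sensor[1::2, 0::2]    # second green plane, diagonally offset
    b  = sensor[1::2, 1::2]    # blue photosites
    return r, g1, g2, b
```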

In an idealized RedCode Post-Production environment, the process would start with copying all the RedCode camera files onto the editing computer (or, at any rate, making them accessible to it).
In line with all other non-linear editing systems, the computer then makes lower-resolution copies of these files - the so-called “Proxies” - which are what are actually used for the editing process. In most cases, all the editing decisions are made using these lower-resolution files, which allows the editing computer to work in real time, or close to it.

At the end of the editing session, the computer generates some sort of Project file, which is basically a “script” of instructions that tell it how to duplicate all the editing decisions made using the Proxies, but this time doing it to the original camera files, which is generally known as making a full definition “conform”. (This is pretty much the same principle as using workprints for film editing, or “off-line” editing using videotape, in use long before computer-based editors became available).
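
As a toy illustration of the principle (all the names here are hypothetical; real systems use EDL/XML project files), the same “script” of edit decisions can simply be replayed against whichever set of files you point it at - proxies while cutting, camera originals for the conform:

```python
from dataclasses import dataclass

@dataclass
class Edit:
    clip: str       # source clip identifier
    src_in: int     # first source frame used
    src_out: int    # last source frame used

def conform(script, frame_source):
    """Replay the edit decisions against any version of the media:
    pass a proxy reader while editing, and the camera-original
    reader for the full-definition conform."""
    for edit in script:
        for n in range(edit.src_in, edit.src_out + 1):
            yield frame_source(edit.clip, n)
```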

In just about all non-linear editing systems, the higher resolution of the original camera files can make the high definition conform take a very long time, which is why most large production houses set up “Render Farms”. In a Render Farm, a large project can be cut up into smaller sections and “farmed out” to a large collection of identical dedicated computers, all working in parallel, to “render” the original camera files into the final full-definition output frames, which are then re-assembled to produce the final output file.
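
The farm-out-and-reassemble pattern itself is straightforward; a hypothetical Python sketch:

```python
from concurrent.futures import ProcessPoolExecutor

def render_farm(script, render_section, workers=16):
    """Cut the project into sections, render them on parallel
    workers, and reassemble the frames in their original order."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        sections = list(pool.map(render_section, script))  # order preserved
    return [frame for section in sections for frame in section]
```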

A great advantage of cameras and editing systems that store and work directly with Raw format, as opposed to more conventional ones based on fully decoded camera video, is that (in theory at any rate), the first time any image transcoding takes place is when the original camera files are rendered directly to the final output format by the render farm.

The same Post-Production “script” can also be set to produce 4K, 2K, 1920 x 1080 HD, or standard-definition video, or even 4K optimized for burning to a master negative for film duplication. In all cases, the output file is rendered directly from the original camera files, in more or less a single step.

In practice, none of this seems to happen very often. The vast majority of RED camera footage seems to get converted to the final output format (most often either 2K or 1920 x 1080 HD), PRIOR to the editing process, throwing away a considerable amount of the original sensor information. To make matters worse, quite often only a single format is rendered at the end of the process, and this is then re-transcoded by external apparatus to produce any other formats required.

What I suspect Red Ray is actually for is an extension of the principle of RedCode, where the original sensor data (G1, G2, B and R) is directly compressed as if it were a series of four monochrome images, without any attempt at image processing.

In a live camera situation, JPEG2000 compression of the original sensor frames is about all that is possible in a current-technology real-time environment. There is, however, no reason why further compression of the Raw files would not be possible, using MPEG4 technology or some variant of it, but only in a post-production environment, where multiple compression passes are possible.

What I think happens with Red Ray is that, instead of the render farm rendering down to one of the industry-standard delivery formats, it only makes a simple assemble-edit of the original RedCode files, applies MPEG4-like inter-frame compression to the G1, G2, B and R data streams, and then folds the “Script” for the final image manipulation (colour correction, gamma curve, brightness, contrast, sharpening etc.) into the Red Ray data stream as metadata.

The output from a Red Ray unit (in a cinema for example) would then be decoded more or less directly from the original camera files, all of the Post “massaging” being done on the fly by dedicated high speed processors.
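
Pulling that speculation together as a rough structure (to be clear, every name here is hypothetical - this is just how I imagine such a stream might be organized):

```python
from dataclasses import dataclass, field

@dataclass
class RedRayStream:                 # hypothetical container layout
    g1: bytes                       # inter-frame-compressed G1 plane stream
    g2: bytes                       # inter-frame-compressed G2 plane stream
    r: bytes                        # red plane stream
    b: bytes                        # blue plane stream
    post_script: dict = field(default_factory=dict)  # colour, gamma, sharpen...

def play(stream, decode_plane, debayer, apply_look, display_res):
    """Decode the raw planes, then do all the Post 'massaging' on the
    fly: de-Bayer straight to the display's native resolution, and only
    then apply the look carried in the metadata."""
    planes = [decode_plane(p) for p in (stream.g1, stream.g2, stream.r, stream.b)]
    frame = debayer(planes, out_res=display_res)
    return apply_look(frame, stream.post_script)
```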

The overwhelming advantage of this approach is that it would minimize the amount of pre-render degradation of the original camera images. Even though they would undergo considerable data compression, the de-compressed files would still represent relatively untouched sensor data with its dynamic range essentially intact, which would produce the best results possible from the available data. This contrasts with the current situation where video parameters are routinely “baked in” right at the start of the editing process.

The Red Ray output could also be set to precisely match the resolution of the display device, which would eliminate yet one more stage of re-mapping.

Where this would fall down is in the area of CGI intensive features, since I doubt anything as compact as what has been proposed would have anywhere near enough processing muscle to produce convincing real-time 4K images.

However, in that case the CGI rendering computers could output the images in the same G1, G2, B and R Bayer format as comes from the camera, in which case the images would most likely need little or no processing in the Red Ray unit, apart from being routinely de-Bayered to the desired output format.
Also, if you look at the vast majority of TV shows and Movies, editing other than simple cuts and a bit of gamma manipulation is rarely used anyway.

The major advantages would be:

* Minimum transcoding degradation between camera and display. (It would be analogous to shooting on Kodachrome and projecting the original camera stock.)
* A single low-data-overhead delivery format that could be configured to precisely match just about any display system on a relatively inexpensive player.

Anyway, that’s how I imagine RedRay works. And if it doesn’t work that way, it damn well should… :cheers2:
 
Also, if you look at the vast majority of TV shows and Movies, editing other than simple cuts and a bit of gamma manipulation is rarely used anyway.

Oh, if only that were true. Size and position transforms, speed changes, visual effects "fixes" (remove a mic boom, change a sign, replace a sky, etc.), and monitor burn-ins are very common and very plentiful in television series today, and even more so in motion pictures. If you think editing of these programs is that "simple," you most likely aren't involved in doing it.

As for Red Ray, I don't know any more than anyone else here. But I do know that it's not limited to RedcodeRaw input material (it can work with anything - and has to, because I've yet to work on a project in which every shot in the show was done on Red and not touched by any visual effects) and it's very unlikely that it is utilizing a reverse-deBayer approach. My impression is that it's basically a very well tuned JPEG2000-style wavelet encode coupled with an equally well tuned interframe compression scheme that may or may not be similar in approach to MPEG and H.264. If done correctly, the rates being touted for RedRay files are certainly possible, even if a bit surprising.
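
For anyone who hasn't looked at wavelet encodes, the simplest possible illustration of the intra-frame half is one level of a 2-D Haar transform. Real JPEG2000-style codecs use much better filter banks plus quantization and entropy coding, so treat this as a sketch of the principle only:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform (assumes even
    dimensions) - the simplest cousin of a JPEG2000-style encode."""
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2      # horizontal averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2      # horizontal details
    ll = (lo[0::2, :] + lo[1::2, :]) / 2    # smooth sub-band
    lh = (lo[0::2, :] - lo[1::2, :]) / 2    # vertical details
    hl = (hi[0::2, :] + hi[1::2, :]) / 2    # horizontal details
    hh = (hi[0::2, :] - hi[1::2, :]) / 2    # diagonal details
    return ll, lh, hl, hh                   # most energy lands in ll
```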
 
Oh, if only that were true. Size and position transforms, speed changes, visual effects "fixes" (remove a mic boom, change a sign, replace a sky, etc.), and monitor burn-ins are very common and very plentiful in television series today, and even more so in motion pictures. If you think editing of these programs is that "simple," you most likely aren't involved in doing it.
The vast majority of those are well within the capabilities of a custom hardware solution. As I said, not everything would be amenable to such a process, and in that case a "Follow Copy" flag would be invoked and pre-processed images would be output, with an unavoidable introduction of transcoding losses.



As for Red Ray, I don't know any more than anyone else here. But I do know that it's not limited to RedcodeRaw input material (it can work with anything - and has to, because I've yet to work on a project in which every shot in the show was done on Red and not touched by any visual effects) and it's very unlikely that it is utilizing a reverse-deBayer approach. My impression is that it's basically a very well tuned JPEG2000-style wavelet encode coupled with an equally well tuned interframe compression scheme that may or may not be similar in approach to MPEG and H.264. If done correctly, the rates being touted for RedRay files are certainly possible, even if a bit surprising.

Yes, but you're now appealing to explanations based on new technologies that you don't actually know exist, which is a classic "argument from ignorance".

It just seems bizarre to me that if they really have developed some new ultra-efficient compression CODEC, they would be limiting its use to a relatively small-market product like Red Cameras.

It's entirely possible that the original RedCode concept has been overtaken by other technological advances. Hard disks that would have been considered outrageously large a few years ago are now available that can be run and powered from a USB socket. Distributing 4K movies on portable USB hard drives is now a perfectly practical option, but not just for RedCode. Without cheap optical media support, I don't see it as having the impact it might once have.

A means of projecting more or less directly from the original camera files would have clear advantages.
A more efficient CODEC that simply allows you to store conventionally processed/degraded files in less storage space no longer does.


"If done correctly, the rates being touted for RedRay files are certainly possible, even if a bit surprising"
To paraphrase one of your own statements:

If you think massively "re-tuning" the algorithms of a horrendously complex system like MPEG4 is something that can be successfully tackled by a relatively small team like RED's, you most likely don't know the first thing about it.
 
Also, if you look at the vast majority of TV shows and Movies, editing other than simple cuts and a bit of gamma manipulation is rarely used anyway.

Keith, I appreciate your idealized vision of post production, but it is clearly written by someone with virtually no insight into the everyday realities of post. I wish the post process were as clean as you describe, where I could have one magical 'script' containing the sum total of everything that has been done to my image. If that were ever to exist, it would require major cooperation and standardization across vendors. It would also hinder innovation overnight.

If you think massively "re-tuning" the algorithms of a horrendously complex system like MPEG4 is something that can be successfully tackled by a relatively small team like RED's, you most likely don't know the first thing about it.

Mike Most knows some things.
 
I want to see RedRay in action! :w00t:
 
It just seems bizarre to me that if they really have developed some new ultra-efficient compression CODEC, they would be limiting its use to a relatively small-market product like Red Cameras.

Small market product ...?

Keith ... sorry to tell you ... but it IS true. They have developed a new ultra-efficient compression CODEC.

I've seen it. Up close and personal.

Seen stuff my company has shot and/or mastered - encoded on it and projected on a huge screen.

It is, in my opinion, one of the most significant achievements in Digital Cinema in history.

Stick around ... you will get a crash course in "small market".
 
It just seems bizarre to me that if they really have developed some new ultra-efficient compression CODEC, they would be limiting its use to a relatively small-market product like Red Cameras.

The codec in RED RAY is optimized for 4K content distribution via the internet, not camera usage.

It is our intention to demonstrate that to be a "significant" size market.
 
It's entirely possible that the original RedCode concept has been overtaken by other technological advances. Hard disks that would have been considered outrageously large a few years ago are now available that can be run and powered from a USB socket. Distributing 4K movies on portable USB hard drives is now a perfectly practical option, but not just for RedCode. Without cheap optical media support, I don't see it as having the impact it might once have.

The concept of physical inventory - be that hard drives or optical media - is in our opinion obsolete.

It leads to too many inefficiencies and logistics overhead. The future of distribution is over a network.
 
If you think ____ is something that can be successfully tackled by a relatively small team like RED's, you most likely don't know the first thing about it.

I put a blank in your quote.... Fill in with whatever you like. The answer will always be "yes", "why not?", "if they really want to"...

It was once said about the same "small team" and building a camera... They now have 3. Three cameras that have thousands hooked... (One of which is named Epic. Which it simply is.)
I wouldn't doubt this "small team".

Just saying...
 
The concept of physical inventory - be that hard drives or optical media - is in our opinion obsolete.

It leads to too many inefficiencies and logistics overhead. The future of distribution is over a network.

So Redray will still be needed as a proprietary device for encode/decode?
 
...so Redray will still be needed as a proprietary device for encode/decode?

The RED RAY player and the RED distribution codec work hand-in-hand to deliver 4K resolution information, yes.
 
How does RedRay work for live content, if at all?

Can I connect an Epic to the internet and deliver live video/audio to Red Ray players in theatres?
 
How does RedRay work for live content, if at all?

Can I connect an Epic to the internet and deliver live video/audio to Red Ray players in theatres?

Or to set-top RED Ray players even? Perhaps a RED Ray 4K live-streaming option?
 
Or to set-top RED Ray players even? Perhaps a RED Ray 4K live-streaming option?

And what about 4K streaming from SPACE? That's something.
Stuart, is NASA hooked up?
;-)
 