
Possible sRAW for DSMC?

Whatever FF1080p is, it will be done "properly". No need to worry about that!

Graeme
 
One more thing I noticed, Jim and Graeme, is that when I'm shooting sRAW, my Canon 5D2 can shoot much, much higher FPS... almost twice as fast as with RAW. Could an sRAW-like option allow for higher framerates with Epic and Scarlet?

I realize that the downsampling might not be 100% perfect, but still, if it could significantly increase FPS while maintaining RAW capabilities, couldn't that potentially be useful?
 
In the Canon, it's not the sensor that's running at double speed, but that the write speeds to the CF card are twice as fast due to half as much data. REDCODE RAW is much more efficient in this regard. So, yes, it will increase the fps for the Canon, but not for the REDs which already run at a much higher fps.

Graeme
 

Part of that is because of the throughput to the cards.
However, REDCODE RAW will be significantly faster than anything else in Scarlet/Epic.

Edit: what Graeme said :)
 
Hmmm.. I don't know if I'm understanding 100%.

Yes, with the Canon, the FPS gain comes mainly from the limitations of the throughput to the CF cards, for sure. I know this because I tested various cards - x133, x233, x300, etc - shooting RAW, sRAW1 (1/2 res) and sRAW2 (1/4 res).

But if you cut the amount of REDCODE RAW data on the 6K FF35 Scarlet or Epic (to FF 35mm REDCODE sRAW 3K), for example, couldn't the same processors and CF cards (or SSDs or whatever) capture and record roughly twice the number of frames? Or is it the sensors themselves that have the speed limitations?
 
There are two kinds of compression going on here: a visually lossless RAW REDCODE, and an uncompressed decimated RGB RAW (basically what sRAW is doing). One works on a perceptual transform basis (the wavelet bit), followed by an entropy encoder (which packs what's left tightly together); the other works by binning (basic averaging) then decimating (throwing bits away), with perhaps some simple entropy encoding to pack things together.
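The binning-then-decimation approach can be sketched in a few lines. This is illustrative only, not Canon's or RED's actual pipeline: each 2x2 block of a sensor plane is averaged (binned) and only one value per block is kept (decimated), giving a crude half-resolution result.

```python
def bin_and_decimate(plane, factor=2):
    """Average each factor x factor block (binning), keeping one
    value per block (decimation) -- a crude half-res downsample."""
    h, w = len(plane), len(plane[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [plane[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 4x4 plane becomes 2x2; each output value is a 2x2 block average.
plane = [[0, 2, 4, 6],
         [2, 0, 6, 4],
         [8, 10, 12, 14],
         [10, 8, 14, 12]]
print(bin_and_decimate(plane))  # [[1.0, 5.0], [9.0, 13.0]]
```

The simplicity is the appeal: no transform, no rate control, just averaging and discarding. The catch, as the next paragraph explains, is that a 2x2 average is a very poor low-pass filter.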

Say, for instance, the sRAW binning / decimation to get the resolution down was done first, followed by the REDCODE wavelet stuff - you're now putting less resolution through REDCODE, so you make its life easier. Yes? Probably not... Because inadequate filtering has occurred, you're trying to fit more resolution than the pixel dimensions can handle. In real life, when you put more than a pint of beer in a pint pot, the beer spills out and makes a mess. In image processing, the beer folds back on itself and corrupts the image - aliasing, but just another mess of beer really. Aliases are spurious false data that are not correlated with the actual detail in the image. They make an image harder to compress, as they're effectively noise, or extra detail that's not actually part of the image.
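The beer-folding analogy can be made concrete. When a frequency above the new Nyquist limit survives an unfiltered decimation, it folds back (aliases) to a lower frequency that was never in the scene. A minimal sketch, with made-up sample rates chosen purely for illustration:

```python
def alias_frequency(f, fs):
    """Apparent frequency of a tone at f after sampling or
    decimating to rate fs with no pre-filtering (folding)."""
    folded = f % fs
    return min(folded, fs - folded)

# Decimating to a rate of 2000 gives a new Nyquist limit of 1000.
# Detail below the limit survives unchanged:
print(alias_frequency(800, 2000))   # 800
# Detail above the limit folds back as spurious lower-frequency
# "detail" -- the beer spilling back into the pot:
print(alias_frequency(1300, 2000))  # 700
```

That folded-back 700 is indistinguishable, after the fact, from real scene detail at 700, which is exactly why a later compression stage has to spend bits encoding it.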

In the end, even if you matched data rates on a per-pixel basis, you're effectively comparing 4K RAW at, say, 36MB/s with 2K RGB at 27MB/s, and the 4K RAW, properly demosaiced and downsampled to 2K RGB, is going to look better, especially if there's any high detail or repetitive high detail in the scene. In motion it will look a lot better still, as aliases look worse in motion: they move in the opposite direction to the motion, and they screw up motion estimation for the same reason. The image will also survive the broadcast chain more effectively, because aliases are spurious false detail that the broadcast compression has to encode - it doesn't know, and can't know, that it's not real image detail. Like noise, aliases harm compression schemes.

One of the perceptual issues with motion imagery is that the perception of judder is increased on sharp edges. Inadequate filtering on downsampling increases edge sharpness considerably, and hence contributes to the perception of motion judder.

But to answer your question, RED's sensors do have speed limitations. Nothing like the speed limitations on the Canon sensors though... Normally on a CMOS, each line you read out takes time. The more lines you read, the longer it takes to read out a whole frame. That is why we can go faster in 3K than 4K, and a lot faster fps in 2K. All makes sense. If the 5D2 is only reading 1 line in 3, say, then it's not a 30fps sensor, but really a 10fps sensor. I think that really puts what RED is able to do, speed-wise, on its sensors into perspective. Given the mirror shutter is hard to run much faster than 10fps, why would stills photo sensor designers even want to try to make the sensors run faster than that? More speed would cause all manner of engineering difficulty and cost.
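The line-by-line readout argument can be sketched numerically. Assuming a fixed time per row (the row count and line time below are hypothetical, not any real sensor's figures), the readout-limited frame rate is inversely proportional to the number of rows read, which is why fewer rows - a lower-resolution window, or line-skipping - means higher fps:

```python
def max_fps(rows_read, line_time_us):
    """Rolling-readout CMOS: frame readout time is rows x time-per-row,
    so the readout-limited frame rate is its reciprocal."""
    frame_time_s = rows_read * line_time_us / 1e6
    return 1.0 / frame_time_s

LINE_TIME_US = 7.0  # assumed time to read one sensor row, microseconds

full = max_fps(3000, LINE_TIME_US)  # read all 3000 rows of a hypothetical sensor
skip = max_fps(1000, LINE_TIME_US)  # read 1 line in 3, as speculated for the 5D2

print(round(full, 1), round(skip, 1))  # skipping 2/3 of the rows triples the fps
```

Under this model, reading one line in three triples the frame rate for free - which is also exactly why line-skipped video aliases so badly: two-thirds of the vertical detail is discarded with no filtering at all.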

Graeme
 

Graeme, most of that went straight over my head, as usual. :) I think I will reread it a couple times.

A couple quick things, though. I was talking about fast-FPS stills shooting in RAW vs sRAW1 vs sRAW2. Not video. Although you and Jim and Deanan are kind of dissing the sRAW process, I'm not really seeing it on my end, rendering out gorgeous and (to my eyes) alias-free 2K high-definition video.

You said, "you're effectively comparing 4k RAW at say 36MB/s with 2k RGB at 27MB/s." But I'm not sure I follow. With Canon's 5D2, a full RAW image is 20MB, but the sRAW1 (1/2 res) still is only like 9MB. So the amount of data has been roughly halved.

Are you saying that the REDCODE processing would struggle with the sRAW half-res because it's jagged and has aliasing?

Also, are the limits on FPS - like 6K@100FPS for FF35 Epic - mainly limits of the sensors, or limits of the processing? I remember with R1 Jim said processing was the actual bottleneck.

But what about FF35 Scarlet? Does it use the same exact sensor as FF35 Epic, only with less horsepower for processing? Or is the sensor's read-reset speed actually slower? If they are the same sensor, couldn't you guys offer a 1/2-res onboard sRAW (or whatever term you prefer! :)) that would allow Scarlet FF35 to roughly double its FPS - at 3K REDCODE sRAW, for example? Please. :)
 
Good discussion...

Do you really use sRAW on your Canon, Tom?

That's one of those things I really never understood. It's been around for a while now; I've turned it on once but just don't understand the logic behind it. It's a bit of an oxymoron to me.
 
It's funny because I didn't think I would use the sRAW, but a couple weeks ago I was shooting some high-speed timelapse (drivelapse) and was basically left with no choice. The fastest CF cards I had were Lexar 8GB x300s, and they would bog down after about 50 frames when I was shooting 1/3s continuous RAW.

So I had to switch to 1/3s at sRAW 1/2 res, and then the CF cards could keep up. With sRAW2 (1/4 res) I was able to shoot at 1/4s continuous.

Later that week I started using sRAW just to save card space for some long timelapses and when I processed the clips, I have to say, they looked fantastic at 1080p or 2K.

Not to mention that all my full-on RAW 5.5K 5D2 image sequences nearly brought my desktop to its knees when I was working on and rendering out the clips in AE. :eek:

So now I am going to use full 21MP RAW for the top-of-line shoots, and sRAW for just average shoots that only require a clean 2K finish.
 
mmmm, looks like you found a use for them. Are you using the DNG converter to convert your sRAW files? Nothing opened even normal RAW properly when I got my 5D2 a few weeks back.. it sounds like you've had better luck, even working with sRAW in AE?
 
You need CS4 and the new Adobe Camera Raw plugin for CS4. It works great, though. In AE, for rendering timelapse clips, I work with the RAW files or sRAW files directly (AE can ingest them, just like Photoshop). The RAW image sequences are right in the timeline. Then I make 1080p or 2K or 4K masters right off the RAWs. The quality is superb.

Come to think of it.. I'm probably one of the few random joe six pack type guys around here who is already doing 5K or 6K RAW video editing and processing right now. :) And I can tell you.. my editing computer is not happy! Luckily we have another year or so before FF35 really hits the streets. Plenty of CPU power is going to be needed for those cameras.
 
yeah just checked.. the ACR plugin from last week covers the 5D2.. good to see.

You're right about CPU power.. I am hoping Apple releases some heavy machinery next month, it's kinda been a while...
 
A couple quick things, though. I was talking about fast-FPS stills shooting in RAW vs sRAW1 vs sRAW2. Not video. Although you and Jim and Deanan are kind of dissing the sRAW process, I'm not really seeing it on my end, rendering out gorgeous and (to my eyes) alias-free 2K high-definition video.

We actually tested decimated RAW against REDCODE a long time back, even before we optimized REDCODE, and there was no comparison.

Aliasing is something that varies a lot with subject matter. We tend to do a lot of stress testing because, while one scene may look OK, another can look horrendously aliased.
 
We actually tested decimated RAW against REDCODE a long time back, even before we optimized REDCODE, and there was no comparison.

In terms of "decimated RAW" (sRAW) vs REDCODE, does it have to be one or the other? REDCODE, right now, does not offer an in-camera 1/2-res RAW option, does it?
 
My long comment above was that decimating to a lower-resolution RAW, then putting it through REDCODE, would not result in a better image than REDCODE as-is, scaled properly in post. The file size advantage wouldn't actually be as great either, because the aliasing would thwart the compression.

Graeme
 
My long comment above was that decimating to a lower-resolution RAW, then putting it through REDCODE, would not result in a better image than REDCODE as-is, scaled properly in post. The file size advantage wouldn't actually be as great either, because the aliasing would thwart the compression.

Graeme

But what about saving data space and processing power (and any benefits that might enable) in camera, in the field? You're certain that REDCODE would struggle with the 3K 1/2-res RAW? If that's the case, then yeah, maybe it's not worth pursuing or thinking about.
 
Increase in fps?

when I'm shooting sRAW, my Canon 5D2 can shoot much, much higher FPS... almost twice as fast as with RAW.

Could an sRAW-like option allow for higher framerates with Epic and Scarlet?

Rather than dig any deeper into the design logic behind this Tom, the answer is no.
 
The sRAW approach is the simplest way to reduce RAW file size. It's as simple as it is "wrong" from an imaging point of view. REDCODE RAW is a vastly non-trivial solution to the "problem", but in the end, a more elegant and powerful solution.

Graeme
 
The sRAW approach is the simplest way to reduce RAW file size. It's as simple as it is "wrong" from an imaging point of view. REDCODE RAW is a vastly non-trivial solution to the "problem", but in the end, a more elegant and powerful solution.

Graeme

Graeme,
Couldn't you just throw away the highest-frequency information in the REDCODE frame before the entropy encoding? It seems to me that would achieve the resolution of sRAW without any aliasing. It might seem that a more flexible allocation of bits for each frequency would be better, but sometimes the user may already know they don't want the full resolution of the sensor area they are using.
By the way, based on how soft some of the posted Canon sRaw files look I think that Canon may be doing a true anti-aliasing filter for sRaw instead of just binning pixels. [Edit: I was referring to some 5D2 sRaw1 images that have been posted that looked soft. After looking at the image Graeme posted it looks like his camera does have luminance aliasing in sRaw mode.]
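The suggestion of discarding the highest-frequency information before entropy coding can be sketched with a one-level Haar wavelet - a stand-in here for REDCODE's actual wavelet, which is not public. Zeroing the detail subband leaves a smooth signal with no new aliasing, because the low band is formed by proper averaging rather than by dropping samples:

```python
def haar_forward(x):
    """One level of the Haar wavelet: pair averages (low band)
    and pair differences (high band / finest detail)."""
    avg  = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    diff = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, diff

def haar_inverse(avg, diff):
    """Exact reconstruction from the two subbands."""
    out = []
    for a, d in zip(avg, diff):
        out += [a + d, a - d]
    return out

signal = [10, 12, 9, 11, 30, 2, 8, 8]
avg, diff = haar_forward(signal)

# Discard the finest subband: keep only what a half-resolution
# frame could represent, but derived from properly averaged data.
smooth = haar_inverse(avg, [0] * len(diff))
print(smooth)  # [11.0, 11.0, 10.0, 10.0, 16.0, 16.0, 8.0, 8.0]
```

The sharp 30-to-2 transition becomes a flat 16, 16: detail is lost, but nothing folds back as a spurious frequency - which matches Graeme's reply that this would work, it just doesn't save anything worth saving.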
 
Sure, that'd work, but at that point, why? You may as well keep the details. It doesn't solve any problems for us though.

Based upon the sRAW image I shot above, you tell me what you think they're doing! I think the exact nature of the sRAW is different in different Canon cameras, but I think it's obvious what's going on in the sRAW I have access to.

Graeme
 