R3D Data Manager checksums: really that important?

I'm curious about some of you guys' procedures for checking R3D files to make sure none of them are corrupt. (I haven't seen any corrupt files yet.)

Is playing the proxy files enough?

And if you find a corrupt file, what are the usual signs (it just not playing)?

For the features where I have advised on or set up the systems, I always recommend the following during the visual inspection stage of data management:

- play each proxy file at the base frame rate, i.e. 23.98, 24, 25, 29.97. This may mean playing a lower-res proxy file on their system, then checking focus or other items in a larger-res file later. The point here is to decode each and every frame.

- play it from a copied destination first; if that has issues, then check the source.

- listen for correct/good audio (if applicable)

- watch for green/black frames, stuttering frames or interior frame corruption.

Lots of people I know just scrub through the proxy files. That method will show green/black frames (which are usually dropped frames), but won't show interior frame corruption (i.e. codec errors). With some of the early versions of build 16, you would only see interior frame corruption when you played back at full speed, not when scrubbing.

Usually the footage right up to the frame before a corruption is good, and the footage from the frame after onward is fine. You could use all that footage around a corrupted frame without issue; you would just need to cut around the corrupted frame. However, re-shooting it or selecting an alternate take would be better, IMHO.
 
Hans,

In what mode were you using the Nexto?
Copy?
Copy and Verify?
Move? (which copies, verifies and deletes the original card)

Have you tried removing the drive from the Nexto and using an external adapter to mount it on another machine? Would you be able to create an image of the drive rather than trying to copy? Can you access the problem R3D through Red Alert?

Copy and Verify.

Yes, I thought of removing the drive and mounting it on a different housing. I have the feeling that Deanan is right and the HD is somehow damaged.

No, I cannot render a sequence past the 1 GB bump from RA, ReMaster or SpeedGrade.

Thanks for the input Scott!

Hans
 
As I understand it, you can access the QT proxies, which play back without stopping at the 1GB point? (Which would indicate the R3D file itself is still intact.) Worst case scenario, you render out the highest quality you can from Red Rushes, R3D Data Manager or Clipfinder?

I'll post the name of the file recovery tool we used tomorrow when I'm in the office.

So where do you think the corruption happened?
Ian and Conrad please feel free to educate us :)
Was the file always corrupt and the Nexto simply copied and verified the corrupt file from the Red CF? (impossible to know now that the original has been wiped)
Did the Nexto damage the file in the transfer process?
If the HDD is to blame, why? Why would the fault affect only the sectors of that one file?

Great thread. Thanks to all who have contributed!
 
I'm curious about some of you guys' procedures for checking R3D files to make sure none of them are corrupt. (I haven't seen any corrupt files yet.)

Is playing the proxy files enough?

And if you find a corrupt file, what are the usual signs (it just not playing)?

On set, when a shot is good, you normally check the gate. With the RedOne this doesn't make much sense. Instead of checking the gate we now replay the shot, when there is time. This makes sure that the shot is error-free on the CF.

Keeping the backups error-free is another story. While I fully understand what Conrad is after with his app (and I use it on a daily basis), I'm more like Ian in this regard. Although this whole backup issue is mainly computer tech, many of the possible mistakes can be addressed before they arise. Keeping the exposed media (in my case CFs) until two backups are made and verified is one way that is pretty secure and proven; not rocket science, but common sense.

Personally I found working RefQTs sufficient, until now. But in my case I'm pretty sure that my problem is a faulty drive in the Nexto. This wouldn't be a problem if I had kept the CF... Don't break the rules.

Hans
 
Copy and Verify.

Yes, I thought of removing the drive and mounting it on a different housing. I have the feeling that Deanan is right and the HD is somehow damaged.

No, I cannot render a sequence past the 1 GB bump from RA, ReMaster or SpeedGrade.

Thanks for the input Scott!

Hans

You might be able to render from REDline (directly from the nexto) by skipping past the bad section.

Also, a drive recovery program should be able to copy around the bad sectors if it's not too bad.

d
 
But the fundamental question comes down to the same saying used elsewhere in filmmaking: good, fast, cheap; pick two. In this instance I would say secure, fast, easy; pick two. Your method is not secure, but is fast and easy. My software allows people to pick secure and easy or fast and easy.

Quote of the year!

John DeBoer
Director of HD Sales,
SIM VIDEO INTERNATIONAL
 
Actually, text files do a very good job of illustrating my point here. The point is that it takes very little for files that in every other way appear to be exactly the same to actually not be the same. It doesn't matter if it's a DOC or an R3D file; it is still a series of zeros and ones that need to be in the proper order. We could generate R3Ds that have the same amount of data wrong, but unless you are looking at a hex editor you may miss it. The point is not that the file is correct 98% of the time, but 100%.
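That same-size, one-bit-off scenario is easy to demonstrate. Here is a quick illustrative sketch in Python (the sample data is arbitrary; any byte string behaves the same way):

```python
import hashlib

# Two buffers: identical length, identical except for one flipped bit.
original = bytearray(b"REDCODE RAW frame data " * 1000)
corrupted = bytearray(original)
corrupted[12345] ^= 0x01  # flip a single bit somewhere in the middle

assert len(original) == len(corrupted)      # same "file size"
assert bytes(original) != bytes(corrupted)  # but not the same data

# The digests differ completely, so a checksum catches the change
# even though a size comparison would not.
print(hashlib.md5(bytes(original)).hexdigest())
print(hashlib.md5(bytes(corrupted)).hexdigest())
```

A size check, a successful copy, and even a casual playback can all pass while the underlying bits differ; only a digest comparison flags it.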

The question is not if those files could exist, in theory, it is if they do exist, in reality. And if the risk of their existence is worth the cost of detecting them.

I say the probability that:
A file can transfer without raising an error.
Has the exact same file size.
and plays back just fine.
yet still has unseen, undetectable differences that make it unusable.

is

extremely low. Lower, in fact, than the probability of an unforeseen bug in an automated system going unnoticed.

But again, that's another reason why I developed the software. CRC checks are not the best at determining if a file is valid. In fact, there are computer viruses in the wild right now that operate by infecting a file and then padding its length to return the same CRC. The odds of doing that with MD5 are on the order of 1E-32; with SHA-1, on the order of 1E-128. It's not yet known how to do that with SHA-256, because it hasn't been done yet.
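The kind of digest pass being discussed can be sketched with Python's standard hashlib module. This is a minimal illustration, not R3D Data Manager's actual code, and the 1 MB chunk size is an arbitrary choice:

```python
import hashlib

def file_digests(path, chunk_size=1 << 20):
    """Compute MD5, SHA-1 and SHA-256 of a file in a single read pass,
    reading in chunks so large R3D files never sit whole in RAM."""
    md5 = hashlib.md5()
    sha1 = hashlib.sha1()
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            md5.update(chunk)
            sha1.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha1.hexdigest(), sha256.hexdigest()
```

Because all three digests are updated from the same chunks, the cost is dominated by the single read pass; the drive, not the processor, sets the pace.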

A computer program that uses a trapdoor function to defeat CRC is not the same as random errors in data that can defeat CRC. CRC is the basis of almost all digital communication; to call it unreliable is, simply put, FUD.

In addition, the CRC checks are usually handled by the drive's firmware, which varies from drive to drive and manufacturer to manufacturer. One of the biggest reasons the recent Seagate 1.5TB drives were having so many issues was that the firmware thought the drive had such a high CRC failure rate that it shut the drive down to prevent further loss, bricking it. It turns out it was a firmware bug, not a physical issue with the drive.

More FUD. Checks are also present in the operating system. A firmware error on the drive isn't going to cause random undetectable changes to your data. It's going to cause the process to fail in general.

But the overriding point is this. You have your red media and you make your two copies. Now you properly take your one copy and store it, and take your second and use it to edit. Time passes. Days, months, years later you need access to that footage again. How do you know it's valid? CRC checks here won't help you one bit. You could spend days or weeks re-watching all the footage again, but that won't tell you if it is an exact copy. I know that the first footage I transferred, shot on build 8, is valid to this day, because I have proper checksums not reliant on any hardware. By the way, those are the things that are required for completion bonds.
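That long-term check (write checksums once, re-verify years later with no reliance on any particular hardware) can be sketched roughly like this. The manifest format below mimics md5sum output and is an assumption, not any specific tool's log format:

```python
import hashlib
from pathlib import Path

def write_manifest(folder, manifest_path):
    """Record one 'md5  relative/path' line per file, md5sum-style.
    Reads whole files at once: fine for a sketch, a real tool would chunk."""
    lines = []
    for p in sorted(Path(folder).rglob("*")):
        if p.is_file():
            digest = hashlib.md5(p.read_bytes()).hexdigest()
            lines.append(f"{digest}  {p.relative_to(folder)}")
    Path(manifest_path).write_text("\n".join(lines) + "\n")

def verify_manifest(folder, manifest_path):
    """Return the files whose current checksum no longer matches the manifest."""
    bad = []
    for line in Path(manifest_path).read_text().splitlines():
        digest, rel = line.split("  ", 1)
        p = Path(folder) / rel
        if not p.is_file() or hashlib.md5(p.read_bytes()).hexdigest() != digest:
            bad.append(rel)
    return bad
```

Years later, `verify_manifest` answers "are these still exact copies?" without re-watching a frame: an empty result means every file still matches its original digest.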

Sounds amazing. Just not necessary for the vast majority of productions.

Well, perhaps we need to define expensive here. On standard 4K footage, on a 2-year-old MacBook Pro, my processors have 30% utilization during the checksum. Using Activity Monitor I see it's transferring at line speed from the red media, so the processor is sitting waiting for data from the drive. A read pass is a read pass, and there's not much we can do about that besides make faster hardware. It certainly is not waiting on the processor to add up the data.

The issue is likely seek time. I used the term "computationally expensive" loosely to include reading and writing from hard disks. My point is still very valid: the overall cost is very high, too high to complete on most sets with most equipment.

Looking at it from the other side, 36GB is roughly 18 minutes of footage, but the checksums were created in 9 minutes. So the checksums were created at 2x real time. It would take you 18 minutes to verify that footage visually, but only 9 minutes to validate the checksum.

I think a general survey of people using R3D Data Manager and other syncing tools such as AaSync will show that a large number of them are turning checksumming off because they don't have the time.

OK, there's another gotcha that has happened in the past. Let's say you do a Finder copy to 2 destinations. You check destination A and see the footage is valid. Now you must also check the footage on destination B to make sure it is ALSO valid.

I still have to do that regardless of what software I'm using.

However, using simple logic, we know that if we used checksums on both destinations and the checksums came back correct, then we can watch only ONE destination and the other must also be fine. We know that we had 2 successful reads on the source, and a successful write and read on each destination, and all three checksums matched. Therefore, once we check that one destination's data is valid image data, all copies with that same checksum must also be valid image data, down to a certainty of at least 1E-32 or 1E-128 (depending on the exact checksum used).
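The copy-plus-read-back logic being described can be sketched as follows. This is simplified to a single source read and MD5 only; it is an illustration of the idea, not Conrad's implementation:

```python
import hashlib
import shutil

def checksum(path, chunk_size=1 << 20):
    """MD5 of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(source, dest_paths):
    """Copy `source` to each destination file path, then read every
    copy back and compare its checksum against the source's."""
    src_sum = checksum(source)         # read pass on the source
    for dest in dest_paths:
        shutil.copy2(source, dest)     # write pass to the destination
        if checksum(dest) != src_sum:  # read-back pass on the copy
            raise IOError(f"checksum mismatch on {dest}")
    return src_sum
```

Once every destination's read-back digest matches the source's, visually validating any one copy vouches for all of them.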

Just as a side note, no one needs to be using SHA-1 to checksum R3D files. That would truly be overkill.

The gotcha here: I refer to the instance where a client's drive produced 3 incorrect results. If I had only copied via Finder and played back on my RAID, I would either ALSO need to play back on the client's drive or ASSUME it was valid data. So I would need to either spend more time validating image data (watching takes) or I would have made the faulty assumption and lost footage.

I still have to do that regardless of what software I'm using.

First, I don't know the exact setup the original poster had. I do know that he should be hitting around the 20 to 25 minute mark. 30 minutes indicates to me there is something there that can still be optimized: some hardware in the chain somewhere isn't putting out full speed.

Secondly, it just takes planning and commitment to a workflow to get it done. If you are planning on having multiple cameras, I doubt you'd go into it with just a single lens and a pair of sticks. Here too, you would plan for perhaps a couple of download computers and a couple of red media each. I've done whole multi-cam features (3 cameras, 2 units, always running) using my software and workflow, and never had to stay more than 40 minutes past wrap, if I kept on top of it. Sure, there are a lot of roll numbers that way, but I know for a fact that the producer 10 years down the road will be able to pick up the footage again and know that his backups are still exact copies of the footage as shot.

Again, multiple download computers sounds great, but it's not possible on most jobs. This month, I'll be on three different continents with two cameras. I have one computer and a boot drive in bubble wrap. Should I tell my client we aren't safe and that we need to haul around several computers just to shoot Red?

But the fundamental question comes down to the same saying used elsewhere in filmmaking: good, fast, cheap; pick two. In this instance I would say secure, fast, easy; pick two. Your method is not secure, but is fast and easy. My software allows people to pick secure and easy or fast and easy.

The point is that we can't call it "secure" because it's only one piece of a much larger puzzle.

In this case people aren't using it because it takes too long. Using a syncing tool is great, it's only half of the battle.

And again, I am not advocating that this is a simple one-step process. I have always said that proper data management means you have to make 3 evaluations of the data. R3D Data Manager will make 2 of those in an automated and precise way. You still need to be involved in the downloading process to make the third, and to oversee the first two.

I appreciate that.

Again I'm sad that this has turned into a discussion about your software. It should really be just about whether checksums are turned on or not.

I'm entrenched in the workflow I started developing for myself in '07, which involves a lot of specific actions with the cameras and the set, and a script so that I'm not retyping the same commands but am forced to think very clearly about what I'm doing. It's easy for me to add checksums to that workflow and pipe them into a log if I think it's important. Otherwise I'd likely switch to R3D Data Manager. I'm glad you built it. And I'm proud of the fact that we both have a similar story.

The good news is that we are on the cusp of the SSD revolution, which I believe will change all this; checksums will become more practical and ubiquitous. Until then, though, they are just too difficult for most people, and my point is that they aren't necessary if you have experience and take the time to look over your data. You need to do that in any case.

IBloom
 
Well, thanks for the overwhelming response from everyone. Yes, I am the original poster, and after 5 pages of intense discussion I would probably have been forgotten about (also because I was away for the weekend).

Well, if it's not too late, in response to PHOTOCON's analysis of my data transfer "test": I didn't mention that the 30 minutes of test footage recorded was entirely one take. It was conducted by a 1st AC for some other tests, and I'm pretty sure that the transfer times would differ if it wasn't recorded as a single take. In most shoots, 30-minute-long takes don't occur very often (almost never) on sets, but it gave us a good gauge of the data transfer rates.

However, I suppose if it's possible for a DIT to single-handedly manage the data transfer from 2-3 REDs using the data manager WITH checksums, I can probably do it with 1 RED. Especially since I'm quite new to this, I'm willing to take zero risks with any footage, be it mine or a client's.
 
I say the probability that:
A file can transfer without raising an error.
Has the exact same file size.
and plays back just fine.
yet still has unseen, undetectable differences that make it unusable.

is

extremely low. Lower, in fact, than the probability of an unforeseen bug in an automated system going unnoticed.

This is exactly the question I would like to see answered. I have nothing against R3D Data Manager as a tool for being really sure the data is okay, or as a help on bonded productions. If you have the time and money, then why not?

But scaring people with theoretical failures, saying that you might ruin up to millions of dollars, and pushing them into a time- and resource-consuming workflow is something else, especially if at the same time you are selling a product.

Checksum failure: sure, but does that mean the file was corrupt? Unplayable? Or does it mean that in frame 245, in the left corner, a pixel is not exactly the color it should be?
 
But scaring people with theoretical failures, saying that you might ruin up to millions of dollars, and pushing them into a time- and resource-consuming workflow is something else, especially if at the same time you are selling a product.

I don't think that Conrad is scaring people for his own benefit. Conrad's tool relies on the ability to create checksums. It's his selling point and naturally he must defend his philosophy. I find the discussion very interesting and profound.

I reckon there are two schools:

1. The IT guys, trying to address problems with software.

2. The old school film guys, trying to address problems with "analog" methods such as eyeballing each shot.

I think both worlds have their strengths. I'm in the 2nd category. I bought R3D-Manager to learn and to streamline my workflow. To be honest, I switch off "Checksum". It does take too long, currently. But I do use the ability to copy data conveniently to two different locations.

Hans
 
I use the checksum feature because it's a fail-safe and a paper trail. Besides catching copy errors, it's a way of ensuring, bit for bit, that what was shot is on the transfer drives. That way it's not my ass. If there's a problem, it's not in the transfer; it's somewhere else in the chain.

I also use Clipfinder to eyeball the shots, to catch errors in shooting (camera settings, audio, framerates, etc.) quickly before they cost money.

Yes, it's slow(er) than doing it without any security. The transfer time just needs to be factored in to the schedule. I sleep better at night knowing my machine is automatically doing the work and it'll let me know if there's a problem.
 
I think Conrad's program is fantastic.
Working on commercials and corporate jobs, the time required to checksum is not really a factor, even using my 2-year-old MacBook Pro. The few times that we've gotten squeezed at the end of the day time-wise have meant that we do a drag and drop on the last mag, and don't use that mag again until the editor and producer have both reviewed the footage and agreed that it's OK. Time management and reloading strategy are important here, as discussed in other threads.
I'll also have my DIT copy the last mag to my own personal drive using R3D Manager at the start of the next job, before it's erased. (We'll often make a third copy to my drive anyway, time permitting, that gets erased a week or so later.)
Yes, Ian, the chances that something is corrupt AND the same size AND scrubbable are insanely low. But if it's not a huge black hole of time (and it's not for me), why not make it as bulletproof as possible using a checksum?
Everyone uses a slightly different methodology here. And that's OK.
But I am a big fan of r3d Manager, for sure.
Cheers,
Harry
 
I solved my problem - kind of.

We copied the corrupt file on the Nexto to another folder on the same Nexto and from there were able to copy the R3D to the desired location.

The shot consists of 3 R3Ds; the middle one had the bad bits in it. We can render it to RGB with a short green sequence somewhere in the middle.

I'm happy that I can use most of this particular shot.

Lesson: a checksum would have caught the difference, but simple scrubbing did not show that the file was corrupt, because it was fully playable. It was not possible to copy the faulty R3D anywhere but to the Nexto itself, nor to render it to RGB.

Does anyone have a meaningful explanation for this?

Hans
 
But scaring people with theoretical failures, saying that you might ruin up to millions of dollars, and pushing them into a time- and resource-consuming workflow is something else, especially if at the same time you are selling a product.

I would not consider it to be scaring people, but giving them the proper information so that they can make the decision on the amount of risk.

As a provider of equipment and services, it is your job to inform the client of the possible risks...theoretical and otherwise. Then it is their decision to take on the risk.

John DeBoer
Director of HD Sales
SIM VIDEO INTERNATIONAL
 
My methodology is a bit different. I use both methods, here's why:

Since the highest risk for the footage is when the only copy is on the RED Drive (RAID 0 = scary!), I like to get that footage copied off as quickly as possible. So the first step is to drag-copy the footage to a "non-check-summed" footage drive. At that point, I have a quick backup in case the RED Drive fails during the longer check-summed copy to different drives via R3D Manager.
 
This discussion reminds me of the darkroom routine we were drilled in as camera assistants. The one step I soon dropped from my routine was removing the tape from around the mag while in the dark. To justify eliminating that step and make a positive decision that I should remove the tape with the door or bag open, I had to take full responsibility if I flashed a mag at any point in the process, and accept that my career was at stake.

This was because the integrity of the system as a whole was tested as a whole. The tried and true method is tried and true because anyone can attest to the sound practices of handling exposed footage due to the traditions of camera assisting.

Anyone can skip a step and be just as successful as a camera assistant, but we are not speaking of individual competence. We are speaking of the quality assurance of trade practice. This is what earns us our wages. Producers, insurers, directors, and the time every freelancer puts into a day's shooting deserve sincere consideration when handling the fruit of their efforts.

I am certain that a qualified software engineer can safely transfer data at zero failure rate, but the concern here is that the majority of data managers are not, and will not be as proficient as someone with those credentials.

The two views championed by Ian and Conrad are both equally valid. We must keep in mind that their backgrounds justify their practices.
 
I solved my problem - kind of.

We copied the corrupt file on the Nexto to another folder on the same Nexto and from there were able to copy the R3D to the desired location.

The shot consists of 3 R3Ds; the middle one had the bad bits in it. We can render it to RGB with a short green sequence somewhere in the middle.

I'm happy that I can use most of this particular shot.

Lesson: a checksum would have caught the difference, but simple scrubbing did not show that the file was corrupt, because it was fully playable. It was not possible to copy the faulty R3D anywhere but to the Nexto itself, nor to render it to RGB.

Does anyone have a meaningful explanation for this?

Hans

Wait a minute. Didn't you figure that out by looking at the footage? And doesn't that sound like something present in the original footage, not an artifact of copying but an artifact of recording? Since you don't have the original footage, how can you tell?

Here's a case in point: I was reviewing all of my footage yesterday and found a clip from a time when the DP went off on his own and shot out of the back of a minivan on the 405 in LA at night. Beautiful shot, but he didn't realize the road was rough enough that his Red drive was dropping frames even with the ET mount. The dropped frames are present in the original, but if I didn't watch every take I wouldn't have been able to let him know that we need to reshoot that shot and handhold the Red drive or shoot on CF.

I dropped this take in SuperSFV to run a checksum on it. It's exactly the same on all three drives.

IBloom
 
Wait a minute. Didn't you figure that out by looking at the footage? And doesn't that sound like something present in the original footage, not an artifact of copying but an artifact of recording? Since you don't have the original footage, how can you tell?

It CAN be in the original footage that I don't have anymore. But the chance that it is in the original footage is pretty low, because it was a smooth pan shot on CF. Nonetheless, you are right. My assumptions are biased and wouldn't hold up in court.



Here's a case in point: I was reviewing all of my footage yesterday and found a clip from a time when the DP went off on his own and shot out of the back of a minivan on the 405 in LA at night. Beautiful shot, but he didn't realize the road was rough enough that his Red drive was dropping frames even with the ET mount. The dropped frames are present in the original, but if I didn't watch every take I wouldn't have been able to let him know that we need to reshoot that shot and handhold the Red drive or shoot on CF.

I dropped this take in SuperSFV to run a checksum on it. It's exactly the same on all three drives.

IBloom

Good example. Therefore the rules:

1. Mission-critical shots must be checked by playing them back in the camera (kind of a gate check).

2. Shoot on solid-state media whenever possible.

3. Camera media must be backed up as quickly as possible (we use Nextos on set).

4. Once backed up, shots must be reviewed by sight, one by one.

5. Camera media can only be erased when the footage is backed up in 2 different locations and verified (here R3DManager comes into play).

6. A tape backup must be made and brought to a physically different location/house every other fortnight (we use LTO and BRU, formerly Retrospect).

We only broke the rules once, and I got bitten immediately. Haven't lost footage in over a year now.

Hans
 