Windows vs OSX Redcine Performance

Richard Lackey

Hi All,

I am trying to determine whether there are performance differences between Redcine's DPX output from Redcode on Windows and OSX. On an 8-core Xeon system running Windows I am getting 1 hr of processing per 1 min of Redcode RAW to output 1080p DPX. Can anyone confirm whether this is much faster on an 8-core Mac Pro?

Thanks,

Rich
 
That's almost unusable. What's the rest of the hardware like?
 
8 cores?

On an 8-core Xeon system running Windows I am getting 1 hr of processing per 1 min of Redcode RAW to output 1080p DPX

Does it support 8 cores at all?

Let's see: 1 min = 1440 frames (at 24fps).

1 hr = 3600 seconds.

3600/1440 = 2.5 seconds per frame. That sounds good to me for a 4K de-Bayer + CC to uncompressed DPX files; you must have very fast hard disks. The time spent writing to disk may be more than the CPU time, so a solid-state RAID may help more than more CPUs. On a slower single-core system it might be 2.5 minutes per frame?
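
Just to make that arithmetic explicit, here is a minimal Python sketch using only the figures above (24fps footage, 1 hr of processing per 1 min of source):

    # Sanity-check the transcode arithmetic (assumes 24fps and the 1 hr/min figure above).
    fps = 24
    frames = 60 * fps                   # 1 min of footage = 1440 frames
    processing_seconds = 60 * 60        # 1 hr of processing
    print(frames)                       # 1440
    print(processing_seconds / frames)  # 2.5 seconds per frame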
 
That's almost unusable. What's the rest of the hardware like?

How many seconds per frame are you able to sustain for 1 TB of DPX output files? Hard disks tend to slow down if you have many files in one folder.
 
Sorry,

2 x E5345 quad-core Xeon CPUs (2.3GHz)
Intel S5000XVN SATA motherboard
8GB RAM
Quadro FX3500

compared to a dual quad-core Mac Pro

A colleague has told me that DPX transcodes out of Redcine on a Mac are substantially faster than on a PC of the same spec. I am trying to find out if this is true.

Any input or experience would be appreciated. Even just your timing for a 1 min transcode of 4K Redcode RAW to 1080p DPX out of Redcine on your 8-core Mac Pro, so I can compare.
 
How many seconds per frame are you able to sustain for 1 TB of DPX output files, HD tend to slow down if you have many files in one folder.

The drives aren't a bottleneck. It's an 8 x 500GB SATA RAID5 on a 3ware 8-port controller card, and I'm getting plenty of bandwidth off the RAID. I'm just trying to find out if the same transcode on an 8-core Mac Pro is substantially faster.

It's more of an OS question: whether Redcine on OSX behaves any differently to Redcine on Windows on comparable 8-core systems.
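
A back-of-the-envelope check supports the drives not being the bottleneck (a rough Python sketch, assuming the roughly 8MB-per-frame 1080p DPX size mentioned later in the thread):

    # Required sustained write bandwidth at the measured transcode rate.
    frame_mb = 8.0        # ~8MB per 1080p DPX frame (figure quoted later in this thread)
    secs_per_frame = 2.5  # the rate worked out above
    print(frame_mb / secs_per_frame)  # ~3.2 MB/s needed - nowhere near RAID limits
    print(frame_mb * 24)              # even realtime 24fps output would only need ~192 MB/s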
 
Sorry,

2 x E5345 quad-core Xeon CPUs (2.3GHz)
Intel S5000XVN SATA motherboard
8GB RAM
Quadro FX3500

compared to a dual quad-core Mac Pro

A colleague has told me that DPX transcodes out of Redcine on a Mac are substantially faster than on a PC of the same spec. I am trying to find out if this is true.

Any input or experience would be appreciated. Even just your timing for a 1 min transcode of 4K Redcode RAW to 1080p DPX out of Redcine on your 8-core Mac Pro, so I can compare.

Running OSX & Win on the same PCs, we don't see noticeable differences, no matter if it's Win on an Apple or OSX on a PC.
Speed is influenced by:
disk bandwidth (you want to have a -fast- array)
graphics card (upper class highly recommended)
cores (we don't see RC scaling much above 4 CPUs, however)
RAM (lots)

It also helps to have a read drive (Redcode) and a separate target drive to reduce disk seeks, plus a buffered RAID.
Multiple graphics card arrays as in SLI or Crossfire aren't supported yet; RC doesn't process on multiple cards yet.

Best speed and price/performance right now is obtained by having a group of 2.6GHz quad-cores / GF8800 / 4GB (<500€ here) munching Redcode.
 
Running OSX & Win on the same PCs, we don't see noticeable differences, no matter if it's Win on an Apple or OSX on a PC.
Speed is influenced by:
disk bandwidth (you want to have a -fast- array)
graphics card (upper class highly recommended)
cores (we don't see RC scaling much above 4 CPUs, however)
RAM (lots)

It also helps to have a read drive (Redcode) and a separate target drive to reduce disk seeks, plus a buffered RAID.
Multiple graphics card arrays as in SLI or Crossfire aren't supported yet; RC doesn't process on multiple cards yet.

Best speed and price/performance is obtained by having a group of ~500€ 2.6GHz quad-cores munching Redcode.

Thanks, this is exactly what I thought.
 
Great idea!

I think what would be really useful is if someone could post a 1 min clip, and everyone could then use this clip to get a "benchmark" for their system.

That is a great idea!

I would expect an 8-core CPU to be three times faster than a 1-core CPU because of the memory bottleneck.

I would expect the 8-drive RAID to be 4 times faster than a single HD, so the time you get of 2.5 seconds per frame seems very good to me.

Just try copying 1 min of 4K DPX frame files to a USB HD for backup and see how many hours that takes! The copy test alone can tell you a lot about how your system will work with DPX files: try copying 1 min of 4K DPX from one drive set to another. You cannot process the files faster than you can copy them, well, sort of...
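
If you want to time that copy test without a stopwatch, a rough Python sketch like this works (the src/dst paths are placeholders; point them at a folder of DPX frames and a destination on another drive set):

    # Time a straight copy of a DPX frame folder to estimate raw disk throughput.
    import os, shutil, time

    src = r"D:\dpx_test"  # folder of DPX frames (placeholder path)
    dst = r"E:\dpx_copy"  # destination on another drive set (placeholder path)

    start = time.time()
    shutil.copytree(src, dst)  # dst must not already exist
    elapsed = time.time() - start

    total = sum(os.path.getsize(os.path.join(dst, f)) for f in os.listdir(dst))
    print("%.1f s, %.1f MB/s" % (elapsed, total / elapsed / 1e6))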
 
Well, in this case I am only dealing with 1080p or 2K DPX files. The 1080p files are just over 8MB per frame. I also thought 2.5 secs/frame was good. I've been accused of losing a Red job for someone because I told the post house to expect 1 hr of transcoding time for every minute of 4K Redcode RAW. I have also been told that it takes 24 mins to render out a 1 min 19 sec clip on a MacBook Pro, and even faster on an 8-core Mac Pro, because Redcode is optimized for OSX, not Windows.

I want to find out if I am at fault for telling the post house the transcode would take 1 hr per min of Redcode RAW. It seems from your responses that the 24 min result for 1 min 19 secs must be bogus for some reason (settings? I can't think what) and that I was actually telling the post house the truth.

Can anyone else shed some light on this? I don't use a Mac, I only have an 8-core PC, and I need to make sure that I am quoting people the facts. If I am right, I don't want miraculously faster figures being thrown around that will confuse post facilities not used to dealing with Red footage.

At the same time, if I am wrong, I need to know I am wrong, and not go telling post houses that the transcode will take 1 hr/min.
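
To put numbers on the discrepancy, here is a quick Python sketch (same 24fps assumption as earlier in the thread):

    # Compare the two claimed rates in seconds per frame (24fps assumed).
    mine = (60 * 60) / (60 * 24)    # 1 hr per 1 min of footage -> 2.5 s/frame
    theirs = (24 * 60) / (79 * 24)  # 24 min for a 1 min 19 sec (79 s) clip -> ~0.76 s/frame
    print(mine, theirs, mine / theirs)  # the MacBook Pro claim is ~3.3x faster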
 
Does it support 8 cores at all?

Let's see: 1 min = 1440 frames (at 24fps).

1 hr = 3600 seconds.

3600/1440 = 2.5 seconds per frame. That sounds good to me for a 4K de-Bayer + CC to uncompressed DPX files; you must have very fast hard disks. The time spent writing to disk may be more than the CPU time, so a solid-state RAID may help more than more CPUs. On a slower single-core system it might be 2.5 minutes per frame?

OK, here might be a discrepancy: you are thinking I am outputting 4K DPX, but I am going out to 1080p DPX. Does this make any difference?
 
On an 8-core Xeon system running Windows I am getting 1 hr of processing per 1 min of Redcode RAW to output 1080p DPX

I've a dual-core AMD 64 X2 5200 on Windows XP and it takes 1 hour per 1 min to convert 4K RAW to 1080p QT MPEG4. Same speed as you but different hardware. Maybe it's the other settings that matter.

We should all get the same 4K RAW clip and convert it to the same type of output using the same settings.
 
The best rendered results I have seen from REDCINE (or SCRATCH, for that matter) are on the overclocked dual quad-core Windows machines from BOXX & Globalstor.

These far outstrip the Mac dual-quads or "normal" XP dual-quads.

Rendering Redcode is all about raw CPU and FSB. When the Penryn systems are officially released, that will be an *amazing* platform for R3D.

Lucas
------
ASSIMILATE, inc.
LA, CA, USA
 
I've a dual-core AMD 64 X2 5200 on Windows XP and it takes 1 hour per 1 min to convert 4K RAW to 1080p QT MPEG4. Same speed as you but different hardware. Maybe it's the other settings that matter.

We should all get the same 4K RAW clip and convert it to the same type of output using the same settings.

I'm opening Redcine, the project is set to HD 1080, my framing is set to "fit width" for all shots, output is set to DPX, process full, show 1/2 res, high (I've been getting the best results with 1/2 res, high when outputting anything at 1080/2K).

I've also got a dual-core AMD 64 5600, but have never tried a transcode on it. I'll try it with the same settings and see how long it takes.

Can anyone actually load up a clip in Redcine for me on an 8-core Mac Pro with the same settings as above and see how long it takes?

Lucas, I've got real-time native .r3d playback in Scratch with a primary CC, as long as the project and construct are 1920x1080 and the framing in process is set to fit width, in 1/2 res, high. Am I kidding myself somehow that this is real-time 1080p playback?

I tried to install a questionably hacked copy of OSX on the 8-core PC to get comparative results but am having trouble convincing it to install. It works on some hardware configurations, but I guess not this one. I know this is illegal, I just want a comparative time.

Can anyone further confirm whether the OS on the same platform is going to make any difference at all to the transcode time?

If it's purely down to CPU cycles, memory bandwidth, hard disk bandwidth etc., as I thought, and as many here have said, then the OS shouldn't make much difference?
 
Yes, you've got realtime playback on SCRATCH. : )

But rendering is not realtime. There is a good reason for that.

For video output, the R3D files go into the GPU, and essentially never leave. They are processed entirely in the GPU, and then stay in NVidia-land when the data goes from the main board to the SDI daughtercard.

To render, a file has to be loaded in the CPU, and then also converted into another format. If Quicktime is that format, then you have to depend on the Quicktime libraries, which are *not* optimized for any kind of serious efficiency. And the CPU is also acting as the traffic cop moving bits around in disk reads (from R3D) and disk writes (to Quicktime, etc.).

As far as rendering goes - Quicktime is an awful way to do any kind of benchmarking, because of the instabilities and variances in the Quicktime libraries. Render to DPX or 16-bit TIFF for a much clearer benchmark of rendering capability.

Best,

Lucas
-----
ASSIMILATE, inc.
LA, CA, USA
 
1080p vs. 4K DPX vs. 4K TIFF

OK, here might be a discrepancy: you are thinking I am outputting 4K DPX, but I am going out to 1080p DPX. Does this make any difference?

Things are clearer now: 2.5 seconds per frame for 1080p may turn out to be 5 to 7 seconds for making 4K DPX, or 6 to 8 seconds for 4K uncompressed TIFF.

Maybe someone who has enough free disk space can do that benchmark; it might work out to something like this:

1 min 4K Redcode to 1080p = 1 hr

1 min 4K Redcode to 4K DPX = about 2.5 hrs?

1 min 4K Redcode to 4K TIFF = about 3 hrs?
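
For what it's worth, those estimates line up with the per-frame figures above; a quick Python check:

    # What those estimated times work out to per frame (1 min = 1440 frames at 24fps).
    frames = 60 * 24
    for label, hours in (("1080p", 1.0), ("4K DPX", 2.5), ("4K TIFF", 3.0)):
        print(label, hours * 3600 / frames, "s/frame")
    # 1080p 2.5, 4K DPX 6.25, 4K TIFF 7.5 - in line with the 5-7 s and 6-8 s ranges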
 
Things are clearer now: 2.5 seconds per frame for 1080p may turn out to be 5 to 7 seconds for making 4K DPX, or 6 to 8 seconds for 4K uncompressed TIFF.

Maybe someone who has enough free disk space can do that benchmark; it might work out to something like this:

1 min 4K Redcode to 1080p = 1 hr

1 min 4K Redcode to 4K DPX = about 2.5 hrs?

1 min 4K Redcode to 4K TIFF = about 3 hrs?

I will do that benchmark in a few hours and post the results. Is anyone willing to do it on an 8-core Mac Pro for me?

At the moment it looks to me like the consensus is that there is no real difference in the speed of Redcine output between OSX and Windows. I just want to see the numbers.
 