

Dragon Update

Status
Not open for further replies.
Tech question:
Is the Red Rocket-X going to work via Thunderbolt? And if so, which enclosure will work well with it?
I'm using a Sonnet Thunderbolt enclosure with the original Rocket and it works like a charm.

The tech specs page says it is "Thunderbolt Compliant", but it also wants an x16 PCIe hookup. Thunderbolt at the moment limits you to x4 speed, though the new Thunderbolt 2 spec should solve that problem. It probably functions like the current Rocket (a warning in REDCINE-X, but you can still use the card).
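A rough back-of-envelope, assuming published link rates (PCIe 2.0 at 5 GT/s per lane with 8b/10b encoding, Thunderbolt 1 at 10 Gbit/s per channel), shows why the x16 requirement matters:

```python
# Why the x16 requirement matters over Thunderbolt 1 -- rough numbers only.
# PCIe 2.0 runs 5 GT/s per lane; 8b/10b encoding leaves 4 Gbit/s usable.
PCIE2_GBPS_PER_LANE = 5 * 8 / 10

card_wants = 16 * PCIE2_GBPS_PER_LANE  # x16 slot: 64 Gbit/s usable bandwidth
tb1_gives = 10                         # one Thunderbolt 1 channel: 10 Gbit/s
shortfall = card_wants / tb1_gives     # the card asks for ~6.4x what TB1 carries

print(card_wants, tb1_gives, round(shortfall, 1))
```

So even before controller overhead, first-gen Thunderbolt offers well under a fifth of what an x16 card can ingest.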

EDIT: Removed wrong information.
 
Red Rocket X should work with Thunderbolt 1. They were showing one, if I'm not mistaken, in a Thunderbolt enclosure at NAB. (Correct me if I'm wrong!)

You're right; I don't know what I was thinking. The newer Sonnets and the Magma both have internal power options, so my fear about the 6 pin was unfounded. I felt a little dense when I opened my enclosure and the connection was right in front of me.
 
Red Rocket X should work with Thunderbolt 1. They were showing one, if I'm not mistaken, in a Thunderbolt enclosure at NAB. (Correct me if I'm wrong!)

As long as the enclosure is long enough it will work; remember, the Rocket-X is a full-length card.

But it's going to be slow. Thunderbolt 1 cripples the existing Rocket... and the Rocket-X is many times faster. Thunderbolt 2 will be faster, but not as fast as the card.

Let's hope Thunderbolt 3 is just around the corner :)
 
As long as the enclosure is long enough it will work; remember, the Rocket-X is a full-length card.

But it's going to be slow. Thunderbolt 1 cripples the existing Rocket... and the Rocket-X is many times faster. Thunderbolt 2 will be faster, but not as fast as the card.

Let's hope Thunderbolt 3 is just around the corner :)

Let's hope Intel is paying attention here. Is it just me, or does the 2x spec of the 2nd-generation TB seem like a very controlled trickle release by Intel to maximize milking the client for gradual ROI?

Look at Ethernet - every consecutive generation was (and still is) 10x faster than the previous one. And fully backward compatible...

2nd-gen TB should have been at least 5x faster, if not 10x, in order to justify its release. I'm pretty sure there is already a roadmap to do TB3 (2x TB2) and TB4 (2x TB3). I say cut the crap, Intel, and go straight to TB5 (2x TB4, or in other words = 16x TB1)!!!

:sifone: Peter
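The complaint above is easy to put in numbers. Note that the future Thunderbolt figures below follow the poster's doubling-per-generation extrapolation, not any announced Intel specs, set against Ethernet's historical 10x jumps:

```python
# Thunderbolt doubling per generation vs Ethernet's 10x-per-generation jumps.
# TB1 is 10 Gbit/s per channel; later TB generations here are the poster's
# 2x-per-generation extrapolation, not announced Intel specs.
tb_gbps = [10 * 2 ** n for n in range(5)]    # TB1..TB5: 10, 20, 40, 80, 160
eth_mbps = [10 * 10 ** n for n in range(4)]  # Ethernet: 10, 100, 1000, 10000 Mbit/s

# "TB5 = 16x TB1" checks out under the doubling scheme:
assert tb_gbps[4] == 16 * tb_gbps[0]
print(tb_gbps, eth_mbps)
```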
 
Thunderbolt 1 cripples the existing Rocket... and the Rocket-X is many times faster. Thunderbolt 2 will be faster, but not as fast as the card.

Let's hope Thunderbolt 3 is just around the corner :)
I was just thinking about this with all the new Mac Pro rumors. It sounds like Apple may go with the "no slots at all" concept - all expansion via TB - including GPU expansion, which I have heard Apple was trying to enable over TB. So... for max transcoding power on the Rocket-X, it really might make the most sense, with respect to $$ and performance, to roll a custom QUO computer for Rocket-X transcoding - http://quocomputer.com/
 
As long as the enclosure is long enough it will work; remember, the Rocket-X is a full-length card.

But it's going to be slow. Thunderbolt 1 cripples the existing Rocket... and the Rocket-X is many times faster. Thunderbolt 2 will be faster, but not as fast as the card.

Let's hope Thunderbolt 3 is just around the corner :)

If the existing Rocket is crippled with Tbolt 1, will we see any benefit running the new Rocket over Tbolt 1?
 
Let's hope Intel is paying attention here. Is it just me, or does the 2x spec of the 2nd-generation TB seem like a very controlled trickle release by Intel to maximize milking the client for gradual ROI?

Look at Ethernet - every consecutive generation was (and still is) 10x faster than the previous one. And fully backward compatible...

2nd-gen TB should have been at least 5x faster, if not 10x, in order to justify its release. I'm pretty sure there is already a roadmap to do TB3 (2x TB2) and TB4 (2x TB3). I say cut the crap, Intel, and go straight to TB5 (2x TB4, or in other words = 16x TB1)!!!

:sifone: Peter
They're working on a fiber optic version that's supposed to reach 100Gbps in its first incarnation. That was originally laid out on the Thunderbolt roadmap as the first major revision to the standard - the "version two" they just came out with isn't really an overall speed boost so much as a consolidation of the disparate upstream and downstream channels from the original version:
http://appleinsider.com/articles/13...-official-with-20gbps-speeds-late-2013-launch
 
If the existing Rocket is crippled with Tbolt 1, will we see any benefit running the new Rocket over Tbolt 1?

Anything that involves data going to the Rocket, but not dependent on feeding it back through the TB port will benefit. So the new Rocket-X should still offer silky smooth 6K playback to an attached monitor, for example.

If some of the latest rumors are true and Apple is going with a slot-less design for the new Mac Pro, they're fucking up. They have to know this, but at times I think they're blinded by their own innovation. They also have a track record of jumping to the next big thing before it's truly ready.

The QUO looks like an OK solution for a compact and simple desktop that can run any OS. I wish they would make an equivalent motherboard product with some power on it. QUO is nice, but it's obviously an SFF motherboard adhering to current form-factor guidelines. I'd like to see someone innovate on that front -- PC design. I'll see what Apple does, but I'm already frustrated as hell with the state of the PC market right now. ...Almost tempted to spend a fortune and do something about it. No one out there right now builds a desktop computer or workstation that I truly want to buy. HP and Dell are both in a position to do something awesome, but they're not doing it. The motherboard manufacturers out there rarely stray from Intel or AMD spec, and we haven't seen anything new, outside of expected tech/speed increases, for the past 15 years. I'm sick of buying or assembling a rectangular box with a ton of wasted space, inefficient power supplies, poorly designed slot layouts, bloated with legacy components and connectors, etc.

They're working on a fiber optic version that's supposed to reach 100Gbps in its first incarnation. That was originally laid out on the Thunderbolt roadmap as the first major revision to the standard - the "version two" they just came out with isn't really an overall speed boost so much as a consolidation of the disparate upstream and downstream channels from the original version:
http://appleinsider.com/articles/13...-official-with-20gbps-speeds-late-2013-launch

Even at 100Gbps, it's only about 20% faster than a single PCIe 2.0 x16 link, with about 4 to 8 times the latency. Anyway, that's at some point in the future... like Thunderbolt 3 or 4...

And yes, v2 is more an expansion of the existing capabilities. The Falcon Ridge TB controllers will move to offering the originally promised 4 channels per port, meaning 40Gbps max per TB port. Additionally, V2 ports and peripherals can now use two channels simultaneously for a bonded 20Gbps connection. When two TB channels are "bonded" as a V2 20Gbps link, they are not downward compatible with V1 peripherals. You can still have V1 peripherals chained on the same TB port as V2, but they will have to operate off of one of the unbonded 10Gbps channels. And we'll have to buy new cables to use all this, in addition to new devices that can take advantage. We should start seeing a larger number of TB devices hitting the market with the intro of v2.0.
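The channel arithmetic in that post, sketched out (these are the figures as stated above, not independently verified specs):

```python
# Falcon Ridge channel math as described above: four 10 Gbit/s channels
# per port, with v2 able to bond a pair into one 20 Gbit/s link while
# unbonded channels still serve chained v1 devices at the old rate.
CHANNEL_GBPS = 10

port_max = 4 * CHANNEL_GBPS        # 40 Gbit/s total per TB port
bonded_v2_link = 2 * CHANNEL_GBPS  # one bonded v2 connection: 20 Gbit/s
v1_fallback = 1 * CHANNEL_GBPS     # a v1 peripheral on an unbonded channel

print(port_max, bonded_v2_link, v1_fallback)
```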
 
Anything that involves data going to the Rocket, but not dependent on feeding it back through the TB port will benefit. So the new Rocket-X should still offer silky smooth 6K playback to an attached monitor, for example.

In my situation, I am looking to use a Rocket in an external TB enclosure exclusively... I am not getting a Mac Pro unless something (really) magical comes out next week (which I doubt)...
The only reason I want a Rocket is to work with / be able to watch full 5K (6K) in REDCINE-X, and do the same in FCPX on my Retina (or future portables). I am not really concerned about final render times. Do I even need the new Rocket-X? Since I am looking at the Dragon upgrade as well, a used Rocket is really "only" about 1500 less than the new Rocket-X, but if I can't even use the full speed of the old card over TB, what is the point (other than being more future-proof)?
 
As long as the enclosure is long enough it will work; remember, the Rocket-X is a full-length card.

But it's going to be slow. Thunderbolt 1 cripples the existing Rocket... and the Rocket-X is many times faster. Thunderbolt 2 will be faster, but not as fast as the card.

Let's hope Thunderbolt 3 is just around the corner :)

Sheeeesh... Never-ending roadblocks.

As I keep saying, we are all about 3 or 4 computers away from RED ROCKET-free bliss.

Oh well, at least we have good cameras.
 
Sheeeesh... Never-ending roadblocks.

As I keep saying, we are all about 3 or 4 computers away from RED ROCKET-free bliss.

Oh well, at least we have good cameras.

I was pretty sure of that until I realized CPU speeds haven't increased since late 2011. The 3930K/3970X is still the fastest consumer chip available without spending $4,000 on a Xeon. I'm growing concerned that battery life and performance-per-watt considerations are all that drive consumer and server CPUs now. I'm more and more worried that in 2015 we'll have the same speed still, but fanless, in everyone's workstations.

News is the 4930/4970 is only going to boost speeds by 10%. 10% in 2 years... yay.
 
The fact is that Intel needs some real competition. AMD just isn't doing well enough to make Intel really push what it's capable of. They are able to sit back on their heels and keep doing these mid-range updates on consumer CPUs. The next release should be double the capabilities of the previous high-end range. But it's just not going to happen until Intel is put into a position where it really has to.
 
I was pretty sure of that until I realized CPU speeds haven't increased since late 2011. The 3930K/3970X is still the fastest consumer chip available without spending $4,000 on a Xeon. I'm growing concerned that battery life and performance-per-watt considerations are all that drive consumer and server CPUs now. I'm more and more worried that in 2015 we'll have the same speed still, but fanless, in everyone's workstations.

News is the 4930/4970 is only going to boost speeds by 10%. 10% in 2 years... yay.
I think things are going more multi-core rather than raw speed. For example, my new Linux PC has two 8-core CPUs, giving 32 parallel processes (with Hyper-Threading). Also, the motherboard chipsets that do the parallelization on my PC, I have noticed, smoothly allow 32 processes to talk to the ~2K GPU cores without any hiccups. Since I don't decode to progressive formats, I see almost all my processes being used on moving from R3D to DPX or EXR. So I think the new Linux supercomputers put us in a great spot to take advantage of the new 6K Dragon. Another thing I have noticed is how much disk I can order standard with a PC. I can now order 32 terabytes per PC, which allows a lot of parallel disk I/O... and most of the higher-end packages just take advantage of this automatically. With the 6K R3D optimizations, I'm thinking disk space and speed are the least of my worries. Finally, the new 10-gig Ethernet is just killer; 10G isn't like 1G, in that 10G is more of a switched ring... you can have 20-gigabit throughput in a 3-PC ring, and it all works just by plugging in the Ethernet cables (if you have two ports per PC). I think the main prep a lot of us have to do for 6K Dragon is less about 6K, and more about getting quad-HD VFX and compositing down to a more efficient timeline.
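That "fan every frame out across all the cores" workflow is easy to sketch. Everything here is illustrative: `transcode_frame` is a hypothetical stand-in for the real R3D decode/DPX write step, not an actual RED API.

```python
# Sketch: spread a per-frame R3D -> DPX transcode across all cores, the way
# the dual 8-core (32-thread) Linux box described above chews through clips.
# transcode_frame is a hypothetical placeholder for the real decode step.
from concurrent.futures import ProcessPoolExecutor
import os

def transcode_frame(frame_index):
    # placeholder for: decode one R3D frame, debayer, write a DPX file
    return f"frame_{frame_index:06d}.dpx"

def transcode_clip(n_frames, workers=None):
    workers = workers or os.cpu_count()  # e.g. 32 on the machine above
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(transcode_frame, range(n_frames)))
```

Per-frame work like this is embarrassingly parallel, which is why core count matters more than clock speed for transcode throughput.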
 
I was pretty sure of that until I realized CPU speeds haven't increased since late 2011. The 3930K/3970X is still the fastest consumer chip available without spending $4,000 on a Xeon. I'm growing concerned that battery life and performance-per-watt considerations are all that drive consumer and server CPUs now. I'm more and more worried that in 2015 we'll have the same speed still, but fanless, in everyone's workstations.

News is the 4930/4970 is only going to boost speeds by 10%. 10% in 2 years... yay.

I think things are going more multi-core rather than raw speed. For example, my new Linux PC has two 8-core CPUs, giving 32 parallel processes (with Hyper-Threading). Also, the motherboard chipsets that do the parallelization on my PC, I have noticed, smoothly allow 32 processes to talk to the ~2K GPU cores without any hiccups. Since I don't decode to progressive formats, I see almost all my processes being used on moving from R3D to DPX or EXR. So I think the new Linux supercomputers put us in a great spot to take advantage of the new 6K Dragon. Another thing I have noticed is how much disk I can order standard with a PC. I can now order 32 terabytes per PC, which allows a lot of parallel disk I/O... and most of the higher-end packages just take advantage of this automatically. With the 6K R3D optimizations, I'm thinking disk space and speed are the least of my worries. Finally, the new 10-gig Ethernet is just killer; 10G isn't like 1G, in that 10G is more of a switched ring... you can have 20-gigabit throughput in a 3-PC ring, and it all works just by plugging in the Ethernet cables (if you have two ports per PC). I think the main prep a lot of us have to do for 6K Dragon is less about 6K, and more about getting quad-HD VFX and compositing down to a more efficient timeline.

You are betting on the wrong horse. Don't look at the CPUs. With the latest RCXP comments about it better utilizing our GPUs, it is the GPU that will define the need for the RR in the future. But then again, I think of the RR as a GPU. And I like to have a dedicated GPU for my grading app. So ultimately this is great news for compact mobile platforms. On the workstation end, it will eventually be up to you whether you add another GPU for R3Ds, or stick with the purpose-coded RR...

:sifone: Peter
 
You are betting on the wrong horse. Don't look at the CPUs. With the latest RCXP comments about it better utilizing our GPUs, it is the GPU that will define the need for the RR in the future. But then again, I think of the RR as a GPU. And I like to have a dedicated GPU for my grading app. So ultimately this is great news for compact mobile platforms. On the workstation end, it will eventually be up to you whether you add another GPU for R3Ds, or stick with the purpose-coded RR...

:sifone: Peter

GPUs are still absurdly limiting, though. Even the highest-end GPUs only have 6GB of RAM, and their core-level caches are just too small for a lot of tasks. Many-core is of course the future, but GPUs aren't going to be multi-purpose enough in the near term, nor does Intel have any plans on the immediate horizon for an affordable 8-, 10-, or 12-core CPU. Sure, you can put together a 20-core machine for about $8,000, but that's not exactly economical.

Meanwhile, SSDs are improving at an incredible pace right now. It seems like the immediate future should be a hybrid solution where REDCINE decompresses R3Ds using your CPU (or a render farm) to 2x ZIP-style compression, then uses GPU decompression. For archival and capture you can use 4:1, 6:1, whatever, and then once it hits the RAID, start a background process of decompressing.
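A back-of-envelope for that hybrid workflow, assuming a generic 6K Bayer frame with one 16-bit sample per photosite (illustrative geometry, not RED's actual R3D numbers):

```python
# How much bigger a 2:1 working cache is than 6:1 camera originals.
# Frame geometry is a generic 6K Bayer mosaic at 16 bits per photosite --
# an assumption for illustration, not RED's actual R3D format.
def frame_mb(width, height, bits_per_sample, ratio):
    raw_bits = width * height * bits_per_sample
    return raw_bits / ratio / 8 / 1e6  # megabytes per compressed frame

camera_mb = frame_mb(6144, 3160, 16, 6)  # 6:1 capture files
cache_mb = frame_mb(6144, 3160, 16, 2)   # 2:1 background-decompressed cache

# the on-RAID cache costs 3x the storage of the 6:1 originals
assert abs(cache_mb / camera_mb - 3.0) < 1e-9
```

So the tradeoff is roughly tripled storage for the cache in exchange for much lighter decode work per frame, which is exactly where fast, cheap SSDs come in.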
 
I'll second that... FF is gorgeous, as long as it's in focus :) I just don't know when it became acceptable for people to miss focus and call it art or good work. Don't get me wrong, I love the aesthetic of FF DOF... just sick of soft crap on my telly! Sorry... derailing off topic.
 
The point of an FF sensor is that super-shallow focus is a choice rather than a requirement. Lenses do not have to be run wide open, and with 2000 ASA sensitivity you can just drop some ND out when you need more focus. An FF sensor is simply more flexible, allowing for more photographic possibilities.
 
A FF sensor is just more flexible allowing for more photographic possibilities.

Why stop there? I envision a future in which high-end cameras like the Epic have only one sensor size developed, and it's a 12K+ 645 sensor. From there, you can crop to FF or S35 or whatever you want and still have at least 5K to play with, and still have the option of shooting 645 at 12K (can you imagine the IMAX footage we'll see?)

Now that RED is adding the option for users to pick their horizontal and vertical resolutions independently, I think this is not only doable, but inevitable. Why spend R&D money on multiple sensors when you can make one that serves (almost) everyone? Presumably the idea of a smaller sensor for a cheaper camera went out the window with the 2/3" Scarlet, so why not just go all the way to 645 and everyone can pick whatever lens they want to suit the project?
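The crop math works out, assuming rough sensor widths of 56 mm for 645, 36 mm for full frame, and about 24.9 mm for Super 35 (approximate format widths, not any specific sensor's spec):

```python
# Horizontal resolution left after cropping a hypothetical 12K 645 sensor.
# Sensor widths are approximate: 645 ~56 mm, full frame 36 mm, Super 35 ~24.9 mm.
SENSOR_645_MM = 56.0
SENSOR_645_PX = 12000

def crop_px(target_width_mm):
    # pixels remaining across a centered crop of the given physical width
    return round(SENSOR_645_PX * target_width_mm / SENSOR_645_MM)

ff_px = crop_px(36.0)    # full-frame crop: ~7.7K
s35_px = crop_px(24.9)   # Super 35 crop: ~5.3K -- still "at least 5K"
```

Even the tightest common crop keeps well over 5K of horizontal resolution, which is the whole argument for one big sensor.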

Seeing as 645 was on the original list of sensors RED wanted to make, this must at least be something that has been discussed at camp RED.

Or is there some technical limitation to this that I'm not considering? I mean besides read/reset speeds (which I assume are at least somewhat subject to Moore's Law).
 