New iMac with ThunderBolt ... Plausible to Use?

Hi Jeff, I will attempt to answer your reply briefly.

I think you're trying to over-analyze the speed difference. It's not a simple cut-and-dried MHz difference between the previous CPUs; there's actually a bit more going on there.

I don't see why you would need to edit at full resolution...? There are currently no solutions out there that allow this anyway. Not for less than $3K... Not for less than $30K...

No, I wasn't. I was taking into account the efficiency differences, though there were few test results for this at the time.

There are certain noise/interference patterns that you need full resolution to see. I suppose you can clean the footage if you want to, but you have to view it at full resolution at some point to notice them. Then there are other little things to check in the scene, which should be done while you still have a chance to re-shoot. You could add a blur filter to make it look the same as the lower-resolution edit if you want. Believe me, I don't know whether any of these cameras will suffer this with environment or age, but the first time it happens on a major project, I reckon you'll wish you had checked. Apart from that, it is just a truer look.

Now back to reality: I want a true SHD/4K camera one day as a minimum format to work with, even if it is a 200Mb/s H.264 system.

This iMac would be a very good platform for working with RED footage, and should plow right through 3K Scarlet footage without breaking a sweat. I was cutting R3D files natively, without transcoding or jumping through other hoops, in Adobe CS4 on a 2006 model MacBook Pro with a 2.33GHz C2D CPU and 2GB RAM.

People have been claiming much more than this in the "computer for Scarlet" thread, and have been transcoding to other formats to do it on those systems (which probably support accelerated processing of those formats). But are we talking about full resolution in real time, for every post-production process as well, with no unnoticed skipped frames?

I've just read your FCP Mac comments; I thought there were Adobe members in there. Thanks for the warning, I was considering FCP. I thought it was strange that nobody seemed to have acceleration except Vegas, and even that is brain-dead. I understand that on the RED side none of their software has GPU acceleration. I aim to do work with SHD H.264 and H.265 anyway. I was figuring that by the time Ultra HD was available the acceleration would be figured out and the Mac would then be able to handle it. I still prefer 6-8 cores, but would have gone for the iMac if only it had an SHD+ screen; the processing was just a compromise to wait out until acceleration was worked through.

There are a lot of advantages to them, which are not readily apparent until you do start using them.

That is why I have been wanting to convert over to them for years, now that the price is reasonable. But apart from a few letdowns, the commercial philosophy can get in the way.

"No BD, no USB3", etc. are almost laughable to read at this point. Same with full-screen multitouch... Jobs has already addressed that issue on more than one occasion. ...I'm still waiting for someone in the Windows world to show me a Windows-based touch interface that doesn't completely suck. I'd even settle for one that just works half as efficiently as a keyboard.

I'm using one, an Android tablet (but no spell checker, yet) ;-) Hardly laughable, rather the opposite of the reason they don't want them. Jobs was referring to the arm-aching style of touch they are using, not the MS Surface-like touch both companies, and myself, are pursuing.

Lay-down mode for systems like the iMac??? Really? Are we going to stand all day at a countertop with a 27" screen lying on its back? Are you going to put a 22" iMac or all-in-one PC in your lap and work on it? Microsoft already tried this with Surface. Total flop.

Apple has a system; the information about it has been out for a while. You can sit. Depending on the thickness, it can sit like your keyboard, or, as I think Apple plans, tilted, or it can be the table and tilted. When you do multi-page technical document design and drafting you quickly realise how good it is, and for most things in post it is just going to be excellent (before you say it: OLED or similar for colour constancy at angles). I am currently considering how to put my colour-graded 42-inch into a drafting mode (if the panel can handle lying at an angle).
 
I was around for the USB vs. FireWire ordeal myself. In this case, Thunderbolt is the winner. There's not a whole lot of betting going on. Thunderbolt is stupid simple and cheap to implement. Directly integrated in new Intel chipsets. Starting with the E-series Sandy Bridge chipset components, just about every decent workstation or gaming PC is going to have Thunderbolt. Or at least the Intel-based ones are. AMD is stuck with USB3 for now. Yes, they can scale it to 10Gbps... So that's half of a Thunderbolt port, and it has far more overhead. You have to keep in mind that Intel, the ones who created USB, the ones who championed USB2.0, have effectively shot USB3.0 in the head by not endorsing it as a continued standard and by refusing to support it. USB3.0 is a layered topology that still has legacy USB 1.1 as its foundation. It is, to date, the most inefficient and bloated interface standard to ever grace a PC motherboard.

I understood that there is support for USB3 from a number of manufacturers, and you get backwards compatibility. I thought I heard that they were redoing the protocol again. I like TB, but it may not be over, and Intel has the proprietary format Apple was involved with. USB can be a thorn in its side for years. So I would not expect everything to be on TB soon.

Apple can afford wonders, but that's also beside the point. There have been high-density LCD displays for many years. Most have significant limitations; only the ones hitting the market now are viable for use as fully functional displays. Many of the high-resolution displays of the past have found their niche in medical and satellite imaging; most had terribly slow refresh times, or were monochrome, or had limited color reproduction.

Seen how slow the Apple IPS ones are? The displays have been used for cinema editing for years; plans for 30-inch ones have been put off for 4-5 years. What is wrong with stills, CAD, desktop publishing or visual design, or the old 21-inch? The truth is that, apart from this year's problems with supply, they are just hanging onto the old display before the model revision, and they are actually able to do it. Apple is the outsider, much like here, and they can't afford to be the same or second. Sure, they might be a big manufacturer, but they are just part of a much larger market. Take a small FHD monitor for pixel size, and something the size of four such panels is possible for a similar price. 30-inch I don't know; do they have a cheap enough existing process at that pixel size? A big enough run is the main problem for cost reduction, but they now have the numbers with a long-overdue Cinema Display.


The issues to do with gaming performance are irrelevant. We have the REDCODE rendering and processing power; getting it out is relatively low-end. DisplayPort has supported some of it for a while, NVIDIA had a system, and I think a dual-link DVI port or HDMI 1.4 can handle it at a basic frame rate.

Processing power: SHD is around double the resolution of the 27-inch iMac. Some applications might require four times the processing.

I know about the autostereoscopic problems; I have a few ideas to get around them. Actually, I could do a glasses-free stereoscopic display. The Toshiba one is good just because if you want three people to view, they can easily position themselves together. I too don't see what the problem is with stereo vision on the PC, as mostly it is just one person sitting right in front of it most of the time.


Nintendo did an admirable job for being first to market with such a product in their 3DS portable.

Well, last at launch time. Sharp had cheap phones and laptops years ago, and it is an Australian invention from the late '80s or something. That will be why you can buy so many displays cheap on eBay.


I see you browse AppleInsider and/or Macrumors. That's one of many such patents spanning many years. That patent was originally applied for nearly 3 years ago. And Apple had patented a table-top touch interface several years before that. Apple doesn't file all of their patents under the Apple name, either. They have IP holding companies with arbitrary names, in order not to draw direct attention. It's usually pretty laughable watching the Apple patent reports that show up online. Half of what gets filed directly under the Apple namesake is a smoke and mirrors show.

Not so laughable, as they are too often similar to my ideas from a decade beforehand.

I don't think we're going to do better in an all-in-one... Some of the Windows PC makers' all-in-ones have some advantages, some disadvantages, when compared. Just the nature of the beast. I'm anxious to see what Apple does with the Mac Pro refresh. I have a hunch that we will see a more versatile system than the previous tower.

I think it might have something to do with the previous popularity of sumo PCs in Asia, which were big fat all-in-one CRT monitor affairs, from what I understand.


As for real-time 3D REDCODE... I don't see that being a real problem as long as you don't need stereoscopically synced editing at full resolution. I've got Mac and Windows notebooks that can handle dissolves and transitions between 4K R3Ds in real time under the newer NLE systems like Adobe CS. The horsepower to deliver stereoscopic abilities is all there; it's just down to the software. Sony Vegas actually does just this and can supposedly do it with REDCODE on a common desktop PC. I personally have not worked with the new 3D/stereo abilities in Vegas...

Of course I definitely want to do stereo footage, so it is an issue. GPU processing is worth many times the CPU, so it should not be too much of an issue as long as I have a powerful enough GPU, and the top iMac has one.

About how close the price of the top iMac is to the basic Mac Pro: of course, with the Mac Pro you have to buy the monitor and extras that come with the iMac. The iMac is close to being able to do everything I want for a while (I can get quality external USB 2 sound equipment and a TB RAID).

Well Jeff, thanks for steering me straight about viable alternatives on the Mac. Pity Vegas is not on the iMac; that would be cheaper.
 
Here's some fuel to add to the Thunderbolt debate!

By the way, Vegas Pro 10 does native R3D 3D on mainstream desktops - quite an achievement there!
 
So the new iMac has two Thunderbolt ports... would it be possible to add a RED Rocket in a Sonnet PCIe cage (http://www.sonnettech.com/news/nab2011/) on one port... and storage on the other port... to use a RED Rocket on an iMac? The Sonnet page even says the boxes can be daisy-chained... I wonder if something like an NVIDIA card, or a Decklink card, could work in a daisy chain with the RED Rocket and run something like Resolve or Scratch on an iMac...

Just a bunch of thoughts, but the idea sounds interesting to me.
 
Here's some fuel to add to the Thunderbolt debate!

By the way, Vegas Pro 10 does native R3D 3D on mainstream desktops - quite an achievement there!

Not really; good programming (a portion of software out there is not well programmed) and GPU programming should do it. About SHD 3D: SHD at 25fps is approximately 4.8Gb/s plus signalling overhead; 50fps is 9.6, 100fps is 19.2, and 10-16 bit would be extra. Present dual-channel Thunderbolt and DisplayPort are around this, but only rated at 60fps. However, there was talk about 40Gb/s Thunderbolt this year, and Thunderbolt has room for 4 channels. There are SHD TVs coming, I think this year, so I wonder if an SHD 3D version is due, whether the new Mac Pro might also have 40Gb/s Thunderbolt for a new digital Cinema Display, and whether the next iMac model might have such a screen, maybe as a surprise release. About speeds: labs had copper traces working at 20Gb/s four or so years ago, so maybe they plan to top out 4 channels at 100Gb/s. Optical is a different matter: they have to have enough traces to supply it, or optical traces. So once optical technology comes through on the motherboard, between and in chips, Thunderbolt 3.0 (the original optical TB, I imagine, might be referred to as version 2) might see a speed-up great enough to handle the 70K-horizontal surround stereo image of the future and the speed of optical computers starting within 5 years (why it is taking so long I don't know; I know separate people that had a sneak peek over ten years ago). The predicted speed of optical technology and bandwidth make the RED camera processing engines look like firecrackers.
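
To put rough numbers on that, here is a small Python sketch (my own arithmetic, assuming "SHD" means a 3840x2160 frame at 24 bits per pixel, and ignoring signalling overhead):

```python
# Raw (uncompressed) video bandwidth, illustrative only.
# Assumes "SHD" = 3840x2160 at 24 bits/pixel; signalling overhead ignored.

def video_gbps(width, height, bits_per_pixel, fps, eyes=1):
    """Raw pixel bandwidth in gigabits per second (1 Gb = 1e9 bits)."""
    return width * height * bits_per_pixel * fps * eyes / 1e9

for fps in (25, 50, 100):
    mono = video_gbps(3840, 2160, 24, fps)
    stereo = video_gbps(3840, 2160, 24, fps, eyes=2)
    print(f"SHD {fps:3d}fps: {mono:4.1f} Gb/s mono, {stereo:4.1f} Gb/s stereo")
    # 25fps:  5.0 / 10.0   50fps: 10.0 / 19.9   100fps: 19.9 / 39.8
```

That lands close to the 4.8/9.6/19.2 figures above, and shows why stereo at high frame rates pushes past a 20Gb/s dual-channel port.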

Interesting article, Subhadip, thanks. I have seen a diagram of a smaller DisplayPort that seems suitable for optical. I understand they plan to use copper-ended cables with longer-haul optical internally until connection problems are sorted out. Still, for most desktop devices (i.e. mouse, keyboard, controller) a simpler one-channel connector is all that is needed (I had better look at the plug diagram again to verify that the back channels are separate). I haven't mentioned that they plan on having USB over Thunderbolt. I wonder what happened with the USB 3 process? Intel showed optical at the same data rate as TB on a USB plug as the future of USB 3, and the TB proposal follows more closely a proposal I had for USB 3 in simpler layers, etc. (I used to design networking schemes for my realtime OS years ago that would do everything, so I have a few ideas on the technical side).
 
It doesn't look like Apple will have much up their sleeve. The new high-end Intel Xeons are 10-core, but each chip is $5K; the next one down is back to a 4-core chip. So unless Intel magically pulls out another 6- or 8-core, we're going back to octo Mac Pros for the next ones. Right now it doesn't look like there is a 6- or 8-core on Intel's roadmap.

How about the rumours that Apple will switch to an ARM CPU? Any news on that? Maybe things will change dramatically in the next two years or so... just my 2c.
 
I agree with Jeff.
Question for Wayne - What kind of resolution do you expect to get from a computer LCD monitor? You are not going to use any computer LCD monitor to QC or grade anything. For transferring files, creating dailies and doing an offline, that screen is perfectly fine.

Craig
 
Well, most of us do not have $50K for a 56-inch digital cinema monitor, so is there a point? 30-inch+, preferably at least 40. I need one monitor on which I can see the pixel level, preferably with fast refresh, and I have a colour-graded 42 here that I admittedly need to adjust the gamma profile on. 30-inch is a bit of a strain at that distance and maybe a bit small for field of view on a desktop, but it is all a compromise for price, something people often seem oblivious to, but which most Scarlet customers will probably be more interested in.

Just because two people agree does not necessarily make it correct. Have you read the replies closely? You will read that I am pursuing at least super HD resolution, which is slightly less than 4K horizontally, and provides a nice on-screen controls gap if desired for single-monitor operation, and the ability to take real SHD resolution when it becomes available and when there is a Scarlet with it. The world can divide into two parts, the haves and the have-nots; the fixed Scarlet at least is going to attract a lot of have-nots ;-). Besides, I thought even the 27-inch iMacs were often used in professional colour operations. There is also faster-switching IPS LCD technology coming; not that you can't get that already with others, just not the same angle of view usually, but as I'm sitting in the sweet spot that is OK.
 
How about the rumours that Apple will switch to an ARM CPU? Any news on that? Maybe things will change dramatically in the next two years or so... just my 2c.

You are in luck. What I haven't seen mentioned on the Apple sites is that Microsoft is apparently planning an ARM version of the next desktop Windows. I think this was genuine and not an April 1st thing. There was also some confusion before with a mention of AMD in a presentation, which was put down to use of AMD in Apple products rather than AMD supplying chipsets, but now that ARM has sold a license to AMD, people are thinking it is for the tablet market. VIA, which does cheap PC processors and chipsets, and NVIDIA have had licenses for a while. Intel sold their division, which had the top-performance StrongARM division from the DEC breakup back in the beginning; I don't know if they have replaced it, as they are concentrating on the Atom, which they reckon can compete, but that is like comparing a big modern V8 to a smaller, simpler car in traffic: one looks more impressive and one gets there cheaper and quicker in design terms.

So, there might be a bit of a parallel general move onto ARM for PC applications. In this case Apple is well positioned and unlikely to want to be left out. It is a shame for Apple; Steve abandoned the PowerPC to go with Intel, but a lot of the stuff Intel had planned did not arrive quickly (I have given up waiting for Larrabee and the terascale processor; I don't know what happened with under-1-watt processors and low energy-to-MHz targets, and the optical stuff is not yet here, except Thunderbolt may have it in the nearer future. An ARM-based Fusion array would answer Larrabee). Now to go back to ARM, which is what I said they should do instead of the PowerPC back in the first place. And there is little reason you could not build a more powerful array of ARMs on a chip than PC processors on a similarly complex chip; the V12 is almost dead, the 2-cylinder car fleet is here. Unfortunately, my crowd is working on the solar-powered vehicle of the processor world at GreenArrays.

So, it is far from clear yet, but there is opportunity there (modern software systems support virtual systems, on-load compilation to multiple system targets, etc., with abstracted APIs making software cross-compatible; plus just keep adding fewer PC cores and more ARM cores for compatibility). But beware: in the next 5 years we expect optical computers to rise and eventually displace conventional processors with some very serious processing power. So, in five years' time it could all start changing again.
 
Jeff, I think what we are facing here is a lack of real competition, like what we faced before RED came along. Everybody is content to move in step and make as much money as they can off present standards, so they have something new to sell in future. So even though they could have been selling SHD computer monitors around 2007 like NVIDIA wanted (one of the cheapest manufacturers announced panels in 2006), which made perfect sense for professional PC applications, they have subjected us to many years of HD+ displays like in the iMac. Unfortunately we are now often limited to very dated HD standards, which is OK for home videoists.

Back in the days of the DIY digital cinema camera group, which Silicon Imaging and Drake spun out of, and which Jim was inspired to start RED from, I had deliberate intent on promoting the cause: to worry camera manufacturers enough to improve cameras and sensors on the low end (which were being dumbed down and going backwards at a rate of knots), hopefully get manufacturers involved, and ultimately get a new manufacturer like RED, and a cheap camera almost exactly like the fixed Scarlet (which I openly specified back then so as not to compete with a second, 15-20K-range, more professionally featured camera) for the 2006 time frame. It worked a treat: we got the original Sony consumer HD camera, which was a huge step forward from the backwards trend and often compared to the ZX1 series in some ways. We then got HDMI, and HDMI recording diverted a lot of attention away from the DIY groups. Sony released a 6MP 60fps quality (for those days, in consumer terms) Exmor sensor that mimicked the sort of numbers I put forward so as not to compete with larger cameras; it was used in the Casio F1, announced within 8 months, and completed within a couple of years. Other cameras went forward in quality, Sony even announcing a removable-lens camera, and companies improving codecs. I got onto cheap HD cameras and improving their quality, putting feedback into them on other well-watched groups.

But the reason for this is that RED lacks real competition too. In 2006, 2007, even 2008, 3K was great, but now 4K is the new standard, and for a $3K camera; but there is no real competition out there yet to get a quality 4K product at that price. Everybody probably stopped when the Scarlet project first came up. I am too far out of it in health to design a cheaper camera system (estimated costs start around a few hundred for something with quality for professional ENG or basic film (plus lens for removable), and the GreenArrays group only really do something suitable for HD cameras). So RED has no real economic or competitive incentive for 4K in this model, maybe the next (by then I'll want 4K and 32MP stills in a Micro 4/3rds camera :) for commercial and stills work).
 
Here's some fuel to add to the Thunderbolt debate!

By the way, Vegas Pro 10 does native R3D 3D on mainstream desktops - quite an achievement there!

Stupidest article I've seen in a while. Apple is not going to make it so only people with the newest MacBook Pro or the newest iMac can use an iPhone 5... It would be nice in addition to the dock connector (as long as the flash storage can handle it), but I doubt the new iPhone will even have TB at all. ARM chips do not have PCIe lanes as far as I can tell so the only way they could do it would be to add a bunch more electronics to process the TB stream, after the TB demux. I really don't think they would be able to fit all that in until the tech becomes more mature.
 
How about the rumours that Apple will switch to an ARM CPU? Any news on that? Maybe things will change dramatically in the next two years or so... just my 2c.

The ARM CPU move makes some sense for Apple, but only for MacBooks.

ARM has a long way to go to catch up to x86 in terms of pure performance. The quad-core 2.5GHz ARM on the roadmap for 2013 will only perform alongside today's dual-cores. The significant gain is that it will do so at a fraction of the power, die real estate and price - and that has always been ARM's mantra. As for Apple, there is a clear focus more on style and marketing than on actual performance or features. To blindly speculate, I think Apple are preparing a new range of MacBooks which look radically different from today's notebooks. Perhaps a MacBook as thin as an iPad? Asus' EeePad Transformer seems to be a big hit - perhaps we will see something like that, except costing thrice as much and several times more powerful. The solution is ARM, of course. The performance takes a big hit, but Apple is quick to realize that how a laptop feels and looks is far more important for most computer users. And of course, profit margins. All of these benefit heavily from moving to an ARM architecture. It also fits well with Apple's new "Mobile Device Company" philosophy. And indeed, 70% of their revenue comes from the iPad, iPhone and iPod alone. These have taken Apple from being an also-ran to being the biggest tech company in the world. It just makes sense all around for Apple to switch to ARM. By now, Apple have established an incredible "reality distortion field" - I don't think any Apple users would care about the missing performance, yet they will be blown away by the new sleek looks. This is all just speculation, of course.

Having said all of this, Apple must also realize that there's a market for high performance Macs (albeit very small, comparatively) and there's nothing ARM is releasing in the near future that can match up to Intel or AMD x86 in terms of pure performance. The second generation of NVIDIA's Project Denver, due out in 2015, might just rival AMD's entry level Fusion APUs at that point - but that's a long way off.

Stupidest article I've seen in a while. Apple is not going to make it so only people with the newest MacBook Pro or the newest iMac can use an iPhone 5... It would be nice in addition to the dock connector (as long as the flash storage can handle it), but I doubt the new iPhone will even have TB at all. ARM chips do not have PCIe lanes as far as I can tell so the only way they could do it would be to add a bunch more electronics to process the TB stream, after the TB demux. I really don't think they would be able to fit all that in until the tech becomes more mature.

I would think twice before calling anything Charlie Demerjian writes "stupidest". He has written some of the most insightful, though outspoken, articles over the last decade, and many of his boldest predictions have come true. In the world of hardware journalism, he's up there. He does address your concern in the very same article - ARM do have PCIe lanes as an option. Besides, iPhone 5 having TB is a rumour that has been around for a while and through multiple sources. It is a simple commentary on Thunderbolt and monopolistic behaviour. Thunderbolt didn't end up being the revolution Light Peak was.
 
The ARM CPU move makes some sense for Apple, but only for MacBooks.

Agreed, and yes only for the smaller end of their lines. For now it's just a rumor, but I would not be surprised if Intel and/or ARM are given the opportunity to supply CPUs for the next iterations of iDevices. With the current issues and tension between Apple and Samsung, who is the manufacturer for the A4 and A5 CPUs, I would say there is good reason for change.

ARM has a long way to go to catch up to x86 in terms of pure performance.

Yep. And with Intel moving to Tri-Gate later this year, we could see them competing within ARM's space of the market. Something Intel does not do currently.


I would think twice before calling anything Charlie Demerjian writes "stupidest".

Er... He has his moments. And this article was definitely one of those moments. He did some of his homework, but not enough, and there are some glaring errors in there. For example, he failed to mention that the mini-DisplayPort connector (possibly alluding to ThunderBolt) was present on both iPhone and iPad mock-ups at CES. And the standard dock port was still in its usual place, in all of its glory. He erroneously refers to high manufacturing costs for ThunderBolt, when that is simply unfounded. He calls it a proprietary standard. But it's not, it's an open standard. In fact, it's PCI Express topology using the completely open mini-DisplayPort connector. It's super simple to develop for. Intel is shipping out ThunderBolt Developer Kits for device makers and integrators and the rate of adoption seems to be quite fast. It's hard to see because nothing has really hit the market yet. Intel and most of their OEM partners will be releasing motherboards with integrated ThunderBolt in the next few months as Intel begins to roll support for it right into their chipsets.

It is a simple commentary on Thunderbolt and monopolistic behaviour. Thunderbolt didn't end up being the revolution Light Peak was.

Yeah, it was definitely commentary. I'm not sure I see the monopolistic behavior. I definitely caught his comments about it, but they seemed somewhat forced. As for Thunderbolt not being the revolution that Light Peak was, I'm not really sure what you're getting at there??? Light Peak was just the name the interface was developed under. I guess people liked it and wanted it to stick. The only thing Thunderbolt doesn't have that the original Light Peak claims stated is the long cable distance due to using optical fiber. But if you consider the reasons why the optical fiber interface was dropped, it makes total sense. First and foremost, cable costs. The necessary grade of dual-mode fiber is not cheap... At least not within the pricing realm of consumer products. They still had to run power through metal wires in the cable anyway. There's a lot of people yammering on about how Light Peak was supposed to be 100Gbps... Uh, yeah, it was supposed to eventually scale to that. Just as ThunderBolt is expected to do. Initial Light Peak as demonstrated here and there over the past two years was always 10Gbps per channel. Just as ThunderBolt is. The mini-DisplayPort connector makes so much obvious sense it's almost uncanny. It's a totally open connector design. Simple, robust, and it can support up to 4 channels of TB. Currently it has two... Each TB port with dual 10Gbps channels eats up 4 PCIe 2.0 lanes. People who actually thought that Light Peak would release with 100Gbps out of the box were dreaming if they thought it would happen this year, let alone last year... 100Gbps is roughly half the PCIe bandwidth of most Intel X58 chipset based workstations/servers.

Anyway, ThunderBolt still seems pretty revolutionary to me. I guess I'm a little bit fuzzy as to what you thought Light Peak was going to be. It's hard for people to quantify any of that until they see it in action. For now, it's just some stupid port on the side of their Macbook Pro that doesn't do anything.
 
Interesting points, Jeff. I really do hope Thunderbolt remains an open standard - it has a lot of potential for the future. It is no doubt cleaner and forward looking than USB3. I have a few concerns which I must read about - such as how AMD motherboard manufacturers are going to implement it. If it's truly an open tech, won't AMD be able to implement it in their southbridge/APUs?
 
The ARM CPU move makes some sense for Apple, but only for MacBooks.

ARM has a long way to go to catch up to x86 in terms of pure performance. The quad-core 2.5GHz ARM on the roadmap for 2013 will only perform alongside today's dual-cores. The significant gain is that it will do so at a fraction of the power, die real estate and price - and that has always been ARM's mantra. As for Apple, there is a clear focus more on style and marketing than on actual performance or features. To blindly speculate, I think Apple are preparing a new range of MacBooks which look radically different from today's notebooks. Perhaps a MacBook as thin as an iPad? Asus' EeePad Transformer seems to be a big hit - perhaps we will see something like that, except costing thrice as much and several times more powerful. The solution is ARM, of course. The performance takes a big hit, but Apple is quick to realize that how a laptop feels and looks is far more important for most computer users. And of course, profit margins. All of these benefit heavily from moving to an ARM architecture. It also fits well with Apple's new "Mobile Device Company" philosophy. And indeed, 70% of their revenue comes from the iPad, iPhone and iPod alone. These have taken Apple from being an also-ran to being the biggest tech company in the world. It just makes sense all around for Apple to switch to ARM. By now, Apple have established an incredible "reality distortion field" - I don't think any Apple users would care about the missing performance, yet they will be blown away by the new sleek looks. This is all just speculation, of course.

Having said all of this, Apple must also realize that there's a market for high performance Macs (albeit very small, comparatively) and there's nothing ARM is releasing in the near future that can match up to Intel or AMD x86 in terms of pure performance. The second generation of NVIDIA's Project Denver, due out in 2015, might just rival AMD's entry level Fusion APUs at that point - but that's a long way off.



I would think twice before calling anything Charlie Demerjian writes "stupidest". He has written some of the most insightful, though outspoken, articles over the last decade, and many of his boldest predictions have come true. In the world of hardware journalism, he's up there. He does address your concern in the very same article - ARM do have PCIe lanes as an option. Besides, iPhone 5 having TB is a rumour that has been around for a while and through multiple sources. It is a simple commentary on Thunderbolt and monopolistic behaviour. Thunderbolt didn't end up being the revolution Light Peak was.

I did not pay much attention, but assumed it was only about the short-term use of Thunderbolt first by Apple. As far as control of chipsets goes, I don't have any information.

As far as ARM goes, it is a macrocosm: people license cores and designs, they disappear out the door, and different things are done with them by licensees that not even ARM knew about. I know; I tried to enquire about what was available out there through ARM years ago. So what is on the ARM website is only the tip of the iceberg, and ARM-designed cores are only one version. This also means that there is a microcosm of chip design processes they can be based on, so the Intel process may not be the end of it. But does anybody have a link to summaries of the Intel process?


What people are missing, which I posted before, is that an array of simpler processors can get more performance out of the same chip space than a much larger, more complex processor. This is how GPUs reach such a high level of processing performance, and how the GreenArrays chips work and take on FPGA processing. An array of ARMs hooked to PC architecture can compete, and Apple is designing their own ARM chips, as are others. Let's not forget the ARM version of desktop Windows I mentioned.
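
As a toy illustration of that area argument (entirely made-up relative numbers, just to show the shape of the trade-off):

```python
# Toy die-area/throughput model (illustrative numbers only, not real silicon).
# Assumption: a big out-of-order core costs ~10x the area of a simple
# in-order core but delivers nowhere near 10x the parallel throughput.

COMPLEX_AREA, COMPLEX_PERF = 10.0, 4.0  # relative area and throughput (assumed)
SIMPLE_AREA, SIMPLE_PERF = 1.0, 1.0

def chip_throughput(die_area, core_area, core_perf):
    """Aggregate throughput of a die tiled with identical cores."""
    return int(die_area // core_area) * core_perf

DIE = 40.0  # one hypothetical die, in the same relative units
print("few complex cores:", chip_throughput(DIE, COMPLEX_AREA, COMPLEX_PERF))  # 16.0
print("many simple cores:", chip_throughput(DIE, SIMPLE_AREA, SIMPLE_PERF))    # 40.0
```

The catch, of course, is that the simple-core array only wins on work that parallelises; single-threaded tasks still favour the big core.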

The group I have been involved with centres around one of the leading cutting-edge performance processor designers in the world. He is the designer at GreenArrays, but is aiming at lowest power instead of speed: 144 processors on one chip, each one probably under 10K transistors plus memory, and for cents in large lots (though I am uncertain how present this is with this particular design). A few modifications to the architecture and retargeting to a performance process would produce a performance array. So, many thousands of processors in the same space, maybe running up to 10-20GHz, knowing how much the designer might be able to milk available processes. I am actually looking at using the established design for my third-world PC, based on some emulated proposed modifications.

But ARM is a design more targeted at the desktop (it started as a desktop processor), much, much faster than PC processors. You can also add much of the surrounding PC design stuff to an ARM design. They are two different worlds, but much is transferable.
 
He calls it a proprietary standard. But it's not, it's an open standard. In fact, it's PCI Express topology using the completely open mini-DisplayPort connector. It's super simple to develop for. Intel is shipping out ThunderBolt Developer Kits for device makers and integrators and the rate of adoption seems to be quite fast. It's hard to see because nothing has really hit the market yet. Intel and most of their OEM partners will be releasing motherboards with integrated ThunderBolt in the next few months as Intel begins to roll support for it right into their chipsets.

The only thing Thunderbolt doesn't have that the original Light Peak claims stated is the long cable distance due to using optical fiber. But if you consider the reasons why the optical fiber interface was dropped, it makes total sense. First and foremost, cable costs. The necessary grade of dual-mode fiber is not cheap... At least not within the pricing realm of consumer products. They still had to run power through metal wires in the cable anyway. There's a lot of people yammering on about how Light Peak was supposed to be 100Gbps... Uh, yeah, it was supposed to eventually scale to that. Just as ThunderBolt is expected to do. Initial Light Peak as demonstrated here and there over the past two years was always 10Gbps per channel. Just as ThunderBolt is. The mini-DisplayPort connector makes so much obvious sense it's almost uncanny. It's a totally open connector design. Simple, robust, and it can support up to 4 channels of TB. Currently it has two... Each TB port with dual 10Gbps channels eats up 4 PCIe 2.0 lanes. People who actually thought that Light Peak would release with 100Gbps out of the box were dreaming if they thought it would happen this year, let alone last year... 100Gbps is roughly half the PCIe bandwidth of most Intel X58 chipset based workstations/servers.

Anyway, ThunderBolt still seems pretty revolutionary to me. I guess I'm a little bit fuzzy as to what you thought Light Peak was going to be. It's hard for people to quantify any of that until they see it in action. For now, it's just some stupid port on the side of their Macbook Pro that doesn't do anything.

I know. I am planning on using TB and making products on my platform into devices, but my bad health is preventing me. It should be suitable as a physical layer for the network bus I proposed for my VOS ten years ago. I did mention the optical problem and the proposed solution previously. Present chipsets may have problems with 100Gb/s, but does that mean they could not have designed a chipset to support that this year or last?
 
Interesting points, Jeff. I really do hope Thunderbolt remains an open standard - it has a lot of potential for the future. It is no doubt cleaner and forward looking than USB3. I have a few concerns which I must read about - such as how AMD motherboard manufacturers are going to implement it. If it's truly an open tech, won't AMD be able to implement it in their southbridge/APUs?

There is still a lot of industry politics and posturing to deal with here. Even though it's open, Intel is not licensing their own TB host chipset designs. Manufacturers must either develop their own or buy the host controllers from Intel. So when it comes to a competitor like AMD, I think the only way they will adopt TB is after it has become a truly accepted standard within the industry. All we can do is predict and speculate. USB3.0 still has a lot of steam... Joe Average Consumer doesn't understand its shortcomings, nor do they really care, nor do they need the potential 4.8Gbps it offers. Thunderbolt is poised to be a runaway success, if it's handled right. I think Intel is handling it right so far. I'm just surprised that Intel and Apple did not take the time to release a Thunderbolt peripheral or two along with it. Most manufacturers who are now supporting it only found out about it when Apple and Intel unveiled it in the new Macbook Pro. A few select ones, like G-Tech, LaCie and Promise found out about it, literally, just a few days, maybe even a whole week, before the launch.

It should be suitable as a physical layer for the network bus I proposed for my VOS ten years ago.

Not sure what you're smoking...

I did mention the optical problem and the proposed solution previously. Present chipsets may have problems with 100Gb/s, but does that mean they could not have designed a chipset to support that this year or last?

In terms of PCIe, one lane is 2.52Gbps. In PCIe 2.0, we have DDR signaling and 8b/10b encoding overhead, which gives us a theoretical maximum of 480MB/s per lane. Most people just call it 500MB/s, but it's not. Thunderbolt extends PCIe topology in a cascade arrangement from the host over the external connector at 10Gbps per channel. Effectively, two PCIe 2.0 lanes are constituted in one of these channels.

Yes, 100Gbps could have been done right from the start by simply stacking on more channels. It would take a different style of connector; as of right now, it would require three mini-DisplayPorts, but that's not a huge deal. One does have to ask what you would do immediately with 100Gbps of external bandwidth. The 20Gbps of aggregate bandwidth in a current Thunderbolt port has a real-world peak throughput of 1920MB/s, or just shy of 2 GigaBytes per second. That's enough bandwidth to handle RED EPIC 5K 5120x2700 @ 48bpp (16bits/channel RGB) at 24fps. And a new 27" iMac has two of those ports. You could theoretically do a 5K transcode to an uncompressed container such as TIFF at up to 24fps on a new iMac if you connected the Rocket to one port and a fast enough RAID (2 GigaBytes per second! SUSTAINED!) to the other port.
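
For what it's worth, those figures roughly check out on the back of an envelope. A small Python sketch (my own arithmetic, assuming 8b/10b encoding and ignoring protocol overhead; it lands within a few percent of the 1920MB/s quoted above):

```python
# Back-of-the-envelope check of the Thunderbolt numbers above
# (my own arithmetic; 8b/10b encoding assumed, protocol overhead ignored).

TB_CHANNEL_GBPS = 10          # raw line rate per channel
CHANNELS = 2                  # channels per Thunderbolt port
ENCODING = 8 / 10             # 8b/10b: 8 payload bits per 10 line bits

payload_bits = TB_CHANNEL_GBPS * 1e9 * CHANNELS * ENCODING
print(f"TB port payload: {payload_bits / 8 / 1e6:.0f} MB/s")   # ~2000 MB/s

# RED EPIC 5K frame at 48bpp (16 bits per RGB channel), 24fps:
w, h, bpp, fps = 5120, 2700, 48, 24
stream = w * h * bpp / 8 * fps
print(f"5K 48bpp 24fps:  {stream / 1e6:.0f} MB/s")             # ~1991 MB/s
```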

But what if they did 100Gbps externally, what would you realistically use it for? Seriously? The PCIe host on the X58 has 40 lanes or approximately 200Gbps total PCIe throughput at 5GT/s.

I would gladly take 100Gbps externally if I could have it. But the only application it would serve is massive PCIe expansion. So, on that note, I'd rather just have a system with more expansion slots and maybe one or two 20Gbps TB ports at the rear.
 
Not sure what you're smoking...

Not smoking in that way. I communicate with some rather intelligent people and normally deal with some very difficult technical areas, despite my fairly substantial and consistent physical sickness blunting my ability and holding me back. I tend to be able to do this with professionals even though sick. It is usual for consensus through reason to only be reached with experts who are reasoned and have foresight. It is normal for people not to get it; I am normally considering more than ten times the relevant facts in my statements than the average professional around here. So a lack of reasoning skills prevents people from considering what they don't know, or that they could be wrong. It is not knowledge that is absolute but the ability to work that knowledge towards what is right, and many professionals fail because they believe their knowledge. Until you get to the very smart, who know there is more and can reach for it, or the dumb who are wise enough to know they don't know everything, there are problems.

Communicating around here can be a bit of a strain at times; at least you try to reason, which is more than many professionals I come across. My conversations with you here are probably the best I've had here. If I had a business running I would consider hiring such a person.
 
In terms of PCIe, one lane is 2.52Gbps. In PCIe 2.0, we have DDR signaling and 8b/10b encoding overhead, which gives us a theoretical maximum of 480MB/s per lane. Most people just call it 500MB/s, but it's not. Thunderbolt extends PCIe topology in a cascade arrangement from the host over the external connector at 10Gbps per channel. Effectively, two PCIe 2.0 lanes are constituted in one of these channels.

But what if they did 100Gbps externally, what would you realistically use it for? Seriously? The PCIe host on the X58 has 40 lanes or approximately 200Gbps total PCIe throughput at 5GT/s.

I would gladly take 100Gbps externally if I could have it. But the only application it would serve is massive PCIe expansion. So, on that note, I'd rather just have a system with more expansion slots and maybe one or two 20Gbps TB ports at the rear.

You have forgotten PCIe 3.0, which is 1GB/s per lane with an encoding overhead of apparently 1.5%, using a rather obvious feedback correction scheme (it took so long to get this). They will probably move to 20GHz soon. Isn't TB up to 4 channels in the connector?
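
To make the encoding comparison concrete, a quick sketch (my own arithmetic, using the published line rates of 5GT/s for PCIe 2.0 and 8GT/s for PCIe 3.0, and ignoring protocol overhead):

```python
# Effective PCIe lane throughput under the two encoding schemes
# (published line rates and encoding ratios; protocol overhead ignored).

def lane_mbps(gt_per_s, payload_bits, line_bits):
    """Payload bytes per second for one lane, in MB/s."""
    return gt_per_s * 1e9 * payload_bits / line_bits / 8 / 1e6

print(f"PCIe 2.0 (8b/10b):    {lane_mbps(5, 8, 10):.0f} MB/s/lane")     # 500
print(f"PCIe 3.0 (128b/130b): {lane_mbps(8, 128, 130):.0f} MB/s/lane")  # ~985
```

The 128b/130b scheme is where the ~1.5% overhead figure comes from, versus 20% for 8b/10b.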

It is not what you can do today but what you can and will do in future with a product. Nintendo suffered from this with the Wii not set up for HDTV, because not many people had HDTV at launch, but they quickly did. Nintendo survived well enough because of the new controller mechanism, and because of the stylish form factor, I would say. Nintendo also had cartridges with the 64 when Sony went for CDs; Sony gained ground. With the PS2 they included DVD, which was not well supported, but people did move to it and the PS2 was instrumental, I feel, in doing this. Sony did the same with BD on the PS3 and it is helping; you can even get many cheap disks.

Let's look at it this way: the ability to stream data off one device to another quickly is useful and effective. Real rates these days might be constricted to hundreds of MBytes a second from flash, but we can expect this to improve in the future; over 10GB a second may be what we are looking at. Rambus also announced a 1-terabyte-a-second memory interface a while ago, so the in-between bits are there.

As for other uses, see my previous posts. To get super HD at 60fps you are just about saturating the present interface; to go to 3D stereo you are looking at double. As I said, we might see a 40Gb/s version of DisplayPort/TB this year, as also previously indicated in the press; it is reasonable this year or next. It should also cover professional editing at 10+ bits, depending on how it is done (in case you do not know, a high-brightness screen can reveal banding when the pupil opens up to observe non-bright parts of the screen at 8-bit pixel depth, so these high-bit modes are important on big-FOV screens). Ultra HD 8K will take up 80Gb/s+; in a professional sense this resolution will be used in the next 5 years. So as a professional interface, 100Gb/s will be needed within the lifetime of today's new equipment. Past that, the upper limit of human vision is far above this, and then surround, and then volumetric occlusion mirror displays; you are looking at well over 1 terabit a second. I would prefer the full capacity just to act as a replacement for a graphics card port. I am also interested in using it for co-computing. Medical scanning, similar live scanning for home use, MakerBot images; the list of new uses within the lifetime of TB is great.
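
Extending the earlier bandwidth sketch to the 8K claim (again my own arithmetic, assuming 7680x4320 with 10-bit colour, i.e. 30 bits per pixel, and ignoring overheads):

```python
# Raw 8K bandwidth under assumed parameters: 7680x4320, 30 bits/pixel, 60fps.

def gbps(w, h, bpp, fps, eyes=1):
    return w * h * bpp * fps * eyes / 1e9

print(f"8K 10-bit 60fps mono:   {gbps(7680, 4320, 30, 60):.1f} Gb/s")    # ~59.7
print(f"8K 10-bit 60fps stereo: {gbps(7680, 4320, 30, 60, 2):.1f} Gb/s") # ~119.4
```

So the 80Gb/s+ figure sits between mono and stereo at that bit depth, which is why a 100Gb/s-class interface starts to look necessary rather than extravagant.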

Now backtrack: the iMac has two Thunderbolt ports. If you add an SHD monitor you can saturate one and be left with little for daisy-chaining other devices. The other will take some devices, which may be OK, but what happens to the bandwidth when you add professional equipment, RAID, another monitor, etc.? I know you might say, why would you want more than two TB devices, but if it is to replace USB then everything is going to go on it. So for the moment it is not much of an issue; 5 years down the track it would be for the current equipment. For those of us who design things, everything has to be considered; otherwise we would be at cave-dwelling level, with people not seeing the reason for more.

It is just a first step.
 
Have just skimmed over this thread to see if it has been mentioned; appears not. Little thing I read today: apparently these new iMacs are using some exotic and proprietary SATA connector for the internal HDD, making it impossible for the user to replace a broken drive or put in a larger-capacity drive down the track without a trip to the Apple service centre. Once Thunderbolt is up and running with RAIDs and such, I guess it's not going to be much of an issue, as the internal drive will just be for system-level stuff. But nice to know Apple is still up to its old tricks, finding new and exciting ways to control and limit the user experience and lock out cheap third-party accessorising. Way to go.
 