
New MacPro predictions? Hopes? Fears?

Any resources out there for pre-built Hackintosh / DaVinci hero workstations? I have a loaded nMP that chokes on Neat Video applied to clips, even in 1080p proxy mode. Not real keen on going to the Dark Side just for a single program, nor on being a test pilot building my own. Got plenty of other projects way more interesting than that.
 

Get a Linux Resolve box. Unlimited power and ProRes. None of the Hackintosh headaches.
That said, even the most powerful Resolve system can't do Neat Video in real time. It needs to calculate vectors across up to 11 frames, which means you need to supply and process 11 frames at a time in real time, and that is a tall order for any system, even in HD. For real-time playback you must cache it, always...
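To put rough numbers on that 11-frame window, here's a minimal sketch under my own assumptions (not Neat Video's published internals): 1080p frames, RGBA at 16 bits per channel, 24 fps playback.

```
# Rough back-of-envelope for why an 11-frame temporal window forces caching.
# Assumptions (mine, not Neat Video's internals): 1080p, RGBA, 16-bit channels, 24 fps.

width, height = 1920, 1080
channels, bytes_per_channel = 4, 2
frames_in_window = 11
fps = 24

frame_bytes = width * height * channels * bytes_per_channel
window_bytes = frame_bytes * frames_in_window
# Every output frame has to touch the whole window, so sustained traffic is
# roughly window size * frame rate (ignoring any reuse between windows).
traffic_per_second = window_bytes * fps

print(f"one frame       : {frame_bytes / 1e6:.1f} MB")
print(f"11-frame window : {window_bytes / 1e6:.1f} MB")
print(f"worst-case traffic: {traffic_per_second / 1e9:.1f} GB/s at {fps} fps")
```

Even before any motion-vector search, that's a ~180 MB working set that has to be touched for every output frame, which is why caching the result is the only way to guarantee smooth playback.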
 
Sounds like you largely just described what I asked for (a Z840 Mac Edition) with the main difference being the manufacturer.

And, perhaps, the price...

And the fact that you can't put 4 real GPUs in a Z840, nor the flash storage; it doesn't have proper internal storage options for performance applications, and the cooling design (in fact the entire design of the Z820 and Z840) was a giant step in the wrong direction. The Z800 was just OK at the time; it could have used some innovation there. Then along came the Z820: oh, it's the same system, they just shoe-horned the larger Xeon platform into the same space and made it work. The Z840 fixed a few of the design flaws, but didn't innovate or improve. But HP does not innovate these days. I just see stale reference designs and more attention placed on what's needed to appease corporate purchasing accounts than on the actual needs of those who use the systems.

The sad thing is, everything I listed above can fit into a space much smaller than a Z840, if they were to take a bold step away from the 20 year old tower and cooling design.

OTOH, like I said above, the performance workstation market is rapidly shrinking. I'm not sure how much R&D or other resources I would pour in to such a venture as making the next great workstation. Ironically, Apple actually did a phenomenal job with the new Mac Pro in terms of identifying a future vision for pro workstations. Unfortunately I think they were too soon with the design as Thunderbolt 2 was not up to snuff for proper I/O and the internal storage capacity as well as processing power are just not there yet to fully commit to such a design shift. I don't think they're far off the mark of where serious workstations will be in the next 8 to 10 years. Compact cylinders or cubes on a desk with all the CPU, GPU and storage needed within the single device and ample I/O ports to attach the few peripherals we need. ...And yes, Apple flubbed it with the GPUs in the new Mac Pro, but that's a different story and not that they were jumping the technological gun there. They and AMD just failed to fully realize the potential of the hardware.
 
Wow! That 2012 tower was a really smart purchase! Better value than an Obsolescence Obsolete Camera ... No? Sounds like great value!

On average I've been getting about 6 years out of each Apple system I've owned, spending about $5-6k each time. $1k a year for a machine I use everyday for work is pretty good if you ask me.
 
Get a Linux Resolve box. Unlimited power and ProRes. None of the Hackintosh headaches.
That said, even the most powerful Resolve system can't do Neat Video in real time. It needs to calculate vectors across up to 11 frames, which means you need to supply and process 11 frames at a time in real time, and that is a tall order for any system, even in HD. For real-time playback you must cache it, always...

You should be able to get RT playback if the plugin is quick enough. Try Resolve's temporal NR and you should have no issues getting RT playback on a half-decent GPU. And it's multi GPU accelerated as well.
 
Apple is waiting for Intel's Kaby Lake, which is going to enter mass-manufacturing shortly. That will be the refresh coming to the main line. Intel doesn't really have anything on deck for a Mac Pro refresh, but they could poop something out for Apple this year, if they are pushed. I can really only see a refresh happening next year, unless Apple wants to forsake higher core counts/more powerful CPUs and better chipsets for availability. I also think Apple wants some alignment from other things as well, such as Intel's new 3D Memory stuff and more Thunderbolt 3/USB-C peripherals.
 

Kaby Lake won't make it into the Xeon line for 2 years. They're just now getting Broadwell Xeons out the door.
 
Just noticed it...

Not a great fan of Linux, so I haven't looked into it... Now that I'm fed up with Apple I'm more open to other options...

And I see the ecosystem around Linux is getting really mature...
 
Kaby Lake won't make it into the Xeon line for 2 years. They're just now getting Broadwell Xeons out the door.

Skylake Xeons are the most current Xeons used in workstations and servers, but they are the low-end models and not the mid-range/high-end models that Apple would use.
 

Aware of that. Still ~2 years until any Kaby Lake Xeons suitable for the Mac Pro will be available. The E5-26xx v4 parts are just now out; ~1 year until the Skylake v5 parts, then Kaby Lake would be v6.

So yes, technically there may be a Kaby Lake SKU or two out soon, but they are so low-end as to be virtually useless for any use case discussed here.
 
Skylake Xeons are the most current Xeons used in workstations and servers, but they are the low-end models and not the mid-range/high-end models that Apple would use.

Mid and high-end versions are available; they've been circulating in the channel for a few weeks now. Unfortunately, they really don't offer any true performance enhancement other than DDR4 memory support. That is a significant difference for internal throughput / memory bandwidth, but I'm not so sure it's enough to justify an overhaul of the little cylinder workstation. Thunderbolt 3 can be made to work... same with USB 3.1, although neither will be as elegant to add since Intel has not updated its chipsets with full support for either standard. Given that, I'm not sure this is the time for an upgrade. ...We should also mention GPUs. Next-generation GPUs are only now hitting the market in the form of the GTX 1080. More nVidia variations will come by the end of the year, including the mobile editions suitable for the iMac and MacBook Pro. Of course, that's assuming Apple will consider nVidia options. AMD is in a similar situation, although their new GPUs seem to be running a few months behind. So I'm not expecting an updated MacBook Pro until we see the alignment of a new-generation mobile GPU and the more streamlined dual-channel Thunderbolt 3 interface that Intel hasn't shipped just yet. Notebooks out there with dual TB3 ports are doing so with two single-channel controllers that take up more space.
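For a rough sense of what that memory-bandwidth difference means, here's a theoretical-peak sketch under my assumptions on the speed grades (the cylinder's quad-channel DDR3-1866 against the quad-channel DDR4-2400 the new Xeons support); real-world numbers will be lower.

```
# Theoretical peak memory bandwidth = channels * transfer rate (MT/s) * 8 bytes per transfer.
# Assumed configs: quad-channel DDR3-1866 (2013 Mac Pro) vs quad-channel DDR4-2400 (Broadwell-EP).

def peak_bandwidth_gb_s(channels: int, mega_transfers: int, bus_bytes: int = 8) -> float:
    return channels * mega_transfers * 1e6 * bus_bytes / 1e9

ddr3_1866 = peak_bandwidth_gb_s(4, 1866)   # ~59.7 GB/s
ddr4_2400 = peak_bandwidth_gb_s(4, 2400)   # ~76.8 GB/s

print(f"DDR3-1866 quad-channel: {ddr3_1866:.1f} GB/s")
print(f"DDR4-2400 quad-channel: {ddr4_2400:.1f} GB/s")
print(f"uplift: {ddr4_2400 / ddr3_1866 - 1:.0%}")
```

Roughly a 29% uplift on paper, which is meaningful but, as said above, probably not enough on its own to justify the overhaul.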

With the exception of the 2006 Mac Pro, and then again in 2008, Apple has always been late to the party, but usually worth the wait. Lately they just don't bother to show up, and when they do it's usually some distorted vision of what most of their target audience was expecting, wrapped up in some new innovative package that somehow manages to blend insanely cool with WTF? in a seemingly improbable mutation. The 2013 Mac Pro could have been a killer Mac desktop if they had only gone with an i7 CPU and more widely supported and accepted GPUs. Had they done that and cut the price by 40%, they would have sold a bazillion of them. They missed the mark. They took a cool desktop idea and tried to make it "Pro" by putting in a Xeon CPU and <cough> "pro" GPUs, all while charging a "pro" price.

WWDC has historically been all about the software. I can only think of two instances in the last 10~12 years where Apple actually announced new system hardware at WWDC, the last of which was the 2013 Mac Pro.
 
Any chance you can get yourself on the design team at Apple and build the system everyone wants? :-)
 
Random thoughts....

What would it take for Apple to decide to move off Intel, and for you to want to buy another Trash Can from them?

+) Apple's cores all comfortably outperform all other ARM cores in terms of instructions per clock (IPC), including ARM's own designs.
I would speculate that a single Apple A9X core would still comfortably outperform a single (yet to be released) ARM A73 core (let's estimate Apple as 30% faster).

https://en.wikipedia.org/wiki/Comparison_of_ARMv8-A_cores

+) An ARM A73 core is ~2x the IPC of an ARM A57.

+) Each core in a Xeon E5-2699 v4 is about 1.6x the speed of a Xeon D-1581 core.
And each Xeon D-1581 core is approx 2.5x the speed of each of the 48 cores in the Cavium ThunderX (which is comparable to an ARM A57).

http://www.anandtech.com/show/10353/investigating-cavium-thunderx-48-arm-cores/11

+) Xeon E5-2699 v4 core = 1.6 x 2.5 = 4x the speed of an A57 core
+) Apple A9X core = 2 x 1.3 = ~2.6x an A57 core

I make that (very roughly) 3 Apple A9X cores == 2 Xeon E5-2699 v4 cores.

Which roughly means that a 32-core Apple chip on TSMC 16nm (roughly Intel 20nm class) would run at about the same speed as a 22-core Intel chip on 14nm.
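Written out as a tiny script, using the rough ratios above (these are my estimates, not benchmarks):

```
# Back-of-envelope per-core ratios from the estimates above (guesses, not benchmarks).
A57 = 1.0                 # baseline: ARM Cortex-A57 core
A73 = 2.0 * A57           # assumed ~2x the IPC of an A57
A9X = 1.3 * A73           # assumed ~30% faster than an A73  -> ~2.6x A57
D1581 = 2.5 * A57         # assumed vs a Cavium ThunderX / A57-class core
E5_2699V4 = 1.6 * D1581   # assumed ~1.6x a Xeon D-1581 core -> ~4x A57

print(f"A9X core vs A57:        {A9X:.1f}x")
print(f"E5-2699 v4 core vs A57: {E5_2699V4:.1f}x")
print(f"Xeon core / A9X core:   {E5_2699V4 / A9X:.2f}  (~3 A9X cores ~= 2 Xeon cores)")
print(f"32 x A9X  = {32 * A9X:.0f} A57-equivalents")
print(f"22 x Xeon = {22 * E5_2699V4:.0f} A57-equivalents")
```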

TSMC are rolling out 10nm (roughly Intel 14nm class) this year ... and then 7nm in 2017 (roughly Intel 10nm class).


If Cavium managed to squeeze a 48-core chip into 28nm ... could Apple manage more than 32 cores at TSMC 16nm, 10nm or 7nm?

I have a feeling that if Apple wanted to, they COULD replace the Xeon with a more potent in-house designed chip, and for far less $$$ than Intel charges for their top-end Xeons.

If the 48-core Cavium costs $800 ... I am guessing that Apple could create a 48-core A10-class part on TSMC 10nm for far less (as they would be selling millions for each thousand that Cavium are shipping).

Dreaming...

AJ
 
Interesting week:

+) I wonder if Apple could get their hands on a Fujitsu ARMv8 (as in Post-K)
http://www.isc-hpc.com/isc16_ap/presentationdetails.htm?t=presentation&o=782&a=select&ra=search

+) And on the architecture front .. this chip looks like it solves the (thus far) unsolvable issues of parallelising tasks, with simple code, and incredible scalability. Nvidia's brilliance is involved in this design.
http://news.mit.edu/2016/parallel-programming-easy-0620

+) And China has smashed the performance-per-watt record using a homegrown SIMD chip (they beat their last supercomputer, which is now #2). So... even though Intel was prevented from selling state-of-the-art 14nm Xeons to China, China's homegrown 28nm-process chips are now used in a Chinese super that is 5x faster than the fastest US supercomputer!

Out of all of these, the first one would make me want to buy ARM shares. Fujitsu is the AMG of chip design.
2020 will likely bring with it the fastest supercomputer on the planet, introducing the world's first 1000-petaflop machine (that's a 10x jump from Sunway TaihuLight).
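A quick sanity check on those ratios, using rounded Linpack (Rmax) numbers from the June 2016 TOP500 list (my rounding):

```
# Rounded Linpack (Rmax) figures from the June 2016 TOP500 list, in petaflops.
taihulight = 93.0   # Sunway TaihuLight, homegrown Chinese chips on 28nm
tianhe_2   = 33.9   # previous Chinese #1, now #2
titan      = 17.6   # fastest US system at the time
exascale_target = 1000.0  # the "1000 petaflop" machine hoped for around 2020

print(f"TaihuLight vs Titan:    {taihulight / titan:.1f}x")
print(f"TaihuLight vs Tianhe-2: {taihulight / tianhe_2:.1f}x")
print(f"Exascale vs TaihuLight: {exascale_target / taihulight:.1f}x")
```

So the "5x faster than the fastest US machine" and the "10x jump to 1000 petaflops" both hold up, give or take rounding.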

I noted that Apple are introducing texture compression into their Metal framework, based on an ARM technology (ASTC). As Metal is also used within macOS, I wondered if a change to Apple's GPU roadmap might be somewhere in the distance?

Predictions: I now think Apple will introduce a new Mac Pro in 2017 on Sierra.
+) It will be based on APFS (the new flash-optimised file system, optimised for LOW LATENCY, not high throughput)
+) It will use P3 (Apple has added the GPU goodness and pixel formats for up to 16 bits per channel, i.e. more than 10 bits)
+) With even more of macOS GPU-backed, and steps being taken to let GPUs directly access main memory (i.e. alleviating memory-bandwidth issues), I guess Apple will wait for AMD to release Vega with 32GB GPUs, and put 4 of them into a new cylinder.

And I wonder if Apple will roll out a 'TSMC' 7nm A11 multi-core machine?

Still Dreaming...

AJ
 
Come on. There are usually 5 posts before someone drags it down that rabbit hole.

It is reduser; maybe they think a Mac is a Scottish kilt.

What they need is a blade tower rack system where you can pour the power on, before some Linux system provider does a first-tier-like version and scorches things.
 

So frustrating, Antony, too much reading around.

The swarm approach seems to be achieving what I figured out some time back for my OS / microprocessor architecture, but we are talking about spot processors here, with hundreds of thousands to millions of them in the same space as one Intel chip. I have been systematically coming up with new ways to do calculations, inter-core communications, processing and programming over the decades, likely to achieve a thousand-times performance increase per unit of energy, as a bridge before quantum/optical computing devices. I am now looking into things involving quantum effects, and far smaller cores again. But the most frustrating thing is that after figuring out a new likely solution path for the calculation-processing methodology (which I mentioned in the last year), I did not record the insight and forgot it. Once I can figure that out again I can go forwards. The most interesting work.
 