New Mac Pro predictions? Hopes? Fears?

What I like most about my trash can is how quiet it is... I don't miss my Windows PC with dual cards sounding like a jet about to take off inside my office... Do I want more? Of course. I've been looking at adding an external graphics card, maybe the Titan, for my trash can.
 
You can look all you want, but that can't be done, at least not on the current design. The only GPUs you can have in a trash-can Mac are the AMD internal cards that are already there. External GPUs cannot be used on the Thunderbolt bus with Mac OS X. This is one of the primary problems of the design.
 
Which versions have you used? The app really matured and blossomed after 10.2... Personally, I can't live without it.

It's an NLE for the 21st-century digital, file-based age, not the 20th-century tape/film age (Avid, Premiere, FCP 7, et al.). I guess many people can't accept that.

The problem I have with FCPX (which I use daily) is that I'm really tired of the animated UI. I wish Apple would release a version, or an option, with less glitz and glamour: give us a flat-looking UI and stop animating every clip and drill-down arrow when using the magnetic timeline. I wish it were a little more Pro and a little less X.

I mean squeeze everything out that can cause even the slightest performance slow-down.

I have to say I'm becoming a little disappointed in Apple lately. They seem to be fading away from the professionals. With my setup I have two Thunderbolt Displays (which they've stopped making), and I consistently have a problem with only one of them turning on. The Thunderbolt connector is so loose that my second display flickers periodically during the day, and I DON'T DARE TOUCH IT because both displays will disconnect if I so much as breathe on it.

Just wish Apple would blow us away like they used to. :(
 
It had become: they would include it if they felt like it. No 8K display, no 30-inch display, no Blu-ray, no 4K Blu-ray, no slate/touch mode. A trend is forming. Tablets that are big, like the competition's. A bit thinner, looks the same, faster, like the competition. Watches too, like the competition. Glasses? It is hard to take them seriously as a serious professional system when you couldn't edit your 3D movies or get Blu-ray up front, Rec. 2020, as many cores as the top boxes, or beefed-up GPU performance. Apple became a trend-setter for a while, but what they needed to be professionally is a feature leader: something you could take home to the office and just use, even if you didn't use all the features. Something worth spending more on. As they took real features away, I stopped getting ready to buy one.

Now, the Mac Pro needs to be 4 to 8 times the volume, with a 4K 3D Blu-ray recorder, VR functionality, 8K, a 30-40-inch Rec. 2020 3D display with touch functionality, room for two more GPU cards plus PCI cards, twice as many CPUs supported, and 5¼-inch drive bays (all bays can be at the base to avoid hindering the cooling core). That would be a real Mac Pro. The iMac needs to be more like the existing Mac Pro in performance at the top, with a 32-inch 8K display, slate touch, 4K Blu-ray, VR, 3D, etc. That would be a real iMac.
 
Actually, they seem to be fading away from consumers too. Yes, they had a bit of an uptick this quarter, but their numbers had been going down. Of course, we're still talking billions of dollars, but perhaps their dominance is fading a bit. There are still a heck of a lot of iPhones and iPads out there. I'm hoping there's a plan in place to eventually get all those devices, which will be obsolete in a few years, recycled properly and perhaps repurposed.

Tim is trying his best, but love him or hate him, I think Steve Jobs is what made Apple tick. I wonder if people are getting bored with the current Apple.

As for professional products, I think their stance is that the line between professionals and consumers will continue to blur, and that some little 8-year-old girl in Idaho will make an incredible movie shot on an iPhone and edited in FCP X. I think they envision that happening on a tablet eventually. So a Mac Pro with a version of the OS that caters to the needs of professionals doing serious heavy lifting is probably not exactly in Apple's vision of the future right now, if I had to guess.
 

I think it was Coppola that predicted this, and Apple is likely doing all they can to bring it to fruition.

The classical concept of "professional" is dead anyway. It's been re-purposed as a marketing term for tech products.
 
Reading the tea leaves, I think Apple is waiting for Kaby Lake to refresh the tubes. Unlike TB2 hanging off PCIe 2.0 lanes, we'd have TB3 direct to the chipset, and eGPU support. Fingers crossed...

Cheers - #19

If they are skipping Skylake too and waiting for Kaby Lake, then forget it: you're into 2018+ before you'll see Xeons with that tech.

I guess when Schiller said it was the design for the next 10 years he meant the internals too. :eek6:
 
Steve,

I think what tends to happen is that when you have charismatic leadership, competing leadership tends to drop off. Once the accountants/money men move in, what is left is not much to choose from; the fanciful, or the bookkeeping, takes over and decline starts. For instance, the former head of Panasonic (was it? I forget) was a credible lead, but he brought Steve back, and then he was gone. Apple has been a brand with less substance; it had been a brand with innovation before that; it became a brand with style, but I don't know how much of that you could call innovation. As good as Ive is, you can't hang things on past style. Where is the value and innovation? That kept me waiting for years for Apple to innovate and bring out a feature-leading product. Frankly, a Mac Pro tube doesn't do it; while it's small, your work demands better and better performance, and size versus performance is a fashionista's feature, not a professional one. The fashionistas have to learn their place: behind the drivers of performance, function, and features, as a servant making them look good, towards the back of the bus. Then Apple can be Apple.

Now, as for Apple's "any novice can be a professional": OK, but that is not how real professionalism works. Real professionalism in our field is moving towards ever-larger data streams and processing requirements, demanding the top machine. A render farm (or rack, in today's terms) demands a product that can do it in one machine, cheaper. I'm seeing vertical blades here, stacked between an expandable stand like they do with consoles. You just buy one blade and add or replace as many as you need. The vertical rack allows natural cooling, lets you add features, cards, and storage as needed, and gives Apple continuous sales in a continuous upgrade path, like modules on REDs. If you want a render farm, just stick 20 of these on your desk behind your monitor. There was talk Apple was actually replacing the previous Mac with a blade before they went with the tube. Time to get back there: when I want to replace the main board with something better, I'll just replace the blade, or hang a new blade on.
 
Are we forgetting the potential ARM replacement plans that keep cropping up? Nvidia is capable of delivering a class-leading, low-energy-per-performance ARM-based GPU system that would eclipse the Intel processors (though I think Intel has something up their sleeves too).
 
Apple is too busy building a mountain of money, and if they spend, let's say, $50M on something, whether on software that doesn't make you want to chew your leg off or on a new computer, it means Cupertino will only have about $199,950,000,000 and not $200,000,000,000. And that sucks.

When they reach $300,000,000,000, they'll add another button somewhere, or new folder colors, and Ive will have an orgasm in a promo.

It takes quite a bit of idiocy to throw FCP 7 in the trash, look away from the tens of thousands signing the petition, and screw up a professional reputation built over decades while sitting on $100B+, so I'm suspecting the next MP will have GPS and a heartbeat monitor with new chat and iTunes controls. It will be up to 60% faster than the last one and better in every way.
 
Well, not spending $50M to make an extra $100M net is a good way to lose not only $100 million but more over time.

If it is not suitable, then the best bet is to go elsewhere and sign a petition with somebody like Nvidia to develop 1kW+ Linux ARM CUDA systems running x86 binaries through code translators. You will actually get what you need. The Mac Pro is newspaper/ENG level.
 
It's easy to say the tube was too weak for real-time AND high-resolution video/motion-graphics work, clearly. Yet it can also be true that it rocked for stills, dev work, rendered graphics, 1080 24/30fps video, etc. At the end of the day, I agree with Mike Most's assertion that it was both the "stuck in time" internals AND the limitations of TB2 for things like eGPU applications that crippled the 6,1 tube.

TB3, especially straight to the chip, is going to make living without internal slots viable. Is it a "better" solution?... more elegant?... or just more external? ;-) Hoping somebody makes a bomb-ass external chassis that can take multiple (how about 4x) TB3 connections to a common chassis of slots, with a proper PSU and cooling, including support for SSDs on PCIe, M.2 NVMe, etc. How about an aluminum box with a handle on top: a mini with 4 full-length slots and a studio with 8 full-length PCIe slots. Sell me a "snake" with 4x TB3 runs bundled together so I can quickly patch into four discrete ports on the MoBo. Short copper and 100m tactical optical, please.

If Apple offers a BTO variant of a 7,1 tube with a kick-ass CPU that supports 256GB of DDR4 RAM and lots of fast I/O, including TB3 direct, it could be the centerpiece of a legit workstation, especially if SSD on the GPU provides enough local memory to avoid so much data "commuting" to and from the MoBo, leaving room for everything else ;-D.

Is it a dream? No, no... don't wake me...

Cheers - #19
 
Not sure how much of a 'fruitloop' idea this would be ....

+) Apple creating a hardware-acceleration chip for the Mac Pro and Mac laptops
+) That includes a secure enclave to afford the same security as their iOS devices
+) That includes their own dedicated tile-based rendering GPU, tuned for Metal, for display and hardware acceleration (at very low power)
+) That includes a many-core version of the A10X, but with a power budget 10x the TDP
+) A low-ish core-count Kaby Lake co-processor with top-shelf GHz to maximise single-thread performance
+) Up to 1TB of HBM2 that is accessible from both ARM (A10X) and Intel (Kaby Lake)

(If technically possible) this could (fingers crossed) mean we get an "affordable" Intel chip with, say, 8 very fast cores for applications, and 32 super-efficient Apple cores that "at no cost" scale the performance of the machine to something more like a mainframe.

It would also provide a bridge for applications to port themselves over to the ARM ISA (if not already ported). In two years' time, Fujitsu's high-performance extensions will be available to the ARM licensees, and Apple will (fingers crossed) be using TSMC's 7nm ovens.

At that point, Apple could have a software stack that is macOS, ARM ISA, Intel (compatibility mode)... lots of power, reasonable cost, and a relatively quiet, shiny new cylinder...

AJ
 
Blair, like I said, it is suitable for newspaper and ENG work. My beef is that they did not make a bigger version that you could fit more into, with DIY cards and drive-bay panels. 4-8 times the volume, and they can keep the cooling-core concept in it too. To get the infrastructure everywhere, it has to be suitable everywhere, so people in different areas can work together; that is the connectivity tech companies talk about. Anyway, if they want to stuff around, let them do it on their own time. We don't need it. We could get Valve and Nvidia to do an alternative Linux machine.
 
Antony, there is a better alternative to ARM on the horizon: RISC-V, what ARM used to be, but refined. It will be interesting to see what comes up. With x86 binary translators, we don't need to stick to x86 or ARM. We can theoretically run x86 binaries at lower energy and with more performance than x86 processors. I would be interested in a 1kW version I could throttle up from 1W.
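
As a concrete (and deliberately toy) illustration of the binary-translation idea, here is a minimal Python sketch. It is not how QEMU, Rosetta, or any real translator works internally; the opcodes, register names, and guest program are all invented for the example. The point is just the shape: decode the guest instructions once, translate each into a host operation, then execute the translated form as often as you like.

```python
# Toy dynamic binary translation sketch. A real translator decodes
# actual x86 machine code and emits native host instructions; here the
# "guest binary" is a made-up three-instruction program and the "host
# code" is a list of Python closures.

GUEST_PROGRAM = [
    ("MOV", "r0", 5),     # r0 = 5 (immediate move)
    ("MOV", "r1", 7),     # r1 = 7
    ("ADD", "r0", "r1"),  # r0 = r0 + r1
]

def translate(program):
    """Decode once, up front, returning (register file, host ops).

    Re-running the translated ops pays no further decode cost,
    which is the core win of translation over pure interpretation.
    """
    regs = {}
    ops = []
    for op, dst, src in program:
        if op == "MOV":    # this toy ISA only moves immediates
            ops.append(lambda d=dst, s=src: regs.__setitem__(d, s))
        elif op == "ADD":  # register + register
            ops.append(lambda d=dst, s=src: regs.__setitem__(d, regs[d] + regs[s]))
        else:
            raise ValueError(f"unknown opcode: {op}")
    return regs, ops

regs, ops = translate(GUEST_PROGRAM)
for host_op in ops:
    host_op()
print(regs["r0"])  # prints 12
```

The performance argument in the post rests on exactly this split: the translated form is cached and executed many times while the expensive decode happens only once.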
 
Research into Instruction Set Architecture is a great thing - which is what I envision RISC-V to be a vehicle for.

When it comes to core design, ARM v8 takes fewer transistors and less silicon area than x86.
RISC-V has more carefully matched its ISA to the use cases of current software, and may (speculation) take 30% less power than an ARM v8 core to handle single-core requirements.

However, IMHO the ability to "do more" heavily depends on:
+) designs that leverage coherency with external hardware accelerators,
+) for tiers of storage to have a 'boatload' of cheap cores strapped to them to ameliorate data processing bottlenecks, and
+) intellectual fingers to be in the next-nm-process methodologies 'pies'.

I could be wrong (wouldn't be the first time!), but my gut feeling is that mass-core coherence on x86, both internal and external, is somewhat more tricky than on ARM, and is not currently on the RISC-V agenda. RISC-V is pitched at ISA research, not the next nm process, and so is working towards 28nm tech for 2017; (most likely) Intel and (definitely) ARM are "all in" on 7nm design considerations, giving them a couple of years to optimise for their ISAs before the hardware is out. Intel is working hard to monetise their Altera FPGA accelerators as big bolt-ons for their big x86 chips to sell to enterprises.

In the short term, I would expect big-core x86 + Altera FPGA to provide top performance per watt... but at huge cost.
In 5 years, I would guess that Xilinx will be selling FPGA implementations of RISC-V for dynamically reconfigurable research needs.
And Fujitsu's HPC extensions to the ARM ISA should allow those who want science and machine-learning oriented workloads to work intra-core at far higher IPC (at similar TDP).

AJ
 
Well, it is a couple of years out that we have to look to compete anyway, so a massively parallel product at that time could be timely. They have hardware investors who bring processes (and the design of instruction sets is generally optimised separately from process). RISC-V is meant to be a practically optimised implementation rather than a research vehicle (which is what they accuse others of coming from). Tiers of storage and accelerators are not such an issue; support for these tends to be add-on circuitry that hardware partners already have. Besides, a lot of processing does not need much beyond local array memory and local storage, to start with. This simple grid-like processing is less complex, allowing more cores per area, but you have to be able to break the processing down into streams. Low-energy products tend to use low-energy processes that are larger than 7nm, which is very leaky; this directly affects how many cores fit in the array. That is why many mass arrays are clocked relatively low compared to simple few-core products built on state-of-the-art processes, and it is likely deliberately aimed there for the markets they are targeting. A much tighter, optimised instruction set drastically reduces transistor counts (look at the size of x86 in comparison), but the maths units dwarf that anyway; and where is all the extra packed-data instruction support you can get with ARM and x86 since MMX and SSE? But this is the start of a better future, not the end.

If we cut it off because there is an existing, worse solution, how will we get better ones? Competition motivates the existing solutions to get better. But let's be realistic: how many times has Intel announced something that did not pan out? Larrabee, the really low-powered cores promised years ago, FPGA whatever; others can do that. What about AMD combining CPU and GPU? I was wondering how that was supposed to go, and what we got was nice integrated mobile chips. Unless you drastically abandon past inefficiencies, you are going to be hamstrung by them. I have secretly kept this to myself for many years, but Intel's best choice, x86-wise, was ColdFire; only then could they get the actual advantages they want in x86. Look it up, and the design history of where it came from. I suspect Intel is finally planning this to compete with ARM. Hence Atom is gone, but I read Apple is planning to use Intel in the iPhone. If this is at all correct, then something is afoot.

I can see the advantages. Technically it is an exciting opportunity to start again.
 